
Other articles (109)
-
Sites built with MediaSPIP
2 May 2011
This page presents a few of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
From upload to final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, you need to create a SPIP article and attach the "source" video document to it.
When this document is attached to the article, two actions are executed in addition to the normal behavior: retrieval of the technical information of the file's audio and video streams; and generation of a thumbnail: extraction of a (...)
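As a rough illustration of those two extra actions, the equivalent standalone ffmpeg/ffprobe calls would look something like the following. This is a sketch, not SPIPMotion's actual code; source.mp4 and vignette.jpg are placeholder names.

# Retrieve the technical information of the audio and video streams:
ffprobe -v error -show_streams -of json source.mp4
# Extract a single frame to use as the thumbnail:
ffmpeg -i source.mp4 -ss 2 -frames:v 1 vignette.jpg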
On other sites (14834)
-
Watch a multi-stream video file while it is being recorded by ffmpeg
20 January 2020, by Luke Gorrie
I am using ffmpeg to record four 1080p video streams simultaneously, and I would like to view these streams in real time while they are being recorded. Ideally I would like to view them from a separate machine, using some convenient Linux data-sharing method like ssh/sshfs/..., but that is a nice-to-have. Is there a practical way to accomplish this?
The ffmpeg recording command that I currently use is like this:
ffmpeg \
-f v4l2 -i $cam0 \
-f v4l2 -i $cam1 \
-f v4l2 -i $cam2 \
-f v4l2 -i $cam3 \
-f matroska \
-map 0 \
-map 1 \
-map 2 \
-map 3 \
-c copy \
$output.mkv
My problem would be solved if I could use a command like ffplay or vlc to play one or more of the video streams, always tracking close to the end of the stream and playing more as data becomes available. I haven't found such an option, though.
One alternative might be to have the recording ffmpeg process also tile the four 1080p videos into one 4K grid and pipe that to ffplay. Then I could at least preview on the same machine that is recording. I have not been able to work out the right ffmpeg command to do the tiling, though, and would appreciate an example.
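For the tiling idea specifically, one filter that is commonly used is ffmpeg's xstack. The following is a minimal sketch of the preview leg only, not a drop-in replacement for the recording command: it assumes all four inputs share the same resolution and reuses the $cam variables from above. Recording at the same time would additionally need a second output or the tee muxer, since -c copy cannot share an output with a filter graph.

ffmpeg \
-f v4l2 -i $cam0 \
-f v4l2 -i $cam1 \
-f v4l2 -i $cam2 \
-f v4l2 -i $cam3 \
-filter_complex "[0:v][1:v][2:v][3:v]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[grid]" \
-map "[grid]" -f nut -c:v rawvideo - | ffplay -f nut -

For viewing from a separate machine, the pipe to ffplay could be replaced by an encoded network output, for example -c:v libx264 -preset ultrafast -f mpegts udp://VIEWER_IP:PORT (VIEWER_IP and PORT being placeholders), at the cost of extra CPU on the recording box.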
-
NodeJs: watch files with Chokidar and create thumbnails with FFMPEG; code is hanging. Memory jam?
30 July 2021, by Mert
I'm watching a folder of mp4 files with Chokidar and creating a thumbnail for each file, but my code hangs. It gets stuck: I waited ten minutes without getting any error, and the code made only a little more progress. I waited another ten minutes, but nothing changed.


Code:


let fs = require('fs'); // required but not used below
var chokidar = require('chokidar');
const ffmpegInstaller = require('@ffmpeg-installer/ffmpeg');
const ffmpeg = require('fluent-ffmpeg');
ffmpeg.setFfmpegPath(ffmpegInstaller.path);

var counter = 0;

var Mywatcher = chokidar.watch('Z:\\WatchFolder', {
  ignored: /^\./,
  persistent: true,
  usePolling: true
});

Mywatcher
  .on('add', function (path) {
    const worder = '.mp4'; // only handle mp4 files
    if (path.includes(worder)) {
      counter = counter + 1;
      console.log('added' + counter);

      // Spawns one ffmpeg process per file, with no limit on how many
      // run at the same time.
      ffmpeg({ source: path })
        .on('filenames', (filenames) => {
          console.log('Created file names', filenames);
        })
        .on('end', () => {
          console.log('Job done');
        })
        .on('error', (err) => {
          console.log('Error', err);
        })
        .takeScreenshots({
          filename: counter + '.jpg',
          timemarks: [2]
        }, 'public/images/thumbnail/');
    }
  })
  .on('change', function (path) { console.log('File', path, 'has been changed'); })
  .on('unlink', function (path) { })
  .on('error', function (error) { console.error('Error happened', error); });



First result: thousands of thumbnails are reported as completed, but none are actually created. The thumbnail folder is empty and my code is stuck.


added13023
Created file names [ '13023.jpg' ]
added13024
Created file names [ '13024.jpg' ]
added13025
Created file names [ '13025.jpg' ]
added13026
Created file names [ '13026.jpg' ]
added13027
Created file names [ '13027.jpg' ]
added13028
Created file names [ '13028.jpg' ]
added13029
Created file names [ '13029.jpg' ]
added13030
Created file names [ '13030.jpg' ]
added13031
Created file names [ '13031.jpg' ]
added13032
Created file names [ '13032.jpg' ]
added13033
Created file names [ '13033.jpg' ]
added13034
Created file names [ '13034.jpg' ]
added13035
Created file names [ '13035.jpg' ]
added13036
Created file names [ '13036.jpg' ]



I tried again, this time running 'nodemon start', and got this result:


Created file names [ '13026.jpg' ]
added13027
Created file names [ '13027.jpg' ]
added13028
Created file names [ '13028.jpg' ]
added13029
Created file names [ '13029.jpg' ]
added13030
Created file names [ '13030.jpg' ]
added13031
Created file names [ '13031.jpg' ]
added13032
Created file names [ '13032.jpg' ]
added13033
Created file names [ '13033.jpg' ]
added13034
Created file names [ '13034.jpg' ]
added13035
Created file names [ '13035.jpg' ]
added13036
Created file names [ '13036.jpg' ]
added13037
Created file names [ '13037.jpg' ]



How can I fix it? Thanks all. Regards.
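A plausible cause, for what it is worth: the 'add' handler spawns one ffmpeg child process per file with no upper bound, so a watch folder containing 13,000+ mp4 files launches thousands of concurrent ffmpeg processes, which can exhaust memory before any screenshot is flushed to disk. Below is a minimal sketch of one way to bound the concurrency; the queue helper and the limit of 4 are illustrative assumptions, not part of the original code.

const chokidar = require('chokidar');
const ffmpegInstaller = require('@ffmpeg-installer/ffmpeg');
const ffmpeg = require('fluent-ffmpeg');
ffmpeg.setFfmpegPath(ffmpegInstaller.path);

const MAX_CONCURRENT = 4; // assumed limit; tune to the machine
let running = 0;
const queue = []; // files waiting for a free slot

function takeShot(path, index) {
  running++;
  ffmpeg({ source: path })
    .on('end', () => { running--; next(); })
    .on('error', (err) => { console.log('Error', err); running--; next(); })
    .takeScreenshots({ filename: index + '.jpg', timemarks: [2] },
      'public/images/thumbnail/');
}

function next() {
  // Start queued jobs while slots are free.
  while (running < MAX_CONCURRENT && queue.length > 0) {
    const job = queue.shift();
    takeShot(job.path, job.index);
  }
}

let counter = 0;
chokidar.watch('Z:\\WatchFolder', { ignored: /^\./, persistent: true, usePolling: true })
  .on('add', (path) => {
    if (path.includes('.mp4')) {
      counter++;
      queue.push({ path, index: counter });
      next();
    }
  });

Chokidar's awaitWriteFinish option may also help if files are still being copied into the watch folder when 'add' fires.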


-
Cannot watch RTSP stream from Live555 using FFMPEG
30 October 2023, by bobku123
I am trying to create an RTSP server using Live555, with the stream source being raw H264 video from my USB webcam, encoded with FFMPEG and sent over UDP.


I have used BasicUDPSource from the Live555 sources as my FramedSource class, and I created my own MediaSubsession class as per the Live555 FAQ. Here is the source code I have created so far:


#include "liveMedia.hh"

#include "BasicUsageEnvironment.hh"
#include "announceURL.hh"
#include "FFMPEGH264StreamMediaSubsession.hh"
#include "BasicUDPSource.hh"
#include "H264VideoStreamFramer.hh"
#include "H265VideoRTPSink.hh"

UsageEnvironment* env;

// To make the second and subsequent client for each stream reuse the same
// input stream as the first client (rather than playing the file from the
// start for each client), change the following "False" to "True":
Boolean reuseFirstSource = False;

// To stream *only* MPEG-1 or 2 video "I" frames
// (e.g., to reduce network bandwidth),
// change the following "False" to "True":
Boolean iFramesOnly = False;

static void announceStream(RTSPServer* rtspServer, ServerMediaSession* sms,
 char const* streamName, char const* inputFileName); // forward


int main(int argc, char** argv) {
 // Begin by setting up our usage environment:
 TaskScheduler* scheduler = BasicTaskScheduler::createNew();
 env = BasicUsageEnvironment::createNew(*scheduler);

 UserAuthenticationDatabase* authDB = NULL;

 // Serve regular RTSP (over a TCP connection):
 RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554, authDB);

 if (rtspServer == NULL) {
 *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
 exit(1);
 }

 char const* descriptionString = "Session streamed by \"testFFMPEGRTSPServer\"";

 {
 char const* streamName = "FFMPEGRTSPStream";
 ServerMediaSession* sms
 = ServerMediaSession::createNew(*env, streamName, streamName,
 descriptionString);
 sms->addSubsession(FFMPEGH264StreamMediaSubsession
 ::createNew(*env, reuseFirstSource));
 rtspServer->addServerMediaSession(sms);

 announceStream(rtspServer, sms, streamName, "ffmpeg");
 }

 env->taskScheduler().doEventLoop(); // does not return
}

static void announceStream(RTSPServer* rtspServer, ServerMediaSession* sms,
 char const* streamName, char const* inputFileName) {
 UsageEnvironment& env = rtspServer->envir();

 env << "\n\"" << streamName << "\" stream, from the file \""
 << inputFileName << "\"\n";
 announceURL(rtspServer, sms);
}

FFMPEGH264StreamMediaSubsession*
 FFMPEGH264StreamMediaSubsession::createNew(UsageEnvironment& env, Boolean reuseFirstSource)
{
 return new FFMPEGH264StreamMediaSubsession(env, reuseFirstSource);
}

FFMPEGH264StreamMediaSubsession::FFMPEGH264StreamMediaSubsession(UsageEnvironment& env, Boolean reuseFirstSource) 
 : OnDemandServerMediaSubsession(env, reuseFirstSource)
{

}

FFMPEGH264StreamMediaSubsession::~FFMPEGH264StreamMediaSubsession()
{

}

FramedSource* FFMPEGH264StreamMediaSubsession::createNewStreamSource(unsigned clientSessionId,
 unsigned& estBitrate)
{
 estBitrate = 500; // kbps, estimate

 // Create the video source:
 // Create a 'groupsock' for the input multicast group,port:
 char const* inputAddressStr = "192.168.1.100";

 NetAddressList inputAddresses(inputAddressStr);
 struct sockaddr_storage inputAddress;
 copyAddress(inputAddress, inputAddresses.firstAddress());

 Port const inputPort(8888);
 unsigned char const inputTTL = 0; // we're only reading from this mcast group

 Groupsock inputGroupsock(envir(), inputAddress, inputPort, inputTTL);

 // Then create a liveMedia 'source' object, encapsulating this groupsock:
 FramedSource* source = BasicUDPSource::createNew(envir(), &inputGroupsock);

 // Create a framer for the Video Elementary Stream:
 return H264VideoStreamFramer::createNew(envir(), source);
} 

RTPSink* FFMPEGH264StreamMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock,
 unsigned char rtpPayloadTypeIfDynamic, FramedSource* inputSource)
{
 return H265VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}



I converted my USB webcam output to an H264 stream using the following FFMPEG command:


ffmpeg -video_size 1280x720 -framerate 30 -input_format mjpeg -f v4l2 -i /dev/video0 -c:v h264_nvenc -bf 0 -g 30 -bsf:v 'filter_units=remove_types=35|38-40' -f h264 udp://192.168.1.100:8888



However, when I try to play it with FFPLAY using the following command, I do not get any output:


ffplay rtsp://192.168.1.100:8554/FFMPEGRTSPStream



What am I doing wrong?
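One mismatch worth checking, for what it is worth: createNewStreamSource frames the input with H264VideoStreamFramer, but createNewRTPSink returns an H265VideoRTPSink, so the server describes and packetizes the stream as H265 while feeding it H264. A minimal sketch of the matching sink, assuming everything else stays the same (the H265VideoRTPSink.hh include would change to H264VideoRTPSink.hh accordingly):

RTPSink* FFMPEGH264StreamMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock,
                                                           unsigned char rtpPayloadTypeIfDynamic, FramedSource* inputSource)
{
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}

Separately, the raw UDP leg can be tested in isolation (with the Live555 server stopped, so two readers do not compete for the same datagrams) with: ffplay -f h264 udp://192.168.1.100:8888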