
Other articles (26)

  • Customizing by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Accepted formats

    28 January 2010, by

    The following commands give information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)

  • Videos

    21 April 2011, by

    As with "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 <video> tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one), and each browser natively handles only certain video formats.
    Its main advantage is native video support in the browser, which makes it possible to do without Flash and (...)

On other sites (5849)

  • Error: spawn process ffmpeg ChildProcessError

    10 June 2019, by Karnon

    I wanted to make a simple program like OBS.

    My intended behavior is to create a child process in Node.js that runs an FFMPEG command to send a webcam stream to the YouTube Live RTMP server. What actually happens is an error from the child-process-promise module used in Node.js.

    I’ve checked several questions, but I don’t have enough experience to understand them, and I hope there’s a clear solution.

    My guess is that the FFMPEG executable cannot be found on the PATH in the Node environment. Or is calling it from the socket handler the problem?

    I checked that the FFMPEG command works in a Windows prompt environment.

    ※ Note: the FFMPEG environment variables are registered.

    Environment: Windows 10, Node.js, ffmpeg

    The code is based on a simple WebSocket example.

    When I first investigated, I thought that the only way to do this was to use "fluent-ffmpeg."

    I tried "fluent-ffmpeg" but I couldn’t get my laptop webcam up and running in Windows environments as a parameter for the "fluent-ffmppeg" command.

    I’ve also thought about using WebRTC, but I think it’s not for personal use because it’s a P2P connection. (I also saw how to connect a peer connection to a WebRTC server like Janus, but I didn’t have enough references to understand it.)

    Below is the code of the problem.

    const SocketIO = require("socket.io");
    const ffmpeg = require("fluent-ffmpeg");
    const spawn = require("child-process-promise").spawn;

    module.exports = server => {
     const io = SocketIO(server, { path: "/socket.io" });

     io.on("connection", socket => {
       const req = socket.request;
       const ip = req.headers["x-forwarded-for"] || req.connection.remoteAddress;
       console.log("새로운 클라이언트 접속!", ip, socket.id, req.ip);
       socket.on("disconnect", () => {
         console.log("클라이언트 접속해제", ip, socket.id);
         clearInterval(socket.interval);
       });
       socket.on("error", error => {
         console.error(error);
       });
       socket.on("reply", data => {
         console.log(data);
         ffmpeg_command();
       });
     });

     function ffmpeg_command() {
       let arg = [
         "-f",
         "lavfi",
         "-i",
         "anullsrc=r=16000:cl=mono",
         "-f",
         "dshow",
         "-ac",
         "2",
         "-i",
         "video='HP Truevision HD'",
         "-s",
         "1280x720",
         "-r",
         "10",
         "-vcodec",
         "libx264",
         "-pix_fmt",
         "yuv420p",
         "-preset",
         "ultrafast",
         "-r",
         "25",
         "-g",
         "20",
         "-b:v",
         "2500k",
         "-codec:a",
         "libmp3lame",
         "-ar",
         "44100",
         "-threads",
         "6",
         "-b:a",
         "11025",
         "-bufsize",
         "512k",
         "-f",
         "flv",
         "rtmp://a.rtmp.youtube.com/live2/8dfu-69k0-dxyw-896q"
       ];
       spawn("ffmpeg", arg).catch(e => {
         console.log(e);
       });
     }
    };

    The expected result is that the webcam starts and YouTube live streaming succeeds. Instead, here's the error:

    { ChildProcessError: `ffmpeg -f lavfi -i anullsrc=r=16000:cl=mono -f dshow -ac 2 -i video='HP Truevision HD' -s 1280x720 -r 10 -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -r 25 -g 20 -b:v 2500k -codec:a libmp3lame -ar 44100 -threads 6 -b:a 11025 -bufsize 512k -f flv rtmp://a.rtmp.youtube.com/live2/8dfu-69k0-dxyw-896q` failed with code 1
       at ChildProcess.<anonymous> (C:\Users\Tricky\Desktop\Work\ESC\ESC_temp\node_modules\child-process-promise\lib\index.js:132:23)
       at ChildProcess.emit (events.js:182:13)
       at ChildProcess.cp.emit (C:\Users\Tricky\Desktop\Work\ESC\ESC_temp\node_modules\child-process-promise\node_modules\cross-spawn\lib\enoent.js:40:29)
       at maybeClose (internal/child_process.js:962:16)
       at Socket.stream.socket.on (internal/child_process.js:381:11)
       at Socket.emit (events.js:182:13)
       at Pipe._handle.close (net.js:606:12)
     name: 'ChildProcessError',
     code: 1,
     childProcess:
      ChildProcess {
        _events: { error: [Function], close: [Function] },
        _eventsCount: 2,
        _maxListeners: undefined,
        _closesNeeded: 3,
        _closesGot: 3,
        connected: false,
        signalCode: null,
        exitCode: 1,
        killed: false,
        spawnfile: 'ffmpeg',
        _handle: null,
        spawnargs:
         [ 'ffmpeg',
           '-f',
           'lavfi',
           '-i',
           'anullsrc=r=16000:cl=mono',
           '-f',
           'dshow',
           '-ac',
           '2',
           '-i',
           'video=\'HP Truevision HD\'',
           '-s',
           '1280x720',
           '-r',
           '10',
           '-vcodec',
           'libx264',
           '-pix_fmt',
           'yuv420p',
           '-preset',
           'ultrafast',
           '-r',
           '25',
           '-g',
           '20',
           '-b:v',
           '2500k',
           '-codec:a',
           'libmp3lame',
           '-ar',
           '44100',
           '-threads',
           '6',
           '-b:a',
           '11025',
           '-bufsize',
           '512k',
           '-f',
           'flv',
           'rtmp://a.rtmp.youtube.com/live2/8dfu-69k0-dxyw-896q' ],
        pid: 18928,
        stdin:
         Socket {
           connecting: false,
           _hadError: false,
           _handle: null,
           _parent: null,
           _host: null,
           _readableState: [ReadableState],
           readable: false,
           _events: [Object],
           _eventsCount: 1,
           _maxListeners: undefined,
           _writableState: [WritableState],
           writable: false,
           allowHalfOpen: false,
           _sockname: null,
           _pendingData: null,
           _pendingEncoding: '',
           server: null,
           _server: null,
           [Symbol(asyncId)]: 132,
           [Symbol(lastWriteQueueSize)]: 0,
           [Symbol(timeout)]: null,
           [Symbol(kBytesRead)]: 0,
           [Symbol(kBytesWritten)]: 0 },
        stdout:
         Socket {
           connecting: false,
           _hadError: false,
           _handle: null,
           _parent: null,
           _host: null,
           _readableState: [ReadableState],
           readable: false,
           _events: [Object],
           _eventsCount: 2,
           _maxListeners: undefined,
           _writableState: [WritableState],
           writable: false,
           allowHalfOpen: false,
           _sockname: null,
           _pendingData: null,
           _pendingEncoding: '',
           server: null,
           _server: null,
           write: [Function: writeAfterFIN],
           [Symbol(asyncId)]: 133,
           [Symbol(lastWriteQueueSize)]: 0,
           [Symbol(timeout)]: null,
           [Symbol(kBytesRead)]: 0,
           [Symbol(kBytesWritten)]: 0 },
        stderr:
         Socket {
           connecting: false,
           _hadError: false,
           _handle: null,
           _parent: null,
           _host: null,
           _readableState: [ReadableState],
           readable: false,
           _events: [Object],
           _eventsCount: 2,
           _maxListeners: undefined,
           _writableState: [WritableState],
           writable: false,
           allowHalfOpen: false,
           _sockname: null,
           _pendingData: null,
           _pendingEncoding: '',
           server: null,
           _server: null,
           write: [Function: writeAfterFIN],
           [Symbol(asyncId)]: 134,
           [Symbol(lastWriteQueueSize)]: 0,
           [Symbol(timeout)]: null,
           [Symbol(kBytesRead)]: 1615,
           [Symbol(kBytesWritten)]: 0 },
        stdio: [ [Socket], [Socket], [Socket] ],
        emit: [Function] },
     stdout: undefined,
     stderr: undefined }
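
    A plausible culprit, going by the spawnargs in the dump: spawn() does not run the command through a shell, so the literal quotes in video='HP Truevision HD' reach ffmpeg as part of the device name, and dshow cannot find the device. The sketch below is a hedged suggestion along those lines, not a verified fix; it keeps the original arguments apart from the unquoted device name, and uses child-process-promise's capture option so that ffmpeg's stderr is attached to the rejection:

    const spawn = require("child-process-promise").spawn;

    // Same command as in the question, but the dshow device name carries no
    // literal quotes: spawn() hands each array element to ffmpeg verbatim.
    const args = [
      "-f", "lavfi", "-i", "anullsrc=r=16000:cl=mono",
      "-f", "dshow", "-ac", "2",
      "-i", "video=HP Truevision HD", // was "video='HP Truevision HD'"
      "-s", "1280x720", "-r", "10",
      "-vcodec", "libx264", "-pix_fmt", "yuv420p",
      "-preset", "ultrafast", "-r", "25", "-g", "20",
      "-b:v", "2500k", "-codec:a", "libmp3lame",
      "-ar", "44100", "-threads", "6",
      "-b:a", "11025", // kept from the question; 11 kbit/s audio looks unintended
      "-bufsize", "512k", "-f", "flv",
      "rtmp://a.rtmp.youtube.com/live2/8dfu-69k0-dxyw-896q"
    ];

    // With capture, ffmpeg's output is attached to the error object, so a
    // failure shows ffmpeg's own diagnostic instead of just "code 1".
    spawn("ffmpeg", args, { capture: ["stdout", "stderr"] })
      .catch(e => console.error(e.stderr || e));
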
  • How to add a Data Stream into an MXF (using mpeg2video) file with FFmpeg and C/C++

    26 March 2019, by Helmuth Schmitz

    I'm a little bit stuck here trying to create an MXF file
    with a data stream in it. I have several MXF video files that follow
    this layout:

    1 video stream:
        Stream #0:0: Video: mpeg2video (4:2:2), yuv422p(tv, bt709, top first), 1920x1080 [SAR 1:1 DAR 16:9], 50000 kb/s, 29.9
    16 audio streams:
        Audio: pcm_s24le, 48000 Hz, 1 channels, s32 (24 bit), 1152 kb/s
    1 data stream:
        Data: none

    This data stream contains personal data inside the video file. I can
    open this stream and the data is really there. All is OK. But when I try
    to create a file exactly like this, every call to "avformat_write_header"
    returns an error.

    If I comment out the creation of this data stream, the video file is
    successfully created.

    If I switch to mpegts with this data stream, the video file is also
    successfully created.

    But I can't use mpegts, and I need this data stream.

    I know that MXF with a data stream is possible, because I have these
    original files with exactly this combination.

    So I know that I'm missing something in my code.

    This is how I create the data stream:

    void CFFmpegVideoWriter::addDataStream(EOutputStream *ost, AVFormatContext *oc, AVCodec **codec, enum AVCodecID codec_id)
       {
           AVCodecParameters *par;

           ost->stream = avformat_new_stream(oc, NULL);
           if (ost->stream == NULL)
           {
               fprintf(stderr, "OOooohhh man: avformat_new_stream() failed.\n");
               return;
           }

           par = ost->stream->codecpar;
           ost->stream->index = 17;
           par->codec_id = AV_CODEC_ID_NONE;
           par->codec_type = AVMEDIA_TYPE_DATA;
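            // Hedged note (not from the original post): avformat_new_stream()
            // already assigns stream->index, so the manual assignment above may
            // confuse the muxer, and some muxers reject streams whose codec id
            // is AV_CODEC_ID_NONE at avformat_write_header(). If the source
            // files carry a concrete data essence (e.g. KLV), a recognized data
            // codec id such as AV_CODEC_ID_SMPTE_KLV may be needed here.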

           ost->stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }

    The file opening code is this:

    CFFMpegVideoWriter::CFFMpegVideoWriter(QString outputfilename) : QThread()
    {
       av_register_all();
       avcodec_register_all();

       isOpen = false;
       shouldClose = false;

       frameIndex = 0;

    #ifdef __linux__
       QByteArray bFilename = outputfilename.toUtf8();
    #else
       QByteArray bFilename = outputfilename.toLatin1();
    #endif

       const char* filename = bFilename.data();

       codecContext = NULL;

    //find the desired output format...
    outputFormat = av_guess_format("mp2v", filename, nullptr);
    if (!outputFormat)
    {
        qDebug("Could not find suitable output format\n");
        return;
    }

    //find the encoder...
    codec = avcodec_find_encoder(outputFormat->video_codec);
    if (!codec)
    {
        qDebug("Codec not found\n");
        return;
    }

    //allocate the codec context...
    codecContext = avcodec_alloc_context3(codec);
    codecContext->field_order = AV_FIELD_TT;
    codecContext->profile = FF_PROFILE_MPEG2_422;

    //allocate the output media context (avformat_alloc_output_context2
    //allocates the AVFormatContext itself, so no separate
    //avformat_alloc_context() call is needed)...
    avformat_alloc_output_context2(&formatContext, outputFormat, NULL, filename);
    if (!formatContext)
    {
        qDebug("Error allocating the output context");
        return;
    }

    videoStream.tmp_frame = NULL;
    videoStream.swr_ctx = NULL;

    //add the video stream...
    if (outputFormat->video_codec != AV_CODEC_ID_NONE)
    {
        addVideoStream(&videoStream, formatContext, &video_codec, outputFormat->video_codec);
    }

    //add the 16 audio streams...
    if (outputFormat->audio_codec != AV_CODEC_ID_NONE)
    {
        for (int i = 0; i < 16; i++)
        {
            addAudioStream(&audioStream[i], formatContext, &audio_codec, outputFormat->audio_codec);
        }
    }

    addDataStream(&datastream, formatContext, &video_codec, outputFormat->video_codec);

    videoStream.sws_ctx = NULL;
    for (int i = 0; i < 16; i++)
    {
        audioStream[i].sws_ctx = NULL;
    }
    opt = NULL;

    //open the video codec for the video stream...
    initVideoCodec(formatContext, video_codec, &videoStream, opt);

    //open the audio codec for each audio stream...
    for (int i = 0; i < 16; i++)
    {
        initAudioCodec(formatContext, audio_codec, &audioStream[i], opt);
    }

    av_dump_format(formatContext, 0, filename, 1);

    //open the output file...
    if (!(outputFormat->flags & AVFMT_NOFILE))
    {
        ret = avio_open(&formatContext->pb, filename, AVIO_FLAG_WRITE);
        if (ret < 0)
        {
            qDebug("Could not open '%s'", filename);
            return;
        }
    }

    //write the file header...
    ret = avformat_write_header(formatContext, &opt);
    if (ret < 0)
    {
        //report the actual reason instead of a generic message
        char errbuf[AV_ERROR_MAX_STRING_SIZE];
        av_strerror(ret, errbuf, sizeof(errbuf));
        qDebug("Error occurred when opening output file: %s", errbuf);
        return;
    }

    isOpen = true;

    QThread::start();
    }

    The code always fails at the "avformat_write_header" call.

    But if I remove the data stream, or change the container to mpegts, everything runs fine.

    Any idea what I'm doing wrong here?

    Thanks for reading this.

    Helmuth

  • Advice on how to specify length of animated GPX video with ffmpeg/image2pipe

    21 May 2019, by Chris Olin

    I'm working on a personal project involving an action camera that records GPS data alongside video from an image sensor. I found an open source project on GitHub called 'trackanimation' that uses a colored marker to trace the GPX path on an OpenStreetMap overlay, but it appears that the project has been abandoned. I'm trying to sync the trackanimation video to the image sensor video, but when I use video editing software to slow the GPX video down to 1%, it still ends up shorter than the image sensor video. I've tried messing with the ffmpeg command baked into make_video(), but still can't get the output video to be as long as I want it to be.

    I started digging into the library source to see how the video was being created, and tried tweaking a couple of things to no avail.

    import trackanimation
    from trackanimation.animation import AnimationTrack

    gpx_file = "Videos/20190516 unity ride #2.mp4.gpx"
    gpx_track = trackanimation.read_track(gpx_file)

    fig = AnimationTrack(df_points=gpx_track, dpi=300, bg_map=True, map_transparency=0.7)
    fig.make_video(output_file="Videos/1-11trackanimationtest.mp4", framerate=30, linewidth=1.0)

    # trackanimation's AnimationTrack.make_video, where the ffmpeg command is baked in:
    def make_video(self, linewidth=0.5, output_file='video', framerate=5):
           cmdstring = ('ffmpeg',
                        '-y',
                        '-loglevel', 'quiet',
                        '-framerate', str(framerate),
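                     # Note: with image2pipe, this input '-framerate' determines
                     # the duration (duration = frame_count / framerate); the
                     # '-r 25' below only resamples the output and does not
                     # change that duration.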
                        '-f', 'image2pipe',
                        '-i', 'pipe:',
                        '-r', '25',
                        '-s', '1920x1080',
                        '-pix_fmt', 'yuv420p',
                        output_file + '.mp4'
                        )

    I expect that I should be able to linearly "slow" the GPX video to a dynamic value based on the length of the video and the length I want it to be; with N piped frames and a target duration of T seconds, that should mean passing framerate = N / T to make_video().