Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • FFmpeg add a text to last image only

    29 May 2018, by BentCoder

    I managed to create a video from a set of non-sequential images and attach an audio track to it. I also added a "Copyright" text in the top right-hand corner so that the text appears throughout the video. However, I would like that text to appear only on the last image. How should I change my code below to address this?

    ffmpeg \
    -thread_queue_size 512 -f image2 -pattern_type glob -framerate 1/3 \
    -i '*.jpg' \
    -i 'audio.mp3' \
    -c:a aac -c:v libx264 \
    -vf "scale=640:480,format=yuv420p,drawtext=text='Copyright':fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5:x=w-tw-5:y=5" \
    -preset medium \
    video.mp4
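One possible approach (a sketch, not from the original post) is the drawtext filter's timeline option `enable`. With `-framerate 1/3` each image is shown for 3 seconds, so with N input images the last one starts at (N-1)*3 seconds. Assuming, for illustration, 10 input images:

```shell
# Sketch, assuming 10 input images: the last image starts at
# t = (10-1)*3 = 27 s, so the text is enabled from 27 s onward.
ffmpeg \
  -thread_queue_size 512 -f image2 -pattern_type glob -framerate 1/3 \
  -i '*.jpg' \
  -i 'audio.mp3' \
  -c:a aac -c:v libx264 \
  -vf "scale=640:480,format=yuv420p,drawtext=text='Copyright':fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5:x=w-tw-5:y=5:enable='gte(t,27)'" \
  -preset medium \
  video.mp4
```

The 27 here is an assumption tied to the image count; with a different number of images the threshold would need to be recomputed (or the command generated by a script that counts the files first).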
    
  • NanoPi NEO Plus2 h264 encode

    29 May 2018, by Harutyun Kamalyan

    Has anybody tried to encode h264 on the Allwinner H5 via ffmpeg cedrus264? Any help would be appreciated. Thanks in advance.

  • ffmpeg rtsp stream to YouTube livestream not doing anything

    29 May 2018, by felixosth

    I'm using C# to launch ffmpeg. I have an ONVIF bridge server for a CCTV VMS, and I'm building an application to enable the user to livestream any CCTV camera to YouTube.

    The rtsp stream to the camera looks like this:

    rtsp://onvif:bridge@localhost:554/live/xxxxx-xxxguidtocameraxxx-xxxxx

    I'm new to ffmpeg and I'm using these args:

    -f lavfi -i anullsrc -rtsp_transport udp -i {camerastreamurl} -tune zerolatency -vcodec libx264 -pix_fmt + -c:v copy -c:a aac -strict experimental -f flv rtmp://x.rtmp.youtube.com/live2/{streamkey} -loglevel debug

    It seems like ffmpeg isn't finding the camera stream, only the fake audio one. It just freezes.

    This is the result of the debug log:

    ffmpeg version N-91172-gebf85d3190 Copyright (c) 2000-2018 the FFmpeg developers
      built with gcc 7.3.0 (GCC)
      configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth
      libavutil      56. 18.102 / 56. 18.102
      libavcodec     58. 19.104 / 58. 19.104
      libavformat    58. 17.100 / 58. 17.100
      libavdevice    58.  4.100 / 58.  4.100
      libavfilter     7. 24.100 /  7. 24.100
      libswscale      5.  2.100 /  5.  2.100
      libswresample   3.  2.100 /  3.  2.100
      libpostproc    55.  2.100 / 55.  2.100
    Splitting the commandline.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'lavfi'.
    Reading option '-i' ... matched as input url with argument 'anullsrc'.
    Reading option '-rtsp_transport' ... matched as AVOption 'rtsp_transport' with argument 'udp'.
    Reading option '-i' ... matched as input url with argument 'rtsp://onvif:bridge@localhost:554/live/41cf4f34-e137-4559-8278-47d912c64c5b'.
    Reading option '-tune' ... matched as AVOption 'tune' with argument 'zerolatency'.
    Reading option '-vcodec' ... matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'libx264'.
    Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument '+'.
    Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'copy'.
    Reading option '-c:a' ... matched as option 'c' (codec name) with argument 'aac'.
    Reading option '-strict' ...Routing option strict to both codec and muxer layer
     matched as AVOption 'strict' with argument 'experimental'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'flv'.
    Reading option 'rtmp://x.rtmp.youtube.com/live2/xxxxxxxx' ... matched as output url.
    Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option loglevel (set logging level) with argument debug.
    Successfully parsed a group of options.
    Parsing a group of options: input url anullsrc.
    Applying option f (force format) with argument lavfi.
    Successfully parsed a group of options.
    Opening an input file: anullsrc.
    detected 8 logical cores
    [AVFilterGraph @ 0000027a34bad7c0] query_formats: 2 queried, 3 merged, 0 already done, 0 delayed
    [Parsed_anullsrc_0 @ 0000027a34badb80] sample_rate:44100 channel_layout:'stereo' nb_samples:1024
    [lavfi @ 0000027a34babc80] All info found
    Input #0, lavfi, from 'anullsrc':
      Duration: N/A, start: 0.000000, bitrate: 705 kb/s
        Stream #0:0, 1, 1/44100: Audio: pcm_u8, 44100 Hz, stereo, u8, 705 kb/s
    Successfully opened the file.
    Parsing a group of options: input url rtsp://onvif:bridge@localhost:554/live/41cf4f34-e137-4559-8278-47d912c64c5b.
    Successfully parsed a group of options.
    Opening an input file: rtsp://onvif:bridge@localhost:554/live/41cf4f34-e137-4559-8278-47d912c64c5b.
    [tcp @ 0000027a34bb5980] No default whitelist set
    

    Edit: I'm getting this result with minimal args (screenshot: "ffmpeg result").
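As an aside, the argument list contradicts itself: `-vcodec libx264` and `-c:v copy` are both video-codec options and the later `-c:v copy` wins, `-pix_fmt` is given the bare value `+`, and `-tune zerolatency` only matters when re-encoding. A hedged sketch of a simplified command, assuming the camera already delivers H.264 that can be copied as-is (URLs and keys are placeholders from the post):

```shell
# Sketch (assumption: the camera outputs H.264, so the video can be
# stream-copied). anullsrc supplies the silent audio track YouTube expects.
ffmpeg \
  -f lavfi -i anullsrc \
  -rtsp_transport tcp -i "{camerastreamurl}" \
  -c:v copy -c:a aac \
  -f flv "rtmp://x.rtmp.youtube.com/live2/{streamkey}"
```

Trying `-rtsp_transport tcp` instead of `udp` is a common first step when an RTSP input appears to hang, since UDP packets are often dropped or blocked.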

  • ffmpeg PNG to mp4 - Black screen

    29 May 2018, by ilciavo

    I can create an mpg video using this line:

    ffmpeg -f image2 -i 100%03d0.png movie.mpg
    

    But if I try creating an mp4 video, I get a video with a black screen:

    ffmpeg -f image2 -i 100%03d0.png movie.mp4
    

    My directory with figures looks like: 1000010.png,1000020.png,...1001260.png
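A frequent cause of black mp4 output from PNG input (an assumption here, not confirmed by the post) is the pixel format: PNG sources tend to produce yuv444p or RGB-based H.264 streams that many players cannot display. Forcing a widely supported pixel format is worth trying:

```shell
# Sketch: force yuv420p so the resulting H.264 stream plays in common players
ffmpeg -f image2 -i 100%03d0.png -pix_fmt yuv420p movie.mp4
```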

  • Stream webm to node.js from c# application in chunks

    29 May 2018, by Dan-Levi Tømta

    I am in the process of learning about streaming between node.js (with socket.io) and C#.

    I have code that successfully records the screen with ffmpeg, redirects its StandardOutput.BaseStream and stores it in a MemoryStream. When I click stop in my application, it sends the memory stream as a byte array to the node.js server, which stores the file so the clients can play it. This is working just fine, and here is my setup for that:

    C#

    bool ffWorkerIsWorking = false;
    private void btnFFMpeg_Click(object sender, RoutedEventArgs e)
    {
        BackgroundWorker ffWorker = new BackgroundWorker();
        ffWorker.WorkerSupportsCancellation = true;
        ffWorker.DoWork += ((ffWorkerObj,ffWorkerEventArgs) =>
        {
            ffWorkerIsWorking = true;
            using (var FFProcess = new Process())
            {
                var processStartInfo = new ProcessStartInfo
                {
                    FileName = "ffmpeg.exe",
                    RedirectStandardInput = true,
                    RedirectStandardOutput = true,
                    UseShellExecute = false,
                    CreateNoWindow = false,
                    Arguments = " -loglevel panic -hide_banner -y -f gdigrab -draw_mouse 1 -i desktop -threads 2 -deadline realtime  -f webm -"
                };
                FFProcess.StartInfo = processStartInfo;
                FFProcess.Start();
    
                byte[] buffer = new byte[32768];
                using (MemoryStream ms = new MemoryStream())
                {
                    while (!FFProcess.HasExited)
                    {
                        int read = FFProcess.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
                        if (read <= 0)
                            break;
                        ms.Write(buffer, 0, read);
                        Console.WriteLine(ms.Length);
                        if (!ffWorkerIsWorking)
                        {                                
                            clientSocket.Emit("video", ms.ToArray());                                 
                            ffWorker.CancelAsync();
                            break;
                        }
                    }
                }
            }
        });
        ffWorker.RunWorkerAsync();
    }
    

    JS (Server)

    socket.on('video', function(data) {
        fs.appendFile('public/fooTest.webm', data, function (err) {
          if (err) throw err;
          console.log('File uploaded');
        });
    });
    

    Now I need to change this code so that, instead of sending the whole file, it sends chunks of byte arrays, and node will then initially create a file and append those chunks of byte arrays as they are received. OK, sounds easy enough, but apparently not.

    I need to somehow instruct the code to use an offset, send just the bytes after that offset, and then update the offset.

    On the server side I think the best approach is to create a file and append the byte arrays to that file as they are received.

    On the server side I would do something like this:

    JS (Server)

    var buffer = new Buffer(32768);
    var isBufferingDone = false;
    socket.on('video', function(data) {
        //concatenate the buffer with the incoming data and broadcast.emit to clients
    
    });
    

    How am I able to set up the offset for the bytes to be sent and update that offset, and how would I approach concatenating the data to the initialized buffer?
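On the node side, one way to avoid offset bookkeeping entirely (a hypothetical sketch with made-up names, not the poster's final code) is to collect each incoming chunk in an array and let Buffer.concat compute the total size when the recording stops:

```javascript
// Sketch: accumulate incoming chunks; Buffer.concat sizes the result
// to the exact total, so no manual offset tracking is needed.
let chunks = [];

function onVideoChunk(data) {
  // 'data' arrives as a Buffer (or byte array) per socket.io event
  chunks.push(Buffer.from(data));
}

function onCancelVideo() {
  // Join all received chunks into one contiguous buffer
  const file = Buffer.concat(chunks);
  chunks = []; // reset for the next recording
  return file;
}

// Example: two chunks of different sizes concatenate losslessly
onVideoChunk(Buffer.from([1, 2, 3]));
onVideoChunk(Buffer.from([4, 5]));
console.log(onCancelVideo().length); // 5
```

For this to produce a valid file, the C# side would have to emit only the `read` bytes of each pass (copying them into a fresh array of length `read`) rather than the whole reusable 32768-byte buffer, so each received chunk is exactly the data ffmpeg produced.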

    I have tried to write some code that only reads from the offset to the end, and it seems to be working, although the video, once assembled in node, is just black:

    C#

    while (!FFProcess.HasExited)
    {
        int read = FFProcess.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
        if (read <= 0)
            break;
        int offset = (read - buffer.Length > 0 ? read - buffer.Length : 0);
        ms.Write(buffer, offset, read);
        clientSocket.Emit("videoChunk", buffer.ToArray());
        if (!ffWorkerIsWorking)
        {                                
            ffWorker.CancelAsync();
            break;
        }
    }
    

    Node console output (screenshot): "Bytes read"

    JS (Server)

    socket.on('videoChunk', function(data) {
        if (!isBufferingDone) {
            buffer = Buffer.concat([buffer, data]);
            console.log(data.length);
        }
    });
    
    socket.on('cancelVideo', function() {
        isBufferingDone = true;
        setTimeout(function() {
            fs.writeFile("public/test.webm", buffer, function(err) {
                if(err) {
                    return console.log(err);
                }
                console.log("The file was saved!");
                buffer = new Buffer(32768);
            }); 
        }, 1000); 
    });
    

    JS (Client)

    socket.on('video', function(filePath) {
       console.log('path: ' + filePath);
       $('#videoSource').attr('src',filePath);
       $('#video').get(0).play();
    });
    

    Thanks!