Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • How to apply 2 filters drawtext and drawbox using FFMPEG

    9 July 2017, by user6972

    I'm having problems combining filters. I'm trying to take video from the camera, apply a timer to it, and also overlay a box in the center. I can put a time code (local time and pts) on it using the -vf drawtext command with no problem:

    ffmpeg -f video4linux2 -input_format mjpeg -s 1280x720 -i /dev/video0 \
    -vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: \
    text='%{localtime} %{pts\:hms}':  fontcolor=white: fontsize=24: box=1: \
    boxcolor=black@0.8: boxborderw=5: x=0: y=0" -vcodec libx264 \
    -preset ultrafast -f mp4 -pix_fmt yuv420p -y output.mp4
    

    Then I have one that draws a small box using drawbox:

    ffmpeg -f video4linux2 -input_format mjpeg -s 1280x720 -i /dev/video0 \
    -filter_complex " drawbox=x=iw/2:y=0:w=10:h=ih:color=red@0.1": \
    -vcodec libx264 -preset ultrafast -f mp4 -pix_fmt yuv420p -y output.mp4
    

    I assumed I could combine these with the filter_complex switch and separate them with a semicolon, like this:

    ffmpeg -f video4linux2 -input_format mjpeg -s 1280x720 -i /dev/video0 \
    -filter_complex "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: \
    text='%{localtime} %{pts\:hms}':  fontcolor=white: fontsize=24: box=1: \
    boxcolor=black@0.8;drawbox=x=iw/2:y=0:w=10:h=ih:color=red@0.1": \
    -vcodec libx264 -preset ultrafast -f mp4 -pix_fmt yuv420p -y output.mp4
    

    But it fails to find the input stream on the second filter:

    Input #0, video4linux2,v4l2, from '/dev/video0':
      Duration: N/A, start: 10651.720690, bitrate: N/A
        Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1280x720, -5 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
    
    Cannot find a matching stream for unlabeled input pad 0 on filter Parsed_drawbox_1

    I tried to direct it to [0] like this:

    ffmpeg -f video4linux2 -input_format mjpeg -s 1280x720 -i /dev/video0 -filter_complex " \
    drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: \
    text='%{localtime} %{pts\:hms}':  fontcolor=white: fontsize=24: box=1: \
    boxcolor=black@0.8;[0] drawbox=x=iw/2:y=0:w=10:h=ih:color=red@0.1": \
    -vcodec libx264 -preset ultrafast -f mp4 -pix_fmt yuv420p -y output.mp4
    

    But it fails to put the box on the output.

    So I tried to split the stream, like this:

    ffmpeg -f video4linux2 -input_format mjpeg -s 1280x720 -i /dev/video0 -filter_complex " \
    split [main][tmp];\
    [main] drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: \
    text='%{localtime} %{pts\:hms}':  fontcolor=white: fontsize=24: box=1: boxcolor=black@0.8 [tmp];\
    [main] drawbox=x=iw/2:y=0:w=10:h=ih:color=red@0.1 [tmp2]; [tmp][tmp2] overlay": \
    -vcodec libx264 -preset ultrafast -f mp4 -pix_fmt yuv420p -y output.mp4
    

    But my build doesn't have the overlay filter compiled in. At this point I decided to stop and ask whether I'm making this harder than it should be. The end result I want is just a timer and a box drawn on the video. Is there a better way or a formatting trick to do this?
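    One formatting trick I did find in the filter docs while writing this up: filters in the same linear chain are separated by commas, while semicolons separate distinct chains that then need pad labels. If I've read that right, chaining the two filters with a comma should sidestep the labeling problem entirely. A sketch of that idea (untested):

    ffmpeg -f video4linux2 -input_format mjpeg -s 1280x720 -i /dev/video0 \
    -vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: \
    text='%{localtime} %{pts\:hms}': fontcolor=white: fontsize=24: box=1: \
    boxcolor=black@0.8: boxborderw=5, drawbox=x=iw/2:y=0:w=10:h=ih:color=red@0.1" \
    -vcodec libx264 -preset ultrafast -f mp4 -pix_fmt yuv420p -y output.mp4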

    Thanks

  • Understanding ffmpeg input stream information

    9 July 2017, by David Parks

    When running ffmpeg I get the following input/output/stream statements. I need to understand the details here.

    $ ffmpeg -y -nostdin -f v4l2 -framerate 30 -video_size 1920x1080 -c:v mjpeg -i /dev/video1 -c:v copy /tmp/v1.mov
    
    Input #0, video4linux2,v4l2, from '/dev/video1':
      Duration: N/A, start: 762195.237801, bitrate: N/A
        Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1920x1080, -5 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
    
    Output #0, mov, to '/tmp/v1.mov':
      Metadata:
        encoder         : Lavf56.40.101
        Stream #0:0: Video: mjpeg (jpeg / 0x6765706A), yuvj422p, 1920x1080, q=2-31, -5 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
    
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
    frame= 1685 fps= 30 q=-1.0 Lsize=  212483kB time=00:00:56.08 bitrate=31036.6kbits/s    
    

    I want to connect 2 USB cameras over a USB 3.0 hub. My cameras are USB 2.0 cameras. Running 2 cameras at low resolution or framerate works, but at high resolution/framerate, I run out of USB bandwidth.

    Does Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown) mean that ffmpeg is receiving both the compressed mjpeg stream and an uncompressed yuv stream? If so, that would explain the bandwidth issue. I ask because I can see that the compressed bitrate is only about 31 Mbit/s in the Stream mapping section.

    My question then becomes: can I force the camera to send only the compressed mjpeg stream?
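    As far as I can tell, ffmpeg's v4l2 demuxer accepts an -input_format option that asks the driver for a specific pixel format, so something like this should request only MJPEG from the camera (not verified on my hardware):

    ffmpeg -y -nostdin -f v4l2 -input_format mjpeg -framerate 30 -video_size 1920x1080 \
    -i /dev/video1 -c:v copy /tmp/v1.mov

    # list the formats the camera actually offers
    v4l2-ctl --list-formats-ext -d /dev/video1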

    p.s. I know I can plug the cameras into separate USB ports, but I only have 3 ports and need to record 6 cameras, so I need at least 2 cameras per USB (3.0) hub.

  • Use popen with a relative or absolute path on Windows in C

    9 July 2017, by Tiwenty

    I'm trying to make a GUI wrapper for FFmpeg. I decided to build my program around the binary instead of the libraries. I want to call ffmpeg.exe in parallel with my program and get the console output as it arrives, so I can parse it and show the user a progress bar or something.

    So I found popen, which gives me the output as it comes, so I can do what I want with it. But there are some problems with paths in popen on Windows.

    When I try ffmpeg.exe -i TEST.mkv TEST.mp4 (while I'm in the folder containing both ffmpeg.exe and my source file, either from the start or after a chdir), it works well and I get what I want.

    But when I do

    bin/ffmpeg.exe -i TEST.mkv TEST.mp4

    (with the executable obviously being in the bin folder) it doesn't find the command bin:

    'bin' is not recognized as an internal or external command, operable program or batch file.

    It also doesn't work with quotation marks around the path:

    "bin/ffmpeg.exe" -i TEST.mkv TEST.mp4

    But it does work with an absolute path and quotation marks:

    "C:/PATH/TO/USER/FFmpeg GUI/bin/ffmpeg.exe" -i jellyfish.mkv jellyfish.mp4.

    But it gets funnier.

    If I use relative paths for the source and output, it works.

    "C:/Users/tibtw_000/Raccourci/Code/FFmpeg GUI/bin/ffmpeg.exe" -i videos/jellyfish.mkv videos/jellyfish.mp4

    But if I put an absolute path to my source or output files, it doesn't work.

    "C:/PATH/TO/USER/FFmpeg GUI/bin/ffmpeg.exe" -i "C:/PATH/TO/USER/FFmpeg GUI/videos/jellyfish.mkv"

    and now returns

    'C:/PATH/TO/USER/FFmpeg' is not recognized as an internal or external command, operable program or batch file.

    Now I'm starting to wonder if the space in my path is at fault. And it was. When I use

    C:/PATH/TO/USER/FFmpegGUI/bin/ffmpeg.exe -i C:/PATH/TO/USER/FFmpegGUI/videos/jellyfish.mkv C:/PATH/TO/USER/FFmpegGUI/videos/jellyfish.mp4

    in popen, everything works perfectly.

    So what should I do? Do I force the user not to put their files in paths with spaces? Is there a special character I can use instead of a normal space in the path to make popen understand what I want?
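    One workaround I have seen mentioned (untested): popen on Windows hands the command line to cmd.exe, and cmd strips the first and last quote characters it sees, so wrapping the entire command in one extra pair of quotes keeps the inner quotes around paths with spaces intact. A minimal sketch, with the same placeholder paths as above:

    #include <stdio.h>
    
    int main( void ) {
      /* The outer pair of quotes is consumed by cmd.exe; the inner quotes
         around each path with spaces survive. Paths are placeholders. */
      const char *cmd =
        "\"\"C:/PATH/TO/USER/FFmpeg GUI/bin/ffmpeg.exe\""
        " -i \"C:/PATH/TO/USER/FFmpeg GUI/videos/jellyfish.mkv\""
        " \"C:/PATH/TO/USER/FFmpeg GUI/videos/jellyfish.mp4\" 2>&1\"";
      FILE *p = _popen( cmd, "r" );  /* _popen is the MSVC name for popen */
      if ( !p ) { perror( "_popen" ); return 1; }
      char line[ 1024 ];
      while ( fgets( line, sizeof line, p ) )
        fputs( line, stdout );       /* ffmpeg logs to stderr, hence 2>&1 */
      return _pclose( p );
    }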

  • FFMpeg concat streams

    9 July 2017, by chourizo

    I am trying to receive two H264 UDP streams from two cameras and save them to one file (so they are always synchronized). I have tried a lot of things, but it always says that there is no video in the second stream (although I can watch it).

    ffmpeg -probesize 20M -analyzeduration 20M -i udp://@127.0.0.1:1234 -probesize 20M -analyzeduration 20M -i udp://@127.0.0.1:1235 -filter_complex "[0:v]fps=15,scale=320:240,setsar=1/1,setpts=PTS-STARTPTS[v0]; [1:v]fps=15,scale=320:240,setsar=1/1,setpts=PTS-STARTPTS[v1]; [v0][v1]concat=n=2:v=1:a=0 [v0] [v1]" -map "[v0]" -map "[v1]" -threads 0 -y kk.ts
    

    Is it possible to apply concat to real-time streams, so that we end up with a video containing two programs?
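    What I am ultimately after is one file with both cameras kept in sync rather than played back-to-back, so I also wondered about skipping concat and mapping both video streams into a single MPEG-TS, which can carry several programs via the mpegts muxer's -program option. A sketch of that, which I have not verified against live inputs:

    ffmpeg -probesize 20M -analyzeduration 20M -i udp://@127.0.0.1:1234 \
           -probesize 20M -analyzeduration 20M -i udp://@127.0.0.1:1235 \
           -map 0:v -map 1:v -c:v copy \
           -program title=cam1:st=0 -program title=cam2:st=1 \
           -y kk.ts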

  • ffmpeg & C, writing video to a buffer [on hold]

    9 July 2017, by gogoer

    I'm using the following code to get video from an RTSP stream and write it to a file.

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    
    /* set from a signal handler elsewhere: non-zero while capture should run */
    volatile sig_atomic_t active_flag = 0;
    
    int f_capture() {
    
      AVFormatContext *ifcx = NULL;
      AVCodecContext *iccx;
      AVStream *ist;
    
      int i_index;
    
      AVFormatContext *ofcx;
      AVOutputFormat *ofmt;
      AVStream *ost;
    
      AVPacket pkt;
    
      int ix;
    
      const char *sFileInput = "rtsp://192.168.100.10/user=admin&password=&channel=1&stream=1.sdp?";
      const char *sFileOutput = "test.avi";
    
      av_log_set_level( AV_LOG_DEBUG ); 
      av_register_all();
      avcodec_register_all();
      avformat_network_init();
    
      if ( avformat_open_input( &ifcx, sFileInput, NULL, NULL) != 0 ) {
        printf( "ERROR: Cannot open input file\n" );
        return EXIT_FAILURE;
      }
    
      if ( avformat_find_stream_info( ifcx, NULL ) < 0 ) {
        printf( "ERROR: Cannot find stream info\n" );
        avformat_close_input( &ifcx );
        return EXIT_FAILURE;
      }
    
      snprintf( ifcx->filename, sizeof( ifcx->filename ), "%s", sFileInput );
    
      /* find the first video stream in the input */
      i_index = -1;
      for ( ix = 0; ix < ifcx->nb_streams; ix++ ) {
        iccx = ifcx->streams[ ix ]->codec;
        if ( iccx->codec_type == AVMEDIA_TYPE_VIDEO ) {
          ist = ifcx->streams[ ix ];
          i_index = ix;
          break;
        }
      }
      if ( i_index < 0 ) {
        printf( "ERROR: Cannot find input video stream\n" );
        avformat_close_input( &ifcx );
        return EXIT_FAILURE;
      }
    
      /* set up the output context, guessing the container from the file name */
      ofmt = av_guess_format( NULL, sFileOutput, NULL );
      ofcx = avformat_alloc_context();
      ofcx->oformat = ofmt;
      avio_open2( &ofcx->pb, sFileOutput, AVIO_FLAG_WRITE, NULL, NULL );
    
      ost = avformat_new_stream( ofcx, NULL );
      avcodec_copy_context( ost->codec, iccx );
    
      ost->sample_aspect_ratio.num = iccx->sample_aspect_ratio.num;
      ost->sample_aspect_ratio.den = iccx->sample_aspect_ratio.den;
    
      ost->r_frame_rate = ist->r_frame_rate;
      ost->avg_frame_rate = ost->r_frame_rate;
      ost->time_base = av_inv_q( ost->r_frame_rate );
      ost->codec->time_base = ost->time_base;
    
      avformat_write_header( ofcx, NULL );
    
      snprintf( ofcx->filename, sizeof( ofcx->filename ), "%s", sFileOutput );
    
      av_dump_format( ifcx, 0, ifcx->filename, 0 );
      av_dump_format( ofcx, 0, ofcx->filename, 1 );
    
    
      /* remux loop: copy video packets, regenerating timestamps */
      ix = 0;
      av_init_packet( &pkt );
      while ( av_read_frame( ifcx, &pkt ) >= 0 && active_flag>0){
    
        if ( pkt.stream_index == i_index ) {
    
          pkt.stream_index = ost->index;  /* use index, not id: id is the container-level stream id */
          pkt.pts = ix++;
          pkt.dts = pkt.pts;
          av_interleaved_write_frame( ofcx, &pkt );
    
        }
        av_free_packet( &pkt );
        av_init_packet( &pkt );
    
      }
      av_read_pause( ifcx );
      av_write_trailer( ofcx );
      avio_close( ofcx->pb );
      avformat_free_context( ofcx );
    
      avformat_network_deinit();
    
    return EXIT_SUCCESS;
    }
    

    This code just gets packets from the RTSP stream and writes them to a file. When the 'start' signal comes to the program, the function starts; when the 'stop' signal comes, writing to the file stops. I need the file to cover from 10 seconds before the 'start' signal until 10 seconds after the 'stop' signal. How can I create a 10-second buffer for the video and write it to the file when the 'start' signal comes?
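    Here is the ring-buffer approach I have sketched so far, using the same legacy API as the code above (av_copy_packet / av_free_packet). PREROLL_PACKETS is a rough stand-in for 10 seconds at 30 fps, and I suspect a real version should only start flushing at a keyframe:

    #define PREROLL_PACKETS 300  /* placeholder: ~10 s at 30 fps */
    
    static AVPacket ring[ PREROLL_PACKETS ];
    static int ring_head = 0, ring_count = 0;
    
    /* while idle: keep a deep copy of the newest packet, dropping the oldest */
    static void ring_push( const AVPacket *pkt ) {
      int pos;
      if ( ring_count == PREROLL_PACKETS ) {
        av_free_packet( &ring[ ring_head ] );
        ring_head = ( ring_head + 1 ) % PREROLL_PACKETS;
        ring_count--;
      }
      pos = ( ring_head + ring_count ) % PREROLL_PACKETS;
      av_copy_packet( &ring[ pos ], pkt );  /* deep copy: the caller still frees its own pkt */
      ring_count++;
    }
    
    /* on 'start': write the buffered pre-roll before the live packets,
       rewriting timestamps the same way as in the main loop */
    static void ring_flush( AVFormatContext *ofcx, AVStream *ost, int *ix ) {
      while ( ring_count > 0 ) {
        AVPacket *p = &ring[ ring_head ];
        p->stream_index = ost->index;
        p->pts = (*ix)++;
        p->dts = p->pts;
        av_interleaved_write_frame( ofcx, p );
        av_free_packet( p );
        ring_head = ( ring_head + 1 ) % PREROLL_PACKETS;
        ring_count--;
      }
    }

    The 10 seconds after 'stop' would then just be a countdown of another PREROLL_PACKETS packets in the read loop before breaking out. Does this look like the right direction, or is there a readier-made way?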