Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • How can I convert an FFmpeg AVFrame with pixel format AV_PIX_FMT_CUDA to a new AVFrame with pixel format AV_PIX_FMT_RGB

    28 mars 2019, par costef

    I have a simple C++ application that uses FFmpeg 3.2 to receive an H264 RTP stream. In order to save CPU, I'm doing the decoding part with the codec h264_cuvid. My FFmpeg 3.2 is compiled with hw acceleration enabled. In fact, if I run the command:

    ffmpeg -hwaccels
    

    I get

    cuvid
    

    This means that my FFmpeg setup has everything it needs to "speak" with my NVIDIA card. The frames that the function avcodec_decode_video2 provides have the pixel format AV_PIX_FMT_CUDA. I need to convert those frames to new ones with AV_PIX_FMT_RGB. Unfortunately, I can't do the conversion using the well-known functions sws_getContext and sws_scale, because the pixel format AV_PIX_FMT_CUDA is not supported. If I try with swscale I get the error:

    "cuda is not supported as input pixel format"

    Do you know how to convert an FFmpeg AVFrame from AV_PIX_FMT_CUDA to AV_PIX_FMT_RGB? (Code snippets would be greatly appreciated.)
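
    One approach that is commonly suggested is to first download the decoded surface from GPU memory with av_hwframe_transfer_data (part of the libavutil hwcontext API, present since FFmpeg 3.1), which gives you a normal software frame (typically NV12), and only then convert that with swscale. Below is a minimal sketch of that idea, assuming the decoder attaches a hardware frames context to the frames it returns; the helper name cuda_frame_to_rgb and the RGB24 target are illustrative, not taken from the question:

    #include <libavutil/frame.h>
    #include <libavutil/hwcontext.h>
    #include <libswscale/swscale.h>

    /* hw_frame is a decoded frame whose format is AV_PIX_FMT_CUDA. */
    static AVFrame *cuda_frame_to_rgb(const AVFrame *hw_frame)
    {
        AVFrame *sw_frame  = av_frame_alloc();   /* CPU copy, typically NV12 */
        AVFrame *rgb_frame = av_frame_alloc();
        struct SwsContext *sws = NULL;

        /* Download the surface from GPU memory into a software frame. */
        if (av_hwframe_transfer_data(sw_frame, hw_frame, 0) < 0)
            goto fail;

        rgb_frame->format = AV_PIX_FMT_RGB24;
        rgb_frame->width  = sw_frame->width;
        rgb_frame->height = sw_frame->height;
        if (av_frame_get_buffer(rgb_frame, 0) < 0)
            goto fail;

        /* The source is now an ordinary software pixel format, so swscale accepts it. */
        sws = sws_getContext(sw_frame->width, sw_frame->height,
                             (enum AVPixelFormat)sw_frame->format,
                             rgb_frame->width, rgb_frame->height, AV_PIX_FMT_RGB24,
                             SWS_BILINEAR, NULL, NULL, NULL);
        if (!sws)
            goto fail;

        sws_scale(sws, (const uint8_t * const *)sw_frame->data, sw_frame->linesize,
                  0, sw_frame->height, rgb_frame->data, rgb_frame->linesize);

        sws_freeContext(sws);
        av_frame_free(&sw_frame);
        return rgb_frame;

    fail:
        sws_freeContext(sws);
        av_frame_free(&sw_frame);
        av_frame_free(&rgb_frame);
        return NULL;
    }

    For clarity the sketch allocates a fresh SwsContext per call; in a real decoding loop you would normally create it once and reuse it.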

  • Elastic Transcoder output duration doesn't match the sum of my input durations

    28 mars 2019, par auricless

    I have multiple media files to concatenate into a single video file, composed of different media types including video, audio and images. I use FFmpeg to convert the audio and images to video, and then use Elastic Transcoder to stitch/concatenate the video files into a single one. When creating the transcoder job, whenever the input video that was originally an image (converted by FFmpeg) is placed last in the input order, the duration of its exposure in the final output shrinks whenever its original duration is > 5 seconds. This happens only under that condition.

    Example: inputs (1) video1 - 10s, (2) image1 - 10s, (3) video2 - 15s, (4) image2 - 20s; output: video - 40s (image2's duration/exposure in the output shrinks to approx. 5s).

    Clearly, the sum of the input durations and the output duration do not match. This is even explicitly stated in the Elastic Transcoder job result.


    I thought I had wrong conversion settings in FFmpeg, so I changed some options. After those changes, comparing the image converted to video (V1) with an authentic video to stitch with (V2), their settings are almost the same. I used ffmpeg -i myVideo.mp4 to check the details. They differ only in SAR, DAR, tbr and tbn, and I don't really know what those are used for.

    I already checked the duration of the converted images after the FFmpeg conversion and it is accurate; it only gets messed up after being fed to Elastic Transcoder and placed as the last input.

    Here is my conversion command with FFmpeg (image to video):

    ffmpeg -r 29.97 -i [input.jpg] -f lavfi -i anullsrc=r=48000:cl=stereo -t [duration] -acodec aac -vcodec libx264 -profile:v baseline -pix_fmt yuv420p -t [duration] -vf scale=854:480 -strict -2 [output.mp4]

    The expected result is that the output file's duration is consistent with the actual durations of the inputs.

    [EDIT]

    Here are the actual videos I feed to Elastic Transcoder, inspected with ffprobe filename:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'clip2.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf57.71.100
      Duration: 00:00:10.05, start: 0.042667, bitrate: 476 kb/s
        Stream #0:0(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
        Metadata:
          handler_name    : SoundHandler
        Stream #0:1(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 854x480 [SAR 2136:2135 DAR 89:50], 341 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
        Metadata:
          handler_name    : VideoHandler
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'image2.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf56.12.100
      Duration: 00:02:10.03, start: 0.033333, bitrate: 130 kb/s
        Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 854x480 [SAR 1943:1004 DAR 829661:240960], 2636 kb/s, SAR 283440:146461 DAR 1181:343, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
        Metadata:
          handler_name    : VideoHandler
        Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 128 kb/s (default)
        Metadata:
          handler_name    : SoundHandler
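
    The main differences between the two dumps are the tbn (the stream time base) and tbr/fps values. If it helps to compare the clips programmatically rather than by eye, a small libavformat program can print the per-stream time base, frame rate and duration. This is only a hedged sketch (it assumes FFmpeg 4.0 or newer, where av_register_all is no longer needed, and takes the file name as a command-line argument):

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavutil/avutil.h>

    int main(int argc, char **argv)
    {
        AVFormatContext *fmt = NULL;

        if (argc < 2 || avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
            return 1;
        if (avformat_find_stream_info(fmt, NULL) < 0)
            return 1;

        /* Container-level duration, in seconds. */
        printf("container duration: %.3f s\n", fmt->duration / (double)AV_TIME_BASE);

        for (unsigned i = 0; i < fmt->nb_streams; i++) {
            AVStream *st = fmt->streams[i];
            double dur = st->duration == AV_NOPTS_VALUE
                       ? 0.0 : st->duration * av_q2d(st->time_base);

            /* time_base is what ffmpeg/ffprobe print as "tbn";
               avg_frame_rate is the "fps" column. */
            printf("stream %u: time_base=%d/%d avg_frame_rate=%d/%d duration=%.3f s\n",
                   i, st->time_base.num, st->time_base.den,
                   st->avg_frame_rate.num, st->avg_frame_rate.den, dur);
        }

        avformat_close_input(&fmt);
        return 0;
    }

    Running it over clip2.mp4 and image2.mp4 makes the tbn/fps difference shown above (30k vs 12800, 29.97 vs 25 fps) easy to compare side by side.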
    
  • Delete frame in video with ffmpeg

    28 mars 2019, par Hòa Hưng Ngô

    I have a video at 24 frames per second. I understand that means 24 images are shown in a row during each second? Is that wrong? If it is true, can each of the 24 images that appear during that second be deleted or edited individually, and can ffmpeg do that? This is just an idea I suddenly had for interfering more deeply with an existing video. Has anyone else thought about this?
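
    For what it's worth, the FFmpeg libraries do expose the video as individual frames: the decoder hands you one AVFrame per picture, and you can inspect, modify or drop each one before re-encoding. A rough sketch of such a decode loop follows (it assumes FFmpeg 4.0+, the drop-every-24th-frame rule is only a placeholder, and the re-encoding/writing step is left out):

    #include <stdio.h>
    #include <inttypes.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    int main(int argc, char **argv)
    {
        AVFormatContext *fmt = NULL;
        if (argc < 2 || avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
            return 1;
        if (avformat_find_stream_info(fmt, NULL) < 0)
            return 1;

        int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        if (vstream < 0)
            return 1;

        const AVCodec *dec = avcodec_find_decoder(fmt->streams[vstream]->codecpar->codec_id);
        if (!dec)
            return 1;
        AVCodecContext *ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, fmt->streams[vstream]->codecpar);
        if (avcodec_open2(ctx, dec, NULL) < 0)
            return 1;

        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        int64_t n = 0;

        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == vstream && avcodec_send_packet(ctx, pkt) >= 0) {
                while (avcodec_receive_frame(ctx, frame) >= 0) {
                    /* Each iteration sees exactly one picture; at 24 fps that is
                       24 frames per second of video.  Here you could edit
                       frame->data, or skip the frame entirely before re-encoding. */
                    int drop = (n % 24) == 0;               /* placeholder rule */
                    printf("frame %" PRId64 ": %s\n", n, drop ? "drop" : "keep");
                    n++;
                }
            }
            av_packet_unref(pkt);
        }

        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return 0;
    }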

  • How to run ffmpeg as a systemd service?

    28 mars 2019, par Dr.eel

    My problem is that when the systemd service starts ffmpeg at boot, it creates a defunct process. It just won't start. Even when I try to start ffmpeg as a subprocess of another app managed by systemd, it also crashes.
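
    For reference, below is a hedged sketch of the kind of unit file usually used for a long-running ffmpeg job; the paths, input URL and output are placeholders, not taken from the question. The -nostdin flag disables ffmpeg's interactive console input, which the ffmpeg documentation recommends when it runs in the background without a terminal:

    [Unit]
    Description=Example ffmpeg streaming job (placeholder)
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=simple
    # -nostdin stops ffmpeg from expecting an interactive terminal
    ExecStart=/usr/bin/ffmpeg -nostdin -i rtsp://example.local/stream -c copy -f mpegts udp://127.0.0.1:1234
    Restart=on-failure
    RestartSec=5

    [Install]
    WantedBy=multi-user.target

    After installing a unit like this, systemctl enable --now and journalctl -u <unit-name> will show ffmpeg's own error output, which usually says more than the defunct state does.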

  • getting error in ffmpeg complex filter init

    28 mars 2019, par Parbat

    I am trying to build an application using the ffmpeg libraries; the app will scale video and show volume bars (using the showvolume filter) over the scaled video.

    When I use the ffmpeg command it works like a charm, but somehow I can't make it work in a C program. Please help me solve this issue.

    I am trying to set up the complex filter in two different ways:

    1. using the avfilter_graph_parse2 function
    2. manually linking each filter

    method 1 Error:

    [Parsed_showvolume_0 @ 0x1aca580] Format change is not supported Error while feeding the filtergraph

    output screenshot

    filter graph

    method 2 Error:

    [afifo @ 0x9cd880] Format change is not supported Error while feeding the filtergraph

    output screenshot

    filter graph
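
    For comparison, here is a hedged sketch of how the avfilter_graph_parse2 route is commonly wired up for just the audio branch (abuffer -> showvolume -> buffersink). It assumes an opened audio decoder context dec_ctx and the audio stream's time_base; the names are illustrative and the video branch of the real graph is omitted. The reason for building the abuffer args from the decoder is that "Format change is not supported" typically appears when the frames actually fed to the source do not match (or change away from) the format the source was created with:

    #include <stdio.h>
    #include <inttypes.h>
    #include <libavcodec/avcodec.h>
    #include <libavfilter/avfilter.h>
    #include <libavfilter/buffersrc.h>
    #include <libavfilter/buffersink.h>

    /* Builds: abuffer ("in") -> showvolume -> buffersink ("out"). */
    static int init_showvolume_graph(AVFilterGraph **graph_out,
                                     AVFilterContext **src_out,
                                     AVFilterContext **sink_out,
                                     AVCodecContext *dec_ctx,
                                     AVRational time_base)
    {
        AVFilterGraph *graph = avfilter_graph_alloc();
        AVFilterInOut *inputs = NULL, *outputs = NULL;
        char args[512];
        int ret;

        /* Describe exactly the frames the decoder will produce. */
        snprintf(args, sizeof(args),
                 "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
                 time_base.num, time_base.den, dec_ctx->sample_rate,
                 av_get_sample_fmt_name(dec_ctx->sample_fmt), dec_ctx->channel_layout);

        ret = avfilter_graph_create_filter(src_out, avfilter_get_by_name("abuffer"),
                                           "in", args, NULL, graph);
        if (ret < 0) goto end;

        /* showvolume outputs video frames, so the sink is a (video) buffersink. */
        ret = avfilter_graph_create_filter(sink_out, avfilter_get_by_name("buffersink"),
                                           "out", NULL, NULL, graph);
        if (ret < 0) goto end;

        /* parse2 returns the graph's unconnected pads in inputs/outputs. */
        ret = avfilter_graph_parse2(graph, "showvolume=w=240:h=20:o=1:f=0.50:r=25",
                                    &inputs, &outputs);
        if (ret < 0) goto end;

        ret = avfilter_link(*src_out, 0, inputs->filter_ctx, inputs->pad_idx);
        if (ret < 0) goto end;
        ret = avfilter_link(outputs->filter_ctx, outputs->pad_idx, *sink_out, 0);
        if (ret < 0) goto end;

        ret = avfilter_graph_config(graph, NULL);

    end:
        avfilter_inout_free(&inputs);
        avfilter_inout_free(&outputs);
        if (ret < 0)
            avfilter_graph_free(&graph);
        else
            *graph_out = graph;
        return ret;
    }

    Frames are then pushed with av_buffersrc_add_frame and pulled with av_buffersink_get_frame; if the decoder's sample format or channel layout differs from what was put in args, the same "Format change" error tends to reappear.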

    C Code: main.c file

    ffmpeg version:

    ubuntu@ubuntu-VirtualBox:~/eclipse-workspace/compexFilterTest/Debug$ ffmpeg -version
    ffmpeg version git-2018-07-11-bd8a5c6 Copyright (c) 2000-2018 the FFmpeg developers
    built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.1) 20160609
    configuration: --enable-gpl --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-nonfree --enable-version3 --enable-pic --enable-libfreetype --enable-libfdk-aac --enable-openssl --enable-shared --enable-libass --enable-libvpx --enable-libx265 --enable-libtwolame
    libavutil      56. 18.102 / 56. 18.102
    libavcodec     58. 21.104 / 58. 21.104
    libavformat    58. 17.101 / 58. 17.101
    libavdevice    58.  4.101 / 58.  4.101
    libavfilter     7. 25.100 /  7. 25.100
    libswscale      5.  2.100 /  5.  2.100
    libswresample   3.  2.100 /  3.  2.100
    libpostproc    55.  2.100 / 55.  2.100
    

    ffmpeg CLI command:

    ffmpeg -loglevel 40 -i test.ts  -filter_complex "[0:a]afifo,showvolume=w=240:h=20:o=1:f=0.50:r=25[vol0];nullsrc=size=320x240[base1];[0:v]fifo,setpts=PTS-STARTPTS,scale=320x240[w0h0];[w0h0][vol0]overlay=eval=0:x=280[w0h0a];[base1][w0h0a]overlay=shortest=1:eval=0" -vcodec h264 -profile:v main -level 4 -metadata service_provider=testScreenProvider  -metadata service_name=testScreen  -f mpegts -y test123.ts
    

    Thank you in advance.