Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • ffmpeg fails when trying to use drawtext on Android

    June 1, 2018, by Rafael Lima

    I'm trying to build an Android app using ffmpeg. Almost everything works, but I've hit a problem when trying to use the drawtext filter.

    The original command works fine:

    ffmpeg -y -i /storage/emulated/0/DCIM/Camera/asd.mp4 -c:v libx264 -preset veryfast -crf 24 -tune film -c:a copy -r 30 -force_key_frames expr:gte(t,n_forced*15) temp.mp4
    

    But if I add the drawtext filter, it fails:

    ffmpeg -y -i /storage/emulated/0/DCIM/Camera/asd.mp4 -c:v libx264 -preset veryfast -crf 24 -tune film -vf "drawtext=text='Made with Instagram Story Splitter':fontfile=/data/data/com.tomatedigital.instagram.storysplitter/fonts/arial.ttf" -c:a copy -r 30 -force_key_frames expr:gte(t,n_forced*15) temp.mp4
    

    error:

    Error selecting an encoder for stream 0:1
    

    I'm sure the font file is okay, and the same command works with ffmpeg on both Windows and Ubuntu.
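
    When ffmpeg is driven from app code rather than a shell, a common cause of this kind of failure is naive whitespace tokenization of the command string: the quoted drawtext argument contains spaces, so it gets split into several arguments and the option pairing after it shifts, which can surface as a seemingly unrelated encoder-selection error. A sketch of the usual fix, passing the command as an argument list so no quoting or splitting happens (shown in Python for brevity; the paths and filter string are taken from the question, and the subprocess invocation is an assumption, not the asker's Android wrapper):

    ```python
    import subprocess

    def build_cmd():
        # Each argument is its own list element, so the filter string that
        # contains spaces stays intact as a single argument.
        filter_arg = ("drawtext=text='Made with Instagram Story Splitter':"
                      "fontfile=/data/data/com.tomatedigital.instagram.storysplitter/fonts/arial.ttf")
        return [
            "ffmpeg", "-y",
            "-i", "/storage/emulated/0/DCIM/Camera/asd.mp4",
            "-c:v", "libx264", "-preset", "veryfast", "-crf", "24", "-tune", "film",
            "-vf", filter_arg,
            "-c:a", "copy", "-r", "30",
            "-force_key_frames", "expr:gte(t,n_forced*15)",
            "temp.mp4",
        ]

    # subprocess.call(build_cmd())  # requires ffmpeg on PATH; not run here
    ```

    The same idea applies to Java's ProcessBuilder or any ffmpeg JNI wrapper: build an argument array, never one flat string.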

  • Imageio python converts GIF to MP4 incorrectly

    June 1, 2018, by Apurva Koti

    I am writing a function to speed up or slow down a given GIF (or .gifv) file and save the resulting animation as an .mp4 file.

    I'm using the Python imageio package (and its ffmpeg plugin) to do this: download the raw binary data of the GIF, write each frame to an MP4, and set the FPS of the MP4 as needed.

    My code is:

    import imageio
    import urllib2  # Python 2; on Python 3 use urllib.request instead

    def changespeed(vid, mult):
        vid = vid.replace('.gifv', '.gif')
        data = urllib2.urlopen(vid).read()
        reader = imageio.get_reader(data, 'gif')
        # GIF metadata gives the per-frame duration in milliseconds;
        # fall back to 10 ms if it is reported as 0
        dur = float(reader.get_meta_data()['duration'])
        oldfps = 1000.0 / (10 if dur == 0 else dur)

        writer = imageio.get_writer('output.mp4', fps=(oldfps * mult), quality=8.0)
        for frame in reader:
            writer.append_data(frame)
        writer.close()
    

    The problem is that the output colors are sometimes heavily corrupted, with no apparent pattern: it happens with some GIFs and not with others. Setting a high quality parameter in the writer doesn't help.

    Here is an example of a problematic GIF -

    Input: https://i.imgur.com/xFezNYK.gif

    Output: https://giant.gfycat.com/MelodicShimmeringBarb.mp4

    I can see this issue locally in output.mp4, so the issue isn't with uploading to Gfycat.

    Is there anything I can do to avoid this behavior? Thanks.

  • Converting alpha channel of PNG sequence to Y channel of H265

    May 31, 2018, by Krumelur

    I have a video renderer that expects two H265 streams (YUV420), and I need to bake them so that one forms an alpha mask for the other. This is all solved and works well; however, if I follow the instructions here: ffmpeg splitting RGB and Alpha channels using filter, the alpha channel is slightly off. My hypothesis is that this is due to the alpha channel being scaled through the RGB->YUV matrix.

    The input is a sequence of PNG files, the output is two MKV files.

    The question, then, is: how can I tell FFmpeg to “reinterpret” the alpha channel as the Y channel without touching the pixel data? Ideally by producing both MKVs in one command line, as shown in the other question, but at least without rewriting the source files.
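
    One avenue that may avoid the color matrix entirely is ffmpeg's `extractplanes` filter, which pulls the alpha plane out as a grayscale stream; converting gray to yuv420p copies that plane into Y and fills the chroma planes with neutral values, with no RGB->YUV matrix involved. A hedged sketch of the alpha-side command (output names are assumptions, full/limited range handling would still need checking against the renderer, and this produces only one of the two MKVs):

    ```python
    import subprocess

    def alpha_to_y_cmd(pattern="frame%04d.png", out="alpha.mkv"):
        # extractplanes=a yields the alpha plane as a gray video stream;
        # format=yuv420p then stores that plane directly as the Y channel.
        return [
            "ffmpeg", "-i", pattern,
            "-vf", "extractplanes=a,format=yuv420p",
            "-c:v", "libx265", out,
        ]

    # subprocess.call(alpha_to_y_cmd())  # requires ffmpeg; not run here
    ```

    The color MKV would be produced from the same inputs in a second output (or via `split` in one command line, as in the linked question).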

  • How to get H.264 data from an RTP stream using ffmpeg

    May 31, 2018, by Brad Reiter

    I know we can combine videos using ffmpeg, but can I use ffmpeg to extract the H.264 content from a decrypted RTP stream? If so, how?
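
    ffmpeg can read an RTP session directly if the session is described by an SDP file; with stream copy, the H.264 payload is depacketized and written out without re-encoding. A hedged sketch of such a command (the SDP filename, whitelist, and output name are assumptions for a typical unencrypted session):

    ```python
    import subprocess

    def rtp_to_h264_cmd(sdp="stream.sdp", out="out.h264"):
        # The protocol whitelist must cover every protocol the SDP references.
        return [
            "ffmpeg",
            "-protocol_whitelist", "file,udp,rtp",
            "-i", sdp,
            "-c:v", "copy", "-an",  # copy the H.264 elementary stream, drop audio
            out,
        ]

    # subprocess.call(rtp_to_h264_cmd())  # requires ffmpeg and a live session
    ```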

  • Specifying audio/video for a multiple stream/multiple file setup using ffmpeg

    May 31, 2018, by Robert Smith

    Folks, I have the following ffmpeg command:

    ffmpeg
        -i video1a -i video2a -i video3a -i video4a
        -i video1b -i video2b -i video3b -i video4b
        -i video1c
        -filter_complex "
            nullsrc=size=640x480 [base];
            [0:v] setpts=PTS-STARTPTS+   0/TB, scale=320x240 [1a];
            [1:v] setpts=PTS-STARTPTS+ 300/TB, scale=320x240 [2a];
            [2:v] setpts=PTS-STARTPTS+ 400/TB, scale=320x240 [3a];
            [3:v] setpts=PTS-STARTPTS+ 400/TB, scale=320x240 [4a];
            [4:v] setpts=PTS-STARTPTS+2500/TB, scale=320x240 [1b];
            [5:v] setpts=PTS-STARTPTS+ 800/TB, scale=320x240 [2b];
            [6:v] setpts=PTS-STARTPTS+ 700/TB, scale=320x240 [3b];
            [7:v] setpts=PTS-STARTPTS+ 800/TB, scale=320x240 [4b];
            [8:v] setpts=PTS-STARTPTS+3000/TB, scale=320x240 [1c];
            [base][1a] overlay=eof_action=pass [o1];
            [o1][1b] overlay=eof_action=pass [o1];
            [o1][1c] overlay=eof_action=pass:shortest=1 [o1];
            [o1][2a] overlay=eof_action=pass:x=320 [o2];
            [o2][2b] overlay=eof_action=pass:x=320 [o2];
            [o2][3a] overlay=eof_action=pass:y=240 [o3];
            [o3][3b] overlay=eof_action=pass:y=240 [o3];
            [o3][4a] overlay=eof_action=pass:x=320:y=240[o4];
            [o4][4b] overlay=eof_action=pass:x=320:y=240"
        -c:v libx264 output.mp4
    

    I have just found out something about the files I will be processing with the above command: some MP4 files contain both video and audio, some contain only audio, and some contain only video. I can already determine which is which using ffprobe. My question is how to modify the above command to account for what each file contains (video, audio, or both).

    Here is which streams each file has:

    file    streams
    ======= =========
    Area 1:
    video1a    audio
    video1b     both
    video1c    video
    
    Area 2:
    video2a    video
    video2b    audio
    
    Area 3:
    video3a    video
    video3b    audio
    
    Area 4:
    video4a    video
    video4b    both
    

    How should the command above be modified to match these contents? Thank you.
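
    In general, ffmpeg only pulls a stream into the filtergraph when the graph references it, so the usual approach is to reference `[N:v]` only for inputs that actually have a video stream and `[N:a]` only for inputs that have audio, using the ffprobe results. A hedged sketch of generating the video chains from such a table (the chain format follows the question's own command; the helper and its tuple layout are illustrative, not ffmpeg API):

    ```python
    def video_chains(inputs):
        """inputs: list of (input_index, has_video, offset_ms, label) tuples."""
        chains = []
        for idx, has_video, offset, label in inputs:
            if has_video:  # only reference [N:v] when the file has a video stream
                chains.append(
                    "[%d:v]setpts=PTS-STARTPTS+%d/TB,scale=320x240[%s]"
                    % (idx, offset, label)
                )
        return ";".join(chains)

    # e.g. video1a (input 0) is audio-only, video1b (input 4) has both:
    # video_chains([(0, False, 0, "1a"), (4, True, 2500, "1b")])
    ```

    The audio chains can be generated the same way over the inputs that have audio, and the overlay/amerge stages then consume only the labels that were actually produced.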

    Update #1

    I ran a test as follows:

    -i "video1a.flv"
    -i "video1b.flv"
    -i "video1c.flv"
    -i "video2a.flv"
    -i "video3a.flv"
    -i "video4a.flv"
    -i "video4b.flv"
    -i "video4c.flv"
    -i "video4d.flv"
    -i "video4e.flv"
    
    -filter_complex 
    
    nullsrc=size=640x480[base];
    [0:v]setpts=PTS-STARTPTS+120/TB,scale=320x240[1a];
    [1:v]setpts=PTS-STARTPTS+3469115/TB,scale=320x240[1b];
    [2:v]setpts=PTS-STARTPTS+7739299/TB,scale=320x240[1c];
    [5:v]setpts=PTS-STARTPTS+4390466/TB,scale=320x240[4a];
    [6:v]setpts=PTS-STARTPTS+6803937/TB,scale=320x240[4b];
    [7:v]setpts=PTS-STARTPTS+8242005/TB,scale=320x240[4c];
    [8:v]setpts=PTS-STARTPTS+9811577/TB,scale=320x240[4d];
    [9:v]setpts=PTS-STARTPTS+10765190/TB,scale=320x240[4e];
    [base][1a]overlay=eof_action=pass[o1];
    [o1][1b]overlay=eof_action=pass[o1];
    [o1][1c]overlay=eof_action=pass:shortest=1[o1];
    [o1][4a]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4b]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4c]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4d]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4e]overlay=eof_action=pass:x=320:y=240;
    [0:a]asetpts=PTS-STARTPTS+120/TB,aresample=async=1,apad[a1a];
    [1:a]asetpts=PTS-STARTPTS+3469115/TB,aresample=async=1,apad[a1b];
    [2:a]asetpts=PTS-STARTPTS+7739299/TB,aresample=async=1[a1c];
    [3:a]asetpts=PTS-STARTPTS+82550/TB,aresample=async=1,apad[a2a];
    [4:a]asetpts=PTS-STARTPTS+2687265/TB,aresample=async=1,apad[a3a];
    [a1a][a1b][a1c][a2a][a3a]amerge=inputs=5
    
    -c:v libx264 -c:a aac -ac 2 output.mp4
    

    This is the stream data from ffmpeg:

    Input #0
      Stream #0:0: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
      Stream #0:1: Audio: nellymoser, 11025 Hz, mono, flt
    
    Input #1
      Stream #1:0: Audio: nellymoser, 11025 Hz, mono, flt
      Stream #1:1: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
    
    Input #2
      Stream #2:0: Audio: nellymoser, 11025 Hz, mono, flt
      Stream #2:1: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
    
    Input #3
      Stream #3:0: Audio: nellymoser, 11025 Hz, mono, flt
    
    Input #4
      Stream #4:0: Audio: nellymoser, 11025 Hz, mono, flt
    
    Input #5
      Stream #5:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
    
    Input #6
      Stream #6:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
    
    Input #7
      Stream #7:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
    
    Input #8
      Stream #8:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
    
    Input #9
      Stream #9:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
    

    This is the error:

    Stream mapping:
      Stream #0:0 (vp6f) -> setpts
      Stream #0:1 (nellymoser) -> asetpts
    
      Stream #1:0 (nellymoser) -> asetpts
      Stream #1:1 (vp6f) -> setpts
    
      Stream #2:0 (nellymoser) -> asetpts
      Stream #2:1 (vp6f) -> setpts
    
      Stream #3:0 (nellymoser) -> asetpts
    
      Stream #4:0 (nellymoser) -> asetpts
    
      Stream #5:0 (vp6f) -> setpts
    
      Stream #6:0 (vp6f) -> setpts
    
      Stream #7:0 (vp6f) -> setpts
    
      Stream #8:0 (vp6f) -> setpts
    
      Stream #9:0 (vp6f) -> setpts
    
      overlay -> Stream #0:0 (libx264)
      amerge -> Stream #0:1 (aac)
    Press [q] to stop, [?] for help
    
    Enter command: |all 

    Update #2

    Would it be like this?

    -i "video1a.flv"
    -i "video1b.flv"
    -i "video1c.flv"
    -i "video2a.flv"
    -i "video3a.flv"
    -i "video4a.flv"
    -i "video4b.flv"
    -i "video4c.flv"
    -i "video4d.flv"
    -i "video4e.flv"
    
    -filter_complex 
    
    nullsrc=size=640x480[base];
    [0:v]setpts=PTS-STARTPTS+120/TB,scale=320x240[1a];
    [1:v]setpts=PTS-STARTPTS+3469115/TB,scale=320x240[1b];
    [2:v]setpts=PTS-STARTPTS+7739299/TB,scale=320x240[1c];
    [5:v]setpts=PTS-STARTPTS+4390466/TB,scale=320x240[4a];
    [6:v]setpts=PTS-STARTPTS+6803937/TB,scale=320x240[4b];
    [7:v]setpts=PTS-STARTPTS+8242005/TB,scale=320x240[4c];
    [8:v]setpts=PTS-STARTPTS+9811577/TB,scale=320x240[4d];
    [9:v]setpts=PTS-STARTPTS+10765190/TB,scale=320x240[4e];
    [base][1a]overlay=eof_action=pass[o1];
    [o1][1b]overlay=eof_action=pass[o1];
    [o1][1c]overlay=eof_action=pass:shortest=1[o1];
    [o1][4a]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4b]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4c]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4d]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4e]overlay=eof_action=pass:x=320:y=240;
    [0:a]asetpts=PTS-STARTPTS+120/TB,aresample=async=1,pan=1c|c0=c0,apad[a1a];
    [1:a]asetpts=PTS-STARTPTS+3469115/TB,aresample=async=1,pan=1c|c0=c0,apad[a1b];
    [2:a]asetpts=PTS-STARTPTS+7739299/TB,aresample=async=1,pan=1c|c0=c0[a1c];
    [3:a]asetpts=PTS-STARTPTS+82550/TB,aresample=async=1,pan=1c|c0=c0,apad[a2a];
    [4:a]asetpts=PTS-STARTPTS+2687265/TB,aresample=async=1,pan=1c|c0=c0,apad[a3a];
    [a1a][a1b][a1c][a2a][a3a]amerge=inputs=5
    
    -c:v libx264 -c:a aac -ac 2 output.mp4
    

    Update #3

    Now I'm getting this error:

    Stream mapping:
      Stream #0:0 (vp6f) -> setpts
      Stream #0:1 (nellymoser) -> asetpts
      Stream #1:0 (nellymoser) -> asetpts
      Stream #1:1 (vp6f) -> setpts
      Stream #2:0 (nellymoser) -> asetpts
      Stream #2:1 (vp6f) -> setpts
      Stream #3:0 (nellymoser) -> asetpts
      Stream #4:0 (nellymoser) -> asetpts
      Stream #5:0 (vp6f) -> setpts
      Stream #6:0 (vp6f) -> setpts
      Stream #7:0 (vp6f) -> setpts
      Stream #8:0 (vp6f) -> setpts
      Stream #9:0 (vp6f) -> setpts
      overlay -> Stream #0:0 (libx264)
      amerge -> Stream #0:1 (aac)
    Press [q] to stop, [?] for help
    
    Enter command: |all