Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Adjust PTS and DTS before mp4 video creation

    17 March 2018, by Cristiano

    I'm retrieving raw h264 compressed frames from a USB camera to create an mp4 video. This is my simple code:

    for(int i = 0; i<120;i++)
    {
        AVPacket pkt;
        av_init_packet(&pkt);
        ret = av_read_frame(inputFormatCtx, &pkt);
    
        pkt.pts = pkt.dts = i;
        pkt.pts = av_rescale_q(pkt.pts, inputStream->time_base, outputStream->time_base);
        pkt.dts = av_rescale_q(pkt.dts, inputStream->time_base, outputStream->time_base);
    
        ret = av_interleaved_write_frame(outputFormatCtx, &pkt);
        av_packet_unref(&pkt);
    } 
    ret = av_write_trailer(outputFormatCtx);
    

    This works well. Now I would like to store these AVPackets and create the video at a later time, so I changed my code in this way:

    for(int i = 0; i<120;i++){
        AVPacket pkt;
        av_init_packet(&pkt);
        ret = av_read_frame(inputFormatCtx, &pkt);
        packets.push_back(pkt);
    }

    vector<AVPacket>::reverse_iterator it;
    int j = 0;
    for(it = packets.rbegin(); it != packets.rend(); it++){
        AVPacket n = (*it);
        n.pts = n.dts = j;
        j++;
        n.pts = av_rescale_q(n.pts, inputStream->time_base, outputStream->time_base);
        n.dts = av_rescale_q(n.dts, inputStream->time_base, outputStream->time_base);
        ret = av_interleaved_write_frame(outputFormatCtx, &n);
        av_packet_unref(&n);
    }
    
    ret = av_write_trailer(outputFormatCtx);
    

    The resulting video is not as fluid, so I used ffprobe to see more details. These are the first three frames generated with the first block of code.

    [FRAME]
    media_type=video
    stream_index=0
    key_frame=1
    pkt_pts=0
    pkt_pts_time=0.000000
    pkt_dts=0
    pkt_dts_time=0.000000
    best_effort_timestamp=0
    best_effort_timestamp_time=0.000000
    pkt_duration=512
    pkt_duration_time=0.033333
    pkt_pos=48
    pkt_size=12974
    width=1920
    height=1080
    pix_fmt=yuv420p
    sample_aspect_ratio=N/A
    pict_type=I
    coded_picture_number=0
    display_picture_number=0
    interlaced_frame=0
    top_field_first=0
    repeat_pict=0
    [/FRAME]
    [FRAME]
    media_type=video
    stream_index=0
    key_frame=0
    pkt_pts=512
    pkt_pts_time=0.033333
    pkt_dts=512
    pkt_dts_time=0.033333
    best_effort_timestamp=512
    best_effort_timestamp_time=0.033333
    pkt_duration=512
    pkt_duration_time=0.033333
    pkt_pos=13022
    pkt_size=473
    width=1920
    height=1080
    pix_fmt=yuv420p
    sample_aspect_ratio=N/A
    pict_type=P
    coded_picture_number=1
    display_picture_number=0
    interlaced_frame=0
    top_field_first=0
    repeat_pict=0
    [/FRAME]
    [FRAME]
    media_type=video
    stream_index=0
    key_frame=0
    pkt_pts=1024
    pkt_pts_time=0.066667
    pkt_dts=1024
    pkt_dts_time=0.066667
    best_effort_timestamp=1024
    best_effort_timestamp_time=0.066667
    pkt_duration=512
    pkt_duration_time=0.033333
    pkt_pos=13495
    pkt_size=511
    width=1920
    height=1080
    pix_fmt=yuv420p
    sample_aspect_ratio=N/A
    pict_type=P
    coded_picture_number=2
    display_picture_number=0
    interlaced_frame=0
    top_field_first=0
    repeat_pict=0
    [/FRAME]
    

    And these are the same frames created using the second block of code.

    [FRAME]
    media_type=video
    stream_index=0
    key_frame=1
    pkt_pts=14848
    pkt_pts_time=0.966667
    pkt_dts=14848
    pkt_dts_time=0.966667
    best_effort_timestamp=14848
    best_effort_timestamp_time=0.966667
    pkt_duration=512
    pkt_duration_time=0.033333
    pkt_pos=757791
    pkt_size=65625
    width=1920
    height=1080
    pix_fmt=yuv420p
    sample_aspect_ratio=N/A
    pict_type=I
    coded_picture_number=58
    display_picture_number=0
    interlaced_frame=0
    top_field_first=0
    repeat_pict=0
    [/FRAME]
    [FRAME]
    media_type=video
    stream_index=0
    key_frame=0
    pkt_pts=15360
    pkt_pts_time=1.000000
    pkt_dts=15360
    pkt_dts_time=1.000000
    best_effort_timestamp=15360
    best_effort_timestamp_time=1.000000
    pkt_duration=512
    pkt_duration_time=0.033333
    pkt_pos=823416
    pkt_size=29642
    width=1920
    height=1080
    pix_fmt=yuv420p
    sample_aspect_ratio=N/A
    pict_type=P
    coded_picture_number=60
    display_picture_number=0
    interlaced_frame=0
    top_field_first=0
    repeat_pict=0
    [/FRAME]
    [FRAME]
    media_type=video
    stream_index=0
    key_frame=1
    pkt_pts=30208
    pkt_pts_time=1.966667
    pkt_dts=30208
    pkt_dts_time=1.966667
    best_effort_timestamp=30208
    best_effort_timestamp_time=1.966667
    pkt_duration=512
    pkt_duration_time=0.033333
    pkt_pos=1546454
    pkt_size=66021
    width=1920
    height=1080
    pix_fmt=yuv420p
    sample_aspect_ratio=N/A
    pict_type=I
    coded_picture_number=117
    display_picture_number=0
    interlaced_frame=0
    top_field_first=0
    repeat_pict=0
    [/FRAME]
    

    I immediately noticed that pkt_pts_time and the other fields do not start from 0 and do not increase linearly (with respect to the 30 fps I established). Is it possible to obtain the same video when writing the AVPackets at a later time, as I'm doing? Thank you.
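
    For reference, here is a minimal sketch of one way the deferred write could be structured, assuming the same inputFormatCtx/outputFormatCtx and stream setup as above. Each packet is deep-referenced with av_packet_clone() so its data stays valid outside the read loop, and pts/dts are re-stamped and rescaled only at mux time; whether the shallow copies made by packets.push_back(pkt) contribute to the jumps seen above is an open question.

    // Sketch only: error handling reduced to the essentials.
    #include <vector>
    extern "C" {
    #include <libavformat/avformat.h>
    }

    std::vector<AVPacket*> stored;

    // 1) Read the packets and keep an independent reference to each one.
    for (int i = 0; i < 120; i++) {
        AVPacket pkt;
        av_init_packet(&pkt);
        if (av_read_frame(inputFormatCtx, &pkt) < 0)
            break;
        stored.push_back(av_packet_clone(&pkt));
        av_packet_unref(&pkt);
    }

    // 2) Later: re-stamp, rescale and mux in the original read order.
    int64_t j = 0;
    for (AVPacket *&p : stored) {
        p->pts = p->dts = j++;   // clean, monotonic timestamps in the input time base
        av_packet_rescale_ts(p, inputStream->time_base, outputStream->time_base);
        p->stream_index = outputStream->index;
        av_interleaved_write_frame(outputFormatCtx, p);
        av_packet_free(&p);
    }
    av_write_trailer(outputFormatCtx);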

  • What is the ffmpeg path for GoDaddy server hosting? [duplicate]

    17 March 2018, by Jignesh Gohil

    HTML code

        

    PHP code:

    if (isset($_POST['submit'])) {
        $ffmpeg = "C:\\ffmpeg\\bin\\ffmpeg";
        $videoFile = $_FILES['file_data']['tmp_name'];
        $imageFile = "one.png";
        $size = "200*120";

        $getFromSeconds = 10;
        $cmd = "$ffmpeg -i $videoFile -an -ss $getFromSeconds -s $size $imageFile";

        if (!shell_exec($cmd)) {
            echo "Thumbnail Created!";
        } else {
            echo "Error Creating Thumbnail";
        }
    }
    

    I want to create a thumbnail from a video while uploading the video file. It's working on my local computer.

    My local path for ffmpeg is: C:\ffmpeg\bin\ffmpeg

    So how can I do the same on my GoDaddy server hosting with the new ffmpeg path?

    Please help me.

    Thank you.
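
    On Linux hosting the ffmpeg binary, if it is installed at all, will not be at the Windows path used above, so the first step is to find out whether and where it exists. A small sketch, assuming shell/SSH access to the account (the paths mentioned are common defaults, not GoDaddy-specific facts):

    # Print the path of the ffmpeg binary if one is on the PATH,
    # e.g. /usr/bin/ffmpeg or /usr/local/bin/ffmpeg on many hosts.
    which ffmpeg
    ffmpeg -version

    Whatever path this reports can then replace the Windows path stored in $ffmpeg; if nothing is found, ffmpeg is not installed on that hosting plan.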

  • Audio player jwplayer wma files fail with error - Task Queue failed at step 5

    17 March 2018, by Sabeena

    I have a jwplayer that plays mp3 files, but for wma files it gives the error "Task Queue failed at step 5: Playlist could not be loaded: Playlist file did not contain a valid playlist".

    I thought of two reasons: 1. there is no support for wma, but please confirm this for me; 2. somewhere I need to set up the type of file I am using in this player.

    If wma is not supported in jwplayer, how can I play wma and mp3 files on my website?

    Is ffmpeg needed to convert wma to mp3 while uploading?
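
    If conversion at upload time is the route taken, a minimal sketch of the kind of ffmpeg command that could do it (file names are placeholders, and the ffmpeg build must include the libmp3lame encoder):

    # Convert a wma upload to mp3; -vn drops any embedded cover art / video stream,
    # -q:a 2 selects a high VBR quality.
    ffmpeg -i input.wma -vn -codec:a libmp3lame -q:a 2 output.mp3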

  • How to overlay 2 videos at different times over another video in a single ffmpeg command?

    17 March 2018, by Qandeel Abbasi

    Using the ffmpeg command line I want to overlay 2 different videos on top of another (main) video, at different times and for different durations. I have successfully overlaid 1 video over the main video at a specific time and for a specific duration using the following command:

    ffmpeg -i main.mp4 -i first.mp4 \
    -filter_complex "[1:v]setpts=PTS-32/TB[a]; \
                     [0:v][a]overlay=enable=gte(t\,5):eof_action=pass[out]; \
                     [1] scale=480:270 [over]; [0][over] overlay=400:400" \
    -map [out] -map 0:a \
    -c:v libx264 -crf 18 -pix_fmt yuv420p \
    -c:a copy \
    output.mp4
    

    How can I modify the same command to apply the same operations to two secondary videos at the same time?
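
    One way the command might be extended is to chain a second overlay, giving each secondary input its own setpts shift and enable window. A rough sketch along the lines of the command above; second.mp4 and the 5-10 / 15-20 second windows are placeholders, not values from the question:

    ffmpeg -i main.mp4 -i first.mp4 -i second.mp4 \
    -filter_complex "[1:v]setpts=PTS-STARTPTS+5/TB[a]; \
                     [2:v]setpts=PTS-STARTPTS+15/TB[b]; \
                     [0:v][a]overlay=enable='between(t,5,10)':eof_action=pass[tmp]; \
                     [tmp][b]overlay=enable='between(t,15,20)':eof_action=pass[out]" \
    -map "[out]" -map 0:a \
    -c:v libx264 -crf 18 -pix_fmt yuv420p \
    -c:a copy \
    output.mp4

    The scale=480:270 step from the question can be inserted before each setpts (e.g. [1:v]scale=480:270,setpts=...) if the overlays should also be resized.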

  • Looking for a solution to my ffmpeg video overlay code:

    16 March 2018, by Michael Hovan

    import subprocess as sp
    
    cmd='ffmpeg', '-i', 'C:/Users/Michael.hovan/Desktop/test/front.mov', '-i', 'C:/Users/Michael.hovan/Desktop/test/persp.mov', '-i', 'C:/Users/Michael.hovan/Desktop/test/side.mov', '-i', 'C:/Users/Michael.hovan/Desktop/test/top.mov', '-filter_complex' \
     '[1:v]scale=iw/4:-1:flags=lanczos[pip1];' \
     '[2:v]scale=iw/4:-1:flags=lanczos[pip2];' \
     '[3:v]scale=iw/4:-1:flags=lanczos[pip3];' \
     '[0:v][pip1]overlay=main_w-overlay_w-10:main_h-overlay_h-10[bg1];' \
     '[bg1][pip2]overlay=(main_w-overlay_w)/2:main_h-overlay_h-10[bg2];' \
     '[bg2][pip3]overlay=10:main_h-overlay_h-10,format=yuv420p[v];' \
     '[0:a][1:a][2:a][3:a]amerge=inputs=4[a]' \
    '-map', '[v]', '-map', '[a]', '-ac', '1', 'C:/Users/Michael.hovan/Desktop/test/output.avi'
    
    sp.call(cmd)
    

    I'm honestly stumped as to why the code is giving me a "returned non-zero exit status".
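
    One thing worth noting about the snippet is that Python concatenates adjacent string literals, so '-filter_complex' and '-map' appear to be merged into the filter string rather than passed to ffmpeg as separate arguments. A minimal sketch of how the argument list could be assembled instead, assuming the same files and filter graph (whether this is the only cause of the non-zero exit status is an assumption):

    import subprocess as sp

    # Keep the whole filter graph in one string...
    filter_graph = (
        '[1:v]scale=iw/4:-1:flags=lanczos[pip1];'
        '[2:v]scale=iw/4:-1:flags=lanczos[pip2];'
        '[3:v]scale=iw/4:-1:flags=lanczos[pip3];'
        '[0:v][pip1]overlay=main_w-overlay_w-10:main_h-overlay_h-10[bg1];'
        '[bg1][pip2]overlay=(main_w-overlay_w)/2:main_h-overlay_h-10[bg2];'
        '[bg2][pip3]overlay=10:main_h-overlay_h-10,format=yuv420p[v];'
        '[0:a][1:a][2:a][3:a]amerge=inputs=4[a]'
    )

    # ...and every other CLI token as its own list element.
    cmd = [
        'ffmpeg',
        '-i', 'C:/Users/Michael.hovan/Desktop/test/front.mov',
        '-i', 'C:/Users/Michael.hovan/Desktop/test/persp.mov',
        '-i', 'C:/Users/Michael.hovan/Desktop/test/side.mov',
        '-i', 'C:/Users/Michael.hovan/Desktop/test/top.mov',
        '-filter_complex', filter_graph,
        '-map', '[v]', '-map', '[a]',
        '-ac', '1',
        'C:/Users/Michael.hovan/Desktop/test/output.avi',
    ]

    # check_call raises CalledProcessError on a non-zero exit, and ffmpeg's own
    # stderr output stays visible, which makes the underlying error easier to read.
    sp.check_call(cmd)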