Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • How to fix the bug in ffmpeg's official tutorial03 where the sound doesn't play properly?

    31 January 2019, by xiaodai

    I want to learn to make a player with ffmpeg and SDL. The tutorial I used is this one: http://dranger.com/ffmpeg/tutorial03.html. Although I resample the audio from the decoded stream, the sound still plays with loud noise.

    I have run out of ideas for fixing it.

    I used the following:

    • the latest ffmpeg and sdl1
    • Visual Studio 2010
    // tutorial03.c
    // A pedagogical video player that will stream through every video frame as fast as it can
    // and play audio (out of sync).
    //
    // This tutorial was written by Stephen Dranger (dranger@gmail.com).
    //
    // Code based on FFplay, Copyright (c) 2003 Fabrice Bellard,
    // and a tutorial by Martin Bohme (boehme@inb.uni-luebeckREMOVETHIS.de)
    // Tested on Gentoo, CVS version 5/01/07 compiled with GCC 4.1.1
    //
    // Use the Makefile to build all examples.
    //
    // Run using
    // tutorial03 myvideofile.mpg
    //
    // to play the stream on your screen.
    
    extern "C"{
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/common.h>
    #include <libavutil/frame.h>
    #include <libavutil/samplefmt.h>
    #include "libswresample/swresample.h"
    
    #include <SDL.h>
    #include <SDL_thread.h>
    };
    #ifdef __WIN32__
    #undef main /* Prevents SDL from overriding main() */
    #endif
    
    #include <stdio.h>
    
    #define SDL_AUDIO_BUFFER_SIZE 1024
    #define MAX_AUDIO_FRAME_SIZE 192000
    
    struct SwrContext *audio_swrCtx;
    FILE *pFile=fopen("output.pcm", "wb");
    FILE *pFile_stream=fopen("output_stream.pcm","wb");
    int audio_len;
    typedef struct PacketQueue {
        AVPacketList *first_pkt, *last_pkt;
        int nb_packets;
        int size;
        SDL_mutex *mutex;
        SDL_cond *cond;
    } PacketQueue;
    
    PacketQueue audioq;
    
    int quit = 0;
    
    void packet_queue_init(PacketQueue *q) {
        memset(q, 0, sizeof(PacketQueue));
        q->mutex = SDL_CreateMutex();
        q->cond = SDL_CreateCond();
    }
    
    int packet_queue_put(PacketQueue *q, AVPacket *pkt) {
    
        AVPacketList *pkt1;
    
        if(av_dup_packet(pkt) < 0) {
            return -1;
        }
    
        pkt1 = (AVPacketList *)av_malloc(sizeof(AVPacketList));
    
        if(!pkt1) {
            return -1;
        }
    
        pkt1->pkt = *pkt;
        pkt1->next = NULL;
    
    
        SDL_LockMutex(q->mutex);
    
        if(!q->last_pkt) {
            q->first_pkt = pkt1;
        }
    
        else {
            q->last_pkt->next = pkt1;
        }
    
        q->last_pkt = pkt1;
        q->nb_packets++;
        q->size += pkt1->pkt.size;
        SDL_CondSignal(q->cond);
    
        SDL_UnlockMutex(q->mutex);
        return 0;
    }
    
    static int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block) {
        AVPacketList *pkt1;
        int ret;
    
        SDL_LockMutex(q->mutex);
    
        for(;;) {
    
            if(quit) {
                ret = -1;
                break;
            }
    
            pkt1 = q->first_pkt;
    
            if(pkt1) {
                q->first_pkt = pkt1->next;
    
                if(!q->first_pkt) {
                    q->last_pkt = NULL;
                }
    
                q->nb_packets--;
                q->size -= pkt1->pkt.size;
                *pkt = pkt1->pkt;
                av_free(pkt1);
                ret = 1;
                break;
    
            } else if(!block) {
                ret = 0;
                break;
    
            } else {
                SDL_CondWait(q->cond, q->mutex);
            }
        }
    
        SDL_UnlockMutex(q->mutex);
        return ret;
    }
    
     int audio_decode_frame(AVCodecContext *aCodecCtx, uint8_t *audio_buf, int buf_size) {
    
    
         static AVPacket pkt; 
         static uint8_t *audio_pkt_data = NULL;
         static int audio_pkt_size = 0;
         static AVFrame frame;
    
         int len1, data_size = 0;
    
         for(;;) {
             while(audio_pkt_size > 0) {
                 int got_frame = 0;
                 len1 = avcodec_decode_audio4(aCodecCtx, &frame, &got_frame, &pkt);
    
                 if(len1 < 0) {
                     /* if error, skip frame */
                     audio_pkt_size = 0;
                     break;
                 }
                 audio_pkt_data += len1;
                 audio_pkt_size -= len1;
                 data_size = 0;
                 /*
    
                 au_convert_ctx = swr_alloc();
                 au_convert_ctx=swr_alloc_set_opts(au_convert_ctx,out_channel_layout, out_sample_fmt, out_sample_rate,
                 in_channel_layout,pCodecCtx->sample_fmt , pCodecCtx->sample_rate,0, NULL);
                 swr_init(au_convert_ctx);
    
                 swr_convert(au_convert_ctx,&out_buffer, MAX_AUDIO_FRAME_SIZE,(const uint8_t **)pFrame->data , pFrame->nb_samples);
    
    
                 */
                 if( got_frame ) {
                     audio_swrCtx=swr_alloc();
                     audio_swrCtx=swr_alloc_set_opts(audio_swrCtx,  // we're allocating a new context
                         AV_CH_LAYOUT_STEREO,//AV_CH_LAYOUT_STEREO,     // out_ch_layout
                         AV_SAMPLE_FMT_S16,         // out_sample_fmt
                         44100, // out_sample_rate
                         aCodecCtx->channel_layout, // in_ch_layout
                         aCodecCtx->sample_fmt,     // in_sample_fmt
                         aCodecCtx->sample_rate,    // in_sample_rate
                         0,                         // log_offset
                         NULL);                     // log_ctx
                     int ret=swr_init(audio_swrCtx);
                     int out_samples = av_rescale_rnd(swr_get_delay(audio_swrCtx, aCodecCtx->sample_rate) + 1024, 44100, aCodecCtx->sample_rate, AV_ROUND_UP);
                     ret=swr_convert(audio_swrCtx,&audio_buf, MAX_AUDIO_FRAME_SIZE,(const uint8_t **)frame.data ,frame.nb_samples);
                     data_size =
                         av_samples_get_buffer_size
                         (
                         &data_size,
                         av_get_channel_layout_nb_channels(AV_CH_LAYOUT_STEREO),
                         ret,
                         AV_SAMPLE_FMT_S16,
                         1
                         );
                      fwrite(audio_buf, 1, data_size, pFile);
                     //memcpy(audio_buf, frame.data[0], data_size);
                     swr_free(&audio_swrCtx);
                 }
    
                 if(data_size <= 0) {
                     /* No data yet, get more frames */
                     continue;
                 }
    
                 /* We have data, return it and come back for more later */
                 return data_size;
             }
    
             if(pkt.data) {
                 av_free_packet(&pkt);
             }
    
             if(quit) {
                 return -1;
             }
    
             if(packet_queue_get(&audioq, &pkt, 1) < 0) {
                 return -1;
             }
    
             audio_pkt_data = pkt.data;
             audio_pkt_size = pkt.size;
         }
     }
    
    
    
    void audio_callback(void *userdata, Uint8 *stream, int len) {
    
        AVCodecContext *aCodecCtx = (AVCodecContext *)userdata;
        int /*audio_len,*/ audio_size;
    
        static uint8_t audio_buf[(MAX_AUDIO_FRAME_SIZE * 3) / 2];
        static unsigned int audio_buf_size = 0;
        static unsigned int audio_buf_index = 0;
    
        //SDL_memset(stream, 0, len);
        while(len > 0) {
    
            if(audio_buf_index >= audio_buf_size) {
                /* We have already sent all our data; get more */
                audio_size = audio_decode_frame(aCodecCtx, audio_buf, audio_buf_size);
    
                if(audio_size < 0) {
                    /* If error, output silence */
                    audio_buf_size = 1024; // arbitrary?
                    memset(audio_buf, 0, audio_buf_size);
    
                } else {
                    audio_buf_size = audio_size;
                }
    
                audio_buf_index = 0;
            }
    
            audio_len = audio_buf_size - audio_buf_index;
    
            if(audio_len > len) {
                audio_len = len;
            }
    
            memcpy(stream, (uint8_t *)audio_buf , audio_len);
            //SDL_MixAudio(stream,(uint8_t*)audio_buf,audio_len,SDL_MIX_MAXVOLUME);
            fwrite(audio_buf, 1, audio_len, pFile_stream);
            len -= audio_len;
            stream += audio_len;
            audio_buf_index += audio_len;
            audio_len=len;
        }
    }
    
    int main(int argc, char *argv[]) {
        AVFormatContext *pFormatCtx = NULL;
        int             i, videoStream, audioStream;
        AVCodecContext  *pCodecCtx = NULL;
        AVCodec         *pCodec = NULL;
        AVFrame         *pFrame = NULL;
        AVPacket        packet;
        int             frameFinished;
    
        //float           aspect_ratio;
    
        AVCodecContext  *aCodecCtx = NULL;
        AVCodec         *aCodec = NULL;
    
        SDL_Overlay     *bmp = NULL;
        SDL_Surface     *screen = NULL;
        SDL_Rect        rect;
        SDL_Event       event;
        SDL_AudioSpec   wanted_spec, spec;
    
        struct SwsContext   *sws_ctx            = NULL;
        AVDictionary        *videoOptionsDict   = NULL;
        AVDictionary        *audioOptionsDict   = NULL;
    
        if(argc < 2) {
            fprintf(stderr, "Usage: test <file>\n");
                exit(1);
            }
    
            // Register all formats and codecs
        av_register_all();
    
        if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
            fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
            exit(1);
        }
    
        // Open video file
        if(avformat_open_input(&pFormatCtx, argv[1]/*"file.mov"*/, NULL, NULL) != 0) {
            return -1;    // Couldn't open file
        }
    
        // Retrieve stream information
        if(avformat_find_stream_info(pFormatCtx, NULL) < 0) {
            return -1;    // Couldn't find stream information
        }
    
        // Dump information about file onto standard error
        av_dump_format(pFormatCtx, 0, argv[1], 0);
    
        // Find the first video stream
        videoStream = -1;
        audioStream = -1;
    
        for(i = 0; i < pFormatCtx->nb_streams; i++) {
            if(pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO &&
                videoStream < 0) {
                    videoStream = i;
            }
    
            if(pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO &&
                audioStream < 0) {
                    audioStream = i;
            }
        }
    
        if(videoStream == -1) {
            return -1;    // Didn't find a video stream
        }
    
        if(audioStream == -1) {
            return -1;
        }
    
        aCodecCtx = pFormatCtx->streams[audioStream]->codec;
        // Set audio settings from codec info
        wanted_spec.freq = 44100;
        wanted_spec.format = AUDIO_S16SYS;
        wanted_spec.channels = av_get_channel_layout_nb_channels(AV_CH_LAYOUT_STEREO);
        wanted_spec.silence = 0;
        wanted_spec.samples = 1024;
        wanted_spec.callback = audio_callback;
        wanted_spec.userdata = aCodecCtx;
    
        if(SDL_OpenAudio(&wanted_spec, &spec) < 0) {
            fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError());
            return -1;
        }
    
    
        aCodec = avcodec_find_decoder(aCodecCtx->codec_id);
    
        if(!aCodec) {
            fprintf(stderr, "Unsupported codec!\n");
            return -1;
        }
    
        avcodec_open2(aCodecCtx, aCodec, &audioOptionsDict);
    
        // audio_st = pFormatCtx->streams[index]
        packet_queue_init(&audioq);
        SDL_PauseAudio(0);
    
        // Get a pointer to the codec context for the video stream
        pCodecCtx = pFormatCtx->streams[videoStream]->codec;
    
        // Find the decoder for the video stream
        pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    
        if(pCodec == NULL) {
            fprintf(stderr, "Unsupported codec!\n");
            return -1; // Codec not found
        }
    
        // Open codec
        if(avcodec_open2(pCodecCtx, pCodec, &videoOptionsDict) < 0) {
            return -1;    // Could not open codec
        }
    
        // Allocate video frame
        pFrame = av_frame_alloc();
    
        // Make a screen to put our video
    
    #ifndef __DARWIN__
        screen = SDL_SetVideoMode(pCodecCtx->width, pCodecCtx->height, 0, 0);
    #else
        screen = SDL_SetVideoMode(pCodecCtx->width, pCodecCtx->height, 24, 0);
    #endif
    
        if(!screen) {
            fprintf(stderr, "SDL: could not set video mode - exiting\n");
            exit(1);
        }
    
        // Allocate a place to put our YUV image on that screen
        bmp = SDL_CreateYUVOverlay(pCodecCtx->width,
            pCodecCtx->height,
            SDL_YV12_OVERLAY,
            screen);
        sws_ctx =
            sws_getContext
            (
            pCodecCtx->width,
            pCodecCtx->height,
            pCodecCtx->pix_fmt,
            pCodecCtx->width,
            pCodecCtx->height,
            PIX_FMT_YUV420P,
            SWS_BILINEAR,
            NULL,
            NULL,
            NULL
            );
    
    
        // Read frames and save first five frames to disk
        i = 0;
    
        while(av_read_frame(pFormatCtx, &packet) >= 0) {
            // Is this a packet from the video stream?
            if(packet.stream_index == videoStream) {
                // Decode video frame
                avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
                    &packet);
    
                // Did we get a video frame?
                if(frameFinished) {
                    SDL_LockYUVOverlay(bmp);
    
                    AVPicture pict;
                    pict.data[0] = bmp->pixels[0];
                    pict.data[1] = bmp->pixels[2];
                    pict.data[2] = bmp->pixels[1];
    
                    pict.linesize[0] = bmp->pitches[0];
                    pict.linesize[1] = bmp->pitches[2];
                    pict.linesize[2] = bmp->pitches[1];
    
                    // Convert the image into YUV format that SDL uses
                    sws_scale
                        (
                        sws_ctx,
                        (uint8_t const * const *)pFrame->data,
                        pFrame->linesize,
                        0,
                        pCodecCtx->height,
                        pict.data,
                        pict.linesize
                        );
    
                    SDL_UnlockYUVOverlay(bmp);
    
                    rect.x = 0;
                    rect.y = 0;
                    rect.w = pCodecCtx->width;
                    rect.h = pCodecCtx->height;
                    SDL_DisplayYUVOverlay(bmp, &rect);
                    SDL_Delay(40);
                    av_free_packet(&packet);
                }
    
            } else if(packet.stream_index == audioStream) {
                packet_queue_put(&audioq, &packet);
    
            } else {
                av_free_packet(&packet);
            }
    
            // Free the packet that was allocated by av_read_frame
            SDL_PollEvent(&event);
    
            switch(event.type) {
            case SDL_QUIT:
                quit = 1;
                SDL_Quit();
                exit(0);
                break;
    
            default:
                break;
            }
    
        }
    
        // Free the YUV frame
        av_free(pFrame);
        /*swr_free(&audio_swrCtx);*/
        // Close the codec
        avcodec_close(pCodecCtx);
        fclose(pFile);
        fclose(pFile_stream);
        // Close the video file
        avformat_close_input(&pFormatCtx);
    
        return 0;
    }
    

    I expect it to play back normally.

  • Can I dynamic crop two input video then stack them using ffmpeg ?

    31 January 2019, by Wei Deng

    I want to make an effect of switching back and forth between two videos.

    I tried to dynamic crop two input video then stack them using ffmpeg:

    ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex \
    "[0:v]crop=iw:'2+(mod(n,ih))':0:0[a];[1:v]crop=iw:'ih-2-(mod(n,ih))':0:'2+(mod(n,ih))'[b];[a][b]vstack=inputs=2[v]" \
    -map [v] output.mp4
    

    I skip 2 pixels to avoid a zero-height crop.

    But the output video is not what I want. It seems '(mod(n,ih))' is zero all the time.

    I don't know what's wrong with it.
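    A possible explanation, assuming the documented behaviour of the `crop` filter: the `w` and `h` expressions are evaluated only once, when the filter is configured (where `n` is 0), while `x` and `y` are re-evaluated per frame. A workaround is to keep both crop heights constant and animate the `y` offsets instead. The exact expressions below are only a sketch (filenames as in the question), and `vstack` still requires both inputs to have the same width:

```shell
ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex \
"[0:v]crop=iw:ih/2:0:'mod(n,ih/2)'[a];\
[1:v]crop=iw:ih/2:0:'ih/2-mod(n,ih/2)'[b];\
[a][b]vstack=inputs=2[v]" \
-map "[v]" output.mp4
```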

  • React Native + FFProbe + FFMpeg

    31 January 2019, by Adam Soffer

    Is it possible to use FFmpeg or FFprobe inside a React Native app to interact with a video stream? I see an FFprobe package published to npm, but it is only Node.js-compatible: https://www.npmjs.com/package/ffprobe

  • Seeking low latency vlc playback

    31 January 2019, by georgvontrapp

    I'm interested in the proper settings for configuring command line VLC to playback an RTSP stream with the lowest latency possible at the expense of video quality. In other words, I would accept frame drops in order to minimize lag.

    Using ffplay I can achieve this with the following setting:

    ffplay -fflags nobuffer -flags low_delay -framedrop -strict experimental -rtsp_transport tcp rtsp://my_rtsp_stream
    

    Can anyone recommend similar settings for VLC? Everything I try starts out with low latency, but then buffering kicks in and the delay grows over time.
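    For comparison, a low-latency VLC invocation might look like the following. This is a sketch, not a verified recipe: the option names exist in VLC 2.x/3.x (`vlc --longhelp` lists them), but the right values depend on the stream:

```shell
# --network-caching : network jitter buffer in ms (VLC's default is 1000)
# --clock-jitter=0 / --clock-synchro=0 : relax VLC's clock-resync heuristics
# --rtsp-tcp : force RTSP over TCP, like ffplay's -rtsp_transport tcp
cvlc --network-caching=50 --clock-jitter=0 --clock-synchro=0 --rtsp-tcp \
    rtsp://my_rtsp_stream
```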

  • Renaming ffmpeg to something unique when running on linux [duplicate]

    30 January 2019, by glenskie16


    I am currently trying to run multiple instances of ffmpeg on my VPS, and I have a script that checks every 15 seconds to see whether it is still running. The script checks for "ffmpeg". What I need is for it to check for, say, ffmpeg123 instead, so that multiple scripts can each check whether a specific ffmpeg instance has crashed and restart it.

    while [ 1 ]; do
        if pgrep -x "ffmpeg" > /dev/null
        then
            echo "Running"
        else
            : # restart ffmpeg here
        fi
        sleep 15
    done
    

    So what I would like is to have, say, 5 differently named ffmpeg processes running, as ffmpeg1, ffmpeg2, ffmpeg3, etc.

    Thank you in advance! FFmpeg is also started from within this script.
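    Two common approaches, sketched below with hypothetical paths and URLs: give each instance its own uniquely named symlink to the ffmpeg binary (`pgrep -x` matches the process name, which follows the name the binary was executed under), or keep one binary and distinguish instances by their full command line with `pgrep -f`:

```shell
# Option 1: one symlink per instance (paths and URLs are examples only)
ln -s "$(command -v ffmpeg)" "$HOME/bin/ffmpeg1"
"$HOME/bin/ffmpeg1" -i rtmp://example.com/live1 -c copy -f flv rtmp://example.com/out1 &
pgrep -x ffmpeg1 > /dev/null && echo "ffmpeg1 running"

# Option 2: keep a single ffmpeg binary; match the full command line instead
STREAM1="rtmp://example.com/live1"
pgrep -f "ffmpeg .*$STREAM1" > /dev/null && echo "instance for $STREAM1 running"
```

    Option 2 avoids maintaining copies or symlinks of the binary; the trade-off is that the `pgrep -f` pattern must be unique per stream.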