Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Re-stream RTSP using FFmpeg

    25 January 2019, by azharmalik3

    I restream from the camera's RTSP stream with FFmpeg, using the following command:

    ffmpeg -rtsp_transport tcp -stimeout 6000000 -i 'rtsp://ddns.com:52521/axis-media/media.amp?resolution=800x600' -f lavfi -i aevalsrc=0 -vcodec copy -acodec aac -map 0:0 -map 1:0 -shortest -strict experimental -f flv rtmp://localhost:1935/live/dunkettle

    When the camera stream hangs or disconnects for a few seconds, the FFmpeg process stops.

    Is there a way to keep FFmpeg running and have it wait for the stream to come back?

    Does anyone have any idea how that would be possible?
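
    One way to keep publishing without manual intervention, sketched here as a workaround rather than a confirmed fix, is to wrap the command in a supervising shell loop that restarts FFmpeg whenever it exits:

    #!/bin/sh
    # Restart FFmpeg each time it exits, e.g. after the camera drops
    # the RTSP connection; the URL and flags are the ones from the
    # question above.
    while true; do
        ffmpeg -rtsp_transport tcp -stimeout 6000000 \
            -i 'rtsp://ddns.com:52521/axis-media/media.amp?resolution=800x600' \
            -f lavfi -i aevalsrc=0 \
            -vcodec copy -acodec aac -map 0:0 -map 1:0 \
            -shortest -strict experimental \
            -f flv rtmp://localhost:1935/live/dunkettle
        echo "ffmpeg exited, retrying in 5 seconds..." >&2
        sleep 5
    done

    The RTMP output still drops while the camera is away, but FFmpeg reconnects on its own once the RTSP stream is reachable again.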

  • How to encode .cap Closed Caption into MPEG video

    24 January 2019, by Sanjeev Pandey

    I have a .cap closed-captions file and a .mpg video that I want to add it to. I have seen hundreds of examples of how this is done with ffmpeg for .srt subtitles and just about any video format, but I could not find a solution for .cap.

    The end goal is converting this video with closed captions into an HLS stream. There are two ways it could be done, I think:

    1. Encode the captions into the video first, then convert to HLS, or

    2. Convert the video to HLS first, then add closed captions to the .ts segments.

    I could not find a way to include the .cap file, though. ffmpeg throws the following error:

    mycaptionsfile.cap: Invalid data found when processing input

    This is the command I am currently using for the video-to-HLS conversion:

    ffmpeg -hide_banner -y -i myvideo.mpg -vf scale=w=1280:h=720:force_original_aspect_ratio=decrease -c:a aac -ar 48000 -c:v h264 -profile:v main -crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -hls_time 4 -hls_playlist_type vod -b:v 2800k -maxrate 2996k -bufsize 4200k -b:a 128k -hls_segment_filename 720p_%03d.ts 720p.m3u8
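
    The error suggests ffmpeg has no demuxer for this particular .cap flavour. Assuming the captions can first be converted to an .srt file with an external captioning tool (an assumption; subs.srt below is hypothetical), option 1 could become a single command that burns them into the picture during the HLS encode:

    # subs.srt is assumed to be mycaptionsfile.cap converted by an
    # external tool; the subtitles filter renders it into the video
    # before scaling and segmenting.
    ffmpeg -hide_banner -y -i myvideo.mpg \
        -vf "subtitles=subs.srt,scale=w=1280:h=720:force_original_aspect_ratio=decrease" \
        -c:a aac -ar 48000 -c:v h264 -profile:v main -crf 20 -sc_threshold 0 \
        -g 48 -keyint_min 48 -hls_time 4 -hls_playlist_type vod \
        -b:v 2800k -maxrate 2996k -bufsize 4200k -b:a 128k \
        -hls_segment_filename 720p_%03d.ts 720p.m3u8

    Note this hard-codes the text into the frames; captions a viewer can toggle on and off would need a different route.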

  • FFMPEG filter_complex with paletteuse and subtitles with force_style

    24 January 2019, by Adam Silva

    So I'm creating GIFs from a video input, and to improve the quality I'm also using paletteuse, among other options like scaling, with this command:

    /usr/bin/ffmpeg -ss 10 -t 3 -i /tmp/download.mp4 -i logo.png -filter_complex '[0:v] fps=12,scale=480:-1,overlay=x=(main_w-overlay_w)-5:y=(main_h-overlay_h)-5, split [a][b];[a] palettegen [p];[b][p] paletteuse' /var/www/html/youtube.gif
    

    Now I want to add subtitles to this gif with some custom styles like this:

    subtitles=subs.srt:force_style='FontName=Impact,Shadow=0.5'
    

    This is what I tried when combining the two:

    /usr/bin/ffmpeg -ss 10 -t 3 -i /tmp/download.mp4 -i logo.png -filter_complex '[0:v] fps=12,scale=480:-1,overlay=x=(main_w-overlay_w)-5:y=(main_h-overlay_h)-5,subtitles=subs.srt:force_style="'FontName=Impact,Shadow=0.5'" split [a][b];[a] palettegen [p];[b][p] paletteuse' /var/www/html/create-gifs/youtube.gif
    

    However, it's not recognizing the Shadow style. If I run the commands separately they work, but the quality of the gif goes down when adding the subtitles. How can I make this work?
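
    Two plausible culprits in the combined command, offered as a guess rather than a verified fix: the shell quoting (the inner '...' segments terminate the single-quoted filtergraph, so the quotes around the style string never reach ffmpeg) and the missing comma between the subtitles filter and split. Wrapping the whole filtergraph in double quotes and the force_style value in single quotes would look like:

    /usr/bin/ffmpeg -ss 10 -t 3 -i /tmp/download.mp4 -i logo.png -filter_complex \
        "[0:v] fps=12,scale=480:-1,overlay=x=(main_w-overlay_w)-5:y=(main_h-overlay_h)-5,subtitles=subs.srt:force_style='FontName=Impact,Shadow=0.5', split [a][b];[a] palettegen [p];[b][p] paletteuse" \
        /var/www/html/create-gifs/youtube.gif

    The single quotes inside the graph are filtergraph-level quoting, which keeps the comma in the style string from being read as a filter separator.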

  • FFMPEG audio filter/settings to crossfade but with only the first file becoming quiet

    24 January 2019, by howcountrywidefield

    I have two audio files that I want to concatenate, crossfading between them with a 3-second overlap. But I do not want a traditional crossfade, where the first file gets quiet while the second simultaneously gets louder (which I know could be achieved with the acrossfade filter). Instead, I want only the first file to get quieter, while the second file starts at 100% loudness right away.

    I have an idea for a step-by-step procedure where I would

    1. strip away the last 3 seconds of the first file into a temporary file

    2. strip away the first 3 seconds of the second file into a temporary file

    3. apply an afade=out filter to the first temporary file

    4. merge the two temporary files together

    5. concatenate the first file, the merged file, and the second file

    all using separate ffmpeg commands

    I guess this would do the trick, but it also seems very error-prone, and I was wondering whether there is a way to achieve all of this with just one command.
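
    For what it's worth, acrossfade's per-side curve options may collapse this into one command. A sketch, with placeholder file names, assuming a build whose afade/acrossfade curves include nofade:

    # c1=tri fades the first input out over the 3-second overlap;
    # c2=nofade keeps the second input at full volume from its
    # first sample.
    ffmpeg -i first.wav -i second.wav -filter_complex \
        "[0:a][1:a]acrossfade=d=3:c1=tri:c2=nofade" merged.wav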

  • ffmpeg convert to webm error "too many invisible frames"

    24 January 2019, by Вадим Коломиец

    I need to convert any format (for example, mp4, avi, etc.) to .webm with my own ioContext. I built FFmpeg with vpx, ogg, vorbis and opus, and created a simple project. But whenever I write a frame I get the error "Too many invisible frames. Failed to send packet to filter vp9_superframe for stream 0".

    I've already tried converting from webm to webm, copying the codec parameters with avcodec_parameters_copy, and that works.

    #include <QCoreApplication>
    #include <QByteArray>
    #include <QFile>
    #include <QDebug>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/timestamp.h>
    #include <libavfilter/buffersink.h>
    #include <libavfilter/buffersrc.h>
    #include <libavutil/opt.h>
    #include <libavutil/pixdesc.h>
    }
    
    using namespace std;
    
    struct BufferData {
        QByteArray data;
        uint fullsize;
    
        BufferData() {
            fullsize =0;
        }
    };
    
    
    static int write_packet_to_buffer(void *opaque, uint8_t *buf, int buf_size) {
        BufferData *bufferData = static_cast<BufferData*>(opaque);
        bufferData->fullsize += buf_size;
        bufferData->data.append((const char*)buf, buf_size);
        return buf_size;
    }
    
    
    static bool writeBuffer(const QString &filename, BufferData *bufferData) {
        QFile file(filename);
        if( !file.open(QIODevice::WriteOnly) )  return false;
        file.write(bufferData->data);
        qDebug()<<"FILE SIZE = " << file.size();
        file.close();
        return true;
    }
    
    int main(int argc, char *argv[])
    {
        QCoreApplication a(argc, argv);
        AVOutputFormat *ofmt = NULL;
        AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
        AVPacket pkt;
        int ret;
        int stream_index = 0;
        int *stream_mapping = NULL;
        int stream_mapping_size = 0;
    
        const char *in_filename  = "../assets/sample.mp4";
        const char *out_filename = "../assets/sample_new.webm";
    
    
        //------------------------  Input file  ----------------------------
        if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
            fprintf(stderr, "Could not open input file '%s'", in_filename);
            return 1;
        }
    
        if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
            fprintf(stderr, "Failed to retrieve input stream information");
            return 1;
        }
        av_dump_format(ifmt_ctx, 0, in_filename, 0);
        //-----------------------------------------------------------------
    
    
        //---------------------- BUFFER -------------------------
    AVIOContext *avio_ctx = NULL;
    uint8_t *avio_ctx_buffer = NULL;
    size_t avio_ctx_buffer_size = 4096*1024;
    const size_t bd_buf_size = 1024*1024;
    /* fill opaque structure used by the AVIOContext write callback */
    avio_ctx_buffer = (uint8_t*)av_malloc(avio_ctx_buffer_size);
    if (!avio_ctx_buffer) return AVERROR(ENOMEM);

    BufferData bufferData;
    avio_ctx = avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size,
                                  1, &bufferData, NULL,
                                  &write_packet_to_buffer, NULL);

    if (!avio_ctx) return AVERROR(ENOMEM);
       //------------------------------------------------------
    
    
    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
    if (!ofmt_ctx) {
        fprintf(stderr, "Could not create output context\n");
        ret = AVERROR_UNKNOWN;
        return 1;
    }
    
        //------------------------  Stream list  ----------------------------
        stream_mapping_size = ifmt_ctx->nb_streams;
    stream_mapping = (int*)av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping));
        if (!stream_mapping) {
            ret = AVERROR(ENOMEM);
            return 1;
        }
        //-------------------------------------------------------------------
    
    
    
        //------------------------  Output file  ----------------------------
    AVCodec *encoder = nullptr;  // left null: avcodec_alloc_context3 then yields a generic context
    AVCodecContext *input_ctx;
    AVCodecContext *enc_ctx;
        for (int i=0; i < ifmt_ctx->nb_streams; i++) {
            AVStream *out_stream;
            AVStream *in_stream = ifmt_ctx->streams[i];
            AVCodecParameters *in_codecpar = in_stream->codecpar;
    
            if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
                in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
                in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
                stream_mapping[i] = -1;
                continue;
            }
    
            enc_ctx = avcodec_alloc_context3(encoder);
            if (!enc_ctx) {
                av_log(NULL, AV_LOG_FATAL, "Failed to allocate the encoder context\n");
                return AVERROR(ENOMEM);
            }
    
            stream_mapping[i] = stream_index++;
    
            out_stream = avformat_new_stream(ofmt_ctx, NULL);
            if (!out_stream) {
                fprintf(stderr, "Failed allocating output stream\n");
                ret = AVERROR_UNKNOWN;
                return 1;
            }
    
        /* copy the stream parameters field by field */
        out_stream->codecpar->width = in_codecpar->width;
        out_stream->codecpar->height = in_codecpar->height;
        out_stream->codecpar->level = in_codecpar->level;
        out_stream->codecpar->format = in_codecpar->format;
        out_stream->codecpar->profile = in_codecpar->profile;
        out_stream->codecpar->bit_rate = in_codecpar->bit_rate;
        out_stream->codecpar->channels = in_codecpar->channels;
        out_stream->codecpar->codec_tag = 0;
        out_stream->codecpar->color_trc = in_codecpar->color_trc;
        out_stream->codecpar->codec_type = in_codecpar->codec_type;
        out_stream->codecpar->frame_size = in_codecpar->frame_size;
        out_stream->codecpar->block_align = in_codecpar->block_align;
        out_stream->codecpar->color_range = in_codecpar->color_range;
        out_stream->codecpar->color_space = in_codecpar->color_space;
        out_stream->codecpar->field_order = in_codecpar->field_order;
        out_stream->codecpar->sample_rate = in_codecpar->sample_rate;
        out_stream->codecpar->video_delay = in_codecpar->video_delay;
        out_stream->codecpar->seek_preroll = in_codecpar->seek_preroll;
        out_stream->codecpar->channel_layout = in_codecpar->channel_layout;
        out_stream->codecpar->chroma_location = in_codecpar->chroma_location;
        out_stream->codecpar->color_primaries = in_codecpar->color_primaries;
        out_stream->codecpar->initial_padding = in_codecpar->initial_padding;
        out_stream->codecpar->trailing_padding = in_codecpar->trailing_padding;
        out_stream->codecpar->bits_per_raw_sample = in_codecpar->bits_per_raw_sample;
        out_stream->codecpar->sample_aspect_ratio = in_codecpar->sample_aspect_ratio;
        out_stream->codecpar->bits_per_coded_sample = in_codecpar->bits_per_coded_sample;
    
    
        if (in_codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
            out_stream->codecpar->codec_id = ofmt_ctx->oformat->video_codec;
        }
        else if (in_codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            out_stream->codecpar->codec_id = ofmt_ctx->oformat->audio_codec;
        }
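        /* Possible cause of the reported error (an assumption, not part of
         * the original post): the stream's codec_id is switched to the WebM
         * codecs here, yet the packets written out in the loop below are
         * still the untranscoded input bitstream (e.g. H.264). The webm
         * muxer auto-inserts a vp9_superframe bitstream filter for VP9
         * streams, and feeding it foreign packets would plausibly trigger
         * "Too many invisible frames"; a real format conversion needs a
         * decode/re-encode step. */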
        }
        av_dump_format(ofmt_ctx, 0, out_filename, 1);
        ofmt_ctx->pb = avio_ctx;
    
        ret = avformat_write_header(ofmt_ctx, NULL);
        if (ret < 0) {
            fprintf(stderr, "Error occurred when opening output file\n");
            return 1;
        }
        //------------------------------------------------------------------------------
    
    
        while (1) {
            AVStream *in_stream, *out_stream;
    
            ret = av_read_frame(ifmt_ctx, &pkt);
            if (ret < 0)
                break;
    
            in_stream  = ifmt_ctx->streams[pkt.stream_index];
            if (pkt.stream_index >= stream_mapping_size ||
                stream_mapping[pkt.stream_index] < 0) {
                av_packet_unref(&pkt);
                continue;
            }
    
            pkt.stream_index = stream_mapping[pkt.stream_index];
            out_stream = ofmt_ctx->streams[pkt.stream_index];
    
            /* copy packet */
            pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AVRounding(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
            pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AVRounding(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
            pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
            pkt.pos = -1;
    
            ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
            if (ret < 0) {
                fprintf(stderr, "Error muxing packet\n");
                break;
            }
            av_packet_unref(&pkt);
        }
        av_write_trailer(ofmt_ctx);
        avformat_close_input(&ifmt_ctx);
    
    /* close output: flush the in-memory buffer to disk */
    writeBuffer(out_filename, &bufferData);
        avformat_free_context(ofmt_ctx);
        av_freep(&stream_mapping);
        if (ret < 0 && ret != AVERROR_EOF) {
            fprintf(stderr, "Error occurred: %d\n",ret);
            return 1;
        }
        return a.exec();
    }