Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • symbol lookup error in opencv

    28 mars 2017, par abolfazl taghribi

    I'm facing an error when trying to run Laptev's STIP code (space-time interest points). I downloaded the code from here, and at first I hit an error related to linking issues, which I solved according to Stack Overflow, but after that error was resolved I ran into a new one:

    ./bin/stipdet: symbol lookup error: ./bin/stipdet: undefined symbol: cvCreateFileCapture
    

    I installed OpenCV 3.2 on Ubuntu 14.04 with FFmpeg. Should I install OpenCV 2 to use this code? Does anyone else have the same problem?

  • using FFmpeg, how to decode H264 packets

    28 mars 2017, par Jun

    I'm new to FFmpeg and struggling to decode H264 packets, which I obtain as an array of uint8_t.

    After a lot of investigation, I think it should be possible to just put the array into an AVPacket, like below:

    AVPacket *avpkt = (AVPacket *)malloc(sizeof(AVPacket) * 1);
    av_init_packet(avpkt);  
    avpkt->data = ct;   // ct is the array
    avpkt->size =....   // the AVPacket field is "size"; there is no "length" member
    

    and decode it with avcodec_decode_video2(). Part of the code looks like this:

    ...
    codec = avcodec_find_decoder(CODEC_ID_H264);
    gVideoCodecCtx = avcodec_alloc_context();
    gFrame = avcodec_alloc_frame();
    avcodec_decode_video2(gVideoCodecCtx, gFrame, &frameFinished, packet);
    ...
    

    I think I have set all the required properties, but this function only returns -1 :(

    I just found that the -1 comes from

    ret = avctx->codec->decode(avctx, picture, got_picture_ptr, avpkt);

    inside avcodec_decode_video2().

    Actually, what I'm wondering is if I can decode H264 packets (without RTP header) by avcodec_decode_video2().

    Thanks for the help in advance.
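
    For reference, a minimal sketch of the decode path with the old API (this is not the code above: the helper name is made up, it assumes a reasonably recent FFmpeg, and it assumes the input is an Annex B byte stream). The step the fragment above does not show is avcodec_open2():

    #include <libavcodec/avcodec.h>

    // Hypothetical helper: decode one Annex B H264 buffer with the old decode API.
    static int decode_h264_buffer(uint8_t *buf, int buf_size) {
        avcodec_register_all();

        AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        if (!codec)
            return -1;

        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        if (!ctx || avcodec_open2(ctx, codec, NULL) < 0)   // without this open, decoding fails
            return -1;

        AVFrame *frame = av_frame_alloc();
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = buf;       // Annex B byte stream: 00 00 00 01 <NAL> ...
        pkt.size = buf_size;  // the field is "size", not "length"

        int got_frame = 0;
        int ret = avcodec_decode_video2(ctx, frame, &got_frame, &pkt);
        // ret < 0 on error; got_frame stays 0 until the decoder has seen SPS/PPS and a full frame

        av_frame_free(&frame);
        avcodec_close(ctx);
        av_free(ctx);
        return ret;
    }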


    /////////// added

    OK, I'm still trying to find a solution. What I'm doing now is the following:

    ** the H264 stream in this RTP stream is packetized as FU-A

    1. receive a RTP packet

    2. check whether the second byte of the RTP header is > 0, which means it's the first packet (and more may follow)

    3. check whether the next RTP packet also has > 0 at its second byte; if so, the previous frame was a complete NAL, and if it is < 0, the packet should be appended to the previous packet

    4. strip all the RTP headers from the packets so that each one contains only FU indicator | FU header | NAL

    5. try to decode it with avcodec_decode_video2()

    but it's still only returning -1..... am I supposed to remove the FU indicator and FU header too?? (see the reassembly sketch below)

    Any suggestion would be greatly appreciated.

    Thanks in advance.
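
    On step 4 specifically: under RFC 6184 the FU indicator and FU header are not part of the NAL unit, so they should not be fed to the decoder as-is. A reassembly sketch (my own helper, with bounds checks omitted) that rebuilds an Annex B byte stream might look like this: the two FU bytes are dropped, and for the first fragment a start code is prepended together with a NAL header rebuilt from the FU indicator's NRI bits and the FU header's type bits.

    #include <stdint.h>
    #include <string.h>

    static const uint8_t start_code[4] = { 0, 0, 0, 1 };

    // payload points just past the RTP header of one FU-A packet;
    // returns the number of bytes appended to out at offset out_pos.
    static int fua_append(const uint8_t *payload, int payload_len,
                          uint8_t *out, int out_pos) {
        uint8_t fu_indicator = payload[0];
        uint8_t fu_header    = payload[1];
        int n = 0;

        if (fu_header & 0x80) {                     // start bit: first fragment of a NAL
            memcpy(out + out_pos, start_code, 4);   // Annex B start code
            n += 4;
            out[out_pos + n++] = (uint8_t)((fu_indicator & 0xE0) | (fu_header & 0x1F));
        }
        // every fragment contributes its payload with the two FU bytes stripped
        memcpy(out + out_pos + n, payload + 2, payload_len - 2);
        n += payload_len - 2;
        return n;
    }

    Note also that the decoder usually needs the SPS and PPS NAL units (often carried out-of-band in the SDP's sprop-parameter-sets) before it will return decoded frames.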

  • Java.lang.UnsatisfiedLinkError : No implementation found for int [duplicate]

    28 mars 2017, par Muthukumar Subramaniam


    I built the YouTube 'watch me' Android application project. I just added some classes to my project and built it with the NDK. I get an error like:

    java.lang.UnsatisfiedLinkError: No implementation found for int com.ephronsystem.mobilizerapp.Ffmpeg.encodeVideoFrame(byte[]) (tried Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeVideoFrame and Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeVideoFrame___3B).

    My code:

    package com.ephronsystem.mobilizerapp;
    
    public class Ffmpeg {
    
         static {
            System.loadLibrary("ffmpeg");
        }
    
        public static native boolean init(int width, int height, int audio_sample_rate, String rtmpUrl);
    
        public static native void shutdown();
    
        // Returns the size of the encoded frame.
        public static native int encodeVideoFrame(byte[] yuv_image);
    
        public static native int encodeAudioFrame(short[] audio_data, int length);
    }
    

    This is ffmpeg-jni.c

    #include <jni.h>
    #include <android/log.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <string.h>
    #include "libavcodec/avcodec.h"
    #include "libavformat/avformat.h"
    #include "libavutil/opt.h"
    
    #ifdef __cplusplus
    extern "C" {
    #endif
    
    JNIEXPORT jboolean JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_init(JNIEnv *env, jobject thiz,
                                                                     jint width, jint height,
                                                                     jint audio_sample_rate,
                                                                     jstring rtmp_url);
    JNIEXPORT void JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_shutdown(JNIEnv *env,
    jobject thiz
    );
    JNIEXPORT jint JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeVideoFrame(JNIEnv
    *env,
    jobject thiz,
            jbyteArray
    yuv_image);
    JNIEXPORT jint JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeAudioFrame(JNIEnv *env,
                                                                                 jobject thiz,
                                                                                 jshortArray audio_data,
                                                                                 jint length);
    
    #ifdef __cplusplus
    }
    #endif
    
    #define LOGI(...) __android_log_print(ANDROID_LOG_INFO, "ffmpeg-jni", __VA_ARGS__)
    #define URL_WRONLY 2
    static AVFormatContext *fmt_context;
    static AVStream *video_stream;
    static AVStream *audio_stream;
    
    static int pts = 0;
    static int last_audio_pts = 0;
    
    // Buffers for UV format conversion
    static unsigned char *u_buf;
    static unsigned char *v_buf;
    
    static int enable_audio = 1;
    static int64_t audio_samples_written = 0;
    static int audio_sample_rate = 0;
    
    // Stupid buffer for audio samples. Not even a proper ring buffer
    #define AUDIO_MAX_BUF_SIZE 16384  // 2x what we get from Java
    static short audio_buf[AUDIO_MAX_BUF_SIZE];
    static int audio_buf_size = 0;
    
    void AudioBuffer_Push(const short *audio, int num_samples) {
        if (audio_buf_size >= AUDIO_MAX_BUF_SIZE - num_samples) {
            LOGI("AUDIO BUFFER OVERFLOW: %i + %i > %i", audio_buf_size, num_samples,
                 AUDIO_MAX_BUF_SIZE);
            return;
        }
        for (int i = 0; i < num_samples; i++) {
            audio_buf[audio_buf_size++] = audio[i];
        }
    }
    
    int AudioBuffer_Size() { return audio_buf_size; }
    
    short *AudioBuffer_Get() { return audio_buf; }
    
    void AudioBuffer_Pop(int num_samples) {
        if (num_samples > audio_buf_size) {
            LOGI("Audio buffer Pop WTF: %i vs %i", num_samples, audio_buf_size);
            return;
        }
        memmove(audio_buf, audio_buf + num_samples, num_samples * sizeof(short));
        audio_buf_size -= num_samples;
    }
    
    void AudioBuffer_Clear() {
        memset(audio_buf, 0, sizeof(audio_buf));
        audio_buf_size = 0;
    }
    
    static void log_callback(void *ptr, int level, const char *fmt, va_list vl) {
        char x[2048];
        vsnprintf(x, 2048, fmt, vl);
        LOGI(x);
    }
    
    JNIEXPORT jboolean JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_init(JNIEnv *env, jobject thiz,
                                                                     jint width, jint height,
                                                                     jint audio_sample_rate_param,
                                                                     jstring rtmp_url) {
        avcodec_register_all();
        av_register_all();
        av_log_set_callback(log_callback);
    
        fmt_context = avformat_alloc_context();
        AVOutputFormat *ofmt = av_guess_format("flv", NULL, NULL);
        if (ofmt) {
            LOGI("av_guess_format returned %s", ofmt->long_name);
        } else {
            LOGI("av_guess_format fail");
            return JNI_FALSE;
        }
    
        fmt_context->oformat = ofmt;
        LOGI("creating video stream");
        video_stream = av_new_stream(fmt_context, 0);
    
        if (enable_audio) {
            LOGI("creating audio stream");
            audio_stream = av_new_stream(fmt_context, 1);
        }
    
        // Open Video Codec.
        // ======================
        AVCodec *video_codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!video_codec) {
            LOGI("Did not find the video codec");
            return JNI_FALSE;  // leak!
        } else {
            LOGI("Video codec found!");
        }
        AVCodecContext *video_codec_ctx = video_stream->codec;
        video_codec_ctx->codec_id = video_codec->id;
        video_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
        video_codec_ctx->level = 31;
    
        video_codec_ctx->width = width;
        video_codec_ctx->height = height;
        video_codec_ctx->pix_fmt = PIX_FMT_YUV420P;
        video_codec_ctx->rc_max_rate = 0;
        video_codec_ctx->rc_buffer_size = 0;
        video_codec_ctx->gop_size = 12;
        video_codec_ctx->max_b_frames = 0;
        video_codec_ctx->slices = 8;
        video_codec_ctx->b_frame_strategy = 1;
        video_codec_ctx->coder_type = 0;
        video_codec_ctx->me_cmp = 1;
        video_codec_ctx->me_range = 16;
        video_codec_ctx->qmin = 10;
        video_codec_ctx->qmax = 51;
        video_codec_ctx->keyint_min = 25;
        video_codec_ctx->refs = 3;
        video_codec_ctx->trellis = 0;
        video_codec_ctx->scenechange_threshold = 40;
        video_codec_ctx->flags |= CODEC_FLAG_LOOP_FILTER;
        video_codec_ctx->me_method = ME_HEX;
        video_codec_ctx->me_subpel_quality = 6;
        video_codec_ctx->i_quant_factor = 0.71;
        video_codec_ctx->qcompress = 0.6;
        video_codec_ctx->max_qdiff = 4;
        video_codec_ctx->time_base.den = 10;
        video_codec_ctx->time_base.num = 1;
        video_codec_ctx->bit_rate = 3200 * 1000;
        video_codec_ctx->bit_rate_tolerance = 0;
        video_codec_ctx->flags2 |= 0x00000100;
    
        fmt_context->bit_rate = 4000 * 1000;
    
        av_opt_set(video_codec_ctx, "partitions", "i8x8,i4x4,p8x8,b8x8", 0);
        av_opt_set_int(video_codec_ctx, "direct-pred", 1, 0);
        av_opt_set_int(video_codec_ctx, "rc-lookahead", 0, 0);
        av_opt_set_int(video_codec_ctx, "fast-pskip", 1, 0);
        av_opt_set_int(video_codec_ctx, "mixed-refs", 1, 0);
        av_opt_set_int(video_codec_ctx, "8x8dct", 0, 0);
        av_opt_set_int(video_codec_ctx, "weightb", 0, 0);
    
        if (fmt_context->oformat->flags & AVFMT_GLOBALHEADER)
            video_codec_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
    
        LOGI("Opening video codec");
        AVDictionary *vopts = NULL;
        av_dict_set(&vopts, "profile", "main", 0);
        //av_dict_set(&vopts, "vprofile", "main", 0);
        av_dict_set(&vopts, "rc-lookahead", 0, 0);
        av_dict_set(&vopts, "tune", "film", 0);
        av_dict_set(&vopts, "preset", "ultrafast", 0);
        av_opt_set(video_codec_ctx->priv_data, "tune", "film", 0);
        av_opt_set(video_codec_ctx->priv_data, "preset", "ultrafast", 0);
        av_opt_set(video_codec_ctx->priv_data, "tune", "film", 0);
        int open_res = avcodec_open2(video_codec_ctx, video_codec, &vopts);
        if (open_res < 0) {
            LOGI("Error opening video codec: %i", open_res);
            return JNI_FALSE;   // leak!
        }
    
        // Open Audio Codec.
        // ======================
    
        if (enable_audio) {
            AudioBuffer_Clear();
            audio_sample_rate = audio_sample_rate_param;
            AVCodec *audio_codec = avcodec_find_encoder(AV_CODEC_ID_AAC);
            if (!audio_codec) {
                LOGI("Did not find the audio codec");
                return JNI_FALSE;  // leak!
            } else {
                LOGI("Audio codec found!");
            }
            AVCodecContext *audio_codec_ctx = audio_stream->codec;
            audio_codec_ctx->codec_id = audio_codec->id;
            audio_codec_ctx->codec_type = AVMEDIA_TYPE_AUDIO;
            audio_codec_ctx->bit_rate = 128000;
            audio_codec_ctx->bit_rate_tolerance = 16000;
            audio_codec_ctx->channels = 1;
            audio_codec_ctx->profile = FF_PROFILE_AAC_LOW;
            audio_codec_ctx->sample_fmt = AV_SAMPLE_FMT_FLT;
            audio_codec_ctx->sample_rate = 44100;
    
            LOGI("Opening audio codec");
            AVDictionary *opts = NULL;
            av_dict_set(&opts, "strict", "experimental", 0);
            open_res = avcodec_open2(audio_codec_ctx, audio_codec, &opts);
            LOGI("audio frame size: %i", audio_codec_ctx->frame_size);
    
            if (open_res < 0) {
                LOGI("Error opening audio codec: %i", open_res);
                return JNI_FALSE;   // leak!
            }
        }
    
        const jbyte *url = (*env)->GetStringUTFChars(env, rtmp_url, NULL);
    
        // Point to an output file
        if (!(ofmt->flags & AVFMT_NOFILE)) {
            if (avio_open(&fmt_context->pb, url, URL_WRONLY) < 0) {
                LOGI("ERROR: Could not open file %s", url);
                return JNI_FALSE;  // leak!
            }
        }
        (*env)->ReleaseStringUTFChars(env, rtmp_url, url);
    
        LOGI("Writing output header.");
        // Write file header
        if (avformat_write_header(fmt_context, NULL) != 0) {
            LOGI("ERROR: av_write_header failed");
            return JNI_FALSE;
        }
    
        pts = 0;
        last_audio_pts = 0;
        audio_samples_written = 0;
    
        // Initialize buffers for UV format conversion
        int frame_size = video_codec_ctx->width * video_codec_ctx->height;
        u_buf = (unsigned char *) av_malloc(frame_size / 4);
        v_buf = (unsigned char *) av_malloc(frame_size / 4);
    
        LOGI("ffmpeg encoding init done");
        return JNI_TRUE;
    }
    
    JNIEXPORT void JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_shutdown(JNIEnv *env,
                                                                              jobject thiz) {
        av_write_trailer(fmt_context);
        avio_close(fmt_context->pb);
        avcodec_close(video_stream->codec);
        if (enable_audio) {
            avcodec_close(audio_stream->codec);
        }
        av_free(fmt_context);
        av_free(u_buf);
        av_free(v_buf);
    
        fmt_context = NULL;
        u_buf = NULL;
        v_buf = NULL;
    }
    
    JNIEXPORT jint JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeVideoFrame(JNIEnv *env,
                                                                                      jobject thiz,
                                                                                      jbyteArray yuv_image) {
        int yuv_length = (*env)->GetArrayLength(env, yuv_image);
        unsigned char *yuv_data = (*env)->GetByteArrayElements(env, yuv_image, 0);
    
        AVCodecContext *video_codec_ctx = video_stream->codec;
        //LOGI("Yuv size: %i w: %i h: %i", yuv_length, video_codec_ctx->width, video_codec_ctx->height);
    
        int frame_size = video_codec_ctx->width * video_codec_ctx->height;
    
        const unsigned char *uv = yuv_data + frame_size;
    
        // Convert YUV from NV12 to I420. Y channel is the same so we don't touch it,
        // we just have to deinterleave UV.
        for (int i = 0; i < frame_size / 4; i++) {
            v_buf[i] = uv[i * 2];
            u_buf[i] = uv[i * 2 + 1];
        }
    
        AVFrame source;
        memset(&source, 0, sizeof(AVFrame));
        source.data[0] = yuv_data;
        source.data[1] = u_buf;
        source.data[2] = v_buf;
        source.linesize[0] = video_codec_ctx->width;
        source.linesize[1] = video_codec_ctx->width / 2;
        source.linesize[2] = video_codec_ctx->width / 2;
    
        // only for bitrate regulation. irrelevant for sync.
        source.pts = pts;
        pts++;
    
        int out_length = frame_size + (frame_size / 2);
        unsigned char *out = (unsigned char *) av_malloc(out_length);
        int compressed_length = avcodec_encode_video(video_codec_ctx, out, out_length, &source);
    
        (*env)->ReleaseByteArrayElements(env, yuv_image, yuv_data, 0);
    
        // Write to file too
        if (compressed_length > 0) {
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.pts = last_audio_pts;
            if (video_codec_ctx->coded_frame && video_codec_ctx->coded_frame->key_frame) {
                pkt.flags |= 0x0001;
            }
            pkt.stream_index = video_stream->index;
            pkt.data = out;
            pkt.size = compressed_length;
            if (av_interleaved_write_frame(fmt_context, &pkt) != 0) {
                LOGI("Error writing video frame");
            }
        } else {
            LOGI("??? compressed_length <= 0");
        }
    
        last_audio_pts++;
    
        av_free(out);
        return compressed_length;
    }
    
    JNIEXPORT jint JNICALL Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeAudioFrame(JNIEnv *env,
                                                                                      jobject thiz,
                                                                                      jshortArray audio_data,
                                                                                      jint length) {
        if (!enable_audio) {
            return 0;
        }
    
        short *audio = (*env)->GetShortArrayElements(env, audio_data, 0);
        //LOGI("java audio buffer size: %i", length);
    
        AVCodecContext *audio_codec_ctx = audio_stream->codec;
    
        unsigned char *out = av_malloc(128000);
    
        AudioBuffer_Push(audio, length);
    
        int total_compressed = 0;
        while (AudioBuffer_Size() >= audio_codec_ctx->frame_size) {
            AVPacket pkt;
            av_init_packet(&pkt);
    
            int compressed_length = avcodec_encode_audio(audio_codec_ctx, out, 128000,
                                                         AudioBuffer_Get());
    
            total_compressed += compressed_length;
            audio_samples_written += audio_codec_ctx->frame_size;
    
            int new_pts = (audio_samples_written * 1000) / audio_sample_rate;
            if (compressed_length > 0) {
                pkt.size = compressed_length;
                pkt.pts = new_pts;
                last_audio_pts = new_pts;
                //LOGI("audio_samples_written: %i  comp_length: %i   pts: %i", (int)audio_samples_written, (int)compressed_length, (int)new_pts);
                pkt.flags |= 0x0001;
                pkt.stream_index = audio_stream->index;
                pkt.data = out;
                if (av_interleaved_write_frame(fmt_context, &pkt) != 0) {
                    LOGI("Error writing audio frame");
                }
            }
            AudioBuffer_Pop(audio_codec_ctx->frame_size);
        }
    
        (*env)->ReleaseShortArrayElements(env, audio_data, audio, 0);
    
        av_free(out);
        return total_compressed;
    }
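
    One thing worth checking here (an assumption on my side, not something stated in the question): the error says no implementation was found even though ffmpeg-jni.c defines Java_com_ephronsystem_mobilizerapp_Ffmpeg_encodeVideoFrame, so it may be that System.loadLibrary("ffmpeg") is loading a different libffmpeg.so (for example a prebuilt FFmpeg library) rather than the one built from this file. A minimal JNI_OnLoad hook makes that visible in logcat:

    #include <jni.h>
    #include <android/log.h>

    // If this log line never appears when System.loadLibrary("ffmpeg") runs,
    // the library being loaded is not the one built from ffmpeg-jni.c.
    JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM *vm, void *reserved) {
        __android_log_print(ANDROID_LOG_INFO, "ffmpeg-jni", "JNI_OnLoad: ffmpeg-jni.c loaded");
        return JNI_VERSION_1_6;
    }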
    
  • FFMPEG:avfilter_graph_create_filter method failed when initializing filter

    28 mars 2017, par IT_Layman

    I want to implement the transcoding.c sample hosted on the FFmpeg website, but the avfilter_graph_create_filter function fails with a return code of -22 (line 175). I only made a minor change to the source code to make it runnable in my C++ console application. I also searched online but couldn't find any helpful information. Below is my code:

     extern "C" 
    {
    #include avcodec.h>
    #include avformat.h>
    #include avfiltergraph.h>
    #include buffersink.h>
    #include buffersrc.h>
    #include opt.h>
    #include pixdesc.h>
    }
    static AVFormatContext *ifmt_ctx;
    static AVFormatContext *ofmt_ctx;
    typedef struct FilteringContext {
        AVFilterContext *buffersink_ctx;
        AVFilterContext *buffersrc_ctx;
        AVFilterGraph *filter_graph;
    } FilteringContext;
    static FilteringContext *filter_ctx;
    
    static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
        AVCodecContext *enc_ctx, const char *filter_spec)
    {
        char args[512];
        int ret = 0;
        AVFilter *buffersrc = NULL;
        AVFilter *buffersink = NULL;
        AVFilterContext *buffersrc_ctx = NULL;
        AVFilterContext *buffersink_ctx = NULL;
        AVFilterInOut *outputs = avfilter_inout_alloc();
        AVFilterInOut *inputs = avfilter_inout_alloc();
        AVFilterGraph *filter_graph = avfilter_graph_alloc();
        if (!outputs || !inputs || !filter_graph) {
            ret = AVERROR(ENOMEM);
            goto end;
        }
        if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
            buffersrc = avfilter_get_by_name("buffer");
            buffersink = avfilter_get_by_name("buffersink");
            if (!buffersrc || !buffersink) {
                av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
                ret = AVERROR_UNKNOWN;
                goto end;
            }
            /*sprintf(args, sizeof(args),
                "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
                dec_ctx->time_base.num, dec_ctx->time_base.den,
                dec_ctx->sample_aspect_ratio.num,
                dec_ctx->sample_aspect_ratio.den);*/
            ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                args, NULL, filter_graph);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
                goto end;
            }
            ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                NULL, NULL, filter_graph);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
                goto end;
            }
            ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
                (uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
                AV_OPT_SEARCH_CHILDREN);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
                goto end;
            }
        }
        else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
            buffersrc = avfilter_get_by_name("abuffer");
            buffersink = avfilter_get_by_name("abuffersink");
            if (!buffersrc || !buffersink) {
                av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
                ret = AVERROR_UNKNOWN;
                goto end;
            }
            if (!dec_ctx->channel_layout)
                dec_ctx->channel_layout =
                av_get_default_channel_layout(dec_ctx->channels);
            /*snprintf(args, sizeof(args),
                "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
                dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
                av_get_sample_fmt_name(dec_ctx->sample_fmt),
                dec_ctx->channel_layout);*/
            ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                args, NULL, filter_graph);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
                goto end;
            }
            ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                NULL, NULL, filter_graph);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
                goto end;
            }
            ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
                (uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
                AV_OPT_SEARCH_CHILDREN);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
                goto end;
            }
            ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
                (uint8_t*)&enc_ctx->channel_layout,
                sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
                goto end;
            }
            ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
                (uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
                AV_OPT_SEARCH_CHILDREN);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
                goto end;
            }
        }
        else {
            ret = AVERROR_UNKNOWN;
            goto end;
        }
        /* Endpoints for the filter graph. */
        outputs->name = av_strdup("in");
        outputs->filter_ctx = buffersrc_ctx;
        outputs->pad_idx = 0;
        outputs->next = NULL;
        inputs->name = av_strdup("out");
        inputs->filter_ctx = buffersink_ctx;
        inputs->pad_idx = 0;
        inputs->next = NULL;
        if (!outputs->name || !inputs->name) {
            ret = AVERROR(ENOMEM);
            goto end;
        }
        if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
            &inputs, &outputs, NULL)) < 0)
            goto end;
        if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
            goto end;
        /* Fill FilteringContext */
        fctx->buffersrc_ctx = buffersrc_ctx;
        fctx->buffersink_ctx = buffersink_ctx;
        fctx->filter_graph = filter_graph;
    end:
        avfilter_inout_free(&inputs);
        avfilter_inout_free(&outputs);
        return ret;
    }
    static int init_filters(void)
    {
        const char *filter_spec;
        unsigned int i;
        int ret;
        filter_ctx = (FilteringContext *)av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));
        if (!filter_ctx)
            return AVERROR(ENOMEM);
        for (i = 0; i < ifmt_ctx->nb_streams; i++) {
            filter_ctx[i].buffersrc_ctx = NULL;
            filter_ctx[i].buffersink_ctx = NULL;
            filter_ctx[i].filter_graph = NULL;
            if (!(ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
                || ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO))
                continue;
            if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
                filter_spec = "null"; /* passthrough (dummy) filter for video */
            else
                filter_spec = "anull"; /* passthrough (dummy) filter for audio */
            ret = init_filter(&filter_ctx[i], ifmt_ctx->streams[i]->codec,
                ofmt_ctx->streams[i]->codec, filter_spec);
            if (ret)
                return ret;
        }
        return 0;
    }
    
    int main(int argc, char **argv)
    {
        int ret;
        AVPacket packet = {  NULL, 0 };
        AVFrame *frame = NULL;
        enum AVMediaType type;
        unsigned int stream_index;
        unsigned int i;
        int got_frame;
        int(*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);
    
        av_register_all();
        avfilter_register_all();
    
        if ((ret = init_filters()) < 0)
            goto end;
        /*...*/
        system("Pause");
        return ret ? 1 : 0;
    }
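
    For comparison, in FFmpeg's own transcoding.c the buffer source is created only after args has been filled with snprintf; in the code above those calls are commented out, so args is passed to avfilter_graph_create_filter uninitialized, which would plausibly produce AVERROR(EINVAL) (-22). A sketch of that part of the video branch, using the same variable names as above:

    char args[512];
    snprintf(args, sizeof(args),
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
             dec_ctx->time_base.num, dec_ctx->time_base.den,
             dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);
    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                                       args, NULL, filter_graph);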
    
  • Live HTML5 Video to Node.js to HTML5 Video streaming

    28 mars 2017, par sandorvasas

    I've searched around in similar topics but haven't really found the answer to my question. I'm making a webcam live-streaming site, reading video input from HTML5 and periodically uploading the frames via WebSocket to a Node.js server, which (as far as I understand) should write the incoming frames' data to a video file, so that the file can be streamed with ffmpeg or gstreamer to broadcast the live stream to multiple viewers.

    I'm planning to use livecam, since it can stream from a file as well.

    My uncertainty arises at the point when the frames are received from the broadcaster:

    I have this simple node RTC endpoint:

    const RTCAPI = (apiServer) => {
    
      let primus = new Primus(apiServer, { 
        transformer: 'uws', 
        parser: 'binary',
        pathname: '/messaging',
        plugin: {
          rooms: PrimusRooms,
          responder: PrimusResponder
        }
      });
    
    
      let clients = {};
    
      primus.on('connection', spark => {
        clients[spark.id] = spark;
    
        spark.on('data', data => {
    
            // here -- fs.createWriteStream? 
    
        });
    
      });
    
    }
    

    A side question is: how can I safely write the frames to a file that ffmpeg/gstreamer could stream? Is it safe to append the raw incoming data to the file?

    Since this would be live-stream only, I won't need to keep the recorded files, so I guess the file should only keep the last N frames, dropping the oldest frame when a new one is added. I'm not sure how I can achieve this. I'm not even sure whether I have to handle this manually or whether ffmpeg/gstreamer supports a 'moving window of frames' out of the box.

    Any advice would be greatly appreciated!

    Thanks.