Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • How to hide/disable ffmpeg errors when using OpenCV (C++)?

    25 May 2017, by Kobe Nein

    I use OpenCV to read several videos, but the warning messages bother me.

    My program just reads frames from a video, and calculates the MD5 of every frame.

    string VIDEO::getEndingHash(){
        int idx = 0;
        cv::Mat frame;

        // Step backwards from the last frame until a non-empty frame decodes.
        while (1){
            Cap_.set(CV_CAP_PROP_POS_FRAMES, _FrameCount - idx);
            Cap_ >> frame;

            if (frame.empty())
                idx++;
            else
                break;
        }
        // Rewind so later reads start from the beginning.
        Cap_.set(CV_CAP_PROP_POS_FRAMES, 0);

        return MD5::MatMD5(frame);
    }
    

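    A common way to silence these messages (a hedged sketch: it assumes OpenCV was built with the FFmpeg backend and the application links against the same libavutil) is to lower FFmpeg's log level once, before opening any capture:

    // Minimal sketch: quiet FFmpeg's logger before using cv::VideoCapture.
    extern "C" {
    #include <libavutil/log.h>
    }
    #include <opencv2/opencv.hpp>

    int main() {
        av_log_set_level(AV_LOG_QUIET);     // or AV_LOG_ERROR to keep genuine errors
        cv::VideoCapture cap("video.mp4");  // hypothetical test file
        cv::Mat frame;
        while (cap.read(frame)) {
            // hash/process the frame here
        }
        return 0;
    }

    If the messages persist, OpenCV may have been built against its own private copy of FFmpeg, in which case this call never reaches the logger that prints them.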

  • ffmpeg encoding does not work on iOS

    25 May 2017, by Deric

    I would like to send an encoded stream using ffmpeg, but the transcoding code below does not work. Before re-encoding, the original packets play fine in VLC; the re-encoded packets do not play at all. I do not know what's wrong. Please help me.

    av_register_all();
    avformat_network_init();
    AVOutputFormat *ofmt = NULL;
    //Input AVFormatContext and Output AVFormatContext
    AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    AVPacket pkt;
    //const char *in_filename, *out_filename;
    int ret, i;
    int videoindex=-1;
    int frame_index=0;
    int64_t start_time=0;
    
    //Input
    if ((ret = avformat_open_input(&ifmt_ctx, "rtmp://", 0, 0)) < 0) {
        printf( "Could not open input file.");
    }
    if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
        printf( "Failed to retrieve input stream information");
    }
    
    
    AVCodecContext *context = NULL;
    
    for(i=0; i<ifmt_ctx->nb_streams; i++) {
        if(ifmt_ctx->streams[i]->codecpar->codec_type==AVMEDIA_TYPE_VIDEO){
    
            videoindex=i;
    
            AVCodecParameters *params = ifmt_ctx->streams[i]->codecpar;
            AVCodec *codec = avcodec_find_decoder(params->codec_id);
            if (codec == NULL) { return; }
    
            context = avcodec_alloc_context3(codec);
    
            if (context == NULL) { return; }
    
            ret = avcodec_parameters_to_context(context, params);
            if(ret < 0){
                avcodec_free_context(&context);
            }
    
            context->framerate = av_guess_frame_rate(ifmt_ctx, ifmt_ctx->streams[i], NULL);
    
            ret = avcodec_open2(context, codec, NULL);
            if(ret < 0) {
                NSLog(@"avcodec open2 error");
                avcodec_free_context(&context);
            }
    
            break;
        }
    }
    av_dump_format(ifmt_ctx, 0, "rtmp://", 0);
    
    //Output
    
    avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv", "rtmp://"); //RTMP
    //avformat_alloc_output_context2(&ofmt_ctx, NULL, "mpegts", out_filename);//UDP
    
    if (!ofmt_ctx) {
        printf( "Could not create output context\n");
        ret = AVERROR_UNKNOWN;
    }
    ofmt = ofmt_ctx->oformat;
    for (i = 0; i < ifmt_ctx->nb_streams; i++) {
        //Create output AVStream according to input AVStream
        AVStream *in_stream = ifmt_ctx->streams[i];
        AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
        if (!out_stream) {
            printf( "Failed allocating output stream\n");
            ret = AVERROR_UNKNOWN;
        }
    
        out_stream->time_base = in_stream->time_base;
    
        //Copy the settings of AVCodecContext
        ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
        if (ret < 0) {
            printf( "Failed to copy context from input to output stream codec context\n");
        }
    
        out_stream->codecpar->codec_tag = 0;
        if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER) {
            out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
        }
    }
    //Dump Format------------------
    av_dump_format(ofmt_ctx, 0, "rtmp://", 1);
    //Open output URL
    if (!(ofmt->flags & AVFMT_NOFILE)) {
        ret = avio_open(&ofmt_ctx->pb, "rtmp://", AVIO_FLAG_WRITE);
        if (ret < 0) {
            printf( "Could not open output URL ");
        }
    }
    //Write file header
    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
        printf( "Error occurred when opening output URL\n");
    }
    
    // Encoding
    AVCodec *codec;
    AVCodecContext *c;
    
    // Check the new stream for NULL before touching it.
    AVStream *video_st = avformat_new_stream(ofmt_ctx, 0);
    if(video_st == NULL){
        NSLog(@"video stream error");
    }
    video_st->time_base.num = 1;
    video_st->time_base.den = 25;
    
    
    codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if(!codec){
        NSLog(@"avcodec find encoder error");
    }
    
    c = avcodec_alloc_context3(codec);
    if(!c){
        NSLog(@"avcodec alloc context error");
    }
    
    
    c->profile = FF_PROFILE_H264_BASELINE;
    c->width = ifmt_ctx->streams[videoindex]->codecpar->width;
    c->height = ifmt_ctx->streams[videoindex]->codecpar->height;
    c->time_base.num = 1;
    c->time_base.den = 25;
    c->bit_rate = 800000;
    //c->time_base = { 1,22 };
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    c->thread_count = 2;
    c->thread_type = 2;
    
    AVDictionary *param = 0;

    av_dict_set(&param, "preset", "slow", 0);
    av_dict_set(&param, "tune", "zerolatency", 0);

    // Pass the options dictionary; with NULL, the preset/tune settings above are silently ignored.
    if (avcodec_open2(c, codec, &param) < 0) {
        fprintf(stderr, "Could not open codec\n");
    }
    
    
    
    AVFrame *pFrame = av_frame_alloc();
    
    start_time=av_gettime();
    while (1) {
    
        AVPacket encoded_pkt;
    
        av_init_packet(&encoded_pkt);
        encoded_pkt.data = NULL;
        encoded_pkt.size = 0;
    
        AVStream *in_stream, *out_stream;
        //Get an AVPacket
        ret = av_read_frame(ifmt_ctx, &pkt);
        if (ret < 0) {
            break;
        }
    
        //FIX:No PTS (Example: Raw H.264)
        //Simple Write PTS
        if(pkt.pts==AV_NOPTS_VALUE){
            //Write PTS
            AVRational time_base1=ifmt_ctx->streams[videoindex]->time_base;
            //Duration between 2 frames (us)
            int64_t calc_duration=(double)AV_TIME_BASE/av_q2d(ifmt_ctx->streams[videoindex]->r_frame_rate);
            //Parameters
            pkt.pts=(double)(frame_index*calc_duration)/(double)(av_q2d(time_base1)*AV_TIME_BASE);
            pkt.dts=pkt.pts;
            pkt.duration=(double)calc_duration/(double)(av_q2d(time_base1)*AV_TIME_BASE);
        }
        //Important:Delay
        if(pkt.stream_index==videoindex){
            AVRational time_base=ifmt_ctx->streams[videoindex]->time_base;
            AVRational time_base_q={1,AV_TIME_BASE};
            int64_t pts_time = av_rescale_q(pkt.dts, time_base, time_base_q);
            int64_t now_time = av_gettime() - start_time;
            if (pts_time > now_time) {
                av_usleep(pts_time - now_time);
            }
    
        }
    
        in_stream  = ifmt_ctx->streams[pkt.stream_index];
        out_stream = ofmt_ctx->streams[pkt.stream_index];
        /* copy packet */
        //Convert PTS/DTS
        //pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
        //pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
        pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
        pkt.pos = -1;
    
        //Print to Screen
        if(pkt.stream_index==videoindex){
            //printf("Send %8d video frames to output URL\n",frame_index);
            frame_index++;
        }
    
    
    
        // Decode and Encode (note: send/receive calls are not one-to-one; see the sketch after this code)
        if(pkt.stream_index == videoindex) {
    
            ret = avcodec_send_packet(context, &pkt);
    
            if(ret<0){
                NSLog(@"avcode send packet error");
            }
    
            ret = avcodec_receive_frame(context, pFrame);
            if(ret<0){
                NSLog(@"avcodec receive frame error");
            }
    
            ret = avcodec_send_frame(c, pFrame);
    
            if(ret < 0){
                NSLog(@"avcodec send frame - %s", av_err2str(ret));
            }
    
            ret = avcodec_receive_packet(c, &encoded_pkt);
    
            if(ret < 0){
                NSLog(@"avcodec receive packet error");
            }
    
        }
    
        //ret = av_write_frame(ofmt_ctx, &pkt);

        // The encoded packet needs a valid stream index before muxing.
        encoded_pkt.stream_index = pkt.stream_index;
        av_packet_rescale_ts(&encoded_pkt, c->time_base, ofmt_ctx->streams[videoindex]->time_base);

        ret = av_interleaved_write_frame(ofmt_ctx, &encoded_pkt);
    
        if (ret < 0) {
            printf( "Error muxing packet\n");
            break;
        }
    
        av_packet_unref(&encoded_pkt);
        av_packet_unref(&pkt);  // av_free_packet() is deprecated; unref instead
    
    }
    //Write file trailer
    av_write_trailer(ofmt_ctx);
    
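    One likely problem in the loop above (hedged, since the full project isn't shown): avcodec_send_packet()/avcodec_receive_frame() and avcodec_send_frame()/avcodec_receive_packet() are not one-to-one. A decoder may need several packets before it produces a frame, and an encoder may buffer several frames before it emits a packet; in both cases the receive call returns AVERROR(EAGAIN), and the code above then muxes an empty packet. A minimal sketch of proper drain loops, reusing the names from the code above (context = decoder, c = encoder):

    ret = avcodec_send_packet(context, &pkt);
    if (ret < 0) { /* handle real send errors here */ }

    // Drain every frame the decoder has ready; EAGAIN means "feed more packets".
    while (avcodec_receive_frame(context, pFrame) >= 0) {
        if (avcodec_send_frame(c, pFrame) < 0)
            break;

        AVPacket enc_pkt;
        av_init_packet(&enc_pkt);
        enc_pkt.data = NULL;
        enc_pkt.size = 0;

        // Drain every packet the encoder has ready; EAGAIN means "feed more frames".
        while (avcodec_receive_packet(c, &enc_pkt) >= 0) {
            enc_pkt.stream_index = videoindex;
            av_packet_rescale_ts(&enc_pkt, c->time_base,
                                 ofmt_ctx->streams[videoindex]->time_base);
            // av_interleaved_write_frame() takes ownership of the packet reference.
            if (av_interleaved_write_frame(ofmt_ctx, &enc_pkt) < 0)
                break;
        }
    }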

  • ffmpeg record audio from Xvfb on CentOS

    25 May 2017, by boygiandi

    I'm trying to record audio from Xvfb, and I have some problems:

    1. What's the difference between alsa and pulse? I get confused.
    2. The CentOS server has no sound card:

    arecord -l

    arecord: device_list:268: no soundcards found...

    3. I may have many Xvfb processes; how do I record video and audio from a specific Xvfb process? I checked https://trac.ffmpeg.org/wiki/Capture/ALSA#Recordaudiofromanapplication but still don't understand how it works.

    ffmpeg -f alsa -ac 2 -i hw:0,0 -acodec pcm_s16le output.wav

    I've seen many commands like this, but I don't know how to get hw:0,0 (the id of the sound card?).

    Please help. Thanks
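
    A hedged note on the first two points: alsa is the kernel-level sound interface, while pulse (PulseAudio) is a sound server that runs on top of ALSA and mixes/routes audio per application. In hw:X,Y, X is the card number and Y the device number as listed by arecord -l. Since this headless server has no physical card, one common workaround (assuming you can load kernel modules) is the ALSA loopback device:

    modprobe snd-aloop

    Applications then play into the loopback card (for example hw:Loopback,0), and ffmpeg captures the paired side:

    ffmpeg -f alsa -ac 2 -i hw:Loopback,1,0 -acodec pcm_s16le output.wav

    With PulseAudio instead, pactl list short sources lists the source names that ffmpeg can record via -f pulse.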

  • How to listen to 2 incoming rtsp streams at the same time with FFMpeg

    24 May 2017, by Alexander Ushakov

    I can listen for and receive one rtsp stream with the FFMpeg library using this code:

    AVFormatContext* format_context = NULL;
    char* url = "rtsp://example.com/in/1";
    AVDictionary *options = NULL;
    av_dict_set(&options, "rtsp_flags", "listen", 0);
    av_dict_set(&options, "rtsp_transport", "tcp", 0);
    
    int status = avformat_open_input(&format_context, url, NULL, &options);
    av_dict_free(&options);
    if( status >= 0 )
    {
        status = avformat_find_stream_info( format_context, NULL);
        if( status >= 0 )
        {
            AVPacket av_packet;
            av_init_packet(&av_packet);
    
            for(;;)
            {                                                                      
                status = av_read_frame( format_context, &av_packet );
                if( status < 0 )
                {
                    break;
                }
            }
        }
        avformat_close_input(&format_context);
    }
    

    But if I try to open another similar listener (in another thread, with another url) at the same time, I get this error:

    Unable to open RTSP for listening rtsp://example.com/in/2: Address already in use

    It looks like avformat_open_input tries to open a socket that is already opened by the previous call to avformat_open_input. Is there any way to share this socket between 2 threads? Maybe there is some dispatcher in FFMpeg for such a task.

    Important Note: In my case my application must serve as a listen server for incoming RTSP connections! It is not a client connecting to another RTSP server.
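
    A hedged reading of this error: in listen mode, each avformat_open_input() appears to bind its own server socket on the port given in the URL, so two listeners on the same port fail with "Address already in use". Plain libavformat does not seem to expose a way to share one listening socket between two contexts, so one workaround sketch is to give each listener its own port (the ports and paths below are hypothetical):

    // Sketch: one RTSP listener per port, each driven from its own thread.
    static int open_listener(const char *url, AVFormatContext **fmt_ctx)
    {
        AVDictionary *options = NULL;
        av_dict_set(&options, "rtsp_flags", "listen", 0);
        av_dict_set(&options, "rtsp_transport", "tcp", 0);
        int status = avformat_open_input(fmt_ctx, url, NULL, &options);
        av_dict_free(&options);
        return status;
    }

    // Thread 1: open_listener("rtsp://0.0.0.0:8554/in/1", &ctx1);
    // Thread 2: open_listener("rtsp://0.0.0.0:8555/in/2", &ctx2);

    If a single public port is required, the dispatcher would have to sit in front of the application (a full RTSP server) rather than inside libavformat.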

  • FFmpeg mp4 encoder for android html

    24 May 2017, by Dell Watson

    Hello, I'm trying to put an .mp4 video auto-captured from my webcam with ffmpeg into HTML, and then start my localhost so my Android device can see it.

    The video WAS ABLE to play on my Android device, BUT it was all white with pixel errors, so it's a fail.

    I thought it was because Android has a different surface, since it runs perfectly on my desktop, so I kept searching and trying ogv/webm.

    In the end, I just used a downloaded mp4 and it runs perfectly. Now I think the problem comes from my webcam mp4 created by ffmpeg (run in cmd).

    I compared the webcam mp4 vs the downloaded mp4:

    duration: 5 sec vs 1 min

    data rate: 16477 kbps vs 613 kbps

    frame rate: 30 frm/s vs 23 frm/s

    size: 9 MB vs 5 MB

    Even though the webcam video is only 5 seconds, it still holds more data than the 1-minute downloaded video, maybe because it was captured without conversion.

    But the question is: is that the reason for the problem? android-html (Google Chrome) wasn't able to display it and produced dead pixels, even though it runs on the desktop. That shouldn't be the problem, right?

    I really need to get the webcam recording onto an Android surface (my web app).

    I have no idea how to fix it. Any advice? I've been searching a lot. Maybe there is another problem I don't know about yet.

    EDIT: my ffmpeg command (run in cmd): ffmpeg -y -f v4l2 -i /dev/video1 -codec:v libx264 -qp 0 -t 0:00:05 hss.mp4

    EDIT 2: My second thought is that the encoder I used (libx264) is not supported on Android, but I still have no idea.
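
    A hedged guess based on the command above: -qp 0 makes libx264 encode losslessly, which forces the High 4:4:4 Predictive profile, and most Android hardware decoders only handle Baseline/Main/High streams in yuv420p. That would explain both the huge data rate and the white frame with pixel garbage on the phone while the desktop plays the same file fine. A sketch of a more device-friendly capture command:

    ffmpeg -y -f v4l2 -i /dev/video1 -codec:v libx264 -profile:v baseline -pix_fmt yuv420p -crf 23 -t 0:00:05 hss.mp4

    Forcing -pix_fmt yuv420p matters because v4l2 webcams often deliver 4:2:2 frames, which libx264 would otherwise keep and which mobile decoders typically reject.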