Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg


  • C runs a Linux shell command (ffmpeg) 10x slower than typing it directly in the terminal

    14 November 2014, by dadylonglegs

    CLOSED

    I'm writing an application that executes a Linux shell command (ffmpeg) from my C code, such as:

    char command[2000];
    snprintf(command, sizeof(command),
        "ffmpeg -i %s/%s -r 1 -vf scale=-1:120 -vframes 1 -ss 00:00:00 %s.gif",
        publicFolder, mediaFile, mediaFile);
    system(command);
    

    To extract a video thumbnail from a specific video. But strangely, it is much slower when the shell command is executed from C compared to typing it directly into the terminal. I have no idea why. Can anybody help me, please? Thanks in advance.
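    One likely cause (an assumption, since the code itself looks correct) is that system() runs the command through /bin/sh with the C program's environment, so a different PATH can resolve to a different ffmpeg build, and the non-interactive environment can differ in other ways from your terminal. A quick shell diagnostic to compare the two environments:

```shell
# Compare which ffmpeg binary each environment resolves: system() runs its
# command through /bin/sh with the C program's environment, so a different
# PATH can pick up a different (for example unoptimized) ffmpeg build.
command -v ffmpeg || echo "ffmpeg not on the interactive PATH"
sh -c 'command -v ffmpeg' || echo "ffmpeg not on the system() PATH"
# Then time the identical invocation both ways (file names are illustrative):
#   time ffmpeg -i video.mp4 -r 1 -vf scale=-1:120 -vframes 1 -ss 00:00:00 out.gif
#   time sh -c 'ffmpeg -i video.mp4 -r 1 -vf scale=-1:120 -vframes 1 -ss 00:00:00 out.gif'
```

    If both environments resolve the same binary and the timings still differ, the slowdown is probably in the C program itself rather than in the spawned command.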

  • Add logo in a specific time interval relative to the end

    14 November 2014, by Kåre Brøgger Sørensen

    Is there a way, in a one-liner, to avoid using ffmpeg to calculate the duration of the whole video before I render?

    I would like something like:

    ffmpeg -i ny_video_no_logo.mp4 -i ~/Pictures/home_logo_roed_new.png -filter_complex "[0:v][1:v]overlay=main_w-overlay_w-10:main_h-overlay_h-10:enable='between(t,5.50,FULL_LENGTH - 10)'" -codec:a copy out.mp4
    

    Is there a simple way, like "main_w", to get the video length inserted? I am using the non-existent variable FULL_LENGTH in the above.

    Or do I have to run ffmpeg -i first and extract the duration from its output?
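    As far as I know, the enable expression has no built-in variable for the total duration, so a two-step approach is the usual workaround. A sketch, assuming ffprobe and awk are available and using the file names from the question:

```shell
# Probe the container duration, then substitute (duration - 10) as the
# end of the overlay's enable interval.
add_logo() {
    in=$1; logo=$2
    dur=$(ffprobe -v error -show_entries format=duration \
          -of default=noprint_wrappers=1:nokey=1 "$in") || return 1
    end=$(awk -v d="$dur" 'BEGIN{print d - 10}')
    ffmpeg -i "$in" -i "$logo" \
      -filter_complex "[0:v][1:v]overlay=main_w-overlay_w-10:main_h-overlay_h-10:enable='between(t,5.50,$end)'" \
      -codec:a copy out.mp4
}
# Usage:
#   add_logo ny_video_no_logo.mp4 ~/Pictures/home_logo_roed_new.png
```

    It is still a single command line in practice, since the ffprobe call can be inlined with command substitution.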

  • Conversion from cv::Mat to AVFrame

    14 November 2014, by danics

    I created a method for converting a cv::Mat to an AVFrame in the PIX_FMT_YUV420P format (oframe's format), but _convCtx always returns null. What's wrong? Thanks in advance!

    void processToFrame(cv::Mat* _mat, AVFrame* oframe)
    {
        // _oframe (intermediate BGR frame) and _convCtx are class members.
        // Create and allocate the intermediate frame once.
        if (_oframe == nullptr) {
            _oframe = avcodec_alloc_frame();
            if (_oframe == nullptr)
                throw std::runtime_error("Matrix Converter: Could not allocate the output frame.");

            avpicture_alloc(reinterpret_cast<AVPicture*>(_oframe),
                PIX_FMT_BGR24, _mat->cols, _mat->rows);

            // avpicture_alloc() does not set these fields; leaving them at 0
            // makes the sws_getContext() call below fail and return null.
            _oframe->format = PIX_FMT_BGR24;
            _oframe->width  = _mat->cols;
            _oframe->height = _mat->rows;
        }

        // Wrap the cv::Mat's BGR pixel data in the intermediate frame.
        avpicture_fill(reinterpret_cast<AVPicture*>(_oframe),
            (uint8_t *)_mat->data,
            PIX_FMT_BGR24,
            _mat->cols,
            _mat->rows);

        // Conversion context: source is the BGR frame, destination is the
        // YUV420P output frame.
        if (_convCtx == nullptr) {
            _convCtx = sws_getContext(
                _oframe->width, _oframe->height, (enum AVPixelFormat) _oframe->format,
                oframe->width, oframe->height, (enum AVPixelFormat) oframe->format,
                SWS_BICUBIC, nullptr, nullptr, nullptr);
        }
        if (_convCtx == nullptr)
            throw std::runtime_error("Matrix Converter: Unable to initialize the conversion context.");

        // Scale/convert the BGR data into the YUV420P frame.
        if (sws_scale(_convCtx,
            _oframe->data, _oframe->linesize, 0, _oframe->height,
            oframe->data, oframe->linesize) < 0)
            throw std::runtime_error("Matrix Converter: Pixel format conversion not supported.");
    }
    

    Edit: creating oframe:

    void createVideoFile(const char* filename, int w, int h, int codec_id, int fps)
      {
          if(codec == nullptr) 
          {
              /* find the video encoder for the given codec_id */
              codec = avcodec_find_encoder((AVCodecID)codec_id);
              if (!codec) {
                  throw std::runtime_error("codec not found\n");
              }
          }
    
          if(c == nullptr)
          {
              c = avcodec_alloc_context3(codec);
              if(!c)
              {
                  throw std::runtime_error("Could not allocate video codec context\n");
              }
              /* put sample parameters */
              c->bit_rate = 400000;
              /* resolution must be a multiple of two */
              c->width = 2 * (w / 2);
              c->height = 2 * (h / 2);
              /* frames per second */
              AVRational ar;
              ar.den = 1;
              ar.num = fps;
    
              c->time_base= ar; //(AVRational){1,25};
              c->gop_size = 10; /* emit one intra frame every ten frames */
              c->max_b_frames=1;
              c->pix_fmt = PIX_FMT_YUV420P;
    
              if(codec_id == AV_CODEC_ID_H264)
                  av_opt_set(c->priv_data, "preset", "slow", 0);
    
              /* open it */
              if (avcodec_open2(c, codec, NULL) < 0) {
                  throw std::runtime_error("Could not open codec\n");
              }
    
              f = fopen(filename, "wb");
              if (!f) {
                  throw std::runtime_error("could not open file\n");
              }
    
              frame = avcodec_alloc_frame();
              if (!frame) {
                  throw std::runtime_error("Could not allocate video frame\n");
              }
              frame->format = c->pix_fmt;
              frame->width  = c->width;
              frame->height = c->height;
    
              /* the image can be allocated by any means and av_image_alloc() is
               * just the most convenient way if av_malloc() is to be used */
    
              int ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height, c->pix_fmt, 32);
    
              if (ret < 0) {
                  throw std::runtime_error("Could not allocate raw picture buffer\n");
              }
          }
    }
    

    Reading the Mat images:

    void video_encode_example(const cv::Mat& fin)
    {
        AVPacket pkt;
        /* encode 1 second of video */
        av_init_packet(&pkt);
        pkt.data = NULL;    // packet data will be allocated by the encoder
        pkt.size = 0;
    
        fflush(stdout);
    
        cv::Mat res;
        cv::resize(fin, res, cv::Size(c->width, c->height));
    
        processToFrame(&res, frame);
        frame->pts = i;
    
        /* encode the image */
        int ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
        if (ret < 0) {
            throw std::runtime_error("Error encoding frame\n");
        }
    
        if (got_output) {
            printf("Write frame %3d (size=%5d)\n", i, pkt.size);
            fwrite(pkt.data, 1, pkt.size, f);
            av_free_packet(&pkt);
        }
        i++;
        /* get the delayed frames */
        /*for (got_output = 1; got_output; i++) {
            fflush(stdout);
    
            ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
            if (ret < 0) {
                throw std::runtime_error("Error encoding frame\n");
            }
    
            if (got_output) {
                printf("Write frame %3d (size=%5d)\n", i, pkt.size);
                fwrite(pkt.data, 1, pkt.size, f);
                av_free_packet(&pkt);
            }
        }*/
    }
    
  • How to convert an IP Camera video stream into a video file?

    14 November 2014, by AgentFire

    I have a URL (/ipcam/mpeg4.cgi) which points to my IP camera, which is connected via Ethernet. Accessing the URL results in an infinite stream of video (possibly with audio) data.

    I would like to store this data into a video file and play it later with a video player (HTML5's video tag is preferred as the player).

    However, a straightforward approach, which is simply saving the stream data into an .mp4 file, didn't work.

    I have looked into the file (the referenced screenshot is not included here).

    It turned out there are some HTTP headers, which I manually removed with a binary editing tool, and yet no player could play the rest of the file.

    The headers are:

    --myboundary
    Content-Type: image/mpeg4
    Content-Length: 76241
    X-Status: 0
    X-Tag: 1693923
    X-Flags: 0
    X-Alarm: 0
    X-Frametype: I
    X-Framerate: 30
    X-Resolution: 1920*1080
    X-Audio: 1
    X-Time: 2000-02-03 02:46:31
    alarm: 0000
    

    My question is pretty clear now, and I would appreciate any help or suggestions. I suspect I have to create some MP4 headers myself based on the values above; however, I fail to understand format descriptions such as these.

    I have the following video stream settings on my IP camera (the referenced screenshot is not included here):

    I could also use the ffmpeg tool, but no matter how I mix the arguments to the program, it keeps failing with an error.
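    Without the exact error message this is only a guess, but the dump above shows a multipart stream (the --myboundary part headers), not a bare MP4, which is why saving the raw bytes to an .mp4 file cannot produce a playable file. ffmpeg can often demux such camera streams directly over HTTP and rewrite them into a proper MP4 container; a sketch, with the URL a placeholder built from the path in the question:

```shell
# Let ffmpeg demux the camera's multipart stream itself and remux it into
# a real MP4 container; -c copy avoids re-encoding, and -t limits the
# recording length since the source stream is endless.
record_camera() {
    url=$1   # e.g. "http://<camera-ip>/ipcam/mpeg4.cgi" (placeholder)
    ffmpeg -i "$url" -c copy -t 60 recording.mp4
}
# Usage:
#   record_camera "http://192.168.1.10/ipcam/mpeg4.cgi"
```

    If the remuxed file still is not playable, re-encoding (for example with -c:v libx264 instead of -c copy) is the usual fallback, and that output also plays in an HTML5 video tag.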

  • How to play a part of the MP4 video stream?

    14 November 2014, by AgentFire
