Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • create AVI file from compressed data

    3 December 2015, by Qureshi

    I am using the ffmpeg libraries to create an AVI file, as described in the post (Make AVI file from H264 compressed data). That author had the same problem I currently have, i.e. getting error value -22.

    Can anyone please explain the meaning of this error code "-22" that I get from "av_interleaved_write_frame"?

    He suggested that "By setting pts and dts with AV_NOPTS_VALUE I've solved the problem." Please share an example of how to set the pts value with AV_NOPTS_VALUE. And what should the value of pts be, roughly?
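    For context, -22 is AVERROR(EINVAL) ("invalid argument"), which av_interleaved_write_frame typically returns when packet timestamps are missing or inconsistent. Below is a minimal, self-contained sketch of the two options the linked answer hints at. AV_NOPTS_VALUE is reproduced from libavutil/avutil.h so the snippet compiles without the FFmpeg headers, and the frame rate and time base are illustrative assumptions, not values from the question:

    ```cpp
    #include <cstdint>
    #include <iostream>

    // AV_NOPTS_VALUE as defined in libavutil/avutil.h, reproduced here so the
    // sketch compiles without the FFmpeg headers.
    static const int64_t AV_NOPTS_VALUE_SKETCH = INT64_C(0x8000000000000000);

    // Illustrative stand-in for AVPacket's timestamp fields.
    struct PacketSketch {
        int64_t pts;
        int64_t dts;
    };

    int main() {
        PacketSketch pkt;

        // Option 1: leave both timestamps unset and let the muxer derive them.
        pkt.pts = AV_NOPTS_VALUE_SKETCH;
        pkt.dts = AV_NOPTS_VALUE_SKETCH;

        // Option 2: compute pts from a frame counter. With an assumed 25 fps
        // stream and a stream time_base of 1/90000 (typical for MPEG), each
        // frame advances the clock by 90000 / 25 = 3600 ticks -- the same
        // conversion av_rescale_q(frame_index, (AVRational){1, 25},
        // (AVRational){1, 90000}) performs in real code.
        int64_t frame_index = 10;
        pkt.pts = frame_index * (90000 / 25);
        pkt.dts = pkt.pts; // no B-frames in this sketch, so dts == pts

        std::cout << pkt.pts << std::endl; // 36000
        return 0;
    }
    ```

    In real muxing code these fields are set on an actual AVPacket before each av_interleaved_write_frame call, and av_rescale_q handles the time-base conversion without overflow.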

  • Custom buffer for FFMPEG

    3 December 2015, by nmarevic

    I have a question regarding buffer reads with ffmpeg. The idea is as follows: an outside module (which I cannot change) provides me a video stream in chunks, giving me the input data and its size in bytes (the input parameters of my "framfunction" function). I have to copy the input data into a buffer, read it with ffmpeg (Zeranoe build) and extract the video frames. Each time new data arrives, my function "framfunction" is called. Any unprocessed data from the previous run is moved to the beginning of the buffer, followed by the new data on the next run, and so on. It is essentially based on the source and Dranger tutorials. My current attempt is below; see the comments (I've kept only the ones concerning the current buffer handling) to get a picture of what I want to do. I know it is messy, and it only sort of works: it skips some frames. Any suggestions on the ffmpeg code and buffer design are welcome:

    #include <iostream>
    #include <cstring>
    
    extern "C"
    {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>
    #include <libavformat/avio.h>
    #include <libavutil/file.h>
    }
    struct buffer_data {
       uint8_t *ptr;
       size_t size;
    };
    
    static int read_packet(void *opaque, uint8_t *buf, int buf_size)
    {
       struct buffer_data *bd = (struct buffer_data *)opaque;
       buf_size = FFMIN(buf_size, bd->size);
       if (buf_size == 0)
          return AVERROR_EOF; //signal end of data instead of returning 0
       memcpy(buf, bd->ptr, buf_size);
       bd->ptr += buf_size;
       bd->size -= buf_size;
       return buf_size;
    }
    
    class videoclass
    {
    private:
       uint8_t* inputdatabuffer;
       size_t offset;
    
    public:
       videoclass();
       ~videoclass();
       int framfunction(uint8_t* inputbytes, int inputbytessize);
    };
    
    videoclass::videoclass()
       : inputdatabuffer(nullptr)
       , offset(0)
    {
       inputdatabuffer = new uint8_t[8388608]; //buffer where the input data will be stored
    }
    
    videoclass::~videoclass()
    {
       delete[] inputdatabuffer;
    }
    
    
    int videoclass::framfunction(uint8_t* inputbytes, int inputbytessize)
    {
       int i, videoStream, numBytes, frameFinished;
       AVFormatContext *pFormatCtx = NULL;
       AVCodecContext *pCodecCtx = NULL;
       AVIOContext   *avio_ctx = NULL;
       AVCodec   *pCodec = NULL;
       AVFrame   *pFrame = NULL;
       AVFrame   *pFrameRGB = NULL;
       AVPacket packet;
       uint8_t   *buffer = NULL;
       uint8_t   *avio_ctx_buffer = NULL;
       size_t   avio_ctx_buffer_size = 4096;
       size_t   bytes_processed = 0;
       struct buffer_data bd = { 0 };
    
       //if (av_file_map("sample.ts", &inputbytes, &inputbytessize, 0, NULL) < 0)//
       //   return -1;
    
       memcpy(inputdatabuffer + offset, inputbytes, inputbytessize);//copy new data into inputdatabuffer at the offset calculated at the end of the previous run. In other words: copy the new data in after the unprocessed data left over from the previous call
       offset += inputbytessize; //total number of bytes in the buffer: unprocessed data from the last run + size of the new data (inputbytessize)
    
       bd.ptr = inputdatabuffer;
       bd.size = offset;
    
       if (!(pFormatCtx = avformat_alloc_context()))
          return -1;
       avio_ctx_buffer = (uint8_t *)av_malloc(avio_ctx_buffer_size);
       avio_ctx = avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size,0, &bd, &read_packet, NULL, NULL);
       pFormatCtx->pb = avio_ctx;
    
       av_register_all(); 
       avcodec_register_all();
    
       pFrame = av_frame_alloc(); 
       pFrameRGB = av_frame_alloc(); 
    
       if (avformat_open_input(&pFormatCtx, NULL, NULL, NULL) != 0) 
          return -2;
       if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
          return -3;
    
       videoStream = -1;
       for (i = 0; i < pFormatCtx->nb_streams; i++)
          if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) { 
             videoStream = i;
             break;
          }
       if (videoStream == -1) 
          return -4;
    
       pCodecCtx = pFormatCtx->streams[videoStream]->codec; 
    
       pCodec = avcodec_find_decoder(pCodecCtx->codec_id); 
       if (pCodec == NULL){
          std::cout << "Unsupported codec" << std::endl;
          return -5;
       }
    
       if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
          return -6; 
    
       numBytes = avpicture_get_size(PIX_FMT_BGR24, pCodecCtx->width, pCodecCtx->height); 
       buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
    
       avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_BGR24, pCodecCtx->width, pCodecCtx->height); 
    
       while (av_read_frame(pFormatCtx, &packet) >= 0){
          if (packet.stream_index == videoStream){ 
             avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet); 
             if (frameFinished){ 
                std::cout << "Yaay, frame found" << std::endl;
                }
    
          }
          av_free_packet(&packet); 
    
       bytes_processed = (size_t)pFormatCtx->pb->pos; //bytes consumed from the input so far; is this really the amount of processed data (x bytes out of offset)?
       }
    
       offset -= bytes_processed; //size of unprocessed data
    
       av_free(buffer);
       av_free(pFrameRGB);
       av_free(pFrame);
    
       avcodec_close(pCodecCtx);
    
       av_freep(&avio_ctx->buffer);
       av_freep(&avio_ctx);
    
    
       avformat_close_input(&pFormatCtx);
    
    
       memmove(inputdatabuffer, inputdatabuffer + bytes_processed, offset);//move unprocessed data to begining of the main buffer
    
          return 0;
    }
    

    A call to my function would look something like this:

    WHILE(VIDEO_INPUT)
    {
        READ VIDEO DATA FROM INPUT BUFFER
        STORE THAT DATA AND ITS SIZE IN THE VARIABLES NEW_DATA AND NEW_DATA_SIZE
        CALL FUNCTION FRAMFUNCTION AND PASS NEW_DATA AND NEW_DATA_SIZE
        DO OTHER THINGS
    }
    

    What I would like to know is the exact size of the unprocessed data. The comments in the code show my attempt, but I don't think it is good enough, so I need some help with this issue.

    EDIT: the key question is how to get the correct "bytes_processed" size. I've also made a PDF explaining how my buffer should work (pdf file). Thanks
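    One way to make the consumed-byte count explicit (a sketch under assumptions, not a confirmed fix: pb->pos reflects avio's read-ahead rather than demuxed bytes, so counting in the callback has the same caveat unless the bytes avio still holds internally, avio_ctx->buf_end - avio_ctx->buf_ptr, are subtracted as well) is to let the read callback record how many bytes it handed out via a counter in the opaque struct. All names below are illustrative:

    ```cpp
    #include <algorithm>
    #include <cstdint>
    #include <cstring>
    #include <iostream>

    // Illustrative stand-in for the opaque struct passed to avio_alloc_context.
    struct BufferData {
        const uint8_t* ptr;   // next unread byte
        size_t size;          // bytes remaining
        size_t consumed;      // total bytes handed to the demuxer so far
    };

    // Same shape as the read_packet callback in the question, but it also
    // counts what it hands out. With real FFmpeg it would return AVERROR_EOF
    // (instead of 0) once the buffer is drained.
    static int read_packet_sketch(void* opaque, uint8_t* buf, int buf_size) {
        BufferData* bd = static_cast<BufferData*>(opaque);
        int n = static_cast<int>(std::min<size_t>(buf_size, bd->size));
        if (n == 0)
            return 0; // real code: return AVERROR_EOF;
        std::memcpy(buf, bd->ptr, n);
        bd->ptr += n;
        bd->size -= n;
        bd->consumed += n;
        return n;
    }

    int main() {
        uint8_t data[10000] = {0};
        BufferData bd{data, sizeof(data), 0};
        uint8_t chunk[4096];

        // Drain the input in avio-sized chunks, as the demuxer would.
        while (read_packet_sketch(&bd, chunk, sizeof(chunk)) > 0) {}

        std::cout << bd.consumed << std::endl; // 10000
        return 0;
    }
    ```

    After av_read_frame stops returning packets, bd.consumed minus avio's unread buffered bytes would give the amount that can safely be discarded from the front of inputdatabuffer.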

  • Parsed_concat_1 @ 02ad98e0 Failed to configure output pad on Parsed_concat_1

    3 December 2015, by user650922

    I am using NReco.VideoConverter (FFMpegConverter) to call ConcatMedia. However, I am getting the error: Parsed_concat_1 @ 02ad98e0 Failed to configure output pad on Parsed_concat_1.

    Here is a snippet of my code:

    try
            {
                var ffMpeg = new FFMpegConverter();
                var set = new ConcatSettings
                {
                    ConcatAudioStream = false
                };
                set.SetVideoFrameSize(1080, 720);
                set.MaxDuration = 500;
                ffMpeg.ConcatMedia(videos, output + ".mp4", Format.mp4, set);
                return output + ".mp4";
            }
            catch (Exception ex)
            {
                var text = new List<string>();
                text.AddRange(videos);
                text.Add(ex.Message);
                throw new Exception(string.Join(", ", text), ex);
            }
    

    The VideoFrameSize of all the videos is (1080, 720) and the FrameRate is 25.

    Could someone please help me resolve this issue?

  • Converted mp4 not displaying image which was added with FFMPEG [duplicate]

    3 December 2015, by Avi

    I am converting an mp3 to an mp4 and simultaneously adding an image to it using FFMPEG.

    Here is the command I am using.

    ffmpeg -loop 1 -i ../image.jpg -i "track 1.mp3" -c:v libx264 -c:a aac -strict experimental -b:a 192k -shortest -vf scale=800:400 "track 1".mp4

    The image appears when the video is played in a web browser. However, it doesn't appear when the video is played on Windows Media Player or on QuickTime.

    Any ideas why?
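    One common cause (an assumption here; the question does not confirm it): libx264 keeps the input chroma sampling by default, so a JPEG source can yield yuv422p or yuv444p output, which browsers often decode but Windows Media Player and QuickTime do not. Forcing 4:2:0 chroma is the usual fix, e.g.:

    ```shell
    ffmpeg -loop 1 -i ../image.jpg -i "track 1.mp3" -c:v libx264 -c:a aac -strict experimental \
      -b:a 192k -shortest -vf "scale=800:400,format=yuv420p" "track 1".mp4
    ```

    Equivalently, `-pix_fmt yuv420p` can be added instead of the `format` filter.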

  • creation of video file with alpha channel/transparency

    3 December 2015, by boerfi

    I'm experimenting with a video file with an alpha channel. Later on, the file should serve as an overlay for another video: it will be merged with another file in real time and saved to a new file. But that is not the important point; the problem starts earlier, because I can't even play it.

    The video is a PNG-encoded QuickTime file which I cannot play with any video player. WMP, VLC, MPC and ffplay all show various problems, which all suggest that the images are decoded too slowly. The strange thing is that neither my CPU (i7) nor my SSD is at its limit, yet the file isn't played correctly.

    Since the problem appears in all players, I think it is caused by using PNG inside a video. I googled but didn't find a proper way to create a partly transparent video file. I tried various export methods (with Adobe Media Encoder) with no result: the file either lost its transparency or ran too slowly.

    The file I create with my SDK is correct (video and audio are synchronous and fluid), but it takes 1 minute to render a 40-second video, although it works in real time with files without transparency.

    Does anyone know what kind of file I should export that has a minimum resolution of 720p, is partly transparent and can be played with ffplay in real time? I would also appreciate any experiences with partly transparent videos that could help me, because I couldn't find any helpful links.
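    For reference, a hedged suggestion (the input filename below is hypothetical): QuickTime codecs that carry an alpha channel but decode far more cheaply than PNG include Animation (qtrle) and ProRes 4444; both preserve partial transparency and usually play back in real time with ffplay.

    ```shell
    # Animation (RLE) codec: lossless ARGB, inexpensive to decode
    ffmpeg -i input_with_alpha.mov -c:v qtrle overlay_qtrle.mov

    # ProRes 4444 with alpha (10-bit 4:4:4:4 pixel format)
    ffmpeg -i input_with_alpha.mov -c:v prores_ks -pix_fmt yuva444p10le overlay_prores.mov
    ```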

    Thanks, Marius