Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • FFMPEG merge two .mp4 videos - resolution distorted

    9 June 2016, by Misha Moryachok

    I am trying to merge two .mp4 videos, and in some cases the second video part is distorted in the output. Here is an example: https://www.youtube.com/watch?v=wWMNTBWJ37A

    The real video is: https://www.youtube.com/watch?v=ASio-j-Epi8

    As you can see, we added an intro before the real content, but the real content is stretched. I believe this happens because the first video is 1280x720 and the second is 460x720.

    Here are the commands used to merge the videos:

    *1st step (convert the videos from .mp4 to .ts)

     ffmpeg -i videoPathMP4 -c copy -bsf:v h264_mp4toannexb -f mpegts videoPathTS
    

    *2nd step (merge videos)

    ffmpeg -i "concat:$video1|$video2" -c copy -bsf:a aac_adtstoasc $mergePathMP4
    

    The output is as shown in the YouTube link above. I also tried changing the first video's resolution to match the second:

    ffmpeg -i inputVideo.mp4 -s 460x720 outputVideo.mp4
    

    However, it didn't help. Does anyone know how to solve this? Thanks
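    A common way to avoid this kind of stretching (a sketch, not the poster's code; the file names and the 1280x720 target are assumed) is to normalize both inputs to the same resolution first, scaling the smaller video to fit and padding the remainder with black bars, and only then run the mp4→ts and concat steps unchanged:

```python
# Sketch: normalize an input to 1280x720 before concatenation.
# File names and the target resolution are hypothetical examples.

def scale_pad_command(src, dst, width=1280, height=720):
    """Build an ffmpeg command that scales `src` to fit within
    width x height (preserving aspect ratio) and pads the rest,
    so every input ends up with identical dimensions."""
    vf = (f"scale={width}:{height}:force_original_aspect_ratio=decrease,"
          f"pad={width}:{height}:(ow-iw)/2:(oh-ih)/2")
    return ["ffmpeg", "-i", src, "-vf", vf,
            "-c:v", "libx264", "-c:a", "aac", dst]

cmd = scale_pad_command("intro.mp4", "intro_720p.mp4")
print(" ".join(cmd))
```

    Running this once on the 460x720 part (e.g. via `subprocess.run(cmd)`) makes both segments 1280x720, so the stream-copy concat no longer mixes resolutions.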

  • Increase/Decrease audio volume using FFmpeg

    9 June 2016, by williamtroup

    I am currently using C# interop calls to the FFmpeg APIs to handle video and audio. I have the following code in place to extract the audio from a video and write it to a file.

    while (ffmpeg.av_read_frame(formatContext, &packet) >= 0)
    {
        if (packet.stream_index == streamIndex)
        {
            while (packet.size > 0)
            {
                int frameDecoded;
                int bytesDecoded = ffmpeg.avcodec_decode_audio4(codecContext, frame, &frameDecoded, packet);

                if (bytesDecoded < 0)
                {
                    break; // decoding error; drop the rest of this packet
                }

                if (frameDecoded > 0)
                {
                    //writeAudio.WriteFrame(frame);
                }

                // avcodec_decode_audio4() returns the number of bytes consumed
                packet.data += bytesDecoded;
                packet.size -= bytesDecoded;
            }

            frameIndex++;
        }

        Avcodec.av_free_packet(&packet);
    }
    

    This is all working correctly. I'm currently using the FFmpeg.AutoGen project for the API access.

    I want to be able to increase/decrease the volume of the audio before it's written to the file, but I cannot find a command or any help with this. Does it have to be done manually?
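    For reference, doing it "manually" would mean scaling each decoded PCM sample by a gain factor and clamping to the sample type's range. A sketch of the idea in Python (the real code would apply the same arithmetic to the AVFrame's sample buffer in C#):

```python
def apply_gain(samples, gain):
    """Scale signed 16-bit PCM samples by `gain`, clamping to the
    legal int16 range to avoid wrap-around (audible distortion)."""
    out = []
    for s in samples:
        v = int(s * gain)
        out.append(max(-32768, min(32767, v)))  # clamp to int16
    return out

louder = apply_gain([1000, -20000, 30000], 1.5)
print(louder)  # note: 30000 * 1.5 exceeds int16 and clamps to 32767
```

    This works, but hard clamping distorts loud passages, which is one reason the filter-graph approach below is usually preferred.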

    Update 1:

    After receiving some help, this is the class layout I have:

    public unsafe class FilterVolume
    {
        #region Private Member Variables
    
        private AVFilterGraph* m_filterGraph = null;
        private AVFilterContext* m_aBufferSourceFilterContext = null;
        private AVFilterContext* m_aBufferSinkFilterContext = null;
    
        #endregion
    
        #region Private Constant Member Variables
    
        private const int EAGAIN = 11;
    
        #endregion
    
        public FilterVolume(AVCodecContext* codecContext, AVStream* stream, float volume)
        {
            CodecContext = codecContext;
            Stream = stream;
            Volume = volume;
    
            Initialise();
        }
    
        public AVFrame* Adjust(AVFrame* frame)
        {
            AVFrame* returnFilteredFrame = ffmpeg.av_frame_alloc();
    
            if (m_aBufferSourceFilterContext != null && m_aBufferSinkFilterContext != null)
            {
                int bufferSourceAddFrameResult = ffmpeg.av_buffersrc_add_frame(m_aBufferSourceFilterContext, frame);
                if (bufferSourceAddFrameResult < 0)
                {
                }
    
                int bufferSinkGetFrameResult = ffmpeg.av_buffersink_get_frame(m_aBufferSinkFilterContext, returnFilteredFrame);
                if (bufferSinkGetFrameResult < 0 && bufferSinkGetFrameResult != -EAGAIN)
                {
                }
            }
    
            return returnFilteredFrame;
        }
    
        public void Dispose()
        {
            Cleanup(m_filterGraph);
        }
    
        #region Private Properties
    
        private AVCodecContext* CodecContext { get; set; }
        private AVStream* Stream { get; set; }
        private float Volume { get; set; }
    
        #endregion
    
        #region Private Setup Helper Functions
    
        private void Initialise()
        {
            m_filterGraph = GetAllocatedFilterGraph();
    
            string aBufferFilterArguments = string.Format("sample_fmt={0}:channel_layout={1}:sample_rate={2}:time_base={3}/{4}",
                (int)CodecContext->sample_fmt,
                CodecContext->channel_layout,
                CodecContext->sample_rate,
                Stream->time_base.num,
                Stream->time_base.den);
    
            AVFilterContext* aBufferSourceFilterContext = CreateFilter("abuffer", m_filterGraph, aBufferFilterArguments);
            AVFilterContext* volumeFilterContext = CreateFilter("volume", m_filterGraph, string.Format("volume={0}", Volume));
            AVFilterContext* aBufferSinkFilterContext = CreateFilter("abuffersink", m_filterGraph);
    
            LinkFilter(aBufferSourceFilterContext, volumeFilterContext);
            LinkFilter(volumeFilterContext, aBufferSinkFilterContext);
    
            SetFilterGraphConfiguration(m_filterGraph, null);
    
            m_aBufferSourceFilterContext = aBufferSourceFilterContext;
            m_aBufferSinkFilterContext = aBufferSinkFilterContext;
        }
    
        #endregion
    
        #region Private Cleanup Helper Functions
    
        private static void Cleanup(AVFilterGraph* filterGraph)
        {
            if (filterGraph != null)
            {
                ffmpeg.avfilter_graph_free(&filterGraph);
            }
        }
    
        #endregion
    
        #region Private Helpers
    
        private AVFilterGraph* GetAllocatedFilterGraph()
        {
            AVFilterGraph* filterGraph = ffmpeg.avfilter_graph_alloc();
            if (filterGraph == null)
            {
            }
    
            return filterGraph;
        }
    
        private AVFilter* GetFilterByName(string name)
        {
            AVFilter* filter = ffmpeg.avfilter_get_by_name(name);
            if (filter == null)
            {
            }
    
            return filter;
        }
    
        private void SetFilterGraphConfiguration(AVFilterGraph* filterGraph, void* logContext)
        {
            int filterGraphConfigResult = ffmpeg.avfilter_graph_config(filterGraph, logContext);
            if (filterGraphConfigResult < 0)
            {
            }
        }
    
        private AVFilterContext* CreateFilter(string filterName, AVFilterGraph* filterGraph, string filterArguments = null)
        {
            AVFilter* filter = GetFilterByName(filterName);
            AVFilterContext* filterContext;
    
            int aBufferFilterCreateResult = ffmpeg.avfilter_graph_create_filter(&filterContext, filter, filterName, filterArguments, null, filterGraph);
            if (aBufferFilterCreateResult < 0)
            {
            }
    
            return filterContext;
        }
    
        private void LinkFilter(AVFilterContext* source, AVFilterContext* destination)
        {
            int filterLinkResult = ffmpeg.avfilter_link(source, 0, destination, 0);
            if (filterLinkResult < 0)
            {
            }
        }
    
        #endregion
    }
    

    The Adjust() function is called after a frame is decoded. I'm currently getting a -22 (EINVAL) error when av_buffersrc_add_frame() is called, which indicates that a parameter is invalid, but after debugging I cannot see anything that would cause this.

    This is how the code is called:

    while (ffmpeg.av_read_frame(formatContext, &packet) >= 0)
    {
        if (packet.stream_index == streamIndex)
        {
            while (packet.size > 0)
            {
                int frameDecoded;
                int bytesDecoded = ffmpeg.avcodec_decode_audio4(codecContext, frame, &frameDecoded, packet);

                if (bytesDecoded < 0)
                {
                    break; // decoding error; drop the rest of this packet
                }

                if (frameDecoded > 0)
                {
                    AVFrame* filteredFrame = m_filterVolume.Adjust(frame);

                    //writeAudio.WriteFrame(filteredFrame);
                }

                // avcodec_decode_audio4() returns the number of bytes consumed
                packet.data += bytesDecoded;
                packet.size -= bytesDecoded;
            }

            frameIndex++;
        }

        Avcodec.av_free_packet(&packet);
    }
    

    Update 2:

    Cracked it. The "channel_layout" option in the filter argument string is supposed to be a hexadecimal value. This is what the string formatting should look like:

    string aBufferFilterArguments = string.Format("sample_fmt={0}:channel_layout=0x{1}:sample_rate={2}:time_base={3}/{4}",
        (int)CodecContext->sample_fmt,
        CodecContext->channel_layout,
        CodecContext->sample_rate,
        Stream->time_base.num,
        Stream->time_base.den);
    
  • Xuggler can't open IContainer of icecast server [Webm live video stream]

    9 June 2016, by Roy Bean

    I'm trying to stream live webm video.

    I tested several servers, and Icecast is my pick.

    With ffmpeg capturing from an IP camera and publishing to the Icecast server, I'm able to see the video in an HTML5 player

    using this command:

    ffmpeg.exe -rtsp_transport tcp -i "rtsp://192.168.230.121/profile?token=media_profile1&SessionTimeout=60" -f webm -r 20 -c:v libvpx -b:v 3M -s 300x200 -acodec none -content_type video/webm -crf 63 -g 0 icecast://source:hackme@192.168.0.146:8001/test

    I'm using Java and tried to do this with Xuggler, but I'm getting an error when opening the stream:

    final String urlOut = "icecast://source:hackme@192.168.0.146:8001/agora.webm";
    final IContainer outContainer = IContainer.make();

    final IContainerFormat outContainerFormat = IContainerFormat.make();
    outContainerFormat.setOutputFormat("webm", urlOut, "video/webm");

    int rc = outContainer.open(urlOut, IContainer.Type.WRITE, outContainerFormat);

    if (rc >= 0) {
    } else {
        Logger.getLogger(WebmPublisher.class.getName()).log(Level.INFO, "Fail to open Container " + IError.make(rc));
    }
    

    Any help? I'm getting the error -2: Error: could not open file (../../../../../../../csrc/com/xuggle/xuggler/Container.cpp:544)

    It is also very important to set the content type to video/webm, because Icecast sets the MIME type to audio/mpeg by default.

  • What is the output of rawvideo in rgb24 pixel format in an mpegts container?

    9 June 2016, by Matt

    I'm trying to read bitmap info from a video stream; however, the data is not what I expect. Here is the ffmpeg command I'm using to generate the video (it's basically a screen capture):

     ffmpeg -video_size 1920x1080 -framerate 20 -f x11grab -i :0.0 -c:v rawvideo -f mpegts -pixel_format rgb24 capture.raw
    

    Here is a snippet of the data:

    byte hex binary
    00   47  01000111
    01   40  01000000
    02   11  00010001
    03   10  00010000
    04   ff  11111111
    05   7f  01111111
    06   00  00000000
    07   00  00000000
    08   58  01011000
    09   7f  01111111
    10   7a  01111010
    11   02  00000010
    12   00  00000000
    13   00  00000000
    14   00  00000000
    15   00  00000000
    16   04  00000100
    17   00  00000000
    18   00  00000000
    19   00  00000000
    

    The first 4 bytes are just as I expected (the mpegts header), but the payload is not. Is there some other packet spec I am missing or something else?
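    For what it's worth, those bytes can be decoded by hand. In MPEG-TS, every 188-byte packet starts with a 4-byte header (sync byte 0x47, flags, and a 13-bit PID), and the payload that follows is not raw video: elementary streams are wrapped in PES packets, and some PIDs carry tables rather than media. A sketch parsing the four header bytes from the snippet:

```python
def parse_ts_header(b):
    """Decode the 4-byte MPEG-TS packet header (ISO 13818-1)."""
    assert b[0] == 0x47, "lost sync: first byte must be 0x47"
    pusi = bool(b[1] & 0x40)              # payload_unit_start_indicator
    pid = ((b[1] & 0x1F) << 8) | b[2]     # 13-bit packet identifier
    adaptation = (b[3] >> 4) & 0x3        # 01 = payload only, no adaptation field
    continuity = b[3] & 0x0F              # 4-bit continuity counter
    return {"pusi": pusi, "pid": pid,
            "adaptation": adaptation, "continuity": continuity}

hdr = parse_ts_header(bytes([0x47, 0x40, 0x11, 0x10]))
print(hdr)  # pid = 0x0011, pusi = True, payload only, counter 0
```

    PID 0x0011 is the DVB service description table (SDT), so this first packet isn't video at all; the RGB data sits inside PES packets on a different PID further into the stream, which is why the bytes after the header don't look like pixels.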

  • FFMPEG converted video from .AVI to .MP4 is not working in Chrome but works in Firefox and IE

    9 June 2016, by Tarun P

    I have a Windows application that converts video from *.wmv to *.avi using FFMPEG, but .avi videos are not supported by the HTML video tag, so I converted the .avi videos to .mp4 with the same FFMPEG tool to play them in the browser. The videos now work fine in Firefox and IE, but not in Chrome.

    If anybody has any suggestions, please share. Thanks.
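    One common cause (an assumption here, since the exact encoding settings aren't shown) is that Chrome is pickier about the MP4's codec and pixel format than Firefox or IE: H.264 video in yuv420p with AAC audio, and the moov atom moved to the front of the file, is the safest combination. A sketch building such a re-encode command; the file names are hypothetical:

```python
def chrome_safe_mp4_command(src, dst):
    """Build an ffmpeg command for a broadly compatible MP4:
    H.264 video in yuv420p, AAC audio, moov atom up front."""
    return ["ffmpeg", "-i", src,
            "-c:v", "libx264", "-pix_fmt", "yuv420p",
            "-c:a", "aac",
            "-movflags", "+faststart",  # moov first, so playback can start early
            dst]

cmd = chrome_safe_mp4_command("input.avi", "output.mp4")
print(" ".join(cmd))
```

    Checking the failing file with `ffprobe` first would confirm whether the video stream is something other than H.264/yuv420p.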