Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • FFmpeg unable to initialize swsContext for H264 frame data received through RTP payload

    26 March 2015, by Praveen

    I am trying to decode H264 video frames received through RTSP streaming.

    I followed this post: How to process raw UDP packets so that they can be decoded by a decoder filter in a directshow source filter

    I was able to identify the start and end of a frame in the RTP packets and reconstruct my video frame.

    But I didn't receive any SPS/PPS data from my RTSP session. I looked for the string "sprop-parameter-sets" in my SDP (Session Description Protocol) and there was none.

    Reconstructing Video Frame from RTP Packets:

    The payload of the first RTP packet goes like this: "1c 80 00 00 01 61 9a 03 03 6a 59 ff 97 e0 a9 f6"

    This says that it is fragmented data ("1C") and the start of the frame ("80"). I copied the rest of the payload data (except the first 2 bytes "1C 80").

    The following RTP packets have a payload starting with "1C 00", which is a continuation of the frame data. For all of those packets I kept appending the payload data (except the first 2 bytes "1C 00") to the byte buffer.

    When I get the RTP packet whose payload starts with "1C 40", which marks the end of the frame, I copied the rest of that packet's payload data (except the first 2 bytes "1C 40") into the byte buffer.

    Thus I reconstructed the video frame in the byte buffer.

    Then I prepended 4 bytes [0x00, 0x00, 0x00, 0x01] to the byte buffer before sending it to the decoder, because I didn't receive any SPS/PPS NAL bytes.

    When I send this byte buffer to the decoder, the decoder fails when it tries to initialize the swsContext.

    Am I sending the NAL bytes and video frame data correctly?
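
    For comparison, below is a minimal sketch in C of the FU-A reassembly described above, following RFC 6184: the 0x1C byte is treated as the FU indicator and the 0x80/0x00/0x40 byte as the FU header. The helper name and the lack of error handling are illustrative only; note that the standard procedure rebuilds the first byte of each NAL unit from both FU bytes instead of dropping them.

    #include <stdint.h>
    #include <string.h>

    /* Sketch of FU-A (NAL type 28) reassembly per RFC 6184.
     * payload/len are the payload of one RTP packet; out is the growing
     * Annex B byte buffer. Returns the new write position. */
    static size_t append_fu_a(const uint8_t *payload, size_t len,
                              uint8_t *out, size_t out_pos)
    {
        uint8_t fu_indicator = payload[0];   /* e.g. 0x1C: F/NRI bits + type 28   */
        uint8_t fu_header    = payload[1];   /* 0x80 start, 0x00 middle, 0x40 end */

        if (fu_header & 0x80) {              /* start of a fragmented NAL unit    */
            static const uint8_t start_code[4] = { 0x00, 0x00, 0x00, 0x01 };
            memcpy(out + out_pos, start_code, sizeof(start_code));
            out_pos += sizeof(start_code);
            /* Reconstructed NAL header: F/NRI bits from the FU indicator,
             * the real NAL type from the low 5 bits of the FU header. */
            out[out_pos++] = (uint8_t)((fu_indicator & 0xE0) | (fu_header & 0x1F));
        }
        memcpy(out + out_pos, payload + 2, len - 2);   /* fragment data */
        return out_pos + (len - 2);
    }

    Even with the reassembly in place, a swsContext initialization failure would be consistent with the decoder never seeing SPS/PPS (which carry the picture dimensions), but that is only a guess based on the description above.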

  • Sync voice with music through ffmpeg (live karaoke, FMS)

    26 March 2015, by LifeSoul

    Using the AMS (FMS) server.

    The user's voice is broadcast to the server. ffmpeg mixes the voice from RTMP with music from disk and sends the result to the server.

    The problem is that the voice and the music do not coincide in time.

    Is there a way to sync via ffmpeg?

    Example

    -re -i DISK:/path/music.mp3 -i rtmp://x.x.x.x/karaoke/voice -filter_complex amix=inputs=2:duration=first,volume=2.000000 -ar 22050 -q:a 2 -ac 2 -f flv rtmp://x.x.x.x/karaoke/stream
    

    The time difference is between 0.0 and 0.5 seconds (random).

  • How to limit the backward dependency between coded frames in ffmpeg/x264

    26 March 2015, by Bastian35022

    I am currently playing with ffmpeg + libx264, but I couldn't find a way to limit the backward dependency between coded frames.

    Let me explain what I mean: I want the coded frames to contain references to at most, let's say, 5 frames in the future. As a result, no frame has to "wait" for more than 5 frames to be coded (which makes sense for low-latency applications).

    I am aware of the -tune zerolatency option, but that's not what I want; I still want bidirectional prediction.
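
    As a hedged sketch only (not a confirmed answer): with ffmpeg's libx264 wrapper, the bitstream re-ordering delay is largely governed by the number of consecutive B-frames, so capping it, for example through AVCodecContext.max_b_frames or the encoder's private x264-params option, bounds how many later frames a frame can wait for while keeping bidirectional prediction. The exact option string below is an assumption and requires a build with libx264 enabled.

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* Cap the re-ordering dependency at 5 consecutive B-frames.
     * rc-lookahead/sync-lookahead only affect encoder-side latency,
     * not what the bitstream itself references. */
    static void configure_bounded_delay(AVCodecContext *ctx)
    {
        ctx->max_b_frames = 5;                         /* at most 5 B-frames in a row */
        av_opt_set(ctx->priv_data, "x264-params",
                   "bframes=5:rc-lookahead=5:sync-lookahead=0", 0);
    }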

  • Android JavaCV FFmpegFrameRecorder.stop() throws exception

    26 March 2015, by bakua

    I am using FFmpegFrameRecorder to capture preview frames from camera. I am using this setting:

    // 300x300 video with a single audio channel, written to mVideoPath
    mVideoRecorder = new FFmpegFrameRecorder(mVideoPath, 300, 300, 1);
    mVideoRecorder.setFormat("mp4");                         // MP4 container
    mVideoRecorder.setSampleRate(44100);                     // audio sample rate (Hz)
    mVideoRecorder.setFrameRate(30);                         // video frame rate (fps)
    mVideoRecorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);  // H.264 video
    mVideoRecorder.setAudioCodec(avcodec.AV_CODEC_ID_AAC);   // AAC audio
    mVideoRecorder.setVideoQuality(0);
    mVideoRecorder.setAudioQuality(0);
    mVideoRecorder.setVideoBitrate(768000);                  // 768 kbit/s video
    mVideoRecorder.setAudioBitrate(128000);                  // 128 kbit/s audio
    mVideoRecorder.setGopSize(1);                            // keyframe every frame
    

    After I have finished capturing all frames by calling the .record(IplImage) method, I call mVideoRecorder.stop().

    But from time to time the stop() method throws:

    org.bytedeco.javacv.FrameRecorder$Exception: av_interleaved_write_frame() error -22 while writing interleaved video frame.
    at org.bytedeco.javacv.FFmpegFrameRecorder.record(FFmpegFrameRecorder.java:727)
    at org.bytedeco.javacv.FFmpegFrameRecorder.stop(FFmpegFrameRecorder.java:613)
    

    I have not seen any regularity in this behaviour, nor have I been able to find out what error -22 is. And after that, no ffmpeg call on the file at mVideoPath works (I guess the file is not even valid because of that error).

    I would really appreciate any help with this issue, thanks :)
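
    For what it's worth, libav/ffmpeg error codes are negative errno values; on Linux/Android, -22 is AVERROR(EINVAL), i.e. "Invalid argument", which av_interleaved_write_frame() commonly returns when packet data (for example its timestamps) is invalid. A tiny C snippet, assuming the usual libavutil headers, shows how to turn such a code into readable text:

    #include <stdio.h>
    #include <libavutil/error.h>

    /* Translate a libav error code such as -22 into a readable message. */
    int main(void)
    {
        char msg[AV_ERROR_MAX_STRING_SIZE];
        av_strerror(-22, msg, sizeof(msg));
        printf("error -22: %s\n", msg);      /* prints "Invalid argument" */
        return 0;
    }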

  • Distributed video decoding over a network

    26 March 2015, by tkcast

    I'm developing a videowall controller. I can use any technology or programming language needed, and I want to decode videos of arbitrarily high resolution on my videowall.

    One possible solution: split the ultra-high-resolution video into several slices using ffmpeg and have one computer decode each tile of the videowall separately. I'd use the network only to control playback.

    Another interesting solution: only a master computer would have the huge video, and it would control distributed decoding over the network. Is that even possible? How?

    Thanks!