Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • remuxing audio and videos (screen and presenter) captured at the same time does not synchronize

    22 September 2014, by user28163

    I am trying to merge a screencast, a video (without sound), and a sound stream, each captured separately with ffmpeg via a bash command. All the stream captures were started at the same time, and all ffmpeg processes were killed at the same time (pkill). But when I remux them together, the screencast and the video do not match, so the sound does not synchronize either.

    Where did I go wrong? Any input from the ffmpeg experts here is appreciated. Thanks in advance.

    Please find the ffmpeg output as follows:

    1. The ffmpeg log of muxing the two videos (http://pastebin.com/XwnDSf5i)
    2. The ffmpeg log of remuxing the sound with the side-by-side video from 1 above (cannot paste, as the pastebin limit was exceeded :( ).

    UPDATE:

    After checking the length of the screencast, I figured out that the screen capture (though started and stopped at the same time as the video and sound using a bash script) is shorter than the video and audio by 1m54s (34:22 vs 36:16). The video was captured as h264 in an mp4 wrapper at -r 30, and so was the screen capture, but lossless.

    %ffmpeg -report -f x11grab -r 30 -s 1920x1080 -i :0.0 -qscale 0 -vcodec libx264 -threads 4 screen.m4v

    Could that be the reason for the delay? Is there any way to stretch the screencast to match the videos? Thanks!

  • How to change the color scheme of a frame extracted via the FFMPEG library in C++?

    22 September 2014, by user2742299

    I have been able to decode the video file using FFMPEG and to save the frames in *.ppm files.

    I would like to know how I can change the color of each frame after decoding and before saving it.

    For example: if I have a colored frame which I decoded and extracted from the video file, I would now like to change it to black and white frame and then save it to the file.

    I am quite new to the FFMPEG library, and there is little documentation available for developers. Therefore any help would be highly appreciated.

    Thanks

  • Remux video file and get the output bytes on the fly

    22 September 2014, by Miguel Botón

    I have been looking for a solution for many hours, and maybe I didn't use the proper keywords to find it, but I give up. I hope somebody can help me with this.

    My goal is to create a small piece of C code that remuxes a video file into another container (mp4 or whatever; not decided yet) and to create a JNI interface to read the raw output bytes in my Java application.

    Creating the JNI interface is not a problem, and remuxing the video is pretty simple (there is a great example at http://ffmpeg.org/doxygen/trunk/doc_2examples_2remuxing_8c-example.html), but I'm having a headache trying to figure out how I can tell FFmpeg to deliver the data when my application requests it.

    I was thinking of creating a thread where the remuxing would be done, writing the output data to a buffer from which my application would read. When the buffer is full, the remuxing would pause until the buffer is flushed, but I don't know how to achieve that, or whether it would be a proper solution.

    I believe the best solution will involve creating an AVIOContext but I'm not very sure.

    Thank you very much in advance.

    EDIT:

    A solution I found, but don't like at all, is to run the ffmpeg executable, tell it to write its output to stdout, and read that output from my Java application using an InputStream.

  • Configure ffmpeg for Android on Windows 7 or Ubuntu

    22 September 2014, by user2659308

    Can anyone help me with how to configure ffmpeg with Eclipse for Android, on Windows 7 or on Ubuntu?

    I have tried many times with NDK r10b, but I could not get it configured.

    Can anyone help me as soon as possible?

  • FFmpeg Negative Duration [migrated]

    22 September 2014, by filby

    I have a large number of small videos (parts of a larger one) of the form:

    Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf55.43.100
    Duration: 00:00:05.53, start: -1.831497, bitrate: 3355 kb/s
    Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 4760 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 157 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
    

    I want to completely discard the negative start (the 1.83 seconds) and keep only the 3.67 seconds on the positive 'axis'.

    I cut the videos with the correct duration; however, I believe some audio syncing I did in the original resulted in a 5.53-second video, of which the first 1.83 seconds are audio only and the last 3.67 seconds are the ones I want to keep (and the ones I did cut).

    In a video player I can watch the 3.67 seconds I want; however, if I extract the audio, I get an extra 1.83 seconds of audio at the start, from the original video (from which I cut the parts), which I didn't intend to include. What can I do?

    P.S. An example of the command I used to cut the part:

    ffmpeg -ss 818 -i input -t 3.67 -codec:v libx264 -profile:v high -preset slow -b:v 5000k -maxrate 5000k -threads 0 -codec:a copy output