Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Android Studio : Array of Bitmaps into Video with ffmpeg

    14 February 2020, by Steve Jobs Kappa

    I'm facing the following problem: I have an array of 100 Bitmaps, screenshots I took from a view. I used JCodec to turn them into a video, but it's way too slow. I'm hoping to get better results with FFmpeg.

    Now I want to use the FFmpeg library. Similar questions have been asked, but I have no idea how to use ffmpeg, or how to apply it to my specific case. All I see are weird, complex commands. See:

     File dir = ...; // the directory where the images are stored
     String filePrefix = "picture"; // image file name prefix
     String fileExtn = ".jpg"; // image file extension
     filePath = dir.getAbsolutePath();
     // Image names must be picture001.jpg, picture002.jpg, ... so that
     // ffmpeg accepts the %03d pattern as valid input.
     File src = new File(dir, filePrefix + "%03d" + fileExtn);
    

    complexCommand = new String[]{"-i", src + "", "-c:v", "libx264", "-c:a", "aac", "-vf", "setpts=2*PTS", "-pix_fmt", "yuv420p", "-crf", "10", "-r", "15", "-shortest", "-y", "/storage/emulated/0/" + app_name + "/Video/" + app_name + "_Video" + number + ".mp4"};

    The problem is that this example uses a file path, whereas I need the input to come from an array. And I have no idea what to do with that command string (complexCommand). My bitmaps look like this:

    Bitmap[] bitmaps = new Bitmap[100];

    This is filled later on.
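    One way around writing 100 temporary JPEGs just so ffmpeg can read them back is to pipe raw pixel data straight into ffmpeg's stdin as rawvideo; on Android the equivalent would be writing each Bitmap's copyPixelsToBuffer() output to the ffmpeg process's input stream. A hedged sketch of building that command, in Python for illustration (the 320x240 size, RGBA pixel format, frame rate, and output path are all assumptions, not values from the question):

```python
WIDTH, HEIGHT, FPS = 320, 240, 25

# One synthetic opaque red RGBA frame; on Android this would be the
# Bitmap's pixel buffer obtained via copyPixelsToBuffer().
frame = bytes([255, 0, 0, 255]) * (WIDTH * HEIGHT)

# ffmpeg can read raw frames from stdin ("-i -"), so no temp files are needed.
cmd = ["ffmpeg",
       "-f", "rawvideo", "-pix_fmt", "rgba",
       "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS), "-i", "-",
       "-c:v", "libx264", "-pix_fmt", "yuv420p", "-y", "out.mp4"]

# import subprocess
# proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
# for f in frames: proc.stdin.write(f)   # one write per Bitmap
# proc.stdin.close(); proc.wait()

assert len(frame) == WIDTH * HEIGHT * 4  # rawvideo needs exact frame sizes
```

    The key constraint is that every frame written to the pipe must be exactly width * height * bytes-per-pixel long, since rawvideo has no framing of its own.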

    If anyone is searching for how to do this with JCodec:

     FileChannelWrapper out = null;
     try {
         out = NIOUtils.writableFileChannel(
                 Environment.getExternalStorageDirectory().getAbsolutePath()
                         + "/***yourpath***/output" + System.currentTimeMillis() + ".mp4");
         // For Android, use AndroidSequenceEncoder
         AndroidSequenceEncoder encoder = new AndroidSequenceEncoder(out, Rational.R(25, 1));
         for (int i = 0; i < 100; i++) {
             // Encode each Bitmap as one frame
             encoder.encodeImage(bitmaps[i]);
         }
         // Finalize the encoding, i.e. clear the buffers, write the header, etc.
         encoder.finish();
     } catch (IOException e) {
         e.printStackTrace();
     } finally {
         NIOUtils.closeQuietly(out);
     }
    
  • How to write a video stream containing B-frames and no DTS to an MP4 container?

    14 February 2020, by SteveH

    I want to save an H.264 video stream received from an RTSP source to an MP4 container. Unlike other questions asked on SO, the challenges I face here are:

    • The stream contains B frames.

    • The stream has only PTS given by the RTP/RTCP.

    Here is the code I have:

    //  ffmpeg
        pkt->data = ..;
        pkt->size = ..;
    pkt->flags = bKeyFrame ? AV_PKT_FLAG_KEY : 0;
        pkt->dts = AV_NOPTS_VALUE;
        pkt->pts = PTS;
    
        // PTS is based on epoch microseconds so I ignored re-scaling.
        //av_packet_rescale_ts(pkt, { 1, AV_TIME_BASE }, muxTimebase);
    
        auto ret = av_interleaved_write_frame(m_pAVFormatCtx, pkt);
    

    I received a lot of error messages like this: "Application provided invalid, non monotonically increasing dts to muxer ...".

    Result: the MP4 file is playable in VLC, but the FPS is just half of the original and the duration is wrong (VLC shows a weird number).

    So how do I set a correct DTS and PTS before sending each packet to the muxer?

    Update: I have tried some changes, though without success so far. I found that the frame-rate drop happens because the muxer discards frames with an incorrect DTS. Additionally, if I set the initial PTS and DTS values too high, some players such as VLC delay for a while before showing the video.
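    One common way to synthesize a monotonic DTS when only PTS is known is to delay the timestamps by the stream's reorder depth: in decode order, the i-th packet receives the i-th smallest PTS, shifted back so that DTS never exceeds PTS. A minimal sketch of just that logic, in Python for illustration (the libav calls are omitted; in a live stream this requires buffering reorder_delay + 1 packets before writing, and the delay of 1 frame below is an assumption you would instead derive from the SPS reorder parameters):

```python
def synthesize_dts(pts_decode_order, reorder_delay, frame_duration):
    """Assign a monotonic DTS to packets whose PTS is known in decode order.

    The i-th packet in decode order receives the i-th smallest PTS,
    shifted back by reorder_delay frame durations so DTS <= PTS holds.
    """
    sorted_pts = sorted(pts_decode_order)
    return [p - reorder_delay * frame_duration for p in sorted_pts]

# Decode order I(0) P(3) B(1) B(2) P(6) B(4) B(5); numbers are PTS,
# with one frame of reordering (non-referenced B-frames).
pts = [0, 3, 1, 2, 6, 4, 5]
dts = synthesize_dts(pts, reorder_delay=1, frame_duration=1)
assert dts == sorted(dts)                      # monotonically increasing
assert all(d <= p for d, p in zip(dts, pts))   # DTS never exceeds PTS
```

    With timestamps built this way, av_interleaved_write_frame should no longer see non-monotonic DTS values, at the cost of a small fixed latency.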

  • Using FFMPEG To Fill in Gaps of a Raw Audio UDP Stream

    14 February 2020, by Wallace

    I have a software defined radio (SDR) that picks up audio from emergency services and with the help of software, streams raw audio using UDP. The audio is PCM signed 16-bit little-endian. The UDP stream is also not constant and only has data when audio is detected.

    The problem I'm trying to solve is that I would like the gaps in the recorded audio to be filled with silence (null audio). Below are a couple of my attempts at resolving this:

    1. ffmpeg -f s16le -ar 8000 -i udp://127.0.0.1:23456 -af aresample=async=1 -acodec libmp3lame -f rtp rtp://127.0.0.1:1234
    2. ffmpeg -re -f lavfi -i anullsrc -f s16le -ar 8000 -i udp://127.0.0.1:23456 -filter_complex amix=inputs=2:duration=first -acodec libmp3lame -f rtp rtp://127.0.0.1:1234

    I guess my question is: what is the best way to resolve this, and can ffmpeg be used to accomplish it?
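    For reference, aresample=async=1 does something close to the first attempt's intent: it compares each chunk's timestamp against the running sample clock and injects silence where input is missing. The padding logic looks roughly like this Python sketch (the chunk timestamps are assumed to come from the SDR software; the real filter operates on ffmpeg's internal timestamps, not wall-clock floats):

```python
SAMPLE_RATE = 8000
BYTES_PER_SAMPLE = 2  # PCM s16le, mono

def fill_gaps(chunks):
    """Pad gaps between timestamped PCM chunks with silence.

    chunks is a list of (start_time_seconds, pcm_bytes) in order;
    returns one continuous byte string with zeroed (silent) gaps.
    """
    out = bytearray()
    clock = chunks[0][0] if chunks else 0.0
    for start, pcm in chunks:
        gap = start - clock
        if gap > 0:  # insert silence covering the missing interval
            out += b"\x00" * (int(gap * SAMPLE_RATE) * BYTES_PER_SAMPLE)
        out += pcm
        clock = start + len(pcm) / (SAMPLE_RATE * BYTES_PER_SAMPLE)
    return bytes(out)

# Two 1-second bursts separated by a 2-second quiet period.
one_sec = b"\x01\x00" * SAMPLE_RATE
audio = fill_gaps([(0.0, one_sec), (3.0, one_sec)])
assert len(audio) == 4 * SAMPLE_RATE * BYTES_PER_SAMPLE  # 4 s continuous
```

    The catch with the UDP input in the question is that ffmpeg never sees timestamps for the gaps at all, which is why async resampling alone may not be enough without something (like -use_wallclock_as_timestamps or an anullsrc mix) supplying a continuous clock.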

  • Cannot find x264 codec device for FFmpeg transcoding

    14 February 2020, by Tom Ato

    I am messing around with an FFmpeg transcoding tutorial (source: https://ffmpeg.org/doxygen/trunk/transcoding_8c-example.html)

    I am using FFmpeg 4.0.5 and built it successfully as follows:

    $ sudo apt-get install libx264-dev
    $ cd ffmpeg-4.0.5
    $ ./configure --prefix=buildout --enable-shared --disable-static --disable-doc  --enable-gpl --disable-opencl --enable-libx264
    $ make
    $ sudo make install
    $ sudo ldconfig
    

    I have been looking at an article as well for guidance. (http://www.programmersought.com/article/71051173025/;jsessionid=2D01469BCFABF65530FCC81DBC04E9C0)

    The transcoding.c source file does compile:

    $ gcc transcoding.c -o out -lavformat -lavcodec -lavutil -lavfilter
    

    Calling the executable is straightforward:

    $ ./out $INPUT_VIDEO $OUTPUT_VIDEO
    

    where $INPUT_VIDEO is an mp4 container (h264/aac)

    Up until this point everything appears to work on my Debian 10 VM (if that is relevant). However, I get the following error output:

    [h264_v4l2m2m @ 0x564f664eebc0] Could not find a valid device
    [h264_v4l2m2m @ 0x564f664eebc0] can't configure encoder
    Cannot open video encoder for stream #0
    Error occurred: Invalid argument
    

    The article linked above says I need to enable libx264 when configuring FFmpeg, so I changed the ./configure line as shown above, but to no avail. Any help or guidance would be appreciated.

  • FFmpeg Concat Filter High Memory Usage

    14 February 2020, by user2248702

    I'm using FFmpeg to join many short clips into a single long video using the concat filter. FFmpeg seems to load all clips into memory at once and quickly runs out of RAM (for 100 clips it uses over 32 GB). Is there a way to limit the memory used by the concat filter?

    The command I would use for 3 inputs is as follows:

    ffmpeg -i 0.mp4 -i 1.mp4 -i 2.mp4 -filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1" out.mp4
    

    It seems to use around 200 MB per additional input, which quickly exhausts my system's memory.
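    One workaround, assuming all clips share identical codec parameters, is the concat demuxer: it opens inputs one at a time instead of building a full decode pipeline per input, which is likely where the ~200 MB per clip goes. A hypothetical Python sketch that writes the list file and builds the command (with -c copy nothing is re-encoded, so memory stays flat regardless of clip count):

```python
clips = ["0.mp4", "1.mp4", "2.mp4"]  # the inputs from the question

# The concat demuxer reads a text file listing the inputs and streams
# them sequentially, so only one input file is open at any moment.
with open("list.txt", "w") as f:
    for clip in clips:
        f.write(f"file '{clip}'\n")

cmd = ["ffmpeg", "-f", "concat", "-safe", "0",
       "-i", "list.txt", "-c", "copy", "out.mp4"]

# import subprocess
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```

    If the clips have differing codecs or resolutions this shortcut does not apply, and re-encoding (or first normalizing each clip) is still required.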