Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • FFmpeg player backporting to Android 2.1 - one more problem

    17 April, by tretdm

    I dug through a lot of information about building and using FFmpeg on early versions of Android, studied the source code of players from 2011-2014, and was able to build FFmpeg 4.0.4 and 3.1.4 on the NDKv5 platform without much trouble. The main points I took away:

    • <android/bitmap.h> and <android/native_window.h> did not exist before Android 2.2 (API level 8)
    • buffer management for the A/V streams takes real effort to implement: in practice, when playing video, the application silently crashed after a few seconds due to an overflow (code example in C++ and Java below)
    • FFmpeg is, IMHO, the only way to support a sufficient number of codecs that are not officially included in Android 2.1 and later
    void decodeVideoFromPacket(JNIEnv *env, jobject instance,
                               jclass mplayer_class, AVPacket avpkt, 
                               int total_frames, int length) {
        AVFrame     *pFrame = NULL;
        AVFrame     *pFrameRGB = NULL;
        pFrame = avcodec_alloc_frame();
        pFrameRGB = avcodec_alloc_frame();
        int frame_size = avpicture_get_size(PIX_FMT_RGB32, gVideoCodecCtx->width, gVideoCodecCtx->height);
        unsigned char* buffer = (unsigned char*)av_malloc((size_t)frame_size * 3);
        if (!buffer) {
            av_free(pFrame);
            av_free(pFrameRGB);
            return;
        }
        jbyteArray buffer2;
        jmethodID renderVideoFrames = env->GetMethodID(mplayer_class, "renderVideoFrames", "([BI)V");
        int frameDecoded;
        avpicture_fill((AVPicture*) pFrame,
                       buffer,
                       gVideoCodecCtx->pix_fmt,
                       gVideoCodecCtx->width,
                       gVideoCodecCtx->height
                      );
    
        if (avpkt.stream_index == gVideoStreamIndex) { // If video stream found
            int size = avpkt.size;
            total_frames++;
            struct SwsContext *img_convert_ctx = NULL;
            avcodec_decode_video2(gVideoCodecCtx, pFrame, &frameDecoded, &avpkt);
            if (!frameDecoded || pFrame == NULL) {
                return;
            }
    
            try {
                PixelFormat pxf;
                // RGB565 by default for Android Canvas in pre-Gingerbread devices.
                if(android::get_android_api_version(env) >= ANDROID_API_CODENAME_GINGERBREAD) {
                    pxf = PIX_FMT_BGR32;
                } else {
                    pxf = PIX_FMT_RGB565;
                }
    
                int rgbBytes = avpicture_get_size(pxf, gVideoCodecCtx->width,
                                                gVideoCodecCtx->height);
    
                // Converting YUV to RGB frame & RGB frame to char* buffer 
                
                buffer = convertYuv2Rgb(pxf, pFrame, rgbBytes); // result of av_image_copy_to_buffer()
    
                if(buffer == NULL) {
                    return;
                }
    
                buffer2 = env->NewByteArray((jsize) rgbBytes);
                env->SetByteArrayRegion(buffer2, 0, (jsize) rgbBytes,
                                        (jbyte *) buffer);
                env->CallVoidMethod(instance, renderVideoFrames, buffer2, rgbBytes);
                env->DeleteLocalRef(buffer2);
                free(buffer);
            } catch (...) {
                if (debug_mode) {
                    LOGE(10, "[ERROR] Render video frames failed");
                    return;
                }
            }
        }
    }
    
    private void renderVideoFrames(final byte[] buffer, final int length) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    Canvas c;
                    VideoTrack track = null;
                    for (int tracks_index = 0; tracks_index < tracks.size(); tracks_index++) {
                        if (tracks.get(tracks_index) instanceof VideoTrack) {
                            track = (VideoTrack) tracks.get(tracks_index);
                        }
                    }
                    if (track != null) {
                        int frame_width = track.frame_size[0];
                        int frame_height = track.frame_size[1];
                        if (frame_width > 0 && frame_height > 0) {
                            try {
                                // RGB_565   == 65K colours (16 bit)
                                // ARGB_8888 == 16.7M colours (32 bit: 24-bit colour + alpha ch.)
                                int bpp = Build.VERSION.SDK_INT > 9 ? 32 : 16;
                                Bitmap.Config bmp_config =
                                        bpp == 32 ? Bitmap.Config.ARGB_8888 : Bitmap.Config.RGB_565;
                                Paint paint = new Paint();
                                if(buffer != null && holder != null) {
                                    holder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
                                    if((c = holder.lockCanvas()) == null) {
                                        Log.d(MPLAY_TAG, "Lock canvas failed");
                                        return;
                                    }
                                    ByteBuffer bbuf =
                                            ByteBuffer.allocateDirect(minVideoBufferSize);
                                    bbuf.rewind();
                                    for(int i = 0; i < buffer.length; i++) {
                                        bbuf.put(i, buffer[i]);
                                    }
                                    bbuf.rewind();
    
                                    // The approximate location where the application crashed.
                                    Bitmap bmp = Bitmap.createBitmap(frame_width, frame_height, bmp_config);
                                    bmp.copyPixelsFromBuffer(bbuf);
                                    
                                    float aspect_ratio = (float) frame_width / (float) frame_height;
                                    int scaled_width = (int)(aspect_ratio * (c.getHeight()));
                                    c.drawBitmap(bmp,
                                            null,
                                            new RectF(
                                                    ((c.getWidth() - scaled_width) / 2), 0,
                                                    ((c.getWidth() - scaled_width) / 2) + scaled_width,
                                                    c.getHeight()),
                                            null);
                                    holder.unlockCanvasAndPost(c);
                                    bmp.recycle();
                                    bbuf.clear();
                                } else {
                                    Log.d(MPLAY_TAG, "Video frame buffer is null");
                                }
                            } catch (Exception ex) {
                                ex.printStackTrace();
                            } catch (OutOfMemoryError oom) {
                                oom.printStackTrace();
                                stop();
                            }
                        }
                    }
                }
            }).start();
        }
    

    Exception (tested in Android 4.1.2 emulator):

    E/dalvikvm-heap: Out of memory on a 1228812-byte allocation
    I/dalvikvm: "Thread-495" prio=5 tid=21 RUNNABLE
       ................................................
         at android.graphics.Bitmap.nativeCreate(Native Method)
         at android.graphics.Bitmap.createBitmap(Bitmap.java:640)
         at android.graphics.Bitmap.createBitmap(Bitmap.java:620)
         at [app_package_name].MediaPlayer$5.run(MediaPlayer.java:406)
         at java.lang.Thread.run(Thread.java:856)
    

    For clarification: I first compiled FFmpeg 0.11.x on an Ubuntu 12.04 LTS virtual machine using a build script I had written, then looked for player examples suitable for Android below 2.2 (there is unfortunately little information about them). When I opened a file in the player, it crashed with a stack or buffer overflow after showing the first few frames, so I put off developing the player for some time.
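
    One pattern worth noting in the C++ snippet above: pFrame, pFrameRGB and buffer are allocated on every packet and never freed on the success path (the av_malloc'ed buffer is even overwritten by the convertYuv2Rgb() result before free() is called), so native memory grows with every decoded frame. Below is a minimal sketch of allocating once and reusing across packets, using the same deprecated FFmpeg calls as the snippet (gVideoCodecCtx comes from the snippet; the init/teardown function names are hypothetical):

        // Allocated once at startup, reused for every packet.
        static AVFrame *gFrame = NULL;
        static unsigned char *gRgbBuffer = NULL;

        int initVideoBuffers() {
            gFrame = avcodec_alloc_frame();   // av_frame_alloc() in newer FFmpeg
            int size = avpicture_get_size(PIX_FMT_RGB32,
                                          gVideoCodecCtx->width,
                                          gVideoCodecCtx->height);
            gRgbBuffer = (unsigned char *) av_malloc((size_t) size);
            return (gFrame && gRgbBuffer) ? 0 : -1;
        }

        void freeVideoBuffers() {
            av_free(gFrame);      // cleanup for the pre-av_frame_free() API era
            av_free(gRgbBuffer);
            gFrame = NULL;
            gRgbBuffer = NULL;
        }

    With this shape, the per-packet function only decodes into gFrame and converts into gRgbBuffer; nothing is allocated per frame, so a long-running stream cannot exhaust the heap from the native side.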

    Is there anything ready-made, ideally fitting into a single C++ file, that takes into account all the nuances of backporting? Thanks in advance.

  • FFmpeg C++ API: Using HW acceleration (VAAPI) to transcode video coming from a webcam

    17 April, by nicoh

    I'm currently trying to use HW acceleration with the FFmpeg C++ API in order to transcode the video coming from a webcam (which may vary from one config to another) into a given output format (e.g. converting the video stream coming from the webcam from MJPEG to H264 so that it can be written into an MP4 file).

    I have already managed to do this by transferring the AVFrame output by the HW decoder from GPU to CPU and then transferring it back to the HW encoder input (so CPU to GPU). This is not optimal, and on top of that, for the config given above (MJPEG => H264) I cannot feed the decoder output straight to the encoder: the MJPEG HW decoder wants to output in RGBA pixel format while the H264 encoder wants NV12, so I have to perform the pixel format conversion on the CPU side.
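
    For reference, the GPU-to-CPU-to-GPU path described above can be sketched roughly as follows (a sketch only, with error handling omitted; hw_decoded is a stand-in name for the decoder output frame, and libswscale is assumed for the RGBA-to-NV12 conversion):

        // GPU -> CPU: download the decoded RGBA frame.
        AVFrame *sw_src = av_frame_alloc();
        av_hwframe_transfer_data(sw_src, hw_decoded, 0);

        // CPU: convert RGBA -> NV12 with libswscale.
        AVFrame *sw_dst = av_frame_alloc();
        sw_dst->format = AV_PIX_FMT_NV12;
        sw_dst->width  = sw_src->width;
        sw_dst->height = sw_src->height;
        av_frame_get_buffer(sw_dst, 0);
        SwsContext *sws = sws_getContext(sw_src->width, sw_src->height,
                                         (AVPixelFormat) sw_src->format,
                                         sw_dst->width, sw_dst->height, AV_PIX_FMT_NV12,
                                         SWS_BILINEAR, NULL, NULL, NULL);
        sws_scale(sws, sw_src->data, sw_src->linesize, 0, sw_src->height,
                  sw_dst->data, sw_dst->linesize);

        // CPU -> GPU: upload into the encoder's surface pool.
        AVFrame *hw_out = av_frame_alloc();
        av_hwframe_get_buffer(encoder_ctx->hw_frames_ctx, hw_out, 0);
        av_hwframe_transfer_data(hw_out, sw_dst, 0);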

    That's why I would like to connect the output of the HW video decoder directly to the input of the HW encoder (inside the GPU). To do this, I followed this example given by FFmpeg : https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/vaapi_transcode.c.

    This works fine when transcoding an AVI file containing MJPEG to H264, but it fails when the input is an MJPEG stream coming from a webcam. In this case, the encoder says:

    [h264_vaapi @ 0x5555555e5140] No usable encoding profile found.
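
    One possible explanation (an assumption, not something verified here): the decoder's hw_frames_ctx that later gets handed to the encoder was negotiated, in the file case, with a sw_format the H264 encoder can accept, whereas the webcam MJPEG decode ends up with an RGB-like sw_format for which h264_vaapi advertises no encoding profile. A sketch of giving the encoder its own NV12 frames context instead of ref'ing the decoder's (this would replace the av_buffer_ref(pDecCtx->hw_frames_ctx) step in the code below):

        AVBufferRef *enc_frames_ref = av_hwframe_ctx_alloc(hw_device_ctx);
        AVHWFramesContext *fc = reinterpret_cast<AVHWFramesContext *>(enc_frames_ref->data);
        fc->format    = AV_PIX_FMT_VAAPI;   // surfaces stay on the GPU
        fc->sw_format = AV_PIX_FMT_NV12;    // what h264_vaapi generally expects
        fc->width     = decoder_ctx->width;
        fc->height    = decoder_ctx->height;
        fc->initial_pool_size = 20;
        av_hwframe_ctx_init(enc_frames_ref);
        encoder_ctx->hw_frames_ctx = enc_frames_ref;

    The decoded surfaces would then still need a GPU-side format conversion into that pool (e.g. the scale_vaapi filter), so this is only one piece of the puzzle.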
    

    Below the code of the FFmpeg example I modified to connect on webcam instead of opening input file:

    /*
     * Permission is hereby granted, free of charge, to any person obtaining a copy
     * of this software and associated documentation files (the "Software"), to deal
     * in the Software without restriction, including without limitation the rights
     * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
     * copies of the Software, and to permit persons to whom the Software is
     * furnished to do so, subject to the following conditions:
     *
     * The above copyright notice and this permission notice shall be included in
     * all copies or substantial portions of the Software.
     *
     * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
     * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
     * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
     * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
     * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
     * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
     * THE SOFTWARE.
     */
    
    /**
     * @file Intel VAAPI-accelerated transcoding API usage example
     * @example vaapi_transcode.c
     *
     * Perform VAAPI-accelerated transcoding.
     * Usage: vaapi_transcode input_stream codec output_stream
     * e.g: - vaapi_transcode input.mp4 h264_vaapi output_h264.mp4
     *      - vaapi_transcode input.mp4 vp9_vaapi output_vp9.ivf
     */
    
    #include <iostream>
    #include <cstdio>
    #include <cstring>
    
    //#define USE_INPUT_FILE
    
    extern "C"{
    #include <libavutil/hwcontext.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavdevice/avdevice.h>
    }
    
    static AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    static AVBufferRef *hw_device_ctx = NULL;
    static AVCodecContext *decoder_ctx = NULL, *encoder_ctx = NULL;
    static int video_stream = -1;
    static AVStream *ost;
    static int initialized = 0;
    
    static enum AVPixelFormat get_vaapi_format(AVCodecContext *ctx,
                                               const enum AVPixelFormat *pix_fmts)
    {
        const enum AVPixelFormat *p;
    
        for (p = pix_fmts; *p != AV_PIX_FMT_NONE; p++) {
            if (*p == AV_PIX_FMT_VAAPI)
                return *p;
        }
    
        std::cout << "Unable to decode this file using VA-API." << std::endl;
        return AV_PIX_FMT_NONE;
    }
    
    static int open_input_file(const char *filename)
    {
        int ret;
        AVCodec *decoder = NULL;
        AVStream *video = NULL;
        AVDictionary    *pInputOptions = nullptr;
    
    #ifdef USE_INPUT_FILE
        if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
            char errMsg[1024] = {0};
            std::cout << "Cannot open input file '" << filename << "', Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
            return ret;
        }
    #else
        avdevice_register_all();
        av_dict_set(&pInputOptions, "input_format", "mjpeg", 0);
        av_dict_set(&pInputOptions, "framerate", "30", 0);
        av_dict_set(&pInputOptions, "video_size", "640x480", 0);
    
        if ((ret = avformat_open_input(&ifmt_ctx, "/dev/video0", NULL, &pInputOptions)) < 0) {
            char errMsg[1024] = {0};
            std::cout << "Cannot open input file '" << filename << "', Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
            return ret;
        }
    #endif
    
        ifmt_ctx->flags |= AVFMT_FLAG_NONBLOCK;
    
        if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
            char errMsg[1024] = {0};
            std::cout << "Cannot find input stream information. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
            return ret;
        }
    
        ret = av_find_best_stream(ifmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &decoder, 0);
        if (ret < 0) {
            char errMsg[1024] = {0};
            std::cout << "Cannot find a video stream in the input file. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
            return ret;
        }
        video_stream = ret;
    
        if (!(decoder_ctx = avcodec_alloc_context3(decoder)))
            return AVERROR(ENOMEM);
    
        video = ifmt_ctx->streams[video_stream];
        if ((ret = avcodec_parameters_to_context(decoder_ctx, video->codecpar)) < 0) {
            char errMsg[1024] = {0};
            std::cout << "avcodec_parameters_to_context error. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
            return ret;
        }
    
        decoder_ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
        if (!decoder_ctx->hw_device_ctx) {
            std::cout << "A hardware device reference create failed." << std::endl;
            return AVERROR(ENOMEM);
        }
        decoder_ctx->get_format    = get_vaapi_format;
    
        if ((ret = avcodec_open2(decoder_ctx, decoder, NULL)) < 0)
        {
            char errMsg[1024] = {0};
            std::cout << "Failed to open codec for decoding. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
        }
    
        return ret;
    }
    
    static int encode_write(AVPacket *enc_pkt, AVFrame *frame)
    {
        int ret = 0;
    
        av_packet_unref(enc_pkt);
    
        AVHWDeviceContext *pHwDevCtx = reinterpret_cast<AVHWDeviceContext *>(encoder_ctx->hw_device_ctx->data);
        AVHWFramesContext *pHwFrameCtx = reinterpret_cast<AVHWFramesContext *>(encoder_ctx->hw_frames_ctx->data);
    
        if ((ret = avcodec_send_frame(encoder_ctx, frame)) < 0) {
            char errMsg[1024] = {0};
            std::cout << "Error during encoding. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
            goto end;
        }
        while (1) {
            ret = avcodec_receive_packet(encoder_ctx, enc_pkt);
            if (ret)
                break;
    
            enc_pkt->stream_index = 0;
            av_packet_rescale_ts(enc_pkt, ifmt_ctx->streams[video_stream]->time_base,
                                 ofmt_ctx->streams[0]->time_base);
            ret = av_interleaved_write_frame(ofmt_ctx, enc_pkt);
            if (ret < 0) {
                char errMsg[1024] = {0};
                std::cout << "Error during writing data to output file. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
                return -1;
            }
        }
    
    end:
        if (ret == AVERROR_EOF)
            return 0;
        ret = ((ret == AVERROR(EAGAIN)) ? 0:-1);
        return ret;
    }
    
    static int dec_enc(AVPacket *pkt, const AVCodec *enc_codec, AVCodecContext *pDecCtx)
    {
        AVFrame *frame;
        int ret = 0;
    
        ret = avcodec_send_packet(decoder_ctx, pkt);
        if (ret < 0) {
            char errMsg[1024] = {0};
            std::cout << "Error during decoding. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
            return ret;
        }
    
        while (ret >= 0) {
            if (!(frame = av_frame_alloc()))
                return AVERROR(ENOMEM);
    
            ret = avcodec_receive_frame(decoder_ctx, frame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                av_frame_free(&frame);
                return 0;
            } else if (ret < 0) {
                char errMsg[1024] = {0};
                std::cout << "Error while decoding. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
                goto fail;
            }
    
            if (!initialized) {
                AVHWFramesContext *pHwFrameCtx = reinterpret_cast<AVHWFramesContext *>(decoder_ctx->hw_frames_ctx->data);
                
                /* we need to ref hw_frames_ctx of decoder to initialize encoder's codec.
                   Only after we get a decoded frame, can we obtain its hw_frames_ctx */
                encoder_ctx->hw_frames_ctx = av_buffer_ref(pDecCtx->hw_frames_ctx);
                if (!encoder_ctx->hw_frames_ctx) {
                    ret = AVERROR(ENOMEM);
                    goto fail;
                }
                /* set AVCodecContext Parameters for encoder, here we keep them stay
                 * the same as decoder.
                 * xxx: now the sample can't handle resolution change case.
                 */
                if(encoder_ctx->time_base.den == 1 && encoder_ctx->time_base.num == 0)
                {
                    encoder_ctx->time_base = av_inv_q(ifmt_ctx->streams[video_stream]->avg_frame_rate);
                }
                else
                {
                    encoder_ctx->time_base = av_inv_q(decoder_ctx->framerate);
                }
                encoder_ctx->pix_fmt   = AV_PIX_FMT_VAAPI;
                encoder_ctx->width     = decoder_ctx->width;
                encoder_ctx->height    = decoder_ctx->height;
    
                if ((ret = avcodec_open2(encoder_ctx, enc_codec, NULL)) < 0) {
                    char errMsg[1024] = {0};
                    std::cout << "Failed to open encode codec. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
                    goto fail;
                }
    
                if (!(ost = avformat_new_stream(ofmt_ctx, enc_codec))) {
                    std::cout << "Failed to allocate stream for output format." << std::endl;
                    ret = AVERROR(ENOMEM);
                    goto fail;
                }
    
                ost->time_base = encoder_ctx->time_base;
                ret = avcodec_parameters_from_context(ost->codecpar, encoder_ctx);
                if (ret < 0) {
                    char errMsg[1024] = {0};
                    std::cout << "Failed to copy the stream parameters. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
                    goto fail;
                }
    
                /* write the stream header */
                if ((ret = avformat_write_header(ofmt_ctx, NULL)) < 0) {
                    char errMsg[1024] = {0};
                    std::cout << "Error while writing stream header. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
                    goto fail;
                }
    
                initialized = 1;
            }
    
            if ((ret = encode_write(pkt, frame)) < 0)
                std::cout << "Error during encoding and writing." << std::endl;
    
    fail:
            av_frame_free(&frame);
            if (ret < 0)
                return ret;
        }
        return 0;
    }
    
    int main(int argc, char **argv)
    {
        const AVCodec *enc_codec;
        int ret = 0;
        AVPacket *dec_pkt;
    
        if (argc != 4) {
            fprintf(stderr, "Usage: %s   \n"
                    "The output format is guessed according to the file extension.\n"
                    "\n", argv[0]);
            return -1;
        }
    
        ret = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_VAAPI, NULL, NULL, 0);
        if (ret < 0) {
            char errMsg[1024] = {0};
            std::cout << "Failed to create a VAAPI device. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
            return -1;
        }
    
        dec_pkt = av_packet_alloc();
        if (!dec_pkt) {
            std::cout << "Failed to allocate decode packet" << std::endl;
            goto end;
        }
    
        if ((ret = open_input_file(argv[1])) < 0)
            goto end;
    
        if (!(enc_codec = avcodec_find_encoder_by_name(argv[2]))) {
            std::cout << "Could not find encoder '" << argv[2] << "'" << std::endl;
            ret = -1;
            goto end;
        }
    
        if ((ret = (avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, argv[3]))) < 0) {
            char errMsg[1024] = {0};
            std::cout << "Failed to deduce output format from file extension. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
            goto end;
        }
    
        if (!(encoder_ctx = avcodec_alloc_context3(enc_codec))) {
            ret = AVERROR(ENOMEM);
            goto end;
        }
    
        ret = avio_open(&ofmt_ctx->pb, argv[3], AVIO_FLAG_WRITE);
        if (ret < 0) {
            char errMsg[1024] = {0};
            std::cout << "Cannot open output file. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
            goto end;
        }
    
        /* read all packets and only transcoding video */
        while (ret >= 0) {
            if ((ret = av_read_frame(ifmt_ctx, dec_pkt)) < 0)
                break;
    
            if (video_stream == dec_pkt->stream_index)
                ret = dec_enc(dec_pkt, enc_codec, decoder_ctx);
    
            av_packet_unref(dec_pkt);
        }
    
        /* flush decoder */
        av_packet_unref(dec_pkt);
        ret = dec_enc(dec_pkt, enc_codec, decoder_ctx);
    
        /* flush encoder */
        ret = encode_write(dec_pkt, NULL);
    
        /* write the trailer for output stream */
        av_write_trailer(ofmt_ctx);
    
    end:
        avformat_close_input(&ifmt_ctx);
        avformat_close_input(&ofmt_ctx);
        avcodec_free_context(&decoder_ctx);
        avcodec_free_context(&encoder_ctx);
        av_buffer_unref(&hw_device_ctx);
        av_packet_free(&dec_pkt);
        return ret;
    }
    

    And the content of the associated CMakeLists.txt file to build it using gcc:

    cmake_minimum_required(VERSION 3.5)
    
    include(FetchContent)
    
    set(CMAKE_CXX_STANDARD 17)
    set(CMAKE_CXX_STANDARD_REQUIRED ON)
    
    set(CMAKE_VERBOSE_MAKEFILE ON)
    
    SET (FFMPEG_HW_TRANSCODE_INCS
        ${CMAKE_CURRENT_LIST_DIR})
    
    include_directories(
        ${CMAKE_INCLUDE_PATH}
        ${CMAKE_CURRENT_LIST_DIR}
    )
    
    project(FFmpeg_HW_transcode LANGUAGES CXX)
    
    set(CMAKE_CXX_FLAGS "-Wall -Werror=return-type -pedantic -fPIC -gdwarf-4")
    set(CMAKE_CPP_FLAGS "-Wall -Werror=return-type -pedantic -fPIC -gdwarf-4")
    
    set(EXECUTABLE_OUTPUT_PATH "${CMAKE_CURRENT_LIST_DIR}/build/${CMAKE_BUILD_TYPE}/FFmpeg_HW_transcode")
    set(LIBRARY_OUTPUT_PATH "${CMAKE_CURRENT_LIST_DIR}/build/${CMAKE_BUILD_TYPE}/FFmpeg_HW_transcode")
    
    add_executable(${PROJECT_NAME})
    
    target_sources(${PROJECT_NAME} PRIVATE
                    vaapi_transcode.cpp)
    
    target_link_libraries(${PROJECT_NAME}
                    -L${CMAKE_CURRENT_LIST_DIR}/../build/${CMAKE_BUILD_TYPE}/FFmpeg_HW_transcode
                    -lavdevice
                    -lavformat
                    -lavutil
                    -lavcodec)
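
    As an aside, instead of hard-coding -l flags, FFmpeg ships pkg-config files, so the dependency could also be resolved like this (a sketch; it assumes pkg-config can see your FFmpeg installation):

        find_package(PkgConfig REQUIRED)
        pkg_check_modules(FFMPEG REQUIRED IMPORTED_TARGET
                          libavdevice libavformat libavcodec libavutil)
        target_link_libraries(${PROJECT_NAME} PkgConfig::FFMPEG)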
    

    Has anyone tried to do this kind of thing?

    Thanks for your help.

  • When using ffmpeg to convert from .mp4 to .avi, how do I get the .avi files to contain the correct (trimmed) frames from the .mp4?

    17 April, by cassenav

    I am using ffmpeg to convert a batch of .mp4 videos to the .avi format from the command line. The .mp4 videos are all trimmed sub-videos of a longer video: I have segmented the longer video into small, non-overlapping chunks (or chunks with at most a couple of milliseconds of overlap), and I want to convert all of these smaller chunks from .mp4 to .avi. However, I am running into an issue: the .avi files all contain some additional frames at the beginning, frames that I had explicitly trimmed away when making the .mp4 videos. I'm not sure what is causing this or how to solve it. I suspect some inconsistency in the frame rate when converting the videos, but I could be wrong.

    I have tried using the following commands:

    for file in *.mp4; do ffmpeg -i "$file" -acodec copy -vcodec copy "${file%.mp4}.avi"; done

    for file in *.mp4; do ffmpeg -i "$file" -acodec copy -vcodec copy -fps_mode 2 "${file%.mp4}.avi"; done

    for file in *.mp4; do ffmpeg -i "$file" -c:v copy -c:a copy -fps_mode 2 "${file%.mp4}.avi"; done

    for file in *.mp4; do ffmpeg -i "$file" -c:v copy -c:a copy -fps_mode 1 "${file%.mp4}.avi"; done

    for file in *.mp4; do ffmpeg -i "$file" -c:v copy -c:a copy "${file%.mp4}.avi"; done

    all to no avail. How can I solve this issue such that only the frames in the trimmed .mp4 video are processed in the resulting .avi?
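
    A likely explanation for the extra frames: with -c copy, ffmpeg can only cut on keyframes, and an MP4 that was itself copy-trimmed may carry the frames before the intended start point hidden behind an edit list. AVI has no equivalent of edit lists, so a copy remux exposes that pre-roll again. If frame accuracy matters more than speed, re-encoding the video stream sidesteps the problem entirely (a sketch; the mpeg4 codec and the -q:v quality value are assumptions, substitute whatever fits your pipeline):

        for file in *.mp4; do ffmpeg -i "$file" -c:v mpeg4 -q:v 3 -c:a copy "${file%.mp4}.avi"; done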

  • In mp4/h264, why does duplicating a frame require intensive re-encoding with ffmpeg?

    17 avril, par kwjsksai

    What I want to achieve is to duplicate every frame of 30fps.mp4, producing a 60 fps video.
    I tried ffmpeg -i 30fps.mp4 -filter:v fps=60 out.mp4. It does what I want, but with re-encoding.
    The question is: can it be done without re-encoding, with only a remux?
    The reason I'm asking is that, intuitively, we usually take a shallow copy when the same data is used multiple times.
    So can it be done? Does the codec/container even allow it?
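
    A remux alone cannot do this: an H.264 packet is not a self-contained picture. P- and B-frames predict from the decoder's reference buffers, so feeding the same packet through the decoder twice would make the second copy predict from the wrong state. The container would happily store a packet twice, but the bitstream semantics break; a remux can only change timing, not add pictures. If a re-encode is acceptable, something like this keeps it fast and visually close to lossless (a sketch; the preset and CRF values are assumptions):

        ffmpeg -i 30fps.mp4 -filter:v fps=60 -c:v libx264 -preset veryfast -crf 18 -c:a copy out.mp4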

  • Modify the ffmpeg command to work with more than one image

    16 April, by Koi Farm ĐaMi

    ffmpeg -y -i image.jpg -filter_complex "[0:v] scale=1280:-1,loop=-1:size=2,trim=0:40,setpts=PTS-STARTPTS,setsar=1,crop='w=1280:h=720:x=0:y=min(max(0, t - 3), (40 - 3 - 3)) / (40 - 3 - 3) * (ih-oh)'" unix.mp4

    I want to edit the above command so it can be used for multiple images or a folder containing many images, and give all the images the same size and aspect ratio. Please help me; thank you, everyone.
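
    A simple way to extend the command is a shell loop that renders one clip per image; the scale=1280:-1 followed by the crop to 1280x720 already normalizes every image to the same size and aspect ratio, provided the scaled height is at least 720 (a sketch, assuming bash and .jpg inputs in the current folder):

        for img in *.jpg; do
            ffmpeg -y -i "$img" -filter_complex "[0:v] scale=1280:-1,loop=-1:size=2,trim=0:40,setpts=PTS-STARTPTS,setsar=1,crop='w=1280:h=720:x=0:y=min(max(0, t - 3), (40 - 3 - 3)) / (40 - 3 - 3) * (ih-oh)'" "${img%.jpg}.mp4"
        done

    The resulting clips can then be joined with ffmpeg's concat demuxer if a single output video is wanted.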