Advanced search

Media (91)

Other articles (76)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, in particular: writing content on the site (configurable in the form template management); adding notes to articles; adding captions and annotations to images;

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (8792)

  • Why does adding an audio stream to ffmpeg's libavcodec output container cause a crash?

    29 March 2021, by Sniggerfardimungus

    As it stands, my project correctly uses libavcodec to decode a video, where each frame is manipulated (it doesn't matter how) and output to a new video. I've cobbled this together from examples found online, and it works. The result is a perfect .mp4 of the manipulated frames, minus the audio.

    


    My problem is that when I try to add an audio stream to the output container, I get a crash in mux.c that I can't explain. It happens in static int compute_muxer_pkt_fields(AVFormatContext *s, AVStream *st, AVPacket *pkt): where st->internal->priv_pts->val = pkt->dts; is attempted, priv_pts is nullptr.

    


    I don't recall the version number, but this is from a November 4, 2020 ffmpeg build from git.

    


    My MediaContainerMgr is much bigger than what I have here. I'm stripping out everything to do with the frame manipulation, so if I'm missing anything, please let me know and I'll edit.

    


    The code that, when added, triggers the nullptr exception is called out inline.

    


    The .h:

    


#ifndef _API_EXAMPLE_H
#define _API_EXAMPLE_H

#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include "glm/glm.hpp"

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
}

#include "shader_s.h"

class MediaContainerMgr {
public:
    MediaContainerMgr(const std::string& infile, const std::string& vert, const std::string& frag,
                      const glm::vec3* extents);
    ~MediaContainerMgr();
    void render();
    bool recording() { return m_recording; }

    // Major thanks to "shi-yan" who helped make this possible:
    // https://github.com/shi-yan/videosamples/blob/master/libavmp4encoding/main.cpp
    bool init_video_output(const std::string& video_file_name, unsigned int width, unsigned int height);
    bool output_video_frame(uint8_t* buf);
    bool finalize_output();

private:
    AVFormatContext*   m_format_context;
    AVCodec*           m_video_codec;
    AVCodec*           m_audio_codec;
    AVCodecParameters* m_video_codec_parameters;
    AVCodecParameters* m_audio_codec_parameters;
    AVCodecContext*    m_codec_context;
    AVFrame*           m_frame;
    AVPacket*          m_packet;
    uint32_t           m_video_stream_index;
    uint32_t           m_audio_stream_index;

    void init_rendering(const glm::vec3* extents);
    int decode_packet();

    // For writing the output video:
    void free_output_assets();
    bool                   m_recording;
    AVOutputFormat*        m_output_format;
    AVFormatContext*       m_output_format_context;
    AVCodec*               m_output_video_codec;
    AVCodecContext*        m_output_video_codec_context;
    AVFrame*               m_output_video_frame;
    SwsContext*            m_output_scale_context;
    AVStream*              m_output_video_stream;

    AVCodec*               m_output_audio_codec;
    AVStream*              m_output_audio_stream;
    AVCodecContext*        m_output_audio_codec_context;
};

#endif


    And, the hellish .cpp:


#include
#include
#include
#include
#include

#include "media_container_manager.h"

MediaContainerMgr::MediaContainerMgr(const std::string& infile, const std::string& vert, const std::string& frag,
    const glm::vec3* extents) :
    m_video_stream_index(-1),
    m_audio_stream_index(-1),
    m_recording(false),
    m_output_format(nullptr),
    m_output_format_context(nullptr),
    m_output_video_codec(nullptr),
    m_output_video_codec_context(nullptr),
    m_output_video_frame(nullptr),
    m_output_scale_context(nullptr),
    m_output_video_stream(nullptr)
{
    // AVFormatContext holds header info from the format specified in the container:
    m_format_context = avformat_alloc_context();
    if (!m_format_context) {
        throw "ERROR could not allocate memory for Format Context";
    }

    // open the file and read its header. Codecs are not opened here.
    if (avformat_open_input(&m_format_context, infile.c_str(), NULL, NULL) != 0) {
        throw "ERROR could not open input file for reading";
    }

    printf("format %s, duration %lldus, bit_rate %lld\n", m_format_context->iformat->name, m_format_context->duration, m_format_context->bit_rate);
    //read avPackets (?) from the avFormat (?) to get stream info. This populates format_context->streams.
    if (avformat_find_stream_info(m_format_context, NULL) < 0) {
        throw "ERROR could not get stream info";
    }

    for (unsigned int i = 0; i < m_format_context->nb_streams; i++) {
        AVCodecParameters* local_codec_parameters = NULL;
        local_codec_parameters = m_format_context->streams[i]->codecpar;
        printf("AVStream->time base before open coded %d/%d\n", m_format_context->streams[i]->time_base.num, m_format_context->streams[i]->time_base.den);
        printf("AVStream->r_frame_rate before open coded %d/%d\n", m_format_context->streams[i]->r_frame_rate.num, m_format_context->streams[i]->r_frame_rate.den);
        printf("AVStream->start_time %" PRId64 "\n", m_format_context->streams[i]->start_time);
        printf("AVStream->duration %" PRId64 "\n", m_format_context->streams[i]->duration);
        printf("duration(s): %lf\n", (float)m_format_context->streams[i]->duration / m_format_context->streams[i]->time_base.den * m_format_context->streams[i]->time_base.num);
        AVCodec* local_codec = NULL;
        local_codec = avcodec_find_decoder(local_codec_parameters->codec_id);
        if (local_codec == NULL) {
            throw "ERROR unsupported codec!";
        }

        if (local_codec_parameters->codec_type == AVMEDIA_TYPE_VIDEO) {
            if (m_video_stream_index == -1) {
                m_video_stream_index = i;
                m_video_codec = local_codec;
                m_video_codec_parameters = local_codec_parameters;
            }
            m_height = local_codec_parameters->height;
            m_width = local_codec_parameters->width;
            printf("Video Codec: resolution %dx%d\n", m_width, m_height);
        }
        else if (local_codec_parameters->codec_type == AVMEDIA_TYPE_AUDIO) {
            if (m_audio_stream_index == -1) {
                m_audio_stream_index = i;
                m_audio_codec = local_codec;
                m_audio_codec_parameters = local_codec_parameters;
            }
            printf("Audio Codec: %d channels, sample rate %d\n", local_codec_parameters->channels, local_codec_parameters->sample_rate);
        }

        printf("\tCodec %s ID %d bit_rate %lld\n", local_codec->name, local_codec->id, local_codec_parameters->bit_rate);
    }

    m_codec_context = avcodec_alloc_context3(m_video_codec);
    if (!m_codec_context) {
        throw "ERROR failed to allocate memory for AVCodecContext";
    }

    if (avcodec_parameters_to_context(m_codec_context, m_video_codec_parameters) < 0) {
        throw "ERROR failed to copy codec params to codec context";
    }

    if (avcodec_open2(m_codec_context, m_video_codec, NULL) < 0) {
        throw "ERROR avcodec_open2 failed to open codec";
    }

    m_frame = av_frame_alloc();
    if (!m_frame) {
        throw "ERROR failed to allocate AVFrame memory";
    }

    m_packet = av_packet_alloc();
    if (!m_packet) {
        throw "ERROR failed to allocate AVPacket memory";
    }
}

MediaContainerMgr::~MediaContainerMgr() {
    avformat_close_input(&m_format_context);
    av_packet_free(&m_packet);
    av_frame_free(&m_frame);
    avcodec_free_context(&m_codec_context);


    glDeleteVertexArrays(1, &m_VAO);
    glDeleteBuffers(1, &m_VBO);
}


bool MediaContainerMgr::advance_frame() {
    while (true) {
        if (av_read_frame(m_format_context, m_packet) < 0) {
            // Do we actually need to unref the packet if it failed?
            av_packet_unref(m_packet);
            continue;
            //return false;
        }
        else {
            if (m_packet->stream_index == m_video_stream_index) {
                //printf("AVPacket->pts %" PRId64 "\n", m_packet->pts);
                int response = decode_packet();
                av_packet_unref(m_packet);
                if (response != 0) {
                    continue;
                    //return false;
                }
                return true;
            }
            else {
                printf("m_packet->stream_index: %d\n", m_packet->stream_index);
                printf("  m_packet->pts: %lld\n", m_packet->pts);
                printf("  mpacket->size: %d\n", m_packet->size);
                if (m_recording) {
                    int err = 0;
                    //err = avcodec_send_packet(m_output_video_codec_context, m_packet);
                    printf("  encoding error: %d\n", err);
                }
            }
        }

        // We're done with the packet (it's been unpacked to a frame), so deallocate & reset to defaults:
/*
        if (m_frame == NULL)
            return false;

        if (m_frame->data[0] == NULL || m_frame->data[1] == NULL || m_frame->data[2] == NULL) {
            printf("WARNING: null frame data");
            continue;
        }
*/
    }
}

int MediaContainerMgr::decode_packet() {
    // Supply raw packet data as input to a decoder
    // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga58bc4bf1e0ac59e27362597e467efff3
    int response = avcodec_send_packet(m_codec_context, m_packet);

    if (response < 0) {
        char buf[256];
        av_strerror(response, buf, 256);
        printf("Error while receiving a frame from the decoder: %s\n", buf);
        return response;
    }

    // Return decoded output data (into a frame) from a decoder
    // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga11e6542c4e66d3028668788a1a74217c
    response = avcodec_receive_frame(m_codec_context, m_frame);
    if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
        return response;
    } else if (response < 0) {
        char buf[256];
        av_strerror(response, buf, 256);
        printf("Error while receiving a frame from the decoder: %s\n", buf);
        return response;
    } else {
        printf(
            "Frame %d (type=%c, size=%d bytes) pts %lld key_frame %d [DTS %d]\n",
            m_codec_context->frame_number,
            av_get_picture_type_char(m_frame->pict_type),
            m_frame->pkt_size,
            m_frame->pts,
            m_frame->key_frame,
            m_frame->coded_picture_number
        );
    }
    return 0;
}


bool MediaContainerMgr::init_video_output(const std::string& video_file_name, unsigned int width, unsigned int height) {
    if (m_recording)
        return true;
    m_recording = true;

    advance_to(0L); // I've deleted the implmentation. Just seeks to beginning of vid. Works fine.

    if (!(m_output_format = av_guess_format(nullptr, video_file_name.c_str(), nullptr))) {
        printf("Cannot guess output format.\n");
        return false;
    }

    int err = avformat_alloc_output_context2(&m_output_format_context, m_output_format, nullptr, video_file_name.c_str());
    if (err < 0) {
        printf("Failed to allocate output context.\n");
        return false;
    }

    //TODO(P0): Break out the video and audio inits into their own methods.
    m_output_video_codec = avcodec_find_encoder(m_output_format->video_codec);
    if (!m_output_video_codec) {
        printf("Failed to create video codec.\n");
        return false;
    }
    m_output_video_stream = avformat_new_stream(m_output_format_context, m_output_video_codec);
    if (!m_output_video_stream) {
        printf("Failed to find video format.\n");
        return false;
    }
    m_output_video_codec_context = avcodec_alloc_context3(m_output_video_codec);
    if (!m_output_video_codec_context) {
        printf("Failed to create video codec context.\n");
        return(false);
    }
    m_output_video_stream->codecpar->codec_id = m_output_format->video_codec;
    m_output_video_stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    m_output_video_stream->codecpar->width = width;
    m_output_video_stream->codecpar->height = height;
    m_output_video_stream->codecpar->format = AV_PIX_FMT_YUV420P;
    // Use the same bit rate as the input stream.
    m_output_video_stream->codecpar->bit_rate = m_format_context->streams[m_video_stream_index]->codecpar->bit_rate;
    m_output_video_stream->avg_frame_rate = m_format_context->streams[m_video_stream_index]->avg_frame_rate;
    avcodec_parameters_to_context(m_output_video_codec_context, m_output_video_stream->codecpar);
    m_output_video_codec_context->time_base = m_format_context->streams[m_video_stream_index]->time_base;

    //TODO(P1): Set these to match the input stream?
    m_output_video_codec_context->max_b_frames = 2;
    m_output_video_codec_context->gop_size = 12;
    m_output_video_codec_context->framerate = m_format_context->streams[m_video_stream_index]->r_frame_rate;
    //m_output_codec_context->refcounted_frames = 0;
    if (m_output_video_stream->codecpar->codec_id == AV_CODEC_ID_H264) {
        av_opt_set(m_output_video_codec_context, "preset", "ultrafast", 0);
    } else if (m_output_video_stream->codecpar->codec_id == AV_CODEC_ID_H265) {
        av_opt_set(m_output_video_codec_context, "preset", "ultrafast", 0);
    } else {
        av_opt_set_int(m_output_video_codec_context, "lossless", 1, 0);
    }
    avcodec_parameters_from_context(m_output_video_stream->codecpar, m_output_video_codec_context);

    m_output_audio_codec = avcodec_find_encoder(m_output_format->audio_codec);
    if (!m_output_audio_codec) {
        printf("Failed to create audio codec.\n");
        return false;
    }


    I've commented out all of the audio stream init beyond this next line, because this is where the trouble begins. Creating this output stream causes the null reference I mentioned. If I uncomment everything below here, I still get the null deref. If I comment out this line, the deref exception vanishes. (IOW, I commented out more and more code until I found that this was the trigger that caused the problem.)


    I assume that there's something I'm doing wrong in the rest of the commented out code that, when fixed, will fix the nullptr and give me a working audio stream.


    m_output_audio_stream = avformat_new_stream(m_output_format_context, m_output_audio_codec);
    if (!m_output_audio_stream) {
        printf("Failed to find audio format.\n");
        return false;
    }
    /*
    m_output_audio_codec_context = avcodec_alloc_context3(m_output_audio_codec);
    if (!m_output_audio_codec_context) {
        printf("Failed to create audio codec context.\n");
        return(false);
    }
    m_output_audio_stream->codecpar->codec_id = m_output_format->audio_codec;
    m_output_audio_stream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
    m_output_audio_stream->codecpar->format = m_format_context->streams[m_audio_stream_index]->codecpar->format;
    m_output_audio_stream->codecpar->bit_rate = m_format_context->streams[m_audio_stream_index]->codecpar->bit_rate;
    m_output_audio_stream->avg_frame_rate = m_format_context->streams[m_audio_stream_index]->avg_frame_rate;
    avcodec_parameters_to_context(m_output_audio_codec_context, m_output_audio_stream->codecpar);
    m_output_audio_codec_context->time_base = m_format_context->streams[m_audio_stream_index]->time_base;
    */

    //TODO(P2): Free assets that have been allocated.
    err = avcodec_open2(m_output_video_codec_context, m_output_video_codec, nullptr);
    if (err < 0) {
        printf("Failed to open codec.\n");
        return false;
    }

    if (!(m_output_format->flags & AVFMT_NOFILE)) {
        err = avio_open(&m_output_format_context->pb, video_file_name.c_str(), AVIO_FLAG_WRITE);
        if (err < 0) {
            printf("Failed to open output file.");
            return false;
        }
    }

    err = avformat_write_header(m_output_format_context, NULL);
    if (err < 0) {
        printf("Failed to write header.\n");
        return false;
    }

    av_dump_format(m_output_format_context, 0, video_file_name.c_str(), 1);

    return true;
}


//TODO(P2): make this a member. (Thanks to https://emvlo.wordpress.com/2016/03/10/sws_scale/)
void PrepareFlipFrameJ420(AVFrame* pFrame) {
    for (int i = 0; i < 4; i++) {
        if (i)
            pFrame->data[i] += pFrame->linesize[i] * ((pFrame->height >> 1) - 1);
        else
            pFrame->data[i] += pFrame->linesize[i] * (pFrame->height - 1);
        pFrame->linesize[i] = -pFrame->linesize[i];
    }
}
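
    For reference, the kind of initialization the commented-out audio block above is reaching for generally looks like the sketch below. This is a minimal, general FFmpeg 4.x pattern, not the asker's code and not a confirmed fix: the helper name init_audio_output, the AAC codec choice, the 44100 Hz sample rate, the stereo layout and the 128 kb/s bit rate are all illustrative assumptions, and it relies on the same libav* headers already included in the .h above.

// Hypothetical helper (not part of the question's code): allocate, configure and
// open an audio encoder, then attach a matching stream to an output container.
static bool init_audio_output(AVFormatContext* out_fmt_ctx,
                              AVStream** out_stream,
                              AVCodecContext** out_codec_ctx) {
    AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_AAC);      // assumed codec
    if (!codec) return false;

    AVStream* stream = avformat_new_stream(out_fmt_ctx, codec);
    if (!stream) return false;

    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    if (!ctx) return false;

    // Configure the encoder context itself; all values here are illustrative.
    ctx->sample_rate    = 44100;
    ctx->channel_layout = AV_CH_LAYOUT_STEREO;
    ctx->channels       = av_get_channel_layout_nb_channels(ctx->channel_layout);
    ctx->sample_fmt     = codec->sample_fmts ? codec->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
    ctx->bit_rate       = 128000;
    ctx->time_base      = AVRational{ 1, ctx->sample_rate };

    // Containers like mp4 want the codec extradata stored globally in the stream.
    if (out_fmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
        ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    if (avcodec_open2(ctx, codec, nullptr) < 0) return false;

    // Copy the opened encoder's parameters into the stream *before*
    // avformat_write_header() is called on out_fmt_ctx.
    if (avcodec_parameters_from_context(stream->codecpar, ctx) < 0) return false;
    stream->time_base = ctx->time_base;

    *out_stream    = stream;
    *out_codec_ctx = ctx;
    return true;
}

    The ordering matters: the encoder context is opened and its parameters copied into stream->codecpar before the header is written. A stream whose codec parameters are never filled in leaves the muxer with a half-initialized stream, which is one plausible way to end up in the state described above.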


    This is where we take an altered frame and write it to the output container. This works fine as long as we haven't set up an audio stream in the output container.


bool MediaContainerMgr::output_video_frame(uint8_t* buf) {
    int err;

    if (!m_output_video_frame) {
        m_output_video_frame = av_frame_alloc();
        m_output_video_frame->format = AV_PIX_FMT_YUV420P;
        m_output_video_frame->width = m_output_video_codec_context->width;
        m_output_video_frame->height = m_output_video_codec_context->height;
        err = av_frame_get_buffer(m_output_video_frame, 32);
        if (err < 0) {
            printf("Failed to allocate output frame.\n");
            return false;
        }
    }

    if (!m_output_scale_context) {
        m_output_scale_context = sws_getContext(m_output_video_codec_context->width, m_output_video_codec_context->height,
                                                AV_PIX_FMT_RGB24,
                                                m_output_video_codec_context->width, m_output_video_codec_context->height,
                                                AV_PIX_FMT_YUV420P, SWS_BICUBIC, nullptr, nullptr, nullptr);
    }

    int inLinesize[1] = { 3 * m_output_video_codec_context->width };
    sws_scale(m_output_scale_context, (const uint8_t* const*)&buf, inLinesize, 0, m_output_video_codec_context->height,
              m_output_video_frame->data, m_output_video_frame->linesize);
    PrepareFlipFrameJ420(m_output_video_frame);
    //TODO(P0): Switch m_frame to be m_input_video_frame so I don't end up using the presentation timestamp from
    //          an audio frame if I threadify the frame reading.
    m_output_video_frame->pts = m_frame->pts;
    printf("Output PTS: %d, time_base: %d/%d\n", m_output_video_frame->pts,
        m_output_video_codec_context->time_base.num, m_output_video_codec_context->time_base.den);
    err = avcodec_send_frame(m_output_video_codec_context, m_output_video_frame);
    if (err < 0) {
        printf("  ERROR sending new video frame output: ");
        switch (err) {
        case AVERROR(EAGAIN):
            printf("AVERROR(EAGAIN): %d\n", err);
            break;
        case AVERROR_EOF:
            printf("AVERROR_EOF: %d\n", err);
            break;
        case AVERROR(EINVAL):
            printf("AVERROR(EINVAL): %d\n", err);
            break;
        case AVERROR(ENOMEM):
            printf("AVERROR(ENOMEM): %d\n", err);
            break;
        }

        return false;
    }

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = nullptr;
    pkt.size = 0;
    pkt.flags |= AV_PKT_FLAG_KEY;
    int ret = 0;
    if ((ret = avcodec_receive_packet(m_output_video_codec_context, &pkt)) == 0) {
        static int counter = 0;
        printf("pkt.key: 0x%08x, pkt.size: %d, counter:\n", pkt.flags & AV_PKT_FLAG_KEY, pkt.size, counter++);
        uint8_t* size = ((uint8_t*)pkt.data);
        printf("sizes: %d %d %d %d %d %d %d %d %d\n", size[0], size[1], size[2], size[2], size[3], size[4], size[5], size[6], size[7]);
        av_interleaved_write_frame(m_output_format_context, &pkt);
    }
    printf("push: %d\n", ret);
    av_packet_unref(&pkt);

    return true;
}

bool MediaContainerMgr::finalize_output() {
    if (!m_recording)
        return true;

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = nullptr;
    pkt.size = 0;

    for (;;) {
        avcodec_send_frame(m_output_video_codec_context, nullptr);
        if (avcodec_receive_packet(m_output_video_codec_context, &pkt) == 0) {
            av_interleaved_write_frame(m_output_format_context, &pkt);
            printf("final push:\n");
        } else {
            break;
        }
    }

    av_packet_unref(&pkt);

    av_write_trailer(m_output_format_context);
    if (!(m_output_format->flags & AVFMT_NOFILE)) {
        int err = avio_close(m_output_format_context->pb);
        if (err < 0) {
            printf("Failed to close file. err: %d\n", err);
            return false;
        }
    }

    return true;
}
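
    One detail worth calling out separately, since it lives in exactly the area where the crash happens: packets that come back from avcodec_receive_packet carry timestamps in the encoder's time_base, and the muxer expects them in the output stream's time_base, with stream_index set. A minimal sketch of that receive-and-write loop (a general FFmpeg 4.x pattern, independent of the class above; the helper name is hypothetical):

// Minimal sketch: drain whatever packets the encoder currently has and hand
// them to the muxer with timestamps rescaled to the stream's time base.
// enc_ctx, stream and out_fmt_ctx are assumed to be already initialized.
static int write_available_packets(AVCodecContext* enc_ctx,
                                   AVStream* stream,
                                   AVFormatContext* out_fmt_ctx) {
    AVPacket* pkt = av_packet_alloc();
    if (!pkt) return AVERROR(ENOMEM);

    int ret;
    while ((ret = avcodec_receive_packet(enc_ctx, pkt)) == 0) {
        // Convert pts/dts/duration from the encoder's time_base to the
        // time_base the output stream (and muxer) actually uses.
        av_packet_rescale_ts(pkt, enc_ctx->time_base, stream->time_base);
        pkt->stream_index = stream->index;

        ret = av_interleaved_write_frame(out_fmt_ctx, pkt);  // takes ownership of the packet's data ref
        if (ret < 0) break;
    }
    av_packet_free(&pkt);

    // AVERROR(EAGAIN) means "send more frames"; AVERROR_EOF means fully drained.
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}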


    EDIT
    The call stack on the crash (which I should have included in the original question):


avformat-58.dll!compute_muxer_pkt_fields(AVFormatContext * s, AVStream * st, AVPacket * pkt) Line 630   C
avformat-58.dll!write_packet_common(AVFormatContext * s, AVStream * st, AVPacket * pkt, int interleaved) Line 1122  C
avformat-58.dll!write_packets_common(AVFormatContext * s, AVPacket * pkt, int interleaved) Line 1186    C
avformat-58.dll!av_interleaved_write_frame(AVFormatContext * s, AVPacket * pkt) Line 1241   C
CamBot.exe!MediaContainerMgr::output_video_frame(unsigned char * buf) Line 553  C++
CamBot.exe!main() Line 240  C++


    If I move the call to avformat_write_header so it's immediately before the audio stream initialization, I still get a crash, but in a different place. The crash happens on line 6459 of movenc.c, where we have:


/* Non-seekable output is ok if using fragmentation. If ism_lookahead
 * is enabled, we don't support non-seekable output at all. */
if (!(s->pb->seekable & AVIO_SEEKABLE_NORMAL) &&  //  CRASH IS HERE
    (!(mov->flags & FF_MOV_FLAG_FRAGMENT) || mov->ism_lookahead)) {
    av_log(s, AV_LOG_ERROR, "muxer does not support non seekable output\n");
    return AVERROR(EINVAL);
}


    The exception is a nullptr exception, where s->pb is NULL. The call stack is:


avformat-58.dll!mov_init(AVFormatContext * s) Line 6459 C
avformat-58.dll!init_muxer(AVFormatContext * s, AVDictionary * * options) Line 407  C
[Inline Frame] avformat-58.dll!avformat_init_output(AVFormatContext *) Line 489 C
avformat-58.dll!avformat_write_header(AVFormatContext * s, AVDictionary * * options) Line 512   C
CamBot.exe!MediaContainerMgr::init_video_output(const std::string & video_file_name, unsigned int width, unsigned int height) Line 424  C++
CamBot.exe!main() Line 183  C++
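
    That second crash is consistent with the ordering visible in init_video_output above: for output formats without AVFMT_NOFILE (mp4 included), the AVIOContext in s->pb is only created by the avio_open call, so calling avformat_write_header before avio_open leaves s->pb NULL when mov_init probes it for seekability. A minimal sketch of the ordering the muxer expects (a general FFmpeg pattern, not a fix for the audio-stream issue itself; the helper name is illustrative):

// The mp4/mov muxer dereferences s->pb inside avformat_write_header(), so the
// I/O context has to be opened first for any format that is not AVFMT_NOFILE.
static int open_io_and_write_header(AVFormatContext* fmt_ctx, const char* filename) {
    if (!(fmt_ctx->oformat->flags & AVFMT_NOFILE)) {
        int err = avio_open(&fmt_ctx->pb, filename, AVIO_FLAG_WRITE);  // 1. open the output I/O
        if (err < 0)
            return err;
    }
    return avformat_write_header(fmt_ctx, nullptr);                    // 2. then write the header
}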



  • Ffmpeg Android - First image is skipped while making a slideshow

    17 March 2021, by M. Bilal Asif

    Issue: I have 7 images in a list (with different sizes, resolutions and formats). I am adding an mp3 audio file and a fade effect while making a slideshow from them, which I am trying to do with the following command:


val inputCommandinitial = arrayOf("-y", "-framerate", "1/5")
val arrTop = ArrayList<String>()

 //Add all paths
    for (i in images!!.indices) {
        arrTop.add("-loop")
        arrTop.add("1")
        arrTop.add("-t")
        arrTop.add("5")
        arrTop.add("-i")
        arrTop.add(images!![i].path)
    }

    //Apply filter graph
    arrTop.add("-i")
    arrTop.add(audio!!.path)
    arrTop.add("-filter_complex")

    val stringBuilder = StringBuilder()

    for (i in images!!.indices) {
        stringBuilder.append("[$i:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v$i];")
    }

    for (i in images!!.indices) {
        stringBuilder.append("[v$i]")
    }

    //Concat command
    stringBuilder.append("concat=n=${images!!.size}:v=1:a=0,fps=25,format=yuv420p[v]")

    val endcommand = arrayOf("-map", "[v]", "-map", "${images!!.size}:a", "-c:a", "copy", "-preset", "ultrafast", "-shortest", outputLocation.path)
    val finalCommand = (inputCommandinitial + arrTop + stringBuilder.toString() + endcommand)


    But it skips the first image and only shows the remaining 6 images, and the output video duration is 30 seconds. I've been trying to fix this for 3 days now.


    Requirement: make a slideshow from images of different formats, sizes, resolutions, etc. (picked by the user from the gallery), with an audio track playing behind it and a fade effect.


    Here is the complete log:


I/mobile-ffmpeg: Loading mobile-ffmpeg.
I/mobile-ffmpeg: Loaded mobile-ffmpeg-full-gpl-x86-4.4-lts-20200803.
D/mobile-ffmpeg: Callback thread started.
I/mobile-ffmpeg: ffmpeg version v4.4-dev-416
I/mobile-ffmpeg:  Copyright (c) 2000-2020 the FFmpeg developers
I/mobile-ffmpeg:   built with Android (6454773 based on r365631c2) clang version 9.0.8 (https://android.googlesource.com/toolchain/llvm-project 98c855489587874b2a325e7a516b99d838599c6f) (based on LLVM 9.0.8svn)
I/mobile-ffmpeg:   configuration: --cross-prefix=i686-linux-android- --sysroot=/files/android-sdk/ndk/21.3.6528147/toolchains/llvm/prebuilt/linux-x86_64/sysroot --prefix=/home/taner/Projects/mobile-ffmpeg/prebuilt/android-x86/ffmpeg --pkg-config=/usr/bin/pkg-config --enable-version3 --arch=i686 --cpu=i686 --cc=i686-linux-android16-clang --cxx=i686-linux-android16-clang++ --extra-libs='-L/home/taner/Projects/mobile-ffmpeg/prebuilt/android-x86/cpu-features/lib -lndk_compat' --target-os=android --disable-neon --disable-asm --disable-inline-asm --enable-cross-compile --enable-pic --enable-jni --enable-optimizations --enable-swscale --enable-shared --enable-v4l2-m2m --disable-outdev=fbdev --disable-indev=fbdev --enable-small --disable-openssl --disable-xmm-clobber-test --disable-debug --enable-lto --disable-neon-clobber-test --disable-programs --disable-postproc --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-txtpages --disable-static --disable-sndio --disable-schannel --disable-securetransport --disable-xlib --disable-cuda --disable-cuvid --disable-nvenc --disable-vaapi --disable-vdpau --disable-videotoolbox --disable-audiotoolbox --disable-appkit --disable-alsa --disable-cuda --disable-cuvid --disable-nvenc --disable-vaapi --disable-vdpau --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-gmp --enable-gnutls --enable-libmp3lame --enable-libass --enable-iconv --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libxml2 --enable-libopencore-amrnb --enable-libshine --enable-libspeex --enable-libwavpack --enable-libkvazaar --enable-libx264 --enable-gpl --enable-libxvid --enable-gpl --enable-libx265 --enable-gpl --enable-libvidstab --enable-gpl --enable-libilbc --enable-libopus --enable-libsnappy --enable-libsoxr --enable-libaom --enable-libtwolame --disable-sdl2 --enable-libvo-amrwbenc --enable-zlib --enable-mediacodec
I/mobile-ffmpeg:   libavutil      56. 55.100 / 56. 55.100
I/mobile-ffmpeg:   libavcodec     58. 96.100 / 58. 96.100
I/mobile-ffmpeg:   libavformat    58. 48.100 / 58. 48.100
I/mobile-ffmpeg:   libavdevice    58. 11.101 / 58. 11.101
I/mobile-ffmpeg:   libavfilter     7. 87.100 /  7. 87.100
I/mobile-ffmpeg:   libswscale      5.  8.100 /  5.  8.100
I/mobile-ffmpeg:   libswresample   3.  8.100 /  3.  8.100
I/mobile-ffmpeg: Input #0, png_pipe, from '/storage/emulated/0/FFMpeg Example/image1.png':
I/mobile-ffmpeg:   Duration: N/A, bitrate: N/A
I/mobile-ffmpeg:     Stream #0:0: Video: png, rgb24(pc), 800x500 [SAR 11811:11811 DAR 8:5], 0.20 tbr, 0.20 tbn, 0.20 tbc
W/mobile-ffmpeg: [png_pipe @ 0xe1a8ec00] Stream #0: not enough frames to estimate rate; consider increasing probesize
I/mobile-ffmpeg: Input #1, png_pipe, from '/storage/emulated/0/FFMpeg Example/image2.png':
I/mobile-ffmpeg:   Duration: N/A, bitrate: N/A
I/mobile-ffmpeg:     Stream #1:0: Video: png, rgb24(pc), 1920x1080 [SAR 3779:3779 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
I/mobile-ffmpeg: Input #2, png_pipe, from '/storage/emulated/0/FFMpeg Example/one.png':
I/mobile-ffmpeg:   Duration: N/A, bitrate: N/A
I/mobile-ffmpeg:     Stream #2:0: Video: png, rgba(pc), 720x1280, 25 fps, 25 tbr, 25 tbn, 25 tbc
I/mobile-ffmpeg: Input #3, image2, from '/storage/emulated/0/FFMpeg Example/two.png':
I/mobile-ffmpeg:   Duration: 00:00:00.04, start: 0.000000, bitrate: 7955 kb/s
I/mobile-ffmpeg:     Stream #3:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 564x1002 [SAR 72:72 DAR 94:167], 25 fps, 25 tbr, 25 tbn, 25 tbc
W/mobile-ffmpeg: [png_pipe @ 0xe1a90a00] Stream #0: not enough frames to estimate rate; consider increasing probesize
I/mobile-ffmpeg: Input #4, png_pipe, from '/storage/emulated/0/FFMpeg Example/image3.png':
I/mobile-ffmpeg:   Duration: N/A, bitrate: N/A
I/mobile-ffmpeg:     Stream #4:0: Video: png, rgb24(pc), 1820x1024, 25 tbr, 25 tbn, 25 tbc
I/mobile-ffmpeg: Input #5, png_pipe, from '/storage/emulated/0/FFMpeg Example/image4.png':
I/mobile-ffmpeg:   Duration: N/A, bitrate: N/A
I/mobile-ffmpeg:     Stream #5:0: Video: png, rgb24(pc), 1920x800 [SAR 2835:2835 DAR 12:5], 25 fps, 25 tbr, 25 tbn, 25 tbc
I/mobile-ffmpeg: Input #6, image2, from '/storage/emulated/0/FFMpeg Example/image5.png':
I/mobile-ffmpeg:   Duration: 00:00:00.04, start: 0.000000, bitrate: 159573 kb/s
I/mobile-ffmpeg:     Stream #6:0: Video: mjpeg, yuvj444p(pc, bt470bg/unknown/unknown), 1600x900, 25 fps, 25 tbr, 25 tbn, 25 tbc
W/mobile-ffmpeg: [mp3 @ 0xe1a92800] Estimating duration from bitrate, this may be inaccurate
I/mobile-ffmpeg: Input #7, mp3, from '/storage/emulated/0/FFMpeg Example/shortmusic.mp3':
I/mobile-ffmpeg:   Metadata:
I/mobile-ffmpeg:     track           : 25
I/mobile-ffmpeg:     artist          : longzijun
I/mobile-ffmpeg:     title           : Memoryne Music Box Version
I/mobile-ffmpeg:     album_artist    : longzijun
I/mobile-ffmpeg:     genre           : Soundtrack
I/mobile-ffmpeg:     date            : 2012
I/mobile-ffmpeg:   Duration: 00:00:57.70, start: 0.000000, bitrate: 320 kb/s
I/mobile-ffmpeg:     Stream #7:0: Audio: mp3, 48000 Hz, stereo, fltp, 320 kb/s
I/mobile-ffmpeg: Stream mapping:
I/mobile-ffmpeg:   Stream #0:0 (png) -> scale
I/mobile-ffmpeg:   Stream #1:0 (png) -> scale
I/mobile-ffmpeg:   Stream #2:0 (png) -> scale
I/mobile-ffmpeg:   Stream #3:0 (mjpeg) -> scale
I/mobile-ffmpeg:   Stream #4:0 (png) -> scale
I/mobile-ffmpeg:   Stream #5:0 (png) -> scale
I/mobile-ffmpeg:   Stream #6:0 (mjpeg) -> scale
I/mobile-ffmpeg:   format -> Stream #0:0 (libx264)
I/mobile-ffmpeg:   Stream #7:0 -> #0:1 (copy)
I/mobile-ffmpeg: Press [q] to stop, [?] for help
I/mobile-ffmpeg: frame=    0 fps=0.0 q=0.0 size=       0kB time=-577014:32:22.77 bitrate=  -0.0kbits/s speed=N/A
W/mobile-ffmpeg: [graph 0 input from stream 0:0 @ 0xe1a1bec0] sws_param option is deprecated and ignored
W/mobile-ffmpeg: [graph 0 input from stream 1:0 @ 0xe1a1bf20] sws_param option is deprecated and ignored
W/mobile-ffmpeg: [graph 0 input from stream 2:0 @ 0xe1a1bfe0] sws_param option is deprecated and ignored
W/mobile-ffmpeg: [graph 0 input from stream 3:0 @ 0xe1a1c0a0] sws_param option is deprecated and ignored
W/mobile-ffmpeg: [graph 0 input from stream 4:0 @ 0xe1a1c160] sws_param option is deprecated and ignored
W/mobile-ffmpeg: [graph 0 input from stream 5:0 @ 0xe1a1c220] sws_param option is deprecated and ignored
W/mobile-ffmpeg: [graph 0 input from stream 6:0 @ 0xe1a1c2e0] sws_param option is deprecated and ignored
W/mobile-ffmpeg: [swscaler @ 0xbf684840] deprecated pixel format used, make sure you did set range correctly
W/mobile-ffmpeg: [swscaler @ 0xbf68fec0] deprecated pixel format used, make sure you did set range correctly
I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] using SAR=1/1
I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] using cpu capabilities: none!
I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] profile Constrained Baseline, level 3.1, 4:2:0, 8-bit
I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] 264 - core 160 - H.264/MPEG-4 AVC codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
I/mobile-ffmpeg: Output #0, mp4, to '/storage/emulated/0/FFMpeg Example/video/movie_1615954349867.mp4':
I/mobile-ffmpeg:   Metadata:
I/mobile-ffmpeg:     encoder         : Lavf58.48.100
I/mobile-ffmpeg:     Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 720x1280 [SAR 1:1 DAR 9:16], q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
I/mobile-ffmpeg:     Metadata:
I/mobile-ffmpeg:       encoder         : Lavc58.96.100 libx264
I/mobile-ffmpeg:     Side data:
I/mobile-ffmpeg:       cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
I/mobile-ffmpeg:     Stream #0:1: Audio: mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 320 kb/s
I/mobile-ffmpeg: frame=    0 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
I/mobile-ffmpeg: frame=    7 fps=3.8 q=20.0 size=       0kB time=00:00:00.04 bitrate=   9.6kbits/s speed=0.0215x
I/mobile-ffmpeg: frame=   15 fps=6.3 q=22.0 size=       0kB time=00:00:00.36 bitrate=   1.1kbits/s speed=0.151x
I/mobile-ffmpeg: frame=   24 fps=8.2 q=23.0 size=     256kB time=00:00:00.72 bitrate=2912.9kbits/s speed=0.245x
I/mobile-ffmpeg: frame=   33 fps=9.5 q=14.0 size=     512kB time=00:00:01.08 bitrate=3883.7kbits/s speed=0.31x
I/mobile-ffmpeg: frame=   44 fps= 11 q=12.0 size=     512kB time=00:00:01.52 bitrate=2759.5kbits/s speed=0.379x
I/mobile-ffmpeg: frame=   55 fps= 12 q=12.0 size=     512kB time=00:00:01.96 bitrate=2140.1kbits/s speed=0.432x
I/mobile-ffmpeg: frame=   68 fps= 13 q=12.0 size=     768kB time=00:00:02.48 bitrate=2537.0kbits/s speed=0.491x
I/mobile-ffmpeg: frame=   77 fps= 14 q=12.0 size=     768kB time=00:00:02.84 bitrate=2215.4kbits/s speed=0.499x
I/mobile-ffmpeg: frame=   84 fps= 13 q=12.0 size=     768kB time=00:00:03.12 bitrate=2016.6kbits/s speed=0.499x
I/mobile-ffmpeg: frame=   94 fps= 14 q=12.0 size=     768kB time=00:00:03.52 bitrate=1787.4kbits/s speed=0.52x
I/mobile-ffmpeg: frame=  102 fps= 14 q=12.0 size=     768kB time=00:00:03.84 bitrate=1638.5kbits/s speed=0.525x
I/mobile-ffmpeg: frame=  116 fps= 15 q=12.0 size=     768kB time=00:00:04.40 bitrate=1429.9kbits/s speed=0.556x
I/mobile-ffmpeg: frame=  127 fps= 15 q=12.0 size=     768kB time=00:00:04.84 bitrate=1299.9kbits/s speed=0.574x
I/mobile-ffmpeg: frame=  134 fps= 15 q=21.0 size=     768kB time=00:00:05.12 bitrate=1228.9kbits/s speed=0.571x
I/mobile-ffmpeg: frame=  140 fps= 15 q=22.0 size=    1024kB time=00:00:05.36 bitrate=1565.1kbits/s speed=0.56x
I/mobile-ffmpeg: frame=  145 fps= 14 q=23.0 size=    1024kB time=00:00:05.56 bitrate=1508.8kbits/s speed=0.55x
I/mobile-ffmpeg: frame=  151 fps= 14 q=23.0 size=    1280kB time=00:00:05.80 bitrate=1807.9kbits/s speed=0.546x
I/mobile-ffmpeg: frame=  164 fps= 15 q=12.0 size=    1536kB time=00:00:06.32 bitrate=1991.0kbits/s speed=0.567x
I/mobile-ffmpeg: frame=  172 fps= 15 q=12.0 size=    1536kB time=00:00:06.64 bitrate=1895.1kbits/s speed=0.569x
I/mobile-ffmpeg: frame=  186 fps= 15 q=12.0 size=    1536kB time=00:00:07.20
bitrate=1747.7kbits/s speed=0.592x&#xA; I/mobile-ffmpeg: frame=  207 fps= 16 q=12.0 size=    1536kB time=00:00:08.04 bitrate=1565.1kbits/s speed=0.634x&#xA; I/mobile-ffmpeg: frame=  229 fps= 17 q=12.0 size=    1792kB time=00:00:08.92 bitrate=1645.8kbits/s speed=0.677x&#xA; I/mobile-ffmpeg: frame=  249 fps= 18 q=12.0 size=    1792kB time=00:00:09.72 bitrate=1510.3kbits/s speed=0.71x&#xA; I/mobile-ffmpeg: frame=  270 fps= 19 q=21.0 size=    2048kB time=00:00:10.56 bitrate=1588.8kbits/s speed=0.744x&#xA; I/mobile-ffmpeg: frame=  296 fps= 20 q=12.0 size=    2304kB time=00:00:11.60 bitrate=1627.1kbits/s speed=0.789x&#xA; I/mobile-ffmpeg: frame=  319 fps= 21 q=12.0 size=    2304kB time=00:00:12.52 bitrate=1507.6kbits/s speed=0.823x&#xA; I/mobile-ffmpeg: frame=  337 fps= 21 q=12.0 size=    2304kB time=00:00:13.24 bitrate=1425.6kbits/s speed=0.839x&#xA; I/mobile-ffmpeg: frame=  347 fps= 21 q=12.0 size=    2304kB time=00:00:13.64 bitrate=1383.8kbits/s speed=0.835x&#xA; I/mobile-ffmpeg: frame=  360 fps= 21 q=12.0 size=    2560kB time=00:00:14.16 bitrate=1481.1kbits/s speed=0.841x&#xA; I/mobile-ffmpeg: frame=  382 fps= 22 q=19.0 size=    2560kB time=00:00:15.04 bitrate=1394.4kbits/s speed=0.866x&#xA; I/mobile-ffmpeg: frame=  395 fps= 22 q=22.0 size=    2816kB time=00:00:15.56 bitrate=1482.6kbits/s speed=0.869x&#xA; I/mobile-ffmpeg: frame=  407 fps= 22 q=15.0 size=    3072kB time=00:00:16.04 bitrate=1569.0kbits/s speed=0.872x&#xA; I/mobile-ffmpeg: frame=  421 fps= 22 q=12.0 size=    3072kB time=00:00:16.60 bitrate=1516.0kbits/s speed=0.875x&#xA; I/mobile-ffmpeg: frame=  432 fps= 22 q=12.0 size=    3072kB time=00:00:17.04 bitrate=1476.9kbits/s speed=0.875x&#xA; I/mobile-ffmpeg: frame=  446 fps= 22 q=12.0 size=    3072kB time=00:00:17.60 bitrate=1429.9kbits/s speed=0.88x&#xA; I/mobile-ffmpeg: frame=  458 fps= 22 q=12.0 size=    3328kB time=00:00:18.08 bitrate=1507.9kbits/s speed=0.879x&#xA; I/mobile-ffmpeg: frame=  472 fps= 22 q=12.0 size=    3328kB time=00:00:18.64 bitrate=1462.6kbits/s speed=0.884x&#xA; I/mobile-ffmpeg: frame=  489 fps= 23 q=12.0 size=    3328kB time=00:00:19.32 bitrate=1411.1kbits/s speed=0.894x&#xA; I/mobile-ffmpeg: frame=  509 fps= 23 q=19.0 size=    3328kB time=00:00:20.12 bitrate=1355.0kbits/s speed=0.909x&#xA; I/mobile-ffmpeg: frame=  531 fps= 23 q=15.0 size=    3584kB time=00:00:21.00 bitrate=1398.1kbits/s speed=0.928x&#xA; I/mobile-ffmpeg: frame=  555 fps= 24 q=12.0 size=    3840kB time=00:00:21.96 bitrate=1432.5kbits/s speed=0.949x&#xA; I/mobile-ffmpeg: frame=  577 fps= 24 q=12.0 size=    3840kB time=00:00:22.84 bitrate=1377.3kbits/s speed=0.966x&#xA; I/mobile-ffmpeg: frame=  599 fps= 25 q=12.0 size=    3840kB time=00:00:23.72 bitrate=1326.2kbits/s speed=0.981x&#xA; I/mobile-ffmpeg: frame=  620 fps= 25 q=12.0 size=    3840kB time=00:00:24.56 bitrate=1280.8kbits/s speed=0.995x&#xA; I/mobile-ffmpeg: frame=  630 fps= 25 q=18.0 size=    3840kB time=00:00:24.96 bitrate=1260.3kbits/s speed=0.99x&#xA; I/mobile-ffmpeg: frame=  640 fps= 25 q=21.0 size=    4096kB time=00:00:25.36 bitrate=1323.1kbits/s speed=0.985x&#xA; I/mobile-ffmpeg: frame=  652 fps= 25 q=22.0 size=    4352kB time=00:00:25.84 bitrate=1379.7kbits/s speed=0.984x&#xA; I/mobile-ffmpeg: frame=  665 fps= 25 q=12.0 size=    4608kB time=00:00:26.36 bitrate=1432.1kbits/s speed=0.984x&#xA; I/mobile-ffmpeg: frame=  678 fps= 25 q=12.0 size=    4608kB time=00:00:26.88 bitrate=1404.4kbits/s speed=0.984x&#xA; I/mobile-ffmpeg: frame=  690 fps= 25 q=12.0 size=    4608kB time=00:00:27.36 bitrate=1379.7kbits/s speed=0.983x&#xA; 
I/mobile-ffmpeg: frame=  703 fps= 25 q=12.0 size=    4608kB time=00:00:27.88 bitrate=1354.0kbits/s speed=0.983x&#xA; I/mobile-ffmpeg: frame=  716 fps= 25 q=12.0 size=    4608kB time=00:00:28.40 bitrate=1329.2kbits/s speed=0.983x&#xA; I/mobile-ffmpeg: frame=  729 fps= 25 q=12.0 size=    4608kB time=00:00:28.92 bitrate=1305.3kbits/s speed=0.983x&#xA; I/mobile-ffmpeg: frame=  742 fps= 25 q=12.0 size=    4608kB time=00:00:29.44 bitrate=1282.2kbits/s speed=0.983x&#xA; I/mobile-ffmpeg: frame=  749 fps= 25 q=-1.0 Lsize=    4883kB time=00:00:29.95 bitrate=1335.5kbits/s speed=0.988x&#xA; I/mobile-ffmpeg: video:3696kB audio:1171kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead:&#xA; I/mobile-ffmpeg: 0.326516%&#xA; I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] frame I:3     Avg QP:13.33  size:  2725&#xA; I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] frame P:746   Avg QP:13.98  size:  5062&#xA; I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] mb I  I16..4: 100.0%  0.0%  0.0%&#xA; I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] mb P  I16..4:  7.1%  0.0%  0.0%  P16..4:  8.2%  0.0%  0.0%  0.0%  0.0%    skip:84.7%&#xA; I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] coded y,uvDC,uvAC intra: 14.5% 19.0% 6.9% inter: 5.1% 5.4% 1.4%&#xA; I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] i16 v,h,dc,p: 65% 18%  7%  9%&#xA; I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] i8c dc,h,v,p: 71% 19%  6%  4%&#xA; I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] kb/s:1010.47&#xA; I/mobile-ffmpeg: Async command execution completed successfully.&#xA;
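Side note on the log above: those I/mobile-ffmpeg lines are simply what logcat prints while the command runs. If the same output needs to be captured inside the app (for a progress view or a log file), mobile-ffmpeg can redirect it through a callback. A minimal sketch, assuming the Config.enableLogCallback / LogMessage API shipped with the 4.4 LTS builds; the FfmpegLogCapture class name is just for illustration:

import com.arthenica.mobileffmpeg.Config;
import com.arthenica.mobileffmpeg.LogCallback;
import com.arthenica.mobileffmpeg.LogMessage;

public class FfmpegLogCapture {

    // Redirects the "I/mobile-ffmpeg: ..." output into an in-app callback
    // so it can be displayed in the UI or written to a file instead of
    // only appearing in logcat.
    public static void install() {
        Config.enableLogCallback(new LogCallback() {
            @Override
            public void apply(LogMessage message) {
                // message.getText() carries the same text fragments shown above.
                android.util.Log.d("FfmpegLogCapture", message.getText());
            }
        });
    }
}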


And here is the command, in ffmpeg syntax:


    "-y"&#xA;"-framerate"&#xA;"1/5"&#xA;"-loop"&#xA;"1"&#xA;"-t"&#xA;"5"&#xA;"-i"&#xA;"/storage/emulated/0/FFMpeg Example/image1.png"&#xA;"-loop"&#xA;"1"&#xA;"-t"&#xA;"5"&#xA;"-i"&#xA; "/storage/emulated/0/FFMpeg Example/image2.png"&#xA; "-loop"&#xA; "1"&#xA; "-t"&#xA; "5"&#xA; "-i"&#xA; "/storage/emulated/0/FFMpeg Example/one.png"&#xA; "-loop"&#xA; "1"&#xA; "-t"&#xA; "5"&#xA; "-i"&#xA; "/storage/emulated/0/FFMpeg Example/two.png"&#xA; "-loop"&#xA; "1"&#xA; "-t"&#xA; "5"&#xA; "-i"&#xA; "/storage/emulated/0/FFMpeg Example/image3.png"&#xA; "-loop"&#xA; "1"&#xA; "-t"&#xA; "5"&#xA; "-i"&#xA; "/storage/emulated/0/FFMpeg Example/image4.png"&#xA; "-loop"&#xA; "1"&#xA; "-t"&#xA; "5"&#xA; "-i"&#xA; "/storage/emulated/0/FFMpeg Example/image5.png"&#xA; "-i"&#xA; "/storage/emulated/0/FFMpeg Example/shortmusic.mp3"&#xA; "-filter_complex"&#xA; "[0:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v0];&#xA;[1:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v1];&#xA;[2:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v2];&#xA;[3:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v3];&#xA;[4:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v4];&#xA;[5:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v5];&#xA;[6:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v6];&#xA;[v0][v1][v2][v3][v4][v5][v6]concat=n=7:v=1:a=0,fps=25,format=yuv420p[v]"&#xA; "-map"&#xA; "[v]"&#xA; "-map"&#xA; "7:a"&#xA; "-c:a"&#xA; "copy"&#xA; "-preset"&#xA; "ultrafast"&#xA; "-shortest"&#xA; "/storage/emulated/0/FFMpeg Example/video/movie_1615955101725.mp4"&#xA;
