
Why does adding an audio stream to ffmpeg's libavcodec output container cause a crash?
29 March 2021, by Sniggerfardimungus

As it stands, my project correctly uses libavcodec to decode a video, where each frame is manipulated (it doesn't matter how) and output to a new video. I've cobbled this together from examples found online, and it works. The result is a perfect .mp4 of the manipulated frames, minus the audio.


My problem is that when I try to add an audio stream to the output container, I get a crash in mux.c that I can't explain. It's in static int compute_muxer_pkt_fields(AVFormatContext *s, AVStream *st, AVPacket *pkt). Where st->internal->priv_pts->val = pkt->dts; is attempted, priv_pts is nullptr.

I don't recall the version number, but this is from a November 4, 2020 ffmpeg build from git.


My MediaContentMgr is much bigger than what I have here. I'm stripping out everything to do with the frame manipulation, so if I'm missing anything, please let me know and I'll edit.

The code that, when added, triggers the nullptr exception, is called out inline.


The .h:


#ifndef _API_EXAMPLE_H
#define _API_EXAMPLE_H

#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include "glm/glm.hpp"

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
}

#include "shader_s.h"

class MediaContainerMgr {
public:
 MediaContainerMgr(const std::string& infile, const std::string& vert, const std::string& frag, 
 const glm::vec3* extents);
 ~MediaContainerMgr();
 void render();
 bool recording() { return m_recording; }

 // Major thanks to "shi-yan" who helped make this possible:
 // https://github.com/shi-yan/videosamples/blob/master/libavmp4encoding/main.cpp
 bool init_video_output(const std::string& video_file_name, unsigned int width, unsigned int height);
 bool output_video_frame(uint8_t* buf);
 bool finalize_output();

private:
 AVFormatContext* m_format_context;
 AVCodec* m_video_codec;
 AVCodec* m_audio_codec;
 AVCodecParameters* m_video_codec_parameters;
 AVCodecParameters* m_audio_codec_parameters;
 AVCodecContext* m_codec_context;
 AVFrame* m_frame;
 AVPacket* m_packet;
 uint32_t m_video_stream_index;
 uint32_t m_audio_stream_index;
 
 void init_rendering(const glm::vec3* extents);
 int decode_packet();

 // For writing the output video:
 void free_output_assets();
 bool m_recording;
 AVOutputFormat* m_output_format;
 AVFormatContext* m_output_format_context;
 AVCodec* m_output_video_codec;
 AVCodecContext* m_output_video_codec_context;
 AVFrame* m_output_video_frame;
 SwsContext* m_output_scale_context;
 AVStream* m_output_video_stream;
 
 AVCodec* m_output_audio_codec;
 AVStream* m_output_audio_stream;
 AVCodecContext* m_output_audio_codec_context;
};

#endif



And, the hellish .cpp:


#include 
#include 
#include 
#include 
#include 

#include "media_container_manager.h"

MediaContainerMgr::MediaContainerMgr(const std::string& infile, const std::string& vert, const std::string& frag,
 const glm::vec3* extents) :
 m_video_stream_index(-1),
 m_audio_stream_index(-1),
 m_recording(false),
 m_output_format(nullptr),
 m_output_format_context(nullptr),
 m_output_video_codec(nullptr),
 m_output_video_codec_context(nullptr),
 m_output_video_frame(nullptr),
 m_output_scale_context(nullptr),
 m_output_video_stream(nullptr)
{
 // AVFormatContext holds header info from the format specified in the container:
 m_format_context = avformat_alloc_context();
 if (!m_format_context) {
 throw "ERROR could not allocate memory for Format Context";
 }
 
 // open the file and read its header. Codecs are not opened here.
 if (avformat_open_input(&m_format_context, infile.c_str(), NULL, NULL) != 0) {
 throw "ERROR could not open input file for reading";
 }

 printf("format %s, duration %lldus, bit_rate %lld\n", m_format_context->iformat->name, m_format_context->duration, m_format_context->bit_rate);
 //read avPackets (?) from the avFormat (?) to get stream info. This populates format_context->streams.
 if (avformat_find_stream_info(m_format_context, NULL) < 0) {
 throw "ERROR could not get stream info";
 }

 for (unsigned int i = 0; i < m_format_context->nb_streams; i++) {
 AVCodecParameters* local_codec_parameters = NULL;
 local_codec_parameters = m_format_context->streams[i]->codecpar;
 printf("AVStream->time base before open coded %d/%d\n", m_format_context->streams[i]->time_base.num, m_format_context->streams[i]->time_base.den);
 printf("AVStream->r_frame_rate before open coded %d/%d\n", m_format_context->streams[i]->r_frame_rate.num, m_format_context->streams[i]->r_frame_rate.den);
 printf("AVStream->start_time %" PRId64 "\n", m_format_context->streams[i]->start_time);
 printf("AVStream->duration %" PRId64 "\n", m_format_context->streams[i]->duration);
 printf("duration(s): %lf\n", (float)m_format_context->streams[i]->duration / m_format_context->streams[i]->time_base.den * m_format_context->streams[i]->time_base.num);
 AVCodec* local_codec = NULL;
 local_codec = avcodec_find_decoder(local_codec_parameters->codec_id);
 if (local_codec == NULL) {
 throw "ERROR unsupported codec!";
 }

 if (local_codec_parameters->codec_type == AVMEDIA_TYPE_VIDEO) {
 if (m_video_stream_index == -1) {
 m_video_stream_index = i;
 m_video_codec = local_codec;
 m_video_codec_parameters = local_codec_parameters;
 }
 m_height = local_codec_parameters->height;
 m_width = local_codec_parameters->width;
 printf("Video Codec: resolution %dx%d\n", m_width, m_height);
 }
 else if (local_codec_parameters->codec_type == AVMEDIA_TYPE_AUDIO) {
 if (m_audio_stream_index == -1) {
 m_audio_stream_index = i;
 m_audio_codec = local_codec;
 m_audio_codec_parameters = local_codec_parameters;
 }
 printf("Audio Codec: %d channels, sample rate %d\n", local_codec_parameters->channels, local_codec_parameters->sample_rate);
 }

 printf("\tCodec %s ID %d bit_rate %lld\n", local_codec->name, local_codec->id, local_codec_parameters->bit_rate);
 }

 m_codec_context = avcodec_alloc_context3(m_video_codec);
 if (!m_codec_context) {
 throw "ERROR failed to allocate memory for AVCodecContext";
 }

 if (avcodec_parameters_to_context(m_codec_context, m_video_codec_parameters) < 0) {
 throw "ERROR failed to copy codec params to codec context";
 }

 if (avcodec_open2(m_codec_context, m_video_codec, NULL) < 0) {
 throw "ERROR avcodec_open2 failed to open codec";
 }

 m_frame = av_frame_alloc();
 if (!m_frame) {
 throw "ERROR failed to allocate AVFrame memory";
 }

 m_packet = av_packet_alloc();
 if (!m_packet) {
 throw "ERROR failed to allocate AVPacket memory";
 }
}

MediaContainerMgr::~MediaContainerMgr() {
 avformat_close_input(&m_format_context);
 av_packet_free(&m_packet);
 av_frame_free(&m_frame);
 avcodec_free_context(&m_codec_context);


 glDeleteVertexArrays(1, &m_VAO);
 glDeleteBuffers(1, &m_VBO);
}


bool MediaContainerMgr::advance_frame() {
 while (true) {
 if (av_read_frame(m_format_context, m_packet) < 0) {
 // Do we actually need to unref the packet if it failed?
 av_packet_unref(m_packet);
 continue;
 //return false;
 }
 else {
 if (m_packet->stream_index == m_video_stream_index) {
 //printf("AVPacket->pts %" PRId64 "\n", m_packet->pts);
 int response = decode_packet();
 av_packet_unref(m_packet);
 if (response != 0) {
 continue;
 //return false;
 }
 return true;
 }
 else {
 printf("m_packet->stream_index: %d\n", m_packet->stream_index);
 printf(" m_packet->pts: %lld\n", m_packet->pts);
 printf(" mpacket->size: %d\n", m_packet->size);
 if (m_recording) {
 int err = 0;
 //err = avcodec_send_packet(m_output_video_codec_context, m_packet);
 printf(" encoding error: %d\n", err);
 }
 }
 }

 // We're done with the packet (it's been unpacked to a frame), so deallocate & reset to defaults:
/*
 if (m_frame == NULL)
 return false;

 if (m_frame->data[0] == NULL || m_frame->data[1] == NULL || m_frame->data[2] == NULL) {
 printf("WARNING: null frame data");
 continue;
 }
*/
 }
}

int MediaContainerMgr::decode_packet() {
 // Supply raw packet data as input to a decoder
 // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga58bc4bf1e0ac59e27362597e467efff3
 int response = avcodec_send_packet(m_codec_context, m_packet);

 if (response < 0) {
 char buf[256];
 av_strerror(response, buf, 256);
 printf("Error while sending a packet to the decoder: %s\n", buf);
 return response;
 }

 // Return decoded output data (into a frame) from a decoder
 // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga11e6542c4e66d3028668788a1a74217c
 response = avcodec_receive_frame(m_codec_context, m_frame);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
 return response;
 } else if (response < 0) {
 char buf[256];
 av_strerror(response, buf, 256);
 printf("Error while receiving a frame from the decoder: %s\n", buf);
 return response;
 } else {
 printf(
 "Frame %d (type=%c, size=%d bytes) pts %lld key_frame %d [DTS %d]\n",
 m_codec_context->frame_number,
 av_get_picture_type_char(m_frame->pict_type),
 m_frame->pkt_size,
 m_frame->pts,
 m_frame->key_frame,
 m_frame->coded_picture_number
 );
 }
 return 0;
}


bool MediaContainerMgr::init_video_output(const std::string& video_file_name, unsigned int width, unsigned int height) {
 if (m_recording)
 return true;
 m_recording = true;

 advance_to(0L); // I've deleted the implementation. Just seeks to beginning of vid. Works fine.

 if (!(m_output_format = av_guess_format(nullptr, video_file_name.c_str(), nullptr))) {
 printf("Cannot guess output format.\n");
 return false;
 }

 int err = avformat_alloc_output_context2(&m_output_format_context, m_output_format, nullptr, video_file_name.c_str());
 if (err < 0) {
 printf("Failed to allocate output context.\n");
 return false;
 }

 //TODO(P0): Break out the video and audio inits into their own methods.
 m_output_video_codec = avcodec_find_encoder(m_output_format->video_codec);
 if (!m_output_video_codec) {
 printf("Failed to create video codec.\n");
 return false;
 }
 m_output_video_stream = avformat_new_stream(m_output_format_context, m_output_video_codec);
 if (!m_output_video_stream) {
 printf("Failed to find video format.\n");
 return false;
 } 
 m_output_video_codec_context = avcodec_alloc_context3(m_output_video_codec);
 if (!m_output_video_codec_context) {
 printf("Failed to create video codec context.\n");
 return(false);
 }
 m_output_video_stream->codecpar->codec_id = m_output_format->video_codec;
 m_output_video_stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 m_output_video_stream->codecpar->width = width;
 m_output_video_stream->codecpar->height = height;
 m_output_video_stream->codecpar->format = AV_PIX_FMT_YUV420P;
 // Use the same bit rate as the input stream.
 m_output_video_stream->codecpar->bit_rate = m_format_context->streams[m_video_stream_index]->codecpar->bit_rate;
 m_output_video_stream->avg_frame_rate = m_format_context->streams[m_video_stream_index]->avg_frame_rate;
 avcodec_parameters_to_context(m_output_video_codec_context, m_output_video_stream->codecpar);
 m_output_video_codec_context->time_base = m_format_context->streams[m_video_stream_index]->time_base;
 
 //TODO(P1): Set these to match the input stream?
 m_output_video_codec_context->max_b_frames = 2;
 m_output_video_codec_context->gop_size = 12;
 m_output_video_codec_context->framerate = m_format_context->streams[m_video_stream_index]->r_frame_rate;
 //m_output_codec_context->refcounted_frames = 0;
 if (m_output_video_stream->codecpar->codec_id == AV_CODEC_ID_H264) {
 av_opt_set(m_output_video_codec_context, "preset", "ultrafast", 0);
 } else if (m_output_video_stream->codecpar->codec_id == AV_CODEC_ID_H265) {
 av_opt_set(m_output_video_codec_context, "preset", "ultrafast", 0);
 } else {
 av_opt_set_int(m_output_video_codec_context, "lossless", 1, 0);
 }
 avcodec_parameters_from_context(m_output_video_stream->codecpar, m_output_video_codec_context);

 m_output_audio_codec = avcodec_find_encoder(m_output_format->audio_codec);
 if (!m_output_audio_codec) {
 printf("Failed to create audio codec.\n");
 return false;
 }



I've commented out all of the audio stream init beyond this next line, because this is where
the trouble begins. Creating this output stream causes the null reference I mentioned. If I
uncomment everything below here, I still get the null deref. If I comment out this line, the
deref exception vanishes. (In other words, I commented out more and more code until I found
that this was the trigger that caused the problem.)

I assume that there's something I'm doing wrong in the rest of the commented-out code that,
when fixed, will fix the nullptr and give me a working audio stream.

m_output_audio_stream = avformat_new_stream(m_output_format_context, m_output_audio_codec);
 if (!m_output_audio_stream) {
 printf("Failed to find audio format.\n");
 return false;
 }
 /*
 m_output_audio_codec_context = avcodec_alloc_context3(m_output_audio_codec);
 if (!m_output_audio_codec_context) {
 printf("Failed to create audio codec context.\n");
 return(false);
 }
 m_output_audio_stream->codecpar->codec_id = m_output_format->audio_codec;
 m_output_audio_stream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
 m_output_audio_stream->codecpar->format = m_format_context->streams[m_audio_stream_index]->codecpar->format;
 m_output_audio_stream->codecpar->bit_rate = m_format_context->streams[m_audio_stream_index]->codecpar->bit_rate;
 m_output_audio_stream->avg_frame_rate = m_format_context->streams[m_audio_stream_index]->avg_frame_rate;
 avcodec_parameters_to_context(m_output_audio_codec_context, m_output_audio_stream->codecpar);
 m_output_audio_codec_context->time_base = m_format_context->streams[m_audio_stream_index]->time_base;
 */

 //TODO(P2): Free assets that have been allocated.
 err = avcodec_open2(m_output_video_codec_context, m_output_video_codec, nullptr);
 if (err < 0) {
 printf("Failed to open codec.\n");
 return false;
 }

 if (!(m_output_format->flags & AVFMT_NOFILE)) {
 err = avio_open(&m_output_format_context->pb, video_file_name.c_str(), AVIO_FLAG_WRITE);
 if (err < 0) {
 printf("Failed to open output file.");
 return false;
 }
 }

 err = avformat_write_header(m_output_format_context, NULL);
 if (err < 0) {
 printf("Failed to write header.\n");
 return false;
 }

 av_dump_format(m_output_format_context, 0, video_file_name.c_str(), 1);

 return true;
}
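For reference, a typical audio-encoder setup mirrors the video path: allocate a codec context, fill in the sample parameters, open the encoder, and only then copy the opened context back into the stream's codecpar, so avformat_write_header sees a fully described stream. The sketch below is an untested guess at the missing piece, not a verified fix: the helper name is hypothetical, and it assumes the pre-5.0 channels/channel_layout API that a November 2020 build has.

```cpp
// Hypothetical helper (not from the question): initialize an output audio
// encoder so the stream is fully described before avformat_write_header().
static bool init_audio_encoder(AVFormatContext* in_fmt, unsigned in_stream_idx,
                               const AVCodec* codec, AVStream* out_stream,
                               AVCodecContext** out_ctx) {
    AVCodecParameters* in_par = in_fmt->streams[in_stream_idx]->codecpar;
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    if (!ctx)
        return false;
    ctx->sample_rate    = in_par->sample_rate;
    ctx->channels       = in_par->channels;
    ctx->channel_layout = av_get_default_channel_layout(in_par->channels);
    // Encoders only accept certain sample formats; take the codec's first.
    ctx->sample_fmt     = codec->sample_fmts ? codec->sample_fmts[0]
                                             : AV_SAMPLE_FMT_FLTP;
    ctx->bit_rate       = in_par->bit_rate;
    ctx->time_base      = AVRational{ 1, in_par->sample_rate };
    if (avcodec_open2(ctx, codec, nullptr) < 0)
        return false;
    // Copy the *opened* context into the stream; leaving codecpar
    // half-filled is a plausible cause of the priv_pts crash at mux time.
    avcodec_parameters_from_context(out_stream->codecpar, ctx);
    out_stream->time_base = ctx->time_base;
    *out_ctx = ctx;
    return true;
}
```

The important ordering detail is that avcodec_parameters_from_context runs after avcodec_open2, so the muxer gets the encoder's final parameters rather than a partially initialized set.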


//TODO(P2): make this a member. (Thanks to https://emvlo.wordpress.com/2016/03/10/sws_scale/)
void PrepareFlipFrameJ420(AVFrame* pFrame) {
 for (int i = 0; i < 4; i++) {
 if (i)
 pFrame->data[i] += pFrame->linesize[i] * ((pFrame->height >> 1) - 1);
 else
 pFrame->data[i] += pFrame->linesize[i] * (pFrame->height - 1);
 pFrame->linesize[i] = -pFrame->linesize[i];
 }
}



This is where we take an altered frame and write it to the output container. This works fine
as long as we haven't set up an audio stream in the output container.


bool MediaContainerMgr::output_video_frame(uint8_t* buf) {
 int err;

 if (!m_output_video_frame) {
 m_output_video_frame = av_frame_alloc();
 m_output_video_frame->format = AV_PIX_FMT_YUV420P;
 m_output_video_frame->width = m_output_video_codec_context->width;
 m_output_video_frame->height = m_output_video_codec_context->height;
 err = av_frame_get_buffer(m_output_video_frame, 32);
 if (err < 0) {
 printf("Failed to allocate output frame.\n");
 return false;
 }
 }

 if (!m_output_scale_context) {
 m_output_scale_context = sws_getContext(m_output_video_codec_context->width, m_output_video_codec_context->height, 
 AV_PIX_FMT_RGB24,
 m_output_video_codec_context->width, m_output_video_codec_context->height, 
 AV_PIX_FMT_YUV420P, SWS_BICUBIC, nullptr, nullptr, nullptr);
 }

 int inLinesize[1] = { 3 * m_output_video_codec_context->width };
 sws_scale(m_output_scale_context, (const uint8_t* const*)&buf, inLinesize, 0, m_output_video_codec_context->height,
 m_output_video_frame->data, m_output_video_frame->linesize);
 PrepareFlipFrameJ420(m_output_video_frame);
 //TODO(P0): Switch m_frame to be m_input_video_frame so I don't end up using the presentation timestamp from
 // an audio frame if I threadify the frame reading.
 m_output_video_frame->pts = m_frame->pts;
 printf("Output PTS: %d, time_base: %d/%d\n", m_output_video_frame->pts,
 m_output_video_codec_context->time_base.num, m_output_video_codec_context->time_base.den);
 err = avcodec_send_frame(m_output_video_codec_context, m_output_video_frame);
 if (err < 0) {
 printf(" ERROR sending new video frame output: ");
 switch (err) {
 case AVERROR(EAGAIN):
 printf("AVERROR(EAGAIN): %d\n", err);
 break;
 case AVERROR_EOF:
 printf("AVERROR_EOF: %d\n", err);
 break;
 case AVERROR(EINVAL):
 printf("AVERROR(EINVAL): %d\n", err);
 break;
 case AVERROR(ENOMEM):
 printf("AVERROR(ENOMEM): %d\n", err);
 break;
 }

 return false;
 }

 AVPacket pkt;
 av_init_packet(&pkt);
 pkt.data = nullptr;
 pkt.size = 0;
 pkt.flags |= AV_PKT_FLAG_KEY;
 int ret = 0;
 if ((ret = avcodec_receive_packet(m_output_video_codec_context, &pkt)) == 0) {
 static int counter = 0;
 printf("pkt.key: 0x%08x, pkt.size: %d, counter: %d\n", pkt.flags & AV_PKT_FLAG_KEY, pkt.size, counter++);
 uint8_t* size = ((uint8_t*)pkt.data);
 printf("sizes: %d %d %d %d %d %d %d %d %d\n", size[0], size[1], size[2], size[3], size[4], size[5], size[6], size[7], size[8]);
 av_interleaved_write_frame(m_output_format_context, &pkt);
 }
 printf("push: %d\n", ret);
 av_packet_unref(&pkt);

 return true;
}

bool MediaContainerMgr::finalize_output() {
 if (!m_recording)
 return true;

 AVPacket pkt;
 av_init_packet(&pkt);
 pkt.data = nullptr;
 pkt.size = 0;

 for (;;) {
 avcodec_send_frame(m_output_video_codec_context, nullptr);
 if (avcodec_receive_packet(m_output_video_codec_context, &pkt) == 0) {
 av_interleaved_write_frame(m_output_format_context, &pkt);
 printf("final push:\n");
 } else {
 break;
 }
 }

 av_packet_unref(&pkt);

 av_write_trailer(m_output_format_context);
 if (!(m_output_format->flags & AVFMT_NOFILE)) {
 int err = avio_close(m_output_format_context->pb);
 if (err < 0) {
 printf("Failed to close file. err: %d\n", err);
 return false;
 }
 }

 return true;
}



EDIT
The call stack on the crash (which I should have included in the original question):


avformat-58.dll!compute_muxer_pkt_fields(AVFormatContext * s, AVStream * st, AVPacket * pkt) Line 630 C
avformat-58.dll!write_packet_common(AVFormatContext * s, AVStream * st, AVPacket * pkt, int interleaved) Line 1122 C
avformat-58.dll!write_packets_common(AVFormatContext * s, AVPacket * pkt, int interleaved) Line 1186 C
avformat-58.dll!av_interleaved_write_frame(AVFormatContext * s, AVPacket * pkt) Line 1241 C
CamBot.exe!MediaContainerMgr::output_video_frame(unsigned char * buf) Line 553 C++
CamBot.exe!main() Line 240 C++



If I move the call to avformat_write_header so it's immediately before the audio stream initialization, I still get a crash, but in a different place. The crash happens on line 6459 of movenc.c, where we have:


/* Non-seekable output is ok if using fragmentation. If ism_lookahead
 * is enabled, we don't support non-seekable output at all. */
if (!(s->pb->seekable & AVIO_SEEKABLE_NORMAL) && // CRASH IS HERE
 (!(mov->flags & FF_MOV_FLAG_FRAGMENT) || mov->ism_lookahead)) {
 av_log(s, AV_LOG_ERROR, "muxer does not support non seekable output\n");
 return AVERROR(EINVAL);
}



The exception is a nullptr exception, where s->pb is NULL. The call stack is:


avformat-58.dll!mov_init(AVFormatContext * s) Line 6459 C
avformat-58.dll!init_muxer(AVFormatContext * s, AVDictionary * * options) Line 407 C
[Inline Frame] avformat-58.dll!avformat_init_output(AVFormatContext *) Line 489 C
avformat-58.dll!avformat_write_header(AVFormatContext * s, AVDictionary * * options) Line 512 C
CamBot.exe!MediaContainerMgr::init_video_output(const std::string & video_file_name, unsigned int width, unsigned int height) Line 424 C++
CamBot.exe!main() Line 183 C++



-
Why does adding audio stream to libavcodec output container causes a crash ?
19 mars 2021, par SniggerfardimungusAs it stands, my project correctly uses libavcodec to decode a video, where each frame is manipulated (it doesn't matter how) and output to a new video. I've cobbled this together from examples found online, and it works. The result is a perfect .mp4 of the manipulated frames, minus the audio.


My problem is, when I try to add an audio stream to the output container, I get a crash in mux.c that I can't explain. It's in
static int compute_muxer_pkt_fields(AVFormatContext *s, AVStream *st, AVPacket *pkt)
. Wherest->internal->priv_pts->val = pkt->dts;
is attempted,priv_pts
is nullptr.

I don't recall the version number, but this is from a November 4, 2020 ffmpeg build from git.


My
MediaContentMgr
is much bigger than what I have here. I'm stripping out everything to do with the frame manipulation, so if I'm missing anything, please let me know and I'll edit.

The code that, when added, triggers the nullptr exception, is called out inline


The .h :


#ifndef _API_EXAMPLE_H
#define _API_EXAMPLE_H

#include <glad></glad>glad.h>
#include <glfw></glfw>glfw3.h>
#include "glm/glm.hpp"

extern "C" {
#include <libavcodec></libavcodec>avcodec.h>
#include <libavformat></libavformat>avformat.h>
#include <libavutil></libavutil>avutil.h>
#include <libavutil></libavutil>opt.h>
#include <libswscale></libswscale>swscale.h>
}

#include "shader_s.h"

class MediaContainerMgr {
public:
 MediaContainerMgr(const std::string& infile, const std::string& vert, const std::string& frag, 
 const glm::vec3* extents);
 ~MediaContainerMgr();
 void render();
 bool recording() { return m_recording; }

 // Major thanks to "shi-yan" who helped make this possible:
 // https://github.com/shi-yan/videosamples/blob/master/libavmp4encoding/main.cpp
 bool init_video_output(const std::string& video_file_name, unsigned int width, unsigned int height);
 bool output_video_frame(uint8_t* buf);
 bool finalize_output();

private:
 AVFormatContext* m_format_context;
 AVCodec* m_video_codec;
 AVCodec* m_audio_codec;
 AVCodecParameters* m_video_codec_parameters;
 AVCodecParameters* m_audio_codec_parameters;
 AVCodecContext* m_codec_context;
 AVFrame* m_frame;
 AVPacket* m_packet;
 uint32_t m_video_stream_index;
 uint32_t m_audio_stream_index;
 
 void init_rendering(const glm::vec3* extents);
 int decode_packet();

 // For writing the output video:
 void free_output_assets();
 bool m_recording;
 AVOutputFormat* m_output_format;
 AVFormatContext* m_output_format_context;
 AVCodec* m_output_video_codec;
 AVCodecContext* m_output_video_codec_context;
 AVFrame* m_output_video_frame;
 SwsContext* m_output_scale_context;
 AVStream* m_output_video_stream;
 
 AVCodec* m_output_audio_codec;
 AVStream* m_output_audio_stream;
 AVCodecContext* m_output_audio_codec_context;
};

#endif



And, the hellish .cpp :


#include 
#include 
#include 
#include 
#include 

#include "media_container_manager.h"

MediaContainerMgr::MediaContainerMgr(const std::string& infile, const std::string& vert, const std::string& frag,
 const glm::vec3* extents) :
 m_video_stream_index(-1),
 m_audio_stream_index(-1),
 m_recording(false),
 m_output_format(nullptr),
 m_output_format_context(nullptr),
 m_output_video_codec(nullptr),
 m_output_video_codec_context(nullptr),
 m_output_video_frame(nullptr),
 m_output_scale_context(nullptr),
 m_output_video_stream(nullptr)
{
 // AVFormatContext holds header info from the format specified in the container:
 m_format_context = avformat_alloc_context();
 if (!m_format_context) {
 throw "ERROR could not allocate memory for Format Context";
 }
 
 // open the file and read its header. Codecs are not opened here.
 if (avformat_open_input(&m_format_context, infile.c_str(), NULL, NULL) != 0) {
 throw "ERROR could not open input file for reading";
 }

 printf("format %s, duration %lldus, bit_rate %lld\n", m_format_context->iformat->name, m_format_context->duration, m_format_context->bit_rate);
 //read avPackets (?) from the avFormat (?) to get stream info. This populates format_context->streams.
 if (avformat_find_stream_info(m_format_context, NULL) < 0) {
 throw "ERROR could not get stream info";
 }

 for (unsigned int i = 0; i < m_format_context->nb_streams; i++) {
 AVCodecParameters* local_codec_parameters = NULL;
 local_codec_parameters = m_format_context->streams[i]->codecpar;
 printf("AVStream->time base before open coded %d/%d\n", m_format_context->streams[i]->time_base.num, m_format_context->streams[i]->time_base.den);
 printf("AVStream->r_frame_rate before open coded %d/%d\n", m_format_context->streams[i]->r_frame_rate.num, m_format_context->streams[i]->r_frame_rate.den);
 printf("AVStream->start_time %" PRId64 "\n", m_format_context->streams[i]->start_time);
 printf("AVStream->duration %" PRId64 "\n", m_format_context->streams[i]->duration);
 printf("duration(s): %lf\n", (float)m_format_context->streams[i]->duration / m_format_context->streams[i]->time_base.den * m_format_context->streams[i]->time_base.num);
 AVCodec* local_codec = NULL;
 local_codec = avcodec_find_decoder(local_codec_parameters->codec_id);
 if (local_codec == NULL) {
 throw "ERROR unsupported codec!";
 }

 if (local_codec_parameters->codec_type == AVMEDIA_TYPE_VIDEO) {
 if (m_video_stream_index == -1) {
 m_video_stream_index = i;
 m_video_codec = local_codec;
 m_video_codec_parameters = local_codec_parameters;
 }
 m_height = local_codec_parameters->height;
 m_width = local_codec_parameters->width;
 printf("Video Codec: resolution %dx%d\n", m_width, m_height);
 }
 else if (local_codec_parameters->codec_type == AVMEDIA_TYPE_AUDIO) {
 if (m_audio_stream_index == -1) {
 m_audio_stream_index = i;
 m_audio_codec = local_codec;
 m_audio_codec_parameters = local_codec_parameters;
 }
 printf("Audio Codec: %d channels, sample rate %d\n", local_codec_parameters->channels, local_codec_parameters->sample_rate);
 }

 printf("\tCodec %s ID %d bit_rate %lld\n", local_codec->name, local_codec->id, local_codec_parameters->bit_rate);
 }

 m_codec_context = avcodec_alloc_context3(m_video_codec);
 if (!m_codec_context) {
 throw "ERROR failed to allocate memory for AVCodecContext";
 }

 if (avcodec_parameters_to_context(m_codec_context, m_video_codec_parameters) < 0) {
 throw "ERROR failed to copy codec params to codec context";
 }

 if (avcodec_open2(m_codec_context, m_video_codec, NULL) < 0) {
 throw "ERROR avcodec_open2 failed to open codec";
 }

 m_frame = av_frame_alloc();
 if (!m_frame) {
 throw "ERROR failed to allocate AVFrame memory";
 }

 m_packet = av_packet_alloc();
 if (!m_packet) {
 throw "ERROR failed to allocate AVPacket memory";
 }
}

MediaContainerMgr::~MediaContainerMgr() {
 avformat_close_input(&m_format_context);
 av_packet_free(&m_packet);
 av_frame_free(&m_frame);
 avcodec_free_context(&m_codec_context);


 glDeleteVertexArrays(1, &m_VAO);
 glDeleteBuffers(1, &m_VBO);
}


bool MediaContainerMgr::advance_frame() {
 while (true) {
 if (av_read_frame(m_format_context, m_packet) < 0) {
 // Do we actually need to unref the packet if it failed?
 av_packet_unref(m_packet);
 continue;
 //return false;
 }
 else {
 if (m_packet->stream_index == m_video_stream_index) {
 //printf("AVPacket->pts %" PRId64 "\n", m_packet->pts);
 int response = decode_packet();
 av_packet_unref(m_packet);
 if (response != 0) {
 continue;
 //return false;
 }
 return true;
 }
 else {
 printf("m_packet->stream_index: %d\n", m_packet->stream_index);
 printf(" m_packet->pts: %lld\n", m_packet->pts);
 printf(" mpacket->size: %d\n", m_packet->size);
 if (m_recording) {
 int err = 0;
 //err = avcodec_send_packet(m_output_video_codec_context, m_packet);
 printf(" encoding error: %d\n", err);
 }
 }
 }

 // We're done with the packet (it's been unpacked to a frame), so deallocate & reset to defaults:
/*
 if (m_frame == NULL)
 return false;

 if (m_frame->data[0] == NULL || m_frame->data[1] == NULL || m_frame->data[2] == NULL) {
 printf("WARNING: null frame data");
 continue;
 }
*/
 }
}

int MediaContainerMgr::decode_packet() {
 // Supply raw packet data as input to a decoder
 // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga58bc4bf1e0ac59e27362597e467efff3
 int response = avcodec_send_packet(m_codec_context, m_packet);

 if (response < 0) {
 char buf[256];
 av_strerror(response, buf, 256);
 printf("Error while receiving a frame from the decoder: %s\n", buf);
 return response;
 }

 // Return decoded output data (into a frame) from a decoder
 // https://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga11e6542c4e66d3028668788a1a74217c
 response = avcodec_receive_frame(m_codec_context, m_frame);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
 return response;
 } else if (response < 0) {
 char buf[256];
 av_strerror(response, buf, 256);
 printf("Error while receiving a frame from the decoder: %s\n", buf);
 return response;
 } else {
 printf(
 "Frame %d (type=%c, size=%d bytes) pts %lld key_frame %d [DTS %d]\n",
 m_codec_context->frame_number,
 av_get_picture_type_char(m_frame->pict_type),
 m_frame->pkt_size,
 m_frame->pts,
 m_frame->key_frame,
 m_frame->coded_picture_number
 );
 }
 return 0;
}


bool MediaContainerMgr::init_video_output(const std::string& video_file_name, unsigned int width, unsigned int height) {
 if (m_recording)
 return true;
 m_recording = true;

 advance_to(0L); // I've deleted the implmentation. Just seeks to beginning of vid. Works fine.

 if (!(m_output_format = av_guess_format(nullptr, video_file_name.c_str(), nullptr))) {
 printf("Cannot guess output format.\n");
 return false;
 }

 int err = avformat_alloc_output_context2(&m_output_format_context, m_output_format, nullptr, video_file_name.c_str());
 if (err < 0) {
 printf("Failed to allocate output context.\n");
 return false;
 }

 //TODO(P0): Break out the video and audio inits into their own methods.
 m_output_video_codec = avcodec_find_encoder(m_output_format->video_codec);
 if (!m_output_video_codec) {
 printf("Failed to create video codec.\n");
 return false;
 }
 m_output_video_stream = avformat_new_stream(m_output_format_context, m_output_video_codec);
 if (!m_output_video_stream) {
 printf("Failed to find video format.\n");
 return false;
 } 
 m_output_video_codec_context = avcodec_alloc_context3(m_output_video_codec);
 if (!m_output_video_codec_context) {
 printf("Failed to create video codec context.\n");
 return(false);
 }
 m_output_video_stream->codecpar->codec_id = m_output_format->video_codec;
 m_output_video_stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 m_output_video_stream->codecpar->width = width;
 m_output_video_stream->codecpar->height = height;
 m_output_video_stream->codecpar->format = AV_PIX_FMT_YUV420P;
 // Use the same bit rate as the input stream.
 m_output_video_stream->codecpar->bit_rate = m_format_context->streams[m_video_stream_index]->codecpar->bit_rate;
 m_output_video_stream->avg_frame_rate = m_format_context->streams[m_video_stream_index]->avg_frame_rate;
 avcodec_parameters_to_context(m_output_video_codec_context, m_output_video_stream->codecpar);
 m_output_video_codec_context->time_base = m_format_context->streams[m_video_stream_index]->time_base;
 
 //TODO(P1): Set these to match the input stream?
 m_output_video_codec_context->max_b_frames = 2;
 m_output_video_codec_context->gop_size = 12;
 m_output_video_codec_context->framerate = m_format_context->streams[m_video_stream_index]->r_frame_rate;
 //m_output_codec_context->refcounted_frames = 0;
 if (m_output_video_stream->codecpar->codec_id == AV_CODEC_ID_H264 ||
 m_output_video_stream->codecpar->codec_id == AV_CODEC_ID_H265) {
 // "preset" is a private (codec-specific) option, so set it on priv_data.
 av_opt_set(m_output_video_codec_context->priv_data, "preset", "ultrafast", 0);
 } else {
 av_opt_set_int(m_output_video_codec_context, "lossless", 1, 0);
 }
 avcodec_parameters_from_context(m_output_video_stream->codecpar, m_output_video_codec_context);

 m_output_audio_codec = avcodec_find_encoder(m_output_format->audio_codec);
 if (!m_output_audio_codec) {
 printf("Failed to create audio codec.\n");
 return false;
 }



I've commented out all of the audio stream init beyond this next line, because this is where
the trouble begins. Creating this output stream causes the null dereference I mentioned: if I
uncomment everything below here, I still get the null deref, and if I comment out this one
line, the deref exception vanishes. (In other words, I commented out more and more code until
I found that this line was the trigger.)

I assume there's something wrong in the rest of the commented-out code that, once fixed, will
fix the nullptr and give me a working audio stream.

 m_output_audio_stream = avformat_new_stream(m_output_format_context, m_output_audio_codec);
 if (!m_output_audio_stream) {
 printf("Failed to find audio format.\n");
 return false;
 }
 /*
 m_output_audio_codec_context = avcodec_alloc_context3(m_output_audio_codec);
 if (!m_output_audio_codec_context) {
 printf("Failed to create audio codec context.\n");
 return(false);
 }
 m_output_audio_stream->codecpar->codec_id = m_output_format->audio_codec;
 m_output_audio_stream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
 m_output_audio_stream->codecpar->format = m_format_context->streams[m_audio_stream_index]->codecpar->format;
 m_output_audio_stream->codecpar->bit_rate = m_format_context->streams[m_audio_stream_index]->codecpar->bit_rate;
 m_output_audio_stream->avg_frame_rate = m_format_context->streams[m_audio_stream_index]->avg_frame_rate;
 avcodec_parameters_to_context(m_output_audio_codec_context, m_output_audio_stream->codecpar);
 m_output_audio_codec_context->time_base = m_format_context->streams[m_audio_stream_index]->time_base;
 */

 //TODO(P2): Free assets that have been allocated.
 err = avcodec_open2(m_output_video_codec_context, m_output_video_codec, nullptr);
 if (err < 0) {
 printf("Failed to open codec.\n");
 return false;
 }

 if (!(m_output_format->flags & AVFMT_NOFILE)) {
 err = avio_open(&m_output_format_context->pb, video_file_name.c_str(), AVIO_FLAG_WRITE);
 if (err < 0) {
 printf("Failed to open output file.");
 return false;
 }
 }

 err = avformat_write_header(m_output_format_context, NULL);
 if (err < 0) {
 printf("Failed to write header.\n");
 return false;
 }

 av_dump_format(m_output_format_context, 0, video_file_name.c_str(), 1);

 return true;
}


//TODO(P2): make this a member. (Thanks to https://emvlo.wordpress.com/2016/03/10/sws_scale/)
void PrepareFlipFrameJ420(AVFrame* pFrame) {
 for (int i = 0; i < 4; i++) {
 if (i)
 pFrame->data[i] += pFrame->linesize[i] * ((pFrame->height >> 1) - 1);
 else
 pFrame->data[i] += pFrame->linesize[i] * (pFrame->height - 1);
 pFrame->linesize[i] = -pFrame->linesize[i];
 }
}



This is where we take an altered frame and write it to the output container. This works fine
as long as we haven't set up an audio stream in the output container.


bool MediaContainerMgr::output_video_frame(uint8_t* buf) {
 int err;

 if (!m_output_video_frame) {
 m_output_video_frame = av_frame_alloc();
 m_output_video_frame->format = AV_PIX_FMT_YUV420P;
 m_output_video_frame->width = m_output_video_codec_context->width;
 m_output_video_frame->height = m_output_video_codec_context->height;
 err = av_frame_get_buffer(m_output_video_frame, 32);
 if (err < 0) {
 printf("Failed to allocate output frame.\n");
 return false;
 }
 }

 if (!m_output_scale_context) {
 m_output_scale_context = sws_getContext(m_output_video_codec_context->width, m_output_video_codec_context->height, 
 AV_PIX_FMT_RGB24,
 m_output_video_codec_context->width, m_output_video_codec_context->height, 
 AV_PIX_FMT_YUV420P, SWS_BICUBIC, nullptr, nullptr, nullptr);
 }

 int inLinesize[1] = { 3 * m_output_video_codec_context->width };
 sws_scale(m_output_scale_context, (const uint8_t* const*)&buf, inLinesize, 0, m_output_video_codec_context->height,
 m_output_video_frame->data, m_output_video_frame->linesize);
 PrepareFlipFrameJ420(m_output_video_frame);
 //TODO(P0): Switch m_frame to be m_input_video_frame so I don't end up using the presentation timestamp from
 // an audio frame if I threadify the frame reading.
 m_output_video_frame->pts = m_frame->pts;
 printf("Output PTS: %lld, time_base: %d/%d\n", (long long)m_output_video_frame->pts,
 m_output_video_codec_context->time_base.num, m_output_video_codec_context->time_base.den);
 err = avcodec_send_frame(m_output_video_codec_context, m_output_video_frame);
 if (err < 0) {
 printf(" ERROR sending new video frame output: ");
 switch (err) {
 case AVERROR(EAGAIN):
 printf("AVERROR(EAGAIN): %d\n", err);
 break;
 case AVERROR_EOF:
 printf("AVERROR_EOF: %d\n", err);
 break;
 case AVERROR(EINVAL):
 printf("AVERROR(EINVAL): %d\n", err);
 break;
 case AVERROR(ENOMEM):
 printf("AVERROR(ENOMEM): %d\n", err);
 break;
 }

 return false;
 }

 AVPacket pkt;
 av_init_packet(&pkt);
 pkt.data = nullptr;
 pkt.size = 0;
 pkt.flags |= AV_PKT_FLAG_KEY;
 int ret = 0;
 if ((ret = avcodec_receive_packet(m_output_video_codec_context, &pkt)) == 0) {
 static int counter = 0;
 printf("pkt.key: 0x%08x, pkt.size: %d, counter: %d\n", pkt.flags & AV_PKT_FLAG_KEY, pkt.size, counter++);
 uint8_t* size = ((uint8_t*)pkt.data);
 printf("sizes: %d %d %d %d %d %d %d %d\n", size[0], size[1], size[2], size[3], size[4], size[5], size[6], size[7]);
 av_interleaved_write_frame(m_output_format_context, &pkt);
 }
 printf("push: %d\n", ret);
 av_packet_unref(&pkt);

 return true;
}

bool MediaContainerMgr::finalize_output() {
 if (!m_recording)
 return true;

 AVPacket pkt;
 av_init_packet(&pkt);
 pkt.data = nullptr;
 pkt.size = 0;

 for (;;) {
 avcodec_send_frame(m_output_video_codec_context, nullptr);
 if (avcodec_receive_packet(m_output_video_codec_context, &pkt) == 0) {
 av_interleaved_write_frame(m_output_format_context, &pkt);
 printf("final push:\n");
 } else {
 break;
 }
 }

 av_packet_unref(&pkt);

 av_write_trailer(m_output_format_context);
 if (!(m_output_format->flags & AVFMT_NOFILE)) {
 int err = avio_close(m_output_format_context->pb);
 if (err < 0) {
 printf("Failed to close file. err: %d\n", err);
 return false;
 }
 }

 return true;
}



-
Ffmpeg Android - First image is skipped while making a slideshow
17 March 2021, by M. Bilal Asif

Issue: I have 7 images in a list (with different sizes, resolutions and formats). I am building a slideshow from them, adding an mp3 audio file and a fade effect, using the following command:


val inputCommandinitial = arrayOf("-y", "-framerate", "1/5")
val arrTop = ArrayList<String>()

 //Add all paths
 for (i in images!!.indices) {
 arrTop.add("-loop")
 arrTop.add("1")
 arrTop.add("-t")
 arrTop.add("5") 
 arrTop.add("-i")
 arrTop.add(images!![i].path)
 }

 //Apply filter graph
 arrTop.add("-i")
 arrTop.add(audio!!.path)
 arrTop.add("-filter_complex")

 val stringBuilder = StringBuilder()

 for (i in images!!.indices) {
 stringBuilder.append("[$i:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v$i];")
 }

 for (i in images!!.indices) {
 stringBuilder.append("[v$i]")
 }

 //Concat command
 stringBuilder.append("concat=n=${images!!.size}:v=1:a=0,fps=25,format=yuv420p[v]")

 val endcommand = arrayOf("-map", "[v]", "-map", "${images!!.size}:a", "-c:a", "copy", "-preset", "ultrafast", "-shortest", outputLocation.path)
 val finalCommand = (inputCommandinitial + arrTop + stringBuilder.toString() + endcommand)


But it skips the first image and shows only the remaining 6, and the output video duration is 30 seconds. I've been trying to fix this for 3 days now.


Requirement:
make a slideshow from images of different formats, sizes, resolutions, etc. (picked by the user from the gallery), with an audio track behind it and a fade effect on each image.


Here is the complete log :


I/mobile-ffmpeg: Loading mobile-ffmpeg.
 I/mobile-ffmpeg: Loaded mobile-ffmpeg-full-gpl-x86-4.4-lts-20200803.
 D/mobile-ffmpeg: Callback thread started.
 I/mobile-ffmpeg: ffmpeg version v4.4-dev-416
 I/mobile-ffmpeg: Copyright (c) 2000-2020 the FFmpeg developers
 I/mobile-ffmpeg: built with Android (6454773 based on r365631c2) clang version 9.0.8 (https://android.googlesource.com/toolchain/llvm-project 98c855489587874b2a325e7a516b99d838599c6f) (based on LLVM 9.0.8svn)
 I/mobile-ffmpeg: configuration: --cross-prefix=i686-linux-android- --sysroot=/files/android-sdk/ndk/21.3.6528147/toolchains/llvm/prebuilt/linux-x86_64/sysroot --prefix=/home/taner/Projects/mobile-ffmpeg/prebuilt/android-x86/ffmpeg --pkg-config=/usr/bin/pkg-config --enable-version3 --arch=i686 --cpu=i686 --cc=i686-linux-android16-clang --cxx=i686-linux-android16-clang++ --extra-libs='-L/home/taner/Projects/mobile-ffmpeg/prebuilt/android-x86/cpu-features/lib -lndk_compat' --target-os=android --disable-neon --disable-asm --disable-inline-asm --enable-cross-compile --enable-pic --enable-jni --enable-optimizations --enable-swscale --enable-shared --enable-v4l2-m2m --disable-outdev=fbdev --disable-indev=fbdev --enable-small --disable-openssl --disable-xmm-clobber-test --disable-debug --enable-lto --disable-neon-clobber-test --disable-programs --disable-postproc --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-txtpages --disable-static --disable-sndio --disable-schannel --disable-securetransport --disable-xlib --disable-cuda --disable-cuvid --disable-nvenc --disable-vaapi --disable-vdpau --disable-videotoolbox --disable-audiotoolbox --disable-appkit --disable-alsa --disable-cuda --disable-cuvid --disable-nvenc --disable-vaapi --disable-vdpau --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-gmp --enable-gnutls --enable-libmp3lame --enable-libass --enable-iconv --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libxml2 --enable-libopencore-amrnb --enable-libshine --enable-libspeex --enable-libwavpack --enable-libkvazaar --enable-libx264 --enable-gpl --enable-libxvid --enable-gpl --enable-libx265 --enable-gpl --enable-libvidstab --enable-gpl --enable-libilbc --enable-libopus --enable-libsnappy --enable-libsoxr --enable-libaom --enable-libtwolame --disable-sdl2 --enable-libvo-amrwbenc --enable-zlib --enable-mediacodec
 I/mobile-ffmpeg: libavutil 56. 55.100 / 56. 55.100
 I/mobile-ffmpeg: libavcodec 58. 96.100 / 58. 96.100
 I/mobile-ffmpeg: libavformat 58. 48.100 / 58. 48.100
 I/mobile-ffmpeg: libavdevice 58. 11.101 / 58. 11.101
 I/mobile-ffmpeg: libavfilter 7. 87.100 / 7. 87.100
 I/mobile-ffmpeg: libswscale 5. 8.100 / 5. 8.100
 I/mobile-ffmpeg: libswresample 3. 8.100 / 3. 8.100
 I/mobile-ffmpeg: Input #0, png_pipe, from '/storage/emulated/0/FFMpeg Example/image1.png':
 I/mobile-ffmpeg: Duration:
 I/mobile-ffmpeg: N/A
 I/mobile-ffmpeg: , bitrate:
 I/mobile-ffmpeg: N/A
 I/mobile-ffmpeg: Stream #0:0
 I/mobile-ffmpeg: : Video: png, rgb24(pc), 800x500 [SAR 11811:11811 DAR 8:5]
 I/mobile-ffmpeg: ,
 I/mobile-ffmpeg: 0.20 tbr,
 I/mobile-ffmpeg: 0.20 tbn,
 I/mobile-ffmpeg: 0.20 tbc
 W/mobile-ffmpeg: [png_pipe @ 0xe1a8ec00] Stream #0: not enough frames to estimate rate; consider increasing probesize
 I/mobile-ffmpeg: Input #1, png_pipe, from '/storage/emulated/0/FFMpeg Example/image2.png':
 I/mobile-ffmpeg: Duration:
 I/mobile-ffmpeg: N/A
 I/mobile-ffmpeg: , bitrate:
 I/mobile-ffmpeg: N/A
 I/mobile-ffmpeg: Stream #1:0
 I/mobile-ffmpeg: : Video: png, rgb24(pc), 1920x1080 [SAR 3779:3779 DAR 16:9]
 I/mobile-ffmpeg: ,
 I/mobile-ffmpeg: 25 tbr,
 I/mobile-ffmpeg: 25 tbn,
 I/mobile-ffmpeg: 25 tbc
 I/mobile-ffmpeg: Input #2, png_pipe, from '/storage/emulated/0/FFMpeg Example/one.png':
 I/mobile-ffmpeg: Duration:
 I/mobile-ffmpeg: N/A
 I/mobile-ffmpeg: , bitrate:
 I/mobile-ffmpeg: N/A
 I/mobile-ffmpeg: Stream #2:0
 I/mobile-ffmpeg: : Video: png, rgba(pc), 720x1280
 I/mobile-ffmpeg: ,
 I/mobile-ffmpeg: 25 fps,
 I/mobile-ffmpeg: 25 tbr,
 I/mobile-ffmpeg: 25 tbn,
 I/mobile-ffmpeg: 25 tbc
 I/mobile-ffmpeg: Input #3, image2, from '/storage/emulated/0/FFMpeg Example/two.png':
 I/mobile-ffmpeg: Duration:
 I/mobile-ffmpeg: 00:00:00.04
 I/mobile-ffmpeg: , start:
 I/mobile-ffmpeg: 0.000000
 I/mobile-ffmpeg: , bitrate:
 I/mobile-ffmpeg: 7955 kb/s
 I/mobile-ffmpeg: Stream #3:0
 I/mobile-ffmpeg: : Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 564x1002 [SAR 72:72 DAR 94:167]
 I/mobile-ffmpeg: ,
 I/mobile-ffmpeg: 25 fps,
 I/mobile-ffmpeg: 25 tbr,
 I/mobile-ffmpeg: 25 tbn,
 I/mobile-ffmpeg: 25 tbc
 W/mobile-ffmpeg: [png_pipe @ 0xe1a90a00] Stream #0: not enough frames to estimate rate; consider increasing probesize
 I/mobile-ffmpeg: Input #4, png_pipe, from '/storage/emulated/0/FFMpeg Example/image3.png':
 I/mobile-ffmpeg: Duration:
 I/mobile-ffmpeg: N/A
 I/mobile-ffmpeg: , bitrate:
 I/mobile-ffmpeg: N/A
 I/mobile-ffmpeg: Stream #4:0
 I/mobile-ffmpeg: : Video: png, rgb24(pc), 1820x1024
 I/mobile-ffmpeg: ,
 I/mobile-ffmpeg: 25 tbr,
 I/mobile-ffmpeg: 25 tbn,
 I/mobile-ffmpeg: 25 tbc
 I/mobile-ffmpeg: Input #5, png_pipe, from '/storage/emulated/0/FFMpeg Example/image4.png':
 I/mobile-ffmpeg: Duration:
 I/mobile-ffmpeg: N/A
 I/mobile-ffmpeg: , bitrate:
 I/mobile-ffmpeg: N/A
 I/mobile-ffmpeg: Stream #5:0
 I/mobile-ffmpeg: : Video: png, rgb24(pc), 1920x800 [SAR 2835:2835 DAR 12:5]
 I/mobile-ffmpeg: ,
 I/mobile-ffmpeg: 25 fps,
 I/mobile-ffmpeg: 25 tbr,
 I/mobile-ffmpeg: 25 tbn,
 I/mobile-ffmpeg: 25 tbc
 I/mobile-ffmpeg: Input #6, image2, from '/storage/emulated/0/FFMpeg Example/image5.png':
 I/mobile-ffmpeg: Duration:
 I/mobile-ffmpeg: 00:00:00.04
 I/mobile-ffmpeg: , start:
 I/mobile-ffmpeg: 0.000000
 I/mobile-ffmpeg: , bitrate:
 I/mobile-ffmpeg: 159573 kb/s
 I/mobile-ffmpeg: Stream #6:0
 I/mobile-ffmpeg: : Video: mjpeg, yuvj444p(pc, bt470bg/unknown/unknown), 1600x900
 I/mobile-ffmpeg: ,
 I/mobile-ffmpeg: 25 fps,
 I/mobile-ffmpeg: 25 tbr,
 I/mobile-ffmpeg: 25 tbn,
 I/mobile-ffmpeg: 25 tbc
 W/mobile-ffmpeg: [mp3 @ 0xe1a92800] Estimating duration from bitrate, this may be inaccurate
 I/mobile-ffmpeg: Input #7, mp3, from '/storage/emulated/0/FFMpeg Example/shortmusic.mp3':
 I/mobile-ffmpeg: Metadata:
 I/mobile-ffmpeg: track :
 I/mobile-ffmpeg: 25
 I/mobile-ffmpeg: artist :
 I/mobile-ffmpeg: longzijun
 I/mobile-ffmpeg: title :
 I/mobile-ffmpeg: Memoryne Music Box Version
 I/mobile-ffmpeg: album_artist :
 I/mobile-ffmpeg: longzijun
 I/mobile-ffmpeg: genre :
 I/mobile-ffmpeg: Soundtrack
 I/mobile-ffmpeg: date :
 I/mobile-ffmpeg: 2012
 I/mobile-ffmpeg: Duration:
 I/mobile-ffmpeg: 00:00:57.70
 I/mobile-ffmpeg: , start:
 I/mobile-ffmpeg: 0.000000
 I/mobile-ffmpeg: , bitrate:
 I/mobile-ffmpeg: 320 kb/s
 I/mobile-ffmpeg: Stream #7:0
 I/mobile-ffmpeg: : Audio: mp3, 48000 Hz, stereo, fltp, 320 kb/s
 I/mobile-ffmpeg: Stream mapping:
 I/mobile-ffmpeg: Stream #0:0 (png) -> scale
 I/mobile-ffmpeg: Stream #1:0 (png) -> scale
 I/mobile-ffmpeg: Stream #2:0 (png) -> scale
 I/mobile-ffmpeg: Stream #3:0 (mjpeg) -> scale
 I/mobile-ffmpeg: Stream #4:0 (png) -> scale
 I/mobile-ffmpeg: Stream #5:0 (png) -> scale
 I/mobile-ffmpeg: Stream #6:0 (mjpeg) -> scale
 I/mobile-ffmpeg: format
 I/mobile-ffmpeg: -> Stream #0:0 (libx264)
 I/mobile-ffmpeg: Stream #7:0 -> #0:1
 I/mobile-ffmpeg: (copy)
 I/mobile-ffmpeg: Press [q] to stop, [?] for help
 I/mobile-ffmpeg: frame= 0 fps=0.0 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s speed=N/A
 W/mobile-ffmpeg: [graph 0 input from stream 0:0 @ 0xe1a1bec0] sws_param option is deprecated and ignored
 W/mobile-ffmpeg: [graph 0 input from stream 1:0 @ 0xe1a1bf20] sws_param option is deprecated and ignored
 W/mobile-ffmpeg: [graph 0 input from stream 2:0 @ 0xe1a1bfe0] sws_param option is deprecated and ignored
 W/mobile-ffmpeg: [graph 0 input from stream 3:0 @ 0xe1a1c0a0] sws_param option is deprecated and ignored
 W/mobile-ffmpeg: [graph 0 input from stream 4:0 @ 0xe1a1c160] sws_param option is deprecated and ignored
 W/mobile-ffmpeg: [graph 0 input from stream 5:0 @ 0xe1a1c220] sws_param option is deprecated and ignored
 W/mobile-ffmpeg: [graph 0 input from stream 6:0 @ 0xe1a1c2e0] sws_param option is deprecated and ignored
 W/mobile-ffmpeg: [swscaler @ 0xbf684840] deprecated pixel format used, make sure you did set range correctly
 W/mobile-ffmpeg: [swscaler @ 0xbf68fec0] deprecated pixel format used, make sure you did set range correctly
 I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] using SAR=1/1
 I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] using cpu capabilities: none!
 I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] profile Constrained Baseline, level 3.1, 4:2:0, 8-bit
 I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] 264 - core 160 - H.264/MPEG-4 AVC codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
 I/mobile-ffmpeg: Output #0, mp4, to '/storage/emulated/0/FFMpeg Example/video/movie_1615954349867.mp4':
 I/mobile-ffmpeg: Metadata:
 I/mobile-ffmpeg: encoder :
 I/mobile-ffmpeg: Lavf58.48.100
 I/mobile-ffmpeg: Stream #0:0
 I/mobile-ffmpeg: : Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 720x1280 [SAR 1:1 DAR 9:16], q=-1--1
 I/mobile-ffmpeg: ,
 I/mobile-ffmpeg: 25 fps,
 I/mobile-ffmpeg: 12800 tbn,
 I/mobile-ffmpeg: 25 tbc
 I/mobile-ffmpeg: (default)
 I/mobile-ffmpeg: Metadata:
 I/mobile-ffmpeg: encoder :
 I/mobile-ffmpeg: Lavc58.96.100 libx264
 I/mobile-ffmpeg: Side data:
 I/mobile-ffmpeg:
 I/mobile-ffmpeg: cpb:
 I/mobile-ffmpeg: bitrate max/min/avg: 0/0/0 buffer size: 0
 I/mobile-ffmpeg: vbv_delay: N/A
 I/mobile-ffmpeg: Stream #0:1
 I/mobile-ffmpeg: : Audio: mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 320 kb/s
 I/mobile-ffmpeg: frame= 0 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
 I/mobile-ffmpeg: frame= 7 fps=3.8 q=20.0 size= 0kB time=00:00:00.04 bitrate= 9.6kbits/s speed=0.0215x
 I/mobile-ffmpeg: frame= 15 fps=6.3 q=22.0 size= 0kB time=00:00:00.36 bitrate= 1.1kbits/s speed=0.151x
 I/mobile-ffmpeg: frame= 24 fps=8.2 q=23.0 size= 256kB time=00:00:00.72 bitrate=2912.9kbits/s speed=0.245x
 I/mobile-ffmpeg: frame= 33 fps=9.5 q=14.0 size= 512kB time=00:00:01.08 bitrate=3883.7kbits/s speed=0.31x
 I/mobile-ffmpeg: frame= 44 fps= 11 q=12.0 size= 512kB time=00:00:01.52 bitrate=2759.5kbits/s speed=0.379x
 I/mobile-ffmpeg: frame= 55 fps= 12 q=12.0 size= 512kB time=00:00:01.96 bitrate=2140.1kbits/s speed=0.432x
 I/mobile-ffmpeg: frame= 68 fps= 13 q=12.0 size= 768kB time=00:00:02.48 bitrate=2537.0kbits/s speed=0.491x
 I/mobile-ffmpeg: frame= 77 fps= 14 q=12.0 size= 768kB time=00:00:02.84 bitrate=2215.4kbits/s speed=0.499x
 I/mobile-ffmpeg: frame= 84 fps= 13 q=12.0 size= 768kB time=00:00:03.12 bitrate=2016.6kbits/s speed=0.499x
 I/mobile-ffmpeg: frame= 94 fps= 14 q=12.0 size= 768kB time=00:00:03.52 bitrate=1787.4kbits/s speed=0.52x
 I/mobile-ffmpeg: frame= 102 fps= 14 q=12.0 size= 768kB time=00:00:03.84 bitrate=1638.5kbits/s speed=0.525x
 I/mobile-ffmpeg: frame= 116 fps= 15 q=12.0 size= 768kB time=00:00:04.40 bitrate=1429.9kbits/s speed=0.556x
 I/mobile-ffmpeg: frame= 127 fps= 15 q=12.0 size= 768kB time=00:00:04.84 bitrate=1299.9kbits/s speed=0.574x
 I/mobile-ffmpeg: frame= 134 fps= 15 q=21.0 size= 768kB time=00:00:05.12 bitrate=1228.9kbits/s speed=0.571x
 I/mobile-ffmpeg: frame= 140 fps= 15 q=22.0 size= 1024kB time=00:00:05.36 bitrate=1565.1kbits/s speed=0.56x
 I/mobile-ffmpeg: frame= 145 fps= 14 q=23.0 size= 1024kB time=00:00:05.56 bitrate=1508.8kbits/s speed=0.55x
 I/mobile-ffmpeg: frame= 151 fps= 14 q=23.0 size= 1280kB time=00:00:05.80 bitrate=1807.9kbits/s speed=0.546x
 I/mobile-ffmpeg: frame= 164 fps= 15 q=12.0 size= 1536kB time=00:00:06.32 bitrate=1991.0kbits/s speed=0.567x
 I/mobile-ffmpeg: frame= 172 fps= 15 q=12.0 size= 1536kB time=00:00:06.64 bitrate=1895.1kbits/s speed=0.569x
 I/mobile-ffmpeg: frame= 186 fps= 15 q=12.0 size= 1536kB time=00:00:07.20 bitrate=1747.7kbits/s speed=0.592x
 I/mobile-ffmpeg: frame= 207 fps= 16 q=12.0 size= 1536kB time=00:00:08.04 bitrate=1565.1kbits/s speed=0.634x
 I/mobile-ffmpeg: frame= 229 fps= 17 q=12.0 size= 1792kB time=00:00:08.92 bitrate=1645.8kbits/s speed=0.677x
 I/mobile-ffmpeg: frame= 249 fps= 18 q=12.0 size= 1792kB time=00:00:09.72 bitrate=1510.3kbits/s speed=0.71x
 I/mobile-ffmpeg: frame= 270 fps= 19 q=21.0 size= 2048kB time=00:00:10.56 bitrate=1588.8kbits/s speed=0.744x
 I/mobile-ffmpeg: frame= 296 fps= 20 q=12.0 size= 2304kB time=00:00:11.60 bitrate=1627.1kbits/s speed=0.789x
 I/mobile-ffmpeg: frame= 319 fps= 21 q=12.0 size= 2304kB time=00:00:12.52 bitrate=1507.6kbits/s speed=0.823x
 I/mobile-ffmpeg: frame= 337 fps= 21 q=12.0 size= 2304kB time=00:00:13.24 bitrate=1425.6kbits/s speed=0.839x
 I/mobile-ffmpeg: frame= 347 fps= 21 q=12.0 size= 2304kB time=00:00:13.64 bitrate=1383.8kbits/s speed=0.835x
 I/mobile-ffmpeg: frame= 360 fps= 21 q=12.0 size= 2560kB time=00:00:14.16 bitrate=1481.1kbits/s speed=0.841x
 I/mobile-ffmpeg: frame= 382 fps= 22 q=19.0 size= 2560kB time=00:00:15.04 bitrate=1394.4kbits/s speed=0.866x
 I/mobile-ffmpeg: frame= 395 fps= 22 q=22.0 size= 2816kB time=00:00:15.56 bitrate=1482.6kbits/s speed=0.869x
 I/mobile-ffmpeg: frame= 407 fps= 22 q=15.0 size= 3072kB time=00:00:16.04 bitrate=1569.0kbits/s speed=0.872x
 I/mobile-ffmpeg: frame= 421 fps= 22 q=12.0 size= 3072kB time=00:00:16.60 bitrate=1516.0kbits/s speed=0.875x
 I/mobile-ffmpeg: frame= 432 fps= 22 q=12.0 size= 3072kB time=00:00:17.04 bitrate=1476.9kbits/s speed=0.875x
 I/mobile-ffmpeg: frame= 446 fps= 22 q=12.0 size= 3072kB time=00:00:17.60 bitrate=1429.9kbits/s speed=0.88x
 I/mobile-ffmpeg: frame= 458 fps= 22 q=12.0 size= 3328kB time=00:00:18.08 bitrate=1507.9kbits/s speed=0.879x
 I/mobile-ffmpeg: frame= 472 fps= 22 q=12.0 size= 3328kB time=00:00:18.64 bitrate=1462.6kbits/s speed=0.884x
 I/mobile-ffmpeg: frame= 489 fps= 23 q=12.0 size= 3328kB time=00:00:19.32 bitrate=1411.1kbits/s speed=0.894x
 I/mobile-ffmpeg: frame= 509 fps= 23 q=19.0 size= 3328kB time=00:00:20.12 bitrate=1355.0kbits/s speed=0.909x
 I/mobile-ffmpeg: frame= 531 fps= 23 q=15.0 size= 3584kB time=00:00:21.00 bitrate=1398.1kbits/s speed=0.928x
 I/mobile-ffmpeg: frame= 555 fps= 24 q=12.0 size= 3840kB time=00:00:21.96 bitrate=1432.5kbits/s speed=0.949x
 I/mobile-ffmpeg: frame= 577 fps= 24 q=12.0 size= 3840kB time=00:00:22.84 bitrate=1377.3kbits/s speed=0.966x
 I/mobile-ffmpeg: frame= 599 fps= 25 q=12.0 size= 3840kB time=00:00:23.72 bitrate=1326.2kbits/s speed=0.981x
 I/mobile-ffmpeg: frame= 620 fps= 25 q=12.0 size= 3840kB time=00:00:24.56 bitrate=1280.8kbits/s speed=0.995x
 I/mobile-ffmpeg: frame= 630 fps= 25 q=18.0 size= 3840kB time=00:00:24.96 bitrate=1260.3kbits/s speed=0.99x
 I/mobile-ffmpeg: frame= 640 fps= 25 q=21.0 size= 4096kB time=00:00:25.36 bitrate=1323.1kbits/s speed=0.985x
 I/mobile-ffmpeg: frame= 652 fps= 25 q=22.0 size= 4352kB time=00:00:25.84 bitrate=1379.7kbits/s speed=0.984x
 I/mobile-ffmpeg: frame= 665 fps= 25 q=12.0 size= 4608kB time=00:00:26.36 bitrate=1432.1kbits/s speed=0.984x
 I/mobile-ffmpeg: frame= 678 fps= 25 q=12.0 size= 4608kB time=00:00:26.88 bitrate=1404.4kbits/s speed=0.984x
 I/mobile-ffmpeg: frame= 690 fps= 25 q=12.0 size= 4608kB time=00:00:27.36 bitrate=1379.7kbits/s speed=0.983x
 I/mobile-ffmpeg: frame= 703 fps= 25 q=12.0 size= 4608kB time=00:00:27.88 bitrate=1354.0kbits/s speed=0.983x
 I/mobile-ffmpeg: frame= 716 fps= 25 q=12.0 size= 4608kB time=00:00:28.40 bitrate=1329.2kbits/s speed=0.983x
 I/mobile-ffmpeg: frame= 729 fps= 25 q=12.0 size= 4608kB time=00:00:28.92 bitrate=1305.3kbits/s speed=0.983x
 I/mobile-ffmpeg: frame= 742 fps= 25 q=12.0 size= 4608kB time=00:00:29.44 bitrate=1282.2kbits/s speed=0.983x
 I/mobile-ffmpeg: frame= 749 fps= 25 q=-1.0 Lsize= 4883kB time=00:00:29.95 bitrate=1335.5kbits/s speed=0.988x
 I/mobile-ffmpeg: video:3696kB audio:1171kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead:
 I/mobile-ffmpeg: 0.326516%
 I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] frame I:3 Avg QP:13.33 size: 2725
 I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] frame P:746 Avg QP:13.98 size: 5062
 I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] mb I I16..4: 100.0% 0.0% 0.0%
 I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] mb P I16..4: 7.1% 0.0% 0.0% P16..4: 8.2% 0.0% 0.0% 0.0% 0.0% skip:84.7%
 I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] coded y,uvDC,uvAC intra: 14.5% 19.0% 6.9% inter: 5.1% 5.4% 1.4%
 I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] i16 v,h,dc,p: 65% 18% 7% 9%
 I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] i8c dc,h,v,p: 71% 19% 6% 4%
 I/mobile-ffmpeg: [libx264 @ 0xe1ad4400] kb/s:1010.47
 I/mobile-ffmpeg: Async command execution completed successfully.



And here is the command in ffmpeg syntax:


"-y"
"-framerate"
"1/5"
"-loop"
"1"
"-t"
"5"
"-i"
"/storage/emulated/0/FFMpeg Example/image1.png"
"-loop"
"1"
"-t"
"5"
"-i"
 "/storage/emulated/0/FFMpeg Example/image2.png"
 "-loop"
 "1"
 "-t"
 "5"
 "-i"
 "/storage/emulated/0/FFMpeg Example/one.png"
 "-loop"
 "1"
 "-t"
 "5"
 "-i"
 "/storage/emulated/0/FFMpeg Example/two.png"
 "-loop"
 "1"
 "-t"
 "5"
 "-i"
 "/storage/emulated/0/FFMpeg Example/image3.png"
 "-loop"
 "1"
 "-t"
 "5"
 "-i"
 "/storage/emulated/0/FFMpeg Example/image4.png"
 "-loop"
 "1"
 "-t"
 "5"
 "-i"
 "/storage/emulated/0/FFMpeg Example/image5.png"
 "-i"
 "/storage/emulated/0/FFMpeg Example/shortmusic.mp3"
 "-filter_complex"
 "[0:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v0];
[1:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v1];
[2:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v2];
[3:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v3];
[4:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v4];
[5:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v5];
[6:v]scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1[v6];
[v0][v1][v2][v3][v4][v5][v6]concat=n=7:v=1:a=0,fps=25,format=yuv420p[v]"
 "-map"
 "[v]"
 "-map"
 "7:a"
 "-c:a"
 "copy"
 "-preset"
 "ultrafast"
 "-shortest"
 "/storage/emulated/0/FFMpeg Example/video/movie_1615955101725.mp4"