
Media (91)

Other articles (106)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFMpeg: the main encoder, transcodes almost all types of video and audio files into formats readable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Complementary and optional binaries flvtool2: (...)

  • Possibility of deployment as a farm

    12 April 2011

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share the setup costs between several projects/individuals; to rapidly deploy a multitude of unique sites; to avoid having to put all of the creations into a digital catch-all, as is the case with the large general-public platforms scattered across the (...)

On other sites (17713)

  • I want to take any Audio from a file and encode it as PCM_ALAW. My Example is a .m4a file to .wav file

    22 November 2023, by Clockman

    I have been working on this for a while now; although I am generally new to the ffmpeg library, I have studied it a bit. The challenge I have is that at the point of writing to the file I get the following exception.

    


    "Exception thrown at 0x00007FFACA8305B3 (avformat-60.dll) in FfmpegPractice.exe : 0xC0000005 : Access violation writing location 0x0000000000000000.". I understand this means am writing to an uninitialized buffer am unable to discover why this is happening. The exception call stack shows the following

    


    avformat-60.dll!avformat_write_header() C
    avformat-60.dll!ff_write_chained()  C
    avformat-60.dll!ff_write_chained()  C
    avformat-60.dll!av_write_frame()    C
    FfmpegPractice.exe!main() Line 215  C++


    


    Some things I have tried

    


    This code is part of a larger project built with CMake, but for some reason I could not step into the ffmpeg library while debugging. So I recompiled ffmpeg and made sure debugging was enabled so I could drill down to the root cause, but I still could not step into the ffmpeg library.
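
    For reference, a debug-friendly FFmpeg build is usually configured along these lines (a rough sketch assuming a POSIX-style shell build; the options needed with MSVC/Visual Studio toolchains differ):

    # keep debug symbols and disable optimizations so a debugger can step into the libraries
    ./configure --enable-shared --enable-debug=3 --disable-optimizations --disable-stripping
    make -j8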

    


    I then created a minimal project as a Visual Studio C++ console project, and I still could not step into the code.

    


    I have read through many ffmpeg docs, and whatever else I could find on the internet, and I still could not solve it.

    


    This is the code

    


#include <iostream>

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
#include <libavutil/opt.h>
#include <libavutil/audio_fifo.h>
}

using namespace std;

//in audio file
string filename{ "rapid_caller_test.m4a" };
AVFormatContext* pFormatCtx{};
AVCodecContext* pCodecCtx{};
AVStream* pStream{};

//out audio file
string outFilename{ "output.wav" };
AVFormatContext* pOutFormatCtx{ nullptr };
AVCodecContext* pOutCodecCtx{ nullptr };
AVIOContext* pOutIoContext{ nullptr };
const AVCodec* pOutCodec{ nullptr };
AVStream* pOutStream{ nullptr };
const int OUTPUT_CHANNELS = 1;
const int SAMPLE_RATE = 8000;
const int OUT_BIT_RATE = 64000;
uint8_t** convertedSamplesBuffer{ nullptr };
int64_t dstNmbrSamples{ 0 };
int dstLineSize{ 0 };
static int64_t pts{ 0 };

//conversion context
SwrContext* swr{};

uint32_t i{ 0 };
int audiostream{ -1 };

void cleanUp()
{
  avcodec_free_context(&pOutCodecCtx);
  avio_closep(&(pOutFormatCtx)->pb);
  avformat_free_context(pOutFormatCtx);
  pOutFormatCtx = nullptr;
}

int main()
{
  /*
  * section to setup input file
  */
  if (avformat_open_input(&pFormatCtx, filename.data(), nullptr, nullptr) != 0) {
    cout << "could not open file " << filename << endl;
    return -1;
  }
  if (avformat_find_stream_info(pFormatCtx, nullptr) < 0) {
    cout << "Could not retrieve stream information from file " << filename << endl;
    return -1;
  }
  av_dump_format(pFormatCtx, 0, filename.c_str(), 0);

  for (i = 0; i < pFormatCtx->nb_streams; i++) {
    if (pFormatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
      audiostream = i;
      break;
    }
  }
  if (audiostream == -1) {
    cout << "did not find audio stream" << endl;
    return -1;
  }

  pStream = pFormatCtx->streams[audiostream];
  const AVCodec* pCodec{ avcodec_find_decoder(pStream->codecpar->codec_id) };
  pCodecCtx = avcodec_alloc_context3(pCodec);
  avcodec_parameters_to_context(pCodecCtx, pStream->codecpar);
  if (avcodec_open2(pCodecCtx, pCodec, nullptr)) {
    cout << "could not open codec" << endl;
    return -1;
  }

  /*
  * section to set up output file which is a G711 audio
  */
  if (avio_open(&pOutIoContext, outFilename.data(), AVIO_FLAG_WRITE)) {
    cout << "could not open out put file" << endl;
    return -1;
  }
  if (!(pOutFormatCtx = avformat_alloc_context())) {
    cout << "could not create format conext" << endl;
    cleanUp();
    return -1;
  }
  pOutFormatCtx->pb = pOutIoContext;
  if (!(pOutFormatCtx->oformat = av_guess_format(nullptr, outFilename.data(), nullptr))) {
    cout << "could not find output file format" << endl;
    cleanUp();
    return -1;
  }
  if (!(pOutFormatCtx->url = av_strdup(outFilename.data()))) {
    cout << "could not allocate file name" << endl;
    cleanUp();
    return -1;
  }
  if (!(pOutCodec = avcodec_find_encoder(AV_CODEC_ID_PCM_ALAW))) {
    cout << "codec not found" << endl;
    cleanUp();
    return -1;
  }
  if (!(pOutStream = avformat_new_stream(pOutFormatCtx, nullptr))) {
    cout << "could not create new stream" << endl;
    cleanUp();
    return -1;
  }
  if (!(pOutCodecCtx = avcodec_alloc_context3(pOutCodec))) {
    cout << "could not allocate codec context" << endl;
    return -1;
  }
  av_channel_layout_default(&pOutCodecCtx->ch_layout, OUTPUT_CHANNELS);
  pOutCodecCtx->sample_rate = SAMPLE_RATE;
  pOutCodecCtx->sample_fmt = pOutCodec->sample_fmts[0];
  pOutCodecCtx->bit_rate = OUT_BIT_RATE;

  //setting sample rate for the container
  pOutStream->time_base.den = SAMPLE_RATE;
  pOutStream->time_base.num = 1;
  if (pOutFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
    pOutCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

  if (avcodec_open2(pOutCodecCtx, pOutCodec, nullptr)) {
    cout << "could not open output codec" << endl;
    cleanUp();
    return -1;
  }
  if ((avcodec_parameters_from_context(pOutStream->codecpar, pOutCodecCtx)) < 0) {
    cout << "could not initialize stream parameters" << endl;
  }

  AVPacket* packet = av_packet_alloc();

  swr = swr_alloc();
  swr_alloc_set_opts2(&swr, &pOutCodecCtx->ch_layout, pOutCodecCtx->sample_fmt, pOutCodecCtx->sample_rate,
                      &pCodecCtx->ch_layout, pCodecCtx->sample_fmt, pCodecCtx->sample_rate, 0, nullptr);
  swr_init(swr);

  int ret{};
  int bSize{};
  while (av_read_frame(pFormatCtx, packet) >= 0) {
    AVFrame* pFrame = av_frame_alloc();
    AVFrame* pOutFrame = av_frame_alloc();
    if (packet->stream_index == audiostream) {
      ret = avcodec_send_packet(pCodecCtx, packet);
      while (ret >= 0) {
        ret = avcodec_receive_frame(pCodecCtx, pFrame);
        if (ret == AVERROR(EAGAIN))
          continue;
        else if (ret == AVERROR_EOF)
          break;
        dstNmbrSamples = av_rescale_rnd(swr_get_delay(swr, pCodecCtx->sample_rate) + pFrame->nb_samples, pOutCodecCtx->sample_rate, pCodecCtx->sample_rate, AV_ROUND_UP);
        if ((av_samples_alloc_array_and_samples(&convertedSamplesBuffer, &dstLineSize, pOutCodecCtx->ch_layout.nb_channels, dstNmbrSamples, pOutCodecCtx->sample_fmt, 0)) < 0) {
          cout << "coult not allocate samples array and buffer" << endl;
        }
        int channel_samples_count{ 0 };
        channel_samples_count = swr_convert(swr, convertedSamplesBuffer, dstNmbrSamples, (const uint8_t**)pFrame->data, pFrame->nb_samples);
        bSize = av_samples_get_buffer_size(&dstLineSize, pOutCodecCtx->ch_layout.nb_channels, channel_samples_count, pOutCodecCtx->sample_fmt, 0);
        cout << "no of samples is " << channel_samples_count << " the buffer size " << bSize << endl;
        pOutFrame->nb_samples = channel_samples_count;
        av_channel_layout_copy(&pOutFrame->ch_layout, &pOutCodecCtx->ch_layout);
        pOutFrame->format = pOutCodecCtx->sample_fmt;
        pOutFrame->sample_rate = pOutCodecCtx->sample_rate;
        if ((av_frame_get_buffer(pOutFrame, 0)) < 0) {
          cout << "could not allocate output frame samples " << endl;
          av_frame_free(&pOutFrame);
        }

        //populate out frame buffer
        av_frame_make_writable(pOutFrame);
        for (int i{ 0 }; i < bSize; i++) {
          pOutFrame->data[0][i] = convertedSamplesBuffer[0][i];
          cout << pOutFrame->data[0][i];
        }
        if (pOutFrame) {
          pOutFrame->pts = pts;
          pts += pOutFrame->nb_samples;
        }
        int res = avcodec_send_frame(pOutCodecCtx, pOutFrame);
        if (res < 0) {
          cout << "error sending frame to encoder" << endl;
          cleanUp();
          return -1;
        }
        //int er = avformat_write_header(pOutFormatCtx,nullptr);
        AVPacket* pOutPacket = av_packet_alloc();
        pOutPacket->time_base.num = 1;
        pOutPacket->time_base.den = 8000;
        if (pOutPacket == nullptr) {
          cout << "unable to allocate packet" << endl;
        }
        while (res >= 0) {
          res = avcodec_receive_packet(pOutCodecCtx, pOutPacket);
          if (res == AVERROR(EAGAIN))
            continue;
          else if (ret == AVERROR_EOF)
            break;
          av_packet_rescale_ts(pOutPacket, pOutCodecCtx->time_base, pOutFormatCtx->streams[0]->time_base);
          //av_dump_format(pOutFormatCtx, 0, outFilename.c_str(), 1);
          if (av_write_frame(pOutFormatCtx, pOutPacket) < 0) {
            cout << "could not write frame" << endl;
          }
        }
      }
    }
    av_frame_free(&pFrame);
    av_frame_free(&pOutFrame);
  }
  if (av_write_trailer(pOutFormatCtx) < 0) {
    cout << "could not write file trailer" << endl;
  }
  swr_free(&swr);
  avcodec_free_context(&pOutCodecCtx);
  av_packet_free(&packet);
}


    Error/Exception


    The exception is thrown when I call


    if (av_write_frame(pOutFormatCtx, pOutPacket) < 0) { cout << "could not write frame" << endl; }

    I also called this line:


    //int er = avformat_write_header(pOutFormatCtx,nullptr);


    to see if I would get an exception, but it did not throw any exception.


    I have spent weeks on this issue with no success. My goal is to take any audio from a file, resample it if need be, and transcode it to PCM_ALAW. I will appreciate any help I can get.
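
    For a quick sanity check of the intended result, the same conversion can be done with the ffmpeg command-line tool (using the file names from the code above; this is only a reference point, not a fix for the C++ code):

    ffmpeg -i rapid_caller_test.m4a -ar 8000 -ac 1 -c:a pcm_alaw output.wav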


  • How to make my video in landscape mode using ffmpeg

    6 November 2019, by Aarwil

    I have four video chats. I have managed to cut the videos into pieces, store them in an array, then stack them and finally concat the video, which is shown in the YouTube links below. The size I have used in the portrait view is 640x480, but I need to show them in landscape. Please suggest any ideas.

    Landscape view:
    https://youtu.be/u8tmL2-CdK0
    Portrait view:
    https://youtu.be/lO-Q3I9X8OA

    These are my inputs

    Input #0, matroska,webm, from 'PA473fbf06ed1f952f95c88b9cf22ed0ba_pre.mkv':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2019-11-05T06:08:19.000000Z
     Duration: 00:01:05.50, start: 63.041000, bitrate: 30 kb/s
       Stream #0:0(eng): Video: h264 (Baseline), yuvj420p(pc, progressive), 360x480, SAR 1:1 DAR 3:4, 15 tbr, 1k tbn, 2k tbc (default)
       Metadata:
         title           : Video
    Input #1, matroska,webm, from 'PA183db0ed986039de3197092103a411eb_pre.mkv':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2019-11-05T06:07:20.000000Z
     Duration: 00:03:15.14, start: 4.062000, bitrate: 172 kb/s
       Stream #1:0(eng): Video: h264 (Baseline), yuvj420p(pc, progressive), 360x480, SAR 1:1 DAR 3:4, 15 fps, 15 tbr, 1k tbn, 2k tbc (default)
       Metadata:
         title           : Video
    Input #2, matroska,webm, from 'PA62a810038cbcc00be21fac43e98f5ee1_pre.mkv':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2019-11-05T06:07:45.000000Z
     Duration: 00:02:21.71, start: 28.803000, bitrate: 92 kb/s
       Stream #2:0(eng): Video: h264 (Baseline), yuvj420p(pc, progressive), 360x480, SAR 1:1 DAR 3:4, 15 tbr, 1k tbn, 2k tbc (default)
       Metadata:
         title           : Video
    Input #3, matroska,webm, from 'PA8fa44ff1ba37ee510a045198bca6f04a_pre.mkv':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2019-11-05T06:07:48.000000Z
     Duration: 00:01:50.69, start: 32.318000, bitrate: 28 kb/s
       Stream #3:0(eng): Video: h264 (Baseline), yuvj420p(pc, progressive), 360x480, SAR 1:1 DAR 3:4, 15 fps, 15 tbr, 1k tbn, 2k tbc (default)
       Metadata:
         title           : Video

    First, I change all the videos' resolution to 640:480, because each video may have a different resolution; so I am setting a fixed resolution.

    ffmpeg -i PA8fa44ff1ba37ee510a045198bca6f04a_pre.mkv -vf scale=640:480 PA8fa44ff1ba37ee510a045198bca6f04a.mkv -hide_banner
    Input #0, matroska,webm, from PA8fa44ff1ba37ee510a045198bca6f04a_pre.mkv':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2019-11-05T06:07:48.000000Z
     Duration: 00:01:50.69, start: 32.318000, bitrate: 28 kb/s
       Stream #0:0(eng): Video: h264 (Baseline), yuvj420p(pc, progressive), 360x480, SAR 1:1 DAR 3:4, 15 fps, 15 tbr, 1k tbn, 2k tbc (default)
       Metadata:
         title           : Video
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    [swscaler @ 0000021d72eb3f80] deprecated pixel format used, make sure you did set range correctly
    [libx264 @ 0000021d72b33b40] using SAR=9/16
    [libx264 @ 0000021d72b33b40] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 0000021d72b33b40] profile High, level 2.2, 4:2:0, 8-bit
    [libx264 @ 0000021d72b33b40] 264 - core 158 r2984 3759fcb - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=15 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, matroska, to 'PA8fa44ff1ba37ee510a045198bca6f04a.mkv':
     Metadata:
       encoder         : Lavf58.32.104
       Stream #0:0(eng): Video: h264 (libx264) (H264 / 0x34363248), yuvj420p(pc), 640x480 [SAR 9:16 DAR 3:4], q=-1--1, 15 fps, 1k tbn, 15 tbc (default)
       Metadata:
         title           : Video
         encoder         : Lavc58.56.101 libx264
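
    A side note on the scaling step, suggested by the "using SAR=9/16" line in the log above: forcing the 360x480 input into 640x480 keeps DAR 3:4 by changing the sample aspect ratio, so the result still displays in portrait proportions. Resetting the SAR makes the 640x480 frame display as true 4:3 landscape (a sketch of the same command; padding or cropping could be used instead if the horizontal stretch is unwanted):

    ffmpeg -i PA8fa44ff1ba37ee510a045198bca6f04a_pre.mkv -vf "scale=640:480,setsar=1" PA8fa44ff1ba37ee510a045198bca6f04a.mkv -hide_banner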

    Command for merging two videos:

    ffmpeg
    -i ddb97d85-fc21-4fb4-8062-ca2084a48aeb.mkv
    -i a8665a5f-fb5e-44cb-a072-070fbe07a14f.mkv
    -filter_complex "[0:v][1:v]hstack" 1572934056.mkv

    Command for merging three videos:

    ffmpeg
    -i 16f90447-c8b6-4077-b3b6-4fb2c07e19b5.mkv
    -i ef501109-0ee9-4924-8de1-65eb796a4a78.mkv
    -i 0b284aa6-4175-472c-aaf6-837412f97f32.mkv
    -filter_complex "[1:v]scale=320:-1[left]; [2:v]scale=320:-1[right]; [left][right]hstack[bottom]; [0:v][bottom]vstack" 1572934058.mkv

    Concat Command

    ffmpeg
    -i 1572934031.mkv -i 1572934056.mkv -i 1572934058.mkv -i 1572934089.mkv -i 1572934155.mkv -i 1572934169.mkv -i 1572934198.mkv -filter_complex "[0]scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:(ow-iw)/2:(oh-ih)/2,fps=fps=30,setsar=1[0v];[1]scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:(ow-iw)/2:(oh-ih)/2,fps=fps=30,setsar=1[1v];[2]scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:(ow-iw)/2:(oh-ih)/2,fps=fps=30,setsar=1[2v];[3]scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:(ow-iw)/2:(oh-ih)/2,fps=fps=30,setsar=1[3v];[4]scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:(ow-iw)/2:(oh-ih)/2,fps=fps=30,setsar=1[4v];[5]scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:(ow-iw)/2:(oh-ih)/2,fps=fps=30,setsar=1[5v];[6]scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:(ow-iw)/2:(oh-ih)/2,fps=fps=30,setsar=1[6v];[0v][1v][2v][3v][4v][5v][6v]concat=n=7:v=1:a=0[v]" -map "[v]" 4c21f002fa76b148c00cc6fbceaa57ee.mp4
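
    Since each tile ends up at 640x480, one way to get a landscape result is to arrange four tiles in a 2x2 grid (1280x960) instead of a tall stack, for example with the xstack filter (a sketch; in0.mkv through in3.mkv are placeholders for the four per-participant files):

    ffmpeg
    -i in0.mkv -i in1.mkv -i in2.mkv -i in3.mkv
    -filter_complex "[0:v][1:v][2:v][3:v]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[v]"
    -map "[v]" landscape.mkv
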
  • Why is the duration of the mp4 file created with ffmpeg from png files smaller than expected?

    17 August 2021, by qwark97

    I'm trying to concatenate a bunch of png files into one mp4 file using ffmpeg. I use the following command:


    ffmpeg -f concat -i concat.txt -f mp4 out.mp4


    The "description file" (concat.txt) looks like this :


    file screen_001.png
    duration 0.14538311958312988
    file screen_002.png
    duration 0.11382007598876953
    file screen_003.png
    duration 2.543360710144043
    ...
    file screen_036.png
    duration 0.15303301811218262
    file screen_037.png
    duration 0.160630464553833
    file screen_038.png
    duration 3.2751874923706055


    The given command works and I'm able to create the desired mp4 file. The problem is that the duration of the output file is smaller than the sum of the duration lines in concat.txt: I expect a 22.48 s long mp4 file but I get a 19.20 s long file.


    What am I doing wrong? Maybe I'm not using some flag I should? Is it even possible to do what I want? I'm kind of a newbie with ffmpeg and video manipulation in general, so any help would be appreciated.
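
    One thing worth checking (a known quirk of the concat demuxer, and it matches the numbers here: 22.48 - 19.20 ≈ 3.28 s, which is the duration given for the last image): the duration of the final entry is only honoured if the last file is listed one extra time, so the end of concat.txt would look like this:

    file screen_038.png
    duration 3.2751874923706055
    file screen_038.png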


    The output of ffmpeg, in case it is useful:


root@65181939e08e:/files/tmp# ffmpeg -f concat -i concat.txt -f mp4 out.mp4
ffmpeg version 4.1.6-1~deb10u1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 8 (Debian 8.3.0-6)
  configuration: --prefix=/usr --extra-version='1~deb10u1' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 22.100 / 56. 22.100
  libavcodec     58. 35.100 / 58. 35.100
  libavformat    58. 20.100 / 58. 20.100
  libavdevice    58.  5.100 / 58.  5.100
  libavfilter     7. 40.101 /  7. 40.101
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  3.100 /  5.  3.100
  libswresample   3.  3.100 /  3.  3.100
  libpostproc    55.  3.100 / 55.  3.100
Input #0, concat, from 'concat.txt':
  Duration: 00:00:22.48, start: 0.000000, bitrate: 0 kb/s
    Stream #0:0: Video: png, rgba(pc), 1366x768, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
  Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x5651f1a6e8c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x5651f1a6e8c0] profile High 4:4:4 Predictive, level 3.2, 4:4:4 8-bit
[libx264 @ 0x5651f1a6e8c0] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'out.mp4':
  Metadata:
    encoder         : Lavf58.20.100
    Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv444p, 1366x768, q=-1--1, 25 fps, 12800 tbn, 25 tbc
    Metadata:
      encoder         : Lavc58.35.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
frame=  483 fps=199 q=-1.0 Lsize=     258kB time=00:00:19.20 bitrate= 110.2kbits/s dup=445 drop=0 speed=7.89x
video:252kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.564829%
[libx264 @ 0x5651f1a6e8c0] frame I:5     Avg QP: 7.86  size: 28761
[libx264 @ 0x5651f1a6e8c0] frame P:121   Avg QP:14.58  size:   729
[libx264 @ 0x5651f1a6e8c0] frame B:357   Avg QP:13.34  size:    70
[libx264 @ 0x5651f1a6e8c0] consecutive B-frames:  1.2%  0.0%  1.9% 96.9%
[libx264 @ 0x5651f1a6e8c0] mb I  I16..4: 88.2%  0.0% 11.8%
[libx264 @ 0x5651f1a6e8c0] mb P  I16..4:  1.3%  0.0%  0.3%  P16..4:  0.2%  0.0%  0.0%  0.0%  0.0%    skip:98.1%
[libx264 @ 0x5651f1a6e8c0] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8:  0.9%  0.0%  0.0%  direct: 0.0%  skip:99.1%  L0:59.4% L1:40.6% BI: 0.0%
[libx264 @ 0x5651f1a6e8c0] coded y,u,v intra: 7.9% 0.7% 1.3% inter: 0.0% 0.0% 0.0%
[libx264 @ 0x5651f1a6e8c0] i16 v,h,dc,p: 81% 18%  1%  0%
[libx264 @ 0x5651f1a6e8c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 33% 28% 22%  2%  3%  3%  3%  3%  3%
[libx264 @ 0x5651f1a6e8c0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x5651f1a6e8c0] ref P L0: 72.5% 11.9% 12.4%  3.1%
[libx264 @ 0x5651f1a6e8c0] ref B L0: 50.4% 48.1%  1.4%
[libx264 @ 0x5651f1a6e8c0] ref B L1: 98.9%  1.1%
[libx264 @ 0x5651f1a6e8c0] kb/s:106.45


    Thanks for your help!
