
Other articles (50)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • Making files available

    14 April 2011, by

    By default, when it is initialised, MediaSPIP does not let visitors download the files, whether they are originals or the result of their transformation or encoding. It only lets them be viewed.
    However, it is possible, and easy, to give visitors access to these documents in various forms.
    All of this happens in the skeleton configuration page. Go to the channel's administration area and choose in the navigation (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

On other sites (7381)

  • Using libav to encode RGBA frames into MP4 but the output is a mess

    5 October 2019, by Cu2S

    I'm trying to decode a video into RGB frames, postprocess those frames, and finally encode them back into a video. But the output video is a complete mess:
    Screenshot from PotPlayer

    I wrote a minimal example to illustrate my idea. First, I read some information from the source video:

       AVFormatContext* inputFormatCtx = nullptr;
       int ret = avformat_open_input(&inputFormatCtx, inputParamsVideo, nullptr, nullptr);
       assert(ret >= 0);
       ret = avformat_find_stream_info(inputFormatCtx, NULL);
       av_dump_format(inputFormatCtx, 0, inputParamsVideo, 0);

       assert(ret >= 0);
       AVStream* inputVideoStream = nullptr;
       for (int i = 0; i < inputFormatCtx->nb_streams; i++)
       {
           const auto inputStream = inputFormatCtx->streams[i];
           if (inputStream->codec->codec_type == AVMEDIA_TYPE_VIDEO)
           {
               inputVideoStream = inputStream;
               break;
           }
       }

       assert(inputVideoStream != nullptr);
       AVCodecParameters* inputParams = inputVideoStream->codecpar;
       AVRational framerate = inputVideoStream->codec->framerate;
       auto gop_size = inputVideoStream->codec->gop_size;
       auto maxBFrames = inputVideoStream->codec->max_b_frames;

    Then I assign the information to the output stream:

    AVFormatContext *outputAVFormat = nullptr;
    avformat_alloc_output_context2(&outputAVFormat, nullptr, nullptr, kOutputPath);
    assert(outputAVFormat);
    AVCodec* codec = avcodec_find_encoder(outputAVFormat->oformat->video_codec);
    assert(codec);
    AVCodecContext* encodingCtx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(encodingCtx, inputParams);
    encodingCtx->time_base = av_inv_q(framerate);
    encodingCtx->max_b_frames = maxBFrames;
    encodingCtx->gop_size = gop_size;


    if (outputAVFormat->oformat->flags & AVFMT_GLOBALHEADER)
       encodingCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    AVStream* outStream = avformat_new_stream(outputAVFormat, nullptr);
    assert(outStream != nullptr);
    ret = avcodec_parameters_from_context(outStream->codecpar, encodingCtx);
    assert(ret >= 0);
    outStream->time_base = encodingCtx->time_base;
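
    For reference, `av_inv_q` simply swaps the numerator and denominator of a rational, which is how a frame rate (frames per second) becomes a time base (seconds per tick) in the snippet above. A minimal self-contained sketch, using a hypothetical `Rational` struct standing in for FFmpeg's `AVRational`:

    ```cpp
    #include <cassert>

    // Hypothetical stand-in for FFmpeg's AVRational (assumption, not the real type).
    struct Rational { int num; int den; };

    // av_inv_q-style inversion: a frame rate of num/den fps becomes
    // a time base of den/num seconds per tick, as assigned to
    // encodingCtx->time_base above.
    Rational inv(Rational q) { return Rational{q.den, q.num}; }

    int main() {
        Rational framerate{30000, 1001};      // ~29.97 fps, NTSC-style
        Rational time_base = inv(framerate);  // 1001/30000 seconds per tick
        assert(time_base.num == 1001 && time_base.den == 30000);
        return 0;
    }
    ```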

    Then I convert the RGBA frames (read from image files) into YUV420P via sws_scale, and encode:

       ret = avcodec_open2(encodingCtx, codec, nullptr);
       assert(ret >= 0);
       av_dump_format(outputAVFormat, 0, kOutputPath, 1);

       ret = avio_open(&outputAVFormat->pb, kOutputPath, AVIO_FLAG_WRITE);
       assert(ret >= 0);
       ret = avformat_write_header(outputAVFormat, nullptr);
       assert(ret >= 0);

       AVFrame* frame = av_frame_alloc();
       frame->width = inputParams->width;
       frame->height = inputParams->height;
       frame->format = inputParams->format;
       frame->pts = 0;
       assert(ret >= 0);

       ret = av_frame_get_buffer(frame, 32);
       int frameCount = 0;
       assert(ret >= 0);
       ret = av_frame_make_writable(frame);
       assert(ret >= 0);
       SwsContext* swsContext = sws_getContext(inputParams->width, inputParams->height,
           AV_PIX_FMT_RGBA, frame->width,
           frame->height, static_cast<AVPixelFormat>(inputParams->format),
           SWS_BILINEAR, NULL, NULL, NULL);


       for (auto inputPicPath : std::filesystem::directory_iterator(kInputDir))
       {
           int width, height, comp;
           unsigned char* data = stbi_load(inputPicPath.path().string().c_str(), &width, &height, &comp, 4);
           int srcStrides[1] = { 4 * width };
           int ret = sws_scale(swsContext, &data, srcStrides, 0, height, frame->data,
               frame->linesize);
           assert(ret >= 0);
           frame->pts = frameCount;
           //frame->pict_type = AV_PICTURE_TYPE_I;
           frameCount += 1;
           encode(encodingCtx, frame, 0, outputAVFormat);

           stbi_image_free(data);
       }

       while (encode(encodingCtx, nullptr, 0, outputAVFormat))
       {
           ;
       }

       static bool encode(AVCodecContext* enc_ctx, AVFrame* frame, std::uint32_t streamIndex,
           AVFormatContext * formatCtx)
       {
           int ret;
           int got_output = 0;
           AVPacket packet = {};
            av_init_packet(&packet);
            ret = avcodec_encode_video2(enc_ctx, &packet, frame, &got_output);
           assert(ret >= 0);
           if (got_output) {
               packet.stream_index = streamIndex;
                av_packet_rescale_ts(&packet, enc_ctx->time_base, formatCtx->streams[streamIndex]->time_base);
                ret = av_interleaved_write_frame(formatCtx, &packet);
               assert(ret >= 0);
               return true;
           }
           else {
               return false;
           }
       }
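
    As an aside, the per-pixel math behind the RGBA → YUV conversion that sws_scale performs above can be illustrated with the standard limited-range BT.601 integer approximation (a sketch only: the actual video here is tagged bt709, and sws_scale additionally subsamples chroma for yuv420p):

    ```cpp
    #include <cassert>
    #include <cstdint>

    struct YUV { uint8_t y, u, v; };

    // Limited-range BT.601 integer approximation of RGB -> YUV,
    // illustrative of what libswscale does per pixel (sketch only).
    YUV rgbToYuv601(uint8_t r, uint8_t g, uint8_t b) {
        int y = (( 66 * r + 129 * g +  25 * b + 128) >> 8) + 16;
        int u = ((-38 * r -  74 * g + 112 * b + 128) >> 8) + 128;
        int v = ((112 * r -  94 * g -  18 * b + 128) >> 8) + 128;
        return YUV{ (uint8_t)y, (uint8_t)u, (uint8_t)v };
    }

    int main() {
        YUV white = rgbToYuv601(255, 255, 255);
        assert(white.y == 235 && white.u == 128 && white.v == 128); // limited-range white
        YUV black = rgbToYuv601(0, 0, 0);
        assert(black.y == 16 && black.u == 128 && black.v == 128);  // limited-range black
        return 0;
    }
    ```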

    Finally, I cleaned things up:

       av_write_trailer(outputAVFormat);
       sws_freeContext(swsContext);
       avcodec_free_context(&encodingCtx);
       avio_closep(&outputAVFormat->pb);
       avformat_free_context(outputAVFormat);
       av_frame_free(&frame);

    I dumped my input format and my output format:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'H:\Me.MP4':
     Metadata:
       major_brand     : mp42
       minor_version   : 1
       compatible_brands: mp41mp42isom
       creation_time   : 2019-04-03T05:44:22.000000Z
     Duration: 00:00:06.90, start: 0.000000, bitrate: 1268 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 540x960, 1238 kb/s, 29.86 fps, 30 tbr, 600 tbn, 1200 tbc (default)
       Metadata:
         creation_time   : 2019-04-03T05:44:22.000000Z
         handler_name    : Core Media Video
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, stereo, fltp, 24 kb/s (default)
       Metadata:
         creation_time   : 2019-04-03T05:44:22.000000Z
         handler_name    : Core Media Audio
    [libx264 @ 000002126F90C1C0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 000002126F90C1C0] profile High, level 3.1, 4:2:0, 8-bit
    [libx264 @ 000002126F90C1C0] 264 - core 157 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=2 keyint=12 keyint_min=1 scenecut=40 intra_refresh=0 rc_lookahead=12 rc=abr mbtree=1 bitrate=1238 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to './output.mp4':
       Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 540x960, q=2-31, 1238 kb/s, 29.86 tbn

    Update:

    After I deleted

    encodingCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    the output video is correct. Outputting AVI also works.

  • Manual encoding into MPEG-TS

    4 July 2014, by Lane

    SO...

    I am trying to take an H264 Annex B byte stream video and encode it into MPEG-TS in pure Java. My goal is to create a minimal, valid, single-program MPEG-TS stream without including any timing information (PCR, PTS, DTS).

    I am currently at the point where my generated file can be passed to ffmpeg (ffmpeg -i myVideo.ts) and ffmpeg reports...

    [NULL @ 0x7f8103022600] start time is not set in estimate_timings_from_pts

    Input #0, mpegts, from 'video.ts':
    Duration: N/A, bitrate: N/A
    Program 1
     Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc

    ...it seems like this warning about the start time is not a big deal... and ffmpeg is unable to determine how long the video is. If I create another MPEG-TS file from my video file (ffmpeg -i myVideo.ts -vcodec copy validVideo.ts) and run ffmpeg -i validVideo.ts, I get...

    Input #0, mpegts, from 'video2.ts':
    Duration: 00:00:11.61, start: 1.400000, bitrate: 3325 kb/s
    Program 1
     Metadata:
       service_name    : Service01
       service_provider: FFmpeg
     Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc

    ...so you can see the timing information and bitrate is there and so is the metadata.

    My H264 video is comprised of only I and P Frames (with the SPS and PPS preceding the I Frame, of course), and the way I am creating my MPEG-TS stream is...

    1. Write a single PAT at the beginning of the file
    2. Write a single PMT at the beginning of the file
    3. Create TS and PES packets from SPS, PPS and I Frame (AUD NALs too, if this is required?)
    4. Create TS and PES packets from P Frame (again, AUD NALs too, if required)
    5. For the last payload of either an I Frame or P Frame, add filler bytes to an adaptation field to make sure it fits into a full TS packet
    6. Repeat 3-5 for the entire file
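
    The fixed 4-byte TS header produced in steps 3-4 can be sketched as follows; packing PID 0x100 with payload_unit_start_indicator set, payload-only adaptation field control, and continuity counter 0 yields exactly the 47 41 00 10 prefix visible in the I Frame packet dump below:

    ```cpp
    #include <array>
    #include <cassert>
    #include <cstdint>

    // Sketch of MPEG-TS 4-byte packet header packing (ISO/IEC 13818-1).
    std::array<uint8_t, 4> tsHeader(uint16_t pid, bool payloadUnitStart,
                                    uint8_t adaptationFieldControl,
                                    uint8_t continuityCounter) {
        std::array<uint8_t, 4> h{};
        h[0] = 0x47;                                 // sync byte
        h[1] = (payloadUnitStart ? 0x40 : 0x00)      // PUSI bit
             | ((pid >> 8) & 0x1F);                  // PID, high 5 bits
        h[2] = pid & 0xFF;                           // PID, low 8 bits
        h[3] = ((adaptationFieldControl & 0x3) << 4) // 01 = payload only, 11 = AF + payload
             | (continuityCounter & 0x0F);
        return h;
    }

    int main() {
        auto h = tsHeader(0x100, true, 0x1, 0);      // start of an access unit on PID 0x100
        assert(h[0] == 0x47 && h[1] == 0x41 && h[2] == 0x00 && h[3] == 0x10);
        return 0;
    }
    ```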

    ...my PAT looks like this...

    4740 0010 0000 b00d 0001 c100 0000 01f0
    002a b104 b2ff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff

    ...and my PMT looks like this...

    4750 0010
    0002 b012 0001 c100 00ff fff0 001b e100
    f000 c15b 41e0 ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff

    ...notice after the c100 00, the "ff ff", f0... says that we are not using a PCR... Also notice that I have updated my CRC to reflect this change to the PMT. My first I Frame packet looks like...
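
    The CRC mentioned above is CRC-32/MPEG-2 (polynomial 0x04C11DB7, initial value 0xFFFFFFFF, no bit reflection, no final XOR), computed over the PSI section from table_id up to, but not including, the CRC field itself. A minimal sketch of that checksum:

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <cstdint>

    // CRC-32/MPEG-2 as used for PAT/PMT section checksums (sketch).
    uint32_t crc32Mpeg2(const uint8_t* data, size_t len) {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; ++i) {
            crc ^= static_cast<uint32_t>(data[i]) << 24;  // feed byte MSB-first
            for (int bit = 0; bit < 8; ++bit)
                crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u
                                          : (crc << 1);
        }
        return crc;  // no final XOR for this CRC variant
    }

    int main() {
        const uint8_t check[] = "123456789";
        // Standard check value for CRC-32/MPEG-2.
        assert(crc32Mpeg2(check, 9) == 0x0376E6E7u);
        return 0;
    }
    ```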

    4741 0010 0000 01e0
    0000 8000 0000 0000 0109 f000 0000 0127
    4d40 288d 8d60 2802 dd80 b501 0101 4000
    00fa 4000 3a98 3a18 00b7 2000 3380 2ef2
    e343 0016 e400 0670 05de 5c16 345d c000
    0000 0128 ee3c 8000 0000 0165 8880 0020
    0000 4fe5 63b5 4e90 b11c 9f8f f891 10f3
    13b1 666b 9fc6 03e9 e321 36bf 1788 347b
    eb23 fc89 5772 6e2e 1714 96df ed16 9b30
    252d ceb7 07e9 a0c7 c6e7 9515 be87 2df1
    81f3 b9d2 ba5f 243e 2d5c cba2 8ca5 b798
    6bec 8c43 0b5d bbda bc5b 6e7c e15c 84e8
    2f13 be84

    ...you’ll notice after the 01e0 0000, 8000 00 is the PES header extension where I specify no PTS / DTS and the remaining length is zero. My first P Frame packet looks like...

    4741 001d
    0000 01e0 0000 8000 0000 0000 0109 f000
    0000 0141 9a00 0200 0593 ff45 a7ae 1acd
    f2d7 f9ec 557f cdb6 ba38 60d6 a626 5edb
    4bb9 9783 89e2 d7e1 102e 4625 2fbf ce16
    f952 d8c9 f027 e55a 6b2a 81c3 48d4 6a45
    050a f355 fbec db01 6562 6405 04aa e011
    50ec 0b45 45e5 0df7 2fed a3f8 ac13 2e69
    6739 6d81 f13d 2455 e6ca 1c6b dc96 65d5
    3bad f250 7dab 42e4 7ba9 f564 ee61 29fb
    1b2c 974c 6924 1a1f 99ef 063c b99a c507
    8c22 b0f8 b14c 3e4d 01d0 6120 4e19 8725
    2fda 6550 f907 3f87

    ...and whenever an I Frame or P Frame is ending, I have a TS packet with an adaptation field like...

    4701 003c b000 ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff

    ...where the first 0xb0 bytes are the adaptation field stuffing bytes and the remaining ones are the final bytes of the I or P Frame. So, as you can tell, I can use ffmpeg and pass it my file to create a valid movie in any format. However, I need the file I create to be in the proper format, and I cannot quite figure out what the last piece I am missing is. Any ideas?
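
    The stuffing arithmetic from step 5 can be sketched like this: a TS packet is 188 bytes with a 4-byte header, so a trailing payload of N < 184 bytes needs an adaptation field of 184 - N bytes (one length byte, then, when there is room, one flags byte followed by 0xFF stuffing):

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Sketch: build the stuffing-only adaptation field that pads a final
    // payload of payloadLen (< 184) bytes out to a full 188-byte TS packet.
    std::vector<uint8_t> stuffingAdaptationField(int payloadLen) {
        int afLength = 183 - payloadLen;  // value of the adaptation_field_length byte
        std::vector<uint8_t> af;
        af.push_back(static_cast<uint8_t>(afLength));
        if (afLength >= 1) {
            af.push_back(0x00);                       // flags byte: no PCR, no splice point
            af.insert(af.end(), afLength - 1, 0xFF);  // stuffing bytes
        }
        return af;
    }

    int main() {
        // 6 trailing payload bytes -> 4-byte header + AF + payload = 188 bytes.
        auto af = stuffingAdaptationField(6);
        assert(4 + static_cast<int>(af.size()) + 6 == 188);
        assert(af[0] == 177 && af[1] == 0x00 && af.back() == 0xFF);
        return 0;
    }
    ```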

  • ffmpeg 4 : Using the stream_loop parameter to loop the audio during a video ends up with an infinite loop

    17 June 2020, by JarsOfJam-Scheduler

    Summary

    1. Context
    2. The software I use
    3. The problem
    4. Results
       4.1. Actual Results
       4.2. Expected Results
    5. What did I try to fix the bug?
    6. How to reproduce this bug: minimal and testable example with the provided required data
    7. The question
    8. Sources

    Context

    I want to set a WAV audio file as the background sound of a WEBM video. The video can be shorter or longer than the audio. At the moment I add the audio over the video, I don't know the length of either stream. The audio must repeat until the video ends (the audio can be truncated if the video ends before the end of the last repetition of the audio).

    The software I use

    I use ffmpeg version 4.2.2-1ubuntu1 18.04.sav0.

    The problem

    ffmpeg seems to enter an infinite loop when it processes the mixing of the audio and the video. Also, the length of the output file being generated (which contains both video and audio) is equal to the length of the audio, instead of the length of the video.

    The problem seems to be triggered by this command line:

    ffmpeg -i directory_1/video.webm -stream_loop -1 -fflags +shortest -max_interleave_delta 50000 -i directory_2/audio.wav directory_3/video_and_audio.webm
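
    For comparison: in the command above, `-fflags +shortest` is placed before the second input, so it applies to that input rather than to the output muxer; the usual way to stop writing when the shortest stream (here, the video) ends is the output option `-shortest`. A commonly suggested variant of the same command, shown as an untested sketch with the same hypothetical directory names:

    ```shell
    # Loop the audio input indefinitely, map video from input 0 and audio
    # from input 1, and stop the output when the shortest stream (the video) ends.
    ffmpeg -i directory_1/video.webm -stream_loop -1 -i directory_2/audio.wav \
           -map 0:v -map 1:a -shortest directory_3/video_and_audio.webm
    ```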


    Results

    Actual Results

    Three things:

    1. The infinite loop of the ffmpeg process: I must stop the ffmpeg process manually.

    2. The output video file with music (which was still being generated, but was output anyway): it contains both audio and video, but its length is equal to the length of the audio, instead of the length of the video.

    3. The following output logs:

    ffmpeg version 4.2.2-1ubuntu1 18.04.sav0 Copyright (c) 2000-2019 the FFmpeg developers
      built with gcc 7 (Ubuntu 7.5.0-3ubuntu1 18.04)
      configuration: --prefix=/usr --extra-version='1ubuntu1 18.04.sav0' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
      libavutil      56. 31.100 / 56. 31.100
      libavcodec     58. 54.100 / 58. 54.100
      libavformat    58. 29.100 / 58. 29.100
      libavdevice    58.  8.100 / 58.  8.100
      libavfilter     7. 57.100 /  7. 57.100
      libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  5.100 /  5.  5.100
      libswresample   3.  5.100 /  3.  5.100
      libpostproc    55.  5.100 / 55.  5.100
    Input #0, matroska,webm, from 'youtubed/my_youtube_video.webm':
      Metadata:
        encoder         : Chrome
      Duration: N/A, start: 0.000000, bitrate: N/A
        Stream #0:0(eng): Video: vp8, yuv420p(progressive), 3200x1608, SAR 1:1 DAR 400:201, 1k tbr, 1k tbn, 1k tbc (default)
        Metadata:
          alpha_mode      : 1
    Guessed Channel Layout for Input Stream #1.0 : stereo
    Input #1, wav, from 'tmp_music/original_music.wav':
      Duration: 00:00:11.78, bitrate: 1411 kb/s
        Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
    Stream mapping:
      Stream #0:0 -> #0:0 (vp8 (native) -> vp9 (libvpx-vp9))
      Stream #1:0 -> #0:1 (pcm_s16le (native) -> opus (libopus))
    Press [q] to stop, [?] for help
    [libvpx-vp9 @ 0x5645268aed80] v1.8.2
    [libopus @ 0x5645268b09c0] No bit rate set. Defaulting to 96000 bps.
    Output #0, webm, to 'youtubed/my_youtube_video_with_music.webm':
      Metadata:
        encoder         : Lavf58.29.100
        Stream #0:0(eng): Video: vp9 (libvpx-vp9), yuv420p(progressive), 3200x1608 [SAR 1:1 DAR 400:201], q=-1--1, 200 kb/s, 1k fps, 1k tbn, 1k tbc (default)
        Metadata:
          alpha_mode      : 1
          encoder         : Lavc58.54.100 libvpx-vp9
        Side data:
          cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
        Stream #0:1: Audio: opus (libopus), 48000 Hz, stereo, s16, 96 kb/s
        Metadata:
          encoder         : Lavc58.54.100 libopus

    Expected Results

    1. No infinite loop during the ffmpeg process.

    2. Concerning the output logs, I don't know what they should look like.

    3. The output file with the audio and the video should behave as follows:

       3.1. If the video is longer than the audio, the audio is repeated until it exactly fits the video. The audio can be truncated.

       3.2. If the video is shorter than the audio, the audio is truncated and exactly fits the video.

       3.3. If both video and audio are the same length, the audio exactly fits the video.

    How to reproduce this bug? (+ required data)

    1. Download the following files (resp. audio and video) (I must refresh these download links every 24 hours):

       1.1. https://a.uguu.se/dmgsmItjJMDq_audio.wav

       1.2. https://a.uguu.se/w3qHDlGq6mOW_video.webm

    2. Move them into the directory/directories of your choice.

    3. Open your CLI, move to the adequate directory and copy/paste/execute the instruction given in Part "The problem" (don't forget to modify this instruction to use the directories chosen in step 2).

    4. You'll face my problem.

    What did I try to fix the bug?

    Nothing, since I don't even understand why the bug occurs.

    The question

    How should I correct my command so that it mixes these audio and video streams without any infinite loop during the ffmpeg process, keeping in mind that I don't know their lengths, and that the audio must be repeated to fit the video, even if the last repetition of the audio has to be truncated because the video stream ends first?

    Sources

    The source is the command line given in Part "The problem".