Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (73)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in on the site.
    The user can access profile editing from their author page; a link in the navigation, "Modifier votre profil", is (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" section of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language; in that case, it becomes greyed out in the configuration and (...)

On other sites (7642)

  • Libavformat/FFMPEG: Muxing into mp4 with AVFormatContext drops the final frame, depending on the number of frames

    27 October 2020, by Galen Lynch

    I am trying to use libavformat to create a .mp4 video
with a single h.264 video stream, but the final frame in the resulting file
often has a duration of zero and is effectively dropped from the video.
Strangely enough, whether the final frame is dropped or not depends on how many
frames I try to add to the file. Some simple testing that I outline below makes
me think that I am somehow misconfiguring either the AVFormatContext or the
h.264 encoder, resulting in two edit lists that sometimes chop off the final
frame. I will also post a simplified version of the code I am using, in case I'm
making some obvious mistake. Any help would be greatly appreciated: I've been
struggling with this issue for the past few days and have made little progress.

    


    I can recover the dropped frame by creating a new mp4 container with the ffmpeg
binary and the copy codec if I use the -ignore_editlist option. Inspecting
the file with a missing frame using ffprobe, mp4trackdump, or mp4file --dump shows that the final frame is dropped if its sample time is exactly the
same as the end of the edit list. When I make a file that has no dropped frames, it
still has two edit lists: the only difference is that the end time of the edit
list is beyond all samples in files that do not have dropped frames. Though this
is hardly a fair comparison, if I make a .png for each frame and then generate
a .mp4 with ffmpeg using the image2 codec and similar h.264 settings, I
produce a movie with all frames present, only one edit list, and similar PTS
times to my mangled movies with two edit lists. In this case, the edit list
always ends after the last frame/sample time.
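
    As a side note, a minimal sketch of that workaround (assuming the broken file is called testing.mp4 and that ffmpeg is on the PATH; -ignore_editlist is a private option of the mov/mp4 demuxer, so it goes before -i):

import subprocess

# Remux with stream copy while ignoring the edit lists (illustrative sketch).
subprocess.run(
    [
        "ffmpeg",
        "-ignore_editlist", "1",  # demuxer option: drop the edit lists
        "-i", "testing.mp4",
        "-c", "copy",             # no re-encoding, just rewrite the container
        "recovered.mp4",
    ],
    check=True,
)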

    


    I am using this command to determine the number of frames in the resulting stream,
though I also get the same number with other utilities:

    


    ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 video_file_name.mp4


    


    Simple inspection of the file with ffprobe shows no obviously alarming signs to
me, besides the framerate being affected by the missing frame (the target was
24):

    


    $ ffprobe -hide_banner testing.mp4
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'testing.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.45.100
  Duration: 00:00:04.13, start: 0.041016, bitrate: 724 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 100x100, 722 kb/s, 24.24 fps, 24 tbr, 12288 tbn, 48 tbc (default)
    Metadata:
      handler_name    : VideoHandler


    


    The files that I generate programmatically always have two edit lists, one of
which is very short. In files both with and without a missing frame, the
duration of one of the frames is 0, while all the others have the same duration
(512). You can see this in the ffmpeg output for this file that I tried to put
100 frames into, though only 99 are visible despite the file containing all 100
samples.
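
    As a quick sanity check on those numbers (my own illustrative arithmetic from the ffprobe/ffmpeg output shown here, not part of the original post):

# Illustrative arithmetic only, using the timescale and durations reported above.
timescale = 12288        # media timescale in ticks per second
sample_duration = 512    # duration of every sample except the last one

print(timescale / sample_duration)  # 24.0 -> one 512-tick sample per 1/24 s frame
print(504 / timescale)              # 0.041015625 -> the 504-tick (41 ms) first edit,
                                    # matching ffprobe's "start: 0.041016"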

    


    $ ffmpeg -hide_banner -y -v 9 -loglevel 99 -i testing.mp4
...
<edited to remove the class printing>
type:'edts' parent:'trak' sz: 48 100 948
type:'elst' parent:'edts' sz: 40 8 40
track[0].edit_count = 2
duration=41 time=-1 rate=1.000000
duration=4125 time=0 rate=1.000000
type:'mdia' parent:'trak' sz: 808 148 948
type:'mdhd' parent:'mdia' sz: 32 8 800
type:'hdlr' parent:'mdia' sz: 45 40 800
ctype=[0][0][0][0]
stype=vide
type:'minf' parent:'mdia' sz: 723 85 800
type:'vmhd' parent:'minf' sz: 20 8 715
type:'dinf' parent:'minf' sz: 36 28 715
type:'dref' parent:'dinf' sz: 28 8 28
Unknown dref type 0x206c7275 size 12
type:'stbl' parent:'minf' sz: 659 64 715
type:'stsd' parent:'stbl' sz: 151 8 651
size=135 4CC=avc1 codec_type=0
type:'avcC' parent:'stsd' sz: 49 8 49
type:'stts' parent:'stbl' sz: 32 159 651
track[0].stts.entries = 2
sample_count=99, sample_duration=512
sample_count=1, sample_duration=0
...
AVIndex stream 0, sample 99, offset 5a0ed, dts 50688, size 3707, distance 0, keyframe 1
Processing st: 0, edit list 0 - media time: -1, duration: 504
Processing st: 0, edit list 1 - media time: 0, duration: 50688
type:'udta' parent:'moov' sz: 98 1072 1162
...

    The last frame has zero duration:

    $ mp4trackdump -v testing.mp4
...
mp4file testing.mp4, track 1, samples 100, timescale 12288
sampleId      1, size  6943 duration      512 time        0 00:00:00.000 S
sampleId      2, size  3671 duration      512 time      512 00:00:00.041 S
...
sampleId     99, size  3687 duration      512 time    50176 00:00:04.083 S
sampleId    100, size  3707 duration        0 time    50688 00:00:04.125 S

    Non-mangled videos that I generate have similar structure, as you can see in
this video that had 99 input frames, all of which are visible in the output.
Even though the sample_duration is set to zero for one of the samples in the
stts box, it is not dropped from the frame count or when reading the frames back
in with ffmpeg.

    $ ffmpeg -hide_banner -y -v 9 -loglevel 99 -i testing_99.mp4
...
type:'elst' parent:'edts' sz: 40 8 40
track[0].edit_count = 2
duration=41 time=-1 rate=1.000000
duration=4084 time=0 rate=1.000000
...
track[0].stts.entries = 2
sample_count=98, sample_duration=512
sample_count=1, sample_duration=0
...
AVIndex stream 0, sample 98, offset 5d599, dts 50176, size 3833, distance 0, keyframe 1
Processing st: 0, edit list 0 - media time: -1, duration: 504
Processing st: 0, edit list 1 - media time: 0, duration: 50184
...

    $ mp4trackdump -v testing_99.mp4
...
sampleId     98, size  3814 duration      512 time    49664 00:00:04.041 S
sampleId     99, size  3833 duration        0 time    50176 00:00:04.083 S

    One difference that jumps out to me is that the mangled file's second edit list
ends at time 50688, which coincides with the last sample, while the non-mangled
file's edit list ends at 50184, which is after the time of the last sample
at 50176. As I mentioned before, whether the last frame is clipped depends on
the number of frames I encode and mux into the container: 100 input frames
result in 1 dropped frame, 99 result in 0, 98 in 0, 97 in 1, etc.
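
    To make the comparison explicit, here is a small sketch of the conversion between the movie timescale (1000) used by the edit lists and the media timescale (12288) used by the samples; the numbers are the ones reported above, and the arithmetic is mine, not part of the original post:

# Edit-list durations are in the movie timescale (1000 ticks/s);
# sample times are in the media timescale (12288 ticks/s).
movie_timescale = 1000
media_timescale = 12288

print(4125 / movie_timescale * media_timescale)  # 50688.0: ends exactly at the start
                                                 # of testing.mp4's zero-duration last
                                                 # sample, so that sample is cut
print(4084 / movie_timescale * media_timescale)  # 50184.192: past testing_99.mp4's
                                                 # last sample time (50176), so that
                                                 # frame survives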


    Here is the code that I used to generate these files, which is a MWE script
version of library functions that I am modifying. It is written in Julia,
which I do not think is important here, and calls the FFMPEG library version
4.3.1. It's more or less a direct translation of the FFMPEG muxing
demo, although the codec context here is created before the format context. I am
presenting the code that interacts with ffmpeg first, although it relies on some
helper code that I will put below.

    The helper code just makes it easier to work with nested C structs in Julia, and
allows . syntax in Julia to be used in place of C's arrow (->) operator for
field access of struct pointers. Libav structs such as AVFrame appear as a
thin wrapper type AVFramePtr, and similarly AVStream appears as
AVStreamPtr, etc. These act like single or double pointers for the purposes
of function calls, depending on the function's type signature. Hopefully it will
be clear enough to understand if you are familiar with working with libav in C,
and I don't think looking at the helper code should be necessary if you don't
want to run the code.

    # Function to transfer array to AVPicture/AVFrame
function transfer_img_buf_to_frame!(frame, img)
    img_pointer = pointer(img)
    data_pointer = frame.data[1] # Base-1 indexing, get pointer to first data buffer in frame
    for h = 1:frame.height
        data_line_pointer = data_pointer + (h-1) * frame.linesize[1] # base-1 indexing
        img_line_pointer = img_pointer + (h-1) * frame.width
        unsafe_copyto!(data_line_pointer, img_line_pointer, frame.width) # base-1 indexing
    end
end

# Function to transfer AVFrame to AVCodecContext, and AVPacket to AVFormatContext
function encode_mux!(packet, format_context, frame, codec_context; flush = false)
    if flush
        fret = avcodec_send_frame(codec_context, C_NULL)
    else
        fret = avcodec_send_frame(codec_context, frame)
    end
    if fret < 0 && !in(fret, [-Libc.EAGAIN, VIO_AVERROR_EOF])
        error("Error $fret sending a frame for encoding")
    end

    pret = Cint(0)
    while pret >= 0
        pret = avcodec_receive_packet(codec_context, packet)
        if pret == -Libc.EAGAIN || pret == VIO_AVERROR_EOF
             break
        elseif pret < 0
            error("Error $pret during encoding")
        end
        stream = format_context.streams[1] # Base-1 indexing
        av_packet_rescale_ts(packet, codec_context.time_base, stream.time_base)
        packet.stream_index = 0
        ret = av_interleaved_write_frame(format_context, packet)
        ret < 0 && error("Error muxing packet: $ret")
    end
    if !flush && fret == -Libc.EAGAIN && pret != VIO_AVERROR_EOF
        fret = avcodec_send_frame(codec_context, frame)
        if fret < 0 && fret != VIO_AVERROR_EOF
            error("Error $fret sending a frame for encoding")
        end
    end
    return pret
end

# Set parameters of test movie
nframe = 100
width, height = 100, 100
framerate = 24
gop = 0
codec_name = "libx264"
filename = "testing.mp4"

((width % 2 !=0) || (height % 2 !=0)) && error("Encoding error: Image dims must be a multiple of two")

# Make test images
imgstack = map(x->rand(UInt8,width,height),1:nframe);

pix_fmt = AV_PIX_FMT_GRAY8
framerate_rat = Rational(framerate)

codec = avcodec_find_encoder_by_name(codec_name)
codec == C_NULL && error("Codec '$codec_name' not found")

# Allocate AVCodecContext
codec_context_p = avcodec_alloc_context3(codec) # raw pointer
codec_context_p == C_NULL && error("Could not allocate AVCodecContext")
# Easier to work with pointer that acts like a c struct pointer, type defined below
codec_context = AVCodecContextPtr(codec_context_p)

codec_context.width = width
codec_context.height = height
codec_context.time_base = AVRational(1/framerate_rat)
codec_context.framerate = AVRational(framerate_rat)
codec_context.pix_fmt = pix_fmt
codec_context.gop_size = gop

ret = avcodec_open2(codec_context, codec, C_NULL)
ret < 0 && error("Could not open codec: Return code $(ret)")

# Allocate AVFrame and wrap it in a Julia convenience type
frame_p = av_frame_alloc()
frame_p == C_NULL && error("Could not allocate AVFrame")
frame = AVFramePtr(frame_p)

frame.format = pix_fmt
frame.width = width
frame.height = height

# Allocate picture buffers for frame
ret = av_frame_get_buffer(frame, 0)
ret < 0 && error("Could not allocate the video frame data")

# Allocate AVPacket and wrap it in a Julia convenience type
packet_p = av_packet_alloc()
packet_p == C_NULL && error("Could not allocate AVPacket")
packet = AVPacketPtr(packet_p)

# Allocate AVFormatContext and wrap it in a Julia convenience type
format_context_dp = Ref(Ptr{AVFormatContext}()) # double pointer
ret = avformat_alloc_output_context2(format_context_dp, C_NULL, C_NULL, filename)
if ret != 0 || format_context_dp[] == C_NULL
    error("Could not allocate AVFormatContext")
end
format_context = AVFormatContextPtr(format_context_dp)

# Add video stream to AVFormatContext and configure it to use the encoder made above
stream_p = avformat_new_stream(format_context, C_NULL)
stream_p == C_NULL && error("Could not allocate output stream")
stream = AVStreamPtr(stream_p) # Wrap this pointer in a convenience type

stream.time_base = codec_context.time_base
stream.avg_frame_rate = 1 / convert(Rational, stream.time_base)
ret = avcodec_parameters_from_context(stream.codecpar, codec_context)
ret < 0 && error("Could not set parameters of stream")

# Open the AVIOContext
pb_ptr = field_ptr(format_context, :pb)
# This following is just a call to avio_open, with a bit of extra protection
# so the Julia garbage collector does not destroy format_context during the call
ret = GC.@preserve format_context avio_open(pb_ptr, filename, AVIO_FLAG_WRITE)
ret < 0 && error("Could not open file $filename for writing")

# Write the header
ret = avformat_write_header(format_context, C_NULL)
ret < 0 && error("Could not write header")

# Encode and mux each frame
for i in 1:nframe # iterate from 1 to nframe
    img = imgstack[i] # base-1 indexing
    ret = av_frame_make_writable(frame)
    ret < 0 && error("Could not make frame writable")
    transfer_img_buf_to_frame!(frame, img)
    frame.pts = i
    encode_mux!(packet, format_context, frame, codec_context)
end

# Flush the encoder
encode_mux!(packet, format_context, frame, codec_context; flush = true)

# Write the trailer
av_write_trailer(format_context)

# Close the AVIOContext
pb_ptr = field_ptr(format_context, :pb) # get pointer to format_context.pb
ret = GC.@preserve format_context avio_closep(pb_ptr) # simply a call to avio_closep
ret < 0 && error("Could not free AVIOContext")

# Deallocation
avcodec_free_context(codec_context)
av_frame_free(frame)
av_packet_free(packet)
avformat_free_context(format_context)

    Below is the helper code that makes accessing pointers to nested C structs not a
total pain in Julia. If you try to run the code yourself, please enter this in
before the logic of the code shown above. It requires
VideoIO.jl, a Julia wrapper to libav.

    # Convenience type and methods to make the above code look more like C
using Base: RefValue, fieldindex

import Base: unsafe_convert, getproperty, setproperty!, getindex, setindex!,
    unsafe_wrap, propertynames

# VideoIO is a Julia wrapper to libav
#
# Bring bindings to libav library functions into namespace
using VideoIO: AVCodecContext, AVFrame, AVPacket, AVFormatContext, AVRational,
    AVStream, AV_PIX_FMT_GRAY8, AVIO_FLAG_WRITE, AVFMT_NOFILE,
    avformat_alloc_output_context2, avformat_free_context, avformat_new_stream,
    av_dump_format, avio_open, avformat_write_header,
    avcodec_parameters_from_context, av_frame_make_writable, avcodec_send_frame,
    avcodec_receive_packet, av_packet_rescale_ts, av_interleaved_write_frame,
    avformat_query_codec, avcodec_find_encoder_by_name, avcodec_alloc_context3,
    avcodec_open2, av_frame_alloc, av_frame_get_buffer, av_packet_alloc,
    avio_closep, av_write_trailer, avcodec_free_context, av_frame_free,
    av_packet_free

# Submodule of VideoIO
using VideoIO: AVCodecs

# Need to import this function from Julia's Base to add more methods
import Base: convert

const VIO_AVERROR_EOF = -541478725 # AVERROR_EOF

# Methods to convert between AVRational and Julia's Rational type, because it's
# hard to access the AV rational macros with Julia's C interface
convert(::Type{Rational{T}}, r::AVRational) where T = Rational{T}(r.num, r.den)
convert(::Type{Rational}, r::AVRational) = Rational(r.num, r.den)
convert(::Type{AVRational}, r::Rational) = AVRational(numerator(r), denominator(r))

"""
    mutable struct NestedCStruct{T}

Wraps a pointer to a C struct, and acts like a double pointer to that memory.
The methods below will automatically convert it to a single pointer if needed
for a function call, and make interacting with it in Julia look (more) similar
to interacting with it in C, except '->' in C is replaced by '.' in Julia.
"""
mutable struct NestedCStruct{T}
    data::RefValue{Ptr{T}}
end
NestedCStruct{T}(a::Ptr) where T = NestedCStruct{T}(Ref(a))
NestedCStruct(a::Ptr{T}) where T = NestedCStruct{T}(a)

const AVCodecContextPtr = NestedCStruct{AVCodecContext}
const AVFramePtr = NestedCStruct{AVFrame}
const AVPacketPtr = NestedCStruct{AVPacket}
const AVFormatContextPtr = NestedCStruct{AVFormatContext}
const AVStreamPtr = NestedCStruct{AVStream}

function field_ptr(::Type{S}, struct_pointer::Ptr{T}, field::Symbol,
                           index::Integer = 1) where {S,T}
    fieldpos = fieldindex(T, field)
    field_pointer = convert(Ptr{S}, struct_pointer) +
        fieldoffset(T, fieldpos) + (index - 1) * sizeof(S)
    return field_pointer
end

field_ptr(a::Ptr{T}, field::Symbol, args...) where T =
    field_ptr(fieldtype(T, field), a, field, args...)

function check_ptr_valid(p::Ptr, err::Bool = true)
    valid = p != C_NULL
    err && !valid && error("Invalid pointer")
    valid
end

unsafe_convert(::Type{Ptr{T}}, ap::NestedCStruct{T}) where T =
    getfield(ap, :data)[]
unsafe_convert(::Type{Ptr{Ptr{T}}}, ap::NestedCStruct{T}) where T =
    unsafe_convert(Ptr{Ptr{T}}, getfield(ap, :data))

function check_ptr_valid(a::NestedCStruct{T}, args...) where T
    p = unsafe_convert(Ptr{T}, a)
    GC.@preserve a check_ptr_valid(p, args...)
end

nested_wrap(x::Ptr{T}) where T = NestedCStruct(x)
nested_wrap(x) = x

function getproperty(ap::NestedCStruct{T}, s::Symbol) where T
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{T}, ap)
    res = GC.@preserve ap unsafe_load(field_ptr(p, s))
    nested_wrap(res)
end

function setproperty!(ap::NestedCStruct{T}, s::Symbol, x) where T
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{T}, ap)
    fp = field_ptr(p, s)
    GC.@preserve ap unsafe_store!(fp, x)
end

function getindex(ap::NestedCStruct{T}, i::Integer) where T
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{T}, ap)
    res = GC.@preserve ap unsafe_load(p, i)
    nested_wrap(res)
end

function setindex!(ap::NestedCStruct{T}, i::Integer, x) where T
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{T}, ap)
    GC.@preserve ap unsafe_store!(p, x, i)
end

function unsafe_wrap(::Type{T}, ap::NestedCStruct{S}, i) where {S, T}
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{S}, ap)
    GC.@preserve ap unsafe_wrap(T, p, i)
end

function field_ptr(::Type{S}, a::NestedCStruct{T}, field::Symbol,
                           args...) where {S, T}
    check_ptr_valid(a)
    p = unsafe_convert(Ptr{T}, a)
    GC.@preserve a field_ptr(S, p, field, args...)
end

field_ptr(a::NestedCStruct{T}, field::Symbol, args...) where T =
    field_ptr(fieldtype(T, field), a, field, args...)

propertynames(ap::T) where {S, T<:NestedCStruct{S}} = (fieldnames(S)...,
                                                       fieldnames(T)...)



    Edit: Some things that I have already tried

    • Explicitly setting the stream duration to be the same number as the number of frames that I add, or a few more beyond that

    • Explicitly setting the stream start time to zero, while the first frame has a PTS of 1

    • Playing around with encoder parameters, as well as gop_size, using B frames, etc.

    • Setting the private data for the mov/mp4 muxer to set the movflag negative_cts_offsets

    • Changing the framerate

    • Tried different pixel formats, such as AV_PIX_FMT_YUV420P

    Also, to be clear: while I can just transfer the file into another container while ignoring the edit lists to work around this problem, I am hoping not to make damaged mp4 files in the first place.

  • How to play and seek fragmented MP4 audio using MSE SourceBuffer?

    29 June 2024, by Stefan Falk

    Note:

    If you end up here, you might want to take a look at shaka-player and the accompanying shaka-streamer. Use it. Don't implement this yourself unless you really have to.


    I have been trying for quite some time now to be able to play an audio track on Chrome, Firefox, Safari, etc., but I keep hitting brick walls. My problem currently is that I am just not able to seek within a fragmented MP4 (or MP3).

    At the moment I am converting audio files such as MP3 to fragmented MP4 (fMP4) and sending them chunk-wise to the client. What I do is define a CHUNK_DURATION_SEC (chunk duration in seconds) and compute a chunk size like this:

    chunksTotal = Math.ceil(this.track.duration / CHUNK_DURATION_SEC);
chunkSize = Math.ceil(this.track.fileSize / this.chunksTotal);

    With this I partition the audio file and can fetch it entirely, jumping chunkSize-many bytes for each chunk:

    -----------------------------------------
| chunk 1 | chunk 2 |   ...   | chunk n |
-----------------------------------------
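
    For illustration, the chunk boundaries that fall out of these two formulas can be computed like this (a sketch with made-up track metadata; the real duration and fileSize come from the server):

import math

# Hypothetical track metadata, only to illustrate the partitioning above.
CHUNK_DURATION_SEC = 20
duration = 187.0        # seconds
file_size = 7_480_000   # bytes

chunks_total = math.ceil(duration / CHUNK_DURATION_SEC)  # 10
chunk_size = math.ceil(file_size / chunks_total)         # 748000

for i in range(chunks_total):
    start = i * chunk_size
    end = min(start + chunk_size - 1, file_size - 1)  # Range headers are inclusive
    print(f"chunk {i + 1}: bytes={start}-{end}")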


    How audio files are converted to fMP4


    ffmpeg -i input.mp3 -acodec aac -b:a 256k -f mp4 \
       -movflags faststart+frag_every_frame+empty_moov+default_base_moof \
        output.mp4

    This seems to work with Chrome and Firefox (so far).


    How chunks are appended


    After following this example, and realizing that it's simply not working as it is explained here, I threw it away and started over from scratch. Unfortunately without success. It's still not working.


    The following code is supposed to play a track from the very beginning to the very end. However, I also need to be able to seek. So far, this is simply not working: seeking just stops the audio after the seeking event is triggered.

    The code


    /* Desired chunk duration in seconds. */
const CHUNK_DURATION_SEC = 20;

const AUDIO_EVENTS = [
  'ended',
  'error',
  'play',
  'playing',
  'seeking',
  'seeked',
  'pause',
  'timeupdate',
  'canplay',
  'loadedmetadata',
  'loadstart',
  'updateend',
];


class ChunksLoader {

  /** The total number of chunks for the track. */
  public readonly chunksTotal: number;

  /** The length of one chunk in bytes */
  public readonly chunkSize: number;

  /** Keeps track of requested chunks. */
  private readonly requested: boolean[];

  /** URL of endpoint for fetching audio chunks. */
  private readonly url: string;

  constructor(
    private track: Track,
    private sourceBuffer: SourceBuffer,
    private logger: NGXLogger,
  ) {

    this.chunksTotal = Math.ceil(this.track.duration / CHUNK_DURATION_SEC);
    this.chunkSize = Math.ceil(this.track.fileSize / this.chunksTotal);

    this.requested = [];
    for (let i = 0; i < this.chunksTotal; i++) {
      this.requested[i] = false;
    }

    this.url = `${environment.apiBaseUrl}/api/tracks/${this.track.id}/play`;
  }

  /**
   * Fetch the first chunk.
   */
  public begin() {
    this.maybeFetchChunk(0);
  }

  /**
   * Handler for the "timeupdate" event. Checks if the next chunk should be fetched.
   *
   * @param currentTime
   *  The current time of the track which is currently played.
   */
  public handleOnTimeUpdate(currentTime: number) {

    const nextChunkIndex = Math.floor(currentTime / CHUNK_DURATION_SEC) + 1;
    const hasAllChunks = this.requested.every(val => !!val);

    if (nextChunkIndex === (this.chunksTotal - 1) && hasAllChunks) {
      this.logger.debug('Last chunk. Calling mediaSource.endOfStream();');
      return;
    }

    if (this.requested[nextChunkIndex] === true) {
      return;
    }

    if (currentTime < CHUNK_DURATION_SEC * (nextChunkIndex - 1 + 0.25)) {
      return;
    }

    this.maybeFetchChunk(nextChunkIndex);
  }

  /**
   * Fetches the chunk if it hasn't been requested yet. After the request finished, the returned
   * chunk gets appended to the SourceBuffer-instance.
   *
   * @param chunkIndex
   *  The chunk to fetch.
   */
  private maybeFetchChunk(chunkIndex: number) {

    const start = chunkIndex * this.chunkSize;
    const end = start + this.chunkSize - 1;

    if (this.requested[chunkIndex] == true) {
      return;
    }

    this.requested[chunkIndex] = true;

    if ((end - start) == 0) {
      this.logger.warn('Nothing to fetch.');
      return;
    }

    const totalKb = ((end - start) / 1000).toFixed(2);
    this.logger.debug(`Starting to fetch bytes ${start} to ${end} (total ${totalKb} kB). Chunk ${chunkIndex + 1} of ${this.chunksTotal}`);

    const xhr = new XMLHttpRequest();
    xhr.open('get', this.url);
    xhr.setRequestHeader('Authorization', `Bearer ${AuthenticationService.getJwtToken()}`);
    xhr.setRequestHeader('Range', 'bytes=' + start + '-' + end);
    xhr.responseType = 'arraybuffer';
    xhr.onload = () => {
      this.logger.debug(`Range ${start} to ${end} fetched`);
      this.logger.debug(`Requested size:        ${end - start + 1}`);
      this.logger.debug(`Fetched size:          ${xhr.response.byteLength}`);
      this.logger.debug('Appending chunk to SourceBuffer.');
      this.sourceBuffer.appendBuffer(xhr.response);
    };
    xhr.send();
  };

}

export enum StreamStatus {
  NOT_INITIALIZED,
  INITIALIZING,
  PLAYING,
  SEEKING,
  PAUSED,
  STOPPED,
  ERROR
}

export class PlayerState {
  status: StreamStatus = StreamStatus.NOT_INITIALIZED;
}


/**
 *
 */
@Injectable({
  providedIn: 'root'
})
export class MediaSourcePlayerService {

  public track: Track;

  private mediaSource: MediaSource;

  private sourceBuffer: SourceBuffer;

  private audioObj: HTMLAudioElement;

  private chunksLoader: ChunksLoader;

  private state: PlayerState = new PlayerState();

  private state$ = new BehaviorSubject<PlayerState>(this.state);

  public stateChange = this.state$.asObservable();

  private currentTime$ = new BehaviorSubject<number>(null);

  public currentTimeChange = this.currentTime$.asObservable();

  constructor(
    private httpClient: HttpClient,
    private logger: NGXLogger
  ) {
  }

  get canPlay() {
    const state = this.state$.getValue();
    const status = state.status;
    return status == StreamStatus.PAUSED;
  }

  get canPause() {
    const state = this.state$.getValue();
    const status = state.status;
    return status == StreamStatus.PLAYING || status == StreamStatus.SEEKING;
  }

  public playTrack(track: Track) {
    this.logger.debug('playTrack');
    this.track = track;
    this.startPlayingFrom(0);
  }

  public play() {
    this.logger.debug('play()');
    this.audioObj.play().then();
  }

  public pause() {
    this.logger.debug('pause()');
    this.audioObj.pause();
  }

  public stop() {
    this.logger.debug('stop()');
    this.audioObj.pause();
  }

  public seek(seconds: number) {
    this.logger.debug('seek()');
    this.audioObj.currentTime = seconds;
  }

  private startPlayingFrom(seconds: number) {
    this.logger.info(`Start playing from ${seconds.toFixed(2)} seconds`);
    this.mediaSource = new MediaSource();
    this.mediaSource.addEventListener('sourceopen', this.onSourceOpen);

    this.audioObj = document.createElement('audio');
    this.addEvents(this.audioObj, AUDIO_EVENTS, this.handleEvent);
    this.audioObj.src = URL.createObjectURL(this.mediaSource);

    this.audioObj.play().then();
  }

  private onSourceOpen = () => {

    this.logger.debug('onSourceOpen');

    this.mediaSource.removeEventListener('sourceopen', this.onSourceOpen);
    this.mediaSource.duration = this.track.duration;

    this.sourceBuffer = this.mediaSource.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');
    // this.sourceBuffer = this.mediaSource.addSourceBuffer('audio/mpeg');

    this.chunksLoader = new ChunksLoader(
      this.track,
      this.sourceBuffer,
      this.logger
    );

    this.chunksLoader.begin();
  };

  private handleEvent = (e) => {

    const currentTime = this.audioObj.currentTime.toFixed(2);
    const totalDuration = this.track.duration.toFixed(2);
    this.logger.warn(`MediaSource event: ${e.type} (${currentTime} of ${totalDuration} sec)`);

    this.currentTime$.next(this.audioObj.currentTime);

    const currentStatus = this.state$.getValue();

    switch (e.type) {
      case 'playing':
        currentStatus.status = StreamStatus.PLAYING;
        this.state$.next(currentStatus);
        break;
      case 'pause':
        currentStatus.status = StreamStatus.PAUSED;
        this.state$.next(currentStatus);
        break;
      case 'timeupdate':
        this.chunksLoader.handleOnTimeUpdate(this.audioObj.currentTime);
        break;
      case 'seeking':
        currentStatus.status = StreamStatus.SEEKING;
        this.state$.next(currentStatus);
        if (this.mediaSource.readyState == 'open') {
          this.sourceBuffer.abort();
        }
        this.chunksLoader.handleOnTimeUpdate(this.audioObj.currentTime);
        break;
    }
  };

  private addEvents(obj, events, handler) {
    events.forEach(event => obj.addEventListener(event, handler));
  }

}

    Running it will give me the following output:

    [screenshot of the console output]

    Apologies for the screenshot but it's not possible to just copy the output without all the stack traces in Chrome.


    What I also tried was following this example and calling sourceBuffer.abort(), but that didn't work. It looks more like a hack that used to work years ago, but it's still referenced in the docs (see "Example" -> "You can see something similar in action in Nick Desaulnier's bufferWhenNeeded demo ..").

    case 'seeking':
  currentStatus.status = StreamStatus.SEEKING;
  this.state$.next(currentStatus);
  if (this.mediaSource.readyState === 'open') {
    this.sourceBuffer.abort();
  }
  break;

    Trying with MP3


    I have tested the above code under Chrome by converting tracks to MP3:

    ffmpeg -i input.mp3 -acodec aac -b:a 256k -f mp3 output.mp3

    and creating a SourceBuffer using audio/mpeg as type:

    this.mediaSource.addSourceBuffer('audio/mpeg')

    I have the same problem when seeking.


    The issue without seeking

    The above code has another issue:

    After two minutes of playing, the audio playback starts to stutter and comes to a halt prematurely. So, the audio plays up to a point and then it stops without any obvious reason.


    For whatever reason there is another canplay and playing event. A few seconds later, the audio simply stops.

    [screenshot of the console output]

  • ffmpeg takes too long to start

    17 October 2020, by Suspended

    I have this command in a Python script, in a loop:

    ffmpeg -i somefile.mp4 -ss 00:03:12 -t 00:00:35 piece.mp4 -loglevel error -stats

    It cuts out pieces of the input file (-i). The input filename, as well as the start time (-ss) and the length of the piece I cut out (-t), vary, so it reads a number of mp4 files and cuts out a number of pieces from each one. During execution of the script it might be called around 100 times. My problem is that each time before it starts, there is a delay of 6-15 seconds, and it adds up to a significant amount of time. How can I get it to start immediately?
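
    For what it's worth, this is roughly how the per-call delay can be measured around a single invocation (a sketch with placeholder file name and timestamps, not the actual loop from the script below):

import subprocess
import time

# Placeholder command; the real file names and timestamps come from video_data.
cmd = ["ffmpeg", "-i", "somefile.mp4", "-ss", "00:03:12", "-t", "00:00:35",
       "piece.mp4", "-loglevel", "error", "-stats"]

t0 = time.perf_counter()
subprocess.run(cmd)
print(f"ffmpeg invocation took {time.perf_counter() - t0:.1f} s")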


    Initially I thought it was a process priority problem, but I noticed that even during the "pause" all processors work at 100%, so apparently some work is being done.

    The script (process_videos.py):

    import subprocess
import sys
import math
import time

class TF:
    """TimeFormatter class (TF).
This class' reason for being is to convert time in short
form, e.g. 1:33, 0:32, or 23 into long form accepted by
mp4cut function in bash, e.g. 00:01:22, 00:00:32, etc"""

def toLong(self, shrt):
    """Converts time to its long form"""
    sx = '00:00:00'
    ladd = 8 - len(shrt)
    n = sx[:ladd] + shrt
    return n

def toShort(self, lng):
    """Converts time to short form"""
    if lng[0] == '0' or lng[0] == ':':
        return self.toShort(lng[1:])
    else:
        return lng

def toSeconds(self, any_time):
    """Converts time to seconds"""
    if len(any_time) < 3:
        return int(any_time)
    tt = any_time.split(':')
    if len(any_time) < 6:
        return int(tt[0])*60 + int(tt[1])
    return int(tt[0])*3600 + int(tt[1])*60 + int(tt[2])

def toTime(self, secsInt):
    """"""
    tStr = ''
    hrs, mins, secs = 0, 0, 0
    if secsInt >= 3600:
        hrs = math.floor(secsInt / 3600)
        secsInt = secsInt % 3600
    if secsInt >= 60:
        mins = math.floor(secsInt / 60)
        secsInt = secsInt % 60
    secs = secsInt
    return str(hrs).zfill(2) + ':' + str(mins).zfill(2) + ':' + str(secs).zfill(2)

def minus(self, t_start, t_end):
    """"""
    t_e = self.toSeconds(t_end)
    t_s = self.toSeconds(t_start)
    t_r = t_e - t_s
    hrs, mins, secs = 0, 0, 0
    if t_r >= 3600:
        hrs = math.floor(t_r / 3600)
        t_r = t_r - (hrs * 3600)
    if t_r >= 60:
        mins = math.floor(t_r / 60)
        t_r = t_r - (mins * 60)
    secs = t_r
    hrsf = str(hrs).zfill(2)
    minsf = str(mins).zfill(2)
    secsf = str(secs).zfill(2)
    t_fnl = hrsf + ':' + minsf + ':' + secsf
    return t_fnl

def go_main():
    tf = TF()
    vid_n = 0
    arglen = len(sys.argv)
    if arglen == 2:
        with open(sys.argv[1], 'r') as f_in:
            lines = f_in.readlines()
            start = None
            end = None
            cnt = 0
            for line in lines:
                if line[:5] == 'BEGIN':
                    start = cnt
                if line[:3] == 'END':
                    end = cnt
                cnt += 1
            if start == None or end == None:
                print('Invalid file format. start = {}, end = {}'.format(start,end))
                return
            else:
                lines_r = lines[start+1:end]
                del lines
                print('videos to process: {}'.format(len(lines_r)))
                f_out_prefix = ""
                for vid in lines_r:
                    vid_n += 1
                    print('\nProcessing video {}/{}'.format(vid_n, len(lines_r)))
                    f_out_prefix = 'v' + str(vid_n) + '-'
                    dat = vid.split('!')[1:3]
                    title = dat[0]
                    dat_t = dat[1].split(',')
                    v_pieces = len(dat_t)
                    piece_n = 0
                    video_pieces = []
                    cmd1 = "echo -n \"\" > tmpfile"
                    subprocess.run(cmd1, shell=True)
                    print('  new tmpfile created')
                    for v_times in dat_t:
                        piece_n += 1
                        f_out = f_out_prefix + str(piece_n) + '.mp4'
                        video_pieces.append(f_out)
                        print('  piece filename {} added to video_pieces list'.format(f_out))
                        v_times_spl = v_times.split('-')
                        v_times_start = v_times_spl[0]
                        v_times_end = v_times_spl[1]
                        t_st = tf.toLong(v_times_start)
                        t_dur = tf.toTime(tf.toSeconds(v_times_end) - tf.toSeconds(v_times_start))
                        cmd3 = ["ffmpeg", "-i", title, "-ss", t_st, "-t", t_dur, f_out, "-loglevel", "error", "-stats"]
                        print('  cutting out piece {}/{} - {}'.format(piece_n, len(dat_t), t_dur))
                        subprocess.run(cmd3)
                    for video_piece_name in video_pieces:
                        cmd4 = "echo \"file " + video_piece_name + "\" >> tmpfile"
                        subprocess.run(cmd4, shell=True)
                        print('  filename {} added to tmpfile'.format(video_piece_name))
                    vname = f_out_prefix[:-1] + ".mp4"
                    print('  name of joined file: {}'.format(vname))
                    cmd5 = "ffmpeg -f concat -safe 0 -i tmpfile -c copy joined.mp4 -loglevel error -stats"
                    to_be_joined = " ".join(video_pieces)
                    print('  joining...')
                    join_cmd = subprocess.Popen(cmd5, shell=True)
                    join_cmd.wait()
                    print('  joined!')
                    cmd6 = "mv joined.mp4 " + vname
                    rename_cmd = subprocess.Popen(cmd6, shell=True)
                    rename_cmd.wait()
                    print('  File joined.mp4 renamed to {}'.format(vname))
                    cmd7 = "rm " + to_be_joined
                    rm_cmd = subprocess.Popen(cmd7, shell=True)
                    rm_cmd.wait()
                    print('rm command completed - pieces removed')
                cmd8 = "rm tmpfile"
                subprocess.run(cmd8, shell=True)
                print('tmpfile removed')
                print('All done')
    else:
        print('Incorrect number of arguments')

############################
if __name__ == '__main__':
    go_main()

    process_videos.py is called from a bash terminal like this:

    $ python process_videos.py video_data

    The video_data file has the following format:

    BEGIN
!first_video.mp4!3-23,55-1:34,2:01-3:15,3:34-3:44!
!second_video.mp4!2-7,12-44,1:03-1:33!
END

    My system details:

    System:    Host: snowflake Kernel: 5.4.0-52-generic x86_64 bits: 64 Desktop: Gnome 3.28.4
           Distro: Ubuntu 18.04.5 LTS
Machine:   Device: desktop System: Gigabyte product: N/A serial: N/A
Mobo:      Gigabyte model: Z77-D3H v: x.x serial: N/A BIOS: American Megatrends v: F14 date: 05/31/2012
CPU:       Quad core Intel Core i5-3570 (-MCP-) cache: 6144 KB
           clock speeds: max: 3800 MHz 1: 1601 MHz 2: 1601 MHz 3: 1601 MHz 4: 1602 MHz
Drives:    HDD Total Size: 1060.2GB (55.2% used)
           ID-1: /dev/sda model: ST31000524AS size: 1000.2GB
           ID-2: /dev/sdb model: Corsair_Force_GT size: 60.0GB
Partition: ID-1: / size: 366G used: 282G (82%) fs: ext4 dev: /dev/sda1
           ID-2: swap-1 size: 0.70GB used: 0.00GB (0%) fs: swap dev: /dev/sda5
Info:      Processes: 313 Uptime: 16:37 Memory: 3421.4/15906.9MB Client: Shell (bash) inxi: 2.3.56

    UPDATE:

    Following Charles' advice, I used performance sampling:

    # perf record -a -g sleep 180

    ...and here's the report:

    Samples: 74K of event 'cycles', Event count (approx.): 1043554519767
  Children      Self  Command          Shared Object
-   50.56%    45.86%  ffmpeg           libavcodec.so.57.107.100
   - 3.10% 0x4489480000002825
       0.64% 0x7ffaf24b92f0
   - 2.12% 0x5f7369007265646f
       av_default_item_name
     1.39% 0
-   44.48%    40.59%  ffmpeg           libx264.so.152
     5.78% x264_add8x8_idct_avx2.skip_prologue
     3.13% x264_add8x8_idct_avx2.skip_prologue
     2.91% x264_add8x8_idct_avx2.skip_prologue
     2.31% x264_add8x8_idct_avx.skip_prologue
     2.03% 0
     1.78% 0x1
     1.26% x264_add8x8_idct_avx2.skip_prologue
     1.09% x264_add8x8_idct_avx.skip_prologue
     1.06% x264_me_search_ref
     0.97% x264_add8x8_idct_avx.skip_prologue
     0.60% x264_me_search_ref
-   38.01%     0.00%  ffmpeg           [unknown]
     4.10% 0
   - 3.49% 0x4489480000002825
        0.70% 0x7ffaf24b92f0
        0.56% 0x7f273ae822f0
        0.50% 0x7f0c4768b2f0
   - 2.29% 0x5f7369007265646f
        av_default_item_name
     1.99% 0x1
    10.13%    10.12%  ffmpeg           [kernel.kallsyms]
-    3.14%     0.73%  ffmpeg           libavutil.so.55.78.100
     2.34% av_default_item_name
-    1.73%     0.21%  ffmpeg           libpthread-2.27.so
   - 0.70% pthread_cond_wait@@GLIBC_2.3.2
      - 0.62% entry_SYSCALL_64_after_hwframe
         - 0.62% do_syscall_64
            - 0.57% __x64_sys_futex
                 0.52% do_futex
     0.93%     0.89%  ffmpeg           libc-2.27.so
-    0.64%     0.64%  swapper          [kernel.kallsyms]
     0.63% secondary_startup_64
     0.21%     0.18%  ffmpeg           libavfilter.so.6.107.100
     0.20%     0.11%  ffmpeg           libavformat.so.57.83.100
     0.12%     0.11%  ffmpeg           ffmpeg
     0.11%     0.00%  gnome-terminal-  [unknown]
     0.09%     0.07%  ffmpeg           libm-2.27.so
     0.08%     0.07%  ffmpeg           ld-2.27.so
     0.04%     0.04%  gnome-terminal-  libglib-2.0.so.0.5600.4