Other articles (98)

  • Customize by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form. For a document of the news type, the fields offered by default are: Publication date (customize the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (9440)

  • Libavformat/FFMPEG : Muxing into mp4 with AVFormatContext drops the final frame, depending on the number of frames

    27 October 2020, by Galen Lynch

    I am trying to use libavformat to create a .mp4 video
with a single h.264 video stream, but the final frame in the resulting file
often has a duration of zero and is effectively dropped from the video.
Strangely enough, whether the final frame is dropped or not depends on how many
frames I try to add to the file. Some simple testing that I outline below makes
me think that I am somehow misconfiguring either the AVFormatContext or the
h.264 encoder, resulting in two edit lists that sometimes chop off the final
frame. I will also post a simplified version of the code I am using, in case I'm
making some obvious mistake. Any help would be greatly appreciated: I've been
struggling with this issue for the past few days and have made little progress.

    


    I can recover the dropped frame by creating a new mp4 container using the ffmpeg
binary with the copy codec if I use the -ignore_editlist option. Inspecting
the file with a missing frame using ffprobe, mp4trackdump, or mp4file --dump shows
that the final frame is dropped if its sample time is exactly the
same as the end of the edit list. When I make a file that has no dropped frames, it
still has two edit lists: the only difference is that the end time of the edit
list is beyond all samples in files that do not have dropped frames. Though this
is hardly a fair comparison, if I make a .png for each frame and then generate
a .mp4 with ffmpeg using the image2 codec and similar h.264 settings, I
produce a movie with all frames present, only one edit list, and similar PTS
times as my mangled movies with two edit lists. In this case, the edit list
always ends after the last frame/sample time.
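
    Concretely, that recovery step looks something like this (a sketch only; the output
name is arbitrary, and -ignore_editlist is an input option of the mov/mp4 demuxer, so
it has to appear before -i):

    ffmpeg -ignore_editlist 1 -i testing.mp4 -c copy recovered.mp4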

    


    I am using this command to determine the number of frames in the resulting stream,
though I also get the same number with other utilities:

    


    ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 video_file_name.mp4


    


    Simple inspection of the file with ffprobe shows no obviously alarming signs to
me, besides the framerate being affected by the missing frame (the target was
24):

    


    $ ffprobe -hide_banner testing.mp4
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'testing.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.45.100
  Duration: 00:00:04.13, start: 0.041016, bitrate: 724 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 100x100, 722 kb/s, 24.24 fps, 24 tbr, 12288 tbn, 48 tbc (default)
    Metadata:
      handler_name    : VideoHandler


    


    The files that I generate programmatically always have two edit lists, one of
which is very short. In files both with and without a missing frame, the
duration of one of the frames is 0, while all the others have the same duration
(512). You can see this in the ffmpeg output for this file, into which I tried to put
100 frames, though only 99 are visible despite the file containing all 100
samples.

    


    $ ffmpeg -hide_banner -y -v 9 -loglevel 99 -i testing.mp4
...
<edited to remove the class printing>
type:'edts' parent:'trak' sz: 48 100 948
type:'elst' parent:'edts' sz: 40 8 40
track[0].edit_count = 2
duration=41 time=-1 rate=1.000000
duration=4125 time=0 rate=1.000000
type:'mdia' parent:'trak' sz: 808 148 948
type:'mdhd' parent:'mdia' sz: 32 8 800
type:'hdlr' parent:'mdia' sz: 45 40 800
ctype=[0][0][0][0]
stype=vide
type:'minf' parent:'mdia' sz: 723 85 800
type:'vmhd' parent:'minf' sz: 20 8 715
type:'dinf' parent:'minf' sz: 36 28 715
type:'dref' parent:'dinf' sz: 28 8 28
Unknown dref type 0x206c7275 size 12
type:'stbl' parent:'minf' sz: 659 64 715
type:'stsd' parent:'stbl' sz: 151 8 651
size=135 4CC=avc1 codec_type=0
type:'avcC' parent:'stsd' sz: 49 8 49
type:'stts' parent:'stbl' sz: 32 159 651
track[0].stts.entries = 2
sample_count=99, sample_duration=512
sample_count=1, sample_duration=0
...
AVIndex stream 0, sample 99, offset 5a0ed, dts 50688, size 3707, distance 0, keyframe 1
Processing st: 0, edit list 0 - media time: -1, duration: 504
Processing st: 0, edit list 1 - media time: 0, duration: 50688
type:'udta' parent:'moov' sz: 98 1072 1162
...

    The last frame has zero duration:

    $ mp4trackdump -v testing.mp4
...
mp4file testing.mp4, track 1, samples 100, timescale 12288
sampleId      1, size  6943 duration      512 time        0 00:00:00.000 S
sampleId      2, size  3671 duration      512 time      512 00:00:00.041 S
...
sampleId     99, size  3687 duration      512 time    50176 00:00:04.083 S
sampleId    100, size  3707 duration        0 time    50688 00:00:04.125 S

    Non-mangled videos that I generate have similar structure, as you can see in
this video that had 99 input frames, all of which are visible in the output.
Even though the sample_duration is set to zero for one of the samples in the
stts box, it is not dropped from the frame count or when reading the frames back
in with ffmpeg.

    $ ffmpeg -hide_banner -y -v 9 -loglevel 99 -i testing_99.mp4
...
type:'elst' parent:'edts' sz: 40 8 40
track[0].edit_count = 2
duration=41 time=-1 rate=1.000000
duration=4084 time=0 rate=1.000000
...
track[0].stts.entries = 2
sample_count=98, sample_duration=512
sample_count=1, sample_duration=0
...
AVIndex stream 0, sample 98, offset 5d599, dts 50176, size 3833, distance 0, keyframe 1
Processing st: 0, edit list 0 - media time: -1, duration: 504
Processing st: 0, edit list 1 - media time: 0, duration: 50184
...

    $ mp4trackdump -v testing_99.mp4
...
sampleId     98, size  3814 duration      512 time    49664 00:00:04.041 S
sampleId     99, size  3833 duration        0 time    50176 00:00:04.083 S

    One difference that jumps out to me is that the mangled file's second edit list
ends at time 50688, which coincides with the last sample, while the non-mangled
file's edit list ends at 50184, which is after the time of the last sample
at 50176. As I mentioned before, whether the last frame is clipped depends on
the number of frames I encode and mux into the container: 100 input frames
results in 1 dropped frame, 99 results in 0, 98 in 0, 97 in 1, etc.
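
    Spelling out the arithmetic from the dumps above (track timescale 12288):

    mangled file (100 frames):     edit list 1 ends at 50688 / 12288 = 4.1250 s;
                                   the last sample starts at 50688 / 12288 = 4.1250 s (exactly at the edit boundary, so it is dropped)
    non-mangled file (99 frames):  edit list 1 ends at 50184 / 12288 ≈ 4.0840 s;
                                   the last sample starts at 50176 / 12288 ≈ 4.0833 s (inside the edit, so it is kept)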

    Here is the code that I used to generate these files, which is a MWE script
version of library functions that I am modifying. It is written in Julia,
which I do not think is important here, and calls the FFMPEG library version
4.3.1. It's more or less a direct translation of the FFMPEG muxing
demo, although the codec context here is created before the format context. I am
presenting the code that interacts with ffmpeg first, although it relies on some
helper code that I will put below.

    The helper code just makes it easier to work with nested C structs in Julia, and
allows . syntax in Julia to be used in place of C's arrow (->) operator for
field access of struct pointers. Libav structs such as AVFrame appear as a
thin wrapper type AVFramePtr, and similarly AVStream appears as
AVStreamPtr, etc. These act like single or double pointers for the purposes
of function calls, depending on the function's type signature. Hopefully it will
be clear enough to understand if you are familiar with working with libav in C,
and I don't think looking at the helper code should be necessary if you don't
want to run the code.

# Function to transfer array to AVPicture/AVFrame
function transfer_img_buf_to_frame!(frame, img)
    img_pointer = pointer(img)
    data_pointer = frame.data[1] # Base-1 indexing, get pointer to first data buffer in frame
    for h = 1:frame.height
        data_line_pointer = data_pointer + (h-1) * frame.linesize[1] # base-1 indexing
        img_line_pointer = img_pointer + (h-1) * frame.width
        unsafe_copyto!(data_line_pointer, img_line_pointer, frame.width) # base-1 indexing
    end
end

# Function to transfer AVFrame to AVCodecContext, and AVPacket to AVFormatContext
function encode_mux!(packet, format_context, frame, codec_context; flush = false)
    if flush
        fret = avcodec_send_frame(codec_context, C_NULL)
    else
        fret = avcodec_send_frame(codec_context, frame)
    end
    if fret < 0 && !in(fret, [-Libc.EAGAIN, VIO_AVERROR_EOF])
        error("Error $fret sending a frame for encoding")
    end

    pret = Cint(0)
    while pret >= 0
        pret = avcodec_receive_packet(codec_context, packet)
        if pret == -Libc.EAGAIN || pret == VIO_AVERROR_EOF
             break
        elseif pret < 0
            error("Error $pret during encoding")
        end
        stream = format_context.streams[1] # Base-1 indexing
        av_packet_rescale_ts(packet, codec_context.time_base, stream.time_base)
        packet.stream_index = 0
        ret = av_interleaved_write_frame(format_context, packet)
        ret < 0 && error("Error muxing packet: $ret")
    end
    if !flush && fret == -Libc.EAGAIN && pret != VIO_AVERROR_EOF
        fret = avcodec_send_frame(codec_context, frame)
        if fret < 0 && fret != VIO_AVERROR_EOF
            error("Error $fret sending a frame for encoding")
        end
    end
    return pret
end

# Set parameters of test movie
nframe = 100
width, height = 100, 100
framerate = 24
gop = 0
codec_name = "libx264"
filename = "testing.mp4"

((width % 2 !=0) || (height % 2 !=0)) && error("Encoding error: Image dims must be a multiple of two")

# Make test images
imgstack = map(x->rand(UInt8,width,height),1:nframe);

pix_fmt = AV_PIX_FMT_GRAY8
framerate_rat = Rational(framerate)

codec = avcodec_find_encoder_by_name(codec_name)
codec == C_NULL && error("Codec '$codec_name' not found")

# Allocate AVCodecContext
codec_context_p = avcodec_alloc_context3(codec) # raw pointer
codec_context_p == C_NULL && error("Could not allocate AVCodecContext")
# Easier to work with pointer that acts like a c struct pointer, type defined below
codec_context = AVCodecContextPtr(codec_context_p)

codec_context.width = width
codec_context.height = height
codec_context.time_base = AVRational(1/framerate_rat)
codec_context.framerate = AVRational(framerate_rat)
codec_context.pix_fmt = pix_fmt
codec_context.gop_size = gop

ret = avcodec_open2(codec_context, codec, C_NULL)
ret < 0 && error("Could not open codec: Return code $(ret)")

# Allocate AVFrame and wrap it in a Julia convenience type
frame_p = av_frame_alloc()
frame_p == C_NULL && error("Could not allocate AVFrame")
frame = AVFramePtr(frame_p)

frame.format = pix_fmt
frame.width = width
frame.height = height

# Allocate picture buffers for frame
ret = av_frame_get_buffer(frame, 0)
ret < 0 && error("Could not allocate the video frame data")

# Allocate AVPacket and wrap it in a Julia convenience type
packet_p = av_packet_alloc()
packet_p == C_NULL && error("Could not allocate AVPacket")
packet = AVPacketPtr(packet_p)

# Allocate AVFormatContext and wrap it in a Julia convenience type
format_context_dp = Ref(Ptr{AVFormatContext}()) # double pointer
ret = avformat_alloc_output_context2(format_context_dp, C_NULL, C_NULL, filename)
if ret != 0 || format_context_dp[] == C_NULL
    error("Could not allocate AVFormatContext")
end
format_context = AVFormatContextPtr(format_context_dp)

# Add video stream to AVFormatContext and configure it to use the encoder made above
stream_p = avformat_new_stream(format_context, C_NULL)
stream_p == C_NULL && error("Could not allocate output stream")
stream = AVStreamPtr(stream_p) # Wrap this pointer in a convenience type

stream.time_base = codec_context.time_base
stream.avg_frame_rate = 1 / convert(Rational, stream.time_base)
ret = avcodec_parameters_from_context(stream.codecpar, codec_context)
ret < 0 && error("Could not set parameters of stream")

# Open the AVIOContext
pb_ptr = field_ptr(format_context, :pb)
# This following is just a call to avio_open, with a bit of extra protection
# so the Julia garbage collector does not destroy format_context during the call
ret = GC.@preserve format_context avio_open(pb_ptr, filename, AVIO_FLAG_WRITE)
ret < 0 && error("Could not open file $filename for writing")

# Write the header
ret = avformat_write_header(format_context, C_NULL)
ret < 0 && error("Could not write header")

# Encode and mux each frame
for i in 1:nframe # iterate from 1 to nframe
    img = imgstack[i] # base-1 indexing
    ret = av_frame_make_writable(frame)
    ret < 0 && error("Could not make frame writable")
    transfer_img_buf_to_frame!(frame, img)
    frame.pts = i
    encode_mux!(packet, format_context, frame, codec_context)
end

# Flush the encoder
encode_mux!(packet, format_context, frame, codec_context; flush = true)

# Write the trailer
av_write_trailer(format_context)

# Close the AVIOContext
pb_ptr = field_ptr(format_context, :pb) # get pointer to format_context.pb
ret = GC.@preserve format_context avio_closep(pb_ptr) # simply a call to avio_closep
ret < 0 && error("Could not free AVIOContext")

# Deallocation
avcodec_free_context(codec_context)
av_frame_free(frame)
av_packet_free(packet)
avformat_free_context(format_context)

    Below is the helper code that makes accessing pointers to nested C structs not a
total pain in Julia. If you try to run the code yourself, please enter this in
before the logic of the code shown above. It requires
VideoIO.jl, a Julia wrapper to libav.

# Convenience type and methods to make the above code look more like C
using Base: RefValue, fieldindex

import Base: unsafe_convert, getproperty, setproperty!, getindex, setindex!,
    unsafe_wrap, propertynames

# VideoIO is a Julia wrapper to libav
#
# Bring bindings to libav library functions into namespace
using VideoIO: AVCodecContext, AVFrame, AVPacket, AVFormatContext, AVRational,
    AVStream, AV_PIX_FMT_GRAY8, AVIO_FLAG_WRITE, AVFMT_NOFILE,
    avformat_alloc_output_context2, avformat_free_context, avformat_new_stream,
    av_dump_format, avio_open, avformat_write_header,
    avcodec_parameters_from_context, av_frame_make_writable, avcodec_send_frame,
    avcodec_receive_packet, av_packet_rescale_ts, av_interleaved_write_frame,
    avformat_query_codec, avcodec_find_encoder_by_name, avcodec_alloc_context3,
    avcodec_open2, av_frame_alloc, av_frame_get_buffer, av_packet_alloc,
    avio_closep, av_write_trailer, avcodec_free_context, av_frame_free,
    av_packet_free

# Submodule of VideoIO
using VideoIO: AVCodecs

# Need to import this function from Julia's Base to add more methods
import Base: convert

const VIO_AVERROR_EOF = -541478725 # AVERROR_EOF

# Methods to convert between AVRational and Julia's Rational type, because it's
# hard to access the AV rational macros with Julia's C interface
convert(::Type{Rational{T}}, r::AVRational) where T = Rational{T}(r.num, r.den)
convert(::Type{Rational}, r::AVRational) = Rational(r.num, r.den)
convert(::Type{AVRational}, r::Rational) = AVRational(numerator(r), denominator(r))

"""
    mutable struct NestedCStruct{T}

Wraps a pointer to a C struct, and acts like a double pointer to that memory.
The methods below will automatically convert it to a single pointer if needed
for a function call, and make interacting with it in Julia look (more) similar
to interacting with it in C, except '->' in C is replaced by '.' in Julia.
"""
mutable struct NestedCStruct{T}
    data::RefValue{Ptr{T}}
end
NestedCStruct{T}(a::Ptr) where T = NestedCStruct{T}(Ref(a))
NestedCStruct(a::Ptr{T}) where T = NestedCStruct{T}(a)

const AVCodecContextPtr = NestedCStruct{AVCodecContext}
const AVFramePtr = NestedCStruct{AVFrame}
const AVPacketPtr = NestedCStruct{AVPacket}
const AVFormatContextPtr = NestedCStruct{AVFormatContext}
const AVStreamPtr = NestedCStruct{AVStream}

function field_ptr(::Type{S}, struct_pointer::Ptr{T}, field::Symbol,
                           index::Integer = 1) where {S,T}
    fieldpos = fieldindex(T, field)
    field_pointer = convert(Ptr{S}, struct_pointer) +
        fieldoffset(T, fieldpos) + (index - 1) * sizeof(S)
    return field_pointer
end

field_ptr(a::Ptr{T}, field::Symbol, args...) where T =
    field_ptr(fieldtype(T, field), a, field, args...)

function check_ptr_valid(p::Ptr, err::Bool = true)
    valid = p != C_NULL
    err && !valid && error("Invalid pointer")
    valid
end

unsafe_convert(::Type{Ptr{T}}, ap::NestedCStruct{T}) where T =
    getfield(ap, :data)[]
unsafe_convert(::Type{Ptr{Ptr{T}}}, ap::NestedCStruct{T}) where T =
    unsafe_convert(Ptr{Ptr{T}}, getfield(ap, :data))

function check_ptr_valid(a::NestedCStruct{T}, args...) where T
    p = unsafe_convert(Ptr{T}, a)
    GC.@preserve a check_ptr_valid(p, args...)
end

nested_wrap(x::Ptr{T}) where T = NestedCStruct(x)
nested_wrap(x) = x

function getproperty(ap::NestedCStruct{T}, s::Symbol) where T
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{T}, ap)
    res = GC.@preserve ap unsafe_load(field_ptr(p, s))
    nested_wrap(res)
end

function setproperty!(ap::NestedCStruct{T}, s::Symbol, x) where T
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{T}, ap)
    fp = field_ptr(p, s)
    GC.@preserve ap unsafe_store!(fp, x)
end

function getindex(ap::NestedCStruct{T}, i::Integer) where T
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{T}, ap)
    res = GC.@preserve ap unsafe_load(p, i)
    nested_wrap(res)
end

function setindex!(ap::NestedCStruct{T}, i::Integer, x) where T
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{T}, ap)
    GC.@preserve ap unsafe_store!(p, x, i)
end

function unsafe_wrap(::Type{T}, ap::NestedCStruct{S}, i) where {S, T}
    check_ptr_valid(ap)
    p = unsafe_convert(Ptr{S}, ap)
    GC.@preserve ap unsafe_wrap(T, p, i)
end

function field_ptr(::Type{S}, a::NestedCStruct{T}, field::Symbol,
                           args...) where {S, T}
    check_ptr_valid(a)
    p = unsafe_convert(Ptr{T}, a)
    GC.@preserve a field_ptr(S, p, field, args...)
end

field_ptr(a::NestedCStruct{T}, field::Symbol, args...) where T =
    field_ptr(fieldtype(T, field), a, field, args...)

propertynames(ap::T) where {S, T<:NestedCStruct{S}} = (fieldnames(S)...,
                                                       fieldnames(T)...)

    Edit: Some things that I have already tried

    • Explicitly setting the stream duration to be the same number as the number of frames that I add, or a few more beyond that

    • Explicitly setting the stream start time to zero, while the first frame has a PTS of 1

    • Playing around with encoder parameters, as well as gop_size, using B frames, etc.

    • Setting the private data for the mov/mp4 muxer to set the movflag negative_cts_offsets

    • Changing the framerate

    • Trying different pixel formats, such as AV_PIX_FMT_YUV420P

    Also, to be clear: while I can just transfer the file into another container while ignoring the edit lists to work around this problem, I am hoping not to make damaged mp4 files in the first place.

  • Is there a way to fit a given time frame of a video into a specific file size?

    21 February 2020, by Exosylver

    Is there an ffmpeg command to fit part of a video into a certain file size? For example, if it is a 645 MB video that is 70 minutes long and I want 7 parts (each part 10 minutes), can I limit each part to 93 MB (the average estimated file size)?

    (I already tried to break it up by timestamp, but the files don't end up the same size.)
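
    One way to approach this (a sketch, not part of the original question): derive the
bitrate budget from the target size and duration, then re-encode each part with a
capped bitrate. A 93 MB target over a 10-minute (600 s) part allows roughly
93 MB × 8 / 600 s ≈ 1.24 Mbit/s for video plus audio. With hypothetical file names:

    ffmpeg -ss 00:10:00 -t 600 -i input.mp4 \
           -c:v libx264 -b:v 1100k -maxrate 1100k -bufsize 2200k \
           -c:a aac -b:a 128k part2.mp4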

  • ffmpeg custom audio mixing filter

    27 November 2016, by esposito

    I need to mix 2 files, A and B, and the result should be part of file A and part of file B.

    In detail, I would like the output to be the sum of the upper part of file A and the lower part of file B.

    I need something like this:

    ffmpeg -i A.flac -i B.flac -af "copy all from 0 to -25 dB from 'A', copy all from -25dB to -infinite from file 'B' and put these 2 parts on the output" output.flac

    -25 dB is a variable that I can adjust; I would like to keep the volume of file 'A'.

    In short, I would like to replace the soft background of file 'A' with file 'B'.

    Is there a way to do this?

    Thank you!