Advanced search

Media (0)

Word: - Tags -/flash

No media matching your criteria is available on the site.

Other articles (18)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators configure these menus precisely.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: The main menu; Identifier: barrenav; This menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with Zpip-based templates; (...)

  • The plugin: Mutualisation management

    2 March 2010, by

    The Mutualisation management plugin makes it possible to manage the various mediaspip channels from a master site. Its purpose is to provide a pure SPIP solution to replace this older solution.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customize the central mes_options.php file as you wish. As an example, here is the one used by the mediaspip.net platform:
    <?php (...)

On other sites (2635)

  • Libavformat/FFMPEG : Muxing into mp4 with AVFormatContext drops the final frame, depending on the number of frames

    27 October 2020, by Galen Lynch

    I am trying to use libavformat to create a .mp4 video with a single h.264 video stream, but the final frame in the resulting file often has a duration of zero and is effectively dropped from the video. Strangely enough, whether the final frame is dropped or not depends on how many frames I try to add to the file. Some simple testing that I outline below makes me think that I am somehow misconfiguring either the AVFormatContext or the h.264 encoder, resulting in two edit lists that sometimes chop off the final frame. I will also post a simplified version of the code I am using, in case I'm making some obvious mistake. Any help would be greatly appreciated: I've been struggling with this issue for the past few days and have made little progress.


    I can recover the dropped frame by creating a new mp4 container using the ffmpeg binary with the copy codec if I use the -ignore_editlist option. Inspecting the file with a missing frame using ffprobe, mp4trackdump, or mp4file --dump, shows that the final frame is dropped if its sample time is exactly the same as the end of the edit list. When I make a file that has no dropped frames, it still has two edit lists: the only difference is that the end time of the edit list is beyond all samples in files that do not have dropped frames. Though this is hardly a fair comparison, if I make a .png for each frame and then generate a .mp4 with ffmpeg using the image2 codec and similar h.264 settings, I produce a movie with all frames present, only one edit list, and similar PTS times as my mangled movies with two edit lists. In this case, the edit list always ends after the last frame/sample time.
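
    Concretely, the remux workaround described above looks something like the following (a sketch; the file names are placeholders, and -ignore_editlist is an input option of the mov/mp4 demuxer):

    ffmpeg -ignore_editlist 1 -i testing.mp4 -c copy recovered.mp4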


    I am using this command to determine the number of frames in the resulting stream, though I also get the same number with other utilities:


    ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 video_file_name.mp4


    Simple inspection of the file with ffprobe shows no obviously alarming signs to me, besides the framerate being affected by the missing frame (the target was 24):


    $ ffprobe -hide_banner testing.mp4
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'testing.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf58.45.100
      Duration: 00:00:04.13, start: 0.041016, bitrate: 724 kb/s
        Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 100x100, 722 kb/s, 24.24 fps, 24 tbr, 12288 tbn, 48 tbc (default)
        Metadata:
          handler_name    : VideoHandler


    The files that I generate programmatically always have two edit lists, one of which is very short. In files both with and without a missing frame, the duration of one of the frames is 0, while all the others have the same duration (512). You can see this in the ffmpeg output for this file that I tried to put 100 frames into, though only 99 are visible despite the file containing all 100 samples.


    $ ffmpeg -hide_banner -y -v 9 -loglevel 99 -i testing.mp4  
    ...
    <edited to remove the class printing>
    type:'edts' parent:'trak' sz: 48 100 948
    type:'elst' parent:'edts' sz: 40 8 40
    track[0].edit_count = 2
    duration=41 time=-1 rate=1.000000
    duration=4125 time=0 rate=1.000000
    type:'mdia' parent:'trak' sz: 808 148 948
    type:'mdhd' parent:'mdia' sz: 32 8 800
    type:'hdlr' parent:'mdia' sz: 45 40 800
    ctype=[0][0][0][0]
    stype=vide
    type:'minf' parent:'mdia' sz: 723 85 800
    type:'vmhd' parent:'minf' sz: 20 8 715
    type:'dinf' parent:'minf' sz: 36 28 715
    type:'dref' parent:'dinf' sz: 28 8 28
    Unknown dref type 0x206c7275 size 12
    type:'stbl' parent:'minf' sz: 659 64 715
    type:'stsd' parent:'stbl' sz: 151 8 651
    size=135 4CC=avc1 codec_type=0
    type:'avcC' parent:'stsd' sz: 49 8 49
    type:'stts' parent:'stbl' sz: 32 159 651
    track[0].stts.entries = 2
    sample_count=99, sample_duration=512
    sample_count=1, sample_duration=0
    ...
    AVIndex stream 0, sample 99, offset 5a0ed, dts 50688, size 3707, distance 0, keyframe 1
    Processing st: 0, edit list 0 - media time: -1, duration: 504
    Processing st: 0, edit list 1 - media time: 0, duration: 50688
    type:'udta' parent:'moov' sz: 98 1072 1162
    ...


    The last frame has zero duration:


    $ mp4trackdump -v testing.mp4
    ...
    mp4file testing.mp4, track 1, samples 100, timescale 12288
    sampleId      1, size  6943 duration      512 time        0 00:00:00.000 S
    sampleId      2, size  3671 duration      512 time      512 00:00:00.041 S
    ...
    sampleId     99, size  3687 duration      512 time    50176 00:00:04.083 S
    sampleId    100, size  3707 duration        0 time    50688 00:00:04.125 S


    Non-mangled videos that I generate have a similar structure, as you can see in this video that had 99 input frames, all of which are visible in the output. Even though the sample_duration is set to zero for one of the samples in the stts box, it is not dropped from the frame count or when reading the frames back in with ffmpeg.


    $ ffmpeg -hide_banner -y -v 9 -loglevel 99 -i testing_99.mp4  
    ...
    type:'elst' parent:'edts' sz: 40 8 40
    track[0].edit_count = 2
    duration=41 time=-1 rate=1.000000
    duration=4084 time=0 rate=1.000000
    ...
    track[0].stts.entries = 2
    sample_count=98, sample_duration=512
    sample_count=1, sample_duration=0
    ...
    AVIndex stream 0, sample 98, offset 5d599, dts 50176, size 3833, distance 0, keyframe 1
    Processing st: 0, edit list 0 - media time: -1, duration: 504
    Processing st: 0, edit list 1 - media time: 0, duration: 50184
    ...


    $ mp4trackdump -v testing_99.mp4
    ...
    sampleId     98, size  3814 duration      512 time    49664 00:00:04.041 S
    sampleId     99, size  3833 duration        0 time    50176 00:00:04.083 S


    One difference that jumps out to me is that the mangled file's second edit list ends at time 50688, which coincides with the last sample, while the non-mangled file's edit list ends at 50184, which is after the time of the last sample at 50176. As I mentioned before, whether the last frame is clipped depends on the number of frames I encode and mux into the container: 100 input frames results in 1 dropped frame, 99 results in 0, 98 in 0, 97 in 1, etc...


    Here is the code that I used to generate these files, which is an MWE script version of library functions that I am modifying. It is written in Julia, which I do not think is important here, and calls the FFmpeg library, version 4.3.1. It's more or less a direct translation of the FFmpeg muxing demo, although the codec context here is created before the format context. I am presenting the code that interacts with ffmpeg first, although it relies on some helper code that I will put below.


    The helper code just makes it easier to work with nested C structs in Julia, and allows . syntax in Julia to be used in place of C's arrow (->) operator for field access of struct pointers. Libav structs such as AVFrame appear as a thin wrapper type AVFramePtr, and similarly AVStream appears as AVStreamPtr etc... These act like single or double pointers for the purposes of function calls, depending on the function's type signature. Hopefully it will be clear enough to understand if you are familiar with working with libav in C, and I don't think looking at the helper code should be necessary if you don't want to run the code.


    # Function to transfer array to AVPicture/AVFrame
    function transfer_img_buf_to_frame!(frame, img)
        img_pointer = pointer(img)
        data_pointer = frame.data[1] # Base-1 indexing, get pointer to first data buffer in frame
        for h = 1:frame.height
            data_line_pointer = data_pointer + (h-1) * frame.linesize[1] # base-1 indexing
            img_line_pointer = img_pointer + (h-1) * frame.width
            unsafe_copyto!(data_line_pointer, img_line_pointer, frame.width) # base-1 indexing
        end
    end

    # Function to transfer AVFrame to AVCodecContext, and AVPacket to AVFormatContext
    function encode_mux!(packet, format_context, frame, codec_context; flush = false)
        if flush
            fret = avcodec_send_frame(codec_context, C_NULL)
        else
            fret = avcodec_send_frame(codec_context, frame)
        end
        if fret < 0 && !in(fret, [-Libc.EAGAIN, VIO_AVERROR_EOF])
            error("Error $fret sending a frame for encoding")
        end

        pret = Cint(0)
        while pret >= 0
            pret = avcodec_receive_packet(codec_context, packet)
            if pret == -Libc.EAGAIN || pret == VIO_AVERROR_EOF
                 break
            elseif pret < 0
                error("Error $pret during encoding")
            end
            stream = format_context.streams[1] # Base-1 indexing
            av_packet_rescale_ts(packet, codec_context.time_base, stream.time_base)
            packet.stream_index = 0
            ret = av_interleaved_write_frame(format_context, packet)
            ret < 0 && error("Error muxing packet: $ret")
        end
        if !flush && fret == -Libc.EAGAIN && pret != VIO_AVERROR_EOF
            fret = avcodec_send_frame(codec_context, frame)
            if fret < 0 && fret != VIO_AVERROR_EOF
                error("Error $fret sending a frame for encoding")
            end
        end
        return pret
    end

    # Set parameters of test movie
    nframe = 100
    width, height = 100, 100
    framerate = 24
    gop = 0
    codec_name = "libx264"
    filename = "testing.mp4"

    ((width % 2 !=0) || (height % 2 !=0)) && error("Encoding error: Image dims must be a multiple of two")

    # Make test images
    imgstack = map(x->rand(UInt8,width,height),1:nframe);

    pix_fmt = AV_PIX_FMT_GRAY8
    framerate_rat = Rational(framerate)

    codec = avcodec_find_encoder_by_name(codec_name)
    codec == C_NULL && error("Codec '$codec_name' not found")

    # Allocate AVCodecContext
    codec_context_p = avcodec_alloc_context3(codec) # raw pointer
    codec_context_p == C_NULL && error("Could not allocate AVCodecContext")
    # Easier to work with pointer that acts like a c struct pointer, type defined below
    codec_context = AVCodecContextPtr(codec_context_p)

    codec_context.width = width
    codec_context.height = height
    codec_context.time_base = AVRational(1/framerate_rat)
    codec_context.framerate = AVRational(framerate_rat)
    codec_context.pix_fmt = pix_fmt
    codec_context.gop_size = gop

    ret = avcodec_open2(codec_context, codec, C_NULL)
    ret < 0 && error("Could not open codec: Return code $(ret)")

    # Allocate AVFrame and wrap it in a Julia convenience type
    frame_p = av_frame_alloc()
    frame_p == C_NULL && error("Could not allocate AVFrame")
    frame = AVFramePtr(frame_p)

    frame.format = pix_fmt
    frame.width = width
    frame.height = height

    # Allocate picture buffers for frame
    ret = av_frame_get_buffer(frame, 0)
    ret < 0 && error("Could not allocate the video frame data")

    # Allocate AVPacket and wrap it in a Julia convenience type
    packet_p = av_packet_alloc()
    packet_p == C_NULL && error("Could not allocate AVPacket")
    packet = AVPacketPtr(packet_p)

    # Allocate AVFormatContext and wrap it in a Julia convenience type
    format_context_dp = Ref(Ptr{AVFormatContext}()) # double pointer
    ret = avformat_alloc_output_context2(format_context_dp, C_NULL, C_NULL, filename)
    if ret != 0 || format_context_dp[] == C_NULL
        error("Could not allocate AVFormatContext")
    end
    format_context = AVFormatContextPtr(format_context_dp)

    # Add video stream to AVFormatContext and configure it to use the encoder made above
    stream_p = avformat_new_stream(format_context, C_NULL)
    stream_p == C_NULL && error("Could not allocate output stream")
    stream = AVStreamPtr(stream_p) # Wrap this pointer in a convenience type

    stream.time_base = codec_context.time_base
    stream.avg_frame_rate = 1 / convert(Rational, stream.time_base)
    ret = avcodec_parameters_from_context(stream.codecpar, codec_context)
    ret < 0 && error("Could not set parameters of stream")

    # Open the AVIOContext
    pb_ptr = field_ptr(format_context, :pb)
    # This following is just a call to avio_open, with a bit of extra protection
    # so the Julia garbage collector does not destroy format_context during the call
    ret = GC.@preserve format_context avio_open(pb_ptr, filename, AVIO_FLAG_WRITE)
    ret < 0 && error("Could not open file $filename for writing")

    # Write the header
    ret = avformat_write_header(format_context, C_NULL)
    ret < 0 && error("Could not write header")

    # Encode and mux each frame
    for i in 1:nframe # iterate from 1 to nframe
        img = imgstack[i] # base-1 indexing
        ret = av_frame_make_writable(frame)
        ret < 0 && error("Could not make frame writable")
        transfer_img_buf_to_frame!(frame, img)
        frame.pts = i
        encode_mux!(packet, format_context, frame, codec_context)
    end

    # Flush the encoder
    encode_mux!(packet, format_context, frame, codec_context; flush = true)

    # Write the trailer
    av_write_trailer(format_context)

    # Close the AVIOContext
    pb_ptr = field_ptr(format_context, :pb) # get pointer to format_context.pb
    ret = GC.@preserve format_context avio_closep(pb_ptr) # simply a call to avio_closep
    ret < 0 && error("Could not free AVIOContext")

    # Deallocation
    avcodec_free_context(codec_context)
    av_frame_free(frame)
    av_packet_free(packet)
    avformat_free_context(format_context)


    Below is the helper code that makes accessing pointers to nested c structs not a total pain in Julia. If you try to run the code yourself, please enter this in before the logic of the code shown above. It requires VideoIO.jl, a Julia wrapper to libav.


    # Convenience type and methods to make the above code look more like C
    using Base: RefValue, fieldindex

    import Base: unsafe_convert, getproperty, setproperty!, getindex, setindex!,
        unsafe_wrap, propertynames

    # VideoIO is a Julia wrapper to libav
    #
    # Bring bindings to libav library functions into namespace
    using VideoIO: AVCodecContext, AVFrame, AVPacket, AVFormatContext, AVRational,
        AVStream, AV_PIX_FMT_GRAY8, AVIO_FLAG_WRITE, AVFMT_NOFILE,
        avformat_alloc_output_context2, avformat_free_context, avformat_new_stream,
        av_dump_format, avio_open, avformat_write_header,
        avcodec_parameters_from_context, av_frame_make_writable, avcodec_send_frame,
        avcodec_receive_packet, av_packet_rescale_ts, av_interleaved_write_frame,
        avformat_query_codec, avcodec_find_encoder_by_name, avcodec_alloc_context3,
        avcodec_open2, av_frame_alloc, av_frame_get_buffer, av_packet_alloc,
        avio_closep, av_write_trailer, avcodec_free_context, av_frame_free,
        av_packet_free

    # Submodule of VideoIO
    using VideoIO: AVCodecs

    # Need to import this function from Julia's Base to add more methods
    import Base: convert

    const VIO_AVERROR_EOF = -541478725 # AVERROR_EOF

    # Methods to convert between AVRational and Julia's Rational type, because it's
    # hard to access the AV rational macros with Julia's C interface
    convert(::Type{Rational{T}}, r::AVRational) where T = Rational{T}(r.num, r.den)
    convert(::Type{Rational}, r::AVRational) = Rational(r.num, r.den)
    convert(::Type{AVRational}, r::Rational) = AVRational(numerator(r), denominator(r))

    """
        mutable struct NestedCStruct{T}

    Wraps a pointer to a C struct, and acts like a double pointer to that memory.
    The methods below will automatically convert it to a single pointer if needed
    for a function call, and make interacting with it in Julia look (more) similar
    to interacting with it in C, except '->' in C is replaced by '.' in Julia.
    """
    mutable struct NestedCStruct{T}
        data::RefValue{Ptr{T}}
    end
    NestedCStruct{T}(a::Ptr) where T = NestedCStruct{T}(Ref(a))
    NestedCStruct(a::Ptr{T}) where T = NestedCStruct{T}(a)

    const AVCodecContextPtr = NestedCStruct{AVCodecContext}
    const AVFramePtr = NestedCStruct{AVFrame}
    const AVPacketPtr = NestedCStruct{AVPacket}
    const AVFormatContextPtr = NestedCStruct{AVFormatContext}
    const AVStreamPtr = NestedCStruct{AVStream}

    function field_ptr(::Type{S}, struct_pointer::Ptr{T}, field::Symbol,
                               index::Integer = 1) where {S,T}
        fieldpos = fieldindex(T, field)
        field_pointer = convert(Ptr{S}, struct_pointer) +
            fieldoffset(T, fieldpos) + (index - 1) * sizeof(S)
        return field_pointer
    end

    field_ptr(a::Ptr{T}, field::Symbol, args...) where T =
        field_ptr(fieldtype(T, field), a, field, args...)

    function check_ptr_valid(p::Ptr, err::Bool = true)
        valid = p != C_NULL
        err && !valid && error("Invalid pointer")
        valid
    end

    unsafe_convert(::Type{Ptr{T}}, ap::NestedCStruct{T}) where T =
        getfield(ap, :data)[]
    unsafe_convert(::Type{Ptr{Ptr{T}}}, ap::NestedCStruct{T}) where T =
        unsafe_convert(Ptr{Ptr{T}}, getfield(ap, :data))

    function check_ptr_valid(a::NestedCStruct{T}, args...) where T
        p = unsafe_convert(Ptr{T}, a)
        GC.@preserve a check_ptr_valid(p, args...)
    end

    nested_wrap(x::Ptr{T}) where T = NestedCStruct(x)
    nested_wrap(x) = x

    function getproperty(ap::NestedCStruct{T}, s::Symbol) where T
        check_ptr_valid(ap)
        p = unsafe_convert(Ptr{T}, ap)
        res = GC.@preserve ap unsafe_load(field_ptr(p, s))
        nested_wrap(res)
    end

    function setproperty!(ap::NestedCStruct{T}, s::Symbol, x) where T
        check_ptr_valid(ap)
        p = unsafe_convert(Ptr{T}, ap)
        fp = field_ptr(p, s)
        GC.@preserve ap unsafe_store!(fp, x)
    end

    function getindex(ap::NestedCStruct{T}, i::Integer) where T
        check_ptr_valid(ap)
        p = unsafe_convert(Ptr{T}, ap)
        res = GC.@preserve ap unsafe_load(p, i)
        nested_wrap(res)
    end

    function setindex!(ap::NestedCStruct{T}, i::Integer, x) where T
        check_ptr_valid(ap)
        p = unsafe_convert(Ptr{T}, ap)
        GC.@preserve ap unsafe_store!(p, x, i)
    end

    function unsafe_wrap(::Type{T}, ap::NestedCStruct{S}, i) where {S, T}
        check_ptr_valid(ap)
        p = unsafe_convert(Ptr{S}, ap)
        GC.@preserve ap unsafe_wrap(T, p, i)
    end

    function field_ptr(::Type{S}, a::NestedCStruct{T}, field::Symbol,
                               args...) where {S, T}
        check_ptr_valid(a)
        p = unsafe_convert(Ptr{T}, a)
        GC.@preserve a field_ptr(S, p, field, args...)
    end

    field_ptr(a::NestedCStruct{T}, field::Symbol, args...) where T =
        field_ptr(fieldtype(T, field), a, field, args...)

    propertynames(ap::T) where {S, T<:NestedCStruct{S}} = (fieldnames(S)...,
                                                           fieldnames(T)...)




    Edit: Some things that I have already tried


    • Explicitly setting the stream duration to be the same number as the number of frames that I add, or a few more beyond that
    • Explicitly setting the stream start time to zero, while the first frame has a PTS of 1
    • Playing around with encoder parameters, as well as gop_size, using B frames, etc.
    • Setting the private data for the mov/mp4 muxer to set the movflag negative_cts_offsets
    • Changing the framerate
    • Tried different pixel formats, such as AV_PIX_FMT_YUV420P

    Also, to be clear: while I can just remux the file into another container while ignoring the edit lists to work around this problem, I am hoping not to make damaged mp4 files in the first place.


  • ffmpeg - convert movie AND show original input (as resized picture-in-picture, top-left corner) in the final output file

    2 October 2019, by raven

    this is my first post on this forum, so please be gentle in case I accidentally trip over any forum rules that I do not know of yet :).

    I would like to apply some color-grading to underwater GoPro footage. To gauge the effect of my color settings more quickly (trial-and-error, as of yet), I would like to see the original input video stream as a PIP (e.g., scaled down 50%) in the top-left corner of the converted output movie.

    I have one input movie that is going to be color graded. The PIP should use the original as an input, just a scaled-down version of it.

    I would like to use ffmpeg’s -filter_complex option to do the PIP, but all examples I can find on "-filter_complex" use two already-existing movies. Instead, I would like to make the color-corrected stream an on-the-fly input to "-filter_complex", which then renders the PIP.

    Is that doable, all in one go?

    Both of the individual snippets below work fine; I now would like to combine them and skip the creation of an intermediate color-graded video (see the sketch after the snippets). Your help combining these two steps into one single process is greatly appreciated!

    Thanks in advance,
    raven.

    [existing code snippets (M$ batch files)]

    ::declarations/defines::
    set "INPUT="
    set "TMP="
    set "OUTPUT="
    set "FFMPG="
    set "QU=9" :: quality settings

    set "CONV='"0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1
    0 -1 0:0 -1 0 -1 5 -1 0 -1 0'"" :: sharpening convolution filter

    ::color-grading part::
    %FFMPG% -i %INPUT% -vf convolution=%CONV%,colorbalance=rs=%rs%:gs=%gs%:bs=%bs%:rm=%rm%:gm=%gm%:bm=%bm%:rh=%rh%:gh=%gh%:bh=%bh% -q:v %QU% -codec:v mpeg4 %TMP%

    ::PIP part::
    %FFMPG% -i %INPUT% -i %TMP% -filter_complex "[1]scale=iw/3:ih/3 [pip]; [0][pip] overlay=main_w-overlay_w-10:main_h-overlay_h-10" -q:v %QU% -codec:v mpeg4 %OUTPUT%

    [/existing code]
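
    One possible way to combine both steps in a single pass with -filter_complex (a sketch, untested; it reuses the variables from the snippets above, splits the decoded input, grades one copy, scales the other down 50%, and overlays it in the top-left corner; the quoting of %CONV% may need adjusting when inlined):

    %FFMPG% -i %INPUT% -filter_complex "[0:v]split[main][pipsrc]; [main]convolution=%CONV%,colorbalance=rs=%rs%:gs=%gs%:bs=%bs%:rm=%rm%:gm=%gm%:bm=%bm%:rh=%rh%:gh=%gh%:bh=%bh%[graded]; [pipsrc]scale=iw/2:ih/2[pip]; [graded][pip]overlay=10:10" -q:v %QU% -codec:v mpeg4 %OUTPUT%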
  • When I append a silent audio (mp3) to an existing list of audio it garbles the final audio?

    6 February 2020, by Marie

    After several hours I have narrowed down the issue with the garbled audio to the 2-second silence mp3 I am appending (I think I had produced it once with Wavelab).

    However, I tried using ffmpeg, following a post, to produce a similar 2-second audio file, but it too will corrupt/garble/chop the voice in the final concatenation of audio files.

    ffmpeg -f lavfi -i anullsrc=r=44100:cl=mono -t 2 -q:a 9 -acodec libmp3lame SILENCE_2sec.MP3
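
    (In case it turns out to matter, a variant of that command matching the 24000 Hz mono stream that ffmpeg reports in the console output further below would be, untested:)

    ffmpeg -f lavfi -i anullsrc=r=24000:cl=mono -t 2 -q:a 9 -acodec libmp3lame SILENCE_2sec_24k.MP3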

    I typically have several audio files to concatenate, but for simplicity I have been able to narrow it down to a couple of files and the following script: a simple Windows batch file you should be able to use to reproduce the issue at your end.

    rem
    rem  
    SET EXE="S:\_BINS\FFmpeg 4.2.1 20200112\bin\ffmpeg.exe"

    SET ROOTPATH=.\

    SET IN_FILE="%ROOTPATH%MyList.txt"

    ECHO file '%ROOTPATH%HELLO.mp3' > MyList.txt
    ECHO file 'SILENCE_2sec.MP3' >> MyList.txt

    SET OPTIONS= -f concat -safe 0 -i  %IN_FILE%  -c copy -y

    SET OUT_FILE="%ROOTPATH%CONCATENATED_AUDIO_2.MP3"

    SET INFO_FILE="INFO.TXT"

    %EXE% %OPTIONS%  %OUT_FILE% 1> %INFO_FILE% 2>&1

    ECHO ======================== >> %INFO_FILE%
    ECHO IN_FILE=%IN_FILE%  >> %INFO_FILE%
    ECHO EXE=%EXE%  >> %INFO_FILE%
    ECHO OPTIONS=%OPTIONS%  >> %INFO_FILE%
    ECHO ======================== >> %INFO_FILE%

    Here is the console output from ffmpeg; let me know if you need other output, including from ffprobe.

    ffmpeg version git-2020-01-10-3d894db Copyright (c) 2000-2020 the FFmpeg developers
     built with gcc 9.2.1 (GCC) 20191125
     configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
     libavutil      56. 38.100 / 56. 38.100
     libavcodec     58. 65.103 / 58. 65.103
     libavformat    58. 35.101 / 58. 35.101
     libavdevice    58.  9.103 / 58.  9.103
     libavfilter     7. 70.101 /  7. 70.101
     libswscale      5.  6.100 /  5.  6.100
     libswresample   3.  6.100 /  3.  6.100
     libpostproc    55.  6.100 / 55.  6.100
    [mp3 @ 000000000036af80] Estimating duration from bitrate, this may be inaccurate
    Input #0, concat, from '.\MyList.txt':
     Duration: N/A, start: 0.000000, bitrate: 32 kb/s
       Stream #0:0: Audio: mp3, 24000 Hz, mono, fltp, 32 kb/s
    Output #0, mp3, to '.\CONCATENATED_AUDIO_2.MP3':
     Metadata:
       TSSE            : Lavf58.35.101
       Stream #0:0: Audio: mp3, 24000 Hz, mono, fltp, 32 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    [mp3 @ 0000000000372d00] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 17280 >= 17255
    size=      11kB time=00:00:02.73 bitrate=  33.2kbits/s speed=2.73e+03x    
    video:0kB audio:11kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.137446%
    ========================
    IN_FILE=".\MyList.txt"  
    EXE="S:\_BINS\FFmpeg 4.2.1 20200112\bin\ffmpeg.exe"  
    OPTIONS= -f concat -safe 0 -i  ".\MyList.txt"  -c copy -y  
    ========================  

    I believe I am running FFmpeg 4.2.1, recently installed (20200112)

    You may produce the HELLO.mp3 by saving the following link

    https://translate.google.com.vn/translate_tts?en=UTF-8&q=Hello+&tl=en&client=tw-ob

    FYI, I am still an ffmpeg novice and use it more like a black box, with the help I have received in this super forum.
    Please be as explicit as you can with command-line options on how I can fix this issue.
    Thank you.

    Additional debugging hints:

    If I append more files after the silence audio, it seems that the silence audio impacts (garbles, chops) the previous audio.
    You may try the following list of audio files as input.

    ECHO file '%ROOTPATH%HELLO.mp3' > MyList.txt
    ECHO file 'SILENCE_2sec.MP3' >> MyList.txt
    ECHO file '%ROOTPATH%HELLO.mp3' >> MyList.txt
    ECHO file '%ROOTPATH%HELLO.mp3' >> MyList.txt

    I typically add one or more silence files to get a trailing silence after the actual audio; that's my current logic. However, if you have an alternative to appending a silence file while concatenating several audio files, or to appending x seconds of silence to an existing audio file, I can use that method from my code as well (see the sketch below).
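
    One alternative sketch (untested, and it re-encodes rather than stream-copies; the output name and quality setting are placeholders): pad the existing file with two seconds of trailing silence using the apad filter instead of concatenating a separate silence file:

    ffmpeg -i HELLO.mp3 -af "apad=pad_dur=2" -acodec libmp3lame -q:a 9 HELLO_padded.mp3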

    Thank you.