
Other articles (34)

  • Requesting the creation of a channel

    12 March 2010, by

    Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at registration time; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields that first of all give the administrators information about (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; the generation of a thumbnail: extraction of a (...)

On other sites (5170)

  • The encoding of ffmpeg does not work on iOS

    25 May 2017, by Deric

    I would like to send an encoded stream using FFmpeg, but the encoding-and-relay code below does not work.
    When I relay the packets without re-encoding, playback in VLC is fine; once the packets are re-encoded, they do not play.
    I do not know what's wrong.
    Please help me.

    AVOutputFormat *ofmt = NULL;
    //Input AVFormatContext and Output AVFormatContext
    AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    AVPacket pkt;
    //const char *in_filename, *out_filename;
    int ret, i;
    int videoindex=-1;
    int frame_index=0;
    int64_t start_time=0;

    av_register_all();
    //Network
    avformat_network_init();
    //Input
    if ((ret = avformat_open_input(&ifmt_ctx, "rtmp://", 0, 0)) < 0) {
       printf( "Could not open input file.");
    }
    if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
       printf( "Failed to retrieve input stream information");
    }


    AVCodecContext *context = NULL;

    for(i = 0; i < ifmt_ctx->nb_streams; i++) {
       if(ifmt_ctx->streams[i]->codecpar->codec_type==AVMEDIA_TYPE_VIDEO){

           videoindex=i;

           AVCodecParameters *params = ifmt_ctx->streams[i]->codecpar;
           AVCodec *codec = avcodec_find_decoder(params->codec_id);
           if (codec == NULL)  { return; };

           context = avcodec_alloc_context3(codec);

           if (context == NULL) { return; };

           ret = avcodec_parameters_to_context(context, params);
           if(ret < 0){
               avcodec_free_context(&context);
           }

           context->framerate = av_guess_frame_rate(ifmt_ctx, ifmt_ctx->streams[i], NULL);

           ret = avcodec_open2(context, codec, NULL);
           if(ret < 0) {
               NSLog(@"avcodec open2 error");
               avcodec_free_context(&context);
           }

           break;
       }
    }
    av_dump_format(ifmt_ctx, 0, "rtmp://", 0);

    //Output

    avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv", "rtmp://"); //RTMP
    //avformat_alloc_output_context2(&ofmt_ctx, NULL, "mpegts", out_filename);//UDP

    if (!ofmt_ctx) {
       printf( "Could not create output context\n");
       ret = AVERROR_UNKNOWN;
    }
    ofmt = ofmt_ctx->oformat;
    for (i = 0; i < ifmt_ctx->nb_streams; i++) {
       //Create output AVStream according to input AVStream
       AVStream *in_stream = ifmt_ctx->streams[i];
       AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
       if (!out_stream) {
           printf( "Failed allocating output stream\n");
           ret = AVERROR_UNKNOWN;
       }

       out_stream->time_base = in_stream->time_base;

       //Copy the settings of AVCodecContext
       ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
       if (ret < 0) {
           printf( "Failed to copy context from input to output stream codec context\n");
       }

       out_stream->codecpar->codec_tag = 0;
       if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER) {
           out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }
    }
    //Dump Format------------------
    av_dump_format(ofmt_ctx, 0, "rtmp://", 1);
    //Open output URL
    if (!(ofmt->flags & AVFMT_NOFILE)) {
       ret = avio_open(&ofmt_ctx->pb, "rtmp://", AVIO_FLAG_WRITE);
       if (ret < 0) {
           printf( "Could not open output URL ");
      }
    }
    //Write file header
    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
       printf( "Error occurred when opening output URL\n");
    }

    // Encoding
    AVCodec *codec;
    AVCodecContext *c;

    AVStream *video_st = avformat_new_stream(ofmt_ctx, 0);
    video_st->time_base.num = 1;
    video_st->time_base.den = 25;

    if(video_st == NULL){
       NSLog(@"video stream error");
    }


    codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if(!codec){
       NSLog(@"avcodec find encoder error");
    }

    c = avcodec_alloc_context3(codec);
    if(!c){
       NSLog(@"avcodec alloc context error");
    }


    c->profile = FF_PROFILE_H264_BASELINE;
    c->width = ifmt_ctx->streams[videoindex]->codecpar->width;
    c->height = ifmt_ctx->streams[videoindex]->codecpar->height;
    c->time_base.num = 1;
    c->time_base.den = 25;
    c->bit_rate = 800000;
    //c->time_base = { 1,22 };
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    c->thread_count = 2;
    c->thread_type = 2;

    AVDictionary *param = 0;

    av_dict_set(&param, "preset", "slow", 0);
    av_dict_set(&param, "tune", "zerolatency", 0);

    if (avcodec_open2(c, codec, NULL) < 0) {
       fprintf(stderr, "Could not open codec\n");
    }



    AVFrame *pFrame = av_frame_alloc();

    start_time=av_gettime();
    while (1) {

       AVPacket encoded_pkt;

       av_init_packet(&encoded_pkt);
       encoded_pkt.data = NULL;
       encoded_pkt.size = 0;

       AVStream *in_stream, *out_stream;
       //Get an AVPacket
       ret = av_read_frame(ifmt_ctx, &pkt);
       if (ret < 0) {
           break;
       }

       //FIX:No PTS (Example: Raw H.264)
       //Simple Write PTS
       if(pkt.pts==AV_NOPTS_VALUE){
           //Write PTS
           AVRational time_base1=ifmt_ctx->streams[videoindex]->time_base;
           //Duration between 2 frames (us)
           int64_t calc_duration=(double)AV_TIME_BASE/av_q2d(ifmt_ctx->streams[videoindex]->r_frame_rate);
           //Parameters
           pkt.pts=(double)(frame_index*calc_duration)/(double)(av_q2d(time_base1)*AV_TIME_BASE);
           pkt.dts=pkt.pts;
           pkt.duration=(double)calc_duration/(double)(av_q2d(time_base1)*AV_TIME_BASE);
       }
       //Important:Delay
       if(pkt.stream_index==videoindex){
           AVRational time_base=ifmt_ctx->streams[videoindex]->time_base;
           AVRational time_base_q={1,AV_TIME_BASE};
           int64_t pts_time = av_rescale_q(pkt.dts, time_base, time_base_q);
           int64_t now_time = av_gettime() - start_time;
           if (pts_time > now_time) {
               av_usleep(pts_time - now_time);
           }

       }

       in_stream  = ifmt_ctx->streams[pkt.stream_index];
       out_stream = ofmt_ctx->streams[pkt.stream_index];
       /* copy packet */
       //Convert PTS/DTS
       //pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
       //pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
       pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
       pkt.pos = -1;

       //Print to Screen
       if(pkt.stream_index==videoindex){
           //printf("Send %8d video frames to output URL\n",frame_index);
           frame_index++;
       }



       // Decode and Encode
       if(pkt.stream_index == videoindex) {

           ret = avcodec_send_packet(context, &pkt);

           if(ret<0){
               NSLog(@"avcode send packet error");
           }

           ret = avcodec_receive_frame(context, pFrame);
           if(ret<0){
               NSLog(@"avcodec receive frame error");
           }

           ret = avcodec_send_frame(c, pFrame);

           if(ret < 0){
               NSLog(@"avcodec send frame - %s", av_err2str(ret));
           }

           ret = avcodec_receive_packet(c, &encoded_pkt);

           if(ret < 0){
               NSLog(@"avcodec receive packet error");
           }

       }

       //ret = av_write_frame(ofmt_ctx, &pkt);

       //encoded_pkt.stream_index = pkt.stream_index;
       av_packet_rescale_ts(&encoded_pkt, c->time_base, ofmt_ctx->streams[videoindex]->time_base);


       ret = av_interleaved_write_frame(ofmt_ctx, &encoded_pkt);

       if (ret < 0) {
           printf( "Error muxing packet\n");
           break;
       }

       av_packet_unref(&encoded_pkt);
       av_free_packet(&pkt);

    }
    //Write file trailer
    av_write_trailer(ofmt_ctx);
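
    A hedged sketch for comparison (not a verified fix): in the loop above, avcodec_receive_frame and avcodec_receive_packet are each called only once per input packet, their AVERROR(EAGAIN) results are ignored, encoded_pkt is written even when the encoder produced nothing, encoded_pkt.stream_index is never set, and the decoded frame's pts is never rescaled into the encoder's time base. The standard send/receive drain pattern, reusing the variable names from the code above, looks roughly like this:

    // Sketch of the standard decode -> encode drain loops (context = decoder,
    // c = encoder). Both receive calls must be looped until they return
    // AVERROR(EAGAIN), because codecs buffer frames internally.
    if (avcodec_send_packet(context, &pkt) < 0) {
        NSLog(@"avcodec send packet error");
    }
    while (avcodec_receive_frame(context, pFrame) == 0) {
        // Decoder timestamps are in the input stream's time base; rescale
        // them to the encoder's time base before encoding.
        pFrame->pts = av_rescale_q(pFrame->pts,
                                   ifmt_ctx->streams[videoindex]->time_base,
                                   c->time_base);
        if (avcodec_send_frame(c, pFrame) < 0) {
            break;
        }
        while (avcodec_receive_packet(c, &encoded_pkt) == 0) {
            encoded_pkt.stream_index = videoindex;
            av_packet_rescale_ts(&encoded_pkt, c->time_base,
                                 ofmt_ctx->streams[videoindex]->time_base);
            ret = av_interleaved_write_frame(ofmt_ctx, &encoded_pkt);
            av_packet_unref(&encoded_pkt);
            if (ret < 0) {
                break;
            }
        }
    }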

  • Libavformat/FFMPEG: Muxing into mp4 with AVFormatContext drops the final frame, depending on the number of frames

    27 October 2020, by Galen Lynch

    I am trying to use libavformat to create a .mp4 video with a single h.264 video stream, but the final frame in the resulting file often has a duration of zero and is effectively dropped from the video. Strangely enough, whether the final frame is dropped or not depends on how many frames I try to add to the file. Some simple testing that I outline below makes me think that I am somehow misconfiguring either the AVFormatContext or the h.264 encoder, resulting in two edit lists that sometimes chop off the final frame. I will also post a simplified version of the code I am using, in case I'm making some obvious mistake. Any help would be greatly appreciated: I've been struggling with this issue for the past few days and have made little progress.

    


    I can recover the dropped frame by creating a new mp4 container using the ffmpeg binary with the copy codec if I use the -ignore_editlist option. Inspecting the file with a missing frame using ffprobe, mp4trackdump, or mp4file --dump shows that the final frame is dropped if its sample time is exactly the same as the end of the edit list. When I make a file that has no dropped frames, it still has two edit lists: the only difference is that the end time of the edit list is beyond all samples in files that do not have dropped frames. Though this is hardly a fair comparison, if I make a .png for each frame and then generate a .mp4 with ffmpeg using the image2 format and similar h.264 settings, I produce a movie with all frames present, only one edit list, and similar PTS times as my mangled movies with two edit lists. In this case, the edit list always ends after the last frame/sample time.
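
    For reference, the recovery remux described above can be written as a command along these lines (testing_fixed.mp4 is just a placeholder output name):

    ffmpeg -ignore_editlist 1 -i testing.mp4 -c copy testing_fixed.mp4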

    


    I am using this command to determine the number of frames in the resulting stream, though I also get the same number with other utilities:

    


    ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 video_file_name.mp4


    


    Simple inspection of the file with ffprobe shows no obviously alarming signs to me, besides the framerate being affected by the missing frame (the target was 24):

    


    $ ffprobe -hide_banner testing.mp4
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'testing.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.45.100
  Duration: 00:00:04.13, start: 0.041016, bitrate: 724 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 100x100, 722 kb/s, 24.24 fps, 24 tbr, 12288 tbn, 48 tbc (default)
    Metadata:
      handler_name    : VideoHandler


    


    The files that I generate programmatically always have two edit lists, one of which is very short. In files both with and without a missing frame, the duration of one of the frames is 0, while all the others have the same duration (512). You can see this in the ffmpeg output for this file that I tried to put 100 frames into, though only 99 are visible despite the file containing all 100 samples.

    


    $ ffmpeg -hide_banner -y -v 9 -loglevel 99 -i testing.mp4
...
<edited to remove the class printing>
type:'edts' parent:'trak' sz: 48 100 948
type:'elst' parent:'edts' sz: 40 8 40
track[0].edit_count = 2
duration=41 time=-1 rate=1.000000
duration=4125 time=0 rate=1.000000
type:'mdia' parent:'trak' sz: 808 148 948
type:'mdhd' parent:'mdia' sz: 32 8 800
type:'hdlr' parent:'mdia' sz: 45 40 800
ctype=[0][0][0][0]
stype=vide
type:'minf' parent:'mdia' sz: 723 85 800
type:'vmhd' parent:'minf' sz: 20 8 715
type:'dinf' parent:'minf' sz: 36 28 715
type:'dref' parent:'dinf' sz: 28 8 28
Unknown dref type 0x206c7275 size 12
type:'stbl' parent:'minf' sz: 659 64 715
type:'stsd' parent:'stbl' sz: 151 8 651
size=135 4CC=avc1 codec_type=0
type:'avcC' parent:'stsd' sz: 49 8 49
type:'stts' parent:'stbl' sz: 32 159 651
track[0].stts.entries = 2
sample_count=99, sample_duration=512
sample_count=1, sample_duration=0
...
AVIndex stream 0, sample 99, offset 5a0ed, dts 50688, size 3707, distance 0, keyframe 1
Processing st: 0, edit list 0 - media time: -1, duration: 504
Processing st: 0, edit list 1 - media time: 0, duration: 50688
type:'udta' parent:'moov' sz: 98 1072 1162
...

    The last frame has zero duration:


    $ mp4trackdump -v testing.mp4
...
mp4file testing.mp4, track 1, samples 100, timescale 12288
sampleId      1, size  6943 duration      512 time        0 00:00:00.000 S
sampleId      2, size  3671 duration      512 time      512 00:00:00.041 S
...
sampleId     99, size  3687 duration      512 time    50176 00:00:04.083 S
sampleId    100, size  3707 duration        0 time    50688 00:00:04.125 S

    Non-mangled videos that I generate have a similar structure, as you can see in this video that had 99 input frames, all of which are visible in the output. Even though the sample_duration is set to zero for one of the samples in the stts box, it is not dropped from the frame count or when reading the frames back in with ffmpeg.

    $ ffmpeg -hide_banner -y -v 9 -loglevel 99 -i testing_99.mp4
...
type:'elst' parent:'edts' sz: 40 8 40
track[0].edit_count = 2
duration=41 time=-1 rate=1.000000
duration=4084 time=0 rate=1.000000
...
track[0].stts.entries = 2
sample_count=98, sample_duration=512
sample_count=1, sample_duration=0
...
AVIndex stream 0, sample 98, offset 5d599, dts 50176, size 3833, distance 0, keyframe 1
Processing st: 0, edit list 0 - media time: -1, duration: 504
Processing st: 0, edit list 1 - media time: 0, duration: 50184
...

    $ mp4trackdump -v testing_99.mp4
...
sampleId     98, size  3814 duration      512 time    49664 00:00:04.041 S
sampleId     99, size  3833 duration        0 time    50176 00:00:04.083 S

    One difference that jumps out to me is that the mangled file's second edit list ends at time 50688, which coincides with the last sample, while the non-mangled file's edit list ends at 50184, which is after the time of the last sample at 50176. As I mentioned before, whether the last frame is clipped depends on the number of frames I encode and mux into the container: 100 input frames results in 1 dropped frame, 99 results in 0, 98 in 0, 97 in 1, etc...

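    To put numbers on that, using the 12288 timescale shown above: in the 100-frame file the last sample starts at 99 × 512 = 50688, which is exactly the second edit list's duration, so a demuxer that treats the edit list as the half-open interval [0, 50688) will exclude that sample; in the 99-frame file the last sample starts at 98 × 512 = 50176, safely inside the edit list's end time of 50184.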

    Here is the code that I used to generate these files, which is a MWE script version of library functions that I am modifying. It is written in Julia, which I do not think is important here, and calls the FFMPEG library version 4.3.1. It's more or less a direct translation of the FFMPEG muxing demo, although the codec context here is created before the format context. I am presenting the code that interacts with ffmpeg first, although it relies on some helper code that I will put below.

    The helper code just makes it easier to work with nested C structs in Julia, and allows . syntax in Julia to be used in place of C's arrow (->) operator for field access of struct pointers. Libav structs such as AVFrame appear as a thin wrapper type AVFramePtr, and similarly AVStream appears as AVStreamPtr, etc. These act like single or double pointers for the purposes of function calls, depending on the function's type signature. Hopefully it will be clear enough to understand if you are familiar with working with libav in C, and I don't think looking at the helper code should be necessary if you don't want to run the code.

    # Function to transfer array to AVPicture/AVFrame
    function transfer_img_buf_to_frame!(frame, img)
        img_pointer = pointer(img)
        data_pointer = frame.data[1] # Base-1 indexing, get pointer to first data buffer in frame
        for h = 1:frame.height
            data_line_pointer = data_pointer + (h-1) * frame.linesize[1] # base-1 indexing
            img_line_pointer = img_pointer + (h-1) * frame.width
            unsafe_copyto!(data_line_pointer, img_line_pointer, frame.width) # base-1 indexing
        end
    end

    # Function to transfer AVFrame to AVCodecContext, and AVPacket to AVFormatContext
    function encode_mux!(packet, format_context, frame, codec_context; flush = false)
        if flush
            fret = avcodec_send_frame(codec_context, C_NULL)
        else
            fret = avcodec_send_frame(codec_context, frame)
        end
        if fret < 0 && !in(fret, [-Libc.EAGAIN, VIO_AVERROR_EOF])
            error("Error $fret sending a frame for encoding")
        end

        pret = Cint(0)
        while pret >= 0
            pret = avcodec_receive_packet(codec_context, packet)
            if pret == -Libc.EAGAIN || pret == VIO_AVERROR_EOF
                break
            elseif pret < 0
                error("Error $pret during encoding")
            end
            stream = format_context.streams[1] # Base-1 indexing
            av_packet_rescale_ts(packet, codec_context.time_base, stream.time_base)
            packet.stream_index = 0
            ret = av_interleaved_write_frame(format_context, packet)
            ret < 0 && error("Error muxing packet: $ret")
        end
        if !flush && fret == -Libc.EAGAIN && pret != VIO_AVERROR_EOF
            fret = avcodec_send_frame(codec_context, frame)
            if fret < 0 && fret != VIO_AVERROR_EOF
                error("Error $fret sending a frame for encoding")
            end
        end
        return pret
    end

    # Set parameters of test movie
    nframe = 100
    width, height = 100, 100
    framerate = 24
    gop = 0
    codec_name = "libx264"
    filename = "testing.mp4"

    ((width % 2 != 0) || (height % 2 != 0)) && error("Encoding error: Image dims must be a multiple of two")

    # Make test images
    imgstack = map(x->rand(UInt8,width,height),1:nframe);

    pix_fmt = AV_PIX_FMT_GRAY8
    framerate_rat = Rational(framerate)

    codec = avcodec_find_encoder_by_name(codec_name)
    codec == C_NULL && error("Codec '$codec_name' not found")

    # Allocate AVCodecContext
    codec_context_p = avcodec_alloc_context3(codec) # raw pointer
    codec_context_p == C_NULL && error("Could not allocate AVCodecContext")
    # Easier to work with pointer that acts like a c struct pointer, type defined below
    codec_context = AVCodecContextPtr(codec_context_p)

    codec_context.width = width
    codec_context.height = height
    codec_context.time_base = AVRational(1/framerate_rat)
    codec_context.framerate = AVRational(framerate_rat)
    codec_context.pix_fmt = pix_fmt
    codec_context.gop_size = gop

    ret = avcodec_open2(codec_context, codec, C_NULL)
    ret < 0 && error("Could not open codec: Return code $(ret)")

    # Allocate AVFrame and wrap it in a Julia convenience type
    frame_p = av_frame_alloc()
    frame_p == C_NULL && error("Could not allocate AVFrame")
    frame = AVFramePtr(frame_p)

    frame.format = pix_fmt
    frame.width = width
    frame.height = height

    # Allocate picture buffers for frame
    ret = av_frame_get_buffer(frame, 0)
    ret < 0 && error("Could not allocate the video frame data")

    # Allocate AVPacket and wrap it in a Julia convenience type
    packet_p = av_packet_alloc()
    packet_p == C_NULL && error("Could not allocate AVPacket")
    packet = AVPacketPtr(packet_p)

    # Allocate AVFormatContext and wrap it in a Julia convenience type
    format_context_dp = Ref(Ptr{AVFormatContext}()) # double pointer
    ret = avformat_alloc_output_context2(format_context_dp, C_NULL, C_NULL, filename)
    if ret != 0 || format_context_dp[] == C_NULL
        error("Could not allocate AVFormatContext")
    end
    format_context = AVFormatContextPtr(format_context_dp)

    # Add video stream to AVFormatContext and configure it to use the encoder made above
    stream_p = avformat_new_stream(format_context, C_NULL)
    stream_p == C_NULL && error("Could not allocate output stream")
    stream = AVStreamPtr(stream_p) # Wrap this pointer in a convenience type

    stream.time_base = codec_context.time_base
    stream.avg_frame_rate = 1 / convert(Rational, stream.time_base)
    ret = avcodec_parameters_from_context(stream.codecpar, codec_context)
    ret < 0 && error("Could not set parameters of stream")

    # Open the AVIOContext
    pb_ptr = field_ptr(format_context, :pb)
    # The following is just a call to avio_open, with a bit of extra protection
    # so the Julia garbage collector does not destroy format_context during the call
    ret = GC.@preserve format_context avio_open(pb_ptr, filename, AVIO_FLAG_WRITE)
    ret < 0 && error("Could not open file $filename for writing")

    # Write the header
    ret = avformat_write_header(format_context, C_NULL)
    ret < 0 && error("Could not write header")

    # Encode and mux each frame
    for i in 1:nframe # iterate from 1 to nframe
        img = imgstack[i] # base-1 indexing
        ret = av_frame_make_writable(frame)
        ret < 0 && error("Could not make frame writable")
        transfer_img_buf_to_frame!(frame, img)
        frame.pts = i
        encode_mux!(packet, format_context, frame, codec_context)
    end

    # Flush the encoder
    encode_mux!(packet, format_context, frame, codec_context; flush = true)

    # Write the trailer
    av_write_trailer(format_context)

    # Close the AVIOContext
    pb_ptr = field_ptr(format_context, :pb) # get pointer to format_context.pb
    ret = GC.@preserve format_context avio_closep(pb_ptr) # simply a call to avio_closep
    ret < 0 && error("Could not free AVIOContext")

    # Deallocation
    avcodec_free_context(codec_context)
    av_frame_free(frame)
    av_packet_free(packet)
    avformat_free_context(format_context)


    Below is the helper code that makes accessing pointers to nested c structs not a total pain in Julia. If you try to run the code yourself, please enter it before the logic of the code shown above. It requires VideoIO.jl, a Julia wrapper to libav.

    # Convenience type and methods to make the above code look more like C
    using Base: RefValue, fieldindex

    import Base: unsafe_convert, getproperty, setproperty!, getindex, setindex!,
        unsafe_wrap, propertynames

    # VideoIO is a Julia wrapper to libav
    #
    # Bring bindings to libav library functions into namespace
    using VideoIO: AVCodecContext, AVFrame, AVPacket, AVFormatContext, AVRational,
        AVStream, AV_PIX_FMT_GRAY8, AVIO_FLAG_WRITE, AVFMT_NOFILE,
        avformat_alloc_output_context2, avformat_free_context, avformat_new_stream,
        av_dump_format, avio_open, avformat_write_header,
        avcodec_parameters_from_context, av_frame_make_writable, avcodec_send_frame,
        avcodec_receive_packet, av_packet_rescale_ts, av_interleaved_write_frame,
        avformat_query_codec, avcodec_find_encoder_by_name, avcodec_alloc_context3,
        avcodec_open2, av_frame_alloc, av_frame_get_buffer, av_packet_alloc,
        avio_closep, av_write_trailer, avcodec_free_context, av_frame_free,
        av_packet_free

    # Submodule of VideoIO
    using VideoIO: AVCodecs

    # Need to import this function from Julia's Base to add more methods
    import Base: convert

    const VIO_AVERROR_EOF = -541478725 # AVERROR_EOF

    # Methods to convert between AVRational and Julia's Rational type, because it's
    # hard to access the AV rational macros with Julia's C interface
    convert(::Type{Rational{T}}, r::AVRational) where T = Rational{T}(r.num, r.den)
    convert(::Type{Rational}, r::AVRational) = Rational(r.num, r.den)
    convert(::Type{AVRational}, r::Rational) = AVRational(numerator(r), denominator(r))

    """
        mutable struct NestedCStruct{T}

    Wraps a pointer to a C struct, and acts like a double pointer to that memory.
    The methods below will automatically convert it to a single pointer if needed
    for a function call, and make interacting with it in Julia look (more) similar
    to interacting with it in C, except '->' in C is replaced by '.' in Julia.
    """
    mutable struct NestedCStruct{T}
        data::RefValue{Ptr{T}}
    end
    NestedCStruct{T}(a::Ptr) where T = NestedCStruct{T}(Ref(a))
    NestedCStruct(a::Ptr{T}) where T = NestedCStruct{T}(a)

    const AVCodecContextPtr = NestedCStruct{AVCodecContext}
    const AVFramePtr = NestedCStruct{AVFrame}
    const AVPacketPtr = NestedCStruct{AVPacket}
    const AVFormatContextPtr = NestedCStruct{AVFormatContext}
    const AVStreamPtr = NestedCStruct{AVStream}

    function field_ptr(::Type{S}, struct_pointer::Ptr{T}, field::Symbol,
                               index::Integer = 1) where {S,T}
        fieldpos = fieldindex(T, field)
        field_pointer = convert(Ptr{S}, struct_pointer) +
            fieldoffset(T, fieldpos) + (index - 1) * sizeof(S)
        return field_pointer
    end

    field_ptr(a::Ptr{T}, field::Symbol, args...) where T =
        field_ptr(fieldtype(T, field), a, field, args...)

    function check_ptr_valid(p::Ptr, err::Bool = true)
        valid = p != C_NULL
        err && !valid && error("Invalid pointer")
        valid
    end

    unsafe_convert(::Type{Ptr{T}}, ap::NestedCStruct{T}) where T =
        getfield(ap, :data)[]
    unsafe_convert(::Type{Ptr{Ptr{T}}}, ap::NestedCStruct{T}) where T =
        unsafe_convert(Ptr{Ptr{T}}, getfield(ap, :data))

    function check_ptr_valid(a::NestedCStruct{T}, args...) where T
        p = unsafe_convert(Ptr{T}, a)
        GC.@preserve a check_ptr_valid(p, args...)
    end

    nested_wrap(x::Ptr{T}) where T = NestedCStruct(x)
    nested_wrap(x) = x

    function getproperty(ap::NestedCStruct{T}, s::Symbol) where T
        check_ptr_valid(ap)
        p = unsafe_convert(Ptr{T}, ap)
        res = GC.@preserve ap unsafe_load(field_ptr(p, s))
        nested_wrap(res)
    end

    function setproperty!(ap::NestedCStruct{T}, s::Symbol, x) where T
        check_ptr_valid(ap)
        p = unsafe_convert(Ptr{T}, ap)
        fp = field_ptr(p, s)
        GC.@preserve ap unsafe_store!(fp, x)
    end

    function getindex(ap::NestedCStruct{T}, i::Integer) where T
        check_ptr_valid(ap)
        p = unsafe_convert(Ptr{T}, ap)
        res = GC.@preserve ap unsafe_load(p, i)
        nested_wrap(res)
    end

    function setindex!(ap::NestedCStruct{T}, i::Integer, x) where T
        check_ptr_valid(ap)
        p = unsafe_convert(Ptr{T}, ap)
        GC.@preserve ap unsafe_store!(p, x, i)
    end

    function unsafe_wrap(::Type{T}, ap::NestedCStruct{S}, i) where {S, T}
        check_ptr_valid(ap)
        p = unsafe_convert(Ptr{S}, ap)
        GC.@preserve ap unsafe_wrap(T, p, i)
    end

    function field_ptr(::Type{S}, a::NestedCStruct{T}, field::Symbol,
                               args...) where {S, T}
        check_ptr_valid(a)
        p = unsafe_convert(Ptr{T}, a)
        GC.@preserve a field_ptr(S, p, field, args...)
    end

    field_ptr(a::NestedCStruct{T}, field::Symbol, args...) where T =
        field_ptr(fieldtype(T, field), a, field, args...)

    propertynames(ap::T) where {S, T<:NestedCStruct{S}} = (fieldnames(S)...,
                                                           fieldnames(T)...)




    Edit: Some things that I have already tried

    • Explicitly setting the stream duration to be the same number as the number of frames that I add, or a few more beyond that
    • Explicitly setting the stream start time to zero, while the first frame has a PTS of 1
    • Playing around with encoder parameters, as well as gop_size, using B frames, etc.
    • Setting the private data for the mov/mp4 muxer to set the movflag negative_cts_offsets
    • Changing the framerate
    • Tried different pixel formats, such as AV_PIX_FMT_YUV420P

    Also, to be clear: while I can just remux the file into another container while ignoring the edit lists to work around this problem, I am hoping not to produce damaged mp4 files in the first place.


  • Video Frame-by-Frame Deraining with MFDNet in Python

    28 October 2024, by JimmyHu

    As mentioned in this CodeReview question, I am trying to modify the code to perform frame-by-frame rain streak removal in a video. The ffmpeg-python package is used in this code.


    import argparse
    import os
    import time

    import cv2
    import ffmpeg
    import numpy as np
    import torch
    from skimage import img_as_ubyte
    from torch.utils.data import DataLoader
    from tqdm import tqdm

    import utils
    from data_RGB import get_test_data
    from MFDNet import HPCNet as mfdnet


    def process_video_frame_by_frame(input_file, output_file, model_restoration):
        """
        Decodes a video frame by frame, processes each frame,
        and re-encodes to a new video.

        Args:
            input_file: Path to the input video file.
            output_file: Path to the output video file.
        """
        try:
            # Probe for video information
            probe = ffmpeg.probe(input_file)
            video_stream = next((stream for stream in probe['streams'] if stream['codec_type'] == 'video'), None)
            width = int(video_stream['width'])
            height = int(video_stream['height'])

            # Input
            process1 = (
                ffmpeg
                .input(input_file)
                .output('pipe:', format='rawvideo', pix_fmt='rgb24')
                .run_async(pipe_stdout=True)
            )

            # Output
            process2 = (
                ffmpeg
                .input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(width, height))
                .output(output_file, vcodec='libx264', pix_fmt='yuv420p')
                .overwrite_output()
                .run_async(pipe_stdin=True)
            )

            # Process frame (deraining processing)
            while in_bytes := process1.stdout.read(width * height * 3):
                in_frame = torch.frombuffer(in_bytes, dtype=torch.uint8).float().reshape((1, 3, width, height))
                restored = model_restoration(torch.div(in_frame, 255).to(device='cuda'))
                restored = torch.clamp(restored[0], 0, 1)
                restored = restored.cpu().detach().numpy()
                restored *= 255
                out_frame = restored
                np.reshape(out_frame, (3, width, height))

                # Encode and write the frame
                process2.stdin.write(
                    out_frame
                    .astype(np.uint8)
                    .tobytes()
                )

            # Close streams
            process1.stdout.close()
            process2.stdin.close()
            process1.wait()
            process2.wait()

        except ffmpeg.Error as e:
            print('stdout:', e.stdout.decode('utf8'))
            print('stderr:', e.stderr.decode('utf8'))

    if __name__ == '__main__':
        parser = argparse.ArgumentParser(description='Image Deraining using MPRNet')

        parser.add_argument('--weights', default='./checkpoints/checkpoints_mfd.pth', type=str,
                            help='Path to weights')
        parser.add_argument('--gpus', default='0', type=str, help='CUDA_VISIBLE_DEVICES')

        args = parser.parse_args()

        os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
        os.environ["CUDA_VISIBLE_DEVICES"] = args.gpus

        model_restoration = mfdnet()
        utils.load_checkpoint(model_restoration, args.weights)
        print("===>Testing using weights: ", args.weights)

        model_restoration.eval().cuda()

        input_video = "Input_video.mp4"
        output_video = 'output_video.mp4'

        process_video_frame_by_frame(input_video, output_video, model_restoration)


    Let's focus on the while loop part:


    The version of the code snippet above runs without error. As the next step, I tried to follow 301_Moved_Permanently's answer and use torch.save. The contents of the while loop then become the following:


    # Process frame (deraining processing)
    while in_bytes := process1.stdout.read(width * height * 3):
        in_frame = torch.frombuffer(in_bytes, dtype=torch.uint8).float().reshape((1, 3, width, height))
        restored = model_restoration(torch.div(in_frame, 255).to(device='cuda'))
        restored = torch.clamp(restored[0], 0, 1)
        out_frame = torch.mul(restored.cpu().detach(), 255).reshape(3, width, height).byte()
        torch.save(out_frame, process2.stdin)


    An out-of-memory error occurred with the following message:

    torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 676.00 MiB. GPU 0 has a total capacity of 23.99 GiB of which 0 bytes is free. Of the allocated memory 84.09 GiB is allocated by PyTorch, and 1.21 GiB is reserved by PyTorch but unallocated.

    To diagnose the error, I removed the last two lines of code:


    # Process frame (deraining processing)
    while in_bytes := process1.stdout.read(width * height * 3):
        in_frame = torch.frombuffer(in_bytes, dtype=torch.uint8).float().reshape((1, 3, width, height))
        restored = model_restoration(torch.div(in_frame, 255).to(device='cuda'))
        restored = torch.clamp(restored[0], 0, 1)


    The out-of-memory error still happened. This is weird to me. In my understanding of the working version, the line restored = restored.cpu().detach().numpy() transfers the restored data from GPU memory to main memory and then converts it to NumPy format. Why does the out-of-memory error happen when I remove this line?
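
    For what it's worth, a standard precaution for inference-only loops like this one (a hedged sketch, not a confirmed fix for this particular error) is to run the forward pass under torch.no_grad(), so autograd does not build a graph and retain intermediate activations on the GPU across iterations:

    # Hedged sketch: the same loop with autograd disabled for inference.
    with torch.no_grad():
        while in_bytes := process1.stdout.read(width * height * 3):
            in_frame = torch.frombuffer(in_bytes, dtype=torch.uint8).float().reshape((1, 3, width, height))
            restored = model_restoration(torch.div(in_frame, 255).to(device='cuda'))
            restored = torch.clamp(restored[0], 0, 1)
            out_frame = torch.mul(restored.cpu(), 255).reshape(3, width, height).byte()
            # process2 expects raw rgb24 bytes on the pipe; torch.save would
            # write a pickled tensor that the rawvideo demuxer cannot read.
            process2.stdin.write(out_frame.numpy().tobytes())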


    The hardware and software specifications I used are as follows:

    • CPU: 12th Gen Intel(R) Core(TM) i9-12900K 3.20 GHz
    • RAM: 128 GB (128 GB usable)
    • Graphics card: NVIDIA GeForce RTX 4090
    • OS: Windows 11 Pro 22H2, OS build 22621.4317
    • PyTorch version:

      > python -c "import torch; print(torch.__version__)"
      2.5.0+cu124