Media (0)

No media matching your criteria is available on the site.

Other articles (77)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • User profiles

    12 April 2011

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Images

    15 May 2013

On other sites (7760)

  • FFmpeg 5 C API codec: how many times will EOF be returned by the receiving end?

    3 February 2024, by Guanyuming He

    Edit: I may have written my questions unclearly at first, so I rewrote them. Sorry if you found my questions confusing.

    I'm new to FFmpeg API programming (I'm using version 5.1) and am learning from the documentation and official examples.

    On the documentation page about the send/receive encoding and decoding API overview, the end-of-stream situation is discussed briefly:

    End of stream situations. These require "flushing" (aka draining) the codec, as the codec might buffer multiple frames or packets internally for performance or out of necessity (consider B-frames). This is handled as follows:

    1. Instead of valid input, send NULL to the avcodec_send_packet() (decoding) or avcodec_send_frame() (encoding) functions. This will enter draining mode.
    2. Call avcodec_receive_frame() (decoding) or avcodec_receive_packet() (encoding) in a loop until AVERROR_EOF is returned. The functions will not return AVERROR(EAGAIN), unless you forgot to enter draining mode.
    3. Before decoding can be resumed again, the codec has to be reset with avcodec_flush_buffers().

    As I understand it, when I get AVERROR_EOF I have reached a special point where I need to drain buffered data from the codec and finally reset it with avcodec_flush_buffers(). Without doing so, I cannot continue decoding/encoding.
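
    For concreteness, here is a minimal sketch of how I understand that drain sequence in C (decoding case). dec_ctx, frame, and handle_frame() are placeholder names of my own, not code from any official example:

    #include <libavcodec/avcodec.h>

    // Placeholder: consume one decoded frame (write it out, display it, ...).
    static void handle_frame(const AVFrame *frame);

    // Drain a decoder at end of stream, following the documented steps.
    static int drain_decoder(AVCodecContext *dec_ctx, AVFrame *frame)
    {
        // 1. Enter draining mode by sending NULL instead of a valid packet.
        int ret = avcodec_send_packet(dec_ctx, NULL);
        if (ret < 0)
            return ret;

        // 2. Receive in a loop until AVERROR_EOF; drained frames are
        //    ordinary, valid frames.
        for (;;) {
            ret = avcodec_receive_frame(dec_ctx, frame);
            if (ret == AVERROR_EOF)
                break;          // fully drained
            if (ret < 0)
                return ret;     // a real error
            handle_frame(frame);
        }

        // 3. Reset the codec only if it will be reused for new input.
        avcodec_flush_buffers(dec_ctx);
        return 0;
    }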

    Is my understanding correct?

    If so, then I have some questions:

    1. How many times at most can EOF be returned by the receiving end during one complete process (for example, decoding)?
    2. If the answer to the first question is infinity: when I receive an EOF from the receiving end after I have already finished sending data (e.g. after av_read_frame() returns EOF), how can I tell whether it is really finished? There is no return code dedicated solely to indicating that receiving is finished.
    3. Should I treat the data returned from the receive_... functions during draining as valid?

    I might have found answers to these in the official examples, but I'm not sure whether they are universally true. I noticed that in some official examples, such as transcode_aac.c, draining is only done at the first EOF, and once the second EOF is received it is assumed that nothing is really left. Any data received during draining is also written to the final output.

    Am I correct in interpreting the example? If so, can I say that the answer to question 1 is once, and the answer to question 3 is yes?

    I appreciate your response and time in advance. :)

  • FFmpeg.swr_convert: audio to raw 16-bit PCM, to be used with XNA SoundEffect. Audio cuts out when I convert

    21 March 2019, by Robert Russell

    I want to resample mkv (vp8/ogg) and also raw 4-bit ADPCM to a raw 16-bit PCM byte[] that can be loaded into SoundEffect from the XNA library, so I can play the audio while other code displays the frames (the video side is working).
    I can read a 16-bit wav file and play it, but when I try to resample something it doesn't play fully. One file is 3 min 15 s long, yet I only get 13 s 739 ms before it stops playing. I have been learning to do this by finding code samples in C++ and adapting them to work in C# with ffmpeg.autogen.

    Below is my best attempt at resampling:

    int nb_samples = Frame->nb_samples;
    int output_nb_samples = nb_samples;
    int nb_channels = ffmpeg.av_get_channel_layout_nb_channels(ffmpeg.AV_CH_LAYOUT_STEREO);
    int bytes_per_sample = ffmpeg.av_get_bytes_per_sample(AVSampleFormat.AV_SAMPLE_FMT_S16) * nb_channels;
    int bufsize = ffmpeg.av_samples_get_buffer_size(null, nb_channels, nb_samples,
                                                    AVSampleFormat.AV_SAMPLE_FMT_S16, 1);

    byte*[] b = Frame->data;
    fixed (byte** input = b)
    {
        byte* output = null;
        ffmpeg.av_samples_alloc(&output, null,
            nb_channels,
            nb_samples,
            (AVSampleFormat)Frame->format, 0);

        // Buffer input
        Ret = ffmpeg.swr_convert(Swr, &output, output_nb_samples / 2, input, nb_samples);
        CheckRet();
        WritetoMs(output, 0, Ret * bytes_per_sample);
        output_nb_samples -= Ret;

        // Drain buffer
        while ((Ret = ffmpeg.swr_convert(Swr, &output, output_nb_samples, null, 0)) > 0)
        {
            CheckRet();
            WritetoMs(output, 0, Ret * bytes_per_sample);
            output_nb_samples -= Ret;
        }
    }

    I changed all of that to the following, but it cuts off even sooner:

    Channels = ffmpeg.av_get_channel_layout_nb_channels(OutFrame->channel_layout);
    int nb_channels = ffmpeg.av_get_channel_layout_nb_channels(ffmpeg.AV_CH_LAYOUT_STEREO);
    int bytes_per_sample = ffmpeg.av_get_bytes_per_sample(AVSampleFormat.AV_SAMPLE_FMT_S16) * nb_channels;

    if ((Ret = ffmpeg.swr_convert_frame(Swr, OutFrame, Frame)) >= 0)
        WritetoMs(*OutFrame->extended_data, 0, OutFrame->nb_samples * bytes_per_sample);
    CheckRet();

    Both versions use a function to set up Swr; it runs once, after the first frame is decoded:

    private void PrepareResampler()
    {
        ffmpeg.av_frame_copy_props(OutFrame, Frame);
        OutFrame->channel_layout = ffmpeg.AV_CH_LAYOUT_STEREO;
        OutFrame->format = (int)AVSampleFormat.AV_SAMPLE_FMT_S16;
        OutFrame->sample_rate = Frame->sample_rate;
        OutFrame->channels = 2;
        Swr = ffmpeg.swr_alloc();
        if (Swr == null)
            throw new Exception("SWR = Null");
        Ret = ffmpeg.swr_config_frame(Swr, OutFrame, Frame);
        CheckRet();
        Ret = ffmpeg.swr_init(Swr);
        CheckRet();
        Ret = ffmpeg.swr_is_initialized(Swr);
        CheckRet();
    }

    This is where I take the output and put it into the SoundEffect:

    private void ReadAll()
    {
        using (Ms = new MemoryStream())
        {
            while (true)
            {
                Ret = ffmpeg.av_read_frame(Format, Packet);
                if (Ret == ffmpeg.AVERROR_EOF)
                    break;
                CheckRet();
                Decode();
            }
            if (Ms.Length > 0)
            {
                se = new SoundEffect(Ms.ToArray(), 0, (int)Ms.Length, OutFrame->sample_rate, (AudioChannels)Channels, 0, 0);
                //se.Duration; Stream->duration;

                see = se.CreateInstance();
                see.Play();
            }
        }
    }
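
    For reference, this is a sketch, in plain C rather than my actual code, of the libswresample pattern I believe I am porting; write_pcm() is a placeholder sink of my own:

    #include <libswresample/swresample.h>
    #include <libavutil/frame.h>
    #include <libavutil/mem.h>
    #include <libavutil/samplefmt.h>
    #include <stdint.h>

    // Placeholder: append interleaved S16 bytes to the output stream.
    static void write_pcm(const uint8_t *buf, int size);

    // Convert one decoded frame to interleaved stereo S16.
    static int convert_frame(SwrContext *swr, const AVFrame *in)
    {
        uint8_t *out = NULL;
        // Upper bound: samples buffered in swr plus this frame's samples.
        int max_out = swr_get_out_samples(swr, in->nb_samples);
        int ret = av_samples_alloc(&out, NULL, 2, max_out,
                                   AV_SAMPLE_FMT_S16, 0);
        if (ret < 0)
            return ret;
        int got = swr_convert(swr, &out, max_out,
                              (const uint8_t **)in->extended_data,
                              in->nb_samples);
        if (got > 0)
            write_pcm(out, got * 2 * (int)sizeof(int16_t)); // stereo S16
        av_freep(&out);
        return got;
    }

    // After the last frame, flush whatever the resampler still buffers.
    static int drain_resampler(SwrContext *swr)
    {
        uint8_t *out = NULL;
        int max_out = swr_get_out_samples(swr, 0);
        if (max_out <= 0)
            return 0;
        if (av_samples_alloc(&out, NULL, 2, max_out, AV_SAMPLE_FMT_S16, 0) < 0)
            return -1;
        int got = swr_convert(swr, &out, max_out, NULL, 0);
        if (got > 0)
            write_pcm(out, got * 2 * (int)sizeof(int16_t));
        av_freep(&out);
        return got;
    }

    Note that this sketch allocates the output buffer in the output format (S16) and recomputes its capacity on every call, whereas my code above allocates in the input format and counts output_nb_samples down.
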
  • How to apply 'simple' opacity to combined (layered) mp4s in FFmpeg

    27 May 2021, by Cam

    I am not getting the final image results I need when layering together multiple mp4s of the same length and format into a single output MP4. I am using ffmpeg to create a pseudo 'motion blur' effect on animation, and need to layer mp4s together with identical opacities to produce the final video.

    I am using a base 'black' MP4 as the first layer for a background, and then adding a series of source mp4s with equal opacity over the top in each pass. Here I am showing a Photoshop mockup using its 'normal' blending mode, which is exactly the blending effect I am trying to replicate with ffmpeg. I understand that the final composite is less "bright", but that's fine (unless you have any ideas).
    (image: photoshop mockup)

    Instead of looking like the result above, I am getting output where the colors are all pink, garbled, super dark, or hugely overbright, etc., depending on which blend mode I try.

    Here are the commands I am using:

    To create the original (uncompressed?) 'black' MP4 from a sequence of black PNGs:

    ffmpeg -start_number 0 -r 24 -f image2 -s 1920x1080 -i black_seq.%04d.png -vcodec libx264 -crf 0 -pix_fmt yuv420p   black_seq.mp4 -y

    I then take that "black_seq.mp4" and blend a set of n source mp4s over the top with an opacity value. This runs in a loop: the output.mp4 of each pass becomes the input.mp4 of the next pass until it completes. In this example there are 10 source mp4s in total, so each pass assigns an opacity of 0.1; a single pass is shown below. The source mp4s are all very similar in appearance and color, essentially the same animation offset in time by fractions of a single frame, and were generated from color PNGs using the same command that produced the black layer (above).

    ffmpeg -i input.mp4 -i n_layer.mp4 -vcodec libx264 -crf 0 -pix_fmt yuv420p -filter_complex "blend=all_mode='overlay':all_opacity=0.1" output.mp4 -y
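
    For what it's worth, if I write this loop out as a plain source-over composite at opacity $a$ (matching the Photoshop 'normal' mockup; ffmpeg's blend modes may behave differently), each pass computes

    \mathrm{out}_k = (1-a)\,\mathrm{out}_{k-1} + a\,L_k, \qquad a = 0.1,

    so after ten passes over the black base $B$:

    \mathrm{out}_{10} = (1-a)^{10} B + a \sum_{k=1}^{10} (1-a)^{10-k} L_k.

    The layers therefore end up with unequal weights, from $a(1-a)^9 \approx 0.039$ for the first up to $a = 0.1$ for the last, rather than the flat 0.1 each that an equal-weight average would give.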

    Then, finally, I add some compression to the result to produce the final "blur.mp4":

    ffmpeg -i "output.mp4" -vcodec libx264 -crf 25 -pix_fmt yuv420p "blur.mp4" -y

    And yes, this is certainly a highly inefficient approach, but I am learning. The main issue I am trying to solve: despite the final blur.mp4 being less "bright", its colors do not match the original animation; instead, the animation looks as if it has been hue-shifted somehow.

    This image shows a cropped output for comparison (the processed blur is set to zero for clarity):
    (image: input and output example)

    I would love some insight.