Advanced search

Media (91)

Other articles (72)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-select fields. Compare the two images below.
    To use it, enable the Chosen plugin (Site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • The plugin: Managing mutualisation

    2 March 2010

    The mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its goal is to provide a pure SPIP solution to replace the old one.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customise the central mes_options.php file as needed. As an example, here is the one from the mediaspip.net platform:
    <?php (...)

  • Managing the farm

    2 March 2010

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to regulate the needs of the different channels.
    To begin with, it relies on the "Gestion de mutualisation" plugin.

On other sites (7899)

  • Convert individual pixel values from RGB to YUV420 and save the frame - C++

    24 March 2014, by learner

    I have been working on RGB->YUV420 conversion for some time using the FFmpeg library. I already tried the sws_scale functionality, but it's not working well. Now I have decided to convert each pixel individually, using colorspace conversion formulae. The following code gets me a few frames and lets me access the individual R, G, B values of each pixel:

    // Read frames and save first five frames to disk
    i = 0;
    while((av_read_frame(pFormatCtx, &packet) >= 0) && (i < 5))
    {
        // Is this a packet from the video stream?
        if(packet.stream_index == videoStreamIdx)
        {
            // Decode video frame
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

            // Did we get a video frame?
            if(frameFinished)
            {
                i++;
                sws_scale(img_convert_ctx, (const uint8_t * const *)pFrame->data,
                          pFrame->linesize, 0, pCodecCtx->height,
                          pFrameRGB->data, pFrameRGB->linesize);

                int x, y, R, G, B;
                for(y = 0; y < pCodecCtx->height; y++)
                {
                    // Rows may be padded, so restart from the row pointer
                    uint8_t *p = pFrameRGB->data[0] + y * pFrameRGB->linesize[0];
                    for(x = 0; x < pCodecCtx->width; x++)
                    {
                        R = *p++;
                        G = *p++;
                        B = *p++;
                        printf(" %d-%d-%d ", R, G, B);
                    }
                }

                SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, i);
            }
        }

        // Free the packet that was allocated by av_read_frame
        av_free_packet(&packet);
    }

    I read online that to convert RGB->YUV420 (or vice versa), one should first convert to the YUV444 format, i.e. RGB->YUV444->YUV420. How do I implement this in C++?

    Also, here is the SaveFrame() function used above. I guess this will also have to change a little, since YUV420 stores data differently. How do I take care of that?

    void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame)
    {
       FILE *pFile;
       char szFilename[32];
       int  y;

       // Open file
       sprintf(szFilename, "frame%d.ppm", iFrame);
       pFile=fopen(szFilename, "wb");
       if(pFile==NULL)
           return;

       // Write header
       fprintf(pFile, "P6\n%d %d\n255\n", width, height);

       // Write pixel data
       for(y=0; y<height; y++)
           fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

       // Close file
       fclose(pFile);
    }

    Can somebody please suggest a solution? Many thanks!
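    A common recipe (offered here as a suggestion, not taken from the thread): apply the BT.601 full-range formulas per pixel to get YUV444, then average each 2×2 block of U and V to produce the subsampled chroma planes of YUV420 (I420). The sketch below is in Python for brevity; the arithmetic ports line-for-line to C++. To save the result, write the full Y plane, then U, then V sequentially as raw bytes instead of the PPM rows in SaveFrame().

    ```python
    def rgb_to_yuv420(rgb, width, height):
        """Convert packed RGB24 bytes to planar YUV420 (I420).

        Uses the BT.601 full-range formulas; width and height are assumed
        to be even, as required for 2x2 chroma subsampling.
        """
        clamp = lambda v: min(255, max(0, int(round(v))))

        # Step 1: RGB -> YUV444 (one Y, U, V sample per pixel)
        y_plane = [0] * (width * height)
        u_full = [0.0] * (width * height)
        v_full = [0.0] * (width * height)
        for i in range(width * height):
            r, g, b = rgb[3 * i], rgb[3 * i + 1], rgb[3 * i + 2]
            y_plane[i] = clamp(0.299 * r + 0.587 * g + 0.114 * b)
            u_full[i] = -0.169 * r - 0.331 * g + 0.5 * b + 128
            v_full[i] = 0.5 * r - 0.419 * g - 0.081 * b + 128

        # Step 2: YUV444 -> YUV420 by averaging each 2x2 block of chroma
        cw, ch = width // 2, height // 2
        u_plane = [0] * (cw * ch)
        v_plane = [0] * (cw * ch)
        for cy in range(ch):
            for cx in range(cw):
                idx = [(2 * cy + dy) * width + (2 * cx + dx)
                       for dy in (0, 1) for dx in (0, 1)]
                u_plane[cy * cw + cx] = clamp(sum(u_full[i] for i in idx) / 4)
                v_plane[cy * cw + cx] = clamp(sum(v_full[i] for i in idx) / 4)
        return y_plane, u_plane, v_plane
    ```

    A mid-grey pixel (128,128,128) maps to Y=128 with neutral chroma (U=V=128), which is a quick sanity check for the coefficients.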

  • How to set individual image display durations with ffmpeg-python

    20 September 2022, by tompi

    I am using ffmpeg-python 0.2.0 with Python 3.10.0. Displaying videos in VLC 3.0.17.4.

    I am making an animation from a set of images. Each image is displayed for a different amount of time.

    I have the basics in place with inputting images and concatenating streams, but I can't figure out how to correctly set frame duration.

    Consider the following example:

    stream1 = ffmpeg.input(image1_file)
    stream2 = ffmpeg.input(image2_file)
    combined_streams = ffmpeg.concat(stream1, stream2)
    output_stream = ffmpeg.output(combined_streams, output_file)
    ffmpeg.run(output_stream)

    With this I get a video with duration of a split second that barely shows an image before ending. Which is to be expected with two individual frames.

    For this example, my goal is to have a video of 5 seconds total duration, showing the image in stream1 for 2 seconds and the image in stream2 for 3 seconds.

    Attempt 1: Setting t for inputs

    stream1 = ffmpeg.input(image1_file, t=2)
    stream2 = ffmpeg.input(image2_file, t=3)
    combined_streams = ffmpeg.concat(stream1, stream2)
    output_stream = ffmpeg.output(combined_streams, output_file)
    ffmpeg.run(output_stream)

    With this, I get a video with the duration of a split second and no image displayed.

    Attempt 2: Setting frames for inputs

    stream1 = ffmpeg.input(image1_file, frames=48)
    stream2 = ffmpeg.input(image2_file, frames=72)
    combined_streams = ffmpeg.concat(stream1, stream2)
    output_stream = ffmpeg.output(combined_streams, output_file, r=24)
    ffmpeg.run(output_stream)

    In this case, I get the following error from ffmpeg:

    Option frames (set the number of frames to output) cannot be applied to input url ########## -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to.

    I can't tell if this is a bug in ffmpeg-python or if I did it wrong.

    Attempt 3: Setting framerate for inputs

    stream1 = ffmpeg.input(image1_file, framerate=1/2)
    stream2 = ffmpeg.input(image2_file, framerate=1/3)
    combined_streams = ffmpeg.concat(stream1, stream2)
    output_stream = ffmpeg.output(combined_streams, output_file)
    ffmpeg.run(output_stream)

    With this, I get a video with the duration of a split second and no image displayed. However, when I set both framerate values to 1/2, I get an animation of 4 seconds duration that displays the first image for two seconds and the second image for two seconds. This is the closest I got to a functional solution, but it is not quite there.

    I am aware that multiple images can be globbed by input, but that would apply the same duration setting to all images, and my images each have different durations, so I am looking for a different solution.

    Any ideas on how to get ffmpeg-python to do this are much appreciated.
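    One workaround worth trying (my suggestion, not from the thread): skip per-input options entirely and use ffmpeg's concat demuxer, which supports an explicit duration directive per image. The helper below only builds the list-file text; the file names are placeholders.

    ```python
    def build_concat_list(entries):
        """Build the text of an ffmpeg concat-demuxer list file.

        entries is a list of (filename, seconds) pairs. The demuxer honours
        a 'duration' directive per file; the last file is listed once more
        because ffmpeg uses the final entry to close the timeline.
        """
        lines = ["ffconcat version 1.0"]
        for name, seconds in entries:
            lines.append(f"file '{name}'")
            lines.append(f"duration {seconds}")
        if entries:
            lines.append(f"file '{entries[-1][0]}'")
        return "\n".join(lines) + "\n"
    ```

    Writing build_concat_list([("image1.png", 2), ("image2.png", 3)]) to, say, images.txt and then running something like ffmpeg.input('images.txt', format='concat', safe=0).output(output_file, r=24).run() should produce the desired 2 s + 3 s timeline (untested sketch; format and safe are standard concat-demuxer options).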

  • fftools/ffmpeg_enc: apply -top to individual encoded frames

    14 September 2023, by Anton Khirnov

    Fixes #9339.

    • [DH] fftools/ffmpeg_enc.c
    • [DH] tests/ref/fate/concat-demuxer-extended-lavf-mxf_d10
    • [DH] tests/ref/fate/concat-demuxer-simple1-lavf-mxf_d10
    • [DH] tests/ref/lavf/mxf_d10