
Other articles (34)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information about the file’s audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (5828)

  • What could serve as a nullptr in a cython wrapper for a C++ uint8_t multidimensional array?

    20 July 2020, by yose93

    I'm stuck on one problem. I have to fill a C++ structure with yuv420p frame data in my cython wrapper:

#define FR_PLANE_COUNT_MAX 8

typedef struct fr_frame_s {
    int format = 0;

    int width  = 0;
    int height = 0;

    uint8_t*  data[FR_PLANE_COUNT_MAX];
    int       stride[FR_PLANE_COUNT_MAX];
    int       size[FR_PLANE_COUNT_MAX];

    long long time = 0;
} fr_frame_t;

    Where data is just an array of length 8: the first three elements hold the y, u and v plane byte arrays, and the rest are just nullptr values. The next chunk of code is what I need to reimplement in pure Python to fill the structure with the corresponding data:

bool VideoCapture::ConvertFrame(const AVFrame *src, fr_frame_t &dst)
{
    if (src != NULL)
    {
        for (size_t i = 0; i < FR_PLANE_COUNT_MAX; ++i)
        {
            if (src->data[i] != nullptr)
            {
                const int line = src->linesize[i];
                // Plane 0 (Y) is full height; the U and V planes are half height in yuv420p.
                const int size = i == 0 ? line * src->height : int(line * (src->height / 2.0));
                dst.data[i] = (uint8_t*)malloc(size);
                memcpy(dst.data[i], src->data[i], size);

                dst.size[i] = size;
                dst.stride[i] = src->linesize[i];
            } else {
                dst.data[i] = nullptr;
                dst.size[i] = 0;
                dst.stride[i] = 0;
            }
        }
        return true;
    }
    return false;
}

    Here all the slots after the y, u and v planes must simply be nullptr. So, what can I use as a nullptr equivalent when filling an np.ndarray after y, u and v?
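    One way to mirror this layout on the Python side is to keep a fixed-length list and use None where the C++ side stores nullptr, since a NumPy array has no null element. A minimal sketch (`split_planes` is a hypothetical helper, not part of any library; note also that cv2.COLOR_BGR2YUV produces full-resolution U and V planes, while cv2.COLOR_BGR2YUV_I420 would give genuinely subsampled 4:2:0 data):

    ```python
    import numpy as np

    FR_PLANE_COUNT_MAX = 8

    def split_planes(yuv_frame: np.ndarray):
        """Mirror fr_frame_t: fixed-length plane/stride/size lists,
        with None standing in for the C++ nullptr slots."""
        height, width, _ = yuv_frame.shape

        data = [None] * FR_PLANE_COUNT_MAX   # None marks an unused plane slot
        stride = [0] * FR_PLANE_COUNT_MAX
        size = [0] * FR_PLANE_COUNT_MAX

        for i in range(3):                   # the y, u, v channels
            plane = np.ascontiguousarray(yuv_frame[:, :, i])
            data[i] = plane.ravel()          # flat uint8 plane bytes
            stride[i] = width                # bytes per row for a uint8 plane
            size[i] = plane.size
        return data, stride, size
    ```

    The cython layer can then translate a None entry into an actual NULL pointer when it populates the struct.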

    


    And my Python code:

    def _get_read_frames(
        self,
        video: pathlib.PosixPath,
    ) -> Generator[Tuple[Union[teyefr.MetadataImage, float]], None, None]:
        """Video frames reader."""
        self._cap = cv2.VideoCapture(str(video))
        self._total_frames = self._cap.get(cv2.CAP_PROP_FRAME_COUNT)
        self._fps = math.ceil(self._cap.get(cv2.CAP_PROP_FPS))
        self._duration = self._total_frames / self._fps

        while self._cap.isOpened():
            _, frame = self._cap.read()

            if frame is None:
                break

            yuv420_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)
            self._process_yuv420_frame(yuv420_frame)

        self._cap.release()

    def _process_yuv420_frame(self, yuv420_frame: np.ndarray) -> None:
        """To fill `self._fr_frames` list.

        Splits the already converted frame into the three y, u, v channels
        and takes all the data required to fill `FRFrame` and push it.
        """
        data = np.array([])
        stride = np.array([])
        size = np.array([])
        frame_data = {}.fromkeys(FRFrame.__dataclass_fields__.keys())

        channels = (y, u, v) = cv2.split(yuv420_frame)

        for i in range(FR_PLANE_COUNT_MAX):
            if i < len(channels):
                # np.concatenate takes a sequence and returns a new array;
                # the result has to be assigned back.
                data = np.concatenate((data, channels[i].ravel()))
            else:
                data = np.concatenate((data, np.array([])))

        frame_data['height'], frame_data['width'], _ = yuv420_frame.shape
    Please advise.

    


  • Creating GIF from QImages with ffmpeg

    21 March 2020, by Sierra

    I would like to generate a GIF from QImages, using ffmpeg, all of that programmatically (C++). I'm working with Qt 5.6 and the latest build of ffmpeg (git-0a9e781, 2016-06-10).

    I'm already able to convert these QImages to .mp4, and it works. I tried to use the same principle for the GIF, changing the pixel format and codec. The GIF is generated from two pictures (1 second each), at 15 FPS.

    ## INITIALIZATION
    #####################################################################

    // Filepath : "C:/Users/.../qt_temp.Jv7868.gif"  
    // Allocating an AVFormatContext for an output format...
    avformat_alloc_output_context2(formatContext, NULL, NULL, filepath);

    ...

    // Adding the video streams using the default format codecs and initializing the codecs.
    stream = avformat_new_stream(formatContext, *codec);

    AVCodecContext * codecContext = avcodec_alloc_context3(*codec);

    codecContext->codec_id       = codecId;
    codecContext->bit_rate       = 400000;
    ...
    codecContext->pix_fmt        = AV_PIX_FMT_BGR8;

    ...

    // Opening the codec...
    avcodec_open2(codecContext, codec, NULL);

    ...

    frame = allocPicture(codecContext->width, codecContext->height, codecContext->pix_fmt);
    tmpFrame = allocPicture(codecContext->width, codecContext->height, AV_PIX_FMT_RGBA);

    ...

    avformat_write_header(formatContext, NULL);

    ## ADDING A NEW FRAME
    #####################################################################

    // The QImage is received as a parameter: newFrame(const QImage & image)
    const qint32 width  = image.width();
    const qint32 height = image.height();

    // Converting QImage into AVFrame
    for (qint32 y = 0; y < height; y++) {
       const uint8_t * scanline = image.scanLine(y);

       for (qint32 x = 0; x < width * 4; x++) {
           tmpFrame->data[0][y * tmpFrame->linesize[0] + x] = scanline[x];
       }
    }

    ...

    // Scaling...
    if (codecContext->pix_fmt != AV_PIX_FMT_BGRA) {
       if (!swsCtx) {
           swsCtx = sws_getContext(codecContext->width, codecContext->height,
                                   AV_PIX_FMT_BGRA,
                                   codecContext->width, codecContext->height,
                                   codecContext->pix_fmt,
                                   SWS_BICUBIC, NULL, NULL, NULL);
       }

       sws_scale(swsCtx,
                 (const uint8_t * const *)tmpFrame->data,
                 tmpFrame->linesize,
                 0,
                 codecContext->height,
                 frame->data,
                 frame->linesize);
    }
    frame->pts = nextPts++;

    ...

    int gotPacket = 0;
    AVPacket packet = {0};

    av_init_packet(&packet);
    avcodec_encode_video2(codecContext, &packet, frame, &gotPacket);

    if (gotPacket) {
       av_packet_rescale_ts(&packet, codecContext->time_base, stream->time_base);
       packet.stream_index = stream->index;

       av_interleaved_write_frame(formatContext, &packet);
    }

    But when I try to modify the video codec and pixel format to match the GIF specifications, I run into some issues.
    I tried several codecs, such as AV_CODEC_ID_GIF and AV_CODEC_ID_RAWVIDEO, but none of them seems to work. During the initialization phase, avcodec_open2() always returns errors such as:

    Specified pixel format rgb24 is invalid or not supported
    Could not open video codec:  gif

    EDIT 17/06/2016

    Digging a little bit more, avcodec_open2() returns -22 :

    #define EINVAL          22      /* Invalid argument */

    EDIT 22/06/2016

    Here are the flags used to compile ffmpeg :

    "FFmpeg/Libav configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmfx --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib"

    Did I miss a crucial one for GIF ?

    EDIT 27/06/2016

    Thanks to Gwen, I have a first output: I set the context->pix_fmt to AV_PIX_FMT_BGR8. However, I'm still facing some issues with the generated GIF: it doesn't play, and encoding appears to fail.

    GIF generated from the command line with ffmpeg (left) . . . GIF generated programmatically (right)

    It looks like some options are not defined... it may also be a wrong conversion between QImage and AVFrame? I updated the code above. It represents a lot of code, so I tried to keep it short. Don't hesitate to ask for more details.

    End of EDIT

    I’m not really familiar with ffmpeg, any kind of help would be highly appreciated. Thank you.
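    For reference, the usual command-line route for producing a clean GIF is ffmpeg's two-pass palette method (the palettegen and paletteuse filters, available in 2015-era and later builds). A small Python sketch that only builds those two commands; the file names and frame rate are illustrative:

    ```python
    import subprocess

    def build_gif_commands(src: str, gif: str, palette: str = "palette.png", fps: int = 15):
        """Two-pass palette method: pass 1 computes an optimal 256-colour
        palette from the source, pass 2 maps the frames through it."""
        pass1 = ["ffmpeg", "-y", "-i", src,
                 "-vf", f"fps={fps},palettegen", palette]
        pass2 = ["ffmpeg", "-y", "-i", src, "-i", palette,
                 "-lavfi", f"fps={fps} [x]; [x][1:v] paletteuse", gif]
        return pass1, pass2

    def make_gif(src: str, gif: str) -> None:
        # Requires an ffmpeg binary on PATH.
        for cmd in build_gif_commands(src, gif):
            subprocess.run(cmd, check=True)
    ```

    Comparing the programmatic output against a reference GIF produced this way helps isolate whether the problem lies in the encoder configuration or in the QImage-to-AVFrame conversion.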

  • How to get FFMPEG to use more GPU when encoding

    24 March 2023, by Entropy

    So the situation is as follows:

    I'm receiving 20-30 uncompressed images per second. The format is either PNG or bitmap. Each individual photo is between 40 and 50 MB (they all have the same size, since they're uncompressed).

    I want to encode them into a lossless H.265 video and stream it to an HTTP server using FFMPEG.
The output video is 1920x1080, so there is some downsampling.
Compression is allowed, but nothing may be lost other than the downsampling.

    I'm still in the testing phase. I have 500 sample images, and I'm trying to encode them as efficiently as possible.
I'm using commands such as:

    


    ffmpeg -hwaccel cuvid -f image2 -framerate 30 -i "0(%01d).png" \
-pix_fmt p010le -c:v hevc_nvenc -preset lossless -rc vbr_hq \
-b:v 6M -maxrate:v 10M -vf scale=1920:1080 -c:a aac -b:a 240k result.mp4


    


    I have a powerful modern Quadro GPU, a 6-core Intel CPU, and an NVMe hard drive.

    GPU usage while encoding is exactly 10%; CPU usage is around 30-40%.

    How can I get GPU usage to 80%? The machine the code will run on will have at least a Quadro 4000 (maybe stronger), and I want to use it to the fullest.
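    Worth noting: NVENC is a dedicated encoder block separate from the CUDA/3D cores, so the overall "GPU usage" figure can read low even while the encoder engine is saturated; the dedicated encoder counter (e.g. the Video Encode graph in nvidia-smi or Task Manager) is the one to watch. Beyond that, one common lever is keeping the scaling on the GPU as well, so frames are uploaded once and never round-trip through system memory. A sketch that builds such a command (the flags assume an ffmpeg build with CUDA filter and NVENC support; PNG decoding itself still happens on the CPU):

    ```python
    def build_nvenc_command(pattern: str, out: str, fps: int = 30):
        """Build an ffmpeg command that uploads frames to the GPU once
        (hwupload_cuda) and scales there (scale_cuda) before NVENC encoding."""
        return [
            "ffmpeg",
            "-framerate", str(fps),          # input rate: must precede -i
            "-f", "image2", "-i", pattern,
            "-vf", "hwupload_cuda,scale_cuda=1920:1080",
            "-c:v", "hevc_nvenc", "-preset", "lossless",
            out,
        ]
    ```

    If CPU-side PNG decoding is the bottleneck, extra GPU headroom won't raise throughput; in that case feeding the encoder from several parallel decode processes is the usual workaround.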