
Media (91)

Other articles (83)

  • Improving the basic version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below to compare.
    To do so, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all types.
    It creates "médias", namely: a "média" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a "média" article;

  • The plugin: mutualisation management

    2 March 2010

    The mutualisation management plugin makes it possible to manage the various mediaspip channels from a master site. Its aim is to provide a pure SPIP solution to replace this older solution.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customise the central mes_options.php file as you wish. As an example, here is the one used on the mediaspip.net platform:
    <?php (...)

On other sites (11065)

  • Processing video frame by frame in AWS Lambda with Node.js and FFmpeg [closed]

    29 December 2023, by Aviato

    I am working on a project where I need to process video frames one at a time in an AWS Lambda function using Node.js. My goal is to avoid storing all frames in memory or on the filesystem due to resource constraints. I plan to use the fluent-ffmpeg library or ffmpeg invoked from a child process for the video processing.

    In the past, I used OpenCV to process videos and frames without writing the frames to disk or keeping all of them in memory at once. Now that I am using Node.js, it is a little harder to set up the same thing with ffmpeg.

    Here is a small snippet of what I did with OpenCV:

    import cv2

    video_file = "input.mp4"  # path to the source video
    cap = cv2.VideoCapture(video_file)

    # Build the writer from the source stream's properties
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    out = cv2.VideoWriter("output.mp4", fourcc, fps, (width, height))

    def generate_frame():
        while cap.isOpened():
            code, frame = cap.read()
            if code:
                yield frame
            else:
                print("completed")
                break

    for i, frame in enumerate(generate_frame()):
        # Process the frame directly here, then write it to the output
        out.write(frame)

    Additionally, I intend to leverage image processing libraries like Sharp and the Canvas API to edit individual frames before assembling the final video. I am looking for help in handling video frames efficiently within the constraints of AWS Lambda.
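    One pattern I am considering is to spawn ffmpeg as a child process, have it decode to rawvideo on stdout, and slice that byte stream into fixed-size frames. The following is only a rough TypeScript sketch: the dimensions, the input path and the processFrame() helper are placeholders, and in practice the frame size would come from a prior ffprobe call.

    import { spawn } from "child_process";

    // Placeholder values; the real dimensions should be probed first (e.g. with ffprobe).
    const WIDTH = 1280;
    const HEIGHT = 720;
    const FRAME_SIZE = WIDTH * HEIGHT * 3; // rgb24 = 3 bytes per pixel

    // Decode the input into one continuous stream of raw RGB frames on stdout.
    const ffmpeg = spawn("ffmpeg", [
      "-i", "input.mp4", // placeholder input (could be a signed S3 URL)
      "-f", "rawvideo",
      "-pix_fmt", "rgb24",
      "pipe:1",
    ]);

    let pending = Buffer.alloc(0);

    ffmpeg.stdout.on("data", (chunk: Buffer) => {
      pending = Buffer.concat([pending, chunk]);
      // Slice off complete frames as soon as enough bytes have accumulated.
      while (pending.length >= FRAME_SIZE) {
        const frame = pending.subarray(0, FRAME_SIZE);
        pending = pending.subarray(FRAME_SIZE);
        processFrame(frame);
      }
    });

    ffmpeg.on("close", (code) => console.log("ffmpeg exited with code", code));

    function processFrame(frame: Buffer): void {
      // Placeholder: edit the raw RGB frame here (e.g. with sharp or node-canvas)
      // and pipe it into a second ffmpeg process that re-encodes the output video.
    }

    The edited frames could then be fed to a second ffmpeg process reading raw video from stdin (-f rawvideo -pix_fmt rgb24 -s 1280x720 -r <fps> -i pipe:0) to assemble the final output, but I have not validated this end to end.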

    Any insights, code snippets, or recommendations would be greatly appreciated. Thank you!

  • Single-threaded demuxer with FFmpeg API

    7 December 2023, by yaskovdev

    I am trying to create a demuxer using the FFmpeg API with the following interface:

    interface IDemuxer
    {
        void WriteMediaChunk(byte[] chunk);

        byte[] ReadAudioOrVideoFrame();
    }

    The plan is to use it in a single thread like this:

    IDemuxer demuxer = new Demuxer();

    while (true)
    {
        byte[] chunk = ReceiveNextChunkFromInputStream();

        if (chunk.Length == 0) break; // Reached the end of the stream, exiting.

        demuxer.WriteMediaChunk(chunk);

        while (true)
        {
            var frame = demuxer.ReadAudioOrVideoFrame();

            if (frame.Length == 0) break; // Need more chunks to produce the frame. Let's add more chunks and try to produce it again.

            WriteFrameToOutputStream(frame);
        }
    }

    I.e., I want the demuxer to be able to notify me (by returning an empty result) that it needs more input media chunks to produce the output frames.

    It seems like FFmpeg can read the input chunks that I send to it using the read callback.

    The problem with this approach is that I cannot handle the situation where more input chunks are required using only one thread. I can handle it in three different ways in the read callback:

    1. Simply be honest that there is no data yet and return an empty buffer to FFmpeg. Then add more data using WriteMediaChunk(), and then retry ReadAudioOrVideoFrame().

    2. Return AVERROR_EOF to FFmpeg to indicate that there is no data yet.

    3. Block the thread and do not return anything. Once the data arrives, unblock the thread and return the data.

    But all these options are far from ideal:

    The first one leads to FFmpeg calling the callback again and again in an infinite loop, hoping to get more data; this essentially blocks the main thread and never gives me a chance to send more data.

    The second leads to FFmpeg stopping processing altogether. Even if the data finally appears, I won't be able to receive more frames; the only option is to start demuxing over again.

    The third one kind of works, but then I need at least two threads: the first constantly puts new data into a queue so that FFmpeg can read it via the callback, and the second reads the frames via ReadAudioOrVideoFrame(). The second thread may occasionally block if the first one is not fast enough and no data is available yet. Having to deal with multiple threads makes implementation and testing more complex.

    Is there a way to implement this using only one thread? Is the read callback even the right direction?

  • When MP4 files encoded with H264 are set to slices=n, where can I find out which slice the current NALU is?

    17 November 2023, by Gaowan Liang

    I am doing an experiment on generating thumbnails for web videos. I plan to extract I-frames from the binary stream by simulating how the decoder works, prepend the PPS and SPS information of the original video to form raw H264 data, and then hand that to ffmpeg to generate images. I have solved most of the problems, and even wrote a demo that implements this, but I cannot find any information about where there is an identifier when multiple NALUs make up one frame (strictly speaking, there is a little, but it does not solve my problem; I will come back to it later).

    You can use the following command to generate the type of video I mentioned:

    ffmpeg -i input.mp4 -c:v libx264 -x264-params slices=8 output.mp4

    This will generate a video with 8 slices per frame. Since I will use this file later, I will also generate the raw H264 file with the following command:

    ffmpeg -i output.mp4 -vcodec copy -an output.h264

    When I load it into the analysis program, I can see multiple IDR NALUs joined together, where the first_mb_in_slice in the slice header of each non-first IDR NALU is not 0.

    But when I go back to the mdat box in the MP4 and look at the NALUs, all the first_mb_in_slice values become 0.

    0x9a = 1001 1010; according to Exp-Golomb coding, first_mb_in_slice == 0 (ue(1b) == 0) and slice_type == 5, a P slice (ue(00110b) == 5). But applying the same algorithm to the raw H264 file, the result is the same as what the analysis program shows.
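    For reference, this is the decoding I am applying, written out as a minimal TypeScript sketch. It only reads unsigned Exp-Golomb (ue(v)) values from the start of the slice header and does not strip emulation-prevention bytes, so it is not a complete parser; the BitReader class is just a helper for this illustration.

    // Minimal bit reader with an unsigned Exp-Golomb (ue(v)) decoder, enough to
    // read first_mb_in_slice and slice_type from the start of a slice header.
    class BitReader {
      private pos = 0; // current bit position

      constructor(private readonly data: Uint8Array) {}

      readBit(): number {
        const byte = this.data[this.pos >> 3];
        const bit = (byte >> (7 - (this.pos & 7))) & 1;
        this.pos++;
        return bit;
      }

      readUE(): number {
        // Count leading zero bits, then read that many bits after the leading 1.
        let zeros = 0;
        while (this.readBit() === 0) zeros++;
        let value = 0;
        for (let i = 0; i < zeros; i++) value = (value << 1) | this.readBit();
        return (1 << zeros) - 1 + value;
      }
    }

    // Worked example from above: a slice header payload starting with 0x9a.
    const reader = new BitReader(new Uint8Array([0x9a]));
    console.log("first_mb_in_slice =", reader.readUE()); // 0
    console.log("slice_type        =", reader.readUE()); // 5 (P slice)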

    Is there any other place where there is an identifier for this information? Assuming I randomly get a NALU, can I know whether this video is sliced or not, or is my operation wrong?

    PS: Feeding only one NALU into the decoder is feasible, but only 1/8 of the image can be parsed.

    If you need a reference, the demo program I wrote is at: https://github.com/gaowanliang/web-video-thumbnailer
