
Other articles (23)

  • MediaSPIP Core: Configuration

    9 November 2010, by

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the template; a page for configuring the site's home page; a page for configuring the sectors.
    It also provides an additional page, which only appears when certain plugins are activated, for controlling their display and specific features (...)

  • Accepted formats

    28 January 2010, by

    The following commands give information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 m4v: raw MPEG-4 video format flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263 Theora wmv:
    Possible output video formats
    As a first step, we (...)

  • Statuses of mutualisation instances

    13 March 2010, by

    For general compatibility between the mutualisation management plugin and SPIP's original functions, instance statuses are the same as for any other object (articles, etc.); only their names in the interface differ slightly.
    The possible statuses are: prepa (requested), an instance requested by a user; if the site was already created in the past, it is switched to disabled mode. publie (validated), an instance validated by a (...)

On other sites (3554)

  • Single-threaded demuxer with FFmpeg API

    7 December 2023, by yaskovdev

    I am trying to create a demuxer using the FFmpeg API, with the following interface:

    interface IDemuxer
{
    void WriteMediaChunk(byte[] chunk);

    byte[] ReadAudioOrVideoFrame();
}


    The plan is to use it in a single thread, like this:

    IDemuxer demuxer = new Demuxer();

while (true)
{
    byte[] chunk = ReceiveNextChunkFromInputStream();
    
    if (chunk.Length == 0) break; // Reached the end of the stream, exiting.
    
    demuxer.WriteMediaChunk(chunk);
    
    while (true)
    {
        var frame = demuxer.ReadAudioOrVideoFrame();

        if (frame.Length == 0) break; // Need more chunks to produce the frame. Add more chunks and try to produce it again.

        WriteFrameToOutputStream(frame);
    }
}


    That is, I want the demuxer to be able to notify me, by returning an empty result, that it needs more input chunks before it can produce output frames.
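    To make the intended contract concrete, here is a minimal, FFmpeg-free sketch in C of the same push/pull pattern. The "container" format is made up (one length byte followed by that many payload bytes) and all names are hypothetical; the only point is the empty-result-means-feed-more-input behaviour of the interface above:

```c
#include <stdint.h>
#include <string.h>

/* Toy push-based demuxer over a hypothetical length-prefixed format:
 * each "frame" is a 1-byte length followed by that many payload bytes.
 * Reading returns 0 until a whole frame is buffered -- the
 * "need more input" signal the interface relies on. */
#define DEMUX_CAP 4096

typedef struct {
    uint8_t buf[DEMUX_CAP];
    size_t len;                 /* bytes currently buffered */
} Demuxer;

/* WriteMediaChunk: append an input chunk to the internal buffer. */
static void demux_write(Demuxer *d, const uint8_t *chunk, size_t n) {
    if (d->len + n <= DEMUX_CAP) {
        memcpy(d->buf + d->len, chunk, n);
        d->len += n;
    }
}

/* ReadAudioOrVideoFrame: copy one complete frame into `frame` (assumed
 * large enough) and return its size, or return 0 if more chunks are
 * needed. It never blocks and never signals EOF on a mere data gap. */
static size_t demux_read(Demuxer *d, uint8_t *frame) {
    if (d->len == 0) return 0;
    size_t need = d->buf[0];              /* payload length */
    if (d->len < 1 + need) return 0;      /* incomplete: ask for more input */
    memcpy(frame, d->buf + 1, need);
    d->len -= 1 + need;
    memmove(d->buf, d->buf + 1 + need, d->len);
    return need;
}
```

    A real implementation on top of libavformat would have to map this "return 0, retry later" semantics onto the read callback, which is exactly where the options below run into trouble.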

    It seems that FFmpeg can read the chunks I send to it through a custom read callback.

    The problem with this approach is that, using only one thread, I cannot handle the situation where more input chunks are required. There are three ways to handle it in the read callback:

    
    1. Simply be honest that there is no data yet and return an empty buffer to FFmpeg. Then add more data using WriteMediaChunk(), and retry ReadAudioOrVideoFrame().
    2. Return AVERROR_EOF to FFmpeg to indicate that there is no data yet.
    3. Block the thread and do not return anything. Once the data arrives, unblock the thread and return the data.

    But all these options are far from ideal:

    The first option leads to FFmpeg calling the callback again and again in a busy loop, hoping to get more data; this essentially blocks the main thread and prevents me from feeding in new chunks.

    The second makes FFmpeg stop processing entirely. Even if the data finally appears, I can no longer receive frames; the only option is to start demuxing over again.

    The third kind of works, but it requires at least two threads: the first constantly puts new data into a queue so that FFmpeg can read it via the callback, and the second reads frames via ReadAudioOrVideoFrame(). The second thread may occasionally block if the first is not fast enough and no data is available yet. Having to deal with multiple threads makes implementation and testing more complex.

    Is there a way to implement this using only one thread? Is the read callback even the right direction?

  • When MP4 files encoded with H264 are set to slices=n, where can I find out which slice the current NALU belongs to?

    17 November 2023, by Gaowan Liang

    I am experimenting with generating thumbnails for web videos. My plan is to extract I-frames from the binary stream by mimicking how a decoder works, prepend the PPS and SPS of the original video to form raw H264 data, and hand that to ffmpeg to generate images. I have solved most of the problems, and even wrote a demo implementing the feature, but I cannot find any identifier marking how multiple NALUs are grouped into one frame (strictly speaking there is a little information, but it does not solve my problem; I will come back to it later).

    You can use the following command to generate the kind of video I mentioned:

     ffmpeg -i input.mp4 -c:v libx264 -x264-params slices=8 output.mp4


    This generates a video with 8 slices per frame. Since I will need this file later, I also extract the raw H264 stream with the following command:

     ffmpeg -i output.mp4 -vcodec copy -an output.h264


    When I load it into an analysis program, I can see multiple IDR NALUs concatenated, where first_mb_in_slice in the slice header of each non-first IDR NALU is not 0:

    But when I go back to the mdat box of the MP4 and look at the NALUs, every first_mb_in_slice becomes 0:

    0x9A = 1001 1010b; per Exp-Golomb coding, first_mb_in_slice == 0 (ue(1b) == 0) and slice_type == 5, i.e. a P slice (ue(00110b) == 5). But decoding the raw H264 file with the same algorithm gives the same result as the analysis program.
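    That bit arithmetic can be checked mechanically. Below is a small, self-contained ue(v) (unsigned Exp-Golomb) reader in C, independent of FFmpeg's own bitstream reader, applied to the 0x9A byte discussed above:

```c
#include <stdint.h>
#include <stddef.h>

/* MSB-first bit reader over a byte buffer. */
typedef struct {
    const uint8_t *data;
    size_t pos;              /* bit position from the start of the buffer */
} BitReader;

static unsigned read_bit(BitReader *br) {
    unsigned bit = (br->data[br->pos >> 3] >> (7 - (br->pos & 7))) & 1u;
    br->pos++;
    return bit;
}

/* ue(v): count k leading zero bits up to the first 1 bit,
 * then the value is 2^k - 1 plus the next k bits. */
static unsigned read_ue(BitReader *br) {
    unsigned k = 0;
    while (read_bit(br) == 0)
        k++;
    unsigned suffix = 0;
    for (unsigned i = 0; i < k; i++)
        suffix = (suffix << 1) | read_bit(br);
    return (1u << k) - 1u + suffix;
}
```

    Run on the single byte 0x9A, two successive read_ue() calls yield first_mb_in_slice = 0 and slice_type = 5 (P slice), matching the decoding above.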

    Is there anywhere else this information is identified? Given a random NALU, can I tell whether the video is sliced, or is my approach wrong?

    PS: Feeding just one NALU to the decoder works, but only 1/8 of the image can be parsed.


    If you need a reference, the demo program I wrote is at: https://github.com/gaowanliang/web-video-thumbnailer

  • How to stream 24/7 on YouTube (audio + video) with FFmpeg

    29 September 2023, by Carter510

    I plan to create a 24/7 stream with a video and background music taken from a /Playlist folder.
I would like the playlist to play in random order, and if a track is corrupted or cannot be played, the program should move on to the next one.

    The problem is that with my command, the stream stops every time the music changes.
Any suggestions?

    #!/bin/bash

VBR="4500k"
FPS="30"
QUAL="superfast"

YOUTUBE_URL="rtmp://a.rtmp.youtube.com/live2"
KEY="XXXX-XXXX-XXXX-XXXX"

VIDEO_SOURCE="fireplace.mkv"
AUDIO_FOLDER="/home/administrateur/Documents/Youtube/Playlist"

while true; do
    # Joue la vidéo en boucle
    ffmpeg -re -stream_loop -1 -i "$VIDEO_SOURCE" \
    -thread_queue_size 512 -i "$(find "$AUDIO_FOLDER" -type f -name "*.mp3" | shuf -n 1)" \
    -map 0:v:0 -map 1:a:0 \
    -map_metadata:g 1:g \
    -vcodec libx264 -pix_fmt yuv420p -preset $QUAL -r $FPS -g $(($FPS * 2)) -b:v $VBR \
    -acodec libmp3lame -ar 44100 -threads 6 -qscale:v 3 -b:a 320000 -bufsize 512k \
    -f flv "$YOUTUBE_URL/$KEY"
done


    I would like the fireplace.mkv video to play without interruption, the music to be chosen at random and never stop, and any track that cannot be played to be skipped.
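    One common way to avoid the per-track restart is to keep a single ffmpeg process running and feed the audio through the concat demuxer. A sketch under those assumptions follows; build_playlist and the paths are illustrative, and genuinely corrupted tracks would still need pre-validation or error-tolerance flags, which this sketch does not cover:

```shell
#!/bin/sh
# Sketch: build a shuffled playlist for ffmpeg's concat demuxer so one
# ffmpeg process plays every track in turn and the RTMP connection never
# drops between songs.
build_playlist() {
    # $1 = folder containing .mp3 files, $2 = output playlist file.
    # The concat demuxer expects one line per track of the form
    #   file '<path>'
    # with single quotes inside paths escaped as '\''.
    find "$1" -type f -name "*.mp3" | shuf | \
        sed "s/'/'\\\\''/g; s/^/file '/; s/\$/'/" > "$2"
}

# Then, instead of restarting ffmpeg for every song:
#   build_playlist "$AUDIO_FOLDER" /tmp/playlist.txt
#   ffmpeg -re -stream_loop -1 -i "$VIDEO_SOURCE" \
#          -f concat -safe 0 -i /tmp/playlist.txt \
#          ... -f flv "$YOUTUBE_URL/$KEY"
```

    Regenerating the playlist file in the outer loop keeps the shuffle while the stream itself stays up for the whole playlist rather than for a single track.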