Advanced search

Media (1)

Keyword: - Tags -/belgique

Other articles (81)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Organizing by category

    17 May 2013, by

    In MediaSPIP, a section has two names: category and section (rubrique).
    The documents stored in MediaSPIP can be sorted into different categories. A category can be created by clicking "publish a category" in the publish menu at the top right (after logging in). A category can also be placed inside another category, so a tree of categories can be built.
    When the next document is published, the newly created category will be offered (...)

  • What is a form mask?

    13 June 2013, by

    A form mask is a customisation of the publication form for media, sections, news items, editorials and links to other sites.
    Each object's publication form can therefore be customised.
    To customise the form fields, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires".
    Then select the form to modify by clicking on its object type. (...)

On other sites (4263)

  • Prevent suspend event when streaming video via HTML video tag

    24 September 2014, by jasongullickson

    I seem to be having the opposite problem of most people who stream video using the HTML video tag; I'm saturating the client with data.

    When playing a long video served via ffserver (webm container), everything works great, but eventually the browser (Chrome in this case) begins throwing "suspend" events. After a number of these (50-100), a "stalled" event fires and playback stops.

    I believe the problem is that once Chrome has buffered a certain amount of video it goes into "suspend" and stops downloading more data. I’ve tested this theory by throttling the speed at which video data is delivered, and if I keep the delivered frame rate close to the playback rate, I can prevent this from happening, but of course deliberately holding back server performance isn’t ideal.

    What I’m looking for is either a way to suppress this "suspend" behavior altogether, or alternatively a way to respond to the event that prevents the eventual "stalled" state.

    Presumably the browser at some point exits the "suspend" state and begins requesting data again, but I haven't actually observed this happening. I'm using a chain of mpeg2 -> ffmpeg -> ffserver to stream the video, so if the browser attempts to resume loading data, I don't see the request in my application. I could use a proxy or a sniffer to watch for the traffic, but I would expect an ffserver log to tell me the same thing? In any event, if the browser is attempting to resume the download, it is failing, and there is no indication server-side of any reason for the request to fail (in fact I can pull up the same video feed from ffserver and see it playing correctly).

    So I feel like I’ve isolated this to a client-side playback issue, and one where the browser is voluntarily giving up on loading the data, but I’m not sure how to convince it to "not do that", or at least attempt to resume when it runs the buffer dry.
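
    One client-side angle worth trying is to react to the events themselves. The sketch below (an assumption, not a confirmed fix for the ffserver case: re-triggering the load cycle on "stalled" and seeking back is a common workaround pattern) shows how the two events could be handled:

    ```javascript
    // Sketch: recover from a "stalled" <video> element by re-triggering
    // its load cycle and seeking back to where playback left off.
    // Assumes a standard HTMLMediaElement; the recovery strategy is a
    // guess, not a verified fix for the case described above.
    function attachStallRecovery(video) {
      let suspendCount = 0;

      video.addEventListener("suspend", () => {
        suspendCount += 1; // harmless on its own; just track how often it fires
      });

      video.addEventListener("stalled", () => {
        const resumeAt = video.currentTime;
        video.load();                 // restart the fetch for the current source
        video.currentTime = resumeAt; // seek back to where we stalled
        video.play();
      });

      return () => suspendCount;      // expose the counter for debugging
    }
    ```

    Whether Chrome honours the seek against a live ffserver feed would need testing; the point is that "stalled" is catchable, so the client does not have to give up silently.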

  • ffmpeg muxing to mkv container

    24 May 2017, by Pawel K

    I’m muxing H264 frames with A-law audio frames coming from an RTP stream, but I’m having some problems setting the fps for the mkv container.

    For AVPacket::pts I use the presentation time calculated from RTCP SR reports.
    I rescale it as follows (I had assumed exactly the same for audio):

    pkt.pts = av_rescale_q(s_video_sync.fSyncTime.tv_sec * 1000000 + s_video_sync.fSyncTime.tv_usec,
                           av_encode_codec_ctx->time_base, video_st->time_base);
    • the timestamp (the first parameter) is in microseconds; in theory it is NTP-derived.
    • av_encode_codec_ctx->time_base is set to {1,fps}, where fps depends on the stream; let’s say 5.
    • video_st->time_base is automatically set to {1,1000} (I gather this is forced by the mkv container).

    I gather this is what should be passed to the rescale function (at least that’s what the examples show), but ffprobe shows strange readouts for the duration, the fps is reported as 1k, and the video plays strangely.

    My question is: how should I approach this?
    Should I rescale the timestamps to start counting from 0 at the first packet, or am I mixing two different time domains so that the muxer can’t figure out what to do?

    EDIT1:
    I have figured out that since the timestamps are in microseconds (and I’m not encoding), I should use {1,1000000} as the timebase for the pts calculation instead of the codec’s timebase. Now at least the duration is OK and the audio plays smoothly, but the video is choppy: there is a timestamp in the video that does not increment smoothly, and the fps is still 1k.
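
    The arithmetic behind that fix can be checked in isolation. The sketch below re-implements the core of what av_rescale_q computes in plain C (ignoring the rounding modes and overflow protection the real libavutil function has) to show how a microsecond timestamp maps into the mkv 1/1000 timebase; the sample value is an assumption for illustration.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Minimal stand-in for AVRational. */
    typedef struct { int num, den; } Rational;

    /* Simplified version of what av_rescale_q computes:
     * a * (bq.num / bq.den) * (cq.den / cq.num), in integer arithmetic.
     * The real function also handles rounding and 64-bit overflow;
     * this sketch does not. */
    static int64_t rescale_q_sketch(int64_t a, Rational bq, Rational cq)
    {
        return a * bq.num * cq.den / ((int64_t)bq.den * cq.num);
    }

    int main(void)
    {
        Rational us = {1, 1000000};  /* source: microsecond timestamps  */
        Rational ms = {1, 1000};     /* dest: mkv's default 1/1000 base */

        int64_t pts_us = 5000000;    /* e.g. 5 seconds into the stream  */
        int64_t pts_ms = rescale_q_sketch(pts_us, us, ms);

        printf("%lld us -> %lld ms-ticks\n", (long long)pts_us, (long long)pts_ms);
        return 0;
    }
    ```

    With a {1,fps} codec timebase as the source, the same formula multiplies a microsecond value by 1000/fps instead of dividing by 1000, which is consistent with the oversized durations observed before the fix.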

    EDIT2:
    It seems that after manually setting the following (it may be that it was supposed to be set like that in the first place):

    video_st->avg_frame_rate = (AVRational){ 90000, 90000/u8fps };
    av_stream_set_r_frame_rate(video_st, (AVRational){ 90000, 90000/u8fps });

    where u8fps is the assumed number of frames per second and 90000 is the standard video sampling clock (90 kHz tick), the video plays smoothly.

    Regards,
    Pawel.

  • Extract images from ffmpeg stream

    30 July 2022, by Exitare

    I am trying to set up an application which receives a stream via TCP from a localhost webcam. The stream is generated by ffmpeg like so:

    ffmpeg -f avfoundation -framerate 30 -i 0 -target pal-dvd -probesize 42M -pix_fmt uyvy422 -f mpegts -flush_packets 0 tcp://127.0.0.1:9050
    My receiving server application code looks like this:

public static void StartServer()
{
    bool done = false;

    var listener = new TcpListener(IPAddress.Any, 9050);
    listener.Start();

    // Buffer for reading data
    var bytes = new byte[256];

    while (!done)
    {
        Console.WriteLine("Waiting for connection...");
        TcpClient client = listener.AcceptTcpClient();

        // Get a stream object for reading
        NetworkStream stream = client.GetStream();

        int i;
        Console.WriteLine("Connection accepted.");
        // Loop to receive all the data sent by the client.
        while ((i = stream.Read(bytes, 0, bytes.Length)) != 0)
        {
            // Note: passing the array itself prints its type name
            // ("System.Byte[]"), not its contents.
            Console.WriteLine("Received: {0}", bytes);
        }
    }

    listener.Stop();
}
    ffmpeg is able to connect and send the stream, and the C# application is able to receive it.

    The output of the Console.WriteLine call looks like this:

    Received: System.Byte[]
Received: System.Byte[]
Received: System.Byte[]
Received: System.Byte[]
Received: System.Byte[]
Received: System.Byte[]
Received: System.Byte[]
Received: System.Byte[]
Received: System.Byte[]
Received: System.Byte[]
Received: System.Byte[]
    However, I am unsure how to create images from these bytes. In theory I would have to wait until I have received a whole image, as I expect an image to be larger than 256 bytes.

    In short, I receive the bytes but don't know how to convert them into images. How do I do this?

    Also, I am not sure whether this is the best approach. I know that ffmpeg can extract images from videos, but I don't know whether that is also possible with streams. Is there a solution provided by ffmpeg to extract images from an input stream? Ideally 10 images per second.
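
    On that last point, one option (a sketch, untested against this exact setup) is to let ffmpeg itself act as the TCP listener and write image files directly, which sidesteps parsing the byte stream by hand. The ?listen option makes ffmpeg the server on the same port used above, so the sending command connects to it exactly as before; the output pattern and JPEG quality are illustrative assumptions.

    ```shell
    # Sketch: listen on the port the sender already targets, decode the
    # MPEG-TS stream, and write 10 JPEG frames per second of video.
    ffmpeg -i "tcp://127.0.0.1:9050?listen" \
           -vf fps=10 \
           -q:v 2 \
           frame_%05d.jpg
    ```

    The fps=10 filter samples the decoded video at 10 frames per second regardless of the input rate, so this matches the "10 images per second" goal without any byte-level work in the C# application.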