Media (0)

No media matching your criteria is available on the site.

Other articles (93)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To use it, simply activate the Chosen plugin (Site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • The plugin: Gestion de la mutualisation

    2 March 2010, by

    The Gestion de mutualisation plugin makes it possible to manage the various MediaSPIP channels from a master site. Its purpose is to provide a pure SPIP solution to replace this older solution.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customise the central mes_options.php file as you wish. As an example, here is the one used on the mediaspip.net platform:
    <?php (...)

  • Managing the farm

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to regulate the needs of the different channels.
    As a first step, it relies on the "Gestion de mutualisation" plugin

On other sites (10765)

  • FFMPEG: Recurring onMetaData for RTMP? [on hold]

    30 November 2017, by stevendesu

    For whatever reason this was put on hold as "too broad", although I felt I was quite specific. So I’ll try rephrasing here:

    My former understanding:

    The RTMP Protocol involves sending several parallel streams of data as a series of packets, with an ID correlating to which stream they are a part of. For instance:

    [VIDEO] <data>
    [AUDIO] <data>
    [VIDEO] <data>
    [VIDEO] <data>
    [SERVER] <metadata about bandwidth>
    [VIDEO] <data>
    [AUDIO] <data>
    ...

    Then on the player side these packets are split up into separate buffers based on type (all video data is concatenated, all audio data is concatenated, etc.)

    One of the packet types is called onMetaData (ID: 0x12)

    An onMetaData packet includes a timestamp for when to trigger the metadata (this way it can be synchronized with the video) as well as the contents of the metadata (a text string)

    My setup:

    I’m using Red5Pro as my ingest server to take in an RTMP stream and then watch this stream via WebRTC. When an onMetaData packet is received by Red5, it sends out a JSON object with the contents of the metadata to all subscribers of the stream over WebSockets.

    What I want:

    I want to take advantage of this onMetaData channel to embed the server’s system clock into a stream. This way anyone viewing the stream can determine when (according to the server) a stream was encoded and, if they synchronize their clock with the server, they can then compute the end-to-end latency of the stream. Due to Red5’s use of WebSockets to send metadata, this isn’t a perfect solution (you may receive the metadata before or after you actually receive the video information); however, I have some plans to work around this.

    In other words, I want my stream to look like this:

    [VIDEO] <data>
    [AUDIO] <data>
    [ONMETADATA] time: 2:05:77.382
    [VIDEO] <data>
    [VIDEO] <data>
    [SERVER] <metadata about bandwidth>
    [VIDEO] <data>
    [ONMETADATA] time: 2:05:77.423
    [AUDIO] <data>
    ...

    What I would like is to generate this stream (with the server’s current time periodically embedded into the onMetaData channel) using FFMPEG

    Simpler problem:

    FFMPEG offers a -metadata command-line parameter.

    In my experiments, using this parameter caused a single onMetaData event to be fired including things like "title", "author", etc. I could not inject additional onMetaData packets periodically as the stream progressed.

    Even if the metadata packets do not contain the system clock, if I could send any metadata packets periodically using FFMPEG then I could include something static like "the server’s clock at the time the broadcast started". I could then compare this to the current timestamp of the video and calculate the latency.
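    For reference, this is roughly the shape of the one-shot invocation I experimented with (a sketch rather than my exact command; the RTMP URL and the start_wallclock key are placeholder names I made up):

    # embed the wallclock at broadcast start as a single, static metadata entry,
    # then push the stream to the RTMP ingest server
    ffmpeg -re -i input.mp4 \
        -metadata start_wallclock="$(date -u +%s.%3N)" \
        -c copy -f flv rtmp://example.com/live/stream

    The open question is how to repeat that kind of injection every few seconds while the stream is running.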

    My confusion:

    Continuing to look into this after creating my post, there are a couple of things that I don’t fully understand or which don’t quite make sense to me. For one, if FFMPEG is only injecting a single onMetaData packet into the stream, then I would expect anyone joining the stream late to miss it. However, when I join the stream 8 hours later I see Red5 send me the metadata packet complete with title, author, etc. So it’s almost like the metadata packet doesn’t have a timestamp associated with it but instead is just generic metadata about the video.

    Furthermore, there’s something called "AMF" which I’m not familiar with, but it may be important?

    Original Post

    I spent today playing around with methods to embed the system clock at time of encode into a stream, so that I could compare this value to the same system clock at time of decode to get a rough estimate of RTMP latency. Unfortunately the majority of techniques I used ended up failing.

    One thing I wanted to try next was taking advantage of RTMP’s onMetaData to send the current system clock periodically (maybe every 5 seconds) as part of the stream for any clients to listen for.

    Unfortunately FFMPEG’s -metadata option seems to only be for one-time metadata when the stream first loads. I can’t figure out how to add continuous (and generated) values to a stream.

    Is there a way to do this?

  • Cut AVI video via FFMPEG results in black screen video, but audio is OK

    25 December 2017, by mipi

    I want to trim an AVI video (H.264 codec) via ffmpeg. The time interval for the result is available as START_TIME_ORIG and DURATION_ORIG (both in microseconds). To make sure that the resulting video starts with an IDR frame, I determine START_TIME and DURATION via ffprobe by executing

    ffprobe -show_frames -pretty -read_intervals [TIME_FROM%TIME_TO] input.avi

    twice, to get the IDR frames which are (1st call) closest to START_TIME_ORIG and (2nd call) closest to START_TIME_ORIG+DURATION_ORIG. TIME_FROM and TIME_TO define an interval of plus/minus 5 seconds around (1st call) START_TIME_ORIG and (2nd call) START_TIME_ORIG+DURATION_ORIG. To identify a frame as an IDR frame I verify that key_frame=1 and pict_type=I. START_TIME is then set to the pkt_dts_time of that frame. In a similar way I calculate DURATION.
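    For reference, the same ffprobe query can be narrowed down to just the fields I look at (a sketch; the interval 00:00:55%00:01:05 is only an example value standing in for [TIME_FROM%TIME_TO]):

    # list key_frame, pict_type and pkt_dts_time for the video frames in the window
    ffprobe -read_intervals 00:00:55%00:01:05 -select_streams v \
        -show_entries frame=key_frame,pict_type,pkt_dts_time -of csv input.avi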

    Then ffmpeg is called:

    ffmpeg -ss [START_TIME] -i input.avi -t [DURATION] -codec copy -reset_timestamps 1 -async 1 -map 0 -y output.avi

    Unfortunately the resulting video shows only a black screen, while the audio is OK. What is wrong with my approach?
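    One variant I am considering trying (just a guess on my part, not something I have verified) is to re-encode the video stream instead of stream-copying it, so that the cut no longer has to begin exactly on an IDR frame:

    ffmpeg -ss [START_TIME] -i input.avi -t [DURATION] -c:v libx264 -c:a copy -async 1 -map 0 -y output.avi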
    Thanks, mipi

  • How to detect a blue screen in an ffmpeg video packet?

    28 November 2017, by 심상원

    Good morning. I have a question about FFMPEG.

    I’m using FFMPEG while studying C++ on Linux.

    When the camera feed is RTSP and the format is H.264,

    I would like to determine whether the camera image is a blue screen, but the following concepts are confusing to me.

    1. A KeyFrame arrives every second, or on some X-second cycle. Does the camera still deliver a KeyFrame even if the image has not changed?

    2. If the KeyFrame is delivered, is the size of the packets transmitted between KeyFrame cycles zero?

    3. If the behaviour is the same as for a normal image, should I instead compare the individual frames after decoding?

    If none of the above is right, please let me know if there is a good way to do this.
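    One empirical check I can think of for questions 1 and 2 (a sketch only; the RTSP URL is a placeholder) is to dump per-packet sizes and keyframe flags straight from the camera with ffprobe:

    # print pts_time, size and flags for every video packet;
    # keyframes carry a 'K' in the flags column
    ffprobe -rtsp_transport tcp -select_streams v \
        -show_entries packet=pts_time,size,flags -of csv rtsp://camera.example/stream

    That should show directly whether the packets between KeyFrames ever really drop to zero bytes when the image does not change.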

    Thank you.