
Other articles (41)

  • APPENDIX: The extensions, SPIP plugins of the channels

    11 February 2010

    A plugin is a functional addition to the SPIP core. MediaSPIP is a deliberate selection of plugins, some of which already existed in the SPIP community and others which had to be either created from scratch or given additional features.
    The extensions MediaSPIP needs in order to work
    Since version 2.1.0, SPIP makes it possible to add plugins in the extensions/ directory.
    "Extensions" are nothing more nor less than plugins whose particularity is that they (...)

  • Submit bugs and patches

    13 avril 2011

    Unfortunately, no software is ever perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including its exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Changing your graphic theme

    22 February 2011

    The graphic theme does not touch the actual layout of the elements on the page. It only changes the appearance of the elements.
    The placement can indeed be modified, but this modification is purely visual, not a change to the page's semantic structure.
    Changing the graphic theme in use
    To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
    You then simply go to the configuration area of the (...)

On other sites (4914)

  • Decode h264 video bytes into JPEG frames in memory with ffmpeg

    5 February 2024, by John Karkas

    I'm using Python and ffmpeg (4.4.2) to generate an h264 video stream from images produced continuously by a process. I aim to send this stream over a websocket connection, decode it back into individual image frames at the receiving end, and emulate a stream by continuously pushing frames to an <img> tag in my HTML.


    However, I cannot read images at the receiving end, despite trying combinations of the rawvideo input format, the image2pipe format, re-encoding the incoming stream as mjpeg or png, and so on. So I would be happy to know what the standard way of doing something like this would be.


    At the source, I'm piping frames from a while loop into ffmpeg to assemble an h264-encoded video. My command is:


        command = [
            'ffmpeg',
            '-f', 'rawvideo',
            '-pix_fmt', 'rgb24',
            '-s', f'{shape[1]}x{shape[0]}',
            '-re',
            '-i', 'pipe:',
            '-vcodec', 'h264',
            '-f', 'rawvideo',
            # '-vsync', 'vfr',
            '-hide_banner',
            '-loglevel', 'error',
            'pipe:'
        ]
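    For context, a minimal sketch of how a command like the above might be driven from Python. The frame shape and frame source here are hypothetical, and frames are assumed to arrive as raw rgb24 bytes (e.g. `numpy_array.tobytes()`):

```python
import subprocess

def encoder_command(shape):
    """Build the encoder command from the question for frames of the
    given (height, width, channels) shape. ffmpeg's -s option expects
    WIDTHxHEIGHT, hence the swapped indices."""
    return [
        'ffmpeg',
        '-f', 'rawvideo', '-pix_fmt', 'rgb24',
        '-s', f'{shape[1]}x{shape[0]}',
        '-re', '-i', 'pipe:',
        '-vcodec', 'h264', '-f', 'rawvideo',
        '-hide_banner', '-loglevel', 'error',
        'pipe:',
    ]

def feed_frames(frames, shape):
    """Pipe raw rgb24 frames (bytes objects) into ffmpeg and return its
    stdout stream, which carries the h264 bitstream to forward over the
    websocket. Each frame must be exactly shape[0]*shape[1]*3 bytes."""
    proc = subprocess.Popen(encoder_command(shape),
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    for frame in frames:
        proc.stdin.write(frame)
    proc.stdin.close()
    return proc.stdout
```

    In a real setup `feed_frames` would run in its own thread while another thread drains `proc.stdout`, otherwise the pipe buffers can fill and deadlock the encoder.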


    At the receiving end of the websocket connection, I can save the images to storage by including:


        command = [
            'ffmpeg',
            '-i', '-',  # Read from stdin
            '-c:v', 'mjpeg',
            '-f', 'image2',
            '-hide_banner',
            '-loglevel', 'error',
            f'encoded/img_%d_encoded.jpg'
        ]


    in my ffmpeg command.


    But I want to instead extract each individual frame coming in over the pipe and load it into my application, without saving it to storage. So basically, I want whatever the 'encoded/img_%d_encoded.jpg' line achieves in ffmpeg, but with access to each frame via the stdout pipe of an ffmpeg subprocess running in its own thread at the receiving end.


    • What would be the most appropriate ffmpeg command to fulfil a use case like the above? And how could it be tuned to be faster or to give more quality?

    • Would I be able to read from the stdout buffer with process.stdout.read(2560x1440x3) for each frame?

    If you feel strongly about referring me to a more up-to-date version of ffmpeg, please do so.


    PS: It is understandable that this may not be the optimal way to create a stream. Nevertheless, I don't think there should be much complexity in this, and the latency should be low. I could instead send JPEG images over the websocket and view them in my <img> tag, but I want to save on bandwidth and shift some of the computational effort to the receiving end.
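    One common approach to the in-memory question (a sketch, not necessarily the standard way): have the receiving ffmpeg decode to raw rgb24 on stdout, with a command along the lines of `['ffmpeg', '-i', '-', '-f', 'rawvideo', '-pix_fmt', 'rgb24', 'pipe:']`, so every frame is exactly width*height*3 bytes. A single `read()` on a pipe may return fewer bytes than requested, so the reader has to loop until a full frame has accumulated; the in-memory stream below stands in for the subprocess's stdout:

```python
import io

def iter_raw_frames(stream, width, height):
    """Yield one rgb24 frame (bytes) at a time from a pipe-like stream.

    A pipe read may return fewer bytes than asked for, so keep reading
    until a full width*height*3 frame has been accumulated."""
    frame_size = width * height * 3
    while True:
        buf = bytearray()
        while len(buf) < frame_size:
            chunk = stream.read(frame_size - len(buf))
            if not chunk:  # EOF (or stream closed mid-frame)
                return
            buf.extend(chunk)
        yield bytes(buf)

# Stand-in for ffmpeg's stdout: two 2x2 rgb24 frames back to back.
fake_stdout = io.BytesIO(b'\x00' * 12 + b'\xff' * 12)
frames = list(iter_raw_frames(fake_stdout, 2, 2))
```

    With the real subprocess, `stream` would be `proc.stdout` and each yielded buffer can be reshaped into an image array or re-encoded to JPEG in memory, without ever touching disk.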


  • Monitor multiple instances of same process

    18 December 2013, by user3116597

    I'm trying to monitor multiple instances of the same process. I can't for the life of me do this without running into a problem.

    All the examples I have seen so far on the internet involve me writing out the PID or monitoring the process itself. The issue is that if one instance fails, it doesn't mean all the rest have failed as well.

    To write out the PID for each process, I would probably have to start each one with a short delay so that the right PID is recorded, since the PID is looked up by probing the process name.

    If I'm wrong on this, please correct me. But so far I haven't found a way to monitor each individual process when they all share the same name.

    To add to the above, the processes are run from a batch script, and each one runs in its own screen session (ffmpeg would otherwise not be able to run in the background).

    If anyone can point me vaguely in the right direction on how to do this in Linux I would really appreciate it. I read somewhere that it would be possible to set up symlinks which would then give me fake process names and that way I can monitor the 'fake' process name.
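    One way to tell same-named processes apart is to match on each instance's full command line rather than just the process name, e.g. by parsing the output of `pgrep -af ffmpeg` (one `PID command-line` entry per line). A sketch, with made-up sample output:

```python
def instances_by_cmdline(pgrep_output, marker):
    """Map each distinct command line containing `marker` to its PID.

    `pgrep_output` is the text produced by `pgrep -af NAME`:
    one 'PID full-command-line' entry per line."""
    pids = {}
    for line in pgrep_output.splitlines():
        pid, _, cmdline = line.partition(' ')
        if marker in cmdline:
            pids[cmdline] = int(pid)
    return pids

# Hypothetical `pgrep -af ffmpeg` output for two running instances:
sample = (
    "1234 ffmpeg -i camera1.sdp out1.m3u8\n"
    "1240 ffmpeg -i camera2.sdp out2.m3u8\n"
)
pids = instances_by_cmdline(sample, 'ffmpeg')
```

    Since each instance presumably has distinct arguments (different inputs or outputs), the command line identifies it stably across restarts, which the bare PID does not.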

  • ffmpeg-next: how can I enable multithreading on a decoder?

    14 December 2022, by Brandon Piña

    I'm using the Rust crate ffmpeg-next to decode some video into individual frames for use in another library. The problem is that when I run my test it only seems to use a single core. I've tried modifying the threading configuration for my decoder, as you can see below, but it doesn't seem to do anything.


        let context_decoder =
            ffmpeg_next::codec::context::Context::from_parameters(input_stream.parameters())?;
        let mut decoder = context_decoder.decoder().video()?;
        let mut threading_config = decoder.threading();
        threading_config.count = num_cpus::get();
        threading_config.kind = ThreadingType::Frame;

        decoder.set_threading(threading_config);
