Other articles (82)

  • The plugin: Gestion de la mutualisation

    2 March 2010, by

    The Gestion de mutualisation plugin makes it possible to manage the various MediaSPIP channels from a master site. It aims to provide a pure SPIP solution to replace the old setup.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customise the central mes_options.php file as you see fit. As an example, here is the one used by the mediaspip.net platform:
    <?php (...)

  • Managing the farm

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to regulate the needs of the different channels.
    To begin with, it relies on the "Gestion de mutualisation" plugin

  • Managing forums

    3 November 2011, by

    If forums are enabled on the site, administrators can manage them from the administration interface or directly from the article itself, via the article-editing block located in the page navigation.
    Accessing the message moderation interface
    Once logged in to the site, an administrator can manage the forums in two ways.
    If they wish to moderate the forums of a particular article (for example, to flag a message as SPAM), they have at their (...)

On other sites (13884)

  • FFmpeg Has A Native VP8 Decoder

    24 June 2010, by Multimedia Mike — VP8

    Thanks to David Conrad and Ronald Bultje who committed their native VP8 video decoder to the FFmpeg codebase yesterday. At this point, it can decode 14/17 of the VP8 test vectors that Google released during the initial open sourcing event. Work is ongoing on those 3 non-passing samples (missing bilinear filter). Meanwhile, FFmpeg’s optimization-obsessive personalities are hard at work optimizing the native decoder. The current decoder is already profiled to be faster than Google/On2’s official libvpx.

    Testing
    So it falls to FATE to test this on the ridiculous diversity of platforms that FFmpeg supports. I staged individual test specs for each of the 17 test vectors: vp8-test-vector-001 ... vp8-test-vector-017. After the samples have propagated through to the various FATE installations, I’ll activate the 14 test specs that are currently passing.
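
    For local spot-checking, a single spec can be exercised through FFmpeg’s make-based FATE harness, assuming the specs above are wired into it and the sample suite has been synced locally; the target name and sample path below are illustrative, not confirmed:

    # Run one staged spec against a locally synced sample suite (hypothetical target name).
    $ make fate-vp8-test-vector-001 SAMPLES=/path/to/fate-suite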

    Initial Testing Methodology
    Inspired by Ronald Bultje’s idea, I built the latest FFmpeg-SVN with libvpx enabled. Then I selected between the reference and native decoders as such:

    # Reference pass: force the libvpx decoder and record per-frame MD5 hashes.
    $ for i in 001 002 003 004 005 006 007 008 009 \
     010 011 012 013 014 015 016 017
    do
      echo vp80-00-comprehensive-$i.ivf
      ffmpeg -vcodec libvpx -i \
        /path/to/vp8-test-vectors-r1/vp80-00-comprehensive-$i.ivf \
        -f framemd5 - 2> /dev/null
    done > refs.txt
    

    # Native pass: decode the same vectors with FFmpeg's new built-in vp8 decoder.
    $ for i in 001 002 003 004 005 006 007 008 009 \
     010 011 012 013 014 015 016 017
    do
      echo vp80-00-comprehensive-$i.ivf
      ffmpeg -vcodec vp8 -i \
        /path/to/vp8-test-vectors-r1/vp80-00-comprehensive-$i.ivf \
        -f framemd5 - 2> /dev/null
    done > native.txt

    $ diff -u refs.txt native.txt

    That reveals precisely which files differ.

  • Multiple live video outputs advice. Live stream/Record/Preview, FFMPEG, Windows, Decklink [closed]

    18 September 2024, by stroltz

    I am looking for advice on how best to achieve multiple live video outputs.


    The live source is a Decklink card on Windows (we have an ffmpeg build working to access the card). We want 4 outputs:

    • We want to run a preview window (low quality would be preferred) just so the user can see the video is working.
    • We want to be able to live stream - single bit rate, RTMP (goes up to a CDN).
    • Independent from the streaming, we want to be able to stop and start recording to file, ideally using CRF. So a separate encode – but maybe we use the RTMP encode, not sure, and do 1 x encode only.
    • We also want to save a separate audio file. It stops and starts at the same time as the video file above (if required we could do this as a post-process on the video file we make above).

    We want to keep CPU use as reasonable as possible (so no high-end hardware).

    We have had a suggestion of doing this with ffmpeg:

    Input >> ffmpeg


    • split input to main and monitoring
    • scale monitoring stream to lower resolution
    • encode both streams
    • provide both outputs to local streaming server

    ffmpeg >> local streaming server

    • use API to start and stop recordings (or web console, if you do it manually)
    • provide streams to CDN or/and provide access to your streams for end users

    recorded files >> another ffmpeg (controlled by some script that gets a RECORDING COMPLETED event to start the ffmpeg process)

    • extract audio from recorded file
    • save audio into file

    Which sounds possible, but if doing that, which local streaming server would work best (open source, API, ...)?

    Or we are open to other ideas as to the best way.

    https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs shows lots of ways, but I don't think you get to control the individual outputs independently.

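    For what it’s worth, a minimal sketch of the split/scale/encode stage suggested above, as a single ffmpeg command that pushes a full-quality feed and a low-resolution preview to a local RTMP server. The Decklink device name, RTMP URLs and encoder settings are placeholders, not taken from the question, and the recording and audio-extraction steps would still be driven separately (for example via the streaming server’s API or a second ffmpeg):

    # One Decklink input, split into a full-quality program feed and a scaled preview,
    # both sent to a local RTMP ingest (device name and URLs are assumptions).
    $ ffmpeg -f decklink -i "DeckLink Mini Recorder" \
        -filter_complex "[0:v]split=2[main][mon];[mon]scale=640:-2[preview]" \
        -map "[main]" -map 0:a -c:v libx264 -preset veryfast -b:v 4M -c:a aac \
          -f flv rtmp://127.0.0.1/live/main \
        -map "[preview]" -an -c:v libx264 -preset ultrafast -b:v 500k \
          -f flv rtmp://127.0.0.1/live/preview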

  • How do I send a mediaStream from the electron renderer process to a background ffmpeg process ?

    26 July 2020, by Samamoma_Vadakopa

    Goal (to avoid the XY problem):

    I'm building a small linux desktop application using webRTC, electron, and create-react-app. The application should receive a mediaStream via a webRTC peer connection, display the stream to the user, create a virtual webcam device, and send the stream to the virtual webcam so it can be selected as the input on most major videoconferencing platforms.

    Problem:

    The individual parts all work: receiving the stream (webRTC), creating the webcam device (v4l2loopback), creating a child process of ffmpeg from within electron, passing the video stream to the ffmpeg process, streaming the video to the virtual device using ffmpeg, and selecting the virtual device and seeing the video stream in a videoconference meeting.

    But I'm currently stuck on tying the parts together. The problem is, the mediaStream object is available inside electron's renderer process (as state in a deeply nested react component, FWIW). As far as I can tell, I can only create a node.js child process of ffmpeg from within electron's main process. That implies that I need to get the mediaStream from the renderer to the main process. To communicate between processes, electron uses an IPC system. Unfortunately, it seems that IPC doesn't support sending a complex object like a video stream.

    What I've tried:

    • Starting an ffmpeg child process (using child_process.spawn) from within the renderer process throws a 'fs.fileexistssync' error. Browsing SO indicates that only the main process can start these background processes.
    • Creating a separate webRTC connection between renderer and main to re-stream the video. I'm using IPC to facilitate the connection, but offer/answer descriptions aren't reaching the other peer over IPC - my guess is this is due to the same limitations on IPC as before.

    My next step is to create a separate node server on app startup which ingests the incoming RTC stream and rebroadcasts it to the app's renderer process, as well as to a background ffmpeg process.

    Before I try that, though, does anyone have suggestions for approaches I should consider? (this is my first SO question, so any advice on how to improve it is appreciated).
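
    Whichever transport ends up carrying the frames out of the renderer, the receiving end of the background process can stay quite simple: an ffmpeg child that reads raw frames from stdin and writes them to the v4l2loopback device. A minimal sketch, assuming 1280x720 yuv420p frames at 30 fps and /dev/video10 as the virtual camera (all of these values are assumptions, not taken from the question):

    # Read raw frames from stdin and push them to the virtual webcam
    # (resolution, pixel format, frame rate and device path are assumptions).
    $ ffmpeg -f rawvideo -pix_fmt yuv420p -video_size 1280x720 -framerate 30 \
        -i - -f v4l2 /dev/video10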
