Advanced search

Media (1)

Keyword: - Tags -/Rennes

Other articles (80)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the ergonomics of multiple-select fields. See the two images below for a comparison.
    All it takes is to activate the Chosen plugin (Site general configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is intended for managing sites that publish documents of all types online.
    It creates "media" items, namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a "media" article;

  • Helping to translate it

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    At present MediaSPIP is only available in French and (...)

On other sites (9889)

  • Recommendations for real-time pixel-level analysis of television (TV) video

    6 December 2011, by Randall Cook

    [Note: This is a rewrite of an earlier question that was considered inappropriate and closed.]

    I need to do some pixel-level analysis of television (TV) video. The exact nature of this analysis is not pertinent, but it basically involves looking at every pixel of every frame of TV video, starting from an MPEG-2 transport stream. The host platform will be server-class, multiprocessor 64-bit Linux machines.

    I need a library that can handle the decoding of the transport stream and present me with the image data in real time. OpenCV and ffmpeg are two libraries that I am considering for this work. OpenCV is appealing because I have heard it has easy-to-use APIs and rich image analysis support, but I have no experience using it. I have used ffmpeg in the past for extracting video frame data from files for analysis, but it lacks image analysis support (though Intel's IPP can supplement it). A minimal sketch of such a decode loop follows the questions below.

    In addition to general recommendations for approaches to this problem (excluding the actual image analysis), I have some more specific questions that would help me get started:

    1. Are ffmpeg or OpenCV commonly used in industry as a foundation for real-time
       video analysis, or is there something else I should be looking at?
    2. Can OpenCV decode video frames in real time, and still leave enough
       CPU left over to do nontrivial image analysis, also in real time?
    3. Is it sufficient to use ffmpeg for MPEG-2 transport stream decoding, or
       is it preferable to just use an MPEG-2 decoding library directly (and if so, which one)?
    4. Are there particular pixel formats for the output frames (RGB, YUV, YUV422,
       etc.) that ffmpeg or OpenCV is particularly efficient at producing?
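
    As a point of reference, here is the minimal decode-loop sketch mentioned above, assuming OpenCV is built with its FFmpeg backend so that VideoCapture can demux and decode the transport stream directly; the input name and the analysis step are placeholders:

    # Minimal sketch: per-pixel access to every frame of an MPEG-2 transport
    # stream through OpenCV's FFmpeg-backed VideoCapture. "capture.ts" is a
    # placeholder input; the mean is a stand-in for real pixel-level analysis.
    import cv2

    cap = cv2.VideoCapture("capture.ts")   # FFmpeg backend demuxes and decodes
    while True:
        ok, frame = cap.read()             # frame: H x W x 3 BGR numpy array
        if not ok:
            break
        mean_intensity = frame.mean()      # stand-in for the actual analysis
    cap.release()
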
  • Any advice on streaming av1 with gstreamer to mediamtx and webrtcbin?

    8 February 2024, by Israel Robotnick

    I have gstreamer 1.22.7 with the RS plugin for av1 support.
    I'm trying to stream AV1 over RTP to mediamtx using gstreamer, but the bigger goal is for my rtspsrc->webrtcbin pipeline to work with av1 as it does with h264\vp8\vp9.

    I've created a few AV1 files with ffmpeg using the svtav1 and rav1e encoders:

    ffmpeg -i h264.mp4 -an -c:v libsvtav1 -preset 5 -crf 30 -g 60 -svtav1-params tune=0:fast-decode=1 -pix_fmt yuv420p test1.mp4

    ffmpeg -i h264.mp4 -an -c:v librav1e -preset 5 -crf 30 -g 60 -rav1e-params speed=5:low_latency=true -pix_fmt yuv420p test2.mp4

    ffmpeg does not currently support AV1 streaming over rtp\rtsp, so I'm using gstreamer to do so:

    gst-launch-1.0 filesrc location=test1.mp4 ! qtdemux ! av1parse ! rtspclientsink location=rtsp://127.0.0.1:8554/test1

    From what I've read, mediaMTX\chrome\VLC in their latest versions support av1 streaming over webrtc\rtsp, but there are no examples whatsoever of how to do so.

    GStreamer prerolls, goes to playing and reports recording when publishing. Everything seems to be fine. Same in the mediamtx logs.

    When I try to connect a client to the rtsp path via VLC\FFplay\a gstreamer rtspsrc->webrtcbin pipeline, I don't get any image (webrtc internals show that packets arrive fine, but VLC\ffmpeg can't connect).

    Any ideas what could be wrong? Does anyone have experience with encoding and streaming AV1 with gstreamer's rtspclientsink?
    If you have any tips on redirecting it to webrtcbin (what I do is rtspsrc...parsebin ! queue ! rtpav1pay ! webrtcbin, which seems to connect to chrome and create the av1 sdp, but there is no image), I would appreciate them as well ( :
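
    One way to narrow this down (a diagnostic sketch, not a verified fix): decode the published RTSP path in a plain playback pipeline, bypassing webrtcbin, so a broken publish side can be told apart from a broken WebRTC side. This assumes the gst-plugins-rs AV1 elements mentioned above (rtpav1depay, dav1ddec) are installed, and the URL matches the publish command:

    # Sketch: play back the AV1 RTSP path directly via GStreamer's Python
    # bindings (PyGObject). If no image appears here either, the problem is
    # on the publish side rather than in webrtcbin.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)
    pipeline = Gst.parse_launch(
        "rtspsrc location=rtsp://127.0.0.1:8554/test1 latency=200 "
        "! rtpav1depay ! av1parse ! dav1ddec ! videoconvert ! autovideosink"
    )
    pipeline.set_state(Gst.State.PLAYING)

    # run until an error or end-of-stream, then clean up
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.ERROR | Gst.MessageType.EOS)
    pipeline.set_state(Gst.State.NULL)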

  • ffmpeg-python: combine live video/audio streams into one file

    3 April 2021, by Greendrake

    I am using python-dvr to pull raw video/audio streams from IP cameras. It parses the binary data into chunks of video and audio and writes them to separate files like this:

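    # note: "cam" below is an already-connected python-dvr camera object; its setup is omitted in the post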
    with open("file.video", "wb") as v, open("file.audio", "wb") as a:
        def receiver(frame, meta, user):
            if 'frame' in meta:
                v.write(frame)
            if 'type' in meta and meta["type"] == "g711a":
                a.write(frame)

        cam.start_monitor(receiver)


    I could then use the ffmpeg binary to combine the two files into one. But I want the script to do it straight away, continuously (splitting the combined stream into, say, 10-minute clips, but that would be a separate question).

    It looks like ffmpeg-python's ffmpeg.output could do it. But with virtually no experience in Python, I can't immediately get my head around it. The syntax goes:

    ffmpeg.output(stream1[, stream2, stream3…], filename, **ffmpeg_args)


    How do I use that in the code above? I do not have "streams" as such there. Instead, the receiver function is called in a loop with frames/chunks of binary data, which could be video or audio. How do I "pipe" them into the ffmpeg.output function?
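
    One possible shape for this (a sketch under explicit assumptions, not a tested answer): since the chunks are raw bytes rather than ffmpeg-python streams, expose them as named pipes and hand those to ffmpeg.output as two inputs. The format flags (h264, alaw/8000/mono) are guesses about what the camera actually sends, and cam/receiver are the objects from the post above:

    # Sketch: mux two live byte streams continuously by letting ffmpeg read
    # them from named pipes. Assumes Linux and that the paths do not already
    # exist as regular files.
    import os
    import ffmpeg  # the ffmpeg-python package

    os.mkfifo("file.video")
    os.mkfifo("file.audio")

    v_in = ffmpeg.input("file.video", f="h264")                 # assumed video codec
    a_in = ffmpeg.input("file.audio", f="alaw", ar=8000, ac=1)  # G.711 A-law
    # mux both inputs into one file; ffmpeg runs in the background
    ffmpeg.output(v_in, a_in, "combined.mp4", vcodec="copy", acodec="aac").run_async()

    v = open("file.video", "wb")  # blocks until ffmpeg opens the reading end
    a = open("file.audio", "wb")

    def receiver(frame, meta, user):
        # same dispatch as before, writing into the pipes instead of plain files
        if "frame" in meta:
            v.write(frame)
        if meta.get("type") == "g711a":
            a.write(frame)

    cam.start_monitor(receiver)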