
Other articles (65)

  • Videos

    21 April 2011, by

    As with "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 video tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name no names) and that each browser natively supports only certain video formats.
    Its main advantage, on the other hand, is native video support in browsers, which means Flash can be dispensed with and (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

On other sites (11789)

  • How can I parse ffprobe output and run ffmpeg depending on the result?

    14 September 2019, by Eli Greenberg

    I have had incredible trouble building a binary of ffmpeg for Mac that works correctly for all of my needs. I have an older build that works great remuxing h264 video without problems but lacks a library I need, namely libspeex. I built a newer build based on ffmpeg's git that includes libspeex but crashes when trying to remux h264 from .flv files with bad timecodes (live dumps from rtmpdump). So I have two ffmpeg binaries that each do half of what I need. This is what I have as my current .command file:

    for f in ~/Desktop/Uploads/*.flv
    do
        # Remux to .mp4 (copy video, re-encode audio to AAC); on success trash the
        # source .flv, otherwise trash the partial .mp4
        /usr/local/bin/ffmpeg -i "$f" -vcodec copy -acodec libfaac -ab 128k -ar 48000 -async 1 "${f%.*}".mp4 && rmtrash "$f" || rmtrash "${f%.*}".mp4
    done

    This ffmpeg binary has libspeex included so it can decode speex audio in the .flv input files. What I'm looking to do is something like this pseudocode:

    for f in ~/Desktop/Uploads/*.flv
    do
        ffprobe input.flv
        if Stream #0:1 contains speex
            ffmpeg-speex -i input.flv -acodec copy -async 1 output.m4a
        fi
        ffmpeg-h264 -i input.flv -vcodec copy output.mp4
        MP4Box -add output.mp4 -add output.m4a finaloutput.mp4
    done

    Is something like this possible? Are there any alternatives?
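    One possible shape for this, sketched in Python rather than shell and not a tested answer: the ffmpeg-speex and ffmpeg-h264 names come from the pseudocode above, the libfaac settings are copied from the .command file, and the -vn/-an split and output names are assumptions. The idea is simply to read the audio codec name with ffprobe and branch on it:

import glob
import os
import subprocess

for f in glob.glob(os.path.expanduser('~/Desktop/Uploads/*.flv')):
    base = os.path.splitext(f)[0]
    # Ask ffprobe for the codec name of the first audio stream only
    acodec = subprocess.check_output([
        'ffprobe', '-v', 'error', '-select_streams', 'a:0',
        '-show_entries', 'stream=codec_name',
        '-of', 'default=noprint_wrappers=1:nokey=1', f
    ]).decode().strip()
    if acodec == 'speex':
        # The speex-capable build only transcodes the audio
        # (assumes that build also includes the libfaac encoder)...
        subprocess.check_call(['ffmpeg-speex', '-i', f, '-vn',
                               '-acodec', 'libfaac', '-ab', '128k', base + '.m4a'])
        # ...the older, known-good build remuxes the h264 video...
        subprocess.check_call(['ffmpeg-h264', '-i', f, '-an',
                               '-vcodec', 'copy', base + '.mp4'])
        # ...and MP4Box joins the two results.
        subprocess.check_call(['MP4Box', '-add', base + '.mp4',
                               '-add', base + '.m4a', base + '-final.mp4'])
    else:
        # Non-speex audio: the older build can remux video and re-encode audio in one pass.
        subprocess.check_call(['ffmpeg-h264', '-i', f, '-vcodec', 'copy',
                               '-acodec', 'libfaac', '-ab', '128k', base + '.mp4'])

    The same branching could of course be done inside the existing shell loop by capturing ffprobe's output into a variable.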

  • pyav / ffmpeg / libav receiving too many keyframes

    26 May 2021, by user1315621

    I am streaming from an RTSP source. It looks like half of the frames received are key frames. Is there a way to reduce this percentage and get a higher proportion of P-frames and B-frames? If possible, I would like to increase the number of P-frames (rather than B-frames).
    I am using pyav, which is a Python wrapper for libav (ffmpeg).

    Code:
import av

# Open the RTSP stream over TCP with a 5 s socket timeout and 5 s max demux delay
container = av.open(
    url, 'r',
    options={
        'rtsp_transport': 'tcp',
        'stimeout': '5000000',
        'max_delay': '5000000',
    }
)
stream = container.streams.video[0]
codec_context = stream.codec_context
codec_context.export_mvs = True
codec_context.gop_size = 25

# Demux the video stream and report whether each decoded frame is a key frame
for packet in container.demux(video=0):
    for video_frame in packet.decode():
        print(video_frame.is_key_frame)

    Output:
True
False
True
False
...

    Note 1: I can't edit the source; I can only edit the code used to stream the video.

    Note 2: the same solution should apply to pyav, libav, and ffmpeg.

    Edit: it seems that B-frames are disabled: codec_context.has_b_frames is False.
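    As a way to quantify how often key frames actually arrive, a minimal sketch along the same lines as the code above (the URL is a placeholder, exactly as in the question) could count the decoded frames between consecutive key frames:

import av

url = 'rtsp://example.invalid/stream'  # placeholder for the real RTSP source
container = av.open(url, 'r', options={'rtsp_transport': 'tcp'})

# Count decoded frames between consecutive key frames to estimate the effective GOP length
gop = 0
for packet in container.demux(video=0):
    for frame in packet.decode():
        if frame.is_key_frame and gop:
            print('frames since previous key frame:', gop)
            gop = 0
        gop += 1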

    


  • How can I stream raw video frames AND audio to FFMPEG with Python 2.7?

    18 November 2017, by Just Askin

    I am streaming raw video frames from Pygame to FFMPEG, then sending to an RTMP stream, but for the life of me, I can't figure out how to send live audio using the same Python module. It does not need to be the Pygame mixer, but I am not opposed to using it if that is where the best answer lies. I'm pretty sure it's not though.

    My question is this: What is the best strategy to send live audio output from a program to FFMPEG along with raw video frames simultaneously from the same Python module?

    My program is large, and eventually I would like to build options to switch audio inputs from a queue of music, a microphone, or any other random sounds from any program I want to use. But for the time being, I just want something to work. I am starting off with a simple Espeak command.

    Here are my Python commands:

    command = ['ffmpeg', '-re', '-framerate', '22', '-s', '1280x720', '-pix_fmt', 'rgba', '-f', 'rawvideo', '-i', '-', '-f', 's16le', '-ar', '22500', '-i', '/tmp/audio', '-preset', 'ultrafast', '-pix_fmt', 'rgba', '-b:v', '2500', '-s', 'hd720', '-r', '25', '-g', '50', '-crf', '20', '-f', 'flv', 'rtmp://xxx']

    pipe = sp.Popen(command, stdin=sp.PIPE)

    Then I send my frames to stdin from within my main while True: loop.

    The problem I run into with this strategy is I can’t figure out how to shove audio into FFMPEG from within Python without blocking the pipe. After hours of research, I am pretty confident I can’t use the pipe to send the audio along with the frames. I thought the named pipe was my solution (which works running Espeak outside of Python), but it blocks Python until the Espeak is done... so no good.

    I assume I need threading or multiprocessing, but I cannot figure out from the official documentation or any other resources how to solve my problem with it.

    The ['-f', 's16le', '-ar', '22500', '-i', '/tmp/audio'] are settings that work if I run espeak from a separate terminal with espeak 'some text' --stdout > /tmp/audio.

    I am using Centos 7, Python 2.7, pygame, the latest build of FFMPEG (...)
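    One possible shape for this, as a sketch only and not a tested solution (get_rgba_frame() is a hypothetical stand-in for however the raw Pygame surface bytes are obtained, and the ffmpeg options are abbreviated from the command above), is to let a worker thread feed the named pipe so that espeak blocks the thread rather than the main loop:

import os
import subprocess
import threading

FIFO = '/tmp/audio'
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

# Abbreviated ffmpeg command: raw RGBA frames on stdin, s16le audio from the FIFO
command = ['ffmpeg', '-re',
           '-f', 'rawvideo', '-pix_fmt', 'rgba', '-s', '1280x720', '-framerate', '22', '-i', '-',
           '-f', 's16le', '-ar', '22500', '-i', FIFO,
           '-vcodec', 'libx264', '-acodec', 'aac', '-f', 'flv', 'rtmp://xxx']
pipe = subprocess.Popen(command, stdin=subprocess.PIPE)

def speak(text):
    # Opening the FIFO for writing blocks until ffmpeg opens it for reading, and
    # espeak blocks until it finishes speaking; both happen in this worker thread.
    with open(FIFO, 'wb') as audio:
        subprocess.call(['espeak', text, '--stdout'], stdout=audio)

threading.Thread(target=speak, args=('some text',)).start()

while True:
    frame = get_rgba_frame()  # hypothetical: raw RGBA bytes for one 1280x720 frame
    pipe.stdin.write(frame)

    Any other audio source (a music queue, a microphone capture) could be swapped in the same way, by having it write into the FIFO from its own thread.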