Advanced search

Media (0)

Keyword: - Tags -/publication

No media matching your criteria is available on the site.

Other articles (58)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If in doubt, contact the administrator of your MediaSPIP to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and audio both on conventional computers (...)

On other sites (10017)

  • How to convert H264 RTP stream from PCAP to a playable video file

    21 August 2014, by yoosha

    I have captured an H264 stream in PCAP files and I'm trying to create media files from the data. The container is not important (avi, mp4, mkv, …).
    When I use videosnarf or rtpbreak (combined with Python code that adds 00 00 00 01 before each packet) and then ffmpeg, the result is OK only if the input frame rate is constant (or nearly constant). However, when the input is VFR, the result plays too fast (and in some rare cases too slow).
    For example:

    videosnarf -i captured.pcap -c
    ffmpeg -i H264-media-1.264 output.avi

    After investigating the issue, I now believe that since videosnarf (and rtpbreak) remove the RTP header from the packets, the timestamps are lost and ffmpeg treats the input data as CBR.

    1. I would like to know if there is a way to pass the timestamp vector, or any other information, to ffmpeg (in a separate file?) so that the result is created correctly.
    2. Is there any other way I can take the data out of the PCAP file and play it, or convert it and then play it?
    3. Since all the work is done in Python, any suggestion of libraries/modules that can help with the work (even if it requires some coding) is welcome as well (see the sketch below).
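
    One possible direction for points 1 and 3 (a sketch under stated assumptions, not part of the original question): keep the per-packet timestamps while stripping the RTP headers, using dpkt as the script further below already does. The idea is to write an Annex-B .264 stream plus a "timecode format v2" text file (one timestamp in milliseconds per frame) that an external muxer such as mkvmerge can apply afterwards. File names are placeholders, only single-NAL packets are handled, header extensions and timestamp wrap-around are ignored, and the 42-byte offset assumes plain Ethernet/IPv4/UDP framing.

    import struct
    import dpkt

    # Sketch: write an Annex-B stream plus a v2 timestamp file derived from the
    # 90 kHz clock in the fixed 12-byte RTP header. FU-A/STAP-A reassembly is
    # deliberately left out.
    with open("captured.pcap", "rb") as pcap_file, \
         open("stream.264", "wb") as annexb, \
         open("timestamps_v2.txt", "w") as tc:
        tc.write("# timecode format v2\n")
        first_ts = None
        prev_rtp_ts = None
        for ts, data in dpkt.pcap.Reader(pcap_file):
            rtp = data[42:]                       # assumes Ethernet(14)+IPv4(20)+UDP(8)
            if len(rtp) < 13:
                continue
            b0, = struct.unpack(">B", rtp[0:1])   # V/P/X/CC byte
            cc = b0 & 0x0F                        # number of CSRC identifiers
            rtp_ts, = struct.unpack(">I", rtp[4:8])
            payload = rtp[12 + 4 * cc:]           # skip fixed header + CSRC list
            annexb.write(b"\x00\x00\x00\x01" + payload)
            if rtp_ts != prev_rtp_ts:             # new RTP timestamp => new frame
                if first_ts is None:
                    first_ts = rtp_ts
                tc.write("%.3f\n" % ((rtp_ts - first_ts) / 90.0))  # 90 kHz -> ms
                prev_rtp_ts = rtp_ts

    The point of the v2 file is that the muxer can assign each frame its captured presentation time instead of the constant frame rate ffmpeg otherwise assumes for a raw .264 stream.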

    Note: All work is done offline, with no limitations on the output. It can be CBR/VBR, any playable container, with or without transcoding. The only "limitation" I have: it should all run on Linux…

    Thanks
    Y

    Some additional information:
    Since nothing provides ffmpeg with the timestamp data, I decided to try a different approach: skip videosnarf and use Python code to pipe the packets directly to ffmpeg (using "-f rtp -i -"), but then it refuses to accept them unless I provide an SDP file...
    How do I provide the SDP file? Is it an additional input file? ("-i config.sdp")

    The following code is an unsuccessful attempt at doing the above:

    import sys
    import subprocess
    import dpkt

    if len(sys.argv) < 2:
        print "argument required!"
        print "txpcap <pcap file>"
        sys.exit(2)
    pcap_full_path = sys.argv[1]

    # ffmpeg gets the stream description from the SDP and the raw RTP packets on stdin
    ffmp_cmd = ['ffmpeg', '-loglevel', 'debug', '-y', '-i', '109c.sdp',
                '-f', 'rtp', '-i', '-', '-na', '-vcodec', 'copy', 'p.mp4']

    ffmpeg_proc = subprocess.Popen(ffmp_cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE)

    with open(pcap_full_path, "rb") as pcap_file:
        pcapReader = dpkt.pcap.Reader(pcap_file)
        for ts, data in pcapReader:
            if len(data) < 49:
                continue
            # strip Ethernet(14) + IP(20) + UDP(8) headers, keep the RTP packet
            ffmpeg_proc.stdin.write(data[42:])

    sout, err = ffmpeg_proc.communicate()
    print "stdout ---------------------------------------"
    print sout
    print "stderr ---------------------------------------"
    print err

    In general, this pipes the packets from the PCAP file to the following command:

    ffmpeg -loglevel debug -y -i 109c.sdp -f rtp -i - -na -vcodec copy p.mp4

    SDP file: [the RTP stream uses dynamic payload type #109, H264]

    v=0
    o=- 0 0 IN IP4 ::1
    s=No Name
    c=IN IP4 ::1
    t=0 0
    a=tool:libavformat 53.32.100
    m=video 0 RTP/AVP 109
    a=rtpmap:109 H264/90000
    a=fmtp:109 packetization-mode=1;profile-level-id=64000c;sprop-parameter-sets=Z2QADKwkpAeCP6wEQAAAAwBAAAAFI8UKkg==,aMvMsiw=;
    b=AS:200
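
    As a side note (an assumption added here, not part of the original post): the sprop-parameter-sets value above is the base64-encoded SPS/PPS pair. If the raw .264 dump from the earlier videosnarf approach is missing its SPS/PPS, decoders typically report exactly the kind of "could not find codec parameters" error shown in the results below; prepending the decoded parameter sets with start codes is a common fix. A minimal sketch, with hypothetical file names:

    import base64

    # Decode the SPS/PPS from the SDP's sprop-parameter-sets and prepend them,
    # Annex-B style, to the raw H264 stream extracted from the capture.
    sprop = "Z2QADKwkpAeCP6wEQAAAAwBAAAAFI8UKkg==,aMvMsiw="
    with open("stream_with_sps.264", "wb") as out:
        for nal in sprop.split(","):
            out.write(b"\x00\x00\x00\x01" + base64.b64decode(nal))
        with open("stream.264", "rb") as raw:
            out.write(raw.read())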

    Results:

    ffmpeg version 0.10.2 Copyright (c) 2000-2012 the FFmpeg developers
      built on Mar 20 2012 04:34:50 with gcc 4.4.6 20110731 (Red Hat 4.4.6-3)
      configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --enable-shared --enable-runtime-cpudetect --enable-gpl --enable-version3 --enable-postproc --enable-avfilter --enable-pthreads --enable-x11grab --enable-vdpau --disable-avisynth --enable-frei0r --enable-libopencv --enable-libdc1394 --enable-libdirac --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --disable-stripping
      libavutil      51. 35.100 / 51. 35.100
      libavcodec     53. 61.100 / 53. 61.100
      libavformat    53. 32.100 / 53. 32.100
      libavdevice    53.  4.100 / 53.  4.100
      libavfilter     2. 61.100 /  2. 61.100
      libswscale      2.  1.100 /  2.  1.100
      libswresample   0.  6.100 /  0.  6.100
      libpostproc    52.  0.100 / 52.  0.100
    [sdp @ 0x15c0c00] Format sdp probed with size=2048 and score=50
    [sdp @ 0x15c0c00] video codec set to: h264
    [NULL @ 0x15c7240] RTP Packetization Mode: 1
    [NULL @ 0x15c7240] RTP Profile IDC: 64 Profile IOP: 0 Level: c
    [NULL @ 0x15c7240] Extradata set to 0x15c78e0 (size: 36)!
    err{or,}_recognition separate: 1; 1
    [h264 @ 0x15c7240] err{or,}_recognition combined: 1; 10001
    [sdp @ 0x15c0c00] decoding for stream 0 failed
    [sdp @ 0x15c0c00] Could not find codec parameters (Video: h264)
    [sdp @ 0x15c0c00] Estimating duration from bitrate, this may be inaccurate
    109c.sdp: could not find codec parameters
    Traceback (most recent call last):
      File "./ffpipe.py", line 26, in <module>
        ffmpeg_proc.stdin.write(data[42:])
    IOError: [Errno 32] Broken pipe

    (Forgive the mess above; the editor keeps complaining about code that is not indented properly.)

    I've been working on this issue for days... any help/suggestion/hint will be appreciated.
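
    One hedged idea for the piping problem (an addition here, not from the original post): ffmpeg's SDP demuxer normally reads RTP from the UDP port named in the SDP rather than from stdin. So instead of writing to a pipe, the same dpkt loop could replay the packets to a local UDP port with their original capture-time spacing, with ffmpeg started first against an SDP whose m=video line names that port (5004 below is an arbitrary choice, untested against this particular ffmpeg build):

    import socket
    import time
    import dpkt

    # Sketch: replay captured RTP packets over UDP, preserving inter-packet gaps,
    # so a command such as "ffmpeg -i 109c.sdp -vcodec copy p.mp4" (started first,
    # listening on the SDP's port) receives them with realistic timing.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    dest = ("127.0.0.1", 5004)          # must match the port in the SDP's m= line

    with open("captured.pcap", "rb") as pcap_file:
        prev = None
        for ts, data in dpkt.pcap.Reader(pcap_file):
            if len(data) < 49:
                continue
            if prev is not None:
                time.sleep(max(0.0, ts - prev))   # keep the original pacing
            prev = ts
            sock.sendto(data[42:], dest)          # full RTP packet, header included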

  • Create a video with audio only on one channel

    6 February 2018, by Jude Osborn

    I'm trying to create 5 videos that each play audio in only one channel. So, for example, one video plays audio only in the front right, another only in the back left, another only in the center, etc.

    The videos are currently stereo. Can I use ffmpeg to accomplish this?
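
    A possible direction (an illustration added here, not from the original post) is ffmpeg's pan audio filter, which maps input channels onto a named output layout and leaves unmentioned channels silent. A sketch using Python's subprocess, with placeholder file names, producing the "front right only" variant from a stereo source:

    import subprocess

    # Sketch: keep the video untouched and remap the stereo audio into a 5.1
    # layout where only the front-right channel carries sound (both stereo
    # channels are mixed into FR; every other channel stays silent).
    cmd = [
        "ffmpeg", "-i", "input.mp4",
        "-c:v", "copy",
        "-af", "pan=5.1|FR=0.5*c0+0.5*c1",
        "front_right_only.mp4",
    ]
    subprocess.check_call(cmd)

    Repeating this with FC, BL, BR and FL (or whichever channels are needed) would give the other four variants.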

  • Best way of playing mkv video that is being recorded with ffmpeg in web browser

    17 June 2017, by martin49

    I'm looking for a way to play a video that is being recorded with ffmpeg in a web browser, preferably using the HTML5 video tag.

    I have this code in my page; the URL points to a video file (example.mkv) that is being recorded by ffmpeg.

    <video class="video-player" controls="controls" loop="loop">
        <source src="{{asset('example.mkv')}}" type="video/mp4">
        <p>Your browser does not support H.264/MP4.</p>
    </video>

    Google Chrome/Chromium can play MKV video in HTML5, but it only plays the part that had loaded when the page was opened. Is it possible to keep loading the video while it is still being recorded?