Other articles (53)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (7161)

  • How To Extract The Duration From A Webm Using FFprobe

    8 December 2018, by Sixtoo

    I have the following code, which stores the duration of a video file in a variable called duration:

    for /f %%i in ('ffprobe -select_streams v:0 -show_entries stream^=duration -of default^=noprint_wrappers^=1:nokey^=1 input.avi') do set duration=%%i

    However, when I try to get the duration of a .webm file I get N/A. I used the answer here (How to determine webm duration using ffprobe), which helped me see the duration of a webm when running ffprobe directly. But for some reason, although I can see the duration in ffprobe's output, I cannot manage to store it in the variable.

    Please help me with this. Thank you


    Here is the command and output:

    Command:

    for /f %%i in ('ffprobe -select_streams v:0 -show_entries stream^=duration -of default^=noprint_wrappers^=1:nokey^=1 webm_copy.webm') do set duration=%%i

    echo %duration%

    Output:

    D:\SOFTWARE\ffmpeg\bin\test\go>for /F %i in ('ffprobe -select_streams v:0 -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 webm_copy.webm') do set duration=%i
    ffprobe version N-80066-g566be4f Copyright (c) 2007-2016 the FFmpeg developers
     built with gcc 5.3.0 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmfx --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
     libavutil      55. 24.100 / 55. 24.100
     libavcodec     57. 43.100 / 57. 43.100
     libavformat    57. 37.100 / 57. 37.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 46.100 /  6. 46.100
     libswscale      4.  1.100 /  4.  1.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    Input #0, matroska,webm, from 'webm_copy.webm':
     Metadata:
       encoder         : Lavf57.37.100
     Duration: 00:01:31.44, start: 0.000000, bitrate: 278 kb/s
       Stream #0:0: Video: vp8, yuv420p, 480x360, SAR 1:1 DAR 4:3, 29.97 fps, 29.97 tbr, 1k tbn (default)
       Stream #0:1: Audio: vorbis, 44100 Hz, stereo, fltp (default)

    D:\SOFTWARE\ffmpeg\bin\test\go>set duration=N/A

    D:\SOFTWARE\ffmpeg\bin\test\go>echo N/A
    N/A
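
    A possible fix, sketched here rather than taken from the thread: this WebM carries no per-stream duration (hence the N/A from stream=duration), but the container-level duration is present, so querying format=duration instead of stream=duration in the same for /f command should populate the variable. A minimal Python sketch of that probe, assuming ffprobe is on the PATH and webm_copy.webm is in the current directory:

    import subprocess

    def probe_duration(path):
        # Query the container-level duration (format=duration); WebM streams often
        # carry no per-stream duration, which is why stream=duration prints N/A
        # even though the banner shows Duration: 00:01:31.44.
        out = subprocess.run(
            ["ffprobe", "-v", "error",
             "-show_entries", "format=duration",
             "-of", "default=noprint_wrappers=1:nokey=1",
             path],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return float(out)

    print(probe_duration("webm_copy.webm"))   # expected to print roughly 91.44
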
  • libzvbi-teletextdec : split dvb packet to slices

    1 March 2014, by Marton Balint
    libzvbi-teletextdec : split dvb packet to slices
    

    Instead of using the demux function of libzvbi to split the packet into slices
    (vbi lines), let's do it ourselves.

    - eliminates the 1 frame delay between page input and output
    - handles non-ascending line numbers more gracefully
    - enables us to return error codes on some invalid packets instead of silently
    ignoring them

    Signed-off-by: Marton Balint <cus@passwd.hu>

    • [DH] libavcodec/libzvbi-teletextdec.c
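
    For orientation only, a rough sketch (in Python, not the actual C patch) of the per-data-unit split the commit describes, assuming the usual DVB teletext data unit layout: a one-byte data_unit_id, a one-byte data_unit_length, then a 44-byte field holding the parity/line-offset byte, the framing code, the packet address and 40 bytes of teletext data:

    TELETEXT_NON_SUBTITLE = 0x02   # EBU Teletext non-subtitle data
    TELETEXT_SUBTITLE = 0x03       # EBU Teletext subtitle data

    def split_data_units(payload):
        # payload: the PES data field, starting with the one-byte data_identifier.
        pos = 1                                  # skip data_identifier
        while pos + 2 <= len(payload):
            unit_id = payload[pos]
            unit_len = payload[pos + 1]          # normally 0x2C (44 bytes)
            data = payload[pos + 2:pos + 2 + unit_len]
            pos += 2 + unit_len
            if unit_id not in (TELETEXT_NON_SUBTITLE, TELETEXT_SUBTITLE):
                continue                         # stuffing or other data units
            if len(data) < 44:
                # Invalid unit: report it rather than silently ignoring it.
                raise ValueError("truncated teletext data unit")
            field_parity = (data[0] >> 5) & 1
            line_offset = data[0] & 0x1F         # VBI line number within the field
            yield field_parity, line_offset, data[4:44]   # 40 bytes of teletext data
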
  • RTP and H.264 (Packetization Mode 1)... Decoding RAW Data... Help understanding the audio and STAP-A packets

    12 February 2014, by Lane

    I am attempting to re-create a video from a Wireshark capture. I have researched extensively and the following links provided me with the most useful information...

    How to convert H.264 UDP packets to playable media stream or file (defragmentation) (and the 2 sub-links)
    H.264 over RTP - Identify SPS and PPS Frames

    ...I understand from these links and the RFC (RTP Payload Format for H.264 Video) that...

    • The Wireshark capture shows a client communicating with a server via RTSP/RTP by making the following calls... OPTIONS, DESCRIBE, SETUP, SETUP, then PLAY (both audio and video tracks exist)

    • The RTSP response from PLAY (that contains the Sequence and Picture Parameter Sets) contains the following (some lines excluded)...

    Media Description, name and address (m): audio 0 RTP/AVP 0
    Media Attribute (a): rtpmap:0 PCMU/8000/1
    Media Attribute (a): control:trackID=1
    Media Attribute (a): x-bufferdelay:0

    Media Description, name and address (m): video 0 RTP/AVP 98
    Media Attribute (a): rtpmap:98 H264/90000
    Media Attribute (a): control:trackID=2
    Media Attribute (a): fmtp:98 packetization-mode=1;profile-level-id=4D0028;sprop-parameter-sets=J00AKI2NYCgC3YC1AQEBQAAA+kAAOpg6GAC3IAAzgC7y40MAFuQABnAF3lwWNF3A,KO48gA==

    Media Description, name and address (m): metadata 0 RTP/AVP 100
    Media Attribute (a): rtpmap:100 IQ-METADATA/90000
    Media Attribute (a): control:trackID=3

    ...the packetization-mode=1 means that only NAL Units, STAP-A and FU-A are accepted

    • The streaming RTP packets (video only, DynamicRTP-Type-98) arrive in the following order...

    1x
    [RTP Header]
    0x78 0x00 (Type is 24, meaning STAP-A)
    [Remaining Payload]

    36x
    [RTP Header]
    0x7c (Type is 28, meaning FU-A) then either 0x85 (first) 0x05 (middle) or 0x45 (last)
    [Remaining Payload]

    1x
    [RTP Header]
    0x18 0x00 (Type is 24, meaning STAP-A)
    [Remaining Payload]

    8x
    [RTP Header]
    0x5c (Type is 28, meaning FU-A) then either 0x81 (first) 0x01 (middle) or 0x41 (last)
    [Remaining Payload]

    ...the cycle then repeats... typically there are 29 0x18/0x5c RTP packets for each 0x78/0x7c packet

    • Approximately every 100 packets there is an audio RTP packet; all have their Marker set to true and their sequence numbers ascend as expected. Sometimes there is a single audio RTP packet and sometimes there are three; see a sample one here...

    RTP 1042 PT=ITU-T G.711 PCMU, SSRC=0x238E1F29, Seq=31957, Time=1025208762, Mark

    ...also, the type of each audio RTP packet is different (as far as first bytes go... I see 0x4e, 0x55, 0xc5, 0xc1, 0xbc, 0x3c, 0x4d, 0x5f, 0xcc, 0xce, 0xdc, 0x3e, 0xbf, 0x43, 0xc9, and more)

    • From what I gather... to re-create the video, I first need to create a file of the format

    0x000001 [SPS Payload]
    0x000001 [PPS Payload]
    0x000001 [Complete H.264 frame (NAL header byte, followed by all fragmented RTP payloads without their first 2 bytes)]
    0x000001 [Next Frame]
    Etc...

    I have made some progress: I can run "ffmpeg -i file" without it reporting a bad input format or failing to find a codec, but currently it complains about something related to MP3. My questions are as follows...

    1. Should I be using the SPS and PPS payloads returned by the response to the DESCRIBE RTSP call, or the data sent in the first STAP-A RTP packets (0x78 and 0x18)?

    2. How does the file format change to incorporate the audio track?

    3. Why are the audio track payload headers all over the place, and how can I make sense of / utilize them?

    4. Is my understanding of anything incorrect?

    Any help is GREATLY appreciated, thanks!
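
    A rough sketch of the reconstruction step described above ("to re-create the video, I first need to create a file of the format..."), offered as an illustration rather than a definitive answer. It assumes the video RTP payloads (the bytes after each RTP header) have already been extracted from the capture in sequence order into a hypothetical payloads list, and it writes an Annex B stream with start codes, unpacking STAP-A packets and de-fragmenting FU-A packets per RFC 6184. The SPROP value is the one from the SDP shown earlier; audio is not handled, since muxing the G.711 track would need a proper container.

    import base64

    START = b"\x00\x00\x00\x01"
    SPROP = "J00AKI2NYCgC3YC1AQEBQAAA+kAAOpg6GAC3IAAzgC7y40MAFuQABnAF3lwWNF3A,KO48gA=="

    def annexb_from_rtp(payloads):
        out = bytearray()
        for b64 in SPROP.split(","):             # SPS, then PPS, written once up front
            out += START + base64.b64decode(b64)
        fu = None
        for p in payloads:
            nal_type = p[0] & 0x1F
            if 1 <= nal_type <= 23:              # single NAL unit packet
                out += START + p
            elif nal_type == 24:                 # STAP-A: repeated 2-byte size + NALU
                i = 1
                while i + 2 <= len(p):
                    size = int.from_bytes(p[i:i + 2], "big")
                    out += START + p[i + 2:i + 2 + size]
                    i += 2 + size
            elif nal_type == 28:                 # FU-A: fragmented NAL unit
                start, end = p[1] & 0x80, p[1] & 0x40
                if start:
                    # Rebuild the original NAL header: F/NRI bits from the FU
                    # indicator, type from the FU header (0x7c 0x85 -> 0x65, IDR).
                    fu = bytearray([(p[0] & 0xE0) | (p[1] & 0x1F)])
                if fu is not None:
                    fu += p[2:]
                    if end:                      # last fragment: emit the whole NALU
                        out += START + fu
                        fu = None
        return bytes(out)

    # Usage (payloads is a hypothetical, pre-extracted list of RTP payload bytes):
    # with open("video.h264", "wb") as f:
    #     f.write(annexb_from_rtp(payloads))

    The resulting raw stream can then be checked or wrapped with ffmpeg, e.g. ffmpeg -f h264 -i video.h264 -c copy out.mp4; forcing the h264 demuxer avoids format misdetection when probing a raw elementary stream, which may be related to the MP3 complaint mentioned above.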