Advanced search

Media (1)

Keyword: - Tags - /wave

Other articles (112)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanation of the notable changes made when moving from version 0.1 of MediaSPIP to version 0.3. What is new?
    Regarding software dependencies: the latest versions of FFMpeg (>= v1.2.1) are used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-select fields. See the two images below for a comparison.
    To use it, simply activate the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

On other sites (13846)

  • One liner to create HLS stream

    29 April 2015, by hendry

    IIUC with HLS or DASH, I can create a manifest and serve the segments straight from my httpd, e.g. python -m http.server.

    I have a UVC video feed coming in on /dev/video1 and I’m battling to create a simple m3u8 in either gstreamer or ffmpeg.

    I got as far as:

    gst-launch-1.0 -e v4l2src device=/dev/video1 ! videoconvert ! x264enc ! mpegtsmux ! hlssink max-files=5

    Any ideas?
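
    For comparison, a rough ffmpeg equivalent of the gst-launch pipeline above might look like the sketch below (untested; it assumes a libx264-enabled build, and the output name, segment length and playlist size are arbitrary):

    # untested sketch: capture /dev/video1 via v4l2, encode with x264,
    # and keep a rolling 5-entry HLS playlist, deleting old segments
    ffmpeg -f v4l2 -i /dev/video1 \
      -c:v libx264 -preset veryfast -g 50 \
      -f hls -hls_time 4 -hls_list_size 5 -hls_flags delete_segments \
      stream.m3u8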

  • libav/ffmpeg: where is the data buffer stored?

    1 July 2020, by nguyenthachung

    I'm using libav/ffmpeg to play a DASH livestream.

    


    My case: the player plays normally -> I pause for 3-4 minutes -> I play again -> the player continues playing from where it was paused, without interruption, instead of jumping to real time.

    


    Does libav/ffmpeg cache this data in a buffer, or is it stored somewhere?
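
    One rough way to check whether demuxer-side buffering is involved would be to replay the same stream with buffering minimised and see whether the resume behaviour changes; an untested sketch, with a placeholder manifest URL:

    # untested sketch; replace the URL with the actual DASH manifest
    ffplay -fflags nobuffer -flags low_delay "https://example.com/live.mpd"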

    


  • Algorithm used when recording the SegmentTimeline

    24 March 2021, by jgkim0518

    I packaged a media stream with MPEG-DASH, transcoding the video at twice the speed and the audio at normal speed.
    Here is the command:

    


    ffmpeg -re -stream_loop -1 -i timing_logic.mp4 -c:v hevc_nvenc -filter:v "setpts=2*PTS" -c:a libfdk_aac -map 0:v -map 0:a -f mpegts udp://xxx.xxx.xxx.xxx:xxxx?pkt_size=1316
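
    If the goal is to retime both streams by the same factor, the audio would normally need a matching tempo filter; a hedged variant of the same command (untested, and it assumes the atempo filter is available in this build):

    ffmpeg -re -stream_loop -1 -i timing_logic.mp4 \
      -c:v hevc_nvenc -filter:v "setpts=2*PTS" \
      -c:a libfdk_aac -filter:a "atempo=0.5" \
      -map 0:v -map 0:a -f mpegts udp://xxx.xxx.xxx.xxx:xxxx?pkt_size=1316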


    


    The source information used here is as follows:

    


Input #0, mpegts, from '/home/test/timing_logic/timing_logic.mp4':
  Duration: 00:00:30.22, start: 1.400000, bitrate: 16171 kb/s
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0[0x100]: Video: hevc (Main 10) (HEVC / 0x43564548), yuv420p10le(tv, bt709), 3840x2160 [SAR 1:1 DAR 16:9], 50 fps, 50 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x101](eng): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, fltp, 128 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (hevc (native) -> hevc (hevc_nvenc))
  Stream #0:1 -> #0:1 (mp2 (native) -> aac (libfdk_aac))
Press [q] to stop, [?] for help
frame=    0 fps=0.0 q=0.0 size=       0kB time=-577014:32:22.77 bitrate=  -0.0kbits/s speed=N/A
frame=    0 fps=0.0 q=0.0 size=       0kB time=-577014:32:22.77 bitrate=  -0.0kbits/s speed=N/A
Output #0, mpegts, to 'udp://xxx.xxx.xxx.xxx:xxxx?pkt_size=1316':
  Metadata:
    encoder         : Lavf58.45.100
  Stream #0:0: Video: hevc (hevc_nvenc) (Main 10), p010le, 3840x2160 [SAR 1:1 DAR 16:9], q=-1--1, 2000 kb/s, 50 fps, 90k tbn, 50 tbc
    Metadata:
      encoder         : Lavc58.91.100 hevc_nvenc
    Side data:
      cpb: bitrate max/min/avg: 0/0/2000000 buffer size: 4000000 vbv_delay: N/A
  Stream #0:1(eng): Audio: aac (libfdk_aac), 48000 Hz, stereo, s16, 139 kb/s
    Metadata:
      encoder         : Lavc58.91.100 libfdk_aac


    


    My Shaka Packager command is:

    


    packager \
      'in=udp://xxx.xxx.xxx.xxx:xxxx,stream=video,init_segment=/home/test/timing_logic/package/video/0/video.mp4,segment_template=/home/test/timing_logic/package/video/0/$Time$.m4s' \
      'in=udp://xxx.xxx.xxx.xxx:xxxx,stream=audio,init_segment=/home/test/timing_logic/package/audio/0/audio.mp4,segment_template=/home/test/timing_logic/package/audio/0/$Time$.m4a' \
      --segment_duration 5 --fragment_duration 5 --minimum_update_period 5 --min_buffer_time 5 \
      --preserved_segments_outside_live_window 24 --time_shift_buffer_depth 40 \
      --allow_codec_switching --allow_approximate_segment_timeline --log_file_generation_deletion \
      --mpd_output $OUTPUT/$output_mpd.mpd &


    


    Looking at the MPD generated as a result of packaging, the duration of the audio is twice that of the video, and the number of segments is half.
    The following is the content of the MPD:

    


<?xml version="1.0" encoding="UTF-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" minBufferTime="PT5S" type="dynamic" publishTime="2021-03-23T10:01:57Z" availabilityStartTime="2021-03-23T09:59:55Z" minimumUpdatePeriod="PT5S" timeShiftBufferDepth="PT40S">
  <Period start="PT0S">
    <AdaptationSet contentType="audio" segmentAlignment="true">
      <Representation bandwidth="140020" codecs="mp4a.40.2" mimeType="audio/mp4" audioSamplingRate="48000">
        <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"/>
        <SegmentTemplate timescale="90000" initialization="/home/test/timing_logic/package/audio/0/audio.mp4" media="/home/test/timing_logic/package/audio/0/$Time$.m4a" startNumber="16">
          <SegmentTimeline>
            <S t="6751436" d="449280"/>
            <S t="7200716" d="451200"/>
            <S t="7651916" d="449280"/>
            <S t="8101196" d="374400"/>
            <S t="8551196" d="449280"/>
            <S t="9000476" d="451200"/>
            <S t="9451674" d="449280"/>
            <S t="9900956" d="449280"/>
            <S t="10350236" d="451200"/>
            <S t="10801434" d="374400"/>
          </SegmentTimeline>
        </SegmentTemplate>
      </Representation>
    </AdaptationSet>
    <AdaptationSet contentType="video" width="3840" height="2160" frameRate="90000/3600" segmentAlignment="true" par="16:9">
      <Representation bandwidth="1520392" codecs="hvc1.2.4.L153" mimeType="video/mp4" sar="1:1">
        <SegmentTemplate timescale="90000" initialization="/home/test/timing_logic/package/video/0/video.mp4" media="/home/test/timing_logic/package/video/0/$Time$.m4s" startNumber="19">
          <SegmentTimeline>
            <S t="16954440" d="900000" r="4"/>
          </SegmentTimeline>
        </SegmentTemplate>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
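
    For reference, with timescale="90000" each S@d above corresponds to d/90000 seconds: the audio entries (d="449280", "451200", "374400") are roughly 4.99 s, 5.01 s and 4.16 s, while each video entry (d="900000", repeated r+1 = 5 times) is exactly 10 s; likewise S@t/90000 gives each segment's start time on the media timeline.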


    Looking at the MPD, it seems that Shaka Packager checks the PTS or DTS of the TS stream when it creates a segment and records the contents of the SegmentTimeline. But I could not figure this out even by looking at the MPEG standard documentation and the DASH-IF documentation.


    My question is whether the packager refers to the PTS or the DTS when creating a segment. How are SegmentTimeline's S@t and S@d recorded? What algorithm is the SegmentTimeline recorded with? Please help me. Thank you.
