
Media (91)
-
Chuck D with Fine Arts Militia - No Meaning No
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Paul Westerberg - Looking Up in Heaven
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Le Tigre - Fake French
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Thievery Corporation - DC 3000
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Dan the Automator - Relaxation Spa Treatment
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Gilberto Gil - Oslodum
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (45)
-
Sites built with MediaSPIP
2 May 2011. This page presents a few of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page.
-
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
-
Adding notes and captions to images
7 February 2011. To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights to create, modify and delete notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (5632)
-
What does "dash" (-) mean as an ffmpeg output filename?
26 August 2020, by DDS. I'm trying to use ffmpeg with gnuplot to draw some audio spectra; I'm following this ffmpeg doc link.


Now I'm asking what the "dash" (-) means on this line, right after -f data: it should be a filename, since the last element of an ffmpeg command should be the output file, but I have no file named - in the directory after running the command.

ffmpeg -y -i in.wav -ac 1 -filter:a aresample=8000 -map 0:a -c:a pcm_s16le -f data - | gnuplot -p -e "plot (...)"


I looked in the ffmpeg docs but didn't find anything.
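
For context: in ffmpeg, a lone - in place of the output filename means "write to standard output", which is why no file named - ever appears; the raw samples go straight down the pipe into gnuplot. A minimal sketch of the same idea in Python, capturing the piped PCM data instead of handing it to gnuplot (numpy is an assumed extra dependency; in.wav is taken from the question):

import subprocess
import numpy as np  # assumed available, used only to unpack the samples

# "-f data -" emits raw little-endian 16-bit samples on stdout ("-"),
# exactly what the original command hands to gnuplot through the pipe.
proc = subprocess.run(
    ["ffmpeg", "-y", "-i", "in.wav", "-ac", "1",
     "-filter:a", "aresample=8000", "-map", "0:a",
     "-c:a", "pcm_s16le", "-f", "data", "-"],
    stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, check=True)

samples = np.frombuffer(proc.stdout, dtype="<i2")  # int16, little-endian
print(f"{samples.size} samples read from ffmpeg's stdout")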


-
Algorithm for recording the SegmentTimeline of an MPEG-DASH MPD in shaka packager
25 March 2021, by jgkim0518. I packaged a media stream with MPEG-DASH, transcoding the video at twice the speed and the audio at normal speed.
Here is the command:


ffmpeg -re -stream_loop -1 -i timing_logic.mp4 -c:v hevc_nvenc -filter:v "setpts=2*PTS" -c:a libfdk_aac -map 0:v -map 0:a -f mpegts udp://xxx.xxx.xxx.xxx:xxxx?pkt_size=1316
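
A note on that filter, since it bears on the observation below: setpts=2*PTS doubles each video frame's presentation timestamp, stretching the video timeline to twice its original duration while the audio timeline is untouched. A quick sketch of the arithmetic (the 50 fps value comes from the stream info shown below):

# setpts=2*PTS rescales each video frame's presentation timestamp.
# At a 50 fps source, frame i is originally presented at i/50 seconds;
# after the filter it is presented at 2*i/50 seconds.
fps = 50
for i in (0, 1, 100):
    original_pts = i / fps
    print(f"frame {i}: {original_pts:.2f}s -> {2 * original_pts:.2f}s")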



The source information used here is as follows:


Input #0, mpegts, from '/home/test/timing_logic/timing_logic.mp4':
Duration: 00:00:30.22, start: 1.400000, bitrate: 16171 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: hevc (Main 10) (HEVC / 0x43564548), yuv420p10le(tv, bt709), 3840x2160 [SAR 1:1 DAR 16:9], 50 fps, 50 tbr, 90k tbn, 50 tbc
Stream #0:1[0x101](eng): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, fltp, 128 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (hevc (native) -> hevc (hevc_nvenc))
Stream #0:1 -> #0:1 (mp2 (native) -> aac (libfdk_aac))
Press [q] to stop, [?] for help
frame= 0 fps=0.0 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s speed=N/A
frame= 0 fps=0.0 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s speed=N/A
Output #0, mpegts, to 'udp://xxx.xxx.xxx.xxx:xxxx?pkt_size=1316':
Metadata:
encoder : Lavf58.45.100
Stream #0:0: Video: hevc (hevc_nvenc) (Main 10), p010le, 3840x2160 [SAR 1:1 DAR 16:9], q=-1--1, 2000 kb/s, 50 fps, 90k tbn, 50 tbc
Metadata:
encoder : Lavc58.91.100 hevc_nvenc
Side data:
cpb: bitrate max/min/avg: 0/0/2000000 buffer size: 4000000 vbv_delay: N/A
Stream #0:1(eng): Audio: aac (libfdk_aac), 48000 Hz, stereo, s16, 139 kb/s
Metadata:
encoder : Lavc58.91.100 libfdk_aac



My shaka packager command is:


packager \
'in=udp://xxx.xxx.xxx.xxx:xxxx,stream=video,init_segment=/home/test/timing_logic/package/video/0/video.mp4,segment_template=/home/test/timing_logic/package/video/0/$Time$.m4s' \
'in=udp://xxx.xxx.xxx.xxx:xxxx,stream=audio,init_segment=/home/test/timing_logic/package/audio/0/audio.mp4,segment_template=/home/test/timing_logic/package/audio/0/$Time$.m4a' \
--segment_duration 5 --fragment_duration 5 --minimum_update_period 5 --min_buffer_time 5 \
--preserved_segments_outside_live_window 24 --time_shift_buffer_depth 40 \
--allow_codec_switching --allow_approximate_segment_timeline --log_file_generation_deletion \
--mpd_output $OUTPUT/$output_mpd.mpd &



Looking at the MPD generated as a result of packaging, the audio and video segment durations differ by a factor of two, and so do the segment counts.
The following is the content of the MPD:


<?xml version="1.0" encoding="UTF-8"?>

<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" minBufferTime="PT5S" type="dynamic" publishTime="2021-03-23T10:01:57Z" availabilityStartTime="2021-03-23T09:59:55Z" minimumUpdatePeriod="PT5S" timeShiftBufferDepth="PT40S">
  <Period start="PT0S">
    <AdaptationSet contentType="audio" segmentAlignment="true">
      <Representation bandwidth="140020" codecs="mp4a.40.2" mimeType="audio/mp4" audioSamplingRate="48000">
        <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"/>
        <SegmentTemplate timescale="90000" initialization="/home/test/timing_logic/package/audio/0/audio.mp4" media="/home/test/timing_logic/package/audio/0/$Time$.m4a" startNumber="16">
          <SegmentTimeline>
            <S t="6751436" d="449280"/>
            <S t="7200716" d="451200"/>
            <S t="7651916" d="449280"/>
            <S t="8101196" d="374400"/>
            <S t="8551196" d="449280"/>
            <S t="9000476" d="451200"/>
            <S t="9451674" d="449280"/>
            <S t="9900956" d="449280"/>
            <S t="10350236" d="451200"/>
            <S t="10801434" d="374400"/>
          </SegmentTimeline>
        </SegmentTemplate>
      </Representation>
    </AdaptationSet>
    <AdaptationSet contentType="video" width="3840" height="2160" frameRate="90000/3600" segmentAlignment="true" par="16:9">
      <Representation bandwidth="1520392" codecs="hvc1.2.4.L153" mimeType="video/mp4" sar="1:1">
        <SegmentTemplate timescale="90000" initialization="/home/test/timing_logic/package/video/0/video.mp4" media="/home/test/timing_logic/package/video/0/$Time$.m4s" startNumber="19">
          <SegmentTimeline>
            <S t="16954440" d="900000" r="4"/>
          </SegmentTimeline>
        </SegmentTemplate>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
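
To make that factor of two concrete, the S@t / S@d values can be converted to seconds using the declared timescale="90000". A small sketch, with the values copied from the MPD above:

timescale = 90000  # SegmentTemplate@timescale in the MPD above

# Audio timeline: one <S> entry per segment (t = start, d = duration).
audio_d = [449280, 451200, 449280, 374400, 449280,
           451200, 449280, 449280, 451200, 374400]
print("audio: %d segments, ~%.2f s each, %.2f s total"
      % (len(audio_d), sum(audio_d) / len(audio_d) / timescale,
         sum(audio_d) / timescale))

# Video timeline: <S t="16954440" d="900000" r="4"/> means the entry
# repeats 4 more times, i.e. 5 segments of 900000/90000 = 10 s each.
video_d = [900000] * (4 + 1)
print("video: %d segments, %.0f s each, %.0f s total"
      % (len(video_d), video_d[0] / timescale, sum(video_d) / timescale))

This prints roughly 10 audio segments of ~4.8 s (48.3 s total) against 5 video segments of 10 s (50 s total), matching the observation above.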



When looking at the MPD, it seems that shaka packager checks the PTS or DTS of the TS when creating a segment and recording the contents of the SegmentTimeline.
But I couldn't understand it even after reading the MPEG standard documentation and the DASH-IF documentation.


My question is whether the packager refers to PTS or DTS when creating a segment.
How are SegmentTimeline's S@t and S@d recorded?
What algorithm is the SegmentTimeline recorded with? Please help me. Thank you.
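
As background on the SegmentTimeline syntax itself (this is the generic DASH semantics, not a claim about shaka packager's internal algorithm): S@t is a segment's start time and S@d its duration, both in SegmentTemplate@timescale units on the media timeline; S@r repeats an entry; and each subsequent segment starts at the previous t + d. A sketch that expands a timeline and resolves the $Time$ template names used above:

def expand_timeline(entries, timescale):
    """Expand (t, d, r) SegmentTimeline entries into (start, duration) pairs.

    t and d are in timescale units; r is the number of additional repeats.
    When t is omitted (None), a segment starts where the previous one ended.
    """
    segments, next_t = [], 0
    for t, d, r in entries:
        start = t if t is not None else next_t
        for _ in range(r + 1):
            segments.append((start, d))
            start += d
        next_t = start
    return segments

# Video timeline from the MPD above: <S t="16954440" d="900000" r="4"/>
for start, dur in expand_timeline([(16954440, 900000, 4)], 90000):
    # $Time$ in the media template is replaced by the segment start time,
    # so this also yields the .m4s file names the packager writes.
    print(f"{start}.m4s  start={start/90000:.2f}s  dur={dur/90000:.2f}s")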


-
Real-time watermarking with MPEG-DASH
14 July 2016, by Calvin W. In the system, I want to add a unique watermark (e.g. the client's IP address and a timestamp) to the video that they want to watch.
But when I handled it with OpenCV, it took 25 minutes for a 15-minute video, and I still need to transcode to MP4 with ffmpeg.
Now I'm trying ffmpeg's watermark function, but it still takes some time.
Is it possible to send the video to the client side with MPEG-DASH while transcoding it with ffmpeg?
System spec (Amazon EC2 c3.xlarge):
Intel Xeon E5-2680 v2 (Ivy Bridge) - 4 vCPU
7.5 GB RAM
40 GB SSD
Ubuntu 14.04 LTS
OpenCV 2.4.13
ffmpeg 3.1.1
Code:
import cv2
import sys
import time
from datetime import datetime as dt

# usage: python watermark.py <request_ip> <username> <input_video> <fps>
# frame rate of input video
fps = float(sys.argv[4])
# encode to AVC
fourcc = cv2.cv.CV_FOURCC('A', 'V', 'C', '1')
# transparency of text
alpha = 0.1
beta = 1 - alpha
# input video
cap = cv2.VideoCapture(sys.argv[3])
# current frame index, starting from 0
frameIndex = 0
# frame indices bounding the 10 s - 20 s text window
# (undefined in the original post; inferred from the comment in the loop)
time10 = fps * 10
time20 = fps * 20
# get input video's width/height
width = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))
# configure output (an error occurs when using .mp4)
out = cv2.VideoWriter('output.avi', fourcc, fps, (width, height))
# access time
timeStr = dt.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
requestIP = sys.argv[1]
username = sys.argv[2]
text = "%s %s %s" % (requestIP, username, timeStr)
# start reading the video
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        # decoding failed or end of stream
        break
    # add text between 10 s and 20 s
    if frameIndex > time10 and frameIndex < time20:
        # clone the frame to draw the text on
        overlay = frame.copy()
        cv2.putText(overlay, text, (100, 100), cv2.FONT_HERSHEY_PLAIN, 0.5, (255, 255, 255))
        # blend both frames to make the text transparent
        cv2.addWeighted(overlay, alpha, frame, beta, 0, frame)
    # write frame to output
    out.write(frame)
    frameIndex += 1
    # stop once the last frame has been read
    if cap.get(cv2.cv.CV_CAP_PROP_POS_FRAMES) == cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT):
        break
# end of video: release resources
cap.release()
out.release()
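
For comparison with the ffmpeg watermark route mentioned above, the same overlay can be burned in with the drawtext filter during a single transcode. A hedged sketch (the file names in.mp4/out.mp4 and the watermark values are assumptions; invoked from Python to match the script above):

import subprocess

# Illustrative watermark values; note drawtext treats ':' specially,
# so a real timestamp would need escaping inside the filter string.
text = "203.0.113.7 alice 2016-07-14"

# enable='between(t,10,20)' limits the text to the 10-20 s window and
# alpha=0.1 keeps it faint, mirroring the OpenCV blend in the script.
drawtext = ("drawtext=text='%s':x=100:y=100:"
            "fontcolor=white:alpha=0.1:enable='between(t,10,20)'" % text)
subprocess.run(["ffmpeg", "-y", "-i", "in.mp4", "-vf", drawtext,
                "-c:a", "copy", "out.mp4"], check=True)

Since drawtext runs inside the encode, it avoids the separate OpenCV pass and the extra AVI-to-MP4 transcode.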