
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (85)
-
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04
If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...) -
Organizing by category
17 May 2013, by
In MediaSPIP, a section has two names: category and rubrique.
The various documents stored in MediaSPIP can be filed under different categories. A category can be created by clicking "publish a category" in the publish menu at the top right (after logging in). A category can also be placed inside another category, so it is possible to build a tree of categories.
The next time a document is published, the newly created category will be offered (...) -
Retrieving information from the master site when installing an instance
26 November 2010, by
Purpose
On the main site, a shared-hosting instance is defined by several things: the data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, matching an id_auteur in the spip_auteurs table), who will be the only one allowed to definitively create the shared-hosting instance;
It can therefore be quite sensible to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)
On other sites (8998)
-
Real-time watermarking with MPEG-DASH
14 July 2016, by Calvin W.
In the system, I want to add a unique watermark (e.g. the client's IP address and a timestamp) into the video that he/she wants to watch.
But when I handled it with OpenCV, it took 25 minutes for a 15-minute video, and I still need to transcode the result to mp4 with ffmpeg.
Now I'm trying the watermark function of ffmpeg, but it still needs some time.
Is it possible to send the video to the client side with MPEG-DASH while transcoding it with ffmpeg?
System spec (Amazon EC2 c3.xlarge):
Intel Xeon E5-2680 v2 (Ivy Bridge), 4 vCPU
7.5 GB RAM
40 GB SSD
Ubuntu 14.04 LTS
OpenCV 2.4.13
ffmpeg 3.1.1

Code:
import cv2
import sys
import time
from datetime import datetime as dt

# frame rate of the input video
fps = float(sys.argv[4])
# frame indices bounding the 10 s - 20 s window (time10/time20 were
# undefined in the original; derive them from the frame rate)
time10 = int(fps * 10)
time20 = int(fps * 20)
# encode to AVC
fourcc = cv2.cv.CV_FOURCC('A', 'V', 'C', '1')
# transparency of the text overlay
alpha = 0.1
beta = 1 - alpha
# input video
cap = cv2.VideoCapture(sys.argv[3])
# current frame index, starting from 0
frameIndex = 0
# get the input video's width/height
width = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))
# configure output (.mp4 output raised an error, hence .avi)
out = cv2.VideoWriter('output.avi', fourcc, fps, (width, height))
# access time
timeStr = dt.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
requestIP = sys.argv[1]
username = sys.argv[2]
text = "%s %s %s" % (requestIP, username, timeStr)
# read the video frame by frame
while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        # add the text between 10 s and 20 s
        if frameIndex > time10 and frameIndex < time20:
            # clone the frame and draw the text on the copy
            overlay = frame.copy()
            cv2.putText(overlay, text, (100, 100), cv2.FONT_HERSHEY_PLAIN, 0.5, (255, 255, 255))
            # blend the copy back onto the frame to make the text transparent
            cv2.addWeighted(overlay, alpha, frame, beta, 0, frame)
        # write the frame to the output
        out.write(frame)
        frameIndex += 1
    # stop once the last frame has been read
    if cap.get(cv2.cv.CV_CAP_PROP_POS_FRAMES) == cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT):
        break
# end of video: release input and output
cap.release()
out.release()
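
Since drawing text on every frame in Python is slow, one alternative (a hedged sketch, not a tested recipe) is to let ffmpeg both burn the watermark and package the stream: the drawtext filter can overlay the per-client text during the encode, and the dash muxer writes segments plus an MPD while the encode is still running, so a DASH player can start fetching segments before the transcode finishes. The file names, the IP/user/date text, and the white@0.1 transparency below are illustrative assumptions mirroring the OpenCV script above:

# Sketch: burn a per-client watermark between 10 s and 20 s and package as
# MPEG-DASH in one pass (paths and the watermark text are placeholders).
ffmpeg -i input.mp4 \
  -vf "drawtext=text='203.0.113.7 alice 2016-07-14':x=100:y=100:fontcolor=white@0.1:fontsize=24:enable='between(t,10,20)'" \
  -c:v libx264 -preset veryfast -c:a aac \
  -f dash out.mpd

Depending on the build, drawtext may also need an explicit fontfile= argument.

-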
Algorithm for recording the SegmentTimeline of an MPEG-DASH MPD in shaka packager
25 March 2021, by jgkim0518
I packaged a media stream as MPEG-DASH, with the video transcoded at twice the speed and the audio at normal speed.
Here is the command:


ffmpeg -re -stream_loop -1 -i timing_logic.mp4 -c:v hevc_nvenc -filter:v "setpts=2*PTS" -c:a libfdk_aac -map 0:v -map 0:a -f mpegts udp://xxx.xxx.xxx.xxx:xxxx?pkt_size=1316
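
A quick sanity check on the setpts math (my arithmetic, based on values that show up in the MPD below): at the 90 kHz MPEG-TS timescale, a 50 fps source has one frame every 1800 ticks; setpts=2*PTS doubles every presentation timestamp, stretching the interval to 3600 ticks, i.e. an effective 25 fps, which matches the frameRate="90000/3600" the packager later writes:

# 90 kHz timescale, 50 fps source
echo $(( 90000 / 50 ))     # 1800 ticks between source frames
echo $(( 1800 * 2 ))       # 3600 ticks after setpts=2*PTS
echo $(( 90000 / 3600 ))   # effective frame rate: 25 fps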



The source information used here is as follows:


Input #0, mpegts, from '/home/test/timing_logic/timing_logic.mp4':
Duration: 00:00:30.22, start: 1.400000, bitrate: 16171 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: hevc (Main 10) (HEVC / 0x43564548), yuv420p10le(tv, bt709), 3840x2160 [SAR 1:1 DAR 16:9], 50 fps, 50 tbr, 90k tbn, 50 tbc
Stream #0:1[0x101](eng): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, fltp, 128 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (hevc (native) -> hevc (hevc_nvenc))
Stream #0:1 -> #0:1 (mp2 (native) -> aac (libfdk_aac))
Press [q] to stop, [?] for help
frame= 0 fps=0.0 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s speed=N/A
frame= 0 fps=0.0 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s speed=N/A
Output #0, mpegts, to 'udp://xxx.xxx.xxx.xxx:xxxx?pkt_size=1316':
Metadata:
encoder : Lavf58.45.100
Stream #0:0: Video: hevc (hevc_nvenc) (Main 10), p010le, 3840x2160 [SAR 1:1 DAR 16:9], q=-1--1, 2000 kb/s, 50 fps, 90k tbn, 50 tbc
Metadata:
encoder : Lavc58.91.100 hevc_nvenc
Side data:
cpb: bitrate max/min/avg: 0/0/2000000 buffer size: 4000000 vbv_delay: N/A
Stream #0:1(eng): Audio: aac (libfdk_aac), 48000 Hz, stereo, s16, 139 kb/s
Metadata:
encoder : Lavc58.91.100 libfdk_aac



My shaka packager command is:


packager \
  'in=udp://xxx.xxx.xxx.xxx:xxxx,stream=video,init_segment=/home/test/timing_logic/package/video/0/video.mp4,segment_template=/home/test/timing_logic/package/video/0/$Time$.m4s' \
  'in=udp://xxx.xxx.xxx.xxx:xxxx,stream=audio,init_segment=/home/test/timing_logic/package/audio/0/audio.mp4,segment_template=/home/test/timing_logic/package/audio/0/$Time$.m4a' \
  --segment_duration 5 --fragment_duration 5 --minimum_update_period 5 --min_buffer_time 5 \
  --preserved_segments_outside_live_window 24 --time_shift_buffer_depth 40 \
  --allow_codec_switching --allow_approximate_segment_timeline --log_file_generation_deletion \
  --mpd_output $OUTPUT/$output_mpd.mpd &



Looking at the MPD generated as a result of packaging, each video segment's duration is twice each audio segment's, and there are half as many video segments as audio segments.
The following is the content of the MPD:


<?xml version="1.0" encoding="UTF-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" minBufferTime="PT5S" type="dynamic" publishTime="2021-03-23T10:01:57Z" availabilityStartTime="2021-03-23T09:59:55Z" minimumUpdatePeriod="PT5S" timeShiftBufferDepth="PT40S">
  <Period start="PT0S">
    <AdaptationSet contentType="audio" segmentAlignment="true">
      <Representation bandwidth="140020" codecs="mp4a.40.2" mimeType="audio/mp4" audioSamplingRate="48000">
        <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"/>
        <SegmentTemplate timescale="90000" initialization="/home/test/timing_logic/package/audio/0/audio.mp4" media="/home/test/timing_logic/package/audio/0/$Time$.m4a" startNumber="16">
          <SegmentTimeline>
            <S t="6751436" d="449280"/>
            <S t="7200716" d="451200"/>
            <S t="7651916" d="449280"/>
            <S t="8101196" d="374400"/>
            <S t="8551196" d="449280"/>
            <S t="9000476" d="451200"/>
            <S t="9451674" d="449280"/>
            <S t="9900956" d="449280"/>
            <S t="10350236" d="451200"/>
            <S t="10801434" d="374400"/>
          </SegmentTimeline>
        </SegmentTemplate>
      </Representation>
    </AdaptationSet>
    <AdaptationSet contentType="video" width="3840" height="2160" frameRate="90000/3600" segmentAlignment="true" par="16:9">
      <Representation bandwidth="1520392" codecs="hvc1.2.4.L153" mimeType="video/mp4" sar="1:1">
        <SegmentTemplate timescale="90000" initialization="/home/test/timing_logic/package/video/0/video.mp4" media="/home/test/timing_logic/package/video/0/$Time$.m4s" startNumber="19">
          <SegmentTimeline>
            <S t="16954440" d="900000" r="4"/>
          </SegmentTimeline>
        </SegmentTemplate>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>



Looking at the MPD, it seems that shaka packager checks the PTS or DTS of the TS stream when it creates a segment and records the contents of the SegmentTimeline.
But I couldn't understand it even after reading the MPEG standard documentation and the DASH-IF documentation.


My question is whether the packager refers to PTS or DTS when creating a segment.
How are SegmentTimeline's S@t and S@d recorded?
What algorithm is the SegmentTimeline recorded with? Please help me. Thank you.
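
One consistency check can be read straight out of the MPD above (an observation about the numbers, not a claim about shaka packager internals): with timescale="90000", one AAC frame of 1024 samples at 48 kHz lasts exactly 1920 ticks, and every audio S@d above is a whole multiple of that, while every video S@d is a whole number of seconds. That is the pattern you would expect if S@t and S@d are taken from the media (presentation) timestamps of the frames that fall into each segment, with boundaries snapped to frame boundaries roughly at the requested 5 seconds:

# timescale = 90000 ticks per second
# one AAC frame = 1024 samples / 48000 Hz = 1920 ticks at 90 kHz
echo $(( 449280 / 1920 ))    # 234 AAC frames -> ~4.99 s segment
echo $(( 451200 / 1920 ))    # 235 AAC frames -> ~5.01 s segment
echo $(( 374400 / 1920 ))    # 195 AAC frames -> ~4.16 s segment
echo $(( 900000 / 90000 ))   # each video segment lasts exactly 10 s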


-
What does "dash" - mean as ffmpeg output filename
26 August 2020, by DDS
I'm trying to use ffmpeg with gnuplot to draw some audio spectra; I'm following this ffmpeg doc link.


Now I'm asking what "dash"
-
means on this line right after-f data
, it should be a filename : the last element of ffmpeg command should the output file but I have no files named-
in the directory after running the command.

ffmpeg -y -i in.wav -ac 1 -filter:a aresample=8000 -map 0:a -c:a pcm_s16le -f data - | gnuplot -p -e "plot (...)"


I looked in the ffmpeg docs but I didn't find anything.
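
For what it's worth (this is the standard ffmpeg/Unix convention, stated in my words rather than quoted from the linked doc): a lone - as the output "filename" tells ffmpeg to write the muxed bytes to standard output instead of to a file, which is why no file named - appears; the raw pcm_s16le samples flow down the pipe into gnuplot. ffmpeg's log goes to stderr, so it does not pollute the pipe. A quick way to see the same bytes, assuming the same in.wav:

# '-' = write output to stdout; inspect the first raw samples with xxd
ffmpeg -y -i in.wav -ac 1 -filter:a aresample=8000 -map 0:a \
  -c:a pcm_s16le -f data - | xxd | head

# redirecting stdout to a file is equivalent to naming the file directly
ffmpeg -y -i in.wav -ac 1 -c:a pcm_s16le -f data - > samples.raw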