
Media (16)
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (48)
-
Customising categories
21 June 2013
Category creation form
For those familiar with SPIP, a category can be thought of as a section (rubrique).
For a document of type "category", the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type "media", the fields not displayed by default are: Descriptif rapide (quick description)
It is also in this configuration section that you can specify the (...)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (11253)
-
Add attachment to Matroska (mkv) programmatically after video write
6 October 2020, by Faber
I want to add a protobuf message as an attachment to a Matroska (mkv) video file after all video frames have been written, without copying the video data. This must be possible, because attaching an arbitrary file to an mkv can be achieved with the MKVToolNix suite (here for a JPG):


# add attachment, no copy according to man page
mkvpropedit out.mkv --add-attachment ~/Downloads/hummingbird.jpg

# get attachment id
mkvmerge -i out.mkv
Attachment ID 1: type 'image/jpeg', size 821740 bytes, file name 'hummingbird.jpg'

# extract attachment
mkvextract attachments out.mkv 1:./test.jpg


I want to be able to perform the same read-write cycle by calling library methods, preferably without having to write the protobuf message to a file first (e.g. by passing a byte array of the serialized protobuf message).


Currently I'm using libav for reading/writing video data from/to mkv, so my preferred solution would also depend only on libav. If that is not possible, I would consider introducing libEBML and libMatroska as new dependencies (the same ones MKVToolNix uses).


What are the key functions in these frameworks that need to be called to achieve this goal? I'm pretty sure mbunkus knows the solution ...
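
For comparison, ffmpeg's own command line (built on the libavformat API) exposes the same Matroska feature: an attachment is carried as a stream of type AVMEDIA_TYPE_ATTACHMENT, with the payload in the stream's extradata and filename/mimetype stream metadata. A minimal sketch, assuming a serialized message file named message.pb (the file names and MIME type are placeholders); note that, unlike mkvpropedit, this remuxes into a new file rather than editing in place:

# attach a file while stream-copying everything else (no re-encode, but a remux)
ffmpeg -i out.mkv -attach message.pb \
  -metadata:s:t:0 mimetype=application/octet-stream \
  -map 0 -c copy out_with_attachment.mkv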


-
ffmpeg quality conversion options (video compression)
25 September 2020, by Jason Hunter
Can you provide a link to, or an explanation of, the -q:v 1 argument that deals with video/image quality and compression in ffmpeg?

Let me explain...


for f in *
do
  extension="${f##*.}"                                # part after the last dot
  filename="${f%.*}"                                  # part before the last dot
  ffmpeg -i "$f" -q:v 1 "${filename}_lq.${extension}" # re-encode with -q:v 1
  rm -f "$f"                                          # delete the original
done



The ffmpeg for loop above compresses all images and videos in your working directory; it basically lowers the quality, which results in smaller file sizes (the desired outcome).

I'm most interested in the -q:v 1 argument of this for loop. The 1 in the -q:v 1 argument is what controls the amount of compression, but I can't find any documentation describing how to change this value of 1 or what it does. Is it a percentage? A multiplier? How do I adjust this knob? Can/should I use negative values? Integers only? Min/max values? etc.

I started with the official documentation, but the best I could find was a section on video quality, and the -q flag description is sparse.



-frames[:stream_specifier] framecount (output,per-stream)
  Stop writing to the stream after framecount frames.

-q[:stream_specifier] q (output,per-stream)
-qscale[:stream_specifier] q (output,per-stream)
  Use fixed quality scale (VBR). The meaning of q/qscale is codec-dependent. If qscale is used without a stream_specifier then it applies only to the video stream, this is to maintain compatibility with previous behavior and as specifying the same codec specific value to 2 different codecs that is audio and video generally is not what is intended when no stream_specifier is used.
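
For what it's worth, a rough illustration of that scale (the file names are placeholders and exact behaviour is codec-dependent): with ffmpeg's native MJPEG/MPEG-4 encoders, -q:v is an integer quantiser running from about 1 (best quality, least compression, biggest file) to 31 (worst quality, most compression); it is not a percentage, and negative values are not accepted.

ffmpeg -i input.jpg -q:v 2  almost_lossless.jpg     # near the top of the scale
ffmpeg -i input.jpg -q:v 25 heavily_compressed.jpg  # near the bottom of the scale
# Encoders such as libx264 largely ignore -q:v; with them, -crf is the usual quality knob.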



-
How to use ffmpeg to transcode many live streamed videos? [closed]
21 September 2020, by user14258924
PREMISE

As a pet project, I am writing a live video streaming service, in Go, that can consume video streams from OBS via the SRT (TS -> h264/aac) and RTMP (FLV -> h264/aac) protocols, and I plan to support streaming video from the web browser as well, captured from a webcam via JS. This ingress server will receive many video streams in various containers and codecs, and I need to normalize them into a single container and codec, then create multiple versions at various bitrates (i.e. 240p, 360p, 480p, 720p, 1080p...) to pass along where needed in the application. Each stream is split into 2-second GOP segments, separate for the audio and video tracks, producing fragmented MP4 as the end result, which can be consumed by a web browser.


The issue is that I am using Go, which has no libraries for transcoding video, so I need to use either ffmpeg or VLC, which are C code. I have decided to avoid the cgo route and use ffmpeg/VLC as standalone binaries.


QUESTION


My question is how to use either of these projects in the most efficient way: avoiding the use of files in favour of Unix sockets/streams, while handling hundreds of video segments at any one time, quickly enough to avoid creating too much lag between producer and consumer.


So let's say I pick the most widely used one, ffmpeg: how should I actually use it to achieve what I have described? How would you set it up, and which flags/config would you use with it?


Can that performance even be achieved, or is it just too much, so that I would need some sort of ffmpeg cluster to even come close to useful performance/low delay?
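
For what it's worth, a minimal sketch of the ffmpeg side under stated assumptions (roughly 30 fps input so that -g 60 gives the 2-second GOPs mentioned above; the 720p target, bitrates and libx264/AAC choice are placeholders, not a tuned production setup): run one ffmpeg process per rendition, reading the ingest stream on stdin and writing fragmented MP4 to stdout, so the Go process can wire both ends to pipes or sockets instead of files.

ffmpeg -hide_banner -i pipe:0 \
  -vf scale=-2:720 \
  -c:v libx264 -preset veryfast -b:v 3000k -g 60 -keyint_min 60 \
  -c:a aac -b:a 128k \
  -f mp4 -movflags +frag_keyframe+empty_moov+default_base_moof \
  pipe:1
# One such process per rendition (240p, 360p, ...) keeps each pipeline simple;
# a single ffmpeg can also produce several outputs from one input if preferred.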