
Media (1)
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (91)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site and around MediaSPIP in general aims to avoid reference to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
-
Improving the basic version
13 September 2013
Nice multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the following two images to compare.
All it takes is to activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to improve, for example select[multiple] for multiple-selection lists (...)
-
Emballe médias: what is it for?
4 February 2011, by
This plugin is designed to manage sites for putting documents of all types online.
It creates "media", namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a "media" article;
On other sites (13871)
-
On-the-fly transcoding and HLS streaming with ffmpeg
12 January 2023, by syfluqs
I am building a web application that involves serving various kinds of video content. Web-friendly audio and video codecs are handled without any problems, but I am having trouble designing the delivery of video files that are incompatible with HTML5 video players, such as mkv containers or H265.



What I have done till now is use ffmpeg to transcode the video file on the server and make HLS master and VOD playlists, and use hls.js on the frontend. The problem, however, is that ffmpeg treats the playlist as a live-stream playlist until transcoding of the whole file is complete, and only then changes the playlist to serve as VOD. So the user can't seek until the transcoding is over, and my server will have unnecessarily transcoded the whole file if the user decides to seek halfway ahead. I am using the following ffmpeg command-line arguments:



ffmpeg -i sample.mkv \
 -c:v libx264 \
 -crf 18 \
 -preset ultrafast \
 -maxrate 4000k \
 -bufsize 8000k \
 -vf "scale=1280:-1,format=yuv420p" \
 -c:a copy -start_number 0 \
 -hls_time 10 \
 -hls_list_size 0 \
 -f hls \
file.m3u8




Now, to improve on this system, I tried to generate the VOD playlist through my app rather than ffmpeg, since the format is self-explanatory. The web app would generate the HLS master and VOD playlists beforehand using video properties such as duration, resolution and bitrate (which are known to the server) and serve the master playlist to the client. The client then starts requesting the individual video segments, at which point the server transcodes and generates each segment individually and serves it. Seeking would be possible because the client already has the complete VOD playlist and can request the specific segment the user seeks to. The benefit, as I see it, would be that my server would not have to transcode the whole file if the user decides to seek forward and play the video halfway through.
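To make the idea concrete, here is a minimal POSIX-shell sketch of how such a playlist could be generated, assuming a fixed 10 s segment length and a hypothetical 600 s duration (my real app would substitute the actual duration and segment names from the video's metadata):

# minimal sketch: write a VOD playlist for a video of known duration
# (600 s assumed here), split into fixed 10 s segments
DURATION=600   # total duration in seconds, known to the server
SEGLEN=10      # segment length in seconds
{
 printf '#EXTM3U\n'
 printf '#EXT-X-PLAYLIST-TYPE:VOD\n'
 printf '#EXT-X-TARGETDURATION:%d\n' "$SEGLEN"
 printf '#EXT-X-VERSION:4\n'
 printf '#EXT-X-MEDIA-SEQUENCE:0\n'
 i=0 t=0
 while [ "$t" -lt "$DURATION" ]; do
  rem=$((DURATION - t))
  if [ "$rem" -lt "$SEGLEN" ]; then len=$rem; else len=$SEGLEN; fi
  printf '#EXTINF:%d.0,\n' "$len"
  printf 'fileSequence%d.mp4\n' "$i"
  i=$((i + 1)); t=$((t + SEGLEN))
 done
 printf '#EXT-X-ENDLIST\n'
} > vod.m3u8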



Now I tried manually creating segments (10s each) from my sample.mkv using the following command:


ffmpeg -ss 90 \
 -t 10 \
 -i sample.mkv \
 -g 52 \
 -strict experimental \
 -movflags +frag_keyframe+separate_moof+omit_tfhd_offset+empty_moov \
 -c:v libx264 \
 -crf 18 \
 -preset ultrafast \
 -maxrate 4000k \
 -bufsize 8000k \
 -vf "scale=1280:-1,format=yuv420p" \
 -c:a copy \
fileSequence0.mp4




and so on for the other segments, and the VOD playlist as:



#EXTM3U
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-TARGETDURATION:10
#EXT-X-VERSION:4
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
fileSequence0.mp4
#EXTINF:10.0,
fileSequence1.mp4
...
... and so on 
...
#EXT-X-ENDLIST




which plays the first segment just fine but not the subsequent ones.



Now my questions:

- Why don't the subsequent segments play? What am I doing wrong?
- Is my technique even viable? Would there be any problem with presetting the segment durations, given that segmenting is only possible at keyframes, and can ffmpeg work around this?


My knowledge regarding video processing and generation borders on modest at best. I would greatly appreciate some pointers.
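
One idea I have considered for the keyframe problem, in case it is relevant: forcing the encoder to emit a keyframe at every intended segment boundary, so that each presegmented 10 s cut point starts on a keyframe. A sketch of what I mean, with the rest of my options omitted for brevity:

ffmpeg -i sample.mkv \
 -c:v libx264 \
 -force_key_frames "expr:gte(t,n_forced*10)" \
 -c:a copy \
output.mp4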


-
FATE Under New Management
2 August 2010, by Multimedia Mike — FATE Server
At any given time, I have between 20 and 30 blog posts in some phase of development. Half of them seem to be contemplations regarding the design and future of my original FATE system and are thus ready for the recycle bin at this point. Mans is a man of considerably fewer words, so I thought I would use a few words to describe the new FATE system that he put together.
Overview
Here are the distinguishing features that Mans mentioned in his announcement message:
- Test specs are part of the ffmpeg repo. They are thus properly versioned, and any developer can update them as needed.
- Support for inexact tests.
- Parallel testing on multi-core systems.
- Anyone registered with FATE can add systems.
- Client side entirely in POSIX shell script and GNU make.
- Open source backend and web interface.
- Client and backend entirely decoupled.
- Anyone can contribute patches.
Client
The FATE build/test client source code is contained in tests/fate.sh in the FFmpeg source tree. The script — as the extension implies — is a shell script. It takes a text file full of shell variables, updates source code, configures, builds, and tests. It's a remarkably small amount of code, especially compared to my original Python code. Part of this is because most of the testing logic has shifted into FFmpeg itself. The build system knows about all the FATE tests, and all of the specs are now maintained in the codebase (thanks to all who spearheaded that effort — I think it was Vitor and Mans).
The client creates a report file which contains a series of lines to be transported to the server. The first line has some information about the configuration and compiler, plus the overall status of the build/test iteration. The second line contains './configure' information. Each of the remaining lines contains information about an individual FATE test, mostly in Base64 format.
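To give a flavor of that interface, a hypothetical configuration file for the client might look like the sketch below; the variable names are illustrative only, and the authoritative list is whatever tests/fate.sh actually reads:

# hypothetical client config: a plain file of shell variables
slot=x86_64-linux-gcc                    # unique name for this test system
repo=git://source.ffmpeg.org/ffmpeg.git  # source repository to track
workdir=$HOME/fate                       # checkout and build directory
make=make                                # make utility to use
makeopts=-j4                             # build in parallel on a multi-core box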
Server
The server source code lives at http://git.mansr.com/?p=fateweb. It is written in Perl and plugs into a CGI-capable HTTP server. Authentication between the client and the server operates via SSH/SSL. In stark contrast to the original FATE server, there is no database component on the backend. The new system maintains information in a series of flat files.
-
FFMPEG SAR mismatch when concatenating videos
6 September 2020, by marcman
I'm trying to concatenate videos with different codecs using the concat filter. I'm also doing a bunch of processing in this filter to scale and pad the individual clips. However, I keep getting an error of this form:

[Parsed_concat_14 @ 0x556b918ed580] Input link in1:v0 parameters (size 960x1280, SAR 0:1) do not match the corresponding output link in0:v0 parameters (1280x720, SAR 1:1)



Here is my command:


ffmpeg \
 -y \
 -loglevel warning \
 -stats \
 -i videos/vid0.mp4 \
 -i videos/vid1.mp4 \
 -i videos/vid2.mov \
 -filter_complex \
"[0:v]setpts=PTS-STARTPTS, scale=1920:1080, setsar=1, drawtext=expansion=strftime: basetime=$(date +%s -d'2020-07-10 16:04:44')000000 : fontcolor=white : text='%^b %d, %Y%n%l\\:%M%p' : fontsize=36 : y=1080-4*lh : x=1920-text_w-2*max_glyph_w; \
[1:v]setpts=PTS-STARTPTS, scale=810:1080, pad=width=1920:height=1080:x=555:y=0:color=black, setsar=1, drawtext=expansion=strftime: basetime=$(date +%s -d'2020-08-20 21:12:27')000000 : fontcolor=white : text='%^b %d, %Y%n%l\\:%M%p' : fontsize=36 : y=1080-4*lh : x=1365-text_w-2*max_glyph_w; \
[2:v]setpts=PTS-STARTPTS, scale=607:1080, pad=width=1920:height=1080:x=656:y=0:color=black, setsar=1, drawtext=expansion=strftime: basetime=$(date +%s -d'2020-08-27 16:42:26')000000 : fontcolor=white : text='%^b %d, %Y%n%l\\:%M%p' : fontsize=36 : y=1080-4*lh : x=1263-text_w-2*max_glyph_w; \
[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]" \
 -map "[v]" \
 -map "[a]" \
 videos.mp4



This gives the following error:


[Parsed_concat_14 @ 0x563ab9d840c0] Input link in1:v0 parameters (size 960x1280, SAR 0:1) do not match the corresponding output link in0:v0 parameters (1280x720, SAR 1:1)
[Parsed_concat_14 @ 0x563ab9d840c0] Input link in2:v0 parameters (size 1080x1920, SAR 0:1) do not match the corresponding output link in0:v0 parameters (1280x720, SAR 1:1)



The input videos have these resolutions:

- vid0.mp4: 1280x720
- vid1.mp4: 960x1280
- vid2.mov: 1080x1920


The resolution of the concatenated output is to be 1920x1080.


I've added the setsar=1 filter after all my scaling and padding operations, as per this question and answer. I've also tried setdar=16/9 as in this answer, but it made no difference.
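
In case the layout of my filter graph is the problem: my understanding is that each filtered chain needs a named output pad, and that concat should consume those labels rather than the original [0:v]/[1:v]/[2:v] streams. A simplified sketch of that layout (the [v0]-[v2] labels are mine, and I've dropped the drawtext parts for brevity):

ffmpeg \
 -i videos/vid0.mp4 \
 -i videos/vid1.mp4 \
 -i videos/vid2.mov \
 -filter_complex \
"[0:v]setpts=PTS-STARTPTS, scale=1920:1080, setsar=1[v0]; \
 [1:v]setpts=PTS-STARTPTS, scale=810:1080, pad=1920:1080:555:0, setsar=1[v1]; \
 [2:v]setpts=PTS-STARTPTS, scale=607:1080, pad=1920:1080:656:0, setsar=1[v2]; \
 [v0][0:a][v1][1:a][v2][2:a]concat=n=3:v=1:a=1[v][a]" \
 -map "[v]" -map "[a]" \
 videos-out.mp4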

What am I missing here?