
Media (3)

- The Slip - Artworks
26 September 2011, by
Updated: September 2011
Language: English
Type: Text

- Podcasting Legal Guide
16 May 2011, by
Updated: May 2011
Language: English
Type: Text

- Creative Commons informational flyer
16 May 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (107)

- Writing a news item
21 June 2013, by
Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In spipeo, the default MediaSPIP theme, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of the "news" type, the default fields are: publication date (customize the publication date) (...)

- The MediaSPIP configuration area
29 November 2010, by
The MediaSPIP configuration area is reserved for administrators. An "administer" menu link is generally displayed at the top of the page [1].
It lets you configure your site in detail.
Navigation in this configuration area is divided into three parts: the general site configuration, which among other things lets you modify the main information about the site (...)

- Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (9039)
- How to evenly extract frames from a video using ffmpeg by specifying "fps"
14 June 2021, by alanzzz

My task is to use ffmpeg to extract frames evenly from a video, at different fps settings. I use this command for it:


ffmpeg -i input.mp4 -r specified_fps -q:v 2 image%04d.png
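
For example, the two invocations behind the first measurements below would look roughly like this (a sketch; input.mp4 and the output pattern are just the placeholders from above):

ffmpeg -i input.mp4 -r 1 -q:v 2 image_fps1_%04d.png
ffmpeg -i input.mp4 -r 2 -q:v 2 image_fps2_%04d.png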


I have 3 questions about this task.

1. What I expect is that if I double the fps, the number of extracted frames also doubles. However, that's not the case. Take one of the input videos as an example.

Video info:
- Duration: 1 min 18 s
- Total number of frames: 2340
- Frame rate mode: constant (CFR)
- Frame rate: 30.0 FPS

Config settings & results:
- FPS=1 => number of frames = 80
- FPS=2 => number of frames = 158 (2x80-2)
- FPS=3 => number of frames = 236 (3x80-4)
- FPS=5 => number of frames = 392 (5x80-8)

Is it possible for me to get exactly double the number of frames when the fps is doubled? In that case, the number of frames would be 160 for FPS=2, 240 for FPS=3 and 400 for FPS=5.

2. Checking the output images, I find that the frames extracted at different fps settings are completely different from each other. For example, the 1st image for fps=1 is not the same as the 1st image for fps=2. Is that expected? And is it possible to get some identical images across different fps settings?

3. The last problem is that, for some videos I use, the difference between the 1st and 2nd image is not the same as the difference between the 2nd and 3rd, while for the remaining images the differences become even. To be specific, there is only a slight change from the 1st to the 2nd frame, while from the 2nd to the 3rd, the 3rd to the 4th, and so on, the change is the same and evenly spaced according to the specified FPS. I am wondering why this happens. Is it related to I-frames, B-frames, P-frames, GOPs or IDR frames?
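
As an aside, one way to check that hypothesis is to dump the picture type (I/P/B) of every source frame with ffprobe; a minimal sketch, reusing the input.mp4 placeholder:

# prints one line per video frame: I, P or B
ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv=p=0 input.mp4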
I am new to this field and could not find much useful information elsewhere. I've tried my best to describe my questions clearly. Feel free to leave some comments. Any help would do me a great favor. Thanks in advance!


- Web-based video editor
13 April 2021, by Danny

We currently have a web-based editor that allows users to build animated web apps. The apps are made up of shapes, text, images, and videos. Except for videos, all the other elements can also be animated around the screen. The result of building an animated app is basically a big blob of JSON.



The playback code for the web app is web-based as well. It takes the JSON blob and constructs the HTML, which ends up playing back in some sort of browser environment. The problem is that most of the time this playback occurs on lower-end hardware like televisions and set-top boxes.



These performance issues would go away if there were some way to convert a digital sign to video. Then the STB/smart TV simply plays a video, which is much more performant than playing back animations in a web view.



Given a blob of JSON describing each layer, how to draw each type of object, its animation points, etc., how could I take that and convert it to video on the server?



My first attempt at this was using PhantomJS to load the playback page in a headless browser, take a series of screenshots, and then use ffmpeg to merge those screenshots into a video. That worked great as long as there was no video. But it does not work with video, since there is no HTML5 video tag support in PhantomJS, and even if there were, I would lose any audio.
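
The merge step can be expressed roughly like this (a sketch, not the exact command; the frame pattern, rate and codec are assumptions):

# turn a numbered PNG screenshot sequence into an H.264 video
ffmpeg -framerate 30 -i frame-%04d.png -c:v libx264 -pix_fmt yuv420p output.mp4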



The other way I was thinking of doing it would be to again load the playback page in PhantomJS, but turn off the video layers and leave them transparent, then take screenshots as a series of PNGs with transparency. I would then combine these with the video layers.
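
Combining the transparent PNG sequence with a video layer could then be done with ffmpeg's overlay filter, along these lines (again just a sketch with assumed file names):

# overlay a PNG sequence (with alpha) on top of the video layer, keeping the video's audio
ffmpeg -i video_layer.mp4 -framerate 30 -i overlay-%04d.png \
 -filter_complex "[0:v][1:v]overlay[outv]" \
 -map "[outv]" -map 0:a? -c:v libx264 -c:a copy output.mp4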



None of this feels very elegant, though. I know there are web-based video editors out there that basically do what I'm trying to accomplish, so how do they do it?


- ignore "channel_layout" when working with multichannel audio in ffmpeg
21 March 2024, by umläute

I'm working with multichannel audio files (higher-order ambisonics) that typically have at least 16 channels.


Sometimes I'm only interested in a subset of the audio channels (e.g. the first 25 channels of a file that contains even more channels).


For this I have a script like the following, which takes a multichannel input file, an output file and the number of channels I want to extract:


#!/bin/sh
# extract the first $channels channels of a multichannel file and encode them as Opus in WebM
infile=$1
outfile=$2
channels=$3

# build the channelmap argument, e.g. "0|1|2|3" for channels=4
channelmap=$(seq -s"|" 0 $((channels-1)))

ffmpeg -y -hide_banner \
 -i "${infile}" \
 -filter_complex "[0:a]channelmap=${channelmap}" \
 -c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 \
 "${outfile}"



The actual channel extraction is done via the channelmap filter, which is invoked with something like

-filter_complex "[0:a]channelmap=0|1|2|3"


This works great with 1, 2, 4 or 16 channels.

However, it fails with 9 channels, and with 17 and 25 (and generally anything with more than 16 channels).

The error I get is:


$ ffmpeg -y -hide_banner -i input.wav -filter_complex "[0:a]channelmap=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16" -c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 output.webm
Input #0, wav, from 'input.wav':
 Duration: 00:00:09.99, bitrate: 17649 kb/s
 Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 25 channels, s16, 17640 kb/s
[Parsed_channelmap_0 @ 0x5568874ffbc0] Output channel layout is not set and cannot be guessed from the maps.
[AVFilterGraph @ 0x5568874fff40] Error initializing filter 'channelmap' with args '0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16'
Error initializing complex filters.
Invalid argument



So ffmpeg cannot guess the channel layout for a 17-channel file. ffmpeg -layouts only lists channel layouts with 1, 2, 3, 4, 5, 6, 7, 8 and 16 channels.

However, I really don't care about the channel layout. The entire concept of a "channel layout" is centered around the idea that each audio channel should go to a different speaker. But my audio channels are not speaker feeds at all.

So I tried providing an explicit channel layout, with something like

-filter_complex "[0:a]channelmap=map=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16:channel_layout=unknown"

but this results in an error when parsing the channel layout:

$ ffmpeg -y -hide_banner -i input.wav -filter_complex "[0:a]channelmap=map=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16:channel_layout=unknown" -c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 output.webm
Input #0, wav, from 'input.wav':
 Duration: 00:00:09.99, bitrate: 17649 kb/s
 Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 25 channels, s16, 17640 kb/s
[Parsed_channelmap_0 @ 0x55b60492bf80] Error parsing channel layout: 'unknown'.
[AVFilterGraph @ 0x55b604916d00] Error initializing filter 'channelmap' with args 'map=0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16:channel_layout=unknown'
Error initializing complex filters.
Invalid argument



I also tried values like any, all, none, 0x0 and 0xFF, with the same result. I tried using mono (as the channels are kind of independent), but ffmpeg is trying to be clever and tells me that a mono layout must not have 17 channels.

I know that ffmpeg can handle multi-channel files without a layout. E.g. converting a 25-channel file without the -filter_complex "..." works without problems, and ffprobe gives me an unknown channel layout.
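
That working baseline is presumably just the script's encode step without the filter, something like this sketch built from the same options:

# works: 25 channels in, 25 channels out, layout reported as "unknown"
ffmpeg -y -hide_banner -i input.wav \
 -c:a libopus -mapping_family 255 -b:a 160k -sample_fmt s16 -vn -f webm -dash 1 output.webm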

So: how do I tell ffmpeg to just not care about the channel_layout when creating an output file that only contains a subset of the input channels?