
Other articles (61)
-
Images
15 May 2013
-
Uploading media and themes via FTP
31 May 2013
MediaSPIP also handles media transferred over FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
From the start you will find the following directories in your FTP space: config/ : the site's configuration directory; IMG/ : media already processed and online on the site; local/ : the site's cache directory; themes/ : custom themes and stylesheets; tmp/ : working directory (...)
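For instance, with a command-line client such as lftp, a custom theme could be pushed into the themes/ directory listed above; a minimal sketch, where the host, the login and the local folder name are placeholders rather than values taken from the article:
# Placeholders throughout: replace the host, user and local theme folder with your own.
lftp -u mediaspip_user ftp.mediaspip.example -e "mirror -R ./my_theme themes/my_theme; bye"
-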
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
On other websites (8496)
-
lavu/frame: Add Dolby Vision metadata side data type
3 January 2022, by Niklas Haas
In order to be able to extend this struct later (as the Dolby Vision RPU
evolves), all of the 'container' structs are considered extensible, and
the individual constituent fields must instead be accessed via offsets.
The precedent for this style of access is set in
<libavutil/detection_bbox.h>
Signed-off-by: Niklas Haas <git@haasn.dev>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
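For what it's worth, side data of this kind can be inspected per frame from the command line; a minimal sketch, assuming a hypothetical file input_dovi.mp4 that actually carries a Dolby Vision RPU and an FFmpeg build recent enough to export the metadata:
# Dump every decoded video frame together with its side data entries;
# on suitable builds the Dolby Vision metadata is listed there.
ffprobe -hide_banner -select_streams v:0 -show_frames input_dovi.mp4
-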
Produce waveform video from audio using FFMPEG
27 April 2017, by RhythmicDevil
I am trying to create a waveform video from audio. My goal is to produce a video that looks something like this
For my test I have an mp3 that plays a short clipped sound. There are 4 bars of 1/4 notes and 4 bars of 1/8 notes played at 120 bpm. I am having some trouble coming up with the right combination of preprocessing and filtering to produce a video that looks like the image. The colors don't have to be exact; I am more concerned with the shape of the beats. I tried a couple of different approaches using showwaves and showspectrum. I can't quite wrap my head around why, when using showwaves, the beats go past so quickly, yet showspectrum produces a video where I can see each individual beat.
ShowWaves
ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showwaves=s=1280x100:mode=cline:rate=25:scale=sqrt,format=yuv420p[v]" -map "[v]" -map 0:a output_wav.mp4
This link will download the output of that command.
ShowSpectrum
ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showspectrum=s=1280x100:mode=combined:color=intensity:saturation=5:slide=1:scale=cbrt,format=yuv420p[v]" -map "[v]" -an -map 0:a output_spec.mp4
This link will download the output of that command.
I posted the simple examples because I didn’t want to confuse the issue by adding all the variations I have tried.
In practice I suppose I can get away with the output from showspectrum but I’d like to understand where/how I am thinking about this incorrectly. Thanks for any advice.
Here is a link to the source audio file.
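A plausible explanation (hedged, not verified against this exact clip) is that showwaves with rate=25 only draws the samples belonging to each 1/25 s output frame, so the waveform rushes past rather than accumulating. A quick way to check, and to see every beat at once, is to render the complete file as a single static waveform with showwavespic; a minimal sketch, with a placeholder output name:
# Draw the entire audio file into one 1280x100 waveform image,
# so all beats are visible side by side instead of scrolling past.
ffmpeg -i beat_test.mp3 -filter_complex "showwavespic=s=1280x100:colors=white" -frames:v 1 waveform.png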
-
Aspect ratio problems at transcoding with ffmpeg [closed]
19 November 2023, by udippel
I have a huge collection of videos from the last 20+ years, in all sorts of formats. I use gerbera as an open-source UPnP-AV media server. Our TV handles only a very limited subset of these formats, so I use the transcoding feature of gerbera (I don't want to convert the 2000+ files; this also avoids losing multiple audio tracks, (multiple) subtitles, and so forth).


This is my current unified argument line for ffmpeg:

-c:v mpeg2video -maxrate 20000k -vf setdar=16/9 -r 24000/1001 -qscale:v 4 -top 1 -c:a mp2 -f mpeg -y

It works pretty okay, except for the aspect ratios. Well, I don't understand this fully, because ffprobe for File A states:
Stream #0:0: Video: mpeg4 (Simple Profile) (XVID / 0x44495658), yuv420p, 624x464 [SAR 1:1 DAR 39:29], 1500 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc

This file displays very well.
File B comes as:
Stream #0:0(eng): Video: h264 (High), yuv420p(tv, bt709, progressive), 960x720, SAR 1:1 DAR 4:3, 23.98 fps, 23.98 tbr, 1k tbn, 180k tbc (default)

This file displays horribly squeezed vertically and doesn't fill the screen left and right either, with the same settings on the TV. Also, when playing this file (and others, naturally) the TV doesn't offer the 14:9 display option, which is available e.g. for the file further up.

Both have the same SAR, almost identical width:height ratios (1.345 vs. 1.333), and almost identical DAR.


My questions:


- Why, despite almost identical pixel ratios, DAR and SAR, are these files handled so differently in one and the same session on the same TV (Sony)?
- With which method could I instruct ffmpeg to display the second file properly, too?
(I have already tried 'scale', but to no avail, which could have been foreseen, since the ratios are already very close.)
My guess is that the
(tv, bt709, progressive)
part messes things up.
(I have already tried adding yuv420p to the argument line, also to no avail.)






Appreciate any help,


Uwe


I have already tried force_original_aspect_ratio, but also here, nothing improved.
Also, I played with -aspect, but the aspect ratios are okay as such and would need individual corrections, which I can't and don't want to do for 2000+ files. A simple 16:9 doesn't cut it.
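One approach that might be worth testing (a sketch only, with placeholder input and output names, not a verified fix for this particular TV) is to stop forcing setdar=16/9 and instead letterbox/pillarbox every source into a fixed 16:9 raster with square pixels, so the TV always receives the same geometry:
# Fit the picture inside 1280x720 without distortion, pad the remainder with black,
# and reset SAR to 1:1 so the display ratio is carried by the pixel dimensions alone.
# force_divisible_by=2 keeps the scaled size even, as mpeg2video requires;
# it needs a reasonably recent ffmpeg.
ffmpeg -i input.avi -c:v mpeg2video -maxrate 20000k -qscale:v 4 -r 24000/1001 -top 1 \
  -vf "scale=1280:720:force_original_aspect_ratio=decrease:force_divisible_by=2,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1" \
  -c:a mp2 -f mpeg -y output.mpg
If filling the screen matters more than black bars, cropping the source to 16:9 before scaling would be the alternative trade-off.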