
Media (1)
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (31)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out. -
Supported formats
28 January 2010, by
The following commands give information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Supported input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
To begin with, we (...)
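To check for a single codec or container without scrolling the full listing, the output of these commands can be piped through grep (a small sketch, assuming a standard ffmpeg build on the PATH; -hide_banner just suppresses the version header):

```shell
# List all codecs, keeping only the lines that mention h264
# (the D/E flags in the output show decoder/encoder availability)
ffmpeg -hide_banner -codecs | grep -i h264

# Same idea for container formats, e.g. checking mp4 muxing/demuxing support
ffmpeg -hide_banner -formats | grep -i mp4
```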
On other sites (10278)
-
g++ Linking Error on Mac while compiling FFMPEG
7 May 2013, by Saptarshi Biswas
g++ on Snow Leopard is throwing linking errors on the following piece of code
test.cpp
#include <iostream>
using namespace std;

#include <libavcodec/avcodec.h> // required headers
#include <libavformat/avformat.h>

int main(int argc, char** argv) {
    av_register_all(); // offending library call
    return 0;
}
When I try to compile this using the following command
g++ test.cpp -I/usr/local/include -L/usr/local/lib \
-lavcodec -lavformat -lavutil -lz -lm -o test
I get the error
Undefined symbols:
  "av_register_all()", referenced from:
      _main in ccUD1ueX.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
Interestingly, if I have an equivalent C code,
test.c
#include
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

int main(int argc, char** argv) {
    av_register_all();
    return 0;
}
gcc compiles it just fine
gcc test.c -I/usr/local/include -L/usr/local/lib \
-lavcodec -lavformat -lavutil -lz -lm -o test
I am using Mac OS X 10.6.5
$ g++ --version
i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
$ gcc --version
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
FFmpeg's libavcodec, libavformat, etc. are C libraries and I have built them on my machine like this:
./configure --enable-gpl --enable-pthreads --enable-shared \
--disable-doc --enable-libx264
make && sudo make install
As one would expect, libavformat indeed contains the symbol av_register_all
$ nm /usr/local/lib/libavformat.a | grep av_register_all
0000000000000000 T _av_register_all
00000000000089b0 S _av_register_all.eh
I am inclined to believe g++ and gcc have different views of the libraries on my machine. g++ is not able to pick up the right libraries. Any clue?
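One way to investigate (a sketch, not a verified answer to this post): compile the C++ file to an object only, then inspect with nm which symbol g++ actually references. If the name comes out mangled, C++ linkage is being applied to the C declarations:

```shell
# Compile without linking, then look at the undefined symbol the object wants
g++ -c test.cpp -I/usr/local/include -o test.o
nm test.o | grep -i register
```

If the reference shows up mangled (e.g. as __Z15av_register_allv rather than _av_register_all), the usual remedy is to wrap the FFmpeg includes in an extern "C" { ... } block, since these headers declare plain C functions.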
-
Why does my yuva444p video always lose its alpha channel after transcoding to VP9?
1 April, by Ian Ye
I'm trying to convert a video with an alpha channel using FFmpeg, but the transparency information gets lost in the output.


Here is the input video (ProRes, yuva444p) detail from ffprobe:


Stream #0:1[0x2](und): Video: prores (4444) (ap4h / 0x68347061), yuva444p12le(bt709), 360x480, 73314 kb/s, 60.13 fps, 60 tbr, 600 tbn (default)
 Metadata:
 creation_time : 2019-05-21T21:23:03.000000Z
 handler_name : Core Media Video
 vendor_id : appl
 encoder : Apple ProRes 4444



And my command is below:


ffmpeg -i alpha_prores.mov -c:v libvpx-vp9 -pix_fmt yuva420p -auto-alt-ref 0 out.webm



Here is the output info:


Output #0, webm, to 'out.webm':
 Metadata:
 major_brand : qt
 minor_version : 0
 compatible_brands: qt
 com.apple.quicktime.creationdate: 2019-05-14T13:47:17-0700
 com.apple.quicktime.location.ISO6709: +37.3367-122.0094/
 com.apple.quicktime.make: Apple
 com.apple.quicktime.model: iPhone X
 com.apple.quicktime.software: 12.1.2
 encoder : Lavf61.7.100
 Stream #0:0(und): Video: vp9, yuva420p(tv, bt709, progressive), 360x480, q=2-31, 60 fps, 1k tbn (default)
 Metadata:
 creation_time : 2019-05-21T21:23:03.000000Z
 handler_name : Core Media Video
 vendor_id : appl
 encoder : Lavc61.19.101 libvpx-vp9
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A



We can see the pix_fmt is still yuva420p, but after the encode finishes, I check it with ffprobe and the result has changed!


Stream #0:0: Video: vp9 (Profile 0), yuv420p(tv, bt709, progressive), 360x480, SAR 1:1 DAR 3:4, 60 fps, 60 tbr, 1k tbn (default)
 Metadata:
 alpha_mode : 1
 HANDLER_NAME : Core Media Video
 VENDOR_ID : appl
 ENCODER : Lavc61.19.101 libvpx-vp9
 DURATION : 00:00:05.950000000



The pix_fmt has turned into yuv420p, which means the alpha channel is lost. We can also see an extra entry in the metadata: alpha_mode: 1.

By the way, the resulting webm video displays its transparent pixels correctly in a browser.


When I try to overlay it on an mp4 video, the transparent pixels turn black:


ffmpeg -i background.mp4 -i out.webm -filter_complex "[0][1]overlay=x=10:y=10" output2.mp4



And when I overlay the ProRes video on the same mp4, the result is correct.


ffmpeg -i background.mp4 -i alpha_prores.mov -filter_complex "[0][1]overlay=x=10:y=10" output2.mp4



So I wonder what the difference is between yuva and alpha_mode: 1.


And why VP9 loses the alpha channel when overlaid on an mp4 video.


It would be great if you could suggest another pixel format for my purpose (I want it to display correctly when overlaid on mp4)!


Thanks!
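A plausible explanation, worth testing against the poster's build: ffmpeg's native VP9 decoder ignores the alpha data signalled by alpha_mode: 1, while the libvpx-vp9 decoder can decode it. Forcing that decoder for the webm input (the -c:v option placed before the corresponding -i) may keep the alpha plane during the overlay:

```shell
# Force the libvpx-vp9 decoder for out.webm so its alpha plane is decoded;
# the native vp9 decoder would drop the "alpha_mode: 1" side channel.
ffmpeg -i background.mp4 -c:v libvpx-vp9 -i out.webm \
       -filter_complex "[0][1]overlay=x=10:y=10" output2.mp4
```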


-
How to create an Automator service for multiple ffmpeg commands
31 January 2017, by user413734
There are many tutorials for creating an Automator service that calls an ffmpeg one-liner script, like this one:
http://apple.stackexchange.com/questions/129929/automating-ffmpeg-using-automator-service
But I would like this service to do more than just a one-liner. There is the option to chain commands with &&, but that makes it rather confusing.
How can I create an Automator service that runs one transformation and, once it is done, the next one, and so on? E.g.:
for f in "$@"
do
ffmpeg -i "$f" -c:v copy -c:a copy "${f%.*}.mp4"
ffmpeg -ss 00:00:01 -i "$f" -frames:v 1 "${f%.*}.jpg"
....
done
Or attach this service to a folder so that if I drop a file into it, it automatically does the required conversions.
Thanks
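For reference, a minimal sketch of what the "Run Shell Script" action body could look like (with input passed "as arguments"). Commands inside the loop body already run strictly one after another per file, so no && chaining is needed; the options are the placeholders from the question:

```shell
#!/bin/bash
# Automator "Run Shell Script" body; each ffmpeg call waits for the previous
# one to finish, and "|| continue" skips a file whose remux fails.
for f in "$@"
do
    base="${f%.*}"                                        # path without extension
    ffmpeg -i "$f" -c:v copy -c:a copy "$base.mp4" || continue
    ffmpeg -ss 00:00:01 -i "$f" -frames:v 1 "$base.jpg"   # one-frame thumbnail
done
```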