
Other articles (62)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Installation in farm mode
4 February 2011. Farm mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge since SPIP's usual private area is no longer used.
To begin with, you must have installed the same files as the installation (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011. Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, a preconfiguration is put in place automatically by MediaSPIP init, making the new feature immediately operational. No separate configuration step is therefore required.
On other sites (11839)
-
FFmpeg-split.py can't determine video length
13 June 2017, by Almog Woldenberg. First, I'm not a developer. I'm trying to split a movie into one-minute clips using the ffmpeg-split.py Python script. I made sure FFmpeg is installed by trying a simple command, and it worked like magic:
ffmpeg -i soccer.mp4 -ss 00:00:00 -codec copy -t 10 soccer1.mp4
A new video file was created in the same folder.
I saved ffmpeg-split.py in the same directory, updated the Python PATH, and typed the following command:
python ffmpeg-split.py -f soccer.mp4 -s 10
What I got back was:
can't determine video length
I believe it just can't find the file. I switched video files and even deleted the file, yet I still got the same message.
Any ideas?
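
Not part of the original question, but one possible sanity check: scripts like this typically parse the duration reported by ffmpeg or ffprobe, so it may be worth confirming that the duration can be read at all (assuming ffprobe was installed alongside ffmpeg):

ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 soccer.mp4

If this prints a number of seconds, the problem is more likely the path or arguments passed to the script than the file itself.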
-
ffmpeg: how to determine frame rate automatically?
24 September 2021, by mrgloom. I use this simple script to convert a video to images using ffmpeg, but the frame rate is fixed; how can I determine it automatically?


FRAME_RATE="30"
SEPARATOR='/'


VIDEO_PATH=$1

VIDEO_BASE_DIR=`dirname $1`
FRAMES_DIR=$VIDEO_BASE_DIR$SEPARATOR"Frames"
rm -rf $FRAMES_DIR
mkdir $FRAMES_DIR

#Convert video to images
./ffmpeg -r $FRAME_RATE -i $VIDEO_PATH $FRAMES_DIR$SEPARATOR"image%d.png"
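
One possible approach (not something the original script does) is to ask ffprobe for the stream's average frame rate and use it instead of the hard-coded value; this assumes an ffprobe binary sits next to ffmpeg in the same directory:

#Query the input's average frame rate (printed as a fraction such as 30/1)
FRAME_RATE=`./ffprobe -v error -select_streams v:0 -show_entries stream=avg_frame_rate -of default=noprint_wrappers=1:nokey=1 $VIDEO_PATH`

ffmpeg's -r option accepts the fraction form directly, so the value can be passed through unchanged.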




UPDATE:

Using ffprobe I checked that my first video's frame rate is 30.
Also, the results are the same (339 frames are produced) even if I reduce the frame rate, so does the -r option not work, or does it work in some other way?

These commands give the same result:



./ffmpeg -r 10 -i $VIDEO_PATH $FRAMES_DIR$SEPARATOR"image%d"$EXTENSION
./ffmpeg -r 30 -i $VIDEO_PATH $FRAMES_DIR$SEPARATOR"image%d"$EXTENSION
./ffmpeg -i $VIDEO_PATH $FRAMES_DIR$SEPARATOR"image%d"$EXTENSION




Output:



ffmpeg version N-63893-gc69defd Copyright (c) 2000-2014 the FFmpeg developers
 built on Jul 16 2014 05:38:01 with gcc 4.6 (Debian 4.6.3-1)
 configuration: --prefix=/root/ffmpeg-static/64bit --extra-cflags='-I/root/ffmpeg-static/64bit/include -static' --extra-ldflags='-L/root/ffmpeg-static/64bit/lib -static' --extra-libs='-lxml2 -lexpat -lfreetype' --enable-static --disable-shared --disable-ffserver --disable-doc --enable-bzlib --enable-zlib --enable-postproc --enable-runtime-cpudetect --enable-libx264 --enable-gpl --enable-libtheora --enable-libvorbis --enable-libmp3lame --enable-gray --enable-libass --enable-libfreetype --enable-libopenjpeg --enable-libspeex --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-version3 --enable-libvpx
 libavutil 52. 89.100 / 52. 89.100
 libavcodec 55. 66.101 / 55. 66.101
 libavformat 55. 43.100 / 55. 43.100
 libavdevice 55. 13.101 / 55. 13.101
 libavfilter 4. 8.100 / 4. 8.100
 libswscale 2. 6.100 / 2. 6.100
 libswresample 0. 19.100 / 0. 19.100
 libpostproc 52. 3.100 / 52. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/user/myvideo1.mp4':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: isommp42
 creation_time : 2016-01-16 05:30:03
 Duration: 00:00:11.33, start: 0.000000, bitrate: 4659 kb/s
 Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 640x480, 4539 kb/s, SAR 65536:65536 DAR 4:3, 30 fps, 30 tbr, 90k tbn, 180k tbc (default)
 Metadata:
 rotate : 90
 creation_time : 2016-01-16 05:30:03
 handler_name : VideoHandle
 Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 127 kb/s (default)
 Metadata:
 creation_time : 2016-01-16 05:30:03
 handler_name : SoundHandle
Output #0, image2, to '/home/user/Frames/image%d.png':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: isommp42
 encoder : Lavf55.43.100
 Stream #0:0(eng): Video: png, rgb24, 640x480 [SAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 30 fps, 90k tbn, 30 tbc (default)
 Metadata:
 rotate : 90
 creation_time : 2016-01-12 05:38:03
 handler_name : VideoHandle
 encoder : Lavc55.66.101 png
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> png (png))
Press [q] to stop, [?] for help
frame= 339 fps= 68 q=0.0 Lsize=N/A time=00:00:11.30 bitrate=N/A 
video:195852kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
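
A likely explanation, based on how ffmpeg documents option placement (not stated in the original post): when -r appears before -i it only overrides the rate the input is assumed to have, so the same set of decoded frames is written out either way. To actually extract fewer images per second, the rate has to be applied on the output side, for example with the fps filter:

./ffmpeg -i $VIDEO_PATH -vf fps=10 $FRAMES_DIR$SEPARATOR"image%d.png"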



-
How can I determine if a codec/container combination is compatible with FFmpeg?
26 June 2023, by Dan. I'm looking at re-muxing some containers holding audio and video such that I extract the best, first audio stream and store it in a new container where, e.g., only the audio stream is present.



The output context for FFmpeg is created like so:



AVFormatContext* output_context = NULL;
avformat_alloc_output_context2( &output_context, NULL, "mp4", NULL );




I have a shortlist of acceptable outputs, e.g. MP4, M4A, etc., essentially those that are readable by Apple's Audio File Services:



kAudioFileAIFFType = 'AIFF',
kAudioFileAIFCType = 'AIFC',
kAudioFileWAVEType = 'WAVE',
kAudioFileSoundDesigner2Type = 'Sd2f',
kAudioFileNextType = 'NeXT',
kAudioFileMP3Type = 'MPG3', // mpeg layer 3
kAudioFileMP2Type = 'MPG2', // mpeg layer 2
kAudioFileMP1Type = 'MPG1', // mpeg layer 1
kAudioFileAC3Type = 'ac-3',
kAudioFileAAC_ADTSType = 'adts',
kAudioFileMPEG4Type = 'mp4f',
kAudioFileM4AType = 'm4af',
kAudioFileM4BType = 'm4bf',
kAudioFileCAFType = 'caff',
kAudioFile3GPType = '3gpp',
kAudioFile3GP2Type = '3gp2',
kAudioFileAMRType = 'amrf'




My question is this: is there an easy API in FFmpeg that can be leveraged to choose a compatible output container, given the codec the audio stream is in?
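
One API that might fit, offered here as a suggestion rather than something taken from the original post, is avformat_query_codec(), which asks a muxer whether it can store a given codec ID; combined with av_guess_format() it can be used to walk a shortlist of candidate containers. A minimal sketch, assuming the audio stream's codec ID is already known and with the candidate muxer names chosen purely for illustration:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Return the first container in `names` whose muxer reports that it can
   store `codec_id`, or NULL if none of them can. */
static const char *pick_container( enum AVCodecID codec_id,
                                   const char *const *names, int count )
{
    for ( int i = 0; i < count; i++ ) {
        const AVOutputFormat *ofmt = av_guess_format( names[i], NULL, NULL );
        if ( ofmt && avformat_query_codec( ofmt, codec_id, FF_COMPLIANCE_NORMAL ) == 1 )
            return names[i];
    }
    return NULL;
}

/* Example: probe a shortlist for an AAC stream. */
static const char *const candidates[] = { "mp4", "ipod", "adts", "caf", "3gp" };
/* pick_container( AV_CODEC_ID_AAC, candidates, 5 ) should return "mp4" here. */

avformat_query_codec() returns 1 when the codec can be stored, 0 when it cannot, and a negative error code when the muxer provides no information, so checking for exactly 1 keeps the test conservative.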