
Media (3)
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
Other articles (59)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (8139)
-
How to optimize ffmpeg w/ x264 for multiple bitrate output files
10 October 2013, by Jonesy
The goal is to create multiple output files that differ only in bitrate from a single source file. The documented solutions for this worked, but had inefficiencies. The most efficient solution I found was not documented anywhere that I could see. I am posting it here for review, and asking whether others know of additional optimizations that can be made.
Source file: MPEG-2 video (letterboxed), 1920x1080 @ >10 Mbps; MPEG-1 audio @ 384 Kbps
Destination files: H.264 video, 720x400 @ multiple bitrates; AAC audio @ 128 Kbps
Machine: multi-core processor
The video quality at each bitrate is important, so we are running in 2-pass mode with the 'medium' preset:
VIDEO_OPTIONS_P2 = -vcodec libx264 -preset medium -profile:v main -g 72 -keyint_min 24 -vf scale=720:-1,crop=720:400
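(The post references $AUDIO_OPTIONS_P2 throughout but never shows its definition. Given the stated target of AAC audio at 128 Kbps, it presumably looks something like the sketch below; this is a guess, not the author's exact value.)
# Hypothetical reconstruction of the undefined audio options variable,
# based only on the stated target of AAC audio at 128 Kbps.
AUDIO_OPTIONS_P2="-acodec aac -b:a 128k"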
The first approach was to encode them all in parallel processes:
ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto -f mp4 out-250.mp4 &
ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto -f mp4 out-500.mp4 &
ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto -f mp4 out-700.mp4 &
The obvious inefficiency is that the source file is read, decoded, scaled, and cropped identically for each process. How can we do this once and then feed the result to the encoders?
The hope was that generating all the encodes in a single ffmpeg command would optimize out the duplicate steps:
ffmpeg -y -i $INPUT_FILE \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto -f mp4 out-250.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto -f mp4 out-500.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto -f mp4 out-700.mp4
However, the encoding time was nearly identical to the previous multi-process approach. This leads me to believe that all the steps are again being performed in duplicate.
To force ffmpeg to read, decode, and scale only once, I put those steps in one ffmpeg process and piped the result into another ffmpeg process that performed the encoding. This improved the overall processing time by 15%-20%.
INPUT_STREAM="ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 -threads auto -f yuv4mpegpipe -"
$INPUT_STREAM | ffmpeg -y -f yuv4mpegpipe -i - \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto out-250.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto out-500.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto out-700.mp4
Does anyone see potential problems with doing it this way, or know of a better method?
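(Not part of the original post: a single-process variant using ffmpeg's split filter can also perform the decode, scale, and crop exactly once and fan the frames out to several encoders. Whether it beats the pipe approach would need benchmarking on the same source; the x264 flags below mirror the post's VIDEO_OPTIONS_P2, and the audio settings are assumed to be AAC at 128 Kbps.)
# Sketch only: decode, scale, and crop once, then split the frames to three x264 encoders.
ffmpeg -y -i $INPUT_FILE \
    -filter_complex "scale=720:-1,crop=720:400,split=3[v1][v2][v3]" \
    -map "[v1]" -map 0:a -c:v libx264 -preset medium -profile:v main -g 72 -keyint_min 24 -b:v 250k -c:a aac -b:a 128k out-250.mp4 \
    -map "[v2]" -map 0:a -c:v libx264 -preset medium -profile:v main -g 72 -keyint_min 24 -b:v 500k -c:a aac -b:a 128k out-500.mp4 \
    -map "[v3]" -map 0:a -c:v libx264 -preset medium -profile:v main -g 72 -keyint_min 24 -b:v 700k -c:a aac -b:a 128k out-700.mp4
Note that true two-pass encoding complicates any single-command layout, since the analysis pass for each bitrate has to finish before its encode pass can run.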
-
Using WebHDFS to play video over HTTP
3 February 2015, by Qin.Yang
I used ffmpeg + libx264 to convert a file to H.264, then uploaded it to Hadoop. I use WebHDFS to access the file over HTTP, but it cannot be played online. If I download the file over HTTP, it plays fine as HTML5 video. My English is poor; I hope you understand what I mean.
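(For reference, and not taken from the post: the conversion step described above presumably looks roughly like the sketch below, with placeholder filenames. The -movflags +faststart option is commonly added so the MP4 index is written at the front of the file, which matters for starting playback over plain HTTP before the download completes.)
# Rough sketch of the described ffmpeg + libx264 conversion; filenames are hypothetical.
ffmpeg -i input_source.avi -c:v libx264 -c:a aac -movflags +faststart output.mp4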