
Advanced search
Media (91)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (40)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match the chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are performed on top of the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
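As an illustration of those two extra actions, here is a minimal command-line sketch using ffprobe and ffmpeg directly; the excerpt does not show SPIPMotion's actual commands, so the file names and options below are assumptions.

# Retrieve the technical information of the audio and video streams (JSON output)
ffprobe -v error -show_format -show_streams -of json source.mp4

# Generate a thumbnail: extract a single frame, here at the 5-second mark (illustrative values)
ffmpeg -ss 5 -i source.mp4 -frames:v 1 -vf "scale=320:-1" thumbnail.jpg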
-
Libraries and binaries specific to video and audio processing
31 January 2010
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries: FFMpeg: the main encoder, able to transcode almost all types of video and audio files into formats readable on the Internet (see this tutorial for its installation); Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
Complementary and optional binaries: flvtool2: (...)
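Purely as an illustration of how such binaries are typically driven (the excerpt does not show SPIPmotion's actual command lines, so file names, codecs and settings are assumptions):

# Inspect a source file with Mediainfo
mediainfo source.mp4

# Transcode to web-readable formats with FFmpeg (illustrative settings)
ffmpeg -i source.mp4 -c:v libx264 -crf 23 -c:a aac -movflags +faststart web.mp4
ffmpeg -i source.mp4 -c:v libvpx -b:v 1M -c:a libvorbis web.webm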
On other sites (5204)
-
FFmpeg drawtext filter - is it possible to use variables with live data for x,y coordinates?
3 May 2019, by DavidK
I'd like to use variables for FFmpeg's drawtext filter's x,y coordinates so I can feed them with real-time data. The solution below with sendcmd works, but I have to add relative timecodes at the beginning so FFmpeg can link the coordinates to time positions. Can it be done without timecodes, with only the actual coordinates, telling FFmpeg to update these coordinates every 100 ms?
It would look like this in my case:
cmd.txt
drawtext reinit 'x=960:y=540'; (the coordinates change when there's a new position from the live source, and FFmpeg updates them via sendcmd regularly)
Thanks!
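For reference, the timecoded sendcmd variant the poster describes looks roughly like this; the times, coordinates and drawtext options below are made up for illustration:

# cmd.txt - one command per sampling instant, 100 ms apart
0.0 drawtext reinit 'x=960:y=540';
0.1 drawtext reinit 'x=972:y=548';
0.2 drawtext reinit 'x=985:y=556';

ffmpeg -i input.mp4 -vf "sendcmd=f=cmd.txt,drawtext=text='tracking':fontsize=48:x=0:y=0" output.mp4

sendcmd only evaluates its file against the stream's own timeline, so for genuinely live coordinates one would have to look at something like FFmpeg's zmq filter, which accepts the same kind of commands over a socket instead of a pre-written file.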
-
FFMpeg C# wrapper "MediaToolkit 1.0.4.11". How can I do a conversion in memory?
8 December 2016, by Quantum_Kernel
I am currently using the MediaToolkit 1.0.4.11 C# wrapper for FFMPEG to extract wav audio from a range of video files.
Instead of creating a wav file on disk, I would like to create it in memory (as a MemoryStream). This will save me from creating temp files which I need to delete, and should speed up the various analyses that I am doing on the audio data.
Because of the way the call to the library works, I'm not sure how to 'trick' it into doing this. Is there a way to do it, or will I have to obtain and edit the wrapper source code?
Here is what I have for doing the conversion on disk:
var inputFile = new MediaToolkit.Model.MediaFile { Filename = mediaFilePath };
var outputFile = new MediaToolkit.Model.MediaFile { Filename = @"C:\Temp\audio.wav" };

var conversionOptions = new MediaToolkit.Options.ConversionOptions
{
    MaxVideoDuration = TimeSpan.FromSeconds(30),
    VideoAspectRatio = MediaToolkit.Options.VideoAspectRatio.R16_9,
    VideoSize = MediaToolkit.Options.VideoSize.Hd1080,
    AudioSampleRate = MediaToolkit.Options.AudioSampleRate.Hz48000
};

using (var engine = new Engine())
{
    engine.Convert(inputFile, outputFile, conversionOptions);
}
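For what it's worth, the ffmpeg binary that MediaToolkit drives can write WAV to a pipe rather than to a file, so capturing that pipe is one way to keep the data in memory if the wrapper is bypassed or patched. A rough sketch of the underlying command, with illustrative file names and options:

# Write 30 seconds of 48 kHz PCM WAV to stdout instead of a temp file
# (on a pipe the RIFF size fields cannot be patched afterwards; most decoders tolerate this)
ffmpeg -i input.mp4 -t 30 -vn -acodec pcm_s16le -ar 48000 -f wav -

-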
FFmpeg disrupting "while/if" structure in Bash/shell scripting - how to fix?
25 April 2020, by charles_river_runs
I'm trying to extract a set number of frames from a series of .mp4 videos within several folders. I'm reading the variables in from a CSV file, where each row contains the variables for one clip: "dataset", "file name", "framelabel", "clip target start time", "clip target end time", "extract rate" (in seconds per frame), and "reason for exclusion".



I'm using a while/if loop structure so I can avoid trying to process the videos that are excluded from our project.



The while/if loop structure seems to work well when I just print out the variable names. However, when I insert a line of ffmpeg code, the variable assignments for the next video (i.e., the following line read from the CSV file) go haywire.



Here is the code WITHOUT the ffmpeg line, which seems to work (all the "echo" statements are just there as debugging aids to track the variable assignments):



IFS=‘,’
while read DATASET FILENAME FRAMELABEL TARGET_START TARGET_END TOTAL_DURATION EXTRACT_RATE REASON_FOR_EXCLUSION 
do 
echo “PRE FFMPEG”
echo “dataset and filename are “ $DATASET/$FILENAME
echo “target start is “ $TARGET_START
echo “target end is “ $TARGET_END

if [ $TARGET_START != EXCLUDED ] && [ $TARGET_START != Target_start ]
then
IFS=‘:’ 
read -r -a fps_array <<< "$EXTRACT_RATE"
IFS=‘,’
let "secs = ${fps_array[0]}*3600 + ${fps_array[1]}*60 + ${fps_array[2]}"
FPS_RATE=$(echo "scale=20;1/$secs" | bc)

echo “AFTER FFMPEG”
echo “dataset and filename are “ $DATASET/$FILENAME
echo “target start is “ $TARGET_START
echo “target end is “ $TARGET_END

fi
done < Test_Key.csv





The output for the code above looks like this, which is correct (I have 2 folders, "cats" and "es123", with two videos each; one of the es123 videos is excluded from frame extraction):



“PRE FFMPEG”
“dataset and filename are “ cats/IMG_3460
“target start is “ 0:00:02
“target end is “ 0:00:24
“AFTER FFMPEG”
“dataset and filename are “ cats/IMG_3460
“target start is “ 0:00:02
“target end is “ 0:00:24
“PRE FFMPEG”
“dataset and filename are “ cats/IMG_4137
“target start is “ 0:00:10
“target end is “ 0:00:40
“AFTER FFMPEG”
“dataset and filename are “ cats/IMG_4137
“target start is “ 0:00:10
“target end is “ 0:00:40
“PRE FFMPEG”
“dataset and filename are “ es123/IMG_4577
“target start is “ EXCLUDED
“target end is “
“PRE FFMPEG”
“dataset and filename are “ es123/IMG_4839
“target start is “ 0:00:05
“target end is “ 0:00:25
“AFTER FFMPEG”
“dataset and filename are “ es123/IMG_4839
“target start is “ 0:00:05
“target end is “ 0:00:25




However, I then add my ffmpeg line that does the actual frame extraction:



IFS=‘,’
while read DATASET FILENAME FRAMELABEL TARGET_START TARGET_END TOTAL_DURATION EXTRACT_RATE REASON_FOR_EXCLUSION 
do 
echo “PRE FFMPEG”
echo “dataset and filename are “ $DATASET/$FILENAME
echo “target start is “ $TARGET_START
echo “target end is “ $TARGET_END

if [ $TARGET_START != EXCLUDED ] && [ $TARGET_START != Target_start ]
then
IFS=‘:’ 
read -r -a fps_array <<< "$EXTRACT_RATE"
IFS=‘,’
let "secs = ${fps_array[0]}*3600 + ${fps_array[1]}*60 + ${fps_array[2]}"
FPS_RATE=$(echo "scale=20;1/$secs" | bc)

ffmpeg -i $DATASET/$FILENAME.mp4 -ss $TARGET_START -to $TARGET_END -vf fps=$FPS_RATE ${DATASET}/${DATASET}_${FILENAME}_%03d.jpg

echo “AFTER FFMPEG”
echo “dataset and filename are “ $DATASET/$FILENAME
echo “target start is “ $TARGET_START
echo “target end is “ $TARGET_END

fi
done < Test_Key.csv




The result is that frames are correctly extracted for the first movie, but then the variable names for the second movie are completely screwed up (and then, of course, the code fails because the video names are wrong). You can see this in the 'echo' statements I used to track the variable naming, which become:



“PRE FFMPEG”
“dataset and filename are “ cats/IMG_3460
“target start is “ 0:00:02
“target end is “ 0:00:24
“AFTER FFMPEG”
“dataset and filename are “ cats/IMG_3460
“target start is “ 0:00:02
“target end is “ 0:00:24
“PRE FFMPEG”
“dataset and filename are “ IMG_4839/es123_2
“target start is “ 0:00:25
“target end is “ 0:00:20




Does anyone know why ffmpeg is affecting the functioning of the while/if loop structure here? (I'm pretty new to shell scripting, sorry if this is something obvious.) And what can I do to fix it?



Thank you!



For reference, here is everything printed when I run the code with the ffmpeg line included; there are some parse errors showing up that I don't really understand, but they probably explain this.



“PRE FFMPEG”
“dataset and filename are “ cats/IMG_3460
“target start is “ 0:00:02
“target end is “ 0:00:24
ffmpeg version 4.2.2-tessus https://evermeet.cx/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers
 built with Apple clang version 11.0.0 (clang-1100.0.33.16)
 configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-version3 --pkg-config-flags=--static --disable-ffplay
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'cats/IMG_3460.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:25.15, start: 0.000000, bitrate: 6866 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1080x1920, 6796 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
 Metadata:
 handler_name : Core Media Video
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
 Metadata:
 handler_name : Core Media Audio
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help

Enter command: <target>|all <time>|-1 <command>[ <argument>]

Parse error, at least 3 arguments were expected, only 1 given in string 'ats,IMG_4137,cats_2,0:00:10,0:00:40,0:00:30,0:00:03,none,,'
[swscaler @ 0x7fd3fc953400] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to 'cats/cats_IMG_3460_%03d.jpg':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Stream #0:0(und): Video: mjpeg, yuvj420p(pc), 1080x1920, q=2-31, 200 kb/s, 0.50 fps, 0.50 tbn, 0.50 tbc (default)
 Metadata:
 handler_name : Core Media Video
 encoder : Lavc58.54.100 mjpeg
 Side data:
 cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
frame= 8 fps=3.2 q=1.6 size=N/A time=00:00:16.00 bitrate=N/A speed=6.33x 0.00 bitrate=N/A speed=6.59x 
Enter command: <target>|all <time>|-1 <command>[ <argument>]

Parse error, at least 3 arguments were expected, only 1 given in string 'LUDED,,,,boring,,'
frame= 11 fps=3.0 q=2.6 L00120000000000000000000000000000size=N/A time=00:00:22.00 bitrate=N/A speed=6.06x 
video:803kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
“AFTER FFMPEG”
“dataset and filename are “ cats/IMG_3460
“target start is “ 0:00:02
“target end is “ 0:00:24
“PRE FFMPEG”
“dataset and filename are “ IMG_4839/es123_2
“target start is “ 0:00:25
“target end is “ 0:00:20
-bash: let: secs = none*3600 + *60 + : syntax error: operand expected (error token is "*60 + ")
ffmpeg version 4.2.2-tessus https://evermeet.cx/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers
 built with Apple clang version 11.0.0 (clang-1100.0.33.16)
 configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-version3 --pkg-config-flags=--static --disable-ffplay
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
IMG_4839/es123_2.mp4: No such file or directory
“AFTER FFMPEG”
“dataset and filename are “ IMG_4839/es123_2
“target start is “ 0:00:25
“target end is “ 0:00:20
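The parse errors in the log above ("only 1 given in string 'ats,IMG_4137,...'") are the later CSV rows being swallowed by ffmpeg's interactive command prompt: ffmpeg reads from the same stdin that the while read loop is consuming. A commonly suggested workaround, not part of the original post, is to detach ffmpeg from that stdin, roughly:

# Option 1: tell ffmpeg not to read stdin at all
ffmpeg -nostdin -i "$DATASET/$FILENAME.mp4" -ss "$TARGET_START" -to "$TARGET_END" -vf fps="$FPS_RATE" "${DATASET}/${DATASET}_${FILENAME}_%03d.jpg"

# Option 2: redirect ffmpeg's stdin away from the CSV that the loop is reading
ffmpeg -i "$DATASET/$FILENAME.mp4" -ss "$TARGET_START" -to "$TARGET_END" -vf fps="$FPS_RATE" "${DATASET}/${DATASET}_${FILENAME}_%03d.jpg" < /dev/null

Quoting the variables is an extra precaution against paths with spaces; it is not part of the stdin fix itself.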