
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (97)
-
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is enabled, MediaSPIP init automatically sets up a preconfiguration so that the new feature is immediately operational. A configuration step is therefore not required. -
Managing creation and editing rights for objects
8 February 2011, by
By default, many features are restricted to administrators but can each be configured independently to change the minimum status required to use them, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;
-
Uploading media and themes via FTP
31 May 2013, by
The MediaSPIP tool also handles media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
From the start you will find the following folders in your FTP space: config/: the site's configuration folder; IMG/: media already processed and online on the site; local/: the site's web cache directory; themes/: custom themes and stylesheets; tmp/: working folder (...)
On other sites (9406)
-
Search For Specific Values Result in Python
5 May 2020, by jamlot
I am attempting to write a Python script that looks for black video and silent audio in a file and returns only the time instances when they occur.

I have the following code working using the ffmpeg-python wrapper, but I can't figure out an efficient way to parse stdout or stderr to return only the instances of black_start, black_end, black_duration, silence_start, silence_end, and silence_duration.

Putting ffmpeg aside for those who are not experts: how can I use re.findall or similar to define a regex that returns only the above values?

import ffmpeg

# 'source' is the path of the media file to analyse
input = ffmpeg.input(source)
# blackdetect flags black frames; silencedetect flags audio below -60 dB lasting at least 0.1 s
video = input.video.filter('blackdetect', d=0, pix_th=0.00)
audio = input.audio.filter('silencedetect', d=0.1, n='-60dB')
# null muxer: analyse the streams without writing a real output file
out = ffmpeg.output(audio, video, 'out.null', format='null')
run = out.run_async(pipe_stdout=True, pipe_stderr=True)
result = run.communicate()  # (stdout_bytes, stderr_bytes)

print(result)

This results in the ffmpeg output, which contains the results I need. Here is the output (edited for brevity):

(b'', b"ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
 built with Apple clang version 11.0.0 (clang-1100.0.33.17)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.2_3 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags=-fno-stack-check --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/Users/otoolej/Documents/_lab/source/black-silence-detect/AUUV71900381_test.mov':
 Metadata:
 major_brand : qt 
 minor_version : 537199360
 compatible_brands: qt 
 creation_time : 2019-11-14T04:12:49.000000Z
 Duration: 00:03:50.28, start: 0.000000, bitrate: 185168 kb/s
 Stream #0:0(eng): Video: prores (HQ) (apch / 0x68637061), yuv422p10le(tv, bt709, progressive), 1920x1080, 183596 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 25 tbn, 25 tbc (default)
 Metadata:
 creation_time : 2019-11-14T04:12:49.000000Z
 handler_name : Apple Video Media Handler
 encoder : Apple ProRes 422 (HQ)
 timecode : 00:00:00:00
 Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, stereo, s16, 1536 kb/s (default)
 Metadata:
 creation_time : 2019-11-14T04:12:49.000000Z
 handler_name : Apple Sound Media Handler
 timecode : 00:00:00:00
 Stream #0:2(eng): Data: none (tmcd / 0x64636D74) (default)
 Metadata:
 creation_time : 2019-11-14T04:12:49.000000Z
 handler_name : Time Code Media Handler
 timecode : 00:00:00:00
Only '-vf blackdetect=d=0:pix_th=0.00' read, ignoring remaining -vf options: Use ',' to separate filters
Only '-af silencedetect=d=0.1:n=-60dB' read, ignoring remaining -af options: Use ',' to separate filters
Stream mapping:
 Stream #0:0 -> #0:0 (prores (native) -> wrapped_avframe (native))
 Stream #0:1 -> #0:1 (pcm_s16le (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, null, to 'pipe:':
 Metadata:
 major_brand : qt 
 minor_version : 537199360
 compatible_brands: qt 
 encoder : Lavf58.29.100
 Stream #0:0(eng): Video: wrapped_avframe, yuv422p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc (default)
 Metadata:
 creation_time : 2019-11-14T04:12:49.000000Z
 handler_name : Apple Video Media Handler
 timecode : 00:00:00:00
 encoder : Lavc58.54.100 wrapped_avframe
 Stream #0:1(eng): Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s (default)
 Metadata:
 creation_time : 2019-11-14T04:12:49.000000Z
 handler_name : Apple Sound Media Handler
 timecode : 00:00:00:00
 encoder : Lavc58.54.100 pcm_s16le
[silencedetect @ 0x7fdd82d011c0] silence_start: 0
frame= 112 fps=0.0 q=-0.0 size=N/A time=00:00:05.00 bitrate=N/A speed=9.96x 
[blackdetect @ 0x7fdd82e06580] black_start:0 black_end:5 black_duration:5
[silencedetect @ 0x7fdd82d011c0] silence_end: 5.06285 | silence_duration: 5.06285
frame= 211 fps=210 q=-0.0 size=N/A time=00:00:09.00 bitrate=N/A speed=8.97x 
frame= 319 fps=212 q=-0.0 size=N/A time=00:00:13.00 bitrate=N/A speed=8.63x 
frame= 427 fps=213 q=-0.0 size=N/A time=00:00:17.08 bitrate=N/A speed=8.51x 
frame= 537 fps=214 q=-0.0 size=N/A time=00:00:22.00 bitrate=N/A speed=8.77x 
frame= 650 fps=216 q=-0.0 size=N/A time=00:00:26.00 bitrate=N/A speed=8.63x 
frame= 761 fps=217 q=-0.0 size=N/A time=00:00:31.00 bitrate=N/A speed=8.82x 
frame= 874 fps=218 q=-0.0 size=N/A time=00:00:35.00 bitrate=N/A speed=8.71x 
frame= 980 fps=217 q=-0.0 size=N/A time=00:00:39.20 bitrate=N/A speed=8.67x 
... 
frame= 5680 fps=213 q=-0.0 size=N/A time=00:03:47.20 bitrate=N/A speed=8.53x 
[silencedetect @ 0x7fdd82d011c0] silence_start: 227.733
[silencedetect @ 0x7fdd82d011c0] silence_end: 229.051 | silence_duration: 1.3184
[silencedetect @ 0x7fdd82d011c0] silence_start: 229.051
[blackdetect @ 0x7fdd82e06580] black_start:229.28 black_end:230.24 black_duration:0.96
frame= 5757 fps=214 q=-0.0 Lsize=N/A time=00:03:50.28 bitrate=N/A speed=8.54x 
video:3013kB audio:43178kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[silencedetect @ 0x7fdd82d011c0] silence_end: 230.28 | silence_duration: 1.22856
\n")

What is the most efficient way to parse the output data to find/return only those result values so I can build further logic from them in my code? In this case, I would want only the following values returned:

silence_start: 0
silence_end: 5.06285
silence_duration: 5.06285

black_start:0
black_end:5
black_duration:5

silence_start: 227.733
silence_end: 229.051
silence_duration: 1.3184

black_start:229.28
black_end:230.24
black_duration:0.96

silence_start: 229.051
silence_end: 230.28
silence_duration: 1.22856
I think there is a way to get only those values using ffprobe, but I couldn't get that to work within the wrapper. I might have to run ffprobe as a subprocess and parse its result somehow, but that would be a total redo.
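One possible approach (a sketch, not a confirmed solution; it assumes the detector lines keep the exact `key: value` shape shown in the log above): decode the stderr bytes returned by `communicate()` and pull every key/value pair out with a single regular expression. The sample text below is copied from the log; in the real script it would be `result[1].decode('utf-8')`.

```python
import re

# A few stderr lines in the format emitted by blackdetect/silencedetect
stderr_text = """
[silencedetect @ 0x7fdd82d011c0] silence_start: 0
[blackdetect @ 0x7fdd82e06580] black_start:0 black_end:5 black_duration:5
[silencedetect @ 0x7fdd82d011c0] silence_end: 5.06285 | silence_duration: 5.06285
"""

# One pattern covers both filters: the key name, optional spaces around ':',
# then a (possibly negative) decimal number.
pattern = re.compile(
    r'(black_start|black_end|black_duration|'
    r'silence_start|silence_end|silence_duration)'
    r'\s*:\s*(-?\d+(?:\.\d+)?)'
)

matches = pattern.findall(stderr_text)
print(matches)
# [('silence_start', '0'), ('black_start', '0'), ('black_end', '5'), ...]
```

`findall` returns the `(key, value)` tuples in the order they appear, which is enough to group consecutive start/end/duration triples for further logic.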

-
avformat/ftp: use correct enum type
20 August 2015, by Ganesh Ajjanagadde
avformat/ftp: use correct enum type
Fixes -Wenum-conversion from
http://fate.ffmpeg.org/report.cgi?time=20150820031140&slot=arm64-darwin-clang-apple-5.1
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc> -
Making a timelapse by drag and drop - A rebuild of an old script using ImageMagick
14 August 2019, by cursor_major
I have previously written an AppleScript to automate a task I do many times in my work.
I shoot Raw + JPG in camera, copy to hard drive.
I then drag a named and dated folder, e.g. "2019_08_14_CAM_A_CARD_01", onto an Automator app and it divides the files into "NEF" and "JPG" folders respectively.
I then drag the appropriate "JPG" folder onto my timelapse app and it runs the image-sequence process in QT7, then saves the file with the parent folder's name in the grandparent folder. This keeps things super organised for when I want to re-link to the original RAW files.
[code below]
It is a two-step process and works well for my needs; however, Apple is retiring QuickTime 7 Pro, so my app has a foreseeable end of life.
I want to take this opportunity to refine and improve the process using Terminal and ImageMagick.
I have managed to put together some code that runs well in Terminal, but I have to navigate to the folder first and then run the script. It doesn't do the file renaming and doesn't save to the right place.
Also, when I try to run the simple script in an Automator app, it throws errors even before I add anything clever with the file naming.
Later, once I have recreated my timelapse-maker app, I want to get clever with more of ImageMagick's commands and overlay a small super of the original frame name in the corner, to expedite my reconnecting workflow.
I'm sorry, I'm a photographer, not a coder, but I've been bashing my head against this and I've hit a brick wall.
File Sorter

```
on open dd
	repeat with d in dd
		do shell script "d=" & d's POSIX path's quoted form & "
cd \"$d\" || exit
mkdir -p {MOV,JPG,NEF,CR2}
find . -type f -depth 1 -iname '*.mov' -print0 | xargs -0 -J % mv % MOV
find . -type f -depth 1 -iname '*.cr2' -print0 | xargs -0 -J % mv % CR2
find . -type f -depth 1 -iname '*.jpg' -print0 | xargs -0 -J % mv % JPG
find . -type f -depth 1 -iname '*.nef' -print0 | xargs -0 -J % mv % NEF
for folder in `ls`; do
	if [ `ls $folder | wc -l` == 0 ]; then
		rmdir $folder
	fi
done
"
	end repeat
end open
```
Timelapse Compiler

```
on run {input, parameters}
	repeat with d in input
		set d to d's contents
		tell application "Finder"
			set seq1 to (d's file 1 as alias)
			set dparent to d's container as alias
			set mov to "" & dparent & (dparent's name) & ".mov"
		end tell
		tell application "QuickTime Player 7"
			activate
			open image sequence seq1 frames per second 25
			tell document 1
				with timeout of 500 seconds
					save self contained in file mov
				end timeout
				quit
			end tell
		end tell
	end repeat
	return input
end run
```
Current code, run from within Terminal after navigating to the folder of JPGs:

```
ffmpeg -r 25 -f image2 -pattern_type glob -i '*.JPG' -codec:v prores_ks -profile:v 0 imagemagick_TL_Test_01.mov
```
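Not part of the original post: a minimal Python sketch of how the renaming-and-placement logic could be wired around that same ffmpeg command. It assumes the folder layout described above (a dated shoot folder containing a "JPG" subfolder) and that ffmpeg is on the PATH; the output name is taken from the parent folder and the .mov is written into the grandparent folder.

```python
import subprocess
from pathlib import Path

def output_path(jpg_folder):
    """Derive '<grandparent>/<parent>.mov' from a .../<shoot>/JPG folder."""
    jpg_folder = Path(jpg_folder)
    parent = jpg_folder.parent           # e.g. .../2019_08_14_CAM_A_CARD_01
    return parent.parent / (parent.name + '.mov')

def make_timelapse(jpg_folder, fps=25):
    """Run the same ffmpeg/ProRes command, saving next to the shoot folder."""
    jpg_folder = Path(jpg_folder)
    out_file = output_path(jpg_folder)
    subprocess.run([
        'ffmpeg', '-r', str(fps), '-f', 'image2',
        '-pattern_type', 'glob', '-i', str(jpg_folder / '*.JPG'),
        '-codec:v', 'prores_ks', '-profile:v', '0',
        str(out_file),
    ], check=True)
    return out_file
```

For example, make_timelapse('/Volumes/Work/2019_08_14_CAM_A_CARD_01/JPG') would write /Volumes/Work/2019_08_14_CAM_A_CARD_01.mov, matching the naming behaviour of the QuickTime-based app.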