
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (60)
-
Enabling and disabling features (plugins)
18 February 2011, by
To manage the addition and removal of extra features (plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to enable plugins from the MediaSPIP configuration area.
To access it, go to the configuration area and open the "Gestion des plugins" (plugin management) page.
By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so that they work seamlessly with each (...)
-
Enabling visitor registration
12 April 2011, by
It is also possible to enable visitor registration, which lets anyone open an account on the channel in question, for example as part of open projects.
To do so, go to the site's configuration area and choose the "Gestion des utilisateurs" (user management) submenu. The first form shown corresponds to this feature.
By default, MediaSPIP created a menu entry during its initialization in the top-of-page menu leading (...)
-
Customizing categories
21 June 2013, by
Category creation form
For those familiar with SPIP, a category can be thought of as a section (rubrique).
For a document of type category, the fields offered by default are: Texte
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type media, the fields not shown by default are: Descriptif rapide
It is also in this configuration section that one can specify the (...)
On other sites (10693)
-
ffmpeg python subprocess error only on ubuntu
10 December 2018, by Wonger
I'm working on an application that splits YouTube videos into images. I develop on a MacBook Pro, but our app servers run Ubuntu 12.04. The code currently running on our servers is the following
ffmpeg -i {video_file} -vf fps={fps}
which we run via a subprocess.Popen call in Python. Performance is very slow because this essentially plays back the whole video file to extract the frames. I found another SO post saying that you can use -accurate_seek with -ss to grab single frames at a specific time, but I am facing some issues with that. When I run the command from the command line after SSHing into our dev server, it works fine, but when I run it via a subprocess.Popen call in Python, I get the following error:
(standard_in) 1: syntax error
ffmpeg version 3.2.4-static http://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 5.4.1 (Debian 5.4.1-5) 20170205
configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-5 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg
libavutil 55. 34.101 / 55. 34.101
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.101 / 57. 56.101
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
Option accurate_seek (enable/disable accurate seeking with -ss) cannot be applied to output url test.mkv -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to.
Error parsing options for output file test.mkv.
Error opening output files: Invalid argument
Python code:
import subprocess
command = "for i in {0..3} ; do ffmpeg -accurate_seek -ss `echo $i*60.0 | bc` -i test.mkv -frames:v 1 images/test_img_$i.jpg ; done"
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr = subprocess.STDOUT, shell=True)
output = process.communicate()
print output[0].replace('\\n' , '\n')
The thing is, when I run this in my OS X terminal, it works fine, but there is some issue with doing it on Ubuntu. Does anyone have experience with this issue?
Output when I run the exact same code on OS X:
ffmpeg version 3.0.2 Copyright (c) 2000-2016 the FFmpeg developers
built with Apple LLVM version 7.3.0 (clang-703.0.31)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.0.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-opencl --enable-libx264 --enable-libmp3lame --enable-libxvid --enable-libvpx --enable-vda
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libavresample 3. 0. 0 / 3. 0. 0
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, matroska,webm, from 'test.mkv':
Metadata:
COMPATIBLE_BRANDS: iso6avc1mp41
MAJOR_BRAND : dash
MINOR_VERSION : 0
ENCODER : Lavf57.25.100
Duration: 00:02:10.20, start: 0.007000, bitrate: 2729 kb/s
Stream #0:0: Video: h264 (High), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
Metadata:
HANDLER_NAME : VideoHandler
DURATION : 00:02:10.172000000
Stream #0:1(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
Metadata:
DURATION : 00:02:10.201000000
[swscaler @ 0x7f9b4b008000] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to 'images/test_img_0.jpg':
Metadata:
COMPATIBLE_BRANDS: iso6avc1mp41
MAJOR_BRAND : dash
MINOR_VERSION : 0
encoder : Lavf57.25.100
Stream #0:0: Video: mjpeg, yuvj420p(pc), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 23.98 fps, 23.98 tbn, 23.98 tbc (default)
Metadata:
HANDLER_NAME : VideoHandler
DURATION : 00:02:10.172000000
encoder : Lavc57.24.102 mjpeg
Side data:
unknown side data type 10 (24 bytes)
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
frame= 1 fps=0.0 q=7.2 Lsize=N/A time=00:00:00.04 bitrate=N/A speed= 1x
video:73kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
(...)
One thing I forgot to mention is that on my Mac I used Homebrew to install ffmpeg, whereas on Ubuntu I used a static build.
Other Stack Overflow reference: Fastest way to extract frames using ffmpeg?
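A plausible explanation (my own reading, not stated in the original post): shell=True hands the string to /bin/sh, which on Ubuntu is dash rather than bash, and dash does not perform brace expansion, so {0..3} reaches bc literally, producing the (standard_in) 1: syntax error and leaving -ss without a value. A minimal sketch that sidesteps the shell entirely by computing the seek offsets in Python and passing an argument list to ffmpeg (the file names and 60-second interval mirror the question; the helper name is mine):

import subprocess

# Hypothetical helper: extract one frame every `interval` seconds without
# relying on bash-only features (brace expansion, backticks), so behaviour
# no longer depends on which shell /bin/sh points to.
def extract_frames(video="test.mkv", out_pattern="images/test_img_%d.jpg",
                   count=4, interval=60.0):
    for i in range(count):
        cmd = [
            "ffmpeg",
            "-accurate_seek",
            "-ss", str(i * interval),  # seek offset computed in Python, not via bc
            "-i", video,
            "-frames:v", "1",
            out_pattern % i,
        ]
        subprocess.check_call(cmd)  # raises CalledProcessError if ffmpeg fails

extract_frames()

Because the loop now lives in Python rather than in the shell, the same code should behave identically on macOS and Ubuntu regardless of the default shell.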
-
Superimposing two videos onto a static image?
15 December 2014, by Archagon
I have two videos that I'd like to combine into a single video, in which both videos would sit on top of a static background image. (Think something like this.) My requirements are that the software I use is free, that it runs on OS X, and that I don't have to re-encode my videos an excessive number of times. I'd also like to be able to perform this operation from the command line or via a script, since I'll be doing it a lot. (But this isn't strictly necessary.)
I tried fiddling with ffmpeg for a couple of hours, but it just doesn't seem very well suited for post-processing. I could potentially hack something together via the overlay feature, but so far I haven't figured out how to do it, aside from painstakingly converting the image to a video (which takes 2x as long as the length of my videos!) and then superimposing the two videos onto it in another rendering step.
Any tips? Thank you!
Update:
Thanks to LordNeckbeard's help, I was able to achieve my desired result with a single ffmpeg call! Unfortunately, encoding is quite slow, taking 6 seconds to encode 1 second of video. I believe this is caused by the background image. Any tips on speeding up encoding? Here's the ffmpeg log:
MacBook-Pro:Video archagon$ ffmpeg -loop 1 -i underlay.png -i test-slide-video-short.flv -i test-speaker-video-short.flv -filter_complex "[1:0]scale=400:-1[a];[2:0]scale=320:-1[b];[0:0][a]overlay=0:0[c];[c][b]overlay=0:0" -shortest -t 5 -an output.mp4
ffmpeg version 1.0 Copyright (c) 2000-2012 the FFmpeg developers
built on Nov 14 2012 16:18:58 with Apple clang version 4.0 (tags/Apple/clang-421.0.60) (based on LLVM 3.1svn)
configuration: --prefix=/opt/local --enable-swscale --enable-avfilter --enable-libmp3lame --enable-libvorbis --enable-libopus --enable-libtheora --enable-libschroedinger --enable-libopenjpeg --enable-libmodplug --enable-libvpx --enable-libspeex --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/clang --arch=x86_64 --enable-yasm --enable-gpl --enable-postproc --enable-libx264 --enable-libxvid
libavutil 51. 73.101 / 51. 73.101
libavcodec 54. 59.100 / 54. 59.100
libavformat 54. 29.104 / 54. 29.104
libavdevice 54. 2.101 / 54. 2.101
libavfilter 3. 17.100 / 3. 17.100
libswscale 2. 1.101 / 2. 1.101
libswresample 0. 15.100 / 0. 15.100
libpostproc 52. 0.100 / 52. 0.100
Input #0, image2, from 'underlay.png':
Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
Stream #0:0: Video: png, rgb24, 1024x768, 25 fps, 25 tbr, 25 tbn, 25 tbc
Input #1, flv, from 'test-slide-video-short.flv':
Metadata:
author :
copyright :
description :
keywords :
rating :
title :
presetname : Custom
videodevice : VGA2USB Pro V3U30343
videokeyframe_frequency: 5
canSeekToEnd : false
createdby : FMS 3.5
creationdate : Mon Aug 16 16:35:34 2010
encoder : Lavf54.29.104
Duration: 00:50:32.75, start: 0.000000, bitrate: 90 kb/s
Stream #1:0: Video: vp6f, yuv420p, 640x480, 153 kb/s, 8 tbr, 1k tbn, 1k tbc
Input #2, flv, from 'test-speaker-video-short.flv':
Metadata:
author :
copyright :
description :
keywords :
rating :
title :
presetname : Custom
videodevice : Microsoft DV Camera and VCR
videokeyframe_frequency: 5
audiodevice : Microsoft DV Camera and VCR
audiochannels : 1
audioinputvolume: 75
canSeekToEnd : false
createdby : FMS 3.5
creationdate : Mon Aug 16 16:35:34 2010
encoder : Lavf54.29.104
Duration: 00:50:38.05, start: 0.000000, bitrate: 238 kb/s
Stream #2:0: Video: vp6f, yuv420p, 320x240, 204 kb/s, 25 tbr, 1k tbn, 1k tbc
Stream #2:1: Audio: mp3, 22050 Hz, mono, s16, 32 kb/s
File 'output.mp4' already exists. Overwrite ? [y/N] y
using cpu capabilities: none!
[libx264 @ 0x7fa84c02f200] profile High, level 3.1
[libx264 @ 0x7fa84c02f200] 264 - core 119 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4':
Metadata:
encoder : Lavf54.29.104
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1024x768, q=-1--1, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 (png) -> overlay:main
Stream #1:0 (vp6f) -> scale
Stream #2:0 (vp6f) -> scale
overlay -> Stream #0:0 (libx264)
Press [q] to stop, [?] for help
Update 2:
It works! One important tweak was to move the underlay.png input to the end of the input list. This increased performance substantially. Here's my final ffmpeg call. (The maps at the end aren't required for this particular arrangement, but I sometimes have a few extra audio inputs that I want to map to my output.)
ffmpeg
-i VideoOne.flv
-i VideoTwo.flv
-loop 1 -i Underlay.png
-filter_complex "[2:0] [0:0] overlay=20:main_h/2-overlay_h/2 [overlay];[overlay] [1:0] overlay=main_w-overlay_w-20:main_h/2-overlay_h/2 [output]"
-map [output]:v
-map 0:a
OutputVideo.m4v
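For anyone wanting to drive this from a script, as the question mentions, here is one possible Python wrapper around the same call. This is only a sketch, not the poster's own tooling: the file names and overlay offsets are copied from the command above, -shortest is my own addition as a guard so the looped background cannot run indefinitely, and the "?" on the audio map is there so the map is skipped if input 0 has no audio track.

import subprocess

def overlay_on_background(video_one, video_two, background, output):
    # Mirrors the ffmpeg call from Update 2: the static background image is
    # the last input and is looped; each video is overlaid at a fixed
    # horizontal offset and vertically centred on the background.
    filter_complex = (
        "[2:0][0:0]overlay=20:main_h/2-overlay_h/2[overlay];"
        "[overlay][1:0]overlay=main_w-overlay_w-20:main_h/2-overlay_h/2[out]"
    )
    cmd = [
        "ffmpeg",
        "-i", video_one,
        "-i", video_two,
        "-loop", "1", "-i", background,
        "-filter_complex", filter_complex,
        "-map", "[out]",
        "-map", "0:a?",      # optional: only mapped if input 0 actually has audio
        "-shortest",         # stop when the shortest non-looped input ends
        output,
    ]
    subprocess.check_call(cmd)

overlay_on_background("VideoOne.flv", "VideoTwo.flv", "Underlay.png", "OutputVideo.m4v")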