
Media (1)
-
Bee video in portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (75)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors can edit their own information on the authors page -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used as a fallback.
The HTML5 player used was created specifically for MediaSPIP: its appearance can be fully customised to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
On other sites (8335)
-
Link ffmpeg to Pydub in Serverless layer
15 March 2021, by akai
I'm using the Serverless framework to deploy an app on AWS. I have created a layer, defined as follows in serverless.yml:

layers:
  ffmpeg:
    path: layer



I also excluded it from the main file bundle:


package:
  exclude:
    - layer/**



and defined a lambda function:


cut_audio:
  layers:
    - {Ref: FfmpegLambdaLayer}



In this function I use the Pydub library, which needs to access the ffmpeg layer. At the moment I get the following error:
FileNotFoundError: [Errno 2] No such file or directory: 'ffprobe'
meaning I have to link ffmpeg as AudioSegment.converter(path).


How do I get the path of my layer?


Edit: could I solve this by bundling both Pydub and ffmpeg in the layer?


Edit 2: the eziotedeschi/AWS-Lambda-Layer-Pydub GitHub repository doesn't seem to help. I get the following error:
No module named 'pydub'
I am using Python 3.8 as the runtime; that might be the issue.
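
A sketch of one way this is commonly wired up (not an authoritative answer: it assumes the layer zip ships static ffmpeg and ffprobe binaries under a bin/ folder, which Lambda extracts to /opt/bin, and the file names below are placeholders):

import os
from pydub import AudioSegment
from pydub.utils import which

# Layer contents are extracted under /opt, so a bin/ directory inside the
# layer zip shows up as /opt/bin. The exact path depends on how the zip is
# laid out -- this is an assumption, Serverless does not expose it.
os.environ["PATH"] = "/opt/bin" + os.pathsep + os.environ.get("PATH", "")

AudioSegment.converter = which("ffmpeg") or "/opt/bin/ffmpeg"
AudioSegment.ffprobe = which("ffprobe") or "/opt/bin/ffprobe"

def handler(event, context):
    # Pydub now resolves ffmpeg/ffprobe from the layer.
    audio = AudioSegment.from_file("/tmp/input.mp3")
    audio[:30_000].export("/tmp/cut.mp3", format="mp3")

Regarding Edit 2: if Pydub itself is moved into the layer, the Lambda Python runtime only picks it up when it sits under python/ (or python/lib/python3.8/site-packages/) inside the layer zip, which may explain the "No module named 'pydub'" error.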

-
FFmpeg won't play the RTSP link, but the same link works in VLC
15 March 2021, by JCH
I have an RTSP link from an IP cam.
rtsp://admin:admin1234@192.168.2.254:82/cam/realmonitor?channel=1&subtype=1


The link works fine in VLC media player but it does not play using the ffplay command. It shows a 404 error.


But rtsp://admin:admin1234@192.168.2.254:82/live will work in ffmpeg and VLC without any issues.


Does ffmpeg only support /live or something? Why doesn't my first link work? Thank you for your time.


The ffplay command:


ffplay -i rtsp://admin:admin1234@192.168.2.254:82/cam/realmonitor?channel=1&subtype=1



The error that I get:


libavutil 56. 51.100 / 56. 51.100
 libavcodec 58. 91.100 / 58. 91.100
 libavformat 58. 45.100 / 58. 45.100
 libavdevice 58. 10.100 / 58. 10.100
 libavfilter 7. 85.100 / 7. 85.100
 libswscale 5. 7.100 / 5. 7.100
 libswresample 3. 7.100 / 3. 7.100
 libpostproc 55. 7.100 / 55. 7.100
[rtsp @ 000002663268f600] method DESCRIBE failed: 404 Not Found/0
rtsp://admin:admin1234@192.168.2.254:82/cam/realmonitor?channel=1: Server returned 404 Not Found

'subtype' is not recognized as an internal or external command,
operable program or batch file.
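
The last two lines of that output point at the likely cause: the URL was not quoted, so the shell split the command at the &, ffplay only received .../realmonitor?channel=1 (which the server answered with 404 Not Found), and subtype=1 was run as a separate command. Quoting the URL should be enough; as a sketch, the same call made from Python passes the URL as a single argument so no shell quoting is involved at all:

import subprocess

# The full URL, including &subtype=1, reaches ffplay as one argv element,
# so the shell never splits it. In an interactive cmd.exe or bash session
# the equivalent fix is simply wrapping the URL in quotes.
url = "rtsp://admin:admin1234@192.168.2.254:82/cam/realmonitor?channel=1&subtype=1"
subprocess.run(["ffplay", "-i", url], check=True)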



-
ffmpeg: Input link parameters do not match the corresponding output link parameters when concatenating same-size videos
4 March 2021, by mSourire
I'm trying to combine a video and an audio file, using the following command:


ffmpeg -y -i 1.mkv -i 1.mka 
-max_muxing_queue_size 10000 -preset veryfast -r 30 -crf 20 -b:a 96000 -vbr on
-strict experimental
-filter_complex '
color=black:s=320x240:d=7ms[black0];
aevalsrc=0:d=15ms[silence1];
[black0][0]concat=n=2:v=1:a=0[video];
[1][silence1]concat=n=2:v=0:a=1[audio]'
-map [video] -map [audio] -c:v libvpx -c:a libopus output.webm



But ffmpeg returns an error:


[Parsed_concat_2 @ 0x7f8004506d00] Input link in0:v0 parameters (size 640x480, SAR 1:1) do not match the corresponding output link in0:v0 parameters (320x240, SAR 1:1)
[Parsed_concat_2 @ 0x7f8004506d00] Failed to configure output pad on Parsed_concat_2
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #0:0
[libopus @ 0x7f800500c000] 1 frames left in the queue on closing
Conversion failed!



It looks like ffmpeg is unable to concatenate the video and the black frames, complaining that the video has a different resolution, but that's not true:


#> ffprobe 1.mkv
Input #0, matroska,webm, from '1.mkv':
 Metadata:
 encoder : GStreamer matroskamux version 1.8.1.1
 creation_time : 2021-03-02T13:44:03.000000Z
 Duration: 00:01:48.41, start: 0.710000, bitrate: 757 kb/s
 Stream #0:0(eng): Video: vp8, yuv420p(progressive), 320x240, SAR 1:1 DAR 4:3, 120 tbr, 1k tbn, 1k tbc (default)
 Metadata:
 title : Video



So both the input source and the filter have the same resolution.


When I try to change the color filter to "color=s=640x480", ffmpeg says the opposite:


[Parsed_concat_2 @ 0x7fca3ca185c0] Input link in0:v0 parameters (size 320x240, SAR 1:1) do not match the corresponding output link in0:v0 parameters (640x480, SAR 1:1)
[Parsed_concat_2 @ 0x7fca3ca185c0] Failed to configure output pad on Parsed_concat_2
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument



Please help me solve this!


Full listing:


ffmpeg -y -i 1.mkv -i 1.mka
-max_muxing_queue_size 10000
-preset veryfast -r 30 -crf 20 -b:a 96000 -vbr on
-strict experimental
-filter_complex '
color=black:s=320x240:d=7ms[black0];
aevalsrc=0:d=15ms[silence1];
[black0][0]concat=n=2:v=1:a=0[video];
[1][silence1]concat=n=2:v=0:a=1[audio]'
-map [video] -map [audio] -c:v libvpx -c:a libopus output.webm

ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
 built with Apple clang version 11.0.0 (clang-1100.0.33.17)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/4.3.1_9 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack
 libavutil 56. 51.100 / 56. 51.100
 libavcodec 58. 91.100 / 58. 91.100
 libavformat 58. 45.100 / 58. 45.100
 libavdevice 58. 10.100 / 58. 10.100
 libavfilter 7. 85.100 / 7. 85.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 7.100 / 5. 7.100
 libswresample 3. 7.100 / 3. 7.100
 libpostproc 55. 7.100 / 55. 7.100
Input #0, matroska,webm, from '1.mkv':
 Metadata:
 encoder : GStreamer matroskamux version 1.8.1.1
 creation_time : 2021-03-02T13:44:03.000000Z
 Duration: 00:01:48.41, start: 0.710000, bitrate: 757 kb/s
 Stream #0:0(eng): Video: vp8, yuv420p(progressive), 320x240, SAR 1:1 DAR 4:3, 120 tbr, 1k tbn, 1k tbc (default)
 Metadata:
 title : Video
Input #1, matroska,webm, from '1.mka':
 Metadata:
 encoder : GStreamer matroskamux version 1.8.1.1
 creation_time : 2021-03-02T13:44:03.000000Z
 Duration: 00:01:48.40, start: 0.703000, bitrate: 38 kb/s
 Stream #1:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
 Metadata:
 title : Audio
Codec AVOption preset (Configuration preset) specified for output file #0 (output.webm) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
Stream mapping:
 Stream #0:0 (vp8) -> concat:in1:v0
 Stream #1:0 (opus) -> concat:in0:a0
 concat -> Stream #0:0 (libvpx)
 concat -> Stream #0:1 (libopus)
Press [q] to stop, [?] for help
[libvpx @ 0x7fe08a80bc00] v1.9.0
[libvpx @ 0x7fe08a80bc00] Bitrate not specified for constrained quality mode, using default of 256kbit/sec
Output #0, webm, to 'output.webm':
 Metadata:
 encoder : Lavf58.45.100
 Stream #0:0: Video: vp8 (libvpx), yuv420p, 320x240 [SAR 1:1 DAR 4:3], q=-1--1, 256 kb/s, 30 fps, 1k tbn, 30 tbc (default)
 Metadata:
 encoder : Lavc58.91.100 libvpx
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
 Stream #0:1: Audio: opus (libopus), 48000 Hz, stereo, flt, 96 kb/s (default)
 Metadata:
 encoder : Lavc58.91.100 libopus
[Parsed_color_0 @ 0x7fe089815940] EOF timestamp not reliable
[Parsed_concat_2 @ 0x7fe088501980] Input link in0:v0 parameters (size 640x480, SAR 1:1) do not match the corresponding output link in0:v0 parameters (320x240, SAR 1:1)
[Parsed_concat_2 @ 0x7fe088501980] Failed to configure output pad on Parsed_concat_2
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #0:0
[libopus @ 0x7fe08a810c00] 1 frames left in the queue on closing
Conversion failed!
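
The flipped error message (640x480 against a 320x240 color source, 320x240 against a 640x480 one), together with the fact that the output was already configured at 320x240 before the failure and that the failure happens while reinitializing the filters, suggests the decoded stream does not keep a single resolution for its whole duration, even though ffprobe only reports the initial 320x240 parameters. A sketch of one way around that, under that assumption: force the real video through scale and setsar so every concat input is guaranteed to be 320x240 with square pixels, and label the inputs explicitly.

import subprocess

# Hypothetical rework of the filtergraph from the question; only the
# options relevant to the mapping are kept.
filter_graph = (
    "color=black:s=320x240:d=7ms,setsar=1[black0];"
    "aevalsrc=0:d=15ms[silence1];"
    # Normalizing the real video keeps its size/SAR from ever drifting away
    # from what the concat output pad was configured with.
    "[0:v]scale=320:240,setsar=1[v0];"
    "[black0][v0]concat=n=2:v=1:a=0[video];"
    "[1:a][silence1]concat=n=2:v=0:a=1[audio]"
)
cmd = [
    "ffmpeg", "-y", "-i", "1.mkv", "-i", "1.mka",
    "-filter_complex", filter_graph,
    "-map", "[video]", "-map", "[audio]",
    "-c:v", "libvpx", "-c:a", "libopus",
    "output.webm",
]
subprocess.run(cmd, check=True)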