
Other articles (40)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a chosen theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
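For context, the two extra actions described in this excerpt (probing the streams and extracting a thumbnail) are the kind of job usually handed to ffprobe and ffmpeg; a minimal sketch, assuming a local file named source.mp4 and an output name thumbnail.jpg (both names are illustrative, not SPIPMotion's actual internals):

# Retrieve technical information about the file's audio and video streams
ffprobe -v error -show_format -show_streams source.mp4

# Generate a thumbnail by extracting a single frame (here from the 5-second mark)
ffmpeg -ss 5 -i source.mp4 -frames:v 1 thumbnail.jpg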
On other sites (6708)
-
Copy m3u8 segment files into a single mp4 format
26 June 2018, by parsa
Hi, I used the command below to copy m3u8 segment files into a single mp4 file.
I run this command from a C# Process class.
-y -i "D:\OtherProjects\ConvertProj\video\2018\4\1\m3u8\200p\out.m3u8"
-y -i "D:\OtherProjects\ConvertProj\video\2018\4\1\m3u8\360p\out.m3u8"
-y -i "D:\OtherProjects\ConvertProj\video\2018\4\1\m3u8\480p\out.m3u8"
-y -i "D:\OtherProjects\ConvertProj\video\2018\4\1\m3u8\720p\out.m3u8"
-map 0 -c:v copy -c:a copy -threads 0 "D:\OtherProjects\ConvertProj\video\2018\4\1\1-200.mp4"
-map 1 -c:v copy -c:a copy -threads 0 "D:\OtherProjects\ConvertProj\video\2018\4\1\1-360.mp4"
-map 2 -c:v copy -c:a copy -threads 0 "D:\OtherProjects\ConvertProj\video\2018\4\1\1-480.mp4"
-map 3 -c:v copy -c:a copy -threads 0 "D:\OtherProjects\ConvertProj\video\2018\4\1\1.mp4"
I get this error:
[hls,applehttp @ 00000000047e3400] Failed to open segment of playlist 0
Last message repeated 353 times
[hls,applehttp @ 00000000047e3400] Error when loading first segment 'out0.ts'
D:\OtherProjects\ConvertProj\video\2018\4\6208-3905956\m3u8\200p\out.m3u8: Invalid data found when processing input
What is the problem? What must I do? Is this a bug?
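For reference (this is not from the original post), remuxing an HLS playlist into an MP4 with stream copy is usually done with one playlist per ffmpeg invocation; a minimal sketch, assuming the .ts segments named in the playlist (such as out0.ts) are still present next to it:

ffmpeg -y -i "D:\OtherProjects\ConvertProj\video\2018\4\1\m3u8\200p\out.m3u8" -c copy "D:\OtherProjects\ConvertProj\video\2018\4\1\1-200.mp4"

The "Failed to open segment" messages above generally mean that ffmpeg cannot find or read the segment files referenced by the playlist, so checking where those .ts files actually are is a reasonable first step before suspecting a bug.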
-
Synchronizing RTSP signals with FFmpeg
3 December 2019, by Taits
I have managed to merge two RTSP signals into one with FFmpeg, but in the resulting stream the first input has a delay of between 2 and 5 seconds with respect to the second. It is always the first input that is delayed.
The two RTSP signals come from the same camera model, same configuration, same room ...
However, if I put the same RTSP signal (either of the two) as input 1 and input 2, the same thing happens. Despite being the same signal, the first input is delayed compared to the second one.
How could I get them synchronized?
This is the command that I execute:
ffmpeg -rtsp_transport tcp -thread_queue_size 512 -rtbufsize 50M -r 15
-i rtsp://XXXX -rtsp_transport tcp -thread_queue_size 512 -rtbufsize 50M -r 15
-c:a aac -i rtsp://YYYY -filter_complex "[0:v]pad=iw*2:ih,setpts=PTS-STARTPTS[bg];
[1:v]setpts=PTS-STARTPTS[fg]; [bg][fg]overlay=w[out]" -map "[out]" -f hls
-hls_time 2 -hls_list_size 5 -use_localtime 1 -use_localtime_mkdir 1
-hls_segment_filename 'LIVE/file-%s.ts' -map a -ar 16000 -ac 1 -ab 64000 -c:a aac
-y output.m3u8
Here is the process information:
ffmpeg version 3.4.2 Copyright (c) 2000-2018 the FFmpeg developers
built with Apple LLVM version 9.0.0 (clang-900.0.39.2)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.4.2 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --disable-jack --enable-gpl --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --enable-videotoolbox --disable-lzma
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libavresample 3. 7. 0 / 3. 7. 0
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
[rtsp @ 0x7fe284000e00] Missing PPS in sprop-parameter-sets, ignoring
Input #0, rtsp, from 'rtsp://XXXX':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1280x720, 15 fps, 15 tbr, 90k tbn, 30 tbc
Stream #0:1: Audio: aac (LC), 16000 Hz, mono, fltp
[rtsp @ 0x7fe28484de00] Missing PPS in sprop-parameter-sets, ignoring
Input #1, rtsp, from 'rtsp://XXXX':
Metadata:
title : Media Presentation
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #1:0: Video: h264 (Main), yuv420p(progressive), 1280x720, 15 fps, 15 tbr, 90k tbn, 30 tbc
Stream #1:1: Audio: aac (LC), 16000 Hz, mono, fltp
Stream mapping:
Stream #0:0 (h264) -> pad (graph 0)
Stream #1:0 (h264) -> setpts (graph 0)
overlay (graph 0) -> Stream #0:0 (libx264)
Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x7fe286802000] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7fe286802000] profile High, level 4.0
[libx264 @ 0x7fe286802000] 264 - core 152 r2854 e9a5903 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=1 weightp=2 keyint=250 keyint_min=15 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
[hls @ 0x7fe286800000] Opening 'LIVE/file-1524728763.ts' for writing
Output #0, hls, to 'output.m3u8':
Metadata:
title : Media Presentation
encoder : Lavf57.83.100
Stream #0:0: Video: h264 (libx264), yuv420p, 2560x720, q=-1--1, 15 fps, 90k tbn, 15 tbc (default)
Metadata:
encoder : Lavc57.107.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Stream #0:1: Audio: aac (LC), 16000 Hz, mono, fltp, 64 kb/s
Metadata:
encoder : Lavc57.107.100 aac
[hls @ 0x7fe286800000] Opening 'LIVE/file-1524728782.ts' for writing
[hls @ 0x7fe286800000] Opening 'output.m3u8.tmp' for writing
[hls @ 0x7fe286800000] Opening 'output.m3u8.tmp' for writing
frame= 396 fps= 15 q=-1.0 Lsize=N/A time=00:00:27.00 bitrate=N/A speed=1.05x
video:2946kB audio:147kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
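One variant sometimes suggested for live inputs (it is not part of the original post) is to keep the demuxers' wall-clock arrival times instead of resetting both inputs to zero with setpts, so the two streams retain their relative offset; a hedged sketch of that idea, reusing the structure of the command above:

ffmpeg -rtsp_transport tcp -thread_queue_size 512 -use_wallclock_as_timestamps 1 -i rtsp://XXXX \
 -rtsp_transport tcp -thread_queue_size 512 -use_wallclock_as_timestamps 1 -i rtsp://YYYY \
 -filter_complex "[0:v]pad=iw*2:ih[bg];[bg][1:v]overlay=w[out]" \
 -map "[out]" -map 0:a -c:v libx264 -c:a aac -ar 16000 -ac 1 -b:a 64k \
 -f hls -hls_time 2 -hls_list_size 5 -y output.m3u8

Whether this actually removes the two-to-five-second offset depends on how the camera timestamps its RTSP streams, so it is only a starting point rather than a guaranteed fix.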
-
Suppress black margins on the sides of an animation
26 April 2018, by Clinton Winant
I need to make an animation out of a collection of jpeg images. The image size, as given by display, is 1200x900. I can control the size of the jpg images with convert, but I am not sure what a good size would be. I use the following ffmpeg call:
ffmpeg -f image2 -i img%04d.jpg -r 24 sound.avi
In spite of a long string of warnings like
Past duration 0.879997 too large
sound.avi is produced; however, the animation includes black right and left margins (see the attached screenshot of the first frame)
that I need to suppress. I am under the impression that the 4:3 format of the jpg images is standard? I view the animation with
mplayer sound.avi
The OS is Debian buster
Further experiments suggest that the black margins disappear if the jpg files have an aspect ratio of 16:9. Is that the only AR possible?
The output of
ffmpeg -f image2 -i img%04d.jpg -vf cropdetect -vframes 5 -f null
requested by @Gyan is:
ffmpeg version 3.4.2-2 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 7 (Debian 7.3.0-15)
configuration: --prefix=/usr --extra-version=2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libavresample 3. 7. 0 / 3. 7. 0
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
Trailing options were found on the commandline.
Input #0, image2, from 'img%04d.jpg':
Duration: 00:00:00.40, start: 0.000000, bitrate: N/A
Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1200x900 [SAR 150:150 DAR 4:3], 25 fps, 25 tbr, 25 tbn, 25 tbc
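Building on the observation above that 16:9 sources show no side margins, one option (not from the original exchange) is to crop the 4:3 frames to a 16:9 area before encoding; the side bars may simply be added by the player when it fits a 4:3 video into a 16:9 window rather than being stored in the file. A minimal sketch, with the output name and target size purely illustrative:

# Scale the 1200x900 (4:3) frames to 1280x960, then crop the centred 1280x720 (16:9) area
ffmpeg -framerate 24 -i img%04d.jpg -vf "scale=1280:-2,crop=1280:720" -pix_fmt yuv420p sound169.mp4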