
Other articles (58)
-
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...) -
Writing a news item
21 June 2013. Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
In MédiaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news-item creation form.
News-item creation form: for a document of type news item, the default fields are: publication date (customize the publication date) (...)
On other sites (5404)
-
Revision 36317: There are certain cases where revision_mot cannot be used, notably ...
16 March 2010, by kent1@… - Log: There are certain cases where revision_mot cannot be used, notably when hooking into an article edit form from a pre_edition pipeline ... Conflict handling is skipped in that case.
-
Send sprop-parameter-sets inband rather than in SDP
21 January 2021, by Max. I am trying to use ffmpeg to stream an MP4 file over RTP. I am sending the stream to an SFU server that will broadcast the stream to users. The clients expect to receive an H.264 video stream with profile-level-id 42e01f. The issue I'm having is that the video received by the clients does not decode properly (just a black screen). If I transcode the video before sending, then everything works correctly. If I dump the SDP that describes what ffmpeg is sending, there is a distinct difference between the transcoded and non-transcoded versions.

For the non-transcoded version, my ffmpeg command looks like


ffmpeg -re \
 -v info \
 -protocol_whitelist 'pipe,tls,file,http,https,tcp,rtp' \
 -f mp4 \
 -i 'https://storage.googleapis.com/my_bucket/file' \
 -map 0:v:0 \
 -c:v copy \
 -f rtp \
 -sdp_file out.sdp \
 'rtp://142.93.14.110:40425?rtcpport=45155'



When I run this command, out.sdp contains the line


a=fmtp:96 packetization-mode=1; sprop-parameter-sets=Z0LAH9kAUAW7AWoCAgKAAAH0gABdwAeMGSQ=,aMuMsg==; profile-level-id=42C01F
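
As a side note on the profile question (base64 and xxd here are just generic tools I am using to inspect the bytes, not part of the ffmpeg setup): the sprop-parameter-sets value is simply the base64-encoded SPS and PPS, so the profile-level-id can be read straight out of the decoded SPS:

echo 'Z0LAH9kAUAW7AWoCAgKAAAH0gABdwAeMGSQ=' | base64 -d | xxd | head -n 1

The SPS begins 67 42 c0 1f, and the three bytes after the NAL header (42 c0 1f) are the profile-level-id, i.e. the 42C01F advertised in the SDP rather than the 42e01f the clients ask for.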



However, if I change -c:v copy to -c:v libx264 -preset ultrafast, then the SDP line changes to a=fmtp:96 packetization-mode=1;. Given that there is no SDP exchange between ffmpeg and my SFU, I think the issue is that ffmpeg needs to send the sprops in-band rather than setting them in the SDP. Any help here would be amazing. The other possible issue is that the profile levels are slightly different.
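
To illustrate what I mean by "in-band" (an untested sketch on my side; using the dump_extra bitstream filter here is my assumption, not something I know to be the required fix): keep -c:v copy but re-insert the codec extradata, i.e. the SPS/PPS that sprop-parameter-sets encodes, ahead of keyframes so the parameter sets travel inside the RTP stream itself:

ffmpeg -re \
 -protocol_whitelist 'pipe,tls,file,http,https,tcp,rtp' \
 -f mp4 \
 -i 'https://storage.googleapis.com/my_bucket/file' \
 -map 0:v:0 \
 -c:v copy \
 -bsf:v dump_extra \
 -f rtp \
 -sdp_file out.sdp \
 'rtp://142.93.14.110:40425?rtcpport=45155'

Whether the RTP packetizer then repeats those parameter sets the way the clients expect is exactly what I am not sure about.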

-
How to burn image-based subtitles onto a video using 'overlay_cuda', the ffmpeg video filter
4 September 2020, by jgkim0518. I have to burn image-based subtitles onto a video using ffmpeg (v4.3) with the CUDA hardware accelerator. I want to use 'overlay_cuda' in a filter complex.


This is my command.


./ffmpeg -init_hw_device cuda=cuda:1 -hwaccel cuda -filter_hw_device cuda -hwaccel_output_format cuda -i 2_dump.ts -filter_complex "[v:0]scale_npp=1920:1080,format=yuv420p[base_video];[base_video][s:3]overlay_cuda[resurlt_video]" -map "[resurlt_video]" -map 0:a -c:v h264_nvenc -f mpegts subtitle_test.ts



But it fails.


This is the output, including the error messages.


ffmpeg version 4.3 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-39)
configuration: --prefix=/usr/local --pkg-config-flags=--static --extra-cflags=-I/usr/local/include --extra-ldflags='-L/usr/local/lib -L/usr/local/lib64' --extra-libs='-lm -lpthread -lc' --bindir=/usr/local/bin --enable-cross-compile --enable-pic --enable-gpl --enable-libfdk_aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --enable-cuda-nvcc --enable-cuvid --enable-nvenc --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64 --enable-libass --enable-static --disable-shared
libavutil 56. 51.100 / 56. 51.100
libavcodec 58. 91.100 / 58. 91.100
libavformat 58. 45.100 / 58. 45.100
libavdevice 58. 10.100 / 58. 10.100
libavfilter 7. 85.100 / 7. 85.100
libswscale 5. 7.100 / 5. 7.100
libswresample 3. 7.100 / 3. 7.100
libpostproc 55. 7.100 / 55. 7.100
[mpegts @ 0x4971500] start time for stream 5 is not set in estimate_timings_from_pts
[mpegts @ 0x4971500] start time for stream 6 is not set in estimate_timings_from_pts
[mpegts @ 0x4971500] start time for stream 7 is not set in estimate_timings_from_pts
[mpegts @ 0x4971500] start time for stream 8 is not set in estimate_timings_from_pts
[mpegts @ 0x4971500] start time for stream 9 is not set in estimate_timings_from_pts
[mpegts @ 0x4971500] start time for stream 10 is not set in estimate_timings_from_pts
[mpegts @ 0x4971500] start time for stream 11 is not set in estimate_timings_from_pts
[mpegts @ 0x4971500] start time for stream 12 is not set in estimate_timings_from_pts
Input #0, mpegts, from '2_dump.ts':
Duration: 00:05:09.80, start: 1.400000, bitrate: 12595 kb/s
Program 1 
Metadata:
 service_name : Service01
 service_provider: AMUZLAB
Stream #0:0[0x100](eng): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, 5.1(side), fltp, 384 kb/s
Stream #0:1[0x101](ind): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 192 kb/s
Stream #0:2[0x102](zho): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 192 kb/s
Stream #0:3[0x103](kho): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 192 kb/s
Stream #0:4[0x104]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(top first), 1920x1080 [SAR 1:1 DAR 16:9], 25 fps, 50 tbr, 90k tbn, 50 tbc
Stream #0:5[0x105](CHI): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:6[0x106](CHS): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:7[0x107](IND): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:8[0x108](THA): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:9[0x109](MAN): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:10[0x10a](MON): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:11[0x10b](BUR): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
Stream #0:12[0x10c](ENG): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
File 'subtitle_test.ts' already exists. Overwrite? [y/N] y
Stream mapping:
Stream #0:4 (h264) -> scale_npp (graph 0)
Stream #0:8 (dvbsub) -> overlay_cuda:overlay (graph 0)
overlay_cuda (graph 0) -> Stream #0:0 (h264_nvenc)
Stream #0:0 -> #0:1 (ac3 (native) -> mp2 (native))
Stream #0:1 -> #0:2 (ac3 (native) -> mp2 (native))
Stream #0:2 -> #0:3 (ac3 (native) -> mp2 (native))
Stream #0:3 -> #0:4 (ac3 (native) -> mp2 (native))
Press [q] to stop, [?] for help
[mpegts @ 0x4971500] sub2video: using 1920x1080 canvas
Impossible to convert between the formats supported by the filter 'Parsed_scale_npp_0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:4
Conversion failed!



And this is the command line that succeeded using 'overlay', the software video filter, without hardware acceleration.


./ffmpeg -hwaccel cuda -re -i 2_dump.ts -filter_complex "[0:v:0][0:s:3]overlay[v]" -map "[v]" -map 0:a -c:v h264_nvenc -f mpegts subtitle_test.ts
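
The direction I am considering next is an untested sketch: keep both overlay_cuda inputs in CUDA memory by dropping the format=yuv420p step, which seems to be what forces the failing software conversion, and by uploading the rendered subtitle pictures with hwupload_cuda before the overlay. Whether hwupload_cuda accepts the dvb_subtitle frames, and which pixel formats overlay_cuda takes on its second input, are assumptions on my part.

./ffmpeg -init_hw_device cuda=cuda:1 -hwaccel cuda -filter_hw_device cuda -hwaccel_output_format cuda -i 2_dump.ts -filter_complex "[v:0]scale_npp=1920:1080[base_video];[s:3]hwupload_cuda[sub_pics];[base_video][sub_pics]overlay_cuda[result_video]" -map "[result_video]" -map 0:a -c:v h264_nvenc -f mpegts subtitle_test.ts

The idea is simply that overlay_cuda appears to want CUDA frames on both inputs, while format=yuv420p pulls the scaled video back into system memory.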



How can I burn image-based subtitles onto a video using ffmpeg (v4.3) with the CUDA hardware accelerator?


Thank you.