
Other articles (12)
-
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites that publish uploaded documents of all types.
It creates "médias", namely: a "média" is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only one document can be linked to a given "média" article;
-
Possible deployments
31 January 2010
Two types of deployment are possible, depending on two aspects: the chosen installation method (standalone or as a farm); the expected number of daily encodings and the expected traffic.
Encoding video is a heavy process that consumes a great deal of system resources (CPU and RAM), and all of this must be taken into account. The system is therefore only feasible on one or more dedicated servers.
Single-server version
The single-server version consists of using only one (...)
-
Managing creation and editing rights for objects
8 February 2011
By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, which can be changed in the form-template management; adding notes to articles; adding captions and annotations to images;
On other sites (3094)
-
What is the proper syntax to use ffmpeg to stream H.264 using RTSP over an HTTP tunnel?
1 December 2017, by NewtownGuy
I'm trying to send an H.264 video stream at 10 fps over RTSP over an HTTP tunnel, so that the video can be accessed remotely through a firewall, ideally using only a single port for all communication. I can't just use plain RTSP because, besides the one open port on which the stream is requested, it opens two server ports that it chooses randomly for the video stream, and those can't be mapped through the router to the outside world, which is a common problem.
I tried VLC, but it won't let me control the server ports it opens. ffmpeg seems to offer more control over port selection, but I can't get the syntax right. Here's the command I'm using, where my 10 fps H.264 stream comes from a pipe, /home/vout1; I also tried limiting the server ports in case ffmpeg won't let me use just one port for everything:
root@Z-1:~# ffmpeg -r 10 -i /home/vout1 -f rtsp -rtsp_transport http -min_port 25000 -max_port 25009 rtsp://localhost:8554
Here's the result, with the error messages included:
ffmpeg version 3.2.4-static http://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 5.4.1 (Debian 5.4.1-5) 20170205
configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-5 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libxvid
libavutil 55. 34.101 / 55. 34.101
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.101 / 57. 56.101
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
Input #0, h264, from '/home/vout1':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: h264 (High), yuv420p(progressive), 960x540, 25 fps, 25 tbr, 1200k tbn, 50 tbc
[rtsp @ 0x3d68b30] Unsupported lower transport method, only UDP and TCP are supported for output.
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mpeg4 (native))
    Last message repeated 1 times
ffmpeg sees the stream, since it got the resolution right, but it thinks my stream is at the default 25 fps even though I specified -r 10 to say the frame rate is only 10 fps. Second, the stream is not being created.
What is the proper command-line syntax, and how can I make ffmpeg use one port for everything, even if I can only have one stream?
Thank you in advance for your help.
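A hedged reading of the failure, with a sketch of a possible fix (not from the original post): the muxer error above says ffmpeg's RTSP output supports only UDP and TCP as the lower transport, so -rtsp_transport http is rejected outright. Forcing TCP interleaves the RTP packets on the single RTSP connection, which also does away with the randomly chosen UDP server ports. The server and the /live path are assumptions; ffmpeg's RTSP output publishes to an existing RTSP server rather than serving players itself:

# sketch only: assumes an RTSP server such as MediaMTX is already listening
# on localhost:8554 and accepts the path /live; -framerate (an input option
# of the raw h264 demuxer) declares the true 10 fps rate, and
# -rtsp_transport tcp keeps everything on the one TCP connection instead of
# opening extra UDP ports
ffmpeg -framerate 10 -i /home/vout1 -c:v copy -f rtsp -rtsp_transport tcp rtsp://localhost:8554/live

If RTSP-over-HTTP tunnelling is still needed to cross the firewall, it would then have to be provided by the server side, not by ffmpeg.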
-
I cannot add subtitles to MP4 using FFmpeg
14 January 2024, by Nau
I have a video called test_video.mp4, downloaded from YouTube.

This is the information of the video obtained by running ffmpeg -i test_video.mp4:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_video.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.19.102
 Duration: 00:00:17.62, start: 0.000000, bitrate: 265 kb/s
 Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x480 [SAR 1:1 DAR 4:3], 138 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
 Metadata:
 handler_name : ISO Media file produced by Google Inc.
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
 Metadata:
 handler_name : ISO Media file produced by Google Inc.



I also have a subtitle file called test_sub.srt:


1
00:00:01,000 --> 00:00:05,000
Hello

2
00:00:10,000 --> 00:00:15,000
World



When I run

ffmpeg -i test_video.mp4 -i test_sub.srt -map 0:0 -map 0:1 -map 1:0 -c:v copy -c:a copy -c:s mov_text -metadata:s:s:0 language=eng test_out.mp4

soft subtitles are added as expected.



The problem comes when I try to do the same with a bigger video called test2_video.mp4. I run

ffmpeg -i test2_video.mp4 -i test_sub.srt -map 0:0 -map 0:1 -map 1:0 -c:v copy -c:a copy -c:s mov_text -metadata:s:s:0 language=eng test2_out.mp4

but subtitles are not added to test2_out.mp4. This is the information of the video when I run ffmpeg -i test2_video.mp4:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test2_video.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf57.83.100
 Duration: 00:59:11.08, start: 0.000000, bitrate: 2131 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1997 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
 Metadata:
 handler_name : VideoHandler
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
 Metadata:
 handler_name : SoundHandler



Why does this happen? The version of FFmpeg that I'm running is 4.2.7-0ubuntu0.1.
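A hedged diagnostic sketch, not part of the original question: the two commands differ only in the file names, so a first step would be to check whether test2_out.mp4 really lacks a subtitle stream or whether the player simply isn't displaying it. ffprobe can list just the subtitle streams:

# sketch: list any subtitle streams in the output container;
# -select_streams s restricts the report to subtitle streams
ffprobe -v error -select_streams s -show_entries stream=index,codec_name,codec_type test2_out.mp4

If this prints a stream, the muxing worked and the problem is on the playback side; if it prints nothing, the stderr of the failing ffmpeg run would be needed to see where the mapping or the mov_text encoding went wrong.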


-
Is there a way to eliminate seek time when decoding part of a video using ffmpeg?
17 December 2019, by Babis
I've got some MKV videos encoded with FFV1. For each frame, I want to run some complex and time-intensive Python or MATLAB code, so I'm using multithreading, where each thread works on an individual image.
I’ve tried extracting a single frame from the video using -ss, but it’s terribly inefficient.
The most efficient way is to decompress everything into images in one go, but then I'm writing to disk and later reading back from disk, so it's not ideal either.
I've tried using a RAM disk to export the images to, reading them back from Python/MATLAB, but that isn't great performance-wise either. Also, I have to split the export into several batches, as each video file is 20 GB and all of the exported images won't fit into memory.
Is there a way to rapidly extract individual frames with ffmpeg directly into RAM (or a RAM disk), so that they can be used by another program? For example, using something like a lookup table.
For reference, each video is about 20 GB, comprising 50000 frames, all of which are keyframes (it's for archival purposes).
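A hedged sketch of one way to avoid both the per-frame seeks and the disk round-trip, assuming Python (the question also mentions MATLAB) and a known frame size; input.mkv and the 960x540 resolution below are placeholders, not taken from the original post. ffmpeg decodes the whole file once, sequentially, and writes raw frames to a pipe that the consumer reads in fixed-size chunks, so nothing is ever written to disk:

import subprocess
import numpy as np

# placeholder values: take the real path and frame size from the video
WIDTH, HEIGHT = 960, 540
FRAME_BYTES = WIDTH * HEIGHT * 3  # rgb24 = 3 bytes per pixel

# decode once, sequentially, to raw rgb24 frames on stdout
proc = subprocess.Popen(
    ["ffmpeg", "-v", "error", "-i", "input.mkv",
     "-f", "rawvideo", "-pix_fmt", "rgb24", "-"],
    stdout=subprocess.PIPE,
)
while True:
    raw = proc.stdout.read(FRAME_BYTES)
    if len(raw) < FRAME_BYTES:  # end of stream
        break
    frame = np.frombuffer(raw, np.uint8).reshape(HEIGHT, WIDTH, 3)
    # ... hand `frame` to a worker from the processing pool here ...
proc.wait()

Frames then exist only in RAM, one (or a bounded queue) at a time, so the 20 GB file never has to fit in memory; the workers consume frames as fast as the single decoder produces them.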