
Other articles (19)
-
Accepted formats
28 January 2010
The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
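For example (an illustrative command, not from the original article), the codec list can be filtered to check whether a particular codec is available:
ffmpeg -codecs | grep -i h264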
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
To begin with, we (...)
-
Adding notes and captions to images
7 February 2011
To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to adjust the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
-
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (6695)
-
Input seeking for frame at specified timestamp with Py-AV
9 December 2019, by neonScarecrow
I have a project already using Py-AV and am trying to replicate a specific ffmpeg command. The goal is to get a frame roughly around the specified timestamp.
Here's the ffmpeg command:
https://trac.ffmpeg.org/wiki/Seeking
ffmpeg -ss 14 -i https://some_url.mp4 -frames:v 1 frame_at_14_seconds.jpg
Here's my code:

import av

# return one frame around 14 seconds into the movie
target_sec = 14
container = av.open('https://some_url.mp4', 'r')
container.streams.video[0].thread_type = 'AUTO'
video_stream = next(s for s in container.streams if s.type == 'video')
time_base = float(video_stream.time_base)
target_timestamp = int(target_sec / time_base) + video_stream.start_time
video_stream.seek(target_timestamp)

for frame in container.decode(video_stream):
    frame.to_image().save('frame_at_14_seconds.jpg')
    break

Additionally, I haven't found any documentation about this, but does anyone know whether either command (ffmpeg / av.open) downloads the entire file to a temporary file behind the scenes? I'm looking for a less memory-intensive way to read one frame for every second of a video up to 60 seconds long.
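A possible per-second variant, given as a rough sketch rather than the asker's code (it assumes PyAV's container.seek() with a stream argument, and uses a placeholder input.mp4 instead of the real URL), reuses one open container and seeks before decoding each target frame:

import av

container = av.open('input.mp4')   # placeholder for the real URL
stream = container.streams.video[0]
stream.thread_type = 'AUTO'

for sec in range(60):
    # the offset is expressed in stream.time_base units when a stream is passed;
    # backward=True lands on the keyframe at or before the target time
    container.seek(int(sec / stream.time_base), stream=stream, backward=True)
    for frame in container.decode(stream):
        if frame.time is not None and frame.time >= sec:
            frame.to_image().save(f'frame_at_{sec:02d}s.jpg')
            break

Because each iteration only demuxes from the preceding keyframe up to the target frame, memory use should stay bounded regardless of how long the input is.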
-
Adding HEVC reference decoder to ffmpeg framework
4 March 2014, by Zax
I have a tweaked version of the HM reference code for the HEVC decoder, adapted to my requirements. I also know that FFmpeg version 2.1 onwards supports HEVC, but it is a necessity for me to integrate my modified HM code. Hence I have gone through this post:
http://wiki.multimedia.cx/index.php?title=FFmpeg_codec_howto
According to this post, I need to define some functions in order to add support for a new decoder to the FFmpeg framework.
The structure is:
typedef struct AVCodec
I have defined an instance of it as shown below:
AVCodec HMHEVC_decoder = {
    .name   = "hmhevc",
    .type   = AVMEDIA_TYPE_VIDEO,
    .id     = AV_CODEC_ID_HMHEVC,
    .init   = hmhevc_decode_init,
    .close  = hmhevc_decode_close,
    .decode = hmhevc_decode_frame,
};

However, looking at the other examples, I feel I have to add another field such as:
.priv_data_size = sizeof(HEVCContext),
But the problem is that I don't have any such context. So if I don't define this, what will the FFmpeg framework not provide to my decoder?
Also, is defining this private data context compulsory?
What other fields must be defined?
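For reference, a minimal sketch of how such a private context is usually wired in, assuming the pre-3.x decode API this post targets (the HMHEVCContext type and all hmhevc_* callbacks are hypothetical names, not code from the post): libavcodec allocates and zero-fills priv_data_size bytes before calling init() and exposes them as avctx->priv_data; if priv_data_size stays 0, avctx->priv_data is simply NULL.

#include "avcodec.h"

typedef struct HMHEVCContext {
    void *hm_decoder;                     /* handle to the wrapped HM reference decoder */
} HMHEVCContext;

static av_cold int hmhevc_decode_init(AVCodecContext *avctx)
{
    HMHEVCContext *s = avctx->priv_data;  /* allocated by libavcodec because priv_data_size != 0 */
    s->hm_decoder = NULL;                 /* create/initialise the HM decoder instance here */
    return 0;
}

static int hmhevc_decode_frame(AVCodecContext *avctx, void *data,
                               int *got_frame, AVPacket *avpkt)
{
    HMHEVCContext *s = avctx->priv_data;
    AVFrame *frame = data;
    /* feed avpkt->data / avpkt->size to the HM decoder, fill *frame and
       set *got_frame = 1 once a picture is ready */
    (void)s; (void)frame;
    *got_frame = 0;
    return avpkt->size;                   /* bytes consumed, or a negative AVERROR */
}

static av_cold int hmhevc_decode_close(AVCodecContext *avctx)
{
    HMHEVCContext *s = avctx->priv_data;  /* freed by libavcodec after close() returns */
    /* tear down the HM decoder instance here */
    (void)s;
    return 0;
}

AVCodec HMHEVC_decoder = {
    .name           = "hmhevc",
    .type           = AVMEDIA_TYPE_VIDEO,
    .id             = AV_CODEC_ID_HMHEVC, /* a custom id; reusing AV_CODEC_ID_HEVC lets existing demuxers find it */
    .priv_data_size = sizeof(HMHEVCContext),
    .init           = hmhevc_decode_init,
    .close          = hmhevc_decode_close,
    .decode         = hmhevc_decode_frame,
};

If the HM wrapper keeps all of its state in globals, priv_data_size can remain 0 and the callbacks can ignore avctx->priv_data; the context is mainly what allows several decoder instances to run independently.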
My main intention is that ffplay should be able to play the decoded frames.
-
Continuous RTMP Streaming with FFmpeg Without Restarting the Process for New Videos
3 April 2024, by 刘小佳
I'm facing a challenge with my workflow and would appreciate any guidance or solutions you might have. My business scenario involves a service that periodically generates new video files locally (e.g., 1.mp4, 2.mp4, ...). These files are then streamed to an RTMP server using FFmpeg, and clients pull the stream via HTTP-FLV for playback.


My goal is to ensure continuous streaming between video files without restarting the FFmpeg process each time a new video is ready to be streamed. Restarting FFmpeg for each new file introduces a disconnect in the client playback, which I'm trying to avoid to maintain stream continuity.


I've explored several approaches based on the FFmpeg Concatenate wiki (https://trac.ffmpeg.org/wiki/Concatenate), but haven't achieved the desired outcome :

Approach 1:

Using a list.txt file with ffconcat version 1.0 and dynamically updating the file (next.mp4) being played:

ffconcat version 1.0
file next.mp4
file next.mp4



And then streaming with:

ffmpeg -re -stream_loop -1 -f concat -i list.txt -flush_packets 0 -f flv rtmp://xxx

However, when attempting to replace next.mp4 (e.g., moving 2.mp4 to next.mp4 during the streaming of 1.mp4), I encountered a "device busy" error.
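As a hedged aside (not from the original post): on POSIX filesystems a common way around this is to write the new clip under a temporary name and then rename it into place, since rename() atomically replaces the directory entry without disturbing the file handle ffmpeg already holds open; the concat demuxer then picks up the new content the next time it re-opens next.mp4. Whether this removes the error in this exact setup is untested:

cp 2.mp4 next.mp4.tmp && mv next.mp4.tmp next.mp4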

Approach 2:

Using a nested list approach where list1.txt includes 1.mp4 and list2.txt, and vice versa:

// list1.txt
ffconcat version 1.0
file '1.mp4'
file 'list2.txt'
// list2.txt
ffconcat version 1.0
file '2.mp4'
file 'list1.txt'



Streaming with:
ffmpeg -re -stream_loop -1 -f concat -i list1.txt -flush_packets 0 -f flv rtmp://xxx

In this setup, I tried modifying list1.txt to replace 1.mp4 with 3.mp4 during the streaming of 2.mp4, but FFmpeg would loop back to 1.mp4 and 2.mp4 before streaming 3.mp4 in the next cycle.

Am I missing something in my methods? Does anyone have a better approach to fulfill this requirement? Any help would be greatly appreciated!