
Media (91)
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
-
USGS Real-time Earthquakes
8 September 2011
Updated: September 2011
Language: French
Type: Text
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
-
Podcasting Legal guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creativecommons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (28)
-
Ajouter notes et légendes aux images
7 February 2011
To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights to create, edit and delete notes. By default, only site administrators can add notes to images.
Changes when attaching a media item
When a media item of type "image" is attached, a new button appears above the preview (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
De l’upload à la vidéo finale [version standalone]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct steps.
Upload and retrieval of information about the source video
First of all, a SPIP article has to be created and the "source" video document attached to it.
When this document is attached to the article, two actions are carried out in addition to the normal behaviour: the technical information about the file's audio and video streams is retrieved, and a thumbnail is generated by extracting a (...)
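As a rough illustration of what that first step can look like (the exact commands SPIPMotion runs are not shown in this excerpt), here is a minimal Python sketch that probes the attached source file with ffprobe and pulls one frame as a thumbnail with ffmpeg; the function and file names are hypothetical.
import json
import subprocess

def probe_and_thumbnail(source, thumbnail="vignette.jpg"):
    """Return the technical stream information of an uploaded file and
    extract one frame as a preview image (illustrative sketch only)."""
    # Technical information about the audio and video streams, as JSON.
    probe = subprocess.run(
        ["ffprobe", "-v", "error", "-show_format", "-show_streams",
         "-of", "json", source],
        capture_output=True, text=True, check=True)
    info = json.loads(probe.stdout)
    # One frame taken near the start of the file, used as the thumbnail.
    subprocess.run(
        ["ffmpeg", "-y", "-ss", "1", "-i", source, "-frames:v", "1", thumbnail],
        check=True)
    return info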
On other sites (1699)
-
Input seeking for frame at specified timestamp with Py-AV
9 December 2019, by neonScarecrow
I have a project already using Py-AV and am trying to replicate a specific ffmpeg command. The goal is to get a frame roughly around the specified timestamp.
Here’s the ffmpeg command (see https://trac.ffmpeg.org/wiki/Seeking):
ffmpeg -ss 14 -i https://some_url.mp4 -frames:v 1 frame_at_14_seconds.jpg
Here’s my code:
import av

# return one frame around 14 seconds into the movie
target_sec = 14
container = av.open('https://some_url.mp4', 'r')
container.streams.video[0].thread_type = 'AUTO'
video_stream = next(s for s in container.streams if s.type == 'video')
time_base = float(video_stream.time_base)
target_timestamp = int(target_sec / time_base) + video_stream.start_time
video_stream.seek(target_timestamp)
for frame in container.decode(video_stream):
    frame.to_image().save('frame_at_14_seconds.jpg')
    break

Additionally, I haven't found any documentation about this, but does anyone know whether either command (ffmpeg / av.open) downloads the entire file to a temporary file behind the scenes? I'm looking for a less memory-intensive way to read a frame for every second of an up-to-60-second video.
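For comparison, here is a minimal sketch of the same lookup done with a container-level seek; it assumes PyAV's Container.seek, which takes the offset in AV_TIME_BASE units (microseconds) when no stream is passed, and it stops at the first decoded frame at or past the target. Whether this is any lighter on memory for a remote URL is not verified here.
import av

target_sec = 14
container = av.open('https://some_url.mp4', 'r')

# With no stream argument, Container.seek() expects the offset in
# AV_TIME_BASE units, i.e. microseconds.
container.seek(target_sec * 1_000_000)

video_stream = container.streams.video[0]
for frame in container.decode(video_stream):
    # frame.time is the presentation time in seconds (or None if unknown).
    if frame.time is not None and frame.time >= target_sec:
        frame.to_image().save('frame_at_14_seconds.jpg')
        break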
-
Adding HEVC reference decoder to ffmpeg framework
4 March 2014, by Zax
I have a tweaked HM reference code of the HEVC decoder, adapted to my requirements. I also know that FFmpeg version 2.1 onwards supports HEVC, but it is a necessity for me to integrate my modified HM code. Hence I have gone through this post:
http://wiki.multimedia.cx/index.php?title=FFmpeg_codec_howto
According to this post, I need to define some functions in order to add support for a new decoder to the FFmpeg framework.
The structure is:
typedef struct AVCodec
I have defined a structure as shown below:
AVCodec HMHEVC_decoder =
{
.name = "hmhevc",
.type = AVMEDIA_TYPE_VIDEO,
.id = AV_CODEC_ID_HMHEVC,
.init = hmhevc_decode_init,
.close = hmhevc_decode_close,
.decode = hmhevc_decode_frame,
};
However, looking at the other examples, I feel I have to add another field like:
.priv_data_size = sizeof(HEVCContext),
But the problem is that I don't have any such context. So, if I don't define this, what will the FFmpeg framework not provide to my decoder?
Also, is defining this private data context compulsory?
What other fields have to be compulsorily defined?
My main intention is that FFPLAY should be able to play the decoded frames.
Continuous RTMP Streaming with FFmpeg Without Restarting the Process for New Videos
3 April 2024, by 刘小佳
I'm facing a challenge with my workflow and would appreciate any guidance or solutions you might have. My business scenario involves a service that periodically generates new video files locally (e.g., 1.mp4, 2.mp4, ...). These files are then streamed to an RTMP server using FFmpeg, and clients pull the stream via HTTP-FLV for playback.


My goal is to ensure continuous streaming between video files without restarting the FFmpeg process each time a new video is ready to be streamed. Restarting FFmpeg for each new file introduces a disconnect in the client playback, which I'm trying to avoid to maintain stream continuity.


I've explored several approaches based on the FFmpeg Concatenate wiki (https://trac.ffmpeg.org/wiki/Concatenate), but haven't achieved the desired outcome:

Approach 1:

Using a list.txt file with ffconcat version 1.0 and dynamically updating the file (next.mp4) being played:

ffconcat version 1.0
file next.mp4
file next.mp4



And then streaming with:

ffmpeg -re -stream_loop -1 -f concat -i list.txt -flush_packets 0 -f flv rtmp://xxx

However, when attempting to replace next.mp4 (e.g., moving 2.mp4 to next.mp4 during the streaming of 1.mp4), I encountered a "device busy" error.
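Not a verified fix for this exact setup, but a common way to avoid overwriting a file the concat demuxer may still have open is to stage the new video next to it and then swap it in with an atomic rename rather than copying over it in place. A minimal Python sketch, with hypothetical file names:
import os
import shutil

def publish_next(new_video, live_name="next.mp4"):
    """Stage the freshly generated file, then swap it in atomically so the
    demuxer sees either the old or the new file, never a partial one."""
    staging = live_name + ".tmp"
    shutil.copyfile(new_video, staging)   # the slow copy happens on the side
    os.replace(staging, live_name)        # atomic rename on the same filesystem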

Approach 2:

Using a nested list approach where list1.txt includes 1.mp4 and list2.txt, and vice versa:

// list1.txt
ffconcat version 1.0
file '1.mp4'
file 'list2.txt'
// list2.txt
ffconcat version 1.0
file '2.mp4'
file 'list1.txt'



Streaming with:
ffmpeg -re -stream_loop -1 -f concat -i list1.txt -flush_packets 0 -f flv rtmp://xxx

In this setup, I tried modifying list1.txt to replace 1.mp4 with 3.mp4 during the streaming of 2.mp4, but FFmpeg would loop back to 1.mp4 and 2.mp4 before streaming 3.mp4 in the next cycle.

Am I missing something in my methods? Does anyone have a better approach to fulfill this requirement? Any help would be greatly appreciated!