
Media (1)
-
The Slip - Artworks
26 September 2011
Updated: September 2011
Language: English
Type: Text
Other articles (64)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows, among other things: implementation costs to be shared between several different projects or individuals; rapid deployment of multiple unique sites; and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (6579)
-
Continuous RTMP Streaming with FFmpeg Without Restarting the Process for New Videos
3 April 2024, by 刘小佳. I'm facing a challenge with my workflow and would appreciate any guidance or solutions you might have. My business scenario involves a service that periodically generates new video files locally (e.g., 1.mp4, 2.mp4, ...). These files are then streamed to an RTMP server using FFmpeg, and clients pull the stream via HTTP-FLV for playback.


My goal is to ensure continuous streaming between video files without restarting the FFmpeg process each time a new video is ready to be streamed. Restarting FFmpeg for each new file introduces a disconnect in the client playback, which I'm trying to avoid to maintain stream continuity.


I've explored several approaches based on the FFmpeg Concatenate wiki (https://trac.ffmpeg.org/wiki/Concatenate), but haven't achieved the desired outcome:

Approach 1:

Using a list.txt file with ffconcat version 1.0 and dynamically updating the file (next.mp4) being played:

ffconcat version 1.0
file next.mp4
file next.mp4



And then streaming with:

ffmpeg -re -stream_loop -1 -f concat -i list.txt -flush_packets 0 -f flv rtmp://xxx

However, when attempting to replace next.mp4 (e.g., moving 2.mp4 to next.mp4 during the streaming of 1.mp4), I encountered a "device busy" error.
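
(An aside, not part of the original post.) On most Linux filesystems, the usual way to swap a file out from under a running reader is an atomic rename onto the target path: the reader's open handle keeps pointing at the old data, and the concat demuxer only re-opens next.mp4 when it loops back to that entry. Whether this avoids the "device busy" error will depend on the filesystem in use. A small Python sketch of that idea; the file names come from the question, the staging suffix is arbitrary:

import os
import shutil

def swap_in(new_file: str, target: str = "next.mp4") -> None:
    # Stage a copy next to the target (same filesystem), then rename atomically.
    # os.replace() maps to rename(2), so the swap is a single metadata operation
    # rather than a copy into a file that ffmpeg may currently have open.
    staging = target + ".tmp"
    shutil.copyfile(new_file, staging)
    os.replace(staging, target)

swap_in("2.mp4")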

Approach 2:

Using a nested list approach where list1.txt includes 1.mp4 and list2.txt, and vice versa:

# list1.txt
ffconcat version 1.0
file '1.mp4'
file 'list2.txt'

# list2.txt
ffconcat version 1.0
file '2.mp4'
file 'list1.txt'



Streaming with:
ffmpeg -re -stream_loop -1 -f concat -i list1.txt -flush_packets 0 -f flv rtmp://xxx

In this setup, I tried modifying list1.txt to replace 1.mp4 with 3.mp4 during the streaming of 2.mp4, but FFmpeg would loop back to 1.mp4 and 2.mp4 before streaming 3.mp4 in the next cycle.

Am I missing something in my methods? Does anyone have a better approach to fulfill this requirement? Any help would be greatly appreciated!
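
(Not part of the original post.) One pattern that is sometimes suggested for this kind of requirement is to keep a single long-lived ffmpeg publisher running and feed it a continuous MPEG-TS byte stream over a pipe, re-encoding each new file as it arrives; since the publisher never restarts, the RTMP connection stays up. A rough Python sketch under those assumptions; the RTMP URL, the videos/ directory and the polling loop are hypothetical, and timestamp behaviour across file boundaries would still need testing:

import subprocess
import time
from pathlib import Path

RTMP_URL = "rtmp://example.com/live/stream"  # placeholder

# Long-lived publisher: reads MPEG-TS from stdin and pushes it to the RTMP server.
# It is started once, so the downstream HTTP-FLV clients never see a disconnect
# caused by the process restarting.
publisher = subprocess.Popen(
    ["ffmpeg", "-re", "-f", "mpegts", "-i", "pipe:0", "-c", "copy", "-f", "flv", RTMP_URL],
    stdin=subprocess.PIPE,
)

def stream_file(path: Path) -> None:
    # Re-encode one finished mp4 into MPEG-TS and write it into the publisher's stdin.
    # Re-encoding (rather than -c copy) keeps codec parameters identical across files.
    subprocess.run(
        ["ffmpeg", "-i", str(path), "-c:v", "libx264", "-c:a", "aac", "-f", "mpegts", "pipe:1"],
        stdout=publisher.stdin,
        check=True,
    )

# Hypothetical watch loop: stream each new file as the generating service drops it.
seen = set()
while True:
    for mp4 in sorted(Path("videos").glob("*.mp4")):
        if mp4 not in seen:
            seen.add(mp4)
            stream_file(mp4)
    time.sleep(1)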


-
Adding HEVC reference decoder to ffmpeg framework
4 March 2014, by Zax. I have a tweaked HM reference code of the HEVC decoder, modified to my requirements. I also know that FFmpeg version 2.1 onwards supports HEVC, but it is a necessity for me to integrate my modified HM code. Hence I have gone through this post:
http://wiki.multimedia.cx/index.php?title=FFmpeg_codec_howto
According to this post, I need to define some functions in order to add support for a new decoder to the FFmpeg framework.
The structure is:
typedef struct AVCodec
I have defined a structure as shown below:
AVCodec HMHEVC_decoder =
{
    .name   = "hmhevc",
    .type   = AVMEDIA_TYPE_VIDEO,
    .id     = AV_CODEC_ID_HMHEVC,
    .init   = hmhevc_decode_init,
    .close  = hmhevc_decode_close,
    .decode = hmhevc_decode_frame,
};

However, looking at the other examples, I feel I have to add another field, like:
.priv_data_size = sizeof(HEVCContext),
But the problem is that I don't have any such context. So if I don't define it, what will the FFmpeg framework not provide to my decoder?
Also, is defining this private data context compulsory?
What are the other fields that must be defined?
My main intention is that ffplay should be able to play the decoded frames.
-
Input seeking for frame at specified timestamp with Py-AV
9 December 2019, by neonScarecrow. I have a project already using Py-AV and am trying to replicate a specific ffmpeg command. The goal is to get a frame roughly around the specified timestamp.
Here's the ffmpeg command (see https://trac.ffmpeg.org/wiki/Seeking):
ffmpeg -ss 14 -i https://some_url.mp4 -frames:v 1 frame_at_14_seconds.jpg
Here's my code:
import av

# return one frame around 14 seconds into the movie
target_sec = 14
container = av.open('https://some_url.mp4', 'r')
container.streams.video[0].thread_type = 'AUTO'
video_stream = next(s for s in container.streams if s.type == 'video')
time_base = float(video_stream.time_base)
target_timestamp = int(target_sec / time_base) + video_stream.start_time
video_stream.seek(target_timestamp)
for frame in container.decode(video_stream):
    frame.to_image().save('frame_at_14_seconds.jpg')
    break

Additionally, I haven't found any documentation about this, but does anyone know whether either command (ffmpeg / av.open) downloads the entire file to a temp file behind the scenes? I'm looking for a less memory-intensive way to read one frame for every second of a video up to 60 seconds long.
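
(Not part of the original question.) A minimal sketch of the same idea using PyAV's container-level seek, which is the documented entry point in recent PyAV versions; the URL and output names are placeholders copied from the question. On the download question: as far as I know, neither ffmpeg -ss nor av.open buffers the whole HTTP input to a temporary file; seeking is done with HTTP range requests when the server supports them, so the amount read stays roughly proportional to what is actually decoded.

import av

def frames_every_second(url: str, duration_sec: int = 60) -> None:
    # A sketch: open once, then seek + decode one frame per second.
    # Seeking lands on the nearest preceding keyframe, so each frame is only
    # "roughly" at the requested second, much like the -ss example above.
    with av.open(url) as container:
        stream = container.streams.video[0]
        stream.thread_type = "AUTO"
        for sec in range(duration_sec):
            # container.seek() with stream= expects units of that stream's time_base.
            offset = int(sec / stream.time_base) + (stream.start_time or 0)
            container.seek(offset, stream=stream)
            frame = next(container.decode(stream))
            frame.to_image().save(f"frame_at_{sec}_seconds.jpg")  # needs Pillow

frames_every_second("https://some_url.mp4")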