
Other articles (41)
-
What is a form mask?
13 June 2013
A form mask is a customisation of the publication form for media, sections, news items, editorials and links to other sites.
Each object's publication form can therefore be customised.
To customise the form fields, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires".
Then select the form to modify by clicking on its object type. (...)
-
Authorisations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier(), so that visitors are able to edit their own information on the authors page
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
images: png, gif, jpg, bmp and more
audio: MP3, Ogg, Wav and more
video: AVI, MP4, OGV, mpg, mov, wmv and more
text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
On other sites (7845)
-
avcodec/mediacodecdec: add workaround for buggy amlogic mpeg2 decoder
26 April 2018, by Aman Gupta
avcodec/mediacodecdec: add workaround for buggy amlogic mpeg2 decoder
I tested the previous mediacodec changes on seven different Android
TV devices, with both mpeg2 and h264 content. All except one worked
as expected. The exception was the MiBox3 running Android 6.0.1,
where playback would freeze on a frame every few seconds. I tested
two other AMLogic devices with newer Android versions that did not
show the same problem. H264 decoding on the MiBox3 was also not affected,
so this workaround applies only to OMX.amlogic.mpeg2.decoder.awesome
on Android API 22.

There is a rumor that Xiaomi is planning to release Android Oreo for
the MiBox3, so I will revisit in a few months to confirm whether this
is specific to the OS/driver version or to the chipset used in that device.

Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Matthieu Bouron <matthieu.bouron@gmail.com>
-
Merging input Streams with nodejs/ffmpeg
14 September 2020, by jAndy
I'm creating a very basic and rudimentary video web chat. On the client side, I'm going to use a simple getUserMedia API call to capture the webcam data and send the video data as a data blob to my server.

From there, I'm planning to either use the fluent-ffmpeg library or just spawn ffmpeg myself and pipe that raw data to ffmpeg, which in turn does some magic and pushes it out as an HLS stream to an Amazon AWS service (for instance), which then gets displayed in a web browser for everyone participating in the video chat.

So far, I think all of this should be fairly easy to implement, but I keep going around in circles on one question: how can I create a "combined" or "merged" frame and stream, so that the HLS output from my server to the distributing cloud service is only one combined data stream?


If there are 3 people in that video chat, my server receives 3 data streams from those clients and has to combine these data streams (from the individual webcam data sources) into one output stream.


How could that be accomplished?
Can I "create" a new frame with ffmpeg, so to speak? I would be very thankful if anybody could give me a heads-up here; maybe I'm thinking in a completely wrong direction.

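On the merge question, in general terms: yes, ffmpeg can compose several inputs into a single frame with a filter graph. The following is only a minimal sketch, not a drop-in solution; the input names are placeholders, and it assumes three sources of equal height. The hstack filter tiles the videos side by side and amix mixes the audio:

    # tile three video streams side by side (hstack), mix their audio (amix),
    # then encode once and emit an HLS playlist with 2-second segments
    ffmpeg \
      -i client1.webm -i client2.webm -i client3.webm \
      -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[v];[0:a][1:a][2:a]amix=inputs=3[a]" \
      -map "[v]" -map "[a]" \
      -c:v libx264 -preset veryfast -c:a aac \
      -f hls -hls_time 2 -hls_list_size 6 chat.m3u8

For arbitrary grid layouts, or inputs of differing sizes, the more general xstack filter can be used instead.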
Another question that arises for me is whether I can really just "dump" any data I receive as a binary blob created from getUserMedia or MultiStreamRecorder into ffmpeg, or whether I have to specify somewhere, and somehow, the exact codecs being used.

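On the "just dump it" sub-question, a hedged note: getUserMedia/MediaRecorder blobs are typically WebM (a Matroska container), which ffmpeg can demux from a pipe. It can usually probe the container on its own, but naming the demuxer explicitly avoids mis-detection on a non-seekable stream. A minimal sketch, with the output settings purely illustrative:

    # read a WebM stream from stdin (the matroska demuxer also handles WebM)
    # and transcode it to an HLS playlist
    ffmpeg -f matroska -i pipe:0 \
      -c:v libx264 -preset veryfast -c:a aac \
      -f hls -hls_time 2 out.m3u8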
-
How to use ffmpeg to transcode many live streamed videos? [closed]
21 September 2020, by user14258924
PREMISE


As a pet project, I am writing a live video streaming service in Go that can consume video streams from OBS via the SRT (TS -> h264/aac) and RTMP (FLV -> h264/aac) protocols, and I am planning to support streaming video from the web browser as well, captured from a webcam via JS. This ingress server will receive many video streams in various containers and codecs, and I need to normalize them into a single container and codec and then create multiple versions at various bitrates (i.e. 240p, 360p, 480p, 720p, 1080p...) to pass along where needed in the application. Each stream is split into 2-second GOP segments, separate for the audio and video tracks, producing fragmented MP4 as the end result, which can be consumed by a web browser.

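For context, this is roughly what a single ffmpeg invocation for such a bitrate ladder can look like. This is a sketch under assumptions, not a tested setup: it presumes the ingress hands ffmpeg a normalized MPEG-TS stream on stdin, and the three renditions, bitrates and filenames are illustrative:

    # split the decoded video into three scaled renditions, encode each,
    # and emit fragmented-MP4 HLS with a master playlist
    ffmpeg -f mpegts -i pipe:0 \
      -filter_complex "[0:v]split=3[a][b][c];[a]scale=-2:240[v240];[b]scale=-2:480[v480];[c]scale=-2:720[v720]" \
      -map "[v240]" -map "[v480]" -map "[v720]" -map 0:a -map 0:a -map 0:a \
      -c:v libx264 -preset veryfast \
      -b:v:0 400k -b:v:1 1200k -b:v:2 2500k -c:a aac \
      -f hls -hls_segment_type fmp4 -hls_time 2 \
      -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" \
      -master_pl_name master.m3u8 stream_%v.m3u8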

The issue is that I am using Go, which has no libraries for transcoding video, so I need to use either ffmpeg or vlc, which are C code. I have decided to avoid the CGo route and use ffmpeg/vlc as standalone binaries.


QUESTION


My question is how to use either of these projects in the most efficient way: avoiding the use of files in favour of unix sockets/streams, while handling hundreds of video segments at any one time fast enough to avoid creating too much lag between producer and consumer.


So let's say I pick the most widely used one, ffmpeg: how should I actually use it to achieve what I have described? How would you set it up, and which flags/config would you use with it?

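On the files-versus-streams part, a hedged observation: ffmpeg is happy to read from stdin and write to stdout, which is exactly what a Go os/exec wrapper would attach its pipes to, so no intermediate files are needed. A minimal sketch, again assuming a normalized MPEG-TS input; the frag_keyframe+empty_moov flags are what make the MP4 muxer emit a streamable, fragmented file:

    # stdin -> transcode -> fragmented MP4 on stdout, nothing touches disk
    ffmpeg -f mpegts -i pipe:0 \
      -c:v libx264 -preset veryfast -tune zerolatency -c:a aac \
      -movflags frag_keyframe+empty_moov -f mp4 pipe:1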

Can that performance even be achieved, or is it just too much, meaning I will need some sort of ffmpeg cluster to even come close to useful performance/low delay?