
Other articles (112)
-
Customize by adding your logo, banner, or background image
5 September 2013 — Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Emballe médias: what is it for?
4 February 2011 — This plugin is meant to manage sites that publish documents of all types.
It creates "media", namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image, or text; only a single document can be linked to a "media" article;
-
Submit improvements and additional plugins
10 April 2011 — If you have developed a new extension that adds one or more features useful to MediaSPIP, let us know, and its inclusion in the official distribution will be considered.
You can use the development mailing list to announce it, or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, you can also use SPIP's SPIP-zone mailing list to (...)
On other sites (11163)
-
Merging input Streams with nodejs/ffmpeg
14 September 2020, by jAndy — I'm creating a very basic and rudimentary video web chat. On the client side, I'm going to use a simple getUserMedia API call to capture the webcam data and send the video data as data blobs to my server.

From there, I'm planning to either use the fluent-ffmpeg library or just spawn ffmpeg myself and pipe that raw data to ffmpeg, which in turn does some magic and pushes it out as an HLS stream to an Amazon AWS service (for instance), which then actually gets displayed in a web browser for everyone participating in the video chat.
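
That capture-and-ship step might look roughly like the sketch below; getUserMedia and the data blobs are from the post, while the WebSocket transport, endpoint URL, codec string, and 250 ms timeslice are illustrative assumptions.

    // Sketch only: capture the webcam with getUserMedia, chunk it with
    // MediaRecorder, and ship each blob to the server over a WebSocket.
    // The endpoint URL, codec string, and timeslice are assumptions.
    async function startCapture(): Promise<void> {
      const ws = new WebSocket('wss://example.com/ingest');

      const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
      const recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp8,opus' });

      // Emit a data blob roughly every 250 ms and forward each chunk as-is.
      // (Production code would wait for ws.onopen before sending.)
      recorder.ondataavailable = (event) => {
        if (event.data.size > 0) ws.send(event.data);
      };
      recorder.start(250);
    }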

So far, I think all of this should be fairly easy to implement, but I keep going around in circles on the question of how I can create a "combined" or "merged" frame and stream, so that the HLS output from my server to the distributing cloud service is just one combined data stream.


If there are 3 people in that video chat, my server receives 3 data streams from those clients and combines these streams (from the individual webcam data sources) into one output stream.


How could that be accomplished?
Can I "create" a new frame with ffmpeg, so to speak? I would be very thankful if anybody could give me a heads-up here; maybe I'm thinking in a completely wrong direction.

Another question that arises is whether I can really just "dump" any data I receive as a binary blob created from getUserMedia or MultiStreamRecorder into ffmpeg, or whether I have to specify somewhere, and somehow, the exact codecs being used, etc.
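
For the merging itself, one approach that fits this setup (a sketch under assumptions, not a verified answer) is to give the spawned ffmpeg one pipe per client and tile the videos with a stacking filter:

    // Sketch: spawn ffmpeg with one extra pipe per client (fd 3, 4, 5) and
    // combine the three video streams side by side with hstack. Input sizes
    // and the HLS output path are assumptions; audio mixing (amix) is left
    // out for brevity.
    import { spawn } from 'child_process';
    import type { Writable } from 'stream';

    const ffmpeg = spawn('ffmpeg', [
      '-i', 'pipe:3',   // client 1 (e.g. WebM blobs from MediaRecorder)
      '-i', 'pipe:4',   // client 2
      '-i', 'pipe:5',   // client 3
      '-filter_complex',
      // normalize each input's size, then tile the three videos horizontally
      '[0:v]scale=320:240[a];[1:v]scale=320:240[b];[2:v]scale=320:240[c];' +
      '[a][b][c]hstack=inputs=3[out]',
      '-map', '[out]',
      '-c:v', 'libx264',
      '-f', 'hls',
      'out.m3u8',
    ], {
      // entries 0-2 are stdin/stdout/stderr; entries 3-5 appear as pipe:3..5
      stdio: ['pipe', 'pipe', 'pipe', 'pipe', 'pipe', 'pipe'],
    });

    // ffmpeg.stdio[3], [4] and [5] are the writable ends of pipe:3..5; pipe
    // each client's incoming blobs into its slot (clientStream1 is a
    // hypothetical per-participant Readable, for illustration only)
    declare const clientStream1: NodeJS.ReadableStream;
    clientStream1.pipe(ffmpeg.stdio[3] as Writable);

Two caveats worth stating: ffmpeg pulls its inputs on demand, so a live pipe that stalls can block the whole filter graph (named pipes or one small transcode process per client are common workarounds), and ffmpeg generally probes the container and codecs from the piped bytes themselves, so WebM chunks usually need no explicit codec flags on the input side.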

-
libswscale/aarch64 : add another hscale specialization
13 August 2022, by Jonathan Swinney — libswscale/aarch64: add another hscale specialization

This specialization handles the case where filtersize is 4 mod 8, e.g.
12, 20, etc. Aarch64 was previously using the C function for this case.
This implementation speeds up that case significantly.

hscale_8_to_15__fs_12_dstW_512_c: 6234.1
hscale_8_to_15__fs_12_dstW_512_neon: 1505.6

Signed-off-by: Jonathan Swinney <jswinney@amazon.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
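
For orientation, the scalar path this NEON specialization replaces applies, per output sample, a filterSize-tap FIR filter to 8-bit input and narrows the result to 15 bits; below is a rough TypeScript rendering of libswscale's C reference (an illustrative port, not project code):

    // Illustrative TypeScript port of libswscale's scalar hScale8To15 loop;
    // the real code is C plus NEON assembly. The new specialization covers
    // filterSize values of 12, 20, ... (4 mod 8), vectorizing this loop.
    function hScale8To15(
      dst: Int16Array, dstW: number, src: Uint8Array,
      filter: Int16Array, filterPos: Int32Array, filterSize: number,
    ): void {
      for (let i = 0; i < dstW; i++) {
        let val = 0;
        const srcPos = filterPos[i];
        for (let j = 0; j < filterSize; j++) {
          val += src[srcPos + j] * filter[filterSize * i + j];
        }
        // narrow the accumulator to a 15-bit output sample
        dst[i] = Math.min(val >> 7, (1 << 15) - 1);
      }
    }
-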
Livestream not reaching AWS endpoint
13 August 2024, by NoobAmI — I'm trying to stream my live video into Amazon IVS and I don't see it in the live channels.


Is it possible I have a mistake in my FFmpeg configuration?
I'm expecting to see this in my playback URL or on the console's playback screen, but I see nothing at the moment.


As I understand it, shouldn't I see some kind of playback in the live channels if a stream is being sent to that channel?


async ivsStreamingService(payload: any): Promise<void> {
  const ingestServer = '***.global-contribute.live-video.net:443/app/';
  const streamKey = 'sk_us-east-1_*****';
  // Spawn FFmpeg reading raw input from stdin and pushing an FLV stream
  // to the IVS RTMPS ingest endpoint.
  const ffmpeg = spawn('ffmpeg', [
    '-re',
    '-i', '-',
    '-r', '30',
    '-c:v', 'libx264',
    '-pix_fmt', 'yuv420p',
    '-profile:v', 'main',
    '-preset', 'veryfast',
    '-x264opts', 'nal-hrd=cbr:no-scenecut',
    '-minrate', '3000k',
    '-maxrate', '3000k',
    '-g', '60',
    '-c:a', 'aac',
    '-b:a', '160k',
    '-ac', '2',
    '-ar', '44100',
    '-f', 'flv',
    `rtmps://${ingestServer}${streamKey}`
  ]);

  ffmpeg.stdin.write(payload, (err) => {
    console.log(payload);
    if (err) console.error('Error writing payload to FFmpeg stdin:', err);
  });

  ffmpeg.on('close', (code) => {
    console.log(`FFmpeg process exited with code ${code}`);
  });

  ffmpeg.stdin.on('error', (err) => {
    console.error('Error writing to FFmpeg stdin:', err);
  });

  ffmpeg.stderr.on('data', (data) => {
    console.error(`FFmpeg error: ${data}`);
  });
}




I'm not quite sure why it wouldn't receive the stream, since everything appears to be correct.