
Media (91)
-
DJ Z-trip - Victory Lap: The Obama Mix Pt. 2
15 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Matmos - Action at a Distance
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Danger Mouse & Jemini - What U Sittin’ On? (starring Cee Lo and Tha Alkaholiks)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Cornelius - Wataridori 2
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Rapture - Sister Saviour (Blackstrobe Remix)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (41)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (7462)
-
Error: ffmpeg exited with code 1 on AWS Lambda
16 June 2022, by Hassnain Alvi
I am using the fluent-ffmpeg Node.js package to run ffmpeg for audio conversion on AWS Lambda. I am using this FFmpeg layer for Lambda.
Here is my code:


const ffmpeg = require('fluent-ffmpeg');

// Transcode file.mp3 into a 64 kbit/s HLS rendition
const bitrate64 = ffmpeg("file.mp3").audioBitrate('64k');
bitrate64.outputOptions([
    '-preset slow',
    '-g 48',
    '-map', '0:0',
    '-hls_time 6',
    '-master_pl_name master.m3u8',
    '-hls_segment_filename 64k/fileSequence%d.ts'
  ])
  .output('./64k/prog_index.m3u8')
  .on('progress', function(progress) {
    console.log('Processing 64k bitrate: ' + progress.percent + '% done');
  })
  .on('end', function(err, stdout, stderr) {
    console.log('Finished processing 64k bitrate!');
  })
  .run();



After running it via AWS Lambda, I get the following error message:


ERROR Uncaught Exception 
{
 "errorType": "Error",
 "errorMessage": "ffmpeg exited with code 1: Conversion failed!\n",
 "stack": [
 "Error: ffmpeg exited with code 1: Conversion failed!",
 "",
 " at ChildProcess.<anonymous> (/var/task/node_modules/fluent-ffmpeg/lib/processor.js:182:22)",
 " at ChildProcess.emit (events.js:198:13)",
 " at ChildProcess.EventEmitter.emit (domain.js:448:20)",
 " at Process.ChildProcess._handle.onexit (internal/child_process.js:248:12)"
 ]
}


I don't get any more info, so I am not sure what's going on. Can anyone tell me what's wrong here and how I can enable more detailed logs?
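The trace above only carries the exit code, but fluent-ffmpeg also exposes 'start', 'stderr' and 'error' events that surface ffmpeg's own log output; note as well that on Lambda only /tmp is writable, so a relative output path like ./64k is itself a likely suspect. A minimal sketch (not part of the original question; file names are illustrative) that wires those handlers up and writes under /tmp:


const fs = require('fs');
const ffmpeg = require('fluent-ffmpeg');

// Lambda's filesystem is read-only except for /tmp, so create the
// output directory there before starting the conversion.
fs.mkdirSync('/tmp/64k', { recursive: true });

ffmpeg('file.mp3')
  .audioBitrate('64k')
  .output('/tmp/64k/prog_index.m3u8')
  .on('start', (cmdline) => console.log('Spawned ffmpeg with: ' + cmdline))
  // Each line ffmpeg writes to stderr ends up in the CloudWatch logs.
  .on('stderr', (line) => console.log('ffmpeg stderr: ' + line))
  .on('error', (err, stdout, stderr) => {
    // stderr holds ffmpeg's full diagnostic output, which usually names
    // the exact reason behind "ffmpeg exited with code 1".
    console.error('Conversion failed:', err.message);
    console.error(stderr);
  })
  .on('end', () => console.log('Finished processing 64k bitrate'))
  .run();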


-
swscale/aarch64: add vscale specializations
13 August 2022, by Jonathan Swinney
swscale/aarch64: add vscale specializations
This commit adds new code paths for vscale when filterSize is 2, 4, or 8. By using specialized code with unrolling to match the filterSize we can improve performance.
On AWS c7g (Graviton 3, Neoverse V1) instances:
                                    before    after
yuv2yuvX_2_0_512_accurate_neon:      558.8    268.9
yuv2yuvX_4_0_512_accurate_neon:      637.5    434.9
yuv2yuvX_8_0_512_accurate_neon:     1144.8    806.2
yuv2yuvX_16_0_512_accurate_neon:    2080.5   1853.7
Signed-off-by: Jonathan Swinney <jswinney@amazon.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
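The patch itself is AArch64 NEON assembly, but the idea is simple: when the vertical filter has a small, fixed length, replace the generic variable-length accumulation loop with a hand-unrolled version for that length. Purely as a conceptual sketch (written in JavaScript to match the other code on this page, with illustrative names and none of the fixed-point or SIMD detail of the real code), a generic path versus a specialized filterSize == 2 path:


// Conceptual sketch only: the real patch is NEON assembly working on
// fixed-point 16-bit samples; names and types here are illustrative.

// Generic vertical scale: each output sample is a weighted sum of the
// corresponding samples from filterSize source rows.
function vscaleGeneric(filter, srcRows, dst, width) {
  const filterSize = filter.length;
  for (let x = 0; x < width; x++) {
    let acc = 0;
    for (let j = 0; j < filterSize; j++) {
      acc += srcRows[j][x] * filter[j];
    }
    dst[x] = acc;
  }
}

// Specialized path for filterSize === 2: the inner loop is unrolled into
// two explicit multiply-accumulates, which is what the commit does (for
// sizes 2, 4 and 8) with unrolled NEON instructions.
function vscale2(filter, srcRows, dst, width) {
  const f0 = filter[0], f1 = filter[1];
  const s0 = srcRows[0], s1 = srcRows[1];
  for (let x = 0; x < width; x++) {
    dst[x] = s0[x] * f0 + s1[x] * f1;
  }
}
-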
Merging input Streams with nodejs/ffmpeg
14 September 2020, by jAndy
I'm creating a very basic and rudimentary video web chat. On the client side, I'm going to use a simple getUserMedia API call to capture the webcam data and send the video data as data blobs to my server.

From there, I'm planning to either use the fluent-ffmpeg library or just spawn ffmpeg myself and pipe that raw data to ffmpeg, which in turn does some magic and pushes it out as an HLS stream to an Amazon AWS service (for instance), which is then displayed in a web browser for all participating people in the video chat.

So far, I think all of this should be fairly easy to implement, but I keep going around in circles on the question of how I can create a "combined" or "merged" frame and stream, so that the output HLS data from my server to the distributing cloud service is only one combined data stream.


If there are 3 people in that video chat, my server receives 3 data streams from those clients and combines these data streams (from the individual web-cam data sources) into one output stream.


How could that be accomplished?
Can I "create" a new frame with ffmpeg, so to speak? I would be very thankful if anybody could give me a heads-up here; maybe I'm thinking in a completely wrong direction.

Another question that arises for me is whether I can really just "dump" any data I'm receiving as a binary blob created from getUserMedia or MultiStreamRecorder into ffmpeg, or whether I have to specify somewhere and somehow the exact codecs being used, etc.
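One common way to get a single combined stream out of ffmpeg is a filter graph that composites the incoming videos into one frame (hstack, vstack or xstack) and mixes the audio (amix) before encoding a single HLS output. The sketch below is only an illustration of that idea, assuming the three client streams have already been written to files or named pipes called cam0.webm, cam1.webm and cam2.webm (illustrative names) and that ffmpeg is on the PATH:


const { spawn } = require('child_process');

// Illustrative input names; in practice these could be named pipes (FIFOs)
// that the server keeps feeding with the data received from each client.
const inputs = ['cam0.webm', 'cam1.webm', 'cam2.webm'];

const args = [
  ...inputs.flatMap((f) => ['-i', f]),
  // Tile the three video streams side by side into a single frame,
  // and mix the three audio tracks into one.
  '-filter_complex',
  '[0:v][1:v][2:v]hstack=inputs=3[v];[0:a][1:a][2:a]amix=inputs=3[a]',
  '-map', '[v]', '-map', '[a]',
  '-c:v', 'libx264', '-preset', 'veryfast',
  '-c:a', 'aac',
  '-f', 'hls', '-hls_time', '6',
  'prog_index.m3u8',
];

const ff = spawn('ffmpeg', args);
ff.stderr.on('data', (chunk) => console.log(chunk.toString()));
ff.on('close', (code) => console.log('ffmpeg exited with code ' + code));


On the codec question: ffmpeg normally probes containerized input, such as the WebM blobs that MediaRecorder/MultiStreamRecorder typically produce, and detects the codecs itself; an explicit input format generally only needs to be specified for raw, header-less data.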