
Media (3)
-
The Slip - Artworks
26 September 2011
Updated: September 2011
Language: English
Type: Text
-
Podcasting Legal Guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creative Commons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (68)
-
MediaSPIP Core: Configuration
9 November 2010
By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the skeleton; a page for the configuration of the site's home page; a page for the configuration of the sectors.
It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and plugin-specific features (...)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable release of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, further modifications are also required (...)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in standalone form.
For a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, further modifications are also required (...)
On other sites (10456)
-
AWS: Best way to generate a thumbnail for every frame of an S3-uploaded video
4 January 2018, by danielfranca
I need to process a video file: transcode it and generate a thumbnail for every frame. This should happen every time a new video lands in a specific AWS bucket. I found out that AWS Lambda should be the best service for that. However, it is not working as expected, and I'll explain why.
I created a simple Python 2.7 file using FFVideo (it seems that this library doesn't support Python 3). It is a nice abstraction on top of ffmpeg. To deploy the package I ran ldd on the FFVideo shared object, copied everything into my project directory as described in their documentation, zipped it and uploaded it to AWS Lambda. Yet it doesn't work: I keep getting errors as if /usr/lib64/libstdc++ were missing, even after copying it into the project dir; I also tried /usr/lib64 and /lib64.
Then, as a second thought, I wondered whether just running ffmpeg wouldn't be easier. So I copied ffmpeg into the project dir and wrote a simple Python script to call it. Missing shared objects again, OK: ldd again and copied everything into the directory. Then AWS Lambda seems completely broken: I can't save the function anymore, it just says "Fix errors before saving" with no error message, nothing. I even attempted to write some simple inline code, but now AWS Lambda won't even open the online editor. I also tried removing all the shared objects I had added, returning to the original state, but I still get the same generic error. The same thing happens if I create a new Lambda function with the same old code. No matter what I do, it never even enables the Save button anymore. I thought it might just be some AWS instability, but it has been a while.
I've looked at a similar project using Node, and it doesn't seem to include anything except ffmpeg. My other idea is to use SQS to trigger a Python script somewhere else to create the thumbnails.
Any idea what the best approach for that is?
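For reference, the Lambda-based approach described above usually comes down to something like the sketch below: bundle a static ffmpeg build inside the deployment zip (which avoids the shared-object juggling with ldd entirely) and call it from the handler with subprocess. This is only a sketch under assumptions, not the poster's code: the bin/ffmpeg path, the THUMB_BUCKET name and the frame-naming pattern are hypothetical.

# Minimal sketch of an S3-triggered Lambda handler that extracts one JPEG per
# frame with a bundled static ffmpeg binary. The bin/ffmpeg path, the
# THUMB_BUCKET name and the frame naming are assumptions, not the poster's setup.
import os
import subprocess

import boto3  # available by default in the AWS Lambda Python runtime

s3 = boto3.client("s3")
FFMPEG = os.path.join(os.path.dirname(os.path.abspath(__file__)), "bin", "ffmpeg")
THUMB_BUCKET = "my-thumbnail-bucket"  # hypothetical output bucket


def handler(event, context):
    # The S3 put event carries the bucket and key of the uploaded video.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    local_video = "/tmp/" + os.path.basename(key)
    s3.download_file(bucket, key, local_video)

    # One JPEG per frame into /tmp; /tmp space is limited, so this only works
    # for short clips. Longer videos would need frame thinning or streaming out.
    out_pattern = "/tmp/frame_%06d.jpg"
    subprocess.check_call([FFMPEG, "-i", local_video, "-vsync", "0", out_pattern])

    for name in sorted(os.listdir("/tmp")):
        if name.startswith("frame_") and name.endswith(".jpg"):
            s3.upload_file("/tmp/" + name, THUMB_BUCKET, key + "/" + name)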
-
MP4Box / FFMPEG concat loses audio after first clip
17 November 2017, by user1615343
So I am certainly no expert when it comes to either of these tools, but I have a web-based project that executes commands on an Amazon Linux server to concatenate two uploaded video files.
Both files are first converted to MP4s using FFMPEG, and those play perfectly in a browser after conversion:
ffmpeg -i file1.mpg -c:v libx264 -crf 22 -c:a aac -strict -2 -movflags faststart file2.mp4
Then I attempt to combine these two resulting MP4s into a single MP4. I tried using FFMPEG to do this, but to no avail. Switching to MP4Box got me much closer: the videos are concatenated together, but the audio stops playing at the end of the first clip and the second clip is silent.
MP4Box -force-cat -keepsys -add file.mp4 -cat file2.mp4 out.mp4
I've tried varying versions of the above command with no better results. Any input is greatly appreciated.
EDIT: info on the .mp4 files, using ffmpeg -i file1.mp4 -i file2.mp4:

ffmpeg -i 1510189259715DogRunsintoGlassDoor_315a03a8e20acfc.mp4 -i 1510189273549NewhouseMoonMoonneverseenstairsbeforefunnydog_285a03a8e6aab25.mp4

ffmpeg version N-61041-g52a2138 Copyright (c) 2000-2014 the FFmpeg developers
  built on Mar  2 2014 05:45:04 with gcc 4.6 (Debian 4.6.3-1)
  configuration: --prefix=/root/ffmpeg-static/64bit --extra-cflags='-I/root/ffmpeg-static/64bit/include -static' --extra-ldflags='-L/root/ffmpeg-static/64bit/lib -static' --extra-libs='-lxml2 -lexpat -lfreetype' --enable-static --disable-shared --disable-ffserver --disable-doc --enable-bzlib --enable-zlib --enable-postproc --enable-runtime-cpudetect --enable-libx264 --enable-gpl --enable-libtheora --enable-libvorbis --enable-libmp3lame --enable-gray --enable-libass --enable-libfreetype --enable-libopenjpeg --enable-libspeex --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-version3 --enable-libvpx
  libavutil      52. 66.100 / 52. 66.100
  libavcodec     55. 52.102 / 55. 52.102
  libavformat    55. 33.100 / 55. 33.100
  libavdevice    55. 10.100 / 55. 10.100
  libavfilter     4.  2.100 /  4.  2.100
  libswscale      2.  5.101 /  2.  5.101
  libswresample   0. 18.100 /  0. 18.100
  libpostproc    52.  3.100 / 52.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '1510189259715DogRunsintoGlassDoor_315a03a8e20acfc.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf55.33.100
  Duration: 00:00:04.92, start: 0.023220, bitrate: 634 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 360x360 [SAR 1:1 DAR 1:1], 501 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 132 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from '1510189273549NewhouseMoonMoonneverseenstairsbeforefunnydog_285a03a8e6aab25.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf55.33.100
  Duration: 00:00:18.79, start: 0.023220, bitrate: 455 kb/s
    Stream #1:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 362x360 [SAR 1:1 DAR 181:180], 320 kb/s, 29.94 fps, 29.94 tbr, 11976 tbn, 59.88 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #1:1(eng): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 129 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
At least one output file must be specified
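Not part of the original post, but worth noting: the stream info above shows the two clips differ in audio layout (mono vs. stereo), video size (360x360 vs. 362x360) and frame rate (30 vs. 29.94 fps), which is exactly the situation where a straight container-level concat tends to drop or desync audio. A common workaround is to re-encode both clips to identical parameters and then join them with ffmpeg's concat demuxer; the Python sketch below wraps the ffmpeg CLI to illustrate that idea. The file names and encoding settings are assumptions, not values from the question.

# Sketch: normalize two MP4s to identical parameters, then join them with
# ffmpeg's concat demuxer. File names and encoding settings are illustrative only.
import subprocess

clips = ["file1.mp4", "file2.mp4"]  # hypothetical inputs
normalized = []

# Re-encode both clips to the same resolution, frame rate, sample rate and
# channel layout so that the resulting streams can be concatenated cleanly.
for i, clip in enumerate(clips):
    out = "norm_%d.mp4" % i
    subprocess.check_call([
        "ffmpeg", "-y", "-i", clip,
        "-vf", "scale=360:360,fps=30",
        "-c:v", "libx264", "-crf", "22",
        "-c:a", "aac", "-ar", "44100", "-ac", "2",
        "-movflags", "faststart", out,
    ])
    normalized.append(out)

# The concat demuxer reads a small text file listing the inputs in order.
with open("list.txt", "w") as f:
    for out in normalized:
        f.write("file '%s'\n" % out)

subprocess.check_call([
    "ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "list.txt",
    "-c", "copy", "out.mp4",
])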
-
Live audio using ffmpeg, javascript and nodejs
8 November 2017, by klaus
I am new to this thing, so please don't hang me for the poor grammar. I am trying to create a proof-of-concept application which I will later extend. It does the following: we have an HTML page which asks for permission to use the microphone; we capture the microphone input and send it via websocket to a Node.js app.
JS (Client):
var bufferSize = 4096;
var socket = new WebSocket(URL);
// Assumes an AudioContext created elsewhere, e.g. var context = new AudioContext();
var myPCMProcessingNode = context.createScriptProcessor(bufferSize, 1, 1);
myPCMProcessingNode.onaudioprocess = function(e) {
  // Mono Float32 samples from the microphone, converted to 16-bit PCM before sending.
  var input = e.inputBuffer.getChannelData(0);
  socket.send(convertFloat32ToInt16(input));
}

function convertFloat32ToInt16(buffer) {
  var l = buffer.length;
  var buf = new Int16Array(l);
  while (l--) {
    buf[l] = Math.min(1, buffer[l]) * 0x7FFF;
  }
  return buf.buffer;
}

navigator.mediaDevices.getUserMedia({audio: true, video: false})
  .then(function(stream) {
    var microphone = context.createMediaStreamSource(stream);
    microphone.connect(myPCMProcessingNode);
    myPCMProcessingNode.connect(context.destination);
  })
  .catch(function(e) {});

In the server we take each incoming buffer, run it through ffmpeg, and send whatever comes out of stdout to another device using a Node.js 'http' POST. The device has a speaker. We are basically trying to create a one-way audio link from the browser to the device.
Node JS (Server):

var WebSocketServer = require('websocket').server;
var http = require('http');
var children = require('child_process');

// Assumes wsServer was created elsewhere from an http server, e.g.:
//   var server = http.createServer().listen(PORT);
//   var wsServer = new WebSocketServer({ httpServer: server });
wsServer.on('request', function(request) {
  var connection = request.accept(null, request.origin);
  connection.on('message', function(message) {
    if (message.type === 'utf8') { /* NOP */ }
    else if (message.type === 'binary') {
      // Raw 16-bit PCM from the browser goes straight into ffmpeg's stdin.
      ffm.stdin.write(message.binaryData);
    }
  });
  connection.on('close', function(reasonCode, description) {});
  connection.on('error', function(error) {});
});

var ffm = children.spawn(
  './ffmpeg.exe',
  '-stdin -f s16le -ar 48k -ac 2 -i pipe:0 -acodec pcm_u8 -ar 48000 -f aiff pipe:1'.split(' ')
);
ffm.on('exit', function(code, signal) {});
ffm.stdout.on('data', (data) => {
  // Whatever ffmpeg writes to stdout is forwarded to the device as the chunked POST body.
  req.write(data);
});

var options = {
  host: 'xxx.xxx.xxx.xxx',
  port: xxxx,
  path: '/path/to/service/on/device',
  method: 'POST',
  headers: {
    'Content-Type': 'application/octet-stream',
    'Content-Length': 0,
    'Authorization': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
    'Transfer-Encoding': 'chunked',
    'Connection': 'keep-alive'
  }
};
var req = http.request(options, function(res) {});

The device supports only continuous POST and only a couple of formats (ulaw, aiff, wav).
This solution doesn't seem to work: from the device speaker we only hear something like white noise.
Also, I think I may have a problem with the buffer I am sending to ffmpeg's stdin: I tried dumping whatever comes out of the websocket to a .wav file and playing it with VLC, and it plays the whole recording very fast, roughly 10 seconds of recording played in about 1 second.
I am new to audio processing and have been searching for about 3 days now for ways to improve this, and have found nothing.
I would ask the community for two things:
-
Is something wrong with my approach? What more can I do to make this work? I will post more details if required.
-
If what I am doing is reinventing the wheel, then I would like to know what other software or third-party service (like Amazon or whatever) can accomplish the same thing.
Thank you.
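An aside that is not part of the original question: the fast playback and white noise described above are consistent with a format mismatch, since the browser's ScriptProcessor delivers mono Float32 samples at the AudioContext rate (often 44100 Hz) while ffmpeg here is told to expect 48 kHz stereo s16le. One way to check this independently of the websocket and Node is to dump a few seconds of the converted Int16 samples to a file and feed them to ffmpeg with parameters matching what the client actually sends. The sketch below does that in Python; the capture.raw file name and the 44100 Hz/mono parameters are assumptions.

# Sketch: check how ffmpeg interprets the raw PCM outside the browser/Node pipeline.
# "capture.raw" is assumed to contain the Int16 samples dumped from the websocket;
# 44100 Hz mono is an assumption about what the client AudioContext actually produces.
import subprocess

subprocess.check_call([
    "ffmpeg", "-y",
    "-f", "s16le", "-ar", "44100", "-ac", "1",  # must match what the browser sends
    "-i", "capture.raw",
    "check.wav",  # if this plays at normal speed without noise, the parameters are right
])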
-