Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (70)
-
Regular Cron tasks on the farm
1 December 2010, by
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance in the shared farm on a regular basis. Combined with a system Cron on the central site of the farm, this makes it easy to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...) -
Mediabox: opening images in the largest space available to the user
8 February 2011, by
Image display is limited by the width allowed by the site's design (which depends on the theme in use), so images are shown at a reduced size. To take advantage of all the space available on the user's screen, a feature can be added that displays the image in a media box appearing above the rest of the content.
To do this, the "Mediabox" plugin must be installed.
Configuring the media box
As soon as (...) -
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
On other sites (9856)
-
Implement chunk buffer size input
5 July 2022, by imagesck
I am trying to implement chunked stream output. In this case the input is a 640x360 video file with 7070 frames in rgb32 (r, g, b, a), so the total stream length will be 6515712000 bytes. I cap the output at one frame = 921600 bytes per chunk. Looking at the decode stdout data, if I'm not mistaken, the stream size can't be determined in advance, so I put the incoming stream buffers into an array and add a condition that checks whether the buffered data exceeds one frame's length; when it does, the decode process is paused, the frame is written to the encoder, and then decoding is resumed. The problem is that the encode process just freezes. What's wrong? I checked the buffer length after the splice and it returns 0, which should trigger the decode process to resume.


const { spawn } = require('child_process');
const path = require('path');

const decArgs = [
  '-i', path.join(__dirname + '/public/future.mp4'),
  '-an',
  '-pix_fmt', 'rgb32',
  '-f', 'rawvideo',
  '-'
];

const encArgs = [
  '-f', 'rawvideo',
  '-pix_fmt', 'rgb32',
  '-s', '640x360',
  '-i', '-',
  '-c:v', 'libx264',
  '-preset', 'ultrafast',
  '-crf', '30',
  '-pix_fmt', 'yuv420p',
  '-y',
  path.join(__dirname + '/public/output.mp4')
];

const decode = spawn('ffmpeg', decArgs, {
  stdio: [null, 'pipe', null]
});

const encode = spawn('ffmpeg', encArgs, {
  stdio: ['pipe', null, null]
});


let buffer = [];
decode.stdout.on('data', data => {
  buffer = buffer.concat(data);

  if (buffer.length > 921600) {
    decode.stdout.pause();

    const buffers = Buffer.from(buffer.splice(0, 921600));
    encode.stdin.write(buffers);
  }

  if (buffer.length < 921600) decode.stdout.resume();
});

encode.stdout.on('data', data => {
  console.log('Progress: ');
  console.log(data.toString());
});

encode.stderr.on('data', data => {
  console.log('Error: ');
  console.log(data.toString());
});
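A possible source of the freeze (hedged, untested): buffer is a plain JavaScript array, so buffer.concat(data) appends each incoming Buffer as a single element; buffer.length then counts chunks rather than bytes, and buffer.splice(0, 921600) returns chunk objects rather than 921600 bytes. Below is a minimal sketch of how the loop could be restructured; it reuses the decode and encode processes from above, while FRAME_SIZE and pending are names introduced here for illustration. Bytes are accumulated with Buffer.concat, backpressure is handled through the return value of write() and the 'drain' event, and encode.stdin is closed when decoding ends so the encoder can finalize the file.

const FRAME_SIZE = 640 * 360 * 4; // one rgb32 frame = 921600 bytes

let pending = Buffer.alloc(0);

decode.stdout.on('data', chunk => {
  // Accumulate raw bytes so that pending.length is always a byte count.
  pending = Buffer.concat([pending, chunk]);

  while (pending.length >= FRAME_SIZE) {
    const frame = pending.subarray(0, FRAME_SIZE);
    pending = pending.subarray(FRAME_SIZE);

    // write() returns false when the encoder's stdin buffer is full:
    // pause the decoder and resume only after 'drain' to avoid stalling.
    if (!encode.stdin.write(frame)) {
      decode.stdout.pause();
      encode.stdin.once('drain', () => decode.stdout.resume());
    }
  }
});

decode.stdout.on('end', () => {
  if (pending.length > 0) encode.stdin.write(pending); // flush any remaining bytes
  encode.stdin.end(); // let the encoder flush and finalize output.mp4
});

If per-frame chunking is not strictly required, decode.stdout.pipe(encode.stdin) would handle the buffering and backpressure automatically.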


-
Normalizing audio of several wav snippets with ffmpeg
17 May 2023, by dick_kickem
I searched the site and figured that maybe ffmpeg-normalize could be part of my solution, but I'm really not sure.

In my free time my friends and I create quizzes for people to solve. One of them is a music quiz, where you hear audio snippets and have to guess the artist and the song title. A few years back I did most of them using Audacity, which meant recording snippets from existing videos, adding a fade-in and fade-out to every snippet, putting announcements like "Number x" before the x-th song, and also making sure that all songs were of fairly equal loudness (-6.0 dB).


The result in Audacity looked like this.




Lazy as I am, I learned about ffmpeg and wrote a script that does all these steps for me. The script is mostly written in bash. I take some audio files, extract the audio in wav format, add a fade-in and a fade-out, and then try to set the volume to -6.0 dB as with Audacity. The part where this happens looks like this:

...[some code before]...

# write the audio details of temp.wav into the "info" file
ffmpeg -i temp.wav -filter:a volumedetect -f null - 2> info

# check out the max volume of temp.wav
max_vol=$(grep "max_volume" info | cut -d ' ' -f5)

# determine the difference between the max volume and -6
vol_diff=$(expr "-6-($max_vol)" | bc -l)

# change temp.wav loudness by the determined difference
ffmpeg -y -i temp.wav -filter:a "volume=$vol_diff$db" $count.wav

...[some code after]...


I do this for all snippets, which usually leaves me with ten snippets named
1.wav, 2.wav and so on. Lastly, I concatenate them all with announcements, in the order nr1.wav, 1.wav, nr2.wav, 2.wav and so on. Overall this works really well for me. Yet the loudness is not quite as equal as with Audacity. Here is a screenshot of a generated music quiz made with the described script (not the same music snippets as in the example before):



You can see some peaks sticking out. It's not bad, and in fact it works for me in most cases, but it's not what I used to get with Audacity. Does anyone have any idea how to fix that and make the loudness more even?


Thank you in advance and kind regards
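For what it's worth, the volumedetect/volume pair in the script matches peak level (max_volume), not perceived loudness, so snippets with identical peaks can still sound unequally loud. Below is a minimal, untested sketch of an alternative pass using ffmpeg's EBU R128 loudnorm filter; the file glob, the norm_ output prefix and the target values I=-16 and TP=-1.5 are illustrative assumptions, not part of the original script.

# Hypothetical loudness-based pass; adjust the glob and the targets to taste.
# loudnorm resamples internally to 192 kHz, so -ar restores the output rate
# (assuming the snippets are 44.1 kHz).
for f in [0-9]*.wav nr*.wav; do
  ffmpeg -y -i "$f" -filter:a "loudnorm=I=-16:TP=-1.5:LRA=7" -ar 44100 "norm_$f"
done

For tighter matching, loudnorm can also be run in two passes (a first pass with print_format=json to measure, then a second pass feeding back the measured_* values), which is essentially what the ffmpeg-normalize tool mentioned above automates.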


-
How to decode and display real-time H264 stream using ffmpeg in Python ?
25 March 2022, by yiiiiiiran
I would like to pipe the live stream to ffmpeg and display it in real time using Python.


Does anyone know how to pipe the stream to ffmpeg, and at the same time display it after decoding?


I managed to get a real-time stream from my Raspberry Pi 3 to a Windows PC, using an RS232 connection with a baud rate of 2M.


The format of the stream is H264. The data package I get for each frame is of type <class 'bytes'>.
In order for the program to know where each package ends, I've added


bytes([0xcc,0xdd,0xee,0xff])


to the end of each package, so that my serial port reader will read a package until it sees those bytes.


Let's assume the stream has WIDTH, HEIGHT, NUM_FRAMES, FPS = 320, 240, 90, 30.


I have this command to decode the h264 stream:


cmd = ["C:/XXXXXX/ffmpeg.exe",
 "-probesize", "32",
 "-flags", "low_delay",
 "-f", "h264",
 "-i", "pipe:",
 "-f", "rawvideo", "-pix_fmt", "rgb24", "-s", "384x216",
 "pipe:"]

decode_process = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE)


The loop that reads each stream package and writes it to ffmpeg's stdin is:


while datetime.now() < end_time:
    pkg = ser.read_until(expected=bytes([0xcc,0xdd,0xee,0xff]))  # output: <class 'bytes'>
    frame_len = len(pkg) - 4
    frame_inBytes = pkg[0:frame_len]
    decode_process.stdin.write(frame_inBytes)

I want to write the real-time stream to the pipe; however, it shows this error:


[h264 @ 0000017322a3e980] missing picture in access unit with size 48
[h264 @ 0000017322a3e980] no frame!
[h264 @ 0000017322a2d240] Stream #0: not enough frames to estimate rate; consider increasing probesize
[h264 @ 0000017322a2d240] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (32) options 
Input #0, h264, from 'pipe:':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: h264, none, 25 tbr, 1200k tbn
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
[h264 @ 0000017322a3f180] no frame!
Error while decoding stream #0:0: Invalid data found when processing input
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished
Conversion failed!
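
Not a full answer, but a hedged sketch of the display side, assuming the decoding problem above gets resolved (the "no frame!" / "missing picture" messages suggest ffmpeg is not yet receiving a valid H264 elementary stream, which the log alone cannot confirm). The sketch reads one decoded rgb24 frame at a time from decode_process.stdout and shows it with OpenCV; W, H and FRAME_BYTES are names introduced here, and NumPy plus opencv-python are assumed to be installed.

import numpy as np
import cv2

W, H = 384, 216              # must match the -s 384x216 passed to ffmpeg
FRAME_BYTES = W * H * 3      # rgb24 = 3 bytes per pixel

while True:
    raw = decode_process.stdout.read(FRAME_BYTES)
    if len(raw) < FRAME_BYTES:   # decoder exited or the stream ended
        break
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((H, W, 3))
    # ffmpeg emits RGB here; OpenCV's imshow expects BGR
    cv2.imshow('stream', cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) == 27:     # Esc quits the preview
        break

cv2.destroyAllWindows()

Because this loop blocks on stdout, the serial-reading loop that writes to decode_process.stdin would need to run in a separate thread; otherwise the two ends of the pipe can deadlock once ffmpeg's internal buffers fill up.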



