
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (76)
-
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What is new?
Regarding software dependencies: use of the latest FFMpeg versions (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Customizing by adding your logo, banner or background image
5 September 2013, by
Some themes take three customization elements into account: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013, by
Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP via the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news-item creation form.
News-item creation form: for a document of the news type, the fields offered by default are: publication date (customize the publication date) (...)
On other sites (9851)
-
lavu/timer : use time for AV_READ_TIME on RISC-V
15 August 2023, by Rémi Denis-Courmont
So far, AV_READ_TIME would return the cycle counter. This posed two
problems:
1) On recent systems, it would just raise an illegal instruction
exception. Indeed RDCYCLE is blocked in user space to ward off some
side channel attacks. In particular, this would cause the random
number generator to crash.
2) It does not match the x86 behaviour and the apparent original intent
of AV_READ_TIME in the functional code base (outside test cases).

So this replaces the cycle counter with the time counter. The unit is
a platform-dependent constant fraction of time, and the value should be
stable across harts (RISC-V lingo for a physical CPU thread).
-
How to save audio chunks from client to ffmpeg readable file?
22 September 2023, by LuckOverflow
I am live recording audio data from a TS React front-end and need to send it to the server, where it can be saved to a file so that ffmpeg can mix it. The front-end saves the mic data to a blob with type "audio/webm; codecs=opus" when printed in the browser terminal. I send the exact object that I printed to the server, where logging it indicates it is a, or was passed as a, "Buffer" object.


I have tried saving that Buffer as a webm file, but when I pass that file as an input to ffmpeg's ffprobe, I get the errors "Format matroska,webm detected only with a low score of 1...", "EBML header parsing failed" and "Invalid data found when processing input." I have tried several other formats without success.


I need a way to transform this Buffer object into an audio file that ffmpeg can mix. When I am finished, I also need to be able to do the reverse operation, sending it in the same format to another client for playback, which is currently working.


Code that records and sends the audio (TS React):




const startRecording = async function () {
  inputStream = await navigator.mediaDevices.getUserMedia({ audio: true });

  mediaRecorder.current = new MediaRecorder(inputStream, { mimeType: "audio/webm; codecs=opus" });

  mediaRecorder.current.ondataavailable = (e) => {
    console.log(e.data);
    if (e.data.size > 0) {
      socket.emit("recording", e.data);
      console.log("Audio data recorded. Transmitting to server via socketio...");
    }
  };

  mediaRecorder.current.start(1000);
};




Code that receives and tries to save the Buffer to a file (JS Node.js):




socket.on("recording", (chunk) => {
  console.log("Audio chunk received. Transmitting to frontend...");
  socket.broadcast.emit('listening', chunk);

  fs.writeFileSync('out.webm', chunk.toString());
  if (counter > 3) {
    console.log("Trying ffmpeg...");

    ffmpegInstance
      .input('out.webm')
      .complexFilter([
        {
          filter: 'amix'
        }
      ])
      .save('./Music/FFMPEGSTREAM.mp3');
  }

  counter++;
});



The fluent-ffmpeg interface package is included in the server code, but I have been using ffmpeg in the terminal (Pop OS) to debug. The goal is to save the file to a RAM disk and use fluent-ffmpeg to mix before sending to a different client for playback. Currently I am just trying to save it to disk and get the ffmpeg command line to work on it.


Update:
The problem was that the chunk I was analyzing didn't have the header info. MediaRecorder encodes the stream and then slices it up; it does not slice the recording into your specified time slots and encode each slice independently. I have not found a good solution to this. Saving the file, without toString I believe, results in a playable webm when the header is properly included.


-
Shell script for passing multiple audio sources as arguments to a script that uses ffmpeg for recording video/audio? [duplicate]
29 November 2023, by Raul Chiarella
I have the following Shell/Bash script, which records video with audio on my Linux machine:


#!/bin/bash

# Command to record audio from one microphone and video from one webcam
ffmpeg_cmd="ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 -f alsa -ac 1 -i hw:1,0 -acodec libmp3lame -ab 96k camera.mp4"

# Execute the command
echo "Executing: $ffmpeg_cmd"
eval $ffmpeg_cmd



This one works... But now:


What I want is to be able to execute this Bash/Shell script passing multiple audio source parameters, and choosing whether video is recorded or not (Video 1 = enabled, Video 0 = disabled), for example using
./recordVideo.sh -a "audio1,audio2" -v 1
...

I tried accepting multiple audio source arguments and passing them to FFmpeg using the following script:


#!/bin/bash

# Default variables
audio_sources=""
video_flag=0

# Function to display help
usage() {
 echo "Usage: $0 -a 'Microphone1,Microphone2' -v 1"
 echo " -a : List of audio sources, separated by commas"
 echo " -v : Video flag (1 to record video, 0 not to record)"
 exit 1
}

# Parse arguments
while getopts "a,v" opt; do
 case $opt in
 a) audio_sources=$OPTARG ;;
 v) video_flag=$OPTARG ;;
 *) usage ;;
 esac
done

# Check if audio arguments were provided
if [ -z "$audio_sources" ]; then
 echo "Error: Audio sources not specified."
 usage
fi

# Initial ffmpeg command configuration
cmd="ffmpeg"

# Counter for source mapping
source_count=0

# Audio settings
IFS=',' read -ra ADDR <<< "$audio_sources"
for source in $ADDR; do
 cmd+=" -f alsa -ac 1 -i $source"
 cmd+=" -map $source_count"
 ((source_count++))
done

# Video settings
if [ "$video_flag" -eq 1 ]; then
 cmd+=" -f video4linux2 -s 320x240 -i /dev/video0 -map $source_count"
 ((source_count++))
fi

# Audio codec configuration
cmd+=" -acodec libmp3lame -ab 96k"

# Output file name
cmd+=" recordedVideo.mp4"

# Execute command
echo "Executing: $cmd"
eval $cmd



But it throws the following error when I execute
./recordVideo.sh -a 'Microphone1,Microphone2' -v 1:



Error: Audio sources not specified.
Usage: ./recordVideo.sh -a 'Microphone1,Microphone2' -v 1
 -a : List of audio sources, separated by commas
 -v : Video flag (1 to record video, 0 not to record)




Can someone help me? :(
What am I doing wrong? Is it the shell syntax or the FFmpeg arguments that are wrong?