
Other articles (98)
-
APPENDIX: The plugins used specifically for the farm
5 March 2010. To work properly, the central/master site of the farm needs several plugins beyond those used by the channel sites: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
What exactly does this script do?
18 January 2011. This script is written in bash, so it can easily be used on any server.
It is only compatible with a specific list of distributions (see Liste des distributions compatibles).
Installing the MediaSPIP dependencies
Its main role is to install all of the software dependencies required on the server side, namely:
The basic tools needed to install the remaining dependencies; the development tools: build-essential (via APT from the official repositories); (...)
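By way of illustration, that step boils down to a command along these lines (a sketch only; the actual script may differ):

apt-get update && apt-get install -y build-essential

-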
Automatic backup of SPIP channels
1 April 2010. When setting up an open platform, it is important for the hosts to have reasonably regular backups available, to guard against any problems.
This task relies on two SPIP plugins: Saveauto, which makes regular backups of the database in the form of a MySQL dump (usable in phpMyAdmin); and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)
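As a rough illustration, the manual equivalent of what these two plugins automate might look like this (hypothetical database name and paths):

# What Saveauto automates: a regular dump of the SPIP database
mysqldump spip > backups/spip.sql
# What mes_fichiers_2 automates: a zip archive of the site's important data (documents, etc.)
zip -r backups/site-data.zip IMG/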
On other sites (8045)
-
Google Speech - Streaming Request Returns EOF
9 October 2017, by Josh. Using Go, I'm taking an RTMP stream, transcoding it to FLAC (using ffmpeg), and attempting to stream it to Google's Speech API to transcribe the audio. However, I keep getting EOF errors when sending the data. I can't find any information on this error in the docs, so I'm not exactly sure what's causing it.
I'm chunking the received data into 3s clips (the length isn't relevant, as long as it's under the maximum length of a streaming recognition request).
Here is the core of my code:
package main

import (
	"bufio"
	"bytes"
	"context"
	"log"
	"os"
	"os/exec"
	"os/signal"
	"syscall"
	"time"

	speech "cloud.google.com/go/speech/apiv1"
	speechpb "google.golang.org/genproto/googleapis/cloud/speech/v1"
)

func main() {
	done := make(chan os.Signal, 1) // buffered, so signal.Notify cannot drop the signal
	received := make(chan []byte)

	go receive(received)
	go transcribe(received)

	signal.Notify(done, os.Interrupt, syscall.SIGTERM)
	<-done
	os.Exit(0)
}

// receive runs ffmpeg against the RTMP stream and hands off the FLAC
// output in 3-second chunks.
func receive(received chan<- []byte) {
	var b bytes.Buffer
	stdout := bufio.NewWriter(&b)

	cmd := exec.Command("ffmpeg",
		"-i", "rtmp://127.0.0.1:1935/live/key", // the incoming RTMP stream
		"-f", "flac", // transcode to FLAC
		"-ar", "16000", // resample to 16 kHz
		"-") // write to stdout
	cmd.Stdout = stdout
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	ticker := time.NewTicker(3 * time.Second)
	for range ticker.C {
		stdout.Flush()
		log.Printf("Received %d bytes", b.Len())
		received <- b.Bytes()
		b.Reset()
	}
}

// transcribe opens a streaming-recognize session and forwards each chunk.
func transcribe(received <-chan []byte) {
	ctx := context.TODO()
	client, err := speech.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	stream, err := client.StreamingRecognize(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Send the initial configuration message.
	if err = stream.Send(&speechpb.StreamingRecognizeRequest{
		StreamingRequest: &speechpb.StreamingRecognizeRequest_StreamingConfig{
			StreamingConfig: &speechpb.StreamingRecognitionConfig{
				Config: &speechpb.RecognitionConfig{
					Encoding:        speechpb.RecognitionConfig_FLAC,
					LanguageCode:    "en-GB",
					SampleRateHertz: 16000,
				},
			},
		},
	}); err != nil {
		log.Fatal(err)
	}

	for data := range received {
		if len(data) == 0 {
			continue
		}
		log.Printf("Sending %d bytes", len(data))
		if err := stream.Send(&speechpb.StreamingRecognizeRequest{
			StreamingRequest: &speechpb.StreamingRecognizeRequest_AudioContent{
				AudioContent: data,
			},
		}); err != nil {
			log.Printf("Could not send audio: %v", err)
		}
	}
}

Running this code gives this output:
2017/10/09 16:05:00 Received 191704 bytes
2017/10/09 16:05:00 Saving 191704 bytes
2017/10/09 16:05:00 Sending 191704 bytes
2017/10/09 16:05:00 Could not send audio: EOF
2017/10/09 16:05:03 Received 193192 bytes
2017/10/09 16:05:03 Saving 193192 bytes
2017/10/09 16:05:03 Sending 193192 bytes
2017/10/09 16:05:03 Could not send audio: EOF
2017/10/09 16:05:06 Received 193188 bytes
2017/10/09 16:05:06 Saving 193188 bytes
2017/10/09 16:05:06 Sending 193188 bytes // Notice that this doesn't error
2017/10/09 16:05:09 Received 191704 bytes
2017/10/09 16:05:09 Saving 191704 bytes
2017/10/09 16:05:09 Sending 191704 bytes
2017/10/09 16:05:09 Could not send audio: EOF
Notice that not all of the Sends fail.
Could anyone point me in the right direction here? Is it something to do with the FLAC headers, or something like that? I also wonder whether resetting the buffer causes some of the data to be dropped (i.e. it's a non-trivial operation that actually takes some time to complete) and the API doesn't like the missing data?
Any help would be really appreciated.
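Update, with a guess of my own in case it helps anyone else. Two details look suspect. First, grpc-go documents that when Send returns io.EOF the stream has already been terminated by the server, and the status that actually ended it is surfaced by a subsequent Recv, so the EOF above is probably hiding the real error. Second, b.Bytes() returns a slice that aliases the buffer's memory while ffmpeg keeps writing from another goroutine, so a chunk can be reset or overwritten mid-send. The helpers below are an untested sketch (the names are mine) that would splice into the program above:

import (
	"bytes"
	"io"
	"log"

	speechpb "google.golang.org/genproto/googleapis/cloud/speech/v1"
)

// snapshot copies the buffered bytes so the returned slice no longer aliases
// memory that ffmpeg's writer may still touch, then clears the buffer.
func snapshot(b *bytes.Buffer) []byte {
	chunk := make([]byte, b.Len())
	copy(chunk, b.Bytes())
	b.Reset()
	return chunk
}

// sendChunk forwards one audio chunk; when Send reports io.EOF, it asks Recv
// for the status that actually terminated the stream.
func sendChunk(stream speechpb.Speech_StreamingRecognizeClient, data []byte) {
	err := stream.Send(&speechpb.StreamingRecognizeRequest{
		StreamingRequest: &speechpb.StreamingRecognizeRequest_AudioContent{
			AudioContent: data,
		},
	})
	switch {
	case err == io.EOF:
		if _, rerr := stream.Recv(); rerr != nil {
			log.Printf("stream closed by server: %v", rerr)
		}
	case err != nil:
		log.Printf("Could not send audio: %v", err)
	}
}

In receive, the hand-off would then be received <- snapshot(&b) instead of sending b.Bytes() and resetting; a mutex around the buffer (which is shared with ffmpeg's stdout writer) would still be needed to make it fully race-free.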
-
google speech to text errors out (grpc invalid deadline NaN)
15 December 2019, by jamescharlesworth. I have an ffmpeg script that cuts an audio file into a short 5-second clip; however, after I cut the file, calling the Google Speech recognize command errors out.
Creating a clip (full code link):
const got = require('got');
const ffmpeg = require('fluent-ffmpeg');

const uri = 'http://traffic.libsyn.com/joeroganexp/p1400.mp3?dest-id=19997';
const command = ffmpeg(got.stream(uri));

command
  .seek(0)           // start at the beginning
  .duration(5)       // keep a 5-second clip
  .audioBitrate(128)
  .format('mp3')
  // ... (the chain continues in the full script, which writes ./clip2.mp3)
...which works fine and creates ./clip2.mp3.
I then take that file and upload it to the speech-to-text API, and it times out (script here). When I pass the timeout and maxRetries arguments I can get the actual error:
Error: 2 UNKNOWN: Getting metadata from plugin failed with error: Deadline is too far in the future
at Object.callErrorFromStatus (/Users/jamescharlesworth/Downloads/demo/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
at Http2CallStream.<anonymous> (/Users/jamescharlesworth/Downloads/demo/node_modules/@grpc/grpc-js/build/src/client.js:96:33)
at Http2CallStream.emit (events.js:215:7)
at /Users/jamescharlesworth/Downloads/demo/node_modules/@grpc/grpc-js/build/src/call-stream.js:98:22
at processTicksAndRejections (internal/process/task_queues.js:75:11) {
code: 2,
details: 'Getting metadata from plugin failed with error: Deadline is too far in the future',
metadata: Metadata { internalRepr: Map {}, options: {} },
note: 'Exception occurred in retry method that was not classified as transient'
}
Stepping through the grpc code I see that the deadline is an invalid date.
This seems to be causing the issue, but I assume it may come from incorrect params passed into the speechclient.recognize() method.
A few other things to note:
- The script works for some audio files, not all
- I can upload my broken clip, clip2.mp3, to the demo app here and it works fine.
- If I change the seek command of my ffmpeg script to start at 0.01, the speech recognize command works (however, it breaks other audio clips, as that's not the correct starting point). I notice that when I do this, the PNG embedded in the mp3 gets stripped out and the file is much smaller (a possible workaround is sketched below).
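If the embedded artwork really is what trips the API up, a workaround I'm considering is to drop the picture stream explicitly instead of nudging the seek. This is an assumption, not a confirmed fix, and the output filename is made up:

# Copy the audio as-is but drop the embedded PNG (ffmpeg treats cover art as a video stream)
ffmpeg -i clip2.mp3 -vn -codec:a copy clip2-audio-only.mp3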
-
How to Add PulseAudio Server to quay.io/browser/google-chrome-stable Docker Image for Audio Support with Screen Recording?
17 April, by Ahmed Seddik Bouchiba. I'm trying to set up an environment for recording the screen of a Chrome browser running in a Docker container, and I need to enable audio support. I'm using the quay.io/browser/google-chrome-stable:133.0.6943.98-6 image for the browser and quay.io/aerokube/xvfb:21.1 for the virtual framebuffer that captures the screen.


However, I'm facing an issue: audio is not supported in the Chrome Docker image, and I need it for the recording. The setup uses FFmpeg in a separate container to stream the recorded video, but without audio from the browser the setup isn't complete.


I'm looking for guidance on how to add a PulseAudio server to the Chrome image to enable audio support. Specifically:


How can I configure the Docker image quay.io/browser/google-chrome-stable:133.0.6943.98-6 to support PulseAudio?

Are there any considerations or best practices when adding PulseAudio to a headless browser Docker container?

Is it possible to run the PulseAudio server in a separate container and link it to the Chrome container, or should it be included directly in the Chrome container?
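The separate-container route is the one I've been leaning towards. Here is a minimal docker-compose sketch of what I have in mind; it is untested, and apart from the Chrome image name everything in it (the base image for PulseAudio, the socket path, the module options) is an assumption:

# docker-compose.yml (untested sketch)
services:
  pulseaudio:
    image: alpine:3.19  # assumed; any image that can install pulseaudio would do
    command: >
      sh -c "apk add --no-cache pulseaudio &&
             pulseaudio --daemonize=no --exit-idle-time=-1
             --load='module-native-protocol-unix socket=/tmp/pulse/native auth-anonymous=1'"
    volumes:
      - pulse-socket:/tmp/pulse
  chrome:
    image: quay.io/browser/google-chrome-stable:133.0.6943.98-6
    environment:
      # PulseAudio clients inside Chrome find the server through this variable
      - PULSE_SERVER=unix:/tmp/pulse/native
    volumes:
      - pulse-socket:/tmp/pulse
volumes:
  pulse-socket:

If this direction is sound, the same socket volume could presumably be mounted into the FFmpeg container as well, so it can capture the browser's audio.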



Any help on adding PulseAudio support to this Chrome Docker image would be greatly appreciated!


Additional context:


The goal is to run a headless Chrome browser with audio support, record the browser's activity (both video and audio), and stream it using FFmpeg.

I’m using Docker Compose to orchestrate the containers but haven’t figured out how to integrate PulseAudio into the setup effectively.



Thanks in advance!