
Media (91)
-
#3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#1 The Wires
11 October 2011
Updated: February 2013
Language: English
Type: Audio
-
ED-ME-5 1-DVD
11 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (85)
-
Customizing by adding a logo, a banner, or a background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
Authorizations overridden by plugins
27 April 2010
MediaSPIP core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
On other sites (11818)
-
How to print the video metadata output by the browser version of ffmpeg.wasm to the console of Google Chrome?
17 January 2021, by helloAl
I would like to ask how to use the browser version of ffmpeg.wasm.


From my investigation, I know that the following command can be used to write a video's metadata to a file from a Windows or macOS terminal.


ffmpeg -i testvideo.mp4 -f ffmetadata testoutput.txt



and the metadata is then written to that file.



I want to parse a video's metadata in the browser and print it to the Google Chrome console (or write it to a file). I know that the browser version (usage: browser) of ffmpeg.wasm (https://github.com/ffmpegwasm/ffmpeg.wasm) should be able to do this, but I have looked at its examples, and they do not cover this part (https://github.com/ffmpegwasm/ffmpeg.wasm/blob/master/examples/browser/image2video.html).
So I want to ask how to achieve this. Thank you.
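A sketch of one way to do this, assuming the ffmpeg.wasm 0.x browser API (createFFmpeg, setLogger, fetchFile from the UMD bundle; verify the names against the version you actually load). ffmpeg prints the "-i" banner (Duration, Stream #0, tag metadata) on stderr, and ffmpeg.wasm forwards those lines to the logger with type 'fferr', so collecting them is enough to get the metadata into the Chrome console:

```javascript
// Sketch only -- the ffmpeg.wasm wiring below is commented out and assumed,
// not verified against your exact version.
const logLines = [];
const onLog = ({ type, message }) => {
  // ffmpeg writes the input banner (Duration, Stream, metadata) to stderr;
  // ffmpeg.wasm forwards those lines with type 'fferr'.
  if (type === 'fferr') logLines.push(message);
};

// Wiring it up in the page (not run here):
// const { createFFmpeg, fetchFile } = FFmpeg;   // global from the UMD bundle
// const ffmpeg = createFFmpeg();
// ffmpeg.setLogger(onLog);
// await ffmpeg.load();
// ffmpeg.FS('writeFile', 'testvideo.mp4', await fetchFile(fileInput.files[0]));
// await ffmpeg.run('-i', 'testvideo.mp4', '-f', 'ffmetadata', 'testoutput.txt');
// console.log(logLines.join('\n'));             // banner in the Chrome console
// const meta = ffmpeg.FS('readFile', 'testoutput.txt');
// console.log(new TextDecoder().decode(meta));  // the ffmetadata file itself
```

Reading `testoutput.txt` back through `FS('readFile', ...)` covers the "output to a file" half of the question, since ffmpeg.wasm's filesystem is in-memory.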


-
avcodec/pngenc: fix sBIT writing for indexed-color PNGs
19 July 2024, by Leo Izen
We currently write invalid sBIT entries for indexed PNGs, which per the PNG specification [1] must be 3 bytes long. The values are also capped at 8 for indexed-color PNGs, not at the palette depth. This patch fixes both of these issues, previously fixed in the decoder but not the encoder.
[1]: https://www.w3.org/TR/png-3/#11sBIT
Regression since: c125860892e931d9b10f88ace73c91484815c3a8.
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Reported-by: Ramiro Polla <ramiro.polla@gmail.com>
-
Google Cloud Speech-to-Text not giving output for OGG & MP3 files
27 April 2021, by Vedant Jumle
I am trying to perform speech-to-text on a bunch of audio files that are over 10 minutes long. I don't want to waste storage in the cloud bucket by uploading raw wav files, so I am using ffmpeg to convert them to either ogg or mp3:
ffmpeg -y -i audio.wav -ar 12000 -r 16000 audio.mp3


ffmpeg -y -i audio.wav -ar 12000 -r 16000 audio.ogg
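Two details in the commands above are worth flagging: -r is a video frame-rate flag (the audio sample-rate flag is -ar, which here is set to 12000 and so does not match the 16000 declared in the config further down), and by default ffmpeg writes Vorbis, not Opus, into a plain .ogg, so an OGG_OPUS config will not match the file. A sketch of a conversion command consistent with an OGG_OPUS / 16 kHz config (assumes an ffmpeg build with libopus; the helper name is mine, not from the question):

```python
# Build an ffmpeg command whose output matches a Speech-to-Text config.
# Hypothetical helper; run it with subprocess.run(opus_ogg_cmd(...), check=True).
def opus_ogg_cmd(src, dst, sample_rate=16000):
    return [
        "ffmpeg", "-y", "-i", src,
        "-ar", str(sample_rate),  # audio sample rate (-r is a video flag)
        "-ac", "1",               # mono, the usual Speech-to-Text layout
        "-c:a", "libopus",        # actual Opus, as OGG_OPUS requires
        dst,
    ]
```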


For testing purposes I ran the speech-to-text service on a dummy wav file and it worked: I got the text as expected. But for some reason it doesn't detect any speech when I use the ogg or mp3 file. I could not get amr files to work either.


My code:


from google.cloud import speech

def transcribe_gcs(gcs_uri):
    client = speech.SpeechClient()

    audio = speech.RecognitionAudio(uri=gcs_uri)
    config = speech.RecognitionConfig(
        encoding="OGG_OPUS",  # replace with "LINEAR16" for wav, "OGG_OPUS" for ogg, "AMR" for amr
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    print("starting operation")
    operation = client.long_running_recognize(config=config, audio=audio)
    response = operation.result()
    print(response)


I have set up the authentication properly, so that is not a problem.


When I run the speech-to-text service on the same audio in ogg or mp3 format (for mp3 I just comment out the encoding setting in the config), it gives no response: it just prints a line break and finishes.


What can I do to fix this?
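Beyond the codec question, an empty response from the API is often a mismatch between the declared sample_rate_hertz and the file's real rate. For the wav case this can be checked with the standard library before building the config; the helper below is illustrative only, not part of the Google client:

```python
import wave

def wav_sample_rate(path):
    """Return the actual sample rate of a WAV file (stdlib only)."""
    with wave.open(path, "rb") as w:
        return w.getframerate()

# Hypothetical usage: fail fast instead of getting an empty transcript.
# declared = 16000  # the value passed as sample_rate_hertz
# actual = wav_sample_rate("audio.wav")
# assert actual == declared, f"config says {declared} Hz, file is {actual} Hz"
```

The same idea applies to the converted files: whatever rate -ar produced is the rate the RecognitionConfig has to declare.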