
Media (91)
- Corona Radiata (26 September 2011; updated September 2011). Language: English. Type: Audio.
- Lights in the Sky (26 September 2011; updated September 2011). Language: English. Type: Audio.
- Head Down (26 September 2011; updated September 2011). Language: English. Type: Audio.
- Echoplex (26 September 2011; updated September 2011). Language: English. Type: Audio.
- Discipline (26 September 2011; updated September 2011). Language: English. Type: Audio.
- Letting You (26 September 2011; updated September 2011). Language: English. Type: Audio.
Other articles (76)
- User profiles
12 April 2011. Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
- Configuring language support
15 November 2010. Accessing the configuration and adding supported languages
To enable support for new languages, go to the "Administrer" section of the site.
From there, the navigation menu gives access to a "Gestion des langues" section where support for additional languages can be activated.
Each newly added language can still be deactivated as long as no object has been created in that language; once one has, it becomes greyed out in the configuration and (...)
- Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.
On other sites (6224)
- How to insert bullet screen comments into a video by using ffmpeg?
7 September 2019, by Saeron Meng
I would like to add some real-time comments to my video, but I do not know how to use ffmpeg to achieve this. The comments are "bullet screen" comments that scroll across the video from right to left.
My idea is to measure the length of each comment and assign it a speed to move at. Don't worry about the source of the comments; I have already saved them as an XML file, and I can also convert it to SRT. For instance:
<chat timestamp="671.195">
<ems utctime="1562584080" sender="Bill">
<richtext></richtext>
</ems>
</chat>
<chat timestamp="677.798">
<ems utctime="1562584086" sender="Jack">
<richtext></richtext>
</ems>
</chat>
The final result is like this (I did not find an example displayed in English): the overlaid comments move horizontally across the video.
I have searched for solutions on the Internet, but most of them explain how to write ASS/SRT files and add static subtitles, like this:
3
00:00:39,770 --> 00:00:41,880
When I was lying there in the VA hospital ...
4
00:00:42,550 --> 00:00:44,690
... with a big hole blown through the middle of my life,
5
00:00:45,590 --> 00:00:48,120
... I started having these dreams of flying.

ffmpeg -i infile.mp4 -i infile.srt -c copy -c:s mov_text outfile.mp4
But I need a different kind of "subtitles", ones that can move.
So my question is: how do I modify the ffmpeg command and the SRT template so that the subtitles are arranged from the top to the bottom of the frame and move from right to left?
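One approach that should work (a sketch, not something from the original post) is to convert each XML comment into an ASS subtitle event carrying a \move(x1,y1,x2,y2) override tag, then burn the track into the video with ffmpeg's ass (or subtitles) filter; soft mov_text subtitles like the command above produces cannot scroll. The file name comments.ass, the style values and the comment text are made up for illustration; the start times correspond to the timestamp attributes in the XML above (671.195 s ≈ 0:11:11.20, 677.798 s ≈ 0:11:17.80), and each comment is given 8 seconds to cross the frame:

[Script Info]
ScriptType: v4.00+
PlayResX: 1280
PlayResY: 720

[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
Style: Danmaku,Arial,36,&H00FFFFFF,&H00FFFFFF,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,1,0,7,0,0,0,1

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,0:11:11.20,0:11:19.20,Danmaku,Bill,0,0,0,,{\move(1280,40,-500,40)}example comment from Bill
Dialogue: 0,0:11:17.80,0:11:25.80,Danmaku,Jack,0,0,0,,{\move(1280,90,-500,90)}example comment from Jack

ffmpeg -i infile.mp4 -vf "ass=comments.ass" -c:a copy outfile.mp4

Each event travels from x = 1280 (the right edge of the 1280x720 canvas) to x = -500 (off the left edge) over the event's duration, so the speed of each comment can be varied by computing the end time from the text length, as described in the question; concurrent comments go on different y rows.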
- Improving Google Cloud Speech-to-Text accuracy
6 July 2020, by lr_optim
I'm working on a project where I need to perform these steps:

- Record a voice call (a .webm file)
- Split the webm file into chunks with ffmpeg and convert them to wav
- Transcribe the chunks using the SpeechRecognition library and the Google Cloud API
I've run into problems with transcription accuracy and I'm wondering if there is something I could do to improve it. At the moment I'm splitting the original file into 30-second chunks. I thought this might be one problem, that words could be lost at the split points, so I also tried longer chunks of up to 60 seconds, but did not notice any improvement in accuracy.
Reading through the SpeechRecognition docs I decided to set r.energy_threshold = 4000. I also tried to set the energy_threshold dynamically, like this:

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile(name) as source:  # name is the path of one wav chunk
    r.dynamic_energy_threshold = True  # adapt the threshold to the ambient noise
    r.adjust_for_ambient_noise(source, duration=1)
    audio = r.record(source)

I've also tested en-US and en-GB to see whether there is any difference, but there isn't as much as I'd like. The program is supposed to work with English spoken by Nordic people. If someone has experience choosing the right language model for speakers with an accent, please let me know.
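One thing worth trying here (a sketch, not something from the original post, and dependent on the SpeechRecognition version installed) is to pass phrase hints for the vocabulary you expect; recognize_google_cloud exposes this as preferred_phrases, which is forwarded to Google's speech contexts. The hint list and the credentials variable below are hypothetical:

# hypothetical phrase hints for words Google keeps getting wrong
hints = ["customer number", "Helsinki", "invoice"]
text = r.recognize_google_cloud(
    audio,
    credentials_json=GOOGLE_CLOUD_SPEECH_CREDENTIALS,  # service-account JSON key contents
    language="en-GB",
    preferred_phrases=hints,
)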

This is the ffmpeg command I use to split the webm file into chunks:

command = ['ffmpeg', '-i', filename, '-f', 'segment', '-segment_time', '30', parts_dir + outputname + '%09d.wav']


Is there something I could do better? I'm wondering whether the audio quality is simply not good enough and Google is having a hard time because of that.

The main problem is that I'm getting bad results (lots of wrong words) from Google, and I'm wondering if there is something I could do about it.
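One low-effort thing to check (a suggestion, not part of the original question) is the format of the chunks themselves: the Google speech API generally does best with 16-bit PCM, 16 kHz, mono audio, and ffmpeg can produce that directly while segmenting. A sketch of the same command with explicit resampling, reusing the filename, parts_dir and outputname variables from above:

command = ['ffmpeg', '-i', filename,
           '-ac', '1',            # downmix to mono
           '-ar', '16000',        # resample to 16 kHz
           '-c:a', 'pcm_s16le',   # 16-bit PCM wav
           '-f', 'segment', '-segment_time', '30',
           parts_dir + outputname + '%09d.wav']

Splitting on silences instead of fixed 30-second windows (for example with ffmpeg's silencedetect filter, or pydub's split_on_silence) also avoids cutting words in half at chunk boundaries, which is another common source of wrong words.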


- Convert an RTSP/RTMP livestream with G.711 audio into RTMP/RTSP with AAC audio
31 August 2018, by Alex Fuhr
I'm new to this forum and my English skills are not the best!
I have a website where I publish the video streams of my cameras to show live what happens inside during the nesting season. A guy with strong IT skills built me a little server to restream them (Datarhei Restreamer), but he no longer has time and his response times keep getting worse...
My problem: the Restreamer does not support the G.711 audio codec coming from the cameras, so the livestreams on the website are still without audio. I need to convert the livestreams (RTSP and RTMP, H.264 video) so that the audio becomes AAC or something else that is supported, but I have no idea how to do this. I tried FFMPEG but could not find the correct commands to get the result I need. There is apparently something about a streaming server to send the newly created stream to, and I cannot get my head around it; I just need a stream that is viewable in VLC player and can then be used as the input for my restreamer server.
I want to transcode the source stream into the correct codecs (audio from G.711 to AAC, everything else copied from the source) and then feed this "new" stream into my Restreamer server, and it should work fine! (I tested this with XSplit Broadcaster, but it does not run on a Raspberry Pi, only one instance can run at a time while two livestreams need to be encoded simultaneously, and the program has annoying bugs: endless, non-dismissable error messages, even though the stream keeps running.)
I have a new, second Raspberry Pi that is planned as the "live encoder" for the Restreamer Raspberry, where the "new" streams would come in (RTMP/RTSP input in a graphical UI). I am still trying with FFMPEG but still have no result...
Sorry about this long text with all the language issues, but I'm really frustrated: I bought two new cameras for a total of 450 euros just to get a livestream with sound, and now this :(
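For the record, the kind of FFmpeg command this usually comes down to (a sketch with placeholder URLs, not a tested solution for this exact setup) copies the H.264 video untouched, transcodes the G.711 audio to AAC, and pushes the result to an RTMP URL that the Restreamer or VLC can pick up:

ffmpeg -rtsp_transport tcp -i rtsp://CAMERA-IP:554/stream1 \
       -c:v copy \
       -c:a aac -ar 44100 -b:a 128k \
       -f flv rtmp://RESTREAMER-IP/live/cam1

G.711 is 8 kHz audio and some RTMP players do not handle 8 kHz AAC well, hence the resample to 44.1 kHz; -rtsp_transport tcp makes the RTSP input more robust on an unreliable network. The same pattern works with an rtmp:// camera URL as the input.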