
Other articles (96)
- MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
- HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
- APPENDIX: The plugins used specifically for the farm
5 March 2010. The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled ("mutualisation") instance as soon as users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (5233)
- ffmpeg: Cannot find a matching stream for unlabeled input pad 0 on filter Parsed_pad_5
26 March 2019, by rsswtmr. This shouldn't be that hard. I'm trying to combine three disparate video sources. I'm upscaling them to a consistent 1280x720 frame, with black backgrounds for letterboxing, and trying to concatenate them into the output file. The two input files are show segments, and the bumper is a random commercial that goes in the middle.
On an iMac Pro, macOS 10.14.3, ffmpeg 4.1.1. The command I'm trying to make work is:
ffmpeg -y -hide_banner -i "input1.mkv" -i "bumper.mkv" -i "input2.mkv" -filter_complex '[0:v]scale=1280x720:force_original_aspect_ratio=increase[v0],pad=1280x720:max(0\,(ow-iw)/2):max(0\,(oh-ih)/2):black[v0]; [1:v]scale=1280x720:force_original_aspect_ratio=increase[v1],pad=1280x720:max(0\,(ow-iw)/2):max(0\,(oh-ih)/2):black[v1]; [2:v]scale=1280x720:force_original_aspect_ratio=increase[v2],pad=1280x720:max(0\,(ow-iw)/2):max(0\,(oh-ih)/2):black[v2]; [v0][0:a][v1][1:a][v2][2:a]concat=n=3:v=1:a=1 [outv] [outa]' -map "[outv]" -map "[outa]" 'output.mkv'
The resulting output I get back is:
[h264 @ 0x7fbec9000600] [verbose] Reinit context to 720x480, pix_fmt: yuv420p
[info] Input #0, matroska,webm, from 'input1.mkv':
[info] Metadata:
[info] encoder : libebml v0.7.7 + libmatroska v0.8.1
[info] creation_time : 2009-07-20T01:33:54.000000Z
[info] Duration: 00:12:00.89, start: 0.000000, bitrate: 1323 kb/s
[info] Stream #0:0(eng): Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 708x480 (720x480) [SAR 10:11 DAR 59:44], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
[info] Stream #0:1(eng): Audio: ac3, 48000 Hz, stereo, fltp, 160 kb/s (default)
[info] Metadata:
[info] title : English AC3
[info] Stream #0:2(eng): Subtitle: subrip
[h264 @ 0x7fbec9019a00] [verbose] Reinit context to 304x240, pix_fmt: yuv420p
[info] Input #1, matroska,webm, from 'bumper.mkv':
[info] Metadata:
[info] CREATION_TIME : 2019-03-15T15:16:00Z
[info] ENCODER : Lavf57.7.2
[info] Duration: 00:00:18.18, start: 0.000000, bitrate: 274 kb/s
[info] Stream #1:0: Video: h264 (Main), 1 reference frame, yuv420p(tv, smpte170m/smpte170m/bt709, progressive, left), 302x232 (304x240) [SAR 1:1 DAR 151:116], 29.97 fps, 29.97 tbr, 1k tbn, 180k tbc (default)
[info] Stream #1:1: Audio: aac (LC), 44100 Hz, stereo, fltp, delay 2111 (default)
[info] Metadata:
[info] title : Stereo
[error] Truncating packet of size 3515 to 1529
[h264 @ 0x7fbec9014600] [verbose] Reinit context to 704x480, pix_fmt: yuv420p
[h264 @ 0x7fbec9014600] [info] concealing 769 DC, 769 AC, 769 MV errors in I frame
[matroska,webm @ 0x7fbec9011e00] [error] Read error at pos. 50829 (0xc68d)
[info] Input #2, matroska,webm, from 'input2.mkv':
[info] Metadata:
[info] encoder : libebml v0.7.7 + libmatroska v0.8.1
[info] creation_time : 2009-07-19T22:37:48.000000Z
[info] Duration: 00:10:07.20, start: 0.000000, bitrate: 1391 kb/s
[info] Stream #2:0(eng): Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 704x480 [SAR 10:11 DAR 4:3], 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc (default)
[info] Stream #2:1(eng): Audio: ac3, 48000 Hz, stereo, fltp, 160 kb/s (default)
[info] Metadata:
[info] title : English AC3
[info] Stream #2:2(eng): Subtitle: subrip
[Parsed_scale_0 @ 0x7fbec8716540] [verbose] w:1280 h:720 flags:'bilinear' interl:0
[Parsed_scale_2 @ 0x7fbec8702480] [verbose] w:1280 h:720 flags:'bilinear' interl:0
[Parsed_scale_4 @ 0x7fbec8702e40] [verbose] w:1280 h:720 flags:'bilinear' interl:0
[fatal] Cannot find a matching stream for unlabeled input pad 0 on filter Parsed_pad_5
[AVIOContext @ 0x7fbec862bfc0] [verbose] Statistics: 104366 bytes read, 2 seeks
[AVIOContext @ 0x7fbec870a100] [verbose] Statistics: 32768 bytes read, 0 seeks
[AVIOContext @ 0x7fbec87135c0] [verbose] Statistics: 460284 bytes read, 2 seeks
I have no idea what Parsed_pad_5 means. I Googled "Cannot find a matching stream for unlabeled input pad" and found absolutely no explanation anywhere. I'm a relative ffmpeg newbie. Before I start rooting around in the source code, can anyone point me in the right direction? Thanks in advance.
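A likely cause, going by the filter graph above, is that each scale output is labeled ([v0]) and then followed by a comma: in filter_complex syntax an output label ends the chain, so the pad filter that comes next is left with no connected input, which is what the "Cannot find a matching stream for unlabeled input pad" message is complaining about. A sketch of the same command with each scale and pad joined into a single chain and labeled only once at the end (and with force_original_aspect_ratio=decrease, since pad cannot shrink a frame that scale has made larger than 1280x720):

ffmpeg -y -hide_banner -i "input1.mkv" -i "bumper.mkv" -i "input2.mkv" -filter_complex '[0:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2:black[v0]; [1:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2:black[v1]; [2:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2:black[v2]; [v0][0:a][v1][1:a][v2][2:a]concat=n=3:v=1:a=1[outv][outa]' -map "[outv]" -map "[outa]" 'output.mkv'

The concat filter negotiates a common pixel and sample format on its own, but because the inputs mix 23.98 fps and 29.97 fps material, adding an fps filter (for example fps=24000/1001) to each chain is one option if the joins look juddery.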
- Convert an RTSP/RTMP livestream with G.711 audio into RTMP/RTSP with AAC audio
31 August 2018, by Alex Fuhr. I'm new on this forum and my English skills are not the best!
I have a website where I publish the cameras' video streams live, to show what happens inside during the nesting season. A guy with strong IT skills built me a little server to restream them (Datarhei Restreamer), but he no longer has time and his response times keep getting worse...
My problem: the restreamer doesn't support the "G.711" audio codec from the cameras, so the livestreams on the website are still without audio. I need to convert the livestreams (RTSP and RTMP, in H.264) so that the audio becomes "aac" or something else that is supported, but I have no idea how to do this. I tried it with FFmpeg but I can't find the right commands to get the result I want. There is something about a streaming server to send the newly created stream to, and I can't get my head around it (I just need a stream that is viewable with the VLC player and can then be used as input for my restreamer server, just the same as ca
I want to convert the source stream into the correct codec (audio from G.711 to AAC, the rest the same as the source), then feed this "new" stream into my restreamer server, and it will work fine! (Tested with XSplit Broadcaster, but it doesn't run on a Raspberry Pi, only one instance is runnable, and two livestreams need to be encoded at the same time.) And that program has annoying bugs (endless, non-removable error messages, even though the stream keeps running).
I have a second, new Raspberry Pi that is planned as the "live encoder" in front of the restreamer Raspberry, where the "new" streams will go in (RTMP/RTSP input on a graphical UI). I'm still trying with FFmpeg, but still no result...
Sorry about this long text with all the language issues, but I'm really frustrated, because I purchased 2 new cameras for a total of 450 euros just to finally get the livestream with sound :(
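For what it's worth, one way to approach this with FFmpeg alone is to copy the H.264 video untouched and only re-encode the audio from G.711 to AAC, then push the result to the restreamer's RTMP input. The camera address, restreamer address and stream key below are placeholders rather than real endpoints; a rough sketch:

ffmpeg -rtsp_transport tcp -i "rtsp://CAMERA-ADDRESS:554/stream" -c:v copy -c:a aac -b:a 128k -ar 44100 -f flv "rtmp://RESTREAMER-ADDRESS/live/STREAM-KEY"

Here -c:v copy leaves the video as-is, -c:a aac with -ar 44100 resamples the 8 kHz G.711 audio to AAC at a rate the FLV/RTMP container accepts, and -rtsp_transport tcp is only there because RTSP over UDP sometimes drops packets. The resulting RTMP URL can be opened in VLC to check the stream before wiring it into the restreamer.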
- Improving Google Cloud Speech-to-Text accuracy
6 July 2020, by lr_optim. I'm working on a project where I need to perform these steps:

- Record a voice call (.webm file)
- Split the webm file into chunks with ffmpeg and convert the files into wav
- Transcribe the chunks using the SpeechRecognition library and the Google Cloud API

I've faced problems with transcription accuracy and I'm wondering if there is something I could do to improve it. At the moment I'm splitting the original file into 30 s chunks. I thought one problem might be that I'm missing words because of the splitting, so I also tried longer chunks of just under 60 s, but didn't notice any improvement in accuracy.
Reading through the SpeechRecognition docs I decided to set r.energy_threshold = 4000. I also tried to set the energy_threshold dynamically, like this:

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile(name) as source:  # name = path to one wav chunk
    r.dynamic_energy_threshold = True
    r.adjust_for_ambient_noise(source, duration=1)  # calibrate on the first second
    audio = r.record(source)



I've also tested en-US and en-GB to see if there's any difference, but there isn't as much as I'd want. The program is supposed to work with English spoken by Nordic people. If someone has experience choosing the right language model for people speaking with an accent, please let me know.

This is the ffmpeg command I use to split the webm file into chunks:

command = ['ffmpeg', '-i', filename, '-f', 'segment', '-segment_time', '30', parts_dir + outputname + '%09d.wav']

Is there something I could do better? I'm wondering if the audio quality is not good enough and Google is having a hard time because of that.


The main problem is that I'm getting bad results (lots of wrong words) back from Google, and I'm wondering if there is something I could do about it.

