
Media (91)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011
Updated: February 2012
Language: English
Type: Video
-
Portrait video of a bee
14 May 2011
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
-
Map of Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (112)
-
Customize by adding your logo, banner or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.
-
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
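As a rough illustration of the HTML5-first, Flash-fallback approach that excerpt describes, the player selection might be sketched like this (assumed logic and file names, not MediaSPIP's actual code):

// Sketch of an HTML5-first player with a Flash fallback (assumed logic,
// not MediaSPIP's actual code). flowplayer() is provided by Flowplayer's script.
declare function flowplayer(el: string, swf: string, config: object): void;

const video = document.createElement('video');

// canPlayType returns "", "maybe" or "probably"
if (video.canPlayType('video/mp4; codecs="avc1.42E01E"')) {
  video.src = 'clip.mp4'; // hypothetical clip URL
  video.controls = true;
  document.getElementById('player')!.appendChild(video);
} else {
  // Older browser: hand the same clip to the Flowplayer Flash player.
  flowplayer('player', 'flowplayer.swf', { clip: { url: 'clip.mp4' } });
}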
On other sites (14458)
-
Video Conferencing in HTML5: WebRTC via Web Sockets
1 January 2014, by silvia. A bit over a week ago I gave a presentation at Web Directions Code 2012 in Melbourne. Maxine and John asked me to speak about something related to HTML5 video, so I went for the new shiny: WebRTC, real-time communication in the browser.
I only had 20 min, so I had to make it tight. I wanted to show off video conferencing without special plugins in Google Chrome in just a few lines of code, as is the promise of WebRTC. To a large extent, I achieved this. But I made some interesting discoveries along the way. Demos are in the slide deck.
UPDATE: Opera 12 has been released with WebRTC support.
Housekeeping: if you want to replicate what I have done, you need to install Google Chrome 19 or later. Then make sure you go to chrome://flags and activate the MediaStream and PeerConnection experiments. Restart your browser and you can now experiment with this feature. Big warning up-front: it's not production-ready, since there are still changes happening to the spec and there is no compatible implementation by another browser yet.
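As a quick sanity check before diving in, a feature test along these lines confirms the APIs are available (a sketch using today's standardized names; the experimental builds of that era used prefixed variants):

// Feature test sketch. Modern browsers expose navigator.mediaDevices.getUserMedia
// and RTCPeerConnection; experimental builds of the era used prefixed names.
const hasGetUserMedia = !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia);
const hasPeerConnection = 'RTCPeerConnection' in window || 'webkitRTCPeerConnection' in window;

console.log('getUserMedia supported:', hasGetUserMedia);
console.log('PeerConnection supported:', hasPeerConnection);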
Here is a brief summary of the steps involved to set up video conferencing in your browser (a code sketch follows the list):
- Set up a video element each for the local and the remote video stream.
- Grab the local camera and stream it to the first video element.
- (*) Establish a connection to another person running the same Web page.
- Send the local camera stream on that peer connection.
- Accept the remote camera stream into the second video element.
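A minimal sketch of steps 1, 2, 4 and 5, written against today's standardized API (RTCPeerConnection and navigator.mediaDevices have since replaced the experimental PeerConnection object this post used; the element IDs are assumptions):

// Step 1: one video element each for the local and remote streams.
const localVideo = document.querySelector<HTMLVideoElement>('#local')!;
const remoteVideo = document.querySelector<HTMLVideoElement>('#remote')!;

const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }], // Google's public STUN server
});

// Step 5: accept the remote camera stream into the second video element.
pc.ontrack = (event) => {
  remoteVideo.srcObject = event.streams[0];
};

// Steps 2 and 4: grab the local camera, preview it, and send it on the peer connection.
navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then((stream) => {
  localVideo.srcObject = stream;
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
});

// Step 3, the signalling that exchanges offer/answer SDP and ICE candidates,
// is application-specific; this post runs it over a Web Socket server.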
Now, the most difficult part of all of this (believe it or not) is the signalling part that is required to build the peer connection (marked with (*)). Initially I wanted to run completely without a server and just enter the remote's IP address to establish the connection. This is, however, not a functionality that the PeerConnection object provides [might this be something to add to the spec?].
So, you need a server known to both parties that can provide for the handshake to set up the connection. All the examples that I have seen, such as https://apprtc.appspot.com/, use a channel management server on Google’s appengine. I wanted it all working with HTML5 technology, so I decided to use a Web Socket server instead.
I implemented my Web Socket server using node.js (code of websocket server). The video conferencing demo is in the slide deck in an iframe – you can also use the stand-alone html page. Works like a treat.
While it is still using Google's STUN server to get through NAT, the messaging for setting up the connection runs completely through the Web Socket server. The messages that get exchanged are plain SDP message packets with a session ID. There are OFFER, ANSWER, and OK packets exchanged for each streaming direction.
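Purely for illustration, a relay along those lines can be sketched with Node.js and the ws package; the message shape below (a type plus a session ID wrapping the SDP) is an assumption modelled on the OFFER/ANSWER/OK exchange just described, not the post's actual wire format:

// Hypothetical signalling relay using the `ws` npm package.
import { WebSocketServer, WebSocket } from 'ws';

interface SignalMessage {
  type: 'OFFER' | 'ANSWER' | 'OK';
  session: string; // session ID shared by the two peers
  sdp?: string;    // SDP payload for OFFER/ANSWER packets
}

const wss = new WebSocketServer({ port: 8080 });
const sessions = new Map<string, Set<WebSocket>>();

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    const msg: SignalMessage = JSON.parse(data.toString());

    // Register the sender under its session ID, then relay the message
    // verbatim to every other peer in the same session.
    if (!sessions.has(msg.session)) sessions.set(msg.session, new Set());
    sessions.get(msg.session)!.add(socket);

    for (const peer of sessions.get(msg.session)!) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(JSON.stringify(msg));
      }
    }
  });
});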
I’m not running a public WebSocket server, so you won’t be able to see this part of the presentation working. But the local loopback video should work.
At the conference, it all went without a hitch (while the wireless played along). I believe you have to host the WebSocket server on the same machine as the Web page, otherwise it won’t work for security reasons.
A whole new world of opportunities lies out there when we get the ability to set up video conferencing on every Web page: scary and exciting at the same time!
-
Analytics for ePortfolios, Mahara Hui conference
I was privileged to present at the Mahara Hui conference in Wellington, New Zealand.
Here are the slides of my presentation “Analytics for ePortfolios”:
Summary: by using an analytics tool that integrates well with Mahara, such as Piwik, Mahara users can benefit from a multitude of insightful analytics reports.
Learn more
Mahara is a web application to build your electronic portfolio. You can create journals, upload files, embed social media resources from the web and collaborate with other users in groups. Mahara is a popular open source project built by a passionate community, and used in universities, schools and companies all over the world.
Mahara Hui is the first Kiwi conference on Mahara, the open source ePortfolio system, in New Zealand. This two-day conference was held at Te Papa in Wellington from 19 to 20 March 2014 (schedule).
Next steps
I’m excited to join the Mahara team at the Mahara Hui Hackfest organised today at the Catalyst IT offices. We will brainstorm how to integrate Piwik beautifully within Mahara, and how to ultimately provide students and employees with useful analytics on all the content they create!
-
ffmpeg failed to load audio file
14 April 2024, by Vaishnav Ghenge

Failed to load audio: ffmpeg version 5.1.4-0+deb12u1 Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 12 (Debian 12.2.0-14)
 configuration: --prefix=/usr --extra-version=0+deb12u1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librist --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --disable-sndio --enable-libjxl --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-libplacebo --enable-librav1e --enable-shared
 libavutil 57. 28.100 / 57. 28.100
 libavcodec 59. 37.100 / 59. 37.100
 libavformat 59. 27.100 / 59. 27.100
 libavdevice 59. 7.100 / 59. 7.100
 libavfilter 8. 44.100 / 8. 44.100
 libswscale 6. 7.100 / 6. 7.100
 libswresample 4. 7.100 / 4. 7.100
 libpostproc 56. 6.100 / 56. 6.100
/tmp/tmpjlchcpdm.wav: Invalid data found when processing input



Backend:



@app.route("/transcribe", methods=["POST"])
def transcribe():
 # Check if audio file is present in the request
 if 'audio_file' not in request.files:
 return jsonify({"error": "No file part"}), 400
 
 audio_file = request.files.get('audio_file')

 # Check if audio_file is sent in files
 if not audio_file:
 return jsonify({"error": "`audio_file` is missing in request.files"}), 400

 # Check if the file is present
 if audio_file.filename == '':
 return jsonify({"error": "No selected file"}), 400

 # Save the file with a unique name
 filename = secure_filename(audio_file.filename)
 unique_filename = os.path.join("uploads", str(uuid.uuid4()) + '_' + filename)
 # audio_file.save(unique_filename)
 
 # Read the contents of the audio file
 contents = audio_file.read()

 max_file_size = 500 * 1024 * 1024
 if len(contents) > max_file_size:
 return jsonify({"error": "File is too large"}), 400

 # Check if the file extension suggests it's a WAV file
 if not filename.lower().endswith('.wav'):
 # Delete the file if it's not a WAV file
 os.remove(unique_filename)
 return jsonify({"error": "Only WAV files are supported"}), 400

 print(f"\033[92m{filename}\033[0m")

 # Call Celery task asynchronously
 result = transcribe_audio.delay(contents)

 return jsonify({
 "task_id": result.id,
 "status": "pending"
 })


@celery_app.task
def transcribe_audio(contents):
 # Transcribe the audio
 try:
 # Create a temporary file to save the audio data
 with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as temp_audio:
 temp_path = temp_audio.name
 temp_audio.write(contents)

 print(f"\033[92mFile temporary path: {temp_path}\033[0m")
 transcribe_start_time = time.time()

 # Transcribe the audio
 transcription = transcribe_with_whisper(temp_path)
 
 transcribe_end_time = time.time()
 print(f"\033[92mTranscripted text: {transcription}\033[0m")

 return transcription, transcribe_end_time - transcribe_start_time

 except Exception as e:
 print(f"\033[92mError: {e}\033[0m")
 return str(e)



Frontend:


useEffect(() => {
    const init = () => {
        navigator.mediaDevices.getUserMedia({audio: true})
            .then((audioStream) => {
                const recorder = new MediaRecorder(audioStream);

                recorder.ondataavailable = e => {
                    if (e.data.size > 0) {
                        setChunks(prevChunks => [...prevChunks, e.data]);
                    }
                };

                recorder.onerror = (e) => {
                    console.log("error: ", e);
                };

                recorder.onstart = () => {
                    console.log("started");
                };

                recorder.start();

                setStream(audioStream);
                setRecorder(recorder);
            });
    };

    init();

    return () => {
        if (recorder && recorder.state === 'recording') {
            recorder.stop();
        }

        if (stream) {
            stream.getTracks().forEach(track => track.stop());
        }
    };
}, []);

useEffect(() => {
    // Send chunks of audio data to the backend at regular intervals
    const intervalId = setInterval(() => {
        if (recorder && recorder.state === 'recording') {
            recorder.requestData(); // Trigger the dataavailable event
        }
    }, 8000); // Adjust the interval as needed

    return () => {
        if (intervalId) {
            console.log("Interval cleared");
            clearInterval(intervalId);
        }
    };
}, [recorder]);

useEffect(() => {
    const processAudio = async () => {
        if (chunks.length > 0) {
            // Send the latest chunk to the server for transcription
            const latestChunk = chunks[chunks.length - 1];

            const audioBlob = new Blob([latestChunk]);
            convertBlobToAudioFile(audioBlob);
        }
    };

    void processAudio();
}, [chunks]);

const convertBlobToAudioFile = useCallback((blob: Blob) => {
    // Read the Blob into an ArrayBuffer before sending it to the backend
    const reader = new FileReader();
    reader.onload = () => {
        const audioBuffer = reader.result; // ArrayBuffer containing audio data

        // Send audioBuffer to the Flask server for further processing
        sendAudioToFlask(audioBuffer as ArrayBuffer);
    };

    reader.readAsArrayBuffer(blob);
}, []);

const sendAudioToFlask = useCallback((audioBuffer: ArrayBuffer) => {
    const formData = new FormData();
    formData.append('audio_file', new Blob([audioBuffer]), `speech_audio.wav`);

    console.log(formData.get("audio_file"));

    fetch('http://34.87.75.138:8000/transcribe', {
        method: 'POST',
        body: formData
    })
        .then(response => response.json())
        .then((data: { task_id: string, status: string }) => {
            pendingTaskIdsRef.current.push(data.task_id);
        })
        .catch(error => {
            console.error('Error sending audio to Flask server:', error);
        });
}, []);



I was trying to pass the audio from the frontend to the Whisper model, which runs in the Flask app.