
Media (1)
- La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (53)
- Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site and around MediaSPIP in general aims to avoid reference to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
- Les images
15 May 2013
- Mediabox : ouvrir les images dans l’espace maximal pour l’utilisateur
8 February 2011
Image display is constrained by the width allowed by the site design (which depends on the theme in use), so images are shown at a reduced size. To take advantage of all the space available on the user's screen, a feature can be added that displays the image in a multimedia box appearing above the rest of the content.
To do this, the "Mediabox" plugin must be installed.
Configuration of the multimedia box
Dès (...)
On other sites (7695)
- How to convert mp3 to wav from S3 using python lambda
6 November 2020, by SSen
I am trying to create a Python lambda that uploads an mp3 file into an S3 bucket, converts it to wav, and then transcribes the wav to text using speech_recognition.


Currently, I use
os.system('ffmpeg -i {} -acodec pcm_s16le -ar 16000 {}.wav'.format(filename, wav_filename))
to convert the mp3 to wav locally, and it writes the converted file to the same path as the Python file. But it does not work in Lambda when I fetch the mp3 from S3 and try to save the converted wav file inside Lambda.

Here is the current code:


import os
import speech_recognition as sr
import json
import boto3

r = sr.Recognizer()

def lambda_handler(event, context):

    s3 = boto3.resource('s3')

    # assuming audio-mp3-format.mp3 already exists in S3
    obj_audio_file = s3.Object('s3-bucket-audio-files', 'audio-mp3-format.mp3')
    body = obj_audio_file.get()['Body'].read()

    # write the downloaded bytes to a local file so ffmpeg has a path to read from
    mp3_filename = 'audio-mp3-format.mp3'
    with open(mp3_filename, 'wb') as f:
        f.write(body)

    wav_filename = 'audio-wav-format'

    os.system('ffmpeg -i {} -acodec pcm_s16le -ar 16000 {}.wav'.format(mp3_filename, wav_filename))
    audio_file = sr.AudioFile('audio-wav-format.wav') # new wav file created by ffmpeg

    with audio_file as source:
        r.adjust_for_ambient_noise(source)
        audio = r.record(source)

    print(r.recognize_google(audio))



When I run the lambda, I get the following error:
sh: ffmpeg: command not found

Could someone please help me with how to fix these issues?
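One likely cause of the "command not found" error is that the AWS Lambda Python runtime does not ship with an ffmpeg binary, so it has to be bundled with the function, typically as a Lambda layer. The sketch below is a minimal illustration under that assumption; the layer path /opt/bin/ffmpeg, the /tmp working directory, and the use of boto3's download_file instead of the resource-based read are illustrative choices, not part of the original question.

import subprocess
import boto3
import speech_recognition as sr

FFMPEG = '/opt/bin/ffmpeg'  # assumption: a static ffmpeg binary shipped in a Lambda layer
r = sr.Recognizer()

def lambda_handler(event, context):
    s3 = boto3.client('s3')

    # /tmp is the only writable path inside a Lambda execution environment
    mp3_path = '/tmp/audio-mp3-format.mp3'
    wav_path = '/tmp/audio-wav-format.wav'
    s3.download_file('s3-bucket-audio-files', 'audio-mp3-format.mp3', mp3_path)

    # convert to 16 kHz, 16-bit PCM WAV so speech_recognition can read it
    subprocess.run([FFMPEG, '-y', '-i', mp3_path,
                    '-acodec', 'pcm_s16le', '-ar', '16000', wav_path],
                   check=True)

    with sr.AudioFile(wav_path) as source:
        r.adjust_for_ambient_noise(source)
        audio = r.record(source)

    return {'transcript': r.recognize_google(audio)}

Calling the binary through subprocess.run with check=True raises a clear exception if ffmpeg is still missing or the conversion fails, rather than the return code from os.system being silently ignored.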


- tests/lavf-regression: fix gbrp10 dpx test on big endian
13 May 2013, by Paul B Mahol
- ffmpeg: playing media files does not release processor after media ends?
2 September 2017, by Blake Senftner
I have a commercial C++ application which uses FFMPEG’s libav series of dlls to play media in a Windows application. I basically started with the dranger tutorial about two years ago, and created a library that can play back USB cameras, IP camera / online streams, and media files on disk. (http://dranger.com/ffmpeg/)
My question is directed at anyone who has created their own similar library:
I recently noticed after playing a video file from disk (as opposed to a live stream from USB or IP source), my 8 core i7 workstation will show 28-29% CPU usage after a media file has ended. My application can play an unlimited number of videos, and each "virtual video panel" (not a window, just a "virtual tab" created using wxWidgets that holds an OpenGL context that I use to glDrawPixels() to the visible app panel) will play any of the three media types fine (USB, IP stream or media file) and when I stop a USB or IP stream my application’s CPU usage drops to zero. But when I "stop" a media file playing or the media file ends on its own the CPU usage does not drop - until the application quits.
Three media files playing will take my application to 80-83% CPU, and it never drops. UNLESS I reuse that same "virtual video panel" to play a USB or IP stream. If I stop those streams, CPU usage is released.
MP4 (h264) video files exhibit this "holding a processor" problem.
MP4 (mpeg2) files do not.
MP4 (h265) files do not.
MPG (mpeg1) files do not.
ASF (MS MPEG-4 Video v3) files do not.
MKV (vp8) files do not.
MOV (h265) files do not, nor do MOV (h264) files.
FLV (Sorenson) files do not, nor do FLV (h264) files.
So it is not just the h264 codec.
Anyone know what is going on, and how I tell libav to release CPU usage when a media file is no longer playing?