
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
-
Carte de Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (109)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)
-
Multilang: improve the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
After it is activated, MediaSPIP init automatically sets up a preconfiguration so that the new feature is operational right away. It is therefore not necessary to go through a configuration step for this.
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (7934)
-
Show progress bar of an ffmpeg video conversion
13 June 2022, by stackexchange.com-upvm25mz
I'm trying to print a progress bar while executing ffmpeg, but I'm having trouble getting the total number of frames and the current frame being processed. The error I get is AttributeError: 'NoneType' object has no attribute 'groups'. I've copied the function from this program, and for some reason it works there but not here, even though I haven't changed that part.



# Assumed imports (not shown in the question): pexpect under the alias used
# below, ffmpeg-python for ffmpeg.Error, plus the stdlib modules the code needs.
# BAR_FMT and COUNTER_FMT are constants defined elsewhere in the program.
import os
import re
import pexpect as expect
import ffmpeg

pattern_duration = re.compile(
    'duration[ \t\r]?:[ \t\r]?(.+?),[ \t\r]?start', re.IGNORECASE)
pattern_progress = re.compile('time=(.+?)[ \t\r]?bitrate', re.IGNORECASE)

def execute_ffmpeg(self, manager, cmd):
    proc = expect.spawn(cmd, encoding='utf-8')
    self.update_progress_bar(proc, manager)
    self.raise_ffmpeg_error(proc)

def update_progress_bar(self, proc, manager):
    total = self.get_total_frames(proc)
    cont = 0
    pbar = self.initialize_progress_bar(manager)
    try:
        proc.expect(pattern_duration)
        while True:
            progress = self.get_current_frame(proc)
            percent = progress / total * 100
            pbar.update(percent - cont)
            cont = percent
    except expect.EOF:
        pass
    finally:
        if pbar is not None:
            pbar.close()

def raise_ffmpeg_error(self, proc):
    proc.expect(expect.EOF)
    res = proc.before
    res += proc.read()
    exitstatus = proc.wait()
    if exitstatus:
        raise ffmpeg.Error('ffmpeg', '', res)

def initialize_progress_bar(self, manager):
    # enlighten-style counter (manager is assumed to be an enlighten Manager)
    pbar = manager.counter(
        total=100,
        desc=self.path.rsplit(os.path.sep, 1)[-1],
        unit='%',
        bar_format=BAR_FMT,
        counter_format=COUNTER_FMT,
        leave=False
    )
    return pbar

def get_total_frames(self, proc):
    # turn the captured "HH:MM:SS.ms" timestamp into seconds
    return sum(map(lambda x: float(x[1]) * 60 ** x[0],
                   enumerate(reversed(proc.match.groups()[0].strip().split(':')))))

def get_current_frame(self, proc):
    proc.expect(pattern_progress)
    return sum(map(lambda x: float(x[1]) * 60 ** x[0],
                   enumerate(reversed(proc.match.groups()[0].strip().split(':')))))
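
For what it's worth, the two regexes do match stderr lines of the shape ffmpeg normally prints; a minimal standalone check (the sample stderr lines below are assumed for illustration, not taken from the question):

import re

pattern_duration = re.compile(
    'duration[ \t\r]?:[ \t\r]?(.+?),[ \t\r]?start', re.IGNORECASE)
pattern_progress = re.compile('time=(.+?)[ \t\r]?bitrate', re.IGNORECASE)

# Sample lines in the shape ffmpeg writes to stderr (assumed):
duration_line = '  Duration: 00:01:30.05, start: 0.000000, bitrate: 128 kb/s'
progress_line = 'size=     256kB time=00:00:45.02 bitrate=  46.6kbits/s speed=1.5x'

stamp = pattern_duration.search(duration_line).group(1)
print(stamp)           # 00:01:30.05

# The same HH:MM:SS.ms -> seconds conversion the question's code uses:
seconds = sum(float(part) * 60 ** i
              for i, part in enumerate(reversed(stamp.split(':'))))
print(seconds)         # 90.05

print(pattern_progress.search(progress_line).group(1))   # 00:00:45.02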



-
I tried to play audio on an Alexa skill from my S3 bucket; the test tab shows a response, but I can't hear any sound
19 April 2022, by Siti Mayna
So I tried to play the audio on an Alexa skill from my S3 bucket. From the test tab, it shows a response, but in fact I can't hear any sound. Another fact is that I tried to use the sample audio from https://developer.amazon.com/en-US/docs/alexa/custom-skills/ask-soundlibrary.html and it worked, so why won't it work when it comes from my own S3 bucket?


Notes:

- I've tried testing the skill using my mobile phone as well.
- I've tried encoding the audio using FFmpeg.
- I've tried using Jovo to convert the audio: https://v3.jovo.tech/audio-converter
- I don't know how to fix this error.
- There is no error message on CloudWatch.

Assumption: there is some problem with the audio resources, or something more needs to be set up to play audio from an S3 bucket, since the sample audio works.


Steps to reproduce:

- Build the interaction model.

- Encode the audio to make it Alexa-skill friendly (fulfill the requirements, like sample rate, etc.). I used and tried all of these (a quick ffprobe check is sketched after these steps):

  A: ffmpeg -i -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 -write_xing 0

  B: ffmpeg -i -ac 2 -codec:a libmp3lame -b:a 48k -ar 24000 -write_xing 0

  C: ffmpeg -y -i input.mp3 -ar 16000 -ab 48k -codec:a libmp3lame -ac 1 output.mp3
- Upload the audio resources to the S3 bucket. The audio samples are in S3 storage, but none of them produce any sound.

- Use the link and insert it into APLA.json:
{
  "type": "APLA",
  "version": "0.91",
  "description": "Simple document that generates speech",
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "type": "Sequencer",
    "items": [
      {
        "type": "Audio",
        "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
      }
    ]
  }
}




Note: I changed the link source based on the audio I tried.

- The intent in lambda_function.py:
# Assumed imports from the standard ask-sdk skill template (not shown in the question):
import json
import logging
from typing import Any, Dict

import ask_sdk_core.utils as ask_utils
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model.response import Response
from ask_sdk_model.interfaces.alexa.presentation.apla import RenderDocumentDirective

logger = logging.getLogger(__name__)

def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the APL JSON document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)

class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In LaunchRequestHandler")

        speak_output = "Hello World!"
        # .ask("add a reprompt if you want to keep the session open for the user to respond")

        return (
            handler_input.response_builder
            #.speak(speak_output)
            .add_directive(
                RenderDocumentDirective(
                    token="pagerToken",
                    document=_load_apl_document("APLA.json"),
                    datasources={}
                )
            )
            .response
        )
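
For context, the question doesn't show how this handler is wired up; in the standard ask-sdk skill template it would be registered roughly like this (a sketch, assuming the usual SkillBuilder setup):

from ask_sdk_core.skill_builder import SkillBuilder

sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())

# AWS Lambda entry point ("lambda_handler" is the handler name configured in Lambda)
lambda_handler = sb.lambda_handler()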





- Deploy.






- Test it.
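
As mentioned in the encoding step above, a quick way to double-check the encoded files before uploading is to inspect them with ffprobe; a sketch (output.mp3 here stands for whichever encoded file is being checked):

ffprobe -v error -show_entries stream=codec_name,sample_rate,channels,bit_rate -of default=noprint_wrappers=1 output.mp3

This prints the codec, sample rate, channel count, and bit rate actually written, which can be compared against what the ffmpeg commands above were supposed to produce.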






The result of the test on my end:

(screenshot: the response for testing)

The JSON response:


{
  "body": {
    "version": "1.0",
    "response": {
      "directives": [
        {
          "type": "Alexa.Presentation.APLA.RenderDocument",
          "token": "pagerToken",
          "document": {
            "type": "APLA",
            "version": "0.91",
            "description": "Simple document that generates speech",
            "mainTemplate": {
              "parameters": [
                "payload"
              ],
              "type": "Sequencer",
              "items": [
                {
                  "type": "Audio",
                  "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
                }
              ]
            }
          },
          "datasources": {}
        }
      ],
      "type": "_DEFAULT_RESPONSE"
    },
    "sessionAttributes": {},
    "userAgent": "ask-python/1.16.1 Python/3.7.12"
  }
}





In my CloudWatch logs:

(screenshot: CloudWatch)




-
avdevice/dshow: list_devices: show media type(s) per device
21 December 2021, by Diederick Niehorster

avdevice/dshow: list_devices: show media type(s) per device

The list_devices option of dshow didn't indicate whether a specific device provides audio or video output. This patch iterates through all media formats of all pins exposed by the device to see what types it provides for capture, and prints this to the console for each device. Importantly, this now allows finding devices that provide both audio and video, as well as devices that provide neither.

Signed-off-by: Diederick Niehorster <dcnieho@gmail.com>
Reviewed-by: Roger Pack <rogerdpack2@gmail.com>
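
For context, this affects the device list printed by dshow's documented enumeration command (the exact per-device annotation added by the patch isn't quoted in the commit message):

ffmpeg -list_devices true -f dshow -i dummy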