
Other articles (77)
-
No talk of "market", "cloud", etc.
10 April 2011
The vocabulary used on this site tries to avoid any reference to the fashions that flourish so freely on the web 2.0 and in the companies that live off it.
You are therefore invited to avoid using terms such as "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing of creations on the Internet and lets authors keep as much autonomy as possible.
No "Gold or Premium contract" is therefore planned, no (...) -
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add yours using the form at the bottom of the page. -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
On other sites (10152)
-
Android mp4 remove rotation and rotate the video stream
6 November 2018, by Mathijs Segers
I'm having some issues trying to remove the rotation value of Android videos.
For some reason, conversion tools in the cloud cannot seem to handle Android's rotation value correctly. For example, I have a portrait video recorded at 1080x1920 (the file's headers tell me it is actually 1920x1080 with rotation: 90). So now I'm trying to convert these videos to an actual 1080x1920 format when they have this rotation value, but I'm kind of stuck, probably using the wrong search terms on SO and Google.
In the hope of making things clear: I have actually added some ffmpeg libs to Android following these steps, of course with some changes to parameters. I'm building this on a Mac and this all works fine now.
http://www.roman10.net/how-to-build-ffmpeg-with-ndk-r9/
Now the following: I have no real clue how these libs work, how to use them, which ones I actually need, or whether I even need them at all.
Is there anyone who can point me in the right direction for solving my issue? Basically, the videos are on my Android filesystem and I can access them fully; before uploading I want to check the values and remove the rotation and rotate the videos if needed.
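Not part of the original question, but as a rough command-line sketch of that last step (file names are placeholders; depending on the ffmpeg version, re-encoding may already apply the rotation automatically, in which case the transpose filter should be left out):

# Read the rotation tag stored in the container
ffprobe -v error -select_streams v:0 -show_entries stream_tags=rotate -of default=nw=1:nk=1 input.mp4

# Re-encode while rotating the frames 90 degrees clockwise, then clear the tag
# so players no longer apply a second rotation on top of it
ffmpeg -i input.mp4 -vf "transpose=1" -metadata:s:v:0 rotate=0 -c:a copy output.mp4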
-
I tried to play audio on my Alexa skill from my S3 bucket; in the test tab it shows, but in fact I can't hear any sound
19 April 2022, by Siti Mayna
So I tried to play audio on my Alexa skill from my S3 bucket. From the test tab it shows, but in fact I can't hear any sound. Another fact is that I tried to use the sample audio from https://developer.amazon.com/en-US/docs/alexa/custom-skills/ask-soundlibrary.html and it worked, so why won't it work when it comes from my own S3 bucket?


Notes:

I've tried to test the skill using my mobile phone as well.
I've tried to encode the audio using FFmpeg.
I've tried to use Jovo to convert the audio: https://v3.jovo.tech/audio-converter
I don't know how to fix this error.
There is no error message on CloudWatch.

Assumptions:
There is some problem related to the audio resources, or there is more setup needed to play audio from an S3 bucket, since the sample audio works.
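One quick way to test that assumption (not part of the original post) is to check whether the S3 object is publicly reachable over HTTPS at all; the URL below is the one used later in the APLA document:

curl -I https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3
# An HTTP 200 with a Content-Type of audio/mpeg suggests the object itself is reachable;
# an HTTP 403 points at bucket or object permissions instead.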


Steps to reproduce:

Build the interaction model.

Encode the audio to make it Alexa-skill friendly (fulfilling the requirements such as sample rate, etc.). I used and tried all of these (a verification sketch follows the commands):

A:
ffmpeg -i -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 -write_xing 0

B:
ffmpeg -i -ac 2 -codec:a libmp3lame -b:a 48k -ar 24000 -write_xing 0

C:
ffmpeg -y -i input.mp3 -ar 16000 -ab 48k -codec:a libmp3lame -ac 1 output.mp3
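As a sanity check (not in the original post), ffprobe can confirm that the encoded file actually meets those constraints; output.mp3 stands for whichever file the commands above produced:

ffprobe -v error -select_streams a:0 -show_entries stream=codec_name,sample_rate,channels,bit_rate -of default=nw=1 output.mp3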





Upload the audio resources to the S3 bucket.
The audio samples are on S3 storage, but none of them produce any sound.
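For completeness, a typical upload step looks like the following sketch; the bucket name and key are placeholders, and a public-read ACL is only one of several ways to make the object readable by Alexa:

aws s3 cp output.mp3 s3://YOUR-BUCKET-NAME/Media/output.mp3 --acl public-read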






Use the link and insert it into APLA.json:

{
  "type": "APLA",
  "version": "0.91",
  "description": "Simple document that generates speech",
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "type": "Sequencer",
    "items": [
      {
        "type": "Audio",
        "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
      }
    ]
  }
}




Note: I changed the link source based on which audio file I was testing.




The intent handler in lambda_function.py (the imports at the top were not shown in the original excerpt and are reconstructed here from the standard skill template):




# Imports assumed from the standard Alexa Skills Kit Python template
# (not shown in the original excerpt).
import json
import logging
from typing import Any, Dict

import ask_sdk_core.utils as ask_utils
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response
from ask_sdk_model.interfaces.alexa.presentation.apla import RenderDocumentDirective

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)


def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the APLA JSON document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)


class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In LaunchRequestHandler")

        speak_output = "Hello World!"
        # .ask("add a reprompt if you want to keep the session open for the user to respond")

        # Only the APLA RenderDocument directive is returned; .speak() stays
        # commented out so the audio comes solely from the APLA document.
        return (
            handler_input.response_builder
            # .speak(speak_output)
            .add_directive(
                RenderDocumentDirective(
                    token="pagerToken",
                    document=_load_apl_document("APLA.json"),
                    datasources={}
                )
            )
            .response
        )





Deploy






Test it






The result of the test on my end (the screenshot "The response for testing" is omitted here):

The JSON response:


{
  "body": {
    "version": "1.0",
    "response": {
      "directives": [
        {
          "type": "Alexa.Presentation.APLA.RenderDocument",
          "token": "pagerToken",
          "document": {
            "type": "APLA",
            "version": "0.91",
            "description": "Simple document that generates speech",
            "mainTemplate": {
              "parameters": [
                "payload"
              ],
              "type": "Sequencer",
              "items": [
                {
                  "type": "Audio",
                  "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
                }
              ]
            }
          },
          "datasources": {}
        }
      ],
      "type": "_DEFAULT_RESPONSE"
    },
    "sessionAttributes": {},
    "userAgent": "ask-python/1.16.1 Python/3.7.12"
  }
}





On my CloudWatch there is no error message (the "Cloud Watch" screenshot is omitted here).
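Not from the original post, but for anyone reproducing this: with AWS CLI v2 the skill's Lambda logs can be tailed directly; the function name below is a placeholder:

aws logs tail /aws/lambda/YOUR-SKILL-FUNCTION-NAME --follow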




-
How to convert a .m4a file to a .wav file on Android using React Native?
23 October 2022, by aurelien_morel
I am trying to convert a .m4a file that I record using expo-audio into a .wav file. The goal is then to use it as a blob and send it to Google Cloud Storage.
I tried to do this using ffmpeg-kit-react-native:


const uri = recording.getURI();
console.log(uri);

if (Platform.OS === 'android') {
  FFmpegKit.execute(`-i ${uri} temp.wav`).then(async (session) => {
    // const returnCode = await session.getReturnCode();
    uri = 'temp.wav';
  });
}

const response = await fetch(uri);
const blob = await response.blob();



but I have no success; I get the error:

TypeError: null is not an object (evaluating 'FFmpegKitReactNativeModule.ffmpegSession')


uri has this form:

file:///data/user/0/host.exp.exponent/cache/ExperienceData/%2540aamorel%252Fvoki/Audio/recording-4038abed-f264-48ca-a0cc-861268190874.m4a


I am not sure if I am using the FFmpeg toolkit correctly. Do you know how to make this work? Or is there a simpler way to do it?
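For reference (not part of the original question), the plain ffmpeg command that the execute string is meant to reproduce would look roughly like this; file names are placeholders, and on a device the output path would need to point to a writable location such as the app's cache directory:

ffmpeg -y -i recording.m4a output.wav
# Or force a specific layout, e.g. 16 kHz mono 16-bit PCM:
ffmpeg -y -i recording.m4a -ar 16000 -ac 1 -c:a pcm_s16le output.wav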