
Other articles (72)
-
Contribute to documentation
13 April 2011. Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
To contribute, register on the project users' mailing (...) -
MediaSPIP in private mode (Intranet)
17 September 2013. Starting with version 0.3, a MediaSPIP channel can be made private, blocked to anyone not logged in, thanks to the "Intranet/extranet" plugin.
When enabled, the Intranet/extranet plugin blocks access to the channel for any unidentified visitor, preventing them from reaching the content by systematically redirecting them to the login form.
This system can be particularly useful for certain uses, such as: a workshop with children whose content must not (...) -
Managing creation and editing rights for objects
8 February 2011. By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;
On other sites (10292)
-
I tried to play the audio in an Alexa skill from my S3 bucket; the test tab shows a response, but in fact I can't hear any sound
19 April 2022, by Siti Mayna. So I tried to play the audio in my Alexa skill from my S3 bucket. From the test tab, it shows a response, but in fact I can't hear any sound. Another fact is that I tried to use the sample audio from https://developer.amazon.com/en-US/docs/alexa/custom-skills/ask-soundlibrary.html and it worked, so why won't it work when it comes from my own S3 bucket?


Notes:

- I've tried to test the skill using my mobile phone as well.
- I've tried to encode the audio using FFmpeg.
- I've tried to use Jovo to convert the audio: https://v3.jovo.tech/audio-converter
- I don't know how to fix this error.
- There is no error message on CloudWatch.

Assumption: there is some problem with the audio resources, or something more needs to be set up to play audio from an S3 bucket, since the sample audio works.


Steps to reproduce:

1. Build the interaction model.

2. Encode the audio to make it Alexa-skill friendly (fulfilling the requirements, such as sample rate). I used and tried all of these:




A:

ffmpeg -i <input-file> -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 -write_xing 0 <output-file>

B:

ffmpeg -i <input-file> -ac 2 -codec:a libmp3lame -b:a 48k -ar 24000 -write_xing 0 <output-file>

C:

ffmpeg -y -i input.mp3 -ar 16000 -ab 48k -codec:a libmp3lame -ac 1 output.mp3
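
Not part of the original question, but a quick way to sanity-check what the encoder actually produced before uploading is ffprobe; a minimal Python sketch (assuming ffprobe is on the PATH; the file name is a placeholder):

import json
import subprocess

def probe(path):
    # Ask ffprobe for the first audio stream's codec, sample rate and bit rate as JSON.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "a:0",
         "-show_entries", "stream=codec_name,sample_rate,bit_rate",
         "-of", "json", path],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)["streams"][0]

print(probe("output.mp3"))  # expect codec_name "mp3" and the sample rate/bit rate you asked for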





3. Upload the audio resources to the S3 bucket. The audio samples are in S3 storage, but none of them produce any sound.
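
One thing the steps above don't show is how the upload is done. Unlike Amazon's hosted sample, an S3 object is private by default, and a private URL may still render the directive in the simulator while producing no sound, so it is worth uploading with an explicit content type and public-read access. A hedged boto3 sketch (bucket and key names are placeholders, and it assumes the bucket permits ACLs):

import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "output.mp3",                     # local file produced by ffmpeg
    "my-skill-media-bucket",          # placeholder bucket name
    "Media/output.mp3",               # placeholder object key
    ExtraArgs={
        "ContentType": "audio/mpeg",  # serve with the right MIME type
        "ACL": "public-read",         # Alexa fetches the URL anonymously
    },
)

Alternatively, the same effect can come from a bucket policy that allows s3:GetObject on the relevant prefix.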






4. Use the link and insert it into APLA.json:





{
    "type": "APLA",
    "version": "0.91",
    "description": "Simple document that generates speech",
    "mainTemplate": {
        "parameters": [
            "payload"
        ],
        "type": "Sequencer",
        "items": [
            {
                "type": "Audio",
                "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
            }
        ]
    }
}
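
Since the Amazon-hosted sample plays and this file does not, a quick check (not in the original question) is whether the source URL is fetchable anonymously at all; a standard-library sketch:

import urllib.request

url = "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
req = urllib.request.Request(url, method="HEAD")
with urllib.request.urlopen(req) as resp:
    # 200 with an audio/mpeg Content-Type suggests Alexa can fetch it too.
    print(resp.status, resp.headers.get("Content-Type"))

Note that urlopen raises urllib.error.HTTPError on a 403, so a permissions problem shows up as an exception rather than a silent failure.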




Note: I change the link source based on the audio that I tried.




5. The handler in lambda_function.py:




import json
import logging
from typing import Any, Dict

import ask_sdk_core.utils as ask_utils
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response
from ask_sdk_model.interfaces.alexa.presentation.apla import RenderDocumentDirective

logger = logging.getLogger(__name__)


def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the APLA JSON document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)


class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In LaunchRequestHandler")

        speak_output = "Hello World!"
        # .ask("add a reprompt if you want to keep the session open for the user to respond")

        return (
            handler_input.response_builder
            # .speak(speak_output)
            .add_directive(
                RenderDocumentDirective(
                    token="pagerToken",
                    document=_load_apl_document("APLA.json"),
                    datasources={}
                )
            )
            .response
        )
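
The question doesn't show the rest of lambda_function.py, but for the handler to run it has to be registered with the skill's entry point; in the ASK SDK for Python that part usually looks like this (a sketch of the standard setup, not code from the question):

from ask_sdk_core.skill_builder import SkillBuilder

sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())

# The AWS Lambda handler configured for the function.
lambda_handler = sb.lambda_handler()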





6. Deploy.






7. Test it.






The result of the test on my end (screenshot omitted); the JSON response:


{
    "body": {
        "version": "1.0",
        "response": {
            "directives": [
                {
                    "type": "Alexa.Presentation.APLA.RenderDocument",
                    "token": "pagerToken",
                    "document": {
                        "type": "APLA",
                        "version": "0.91",
                        "description": "Simple document that generates speech",
                        "mainTemplate": {
                            "parameters": [
                                "payload"
                            ],
                            "type": "Sequencer",
                            "items": [
                                {
                                    "type": "Audio",
                                    "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
                                }
                            ]
                        }
                    },
                    "datasources": {}
                }
            ],
            "type": "_DEFAULT_RESPONSE"
        },
        "sessionAttributes": {},
        "userAgent": "ask-python/1.16.1 Python/3.7.12"
    }
}





On my CloudWatch (screenshot omitted) there are no error messages.




-
How to show a video while converting it from one format to another using ffmpeg in PHP?
15 April 2014, by mani. Presently I am writing an application in PHP which can convert one format to another using ffmpeg, for example AVI to MP4.
Here my client wants, while converting, to see the ongoing conversion of the video file, as shown in standard video-converting software. Is it possible in PHP, or are there any options available in ffmpeg?
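
Not part of the original question, but the usual mechanism behind such progress displays is ffmpeg's -progress option, which writes machine-readable key=value lines that an application can poll. A minimal sketch in Python (file names are placeholders; the same idea works from PHP with proc_open):

import subprocess

proc = subprocess.Popen(
    ["ffmpeg", "-nostats", "-loglevel", "error",
     "-progress", "pipe:1",          # emit key=value progress lines on stdout
     "-y", "-i", "input.avi", "output.mp4"],
    stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    key, _, value = line.strip().partition("=")
    if key == "out_time_ms" and value.isdigit():
        # despite the name, out_time_ms is reported in microseconds
        print(f"converted up to {int(value) / 1_000_000:.1f}s of output")
    elif key == "progress" and value == "end":
        break
proc.wait()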
-
Fragment shader does not show any colour when compiled with vs2013
11 June 2015, by 5mayfive. When compiled with VS2010 the fragment shader works, but when compiled and run in VS2013 it's grey.
My fragment shader converts the YUV texture into RGB.
Below is my fragment shader code:
const char *FProgram =
"uniform sampler2D Ytex;\n"
"uniform sampler2D Utex;\n"
"uniform sampler2D Vtex;\n"
"void main(void) {\n"
" vec4 c = vec4((texture2D(Ytex, gl_TexCoord[0]).r - 16./255.) * 1.164);\n"
" vec4 U = vec4(texture2D(Utex, gl_TexCoord[0]).r - 128./255.);\n"
" vec4 V = vec4(texture2D(Vtex, gl_TexCoord[0]).r - 128./255.);\n"
" c += V * vec4(1.596, -0.813, 0, 0);\n"
" c += U * vec4(0, -0.392, 2.017, 0);\n"
" c.a = 1.0;\n"
" gl_FragColor = c;\n"
"}\n";
glClearColor(0, 0, 0, 0);
PHandle = glCreateProgram();
FSHandle = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(FSHandle, 1, &FProgram, NULL);
glCompileShader(FSHandle);
/* Checking glGetShaderiv(FSHandle, GL_COMPILE_STATUS, ...) here would show
   whether the VS2013 machine's driver rejects the shader. */
glAttachShader(PHandle, FSHandle);
glLinkProgram(PHandle);
glUseProgram(PHandle);
glDeleteProgram(PHandle);
glDeleteShader(FSHandle); /* was glDeleteProgram(FSHandle); shader objects are deleted with glDeleteShader */

This is my texture code. I receive linesize and YUV frame data from ffmpeg and make them into textures. Everything works fine on the VS2010 computer, but when compiled and run on the VS2013 computer it is grey (black and white), no colour:
/* Select texture unit 1 as the active unit and bind the U texture. */
glPixelStorei(GL_UNPACK_ROW_LENGTH, linesize1);
glActiveTexture(GL_TEXTURE1);
i = glGetUniformLocation(PHandle, "Utex");
glUniform1i(i, 1); /* Bind Utex to texture unit 1 */
glBindTexture(GL_TEXTURE_2D, 1);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width / 2, height / 2, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, frame1);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
/* Select texture unit 2 as the active unit and bind the V texture. */
glPixelStorei(GL_UNPACK_ROW_LENGTH, linesize2);
glActiveTexture(GL_TEXTURE2);
i = glGetUniformLocation(PHandle, "Vtex");
glUniform1i(i, 2); /* Bind Vtext to texture unit 2 */
glBindTexture(GL_TEXTURE_2D, 2);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width / 2, height / 2, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, frame2);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
/* Select texture unit 0 as the active unit and bind the Y texture. */
glPixelStorei(GL_UNPACK_ROW_LENGTH, linesize0);
glActiveTexture(GL_TEXTURE0);
i = glGetUniformLocation(PHandle, "Ytex");
glUniform1i(i, 0);
glBindTexture(GL_TEXTURE_2D, 0);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, frame0);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glClear(GL_COLOR_BUFFER_BIT);
/* Draw image (again and again). */
glBegin(GL_QUADS);
glTexCoord2i(0, 0);
glVertex2i(-w / 2, h / 2);
glTexCoord2i(1, 0);
glVertex2i(w / 2, h / 2);
glTexCoord2i(1, 1);
glVertex2i(w / 2, -h / 2);
glTexCoord2i(0, 1);
glVertex2i(-w / 2, -h / 2);
glEnd();

Need guidance here, thanks in advance!