
Other articles (55)
-
(De)activating features (plugins)
18 February 2011
To manage the addition and removal of extra features (plugins), MediaSPIP has relied on SVP since version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To get there, simply go to the configuration area and then to the "Gestion des plugins" (plugin management) page.
MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work seamlessly with each (...)
-
Installation in farm mode
4 February 2011
Farm mode lets you host several MediaSPIP sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with SPIP's internals, unlike the standalone version, which requires no real specific knowledge, since SPIP's usual private area is no longer used.
First of all, you must have installed the same files as the installation (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a particular theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows media playback on the major mobile platforms with the above (...)
On other sites (11178)
-
Convert a file from mp4 to a file that the HTML video tag can display
26 June 2019, by Trying_To_Understand
I have an mp4 file and I want to display it in HTML. The problem is that it won't play, not only in HTML but also in my player. In VLC I can watch the video, but there is no sound. Maybe the file is corrupted?
This is the output when I run ffmpeg -i my_file.mp4:
[h264 @ 000001b4afc61880] decode_slice_header error
[h264 @ 000001b4afc61880] no frame!
[h264 @ 000001b4afc61880] non-existing PPS 0 referenced
Last message repeated 1 times
[h264 @ 000001b4afc61880] decode_slice_header error
[h264 @ 000001b4afc61880] no frame!
Input #0, mpeg, from '150_2.mp4':
  Duration: 00:50:51.75, start: 13182.386222, bitrate: 983 kb/s
    Stream #0:0[0x1e0]: Video: h264 (High), yuv420p(progressive), 1280x720, 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:1[0x1c0]: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s
At least one output file must be specified.
Can I convert this with ffmpeg into a video with good video and audio quality?
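
Going by the probe output above, the container is actually MPEG-PS rather than MP4 (despite the .mp4 extension), and browsers will not play pcm_alaw audio inside an HTML5 video file. A minimal sketch of one possible conversion, keeping the question's input name and assuming the H.264 stream is otherwise intact (output.mp4 is an assumed name):

# Copy the H.264 video as-is and transcode the a-law audio to AAC, which
# browsers accept in mp4; +faststart moves the index to the front of the
# file so playback can begin before the whole file downloads, and
# -fflags +genpts regenerates timestamps, which the decode errors above
# suggest may help.
ffmpeg -fflags +genpts -i my_file.mp4 -c:v copy -c:a aac -b:a 128k \
       -movflags +faststart output.mp4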
-
Wrong duration of the resulting video when using ffmpeg filter_complex concat [closed]
9 December 2023, by Igor Chubin
I am trying to do the trivial task of concatenating several segments of the original video/audio into a new file (and dropping the rest):

ffmpeg -i bdt.mkv -filter_complex '
  [0:v]trim=start=10.0:end=15.0,setpts=PTS-STARTPTS[0v];
  [0:a]atrim=start=10.0:end=15.0,asetpts=PTS-STARTPTS[0a];
  [0:v]trim=start=65.0:end=70.0,setpts=PTS-STARTPTS[1v];
  [0:a]atrim=start=65.0:end=70.0,asetpts=PTS-STARTPTS[1a];
  [0v][0a][1v][1a]concat=n=2:v=1:a=1[outv][outa]' -map [outv] -map [outa] out.mp4

When I try to watch out.mp4 afterwards, it has the original duration of bdt.mkv (70 minutes) and not 10 seconds, as I would expect.

What am I doing wrong?


Update 1 (exactly as Scott Codez suggested):

The problem is related to the container format. If the output format is mkv too, the problem disappears. But when I use mp4, which I need, the problem persists.
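
A possible workaround consistent with that observation (a sketch on my part, not a verified fix): write the concat result to mkv first, then remux the streams into the mp4 container without re-encoding:

# Assumes out.mkv was produced by the concat command above with an .mkv
# extension; stream copy rewrites the container around the same data.
ffmpeg -i out.mkv -c copy -movflags +faststart out.mp4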

-
I tried to play audio on an Alexa skill from my S3 bucket; in the test tab it shows, but in fact I can't hear any sound
19 April 2022, by Siti Mayna
So I tried to play audio on my Alexa skill from my S3 bucket; in the test tab it shows, but in fact I can't hear any sound. Another fact is that I tried to use the sample audio from https://developer.amazon.com/en-US/docs/alexa/custom-skills/ask-soundlibrary.html and it worked, so why won't it work when the audio comes from my own S3 bucket?


Notes:
I've tried to test the skill using my mobile phone as well.
I've tried to encode the audio using FFmpeg.
I've tried to use Jovo to convert the audio: https://v3.jovo.tech/audio-converter
I don't know how to fix this error.
There is no error message on CloudWatch.

Assumptions:
Either there is some problem with the audio resources, or there is more setup required to play audio from an S3 bucket, since the sample audio works.

Steps to reproduce:

1. Build the interaction model.

2. Encode the audio to make it Alexa-skill friendly (fulfilling the requirements, such as sample rate). I used and tried all of the following; a quick verification sketch follows the three commands:

A:
ffmpeg -i input.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 -write_xing 0 output.mp3

B:
ffmpeg -i input.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 24000 -write_xing 0 output.mp3

C:
ffmpeg -y -i input.mp3 -ar 16000 -ab 48k -codec:a libmp3lame -ac 1 output.mp3
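
Whichever variant is used, it may help to verify that the encoded file actually meets Alexa's MP3 requirements (48 kbps bit rate and a supported sample rate). A quick check with ffprobe, assuming output.mp3 is the result:

# Print codec, sample rate, channel count, and bit rate of the first audio stream.
ffprobe -v error -select_streams a:0 \
        -show_entries stream=codec_name,sample_rate,channels,bit_rate \
        -of default=noprint_wrappers=1 output.mp3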
3. Upload the audio resources to the S3 bucket. The audio samples are in S3 storage, but none of them produce any sound.

4. Use the link and insert it into APLA.json:

{
    "type": "APLA",
    "version": "0.91",
    "description": "Simple document that generates speech",
    "mainTemplate": {
        "parameters": [
            "payload"
        ],
        "type": "Sequencer",
        "items": [
            {
                "type": "Audio",
                "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
            }
        ]
    }
}
Note: I change the source link based on which audio file I am testing.
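
One thing worth ruling out at this step (an assumption on my part, not something stated in the question): if the S3 object is not publicly readable, APLA playback can fail without any visible error. A header check against the URL used in the document above should return HTTP 200 and an audio content type rather than 403 Forbidden:

curl -sI "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"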
5. The intent handler in lambda_function.py:

# Imports reconstructed from the handler below (the question omits them);
# module paths follow the ASK SDK for Python, an assumption on my part.
import json
import logging
from typing import Any, Dict

import ask_sdk_core.utils as ask_utils
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model.response import Response
from ask_sdk_model.interfaces.alexa.presentation.apla import RenderDocumentDirective

logger = logging.getLogger(__name__)


def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the APL JSON document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)


class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In LaunchRequestHandler")

        speak_output = "Hello World!"
        # .ask("add a reprompt if you want to keep the session open for the user to respond")

        # Speech is commented out; the audio comes from the "source" URL
        # inside the APLA document rendered below.
        return (
            handler_input.response_builder
            # .speak(speak_output)
            .add_directive(
                RenderDocumentDirective(
                    token="pagerToken",
                    document=_load_apl_document("APLA.json"),
                    datasources={}
                )
            )
            .response
        )
6. Deploy.

7. Test it.

The result of the test on my end (screenshot: "The response for testing"):

The JSON response:


{
    "body": {
        "version": "1.0",
        "response": {
            "directives": [
                {
                    "type": "Alexa.Presentation.APLA.RenderDocument",
                    "token": "pagerToken",
                    "document": {
                        "type": "APLA",
                        "version": "0.91",
                        "description": "Simple document that generates speech",
                        "mainTemplate": {
                            "parameters": [
                                "payload"
                            ],
                            "type": "Sequencer",
                            "items": [
                                {
                                    "type": "Audio",
                                    "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
                                }
                            ]
                        }
                    },
                    "datasources": {}
                }
            ],
            "type": "_DEFAULT_RESPONSE"
        },
        "sessionAttributes": {},
        "userAgent": "ask-python/1.16.1 Python/3.7.12"
    }
}
On my CloudWatch: (screenshot of the CloudWatch logs; no error message appears)