
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (97)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature works out of the box. No separate configuration step is therefore required.
-
Managing creation and editing rights for objects
8 February 2011
By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable through the form template management; adding notes to articles; adding captions and annotations to images;
-
Uploading media and themes via FTP
31 May 2013
MediaSPIP also handles media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
From the start you will find the following directories in your FTP space: config/: the site's configuration directory; IMG/: media already processed and online on the site; local/: the website's cache directory; themes/: themes or custom style sheets; tmp/: working directory (...)
On other sites (7557)
-
Handling high volume traffic and traffic peaks with Matomo just got easier
16 April 2018, by Matomo Core Team
When you use the self-hosted version of Matomo on-premise instead of the Matomo cloud-hosted solution, you may experience traffic peaks on your Matomo server when the traffic volume on your websites increases. For example, every day at a certain time you might receive two or three times the amount of traffic that usually visits your website. This can have many negative impacts, including:
- Slow loading times for your JavaScript tracker (piwik.js), which in turn may slow down your website and give your users a poor experience. You may also see fewer page views in Matomo because, by the time the tracker has loaded, the user has already moved on to another page.
- Some tracking requests might simply be dropped because your server can no longer handle them, which results in many untracked visits and page views.
- You may need additional servers just to handle traffic peaks, which means increased server costs and extra maintenance work.
The solution
Handling traffic peaks has been possible with Matomo for years using the Queued Tracking plugin. When this feature is enabled, tracking requests are put into a queue instead of being processed immediately. A separate job then takes the requests out of the queue and processes them. This brings various benefits.
Faster tracking
It improves tracking speed on your server by a factor of 5 to 15. For example, instead of a tracking request taking 50ms, it takes only 5ms. This means your server will be able to handle many more concurrent requests than with traditional tracking and is much more likely to get through traffic peaks without any trouble.
Faster processing
When a request is queued, it still needs to be processed eventually. Because Queued Tracking can take multiple tracking requests out of the queue at once and process them in one go, processing speed increases massively as well. By default, each tracking request has to bootstrap Matomo and repeat a lot of work, which takes quite a bit of time (you'd be surprised). When requests are processed in batches, many of these things can be cached and don't have to be done over and over. As a result, your server can process tracking requests much faster and needs fewer resources overall, which in turn reduces cost and trouble.
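To illustrate the general pattern (a simplified sketch, not the plugin's actual code): incoming requests are only appended to a queue, and a separate worker later pulls them out in batches, so the expensive bootstrap work happens once per batch instead of once per request.

from collections import deque

# Illustrative sketch of the queue-and-batch pattern described above.
# All names here (track_request, bootstrap, process_queue) are hypothetical, not Matomo APIs.
queue = deque()

def track_request(raw_request):
    """Called for every incoming tracking request: just enqueue and return."""
    queue.append(raw_request)  # cheap, so the HTTP response is fast

def bootstrap():
    """Stands in for the expensive per-request setup (config, DB connection, ...)."""
    return {"db": "connection", "config": "loaded"}

def process_queue(batch_size=25):
    """Separate job: pull up to batch_size requests and process them in one go."""
    context = bootstrap()  # done once per batch instead of once per request
    batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
    for raw_request in batch:
        print("recording", raw_request, "using", context["db"])

# 100 requests arrive quickly, then the worker drains them in batches of 25.
for i in range(100):
    track_request({"url": "/page/%d" % i})
while queue:
    process_queue()

In Matomo's case the queue lives in Redis or MySQL rather than in process memory and the draining job runs separately, but the principle is the same.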
Queued Tracking is now easier to set up
In the background, Queued Tracking has been using Redis, an in-memory database. While Redis is very fast, it is not simple to set up and maintain, especially when it comes to making Redis “highly available” or scaling it. Your servers also need a lot more memory for Redis, as all queued tracking requests are stored in memory.
One click setup
We have now added support for MySQL, so you can activate Queued Tracking with a single click. What used to take hours or even weeks to set up, plus a lot of maintenance, can now be cut down to seconds. Queued Tracking simply reuses the database you have been using all along to store your visits. A side benefit is that your server won't need more memory, and queued tracking requests even survive a server reboot.
Both Redis and MySQL are now supported by Queued Tracking. If you have experience managing Redis, we still recommend it, as it is likely a bit faster. In most cases, however, the MySQL solution should work just as well.
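With the MySQL backend the queue is simply a database table, which is why it needs no extra memory and why queued requests survive a reboot. A rough sketch of what a table-backed queue looks like (an illustration only, using SQLite as a stand-in for MySQL; this is not Matomo's actual schema):

import json
import sqlite3

# Simplified illustration of a table-backed tracking queue (not Matomo's actual schema).
db = sqlite3.connect("queue_demo.db")
db.execute("CREATE TABLE IF NOT EXISTS tracking_queue (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)")

def enqueue(raw_request):
    """Enqueue = one cheap INSERT; the row is durable, so it survives a restart."""
    db.execute("INSERT INTO tracking_queue (payload) VALUES (?)", (json.dumps(raw_request),))
    db.commit()

def drain(batch_size=25):
    """Worker: read a batch of rows, process them, then delete them."""
    rows = db.execute(
        "SELECT id, payload FROM tracking_queue ORDER BY id LIMIT ?", (batch_size,)
    ).fetchall()
    for row_id, payload in rows:
        print("processing", json.loads(payload))
    if rows:
        db.execute("DELETE FROM tracking_queue WHERE id <= ?", (rows[-1][0],))
        db.commit()

enqueue({"url": "/home"})
enqueue({"url": "/pricing"})
drain()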
Further improvements
We have made various other improvements to Queued Tracking that increase performance, and you can now be notified when the number of queued tracking requests reaches a certain threshold. View the changelog for a list of all changes.
Learn more
We have set up Queued Tracking many times for high-volume traffic and traffic peaks and are amazed by the results. Often, we can even reduce the overall number of servers needed.
If this sounds like something that could be beneficial to you, we recommend you have a look at the Queued Tracking page and also check out the FAQ. You might also be interested in learning how to configure Matomo for speed.
Need help with setting up, maintaining, or scaling Matomo? Get in touch now.
-
Trimmed a video with ffmpeg; looks fine on a Mac, but audio/video are 3 seconds out of sync with Windows default player. Huh?
12 April 2021, by kaltorak
I've got a video which looks and sounds right when I play it on Windows or on a Mac.

I trimmed it with ffmpeg.

The resulting file


- plays fine on a Mac with QuickTime
- throws an error on Windows QuickTime (Error -2041: an invalid sample description was found in the movie (myfile.trimmed.mp4))
- plays with Win10's default player (Movies and TV?), but with the audio lagging nearly 3 seconds behind the video (as determined by me counting Mississippis, nothing more precise than that)








My original file:


ffprobe -hide_banner myfile.mpg
[h264 @ 00000253d2d762c0] Increasing reorder buffer to 2
[mpegts @ 00000253d2d6fe00] PES packet size mismatch
[mpegts @ 00000253d2d6fe00] Packet corrupt (stream = 1, dts = 8467425232).
[mpegts @ 00000253d2d6fe00] Could not find codec parameters for stream 2 (Unknown: none ([151][0][0][0] / 0x0097)): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #0, mpegts, from 'myfile.mpg':
 Duration: 00:30:00.63, start: 92282.982578, bitrate: 6249 kb/s
 Program 1
 Stream #0:0[0x1aab]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, progressive), 1280x720 [SAR 1:1 DAR 16:9], Closed Captions, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
 Stream #0:1[0x1abf]: Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, 5.1(side), fltp, 384 kb/s
 Stream #0:2[0x1ac1]: Unknown: none ([151][0][0][0] / 0x0097)
Unsupported codec with id 0 for input stream 2



Stuff I notice:



- The mpg container shouldn't hold h264 video. That messed me up, but remuxing to an mp4 container during the trimming step seemed to make it OK.
- The start time isn't anywhere close to zero... but I don't think there's anything wrong with that.
- The audio and video are in sync as I watch in a player, but the file contains audio starting nearly 1 second before the video. The first audio packet has pkt_pts_time=92282.982578 (matching the start reported by ffprobe, above), while the first video packet has pkt_pts_time=92283.926411








So I trim it, like so...


ffmpeg -hide_banner -ss 00:17:24 -i myfile.mpg -t 00:02:40 -c copy myfile.trimmed.mp4
[h264 @ 000002d5d73f4040] Increasing reorder buffer to 2
[mpegts @ 000002d5d73edbc0] PES packet size mismatch
[mpegts @ 000002d5d73edbc0] Packet corrupt (stream = 1, dts = 8467425232).
[mpegts @ 000002d5d73edbc0] Could not find codec parameters for stream 2 (Unknown: none ([151][0][0][0] / 0x0097)): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
[mpegts @ 000002d5d73edbc0] PES packet size mismatch
[mpegts @ 000002d5d73edbc0] Packet corrupt (stream = 1, dts = 8467425232).
Input #0, mpegts, from 'myfile.mpg':
 Duration: 00:30:00.63, start: 92282.982578, bitrate: 6249 kb/s
 Program 1
 Stream #0:0[0x1aab]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, progressive), 1280x720 [SAR 1:1 DAR 16:9], Closed Captions, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
 Stream #0:1[0x1abf]: Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, 5.1(side), fltp, 384 kb/s
 Stream #0:2[0x1ac1]: Unknown: none ([151][0][0][0] / 0x0097)
[mp4 @ 000002d5d7e90540] track 1: codec frame size is not set
Output #0, mp4, to 'myfile.trimmed.mp4':
 Metadata:
 encoder : Lavf58.45.100
 Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 59.94 fps, 59.94 tbr, 90k tbn, 90k tbc
 Stream #0:1: Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, 5.1(side), fltp, 384 kb/s
Stream mapping:
 Stream #0:0 -> #0:0 (copy)
 Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame= 9570 fps=0.0 q=-1.0 Lsize= 122587kB time=00:02:39.99 bitrate=6276.6kbits/s speed= 462x
video:114772kB audio:7545kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.220873%



ffprobe -hide_banner myfile.trimmed.mp4
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'myfile.trimmed.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.45.100
 Duration: 00:02:40.96, start: 0.000000, bitrate: 6239 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv), 1280x720 [SAR 1:1 DAR 16:9], Closed Captions, 5890 kb/s, 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc (default)
 Metadata:
 handler_name : VideoHandler
 Stream #0:1(und): Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, 5.1(side), fltp, 384 kb/s (default)
 Metadata:
 handler_name : SoundHandler
 Side data:
 audio service type: main



...and I get the file that plays fine on a Mac, but not in Windows' default player.
The start time as reported by ffprobe is 0 (above), so that must've been cleaned up by either the trimming or the remuxing. When I look at frames, the first audio packet has a pkt_pts_time=0.000000, and the first video packet has a pkt_pts_time=0.452000.
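In case it helps, here's a quick way to dump the first timestamps from each stream of both files (a small Python wrapper around ffprobe; the numbers I quoted above came from -show_frames, but the packet pts_time values tell the same story):

import json
import subprocess

def first_timestamps(path, max_packets=200):
    """Return the pts_time of the first audio and first video packet in the file."""
    out = subprocess.run(
        ["ffprobe", "-hide_banner", "-v", "error",
         "-show_entries", "packet=codec_type,pts_time",
         "-of", "json",
         "-read_intervals", "%+#" + str(max_packets),  # only look at the start of the file
         path],
        capture_output=True, text=True, check=True)
    first = {}
    for pkt in json.loads(out.stdout).get("packets", []):
        ts = pkt.get("pts_time")
        if pkt.get("codec_type") in ("audio", "video") and ts not in (None, "N/A"):
            first.setdefault(pkt["codec_type"], float(ts))
    return first

for f in ("myfile.mpg", "myfile.trimmed.mp4"):
    ts = first_timestamps(f)
    offset = ts.get("video", 0.0) - ts.get("audio", 0.0)
    print(f, ts, "video starts", round(offset, 3), "s after audio")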


Where do I go next?


-
I tried to play audio on an Alexa skill from my S3 Bucket; the test tab shows a response, but I can't hear any sound
19 April 2022, by Siti Mayna
So I tried to play audio on my Alexa skill from my S3 Bucket. In the test tab it shows up, but in fact I can't hear any sound. Another fact is that I tried the sample audio from https://developer.amazon.com/en-US/docs/alexa/custom-skills/ask-soundlibrary.html and it worked, so why won't it work when it comes from my own S3 Bucket?


Notes:

- I've tried to test the skill using my mobile phone as well.
- I've tried to encode the audio using FFmpeg.
- I've tried to use Jovo to convert the audio: https://v3.jovo.tech/audio-converter
- I don't know how to fix this error.
- There is no error message in CloudWatch.

Assumptions:
There is some problem with the audio resources, or there is more setup needed to play audio from an S3 Bucket, since the sample audio works.


Steps to reproduce:




Build the interaction model






Encode the audio to make it Alexa-skill friendly (fulfilling the requirements, like sample rate, etc.). I used and tried all of these (a quick check of the result is sketched right after the commands):




A:

ffmpeg -i input.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 -write_xing 0 output.mp3



B:

ffmpeg -i input.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 24000 -write_xing 0 output.mp3



C:


ffmpeg -y -i input.mp3 -ar 16000 -ab 48k -codec:a libmp3lame -ac 1 output.mp3
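For reference, a quick way to confirm what the encoded file actually contains before uploading it (a small ffprobe wrapper; the expected values are simply the ones targeted by the commands above, MP3 at 48 kbps and 16000 or 24000 Hz):

import json
import subprocess

def audio_info(path):
    """Print codec, sample rate, channels and bit rate of the first audio stream."""
    out = subprocess.run(
        ["ffprobe", "-hide_banner", "-v", "error",
         "-select_streams", "a:0",
         "-show_entries", "stream=codec_name,sample_rate,channels,bit_rate",
         "-of", "json", path],
        capture_output=True, text=True, check=True)
    stream = json.loads(out.stdout)["streams"][0]
    print(path, stream)
    # The commands above target MP3 at 48 kbps with a 16000 or 24000 Hz sample rate.
    assert stream["codec_name"] == "mp3"
    assert stream["sample_rate"] in ("16000", "24000")

audio_info("output.mp3")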





Upload the audio resources to the S3 Bucket.
The audio samples are in S3 storage, but none of them produce any sound.






Use the link and insert it into APLA.json





{
  "type": "APLA",
  "version": "0.91",
  "description": "Simple document that generates speech",
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "type": "Sequencer",
    "items": [
      {
        "type": "Audio",
        "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
      }
    ]
  }
}




Note: I changed the link source depending on which audio file I was testing.
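One extra sanity check that the source URL itself is reachable from outside (plain HTTP, nothing Alexa-specific; just a sketch):

import urllib.error
import urllib.request

# Plain HTTP sanity check of the audio URL used in APLA.json.
# A 403 here would mean the S3 object is not publicly readable.
url = "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"

try:
    with urllib.request.urlopen(urllib.request.Request(url, method="HEAD")) as resp:
        print("status:", resp.status)
        print("content-type:", resp.headers.get("Content-Type"))
        print("content-length:", resp.headers.get("Content-Length"))
except urllib.error.HTTPError as err:
    print("request failed:", err.code, err.reason)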




The intent handler in lambda_function.py:




# Imports as in the standard ASK SDK skill template (assumed; not shown in the original excerpt).
import json
import logging
from typing import Any, Dict

import ask_sdk_core.utils as ask_utils
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response
from ask_sdk_model.interfaces.alexa.presentation.apla import RenderDocumentDirective

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)


def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the APL JSON document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)


class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In LaunchRequestHandler")

        speak_output = "Hello World!"
        # .ask("add a reprompt if you want to keep the session open for the user to respond")

        return (
            handler_input.response_builder
            # .speak(speak_output)
            .add_directive(
                RenderDocumentDirective(
                    token="pagerToken",
                    document=_load_apl_document("APLA.json"),
                    datasources={}
                )
            )
            .response
        )
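For completeness, a minimal sketch of how this handler gets wired up, assuming the standard ASK SDK template (the other default handlers are omitted here):

from ask_sdk_core.skill_builder import SkillBuilder

# Standard SkillBuilder wiring from the default template (sketch; other handlers omitted).
sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())

# Entry point for the AWS Lambda function.
lambda_handler = sb.lambda_handler()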





Deploy






Test it






The result of the test on my end:

(Screenshot: the response for testing)




The JSON response:


{
  "body": {
    "version": "1.0",
    "response": {
      "directives": [
        {
          "type": "Alexa.Presentation.APLA.RenderDocument",
          "token": "pagerToken",
          "document": {
            "type": "APLA",
            "version": "0.91",
            "description": "Simple document that generates speech",
            "mainTemplate": {
              "parameters": [
                "payload"
              ],
              "type": "Sequencer",
              "items": [
                {
                  "type": "Audio",
                  "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
                }
              ]
            }
          },
          "datasources": {}
        }
      ],
      "type": "_DEFAULT_RESPONSE"
    },
    "sessionAttributes": {},
    "userAgent": "ask-python/1.16.1 Python/3.7.12"
  }
}





On my CloudWatch (screenshot):