
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (56)
-
Authorizations overloaded by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (9474)
-
Converting AAC stream to DASH MP4 with high fragment length precision
5 March 2017, by vdudouyt
For my HTML5 project I need to create a fragmented MP4 file with a single audio stream (no video), where each fragment has a duration of exactly 0.1 seconds.
According to the ffmpeg docs, you can accomplish this by passing a value in microseconds with ’-frag_duration’, which I found to work and to be playable with the HTML5 MediaSource API:
$ ffmpeg -y -i input.aac -c:a libfdk_aac -b:a 64k -level:v 13 -r 25 -strict experimental -movflags empty_moov+default_base_moof -frag_duration 100000 output.mp4
Since we have 210 seconds of audio split into 0.1 s fragments, I would expect output.mp4 to contain 2100 fragments, hence 2100 moof atoms. But upon inspecting it, I found only 1811 moof atoms, which means that some (or maybe even all) fragments are longer than expected:
$ python ~/git/mp4viewer/src/showboxes.py output.mp4 |grep moof|wc -l
1811
Could anybody tell me what’s wrong, and how I could accomplish what I want?
Right now my assumption is that the AAC frame length produced during encoding is not a divisor of 0.1 s, so ffmpeg has no way to cut fragments lasting exactly 0.1 s, but I’m not sure. If somebody can confirm that and tell me how to explicitly set the AAC frame_size in FFmpeg (I couldn’t find anything like that in the docs), or completely disprove it, that would also be highly appreciated.
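For what it’s worth, the frame-length hypothesis checks out numerically. An AAC-LC frame always carries 1024 samples, so its duration is fixed by the sample rate; assuming a 44.1 kHz input (an assumption, since the question doesn’t state the rate), a fragment must grow to 5 whole frames before it reaches the requested 0.1 s, and the resulting count lands close to the observed 1811:

$ awk 'BEGIN {
    frame = 1024 / 44100   # one AAC frame: 1024 samples = ~0.0232 s
    frag  = 5 * frame      # 5 frames = ~0.116 s, the first whole-frame duration >= 0.1 s
    print 210 / frag       # ~1809 fragments, close to the 1811 moofs observed
  }'

Since mov fragments can only be cut on AAC frame boundaries, exact 0.1 s fragments are impossible at common sample rates; and because the 1024-sample frame length is fixed by the AAC-LC format itself, the encoder exposes no frame_size option to change it.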
-
Create dynamic audio broadcast stream (node, ffmpeg, ..?)
24 March 2021, by TheCrunkiee
I have coded a videoboard. Like a soundboard, but with video. You go to one URL that's just a black screen, and to another one which has a list of different videos (the sender). When you click one of these videos, it plays on the black screen (the receiver). If you play 2 different videos at the same time, both are shown next to each other on the receiver. That has been working fine for several months now. It just creates multiple HTML video elements with multiple source tags (x265 mp4 and vp9 webm).


I recently made a Discord bot which takes the webm, extracts the Opus stream and plays its sound in the voice channel the bot is connected to. This has one disadvantage: it can only play one sound at a time. It often happens that multiple videos/sounds are playing at the same time, so this is a bit of a bummer.


So I thought I should create an audio stream on the server which hosts the videoboard and just connect the bot to that stream. But I have no clue how to do this. All I know is that it's very likely going to involve ffmpeg.


What would be the best option here? What I think I need is basically an infinite silence stream, plus the ability to add an audio file onto that stream at any point, playing simultaneously with any other audio files that were added before and have not finished playback yet. How is that possible? Somehow with m3u8 playlist files, or via the RTSP protocol?


Thanks :)
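One way to approach the "infinite silence plus mix-in" idea is to start from ffmpeg's anullsrc silence generator and mix files in with the amix filter, publishing the result somewhere the bot can read, e.g. an Icecast mount point. The sketch below is a minimal illustration under assumptions, not a full solution: the Icecast URL and sound1.ogg are placeholders, and amix cannot accept new inputs once it is running, so truly dynamic mixing needs a restartable filter graph, a tool built for live mixing such as Liquidsoap, or a hand-rolled PCM mixer in node.

$ ffmpeg -re -f lavfi -i anullsrc=r=48000:cl=stereo \
         -i sound1.ogg \
         -filter_complex "[0:a][1:a]amix=inputs=2:duration=first[out]" \
         -map "[out]" -c:a libopus -f ogg \
         icecast://source:password@localhost:8000/board.ogg

Because duration=first ties the output length to the never-ending silence source, the stream keeps running after sound1.ogg finishes, and the Discord bot can stay connected to the mount point as an ordinary listener.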


-
WebSRT and HTML5 media accessibility
9 September 2010, by silvia
On 23rd July, Ian Hickson, the HTML5 editor, posted an update to the WHATWG mailing list introducing the first draft of a platform for accessibility for the HTML5 <video> element. The platform provides for captions, subtitles, audio descriptions, chapter markers and similar (...)