
Media (1)
-
Somos millones 1
21 July 2014, by
Updated: June 2015
Language: French
Type: Video
Other articles (28)
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (8776)
-
Converting AAC stream to DASH MP4 with high fragment length precision
5 March 2017, by vdudouyt
For my HTML5 project I need to create a fragmented MP4 file with a single audio stream (no video), each fragment of which has a duration of exactly 0.1 second.
According to the ffmpeg docs, you can accomplish that by passing a value in microseconds with '-frag_duration', which I found to work and to be playable with the HTML5 MediaSource API:
$ ffmpeg -y -i input.aac -c:a libfdk_aac -b:a 64k -level:v 13 -r 25 -strict experimental -movflags empty_moov+default_base_moof -frag_duration 100000 output.mp4
As we have 210 seconds of audio split into 0.1 s fragments, I expect output.mp4 to contain 2100 fragments, hence 2100 moof atoms. But upon inspecting it I found that there are only 1811 moof atoms, which means that some (or maybe even all) fragments are longer than expected:
$ python ~/git/mp4viewer/src/showboxes.py output.mp4 |grep moof|wc -l
1811
Could anybody tell me what's wrong, and how I could accomplish what I want?
Right now my assumption is that the AAC frame length produced during encoding is not an exact divisor of 0.1 s, so ffmpeg has no way to produce fragments that are strictly equal to 0.1 s, but I'm not sure. If somebody could confirm that and let me know a way to explicitly set the AAC frame_size in FFmpeg (I couldn't find anything like that in the docs), or completely disprove this, it would also be highly appreciated.
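A back-of-the-envelope model supports that assumption. An AAC-LC frame always carries 1024 samples, and the muxer can only close a fragment on a frame boundary; assuming a 44.1 kHz input (not stated above - check input.aac with ffprobe), a rough sketch of the arithmetic looks like this:
// Rough model only: assumes AAC-LC's fixed 1024-sample frames and a 44.1 kHz
// sample rate for input.aac (an assumption; a different rate changes the numbers).
const samplesPerFrame = 1024;
const sampleRate = 44100;
const frameDuration = samplesPerFrame / sampleRate; // ~0.02322 s per AAC frame
const targetFragment = 0.1;                         // -frag_duration 100000

// A fragment can only end on a frame boundary, so each one becomes the
// smallest whole number of frames that reaches 0.1 s.
const framesPerFragment = Math.ceil(targetFragment / frameDuration); // 5 frames
const actualFragment = framesPerFragment * frameDuration;            // ~0.116 s

// 210 s of audio at ~0.116 s per fragment is roughly 1809 fragments,
// in the same ballpark as the 1811 moof atoms counted above.
console.log(framesPerFragment, actualFragment, Math.round(210 / actualFragment));
If that model holds, exact 0.1 s fragments would require a sample rate at which 0.1 s corresponds to a whole number of 1024-sample frames (i.e. a multiple of 10240 Hz), which none of the standard AAC sample rates provide.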
-
avcodec/lossless_videodsp: use ptrdiff_t for length parameters
22 March 2017, by James Almer
avcodec/lossless_videodsp: use ptrdiff_t for length parameters
Signed-off-by: James Almer <jamrial@gmail.com>
-
If multiple channels, merge then take sample length from audio file and save it to s3
18 May 2017, by khinester
I am using Transloadit to extract the audio from a video file, which is then saved to S3.
This works great, but I wanted to know how to:
-
check if the file has multiple channels and then squash them into one, as per https://transloadit.com/demos/audio-encoding/merging-multiple-audio-streams/ - do I need to check for this, or do I just default to using this robot?
-
extract a small sample from the audio file and save it as a separate file.
For example, I have a 2 h audio file from which I want to take 5% of the length and save it as sample.mp3.
In ffmpeg, I can cut:
ffmpeg -ss 0 -t 30 -i original.mp3 sample.mp3
but I am unsure how to chain this workflow. Here is what I have thus far (a rough ffprobe/ffmpeg sketch of both steps follows the config below):
const opts = {
  params: {
    notify_url: `${ process.env.SELF }/services/trans/${ jwToken }`,
    steps: {
      // pull the original upload in from the ingest bucket
      import: {
        robot: '/s3/import',
        use: ':original',
        bucket: process.env.S3_INGEST,
        path: ingest.key,
        key: process.env.AWS_ID,
        secret: process.env.AWS_SECRET,
      },
      // transcode the imported file to AAC at 128 kbps
      encode: {
        robot: '/audio/encode',
        use: 'import',
        ffmpeg_stack: 'v2.2.3',
        preset: 'aac',
        ffmpeg: {
          ab: '128k',
        },
      },
      // store the encoded audio back on S3
      export: {
        robot: '/s3/store',
        use: 'encode',
        bucket: s3Export,
        path: `${ prefix }/${ token }.m4a`,
        headers: {
          'Content-Type': 'audio/mp4',
          'x-amz-server-side-encryption': 'AES256',
        },
        key: process.env.AWS_ID,
        secret: process.env.AWS_SECRET,
      },
    },
  },
};
In the docs (https://transloadit.com/docs/conversion-robots/) I can't see how to do this.
Any advice is much appreciated.
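As a rough sketch of the two missing pieces outside Transloadit (whether the same thing can be expressed purely with Transloadit robots is the open question above): ffprobe can report the channel count and the total duration, and ffmpeg can then cut 5% of that length. This assumes Node.js with ffprobe and ffmpeg on the PATH; the file names below are placeholders.
const { execSync } = require('child_process');

const input = 'original.mp3'; // placeholder input file

// 1) Channel count of the first audio stream; anything above 1 means the
//    merge/downmix step from the demo linked above would be needed.
const channels = parseInt(
  execSync(`ffprobe -v error -select_streams a:0 -show_entries stream=channels -of csv=p=0 ${input}`)
    .toString().trim(), 10);

// 2) Total duration in seconds, then take 5% of it starting from 0.
const duration = parseFloat(
  execSync(`ffprobe -v error -show_entries format=duration -of csv=p=0 ${input}`)
    .toString().trim());
const sampleLength = duration * 0.05; // e.g. a 2 h file -> 360 s

// Cut the sample without re-encoding and save it as a separate file.
execSync(`ffmpeg -y -ss 0 -t ${sampleLength} -i ${input} -c copy sample.mp3`);
console.log(`channels: ${channels}, sample length: ${sampleLength} s`);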
-