
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre [Preserving net art in the museum: the strategies at work]
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (50)
-
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player was created specifically for MediaSPIP and can easily be adapted to fit a given theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011 — MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...) -
From upload to the final video [standalone version]
31 January 2010 — The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
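Purely as a hedged sketch (not SPIPMotion's actual code; the file names and the 5-second offset are arbitrary assumptions), those two extra steps might look like this with ffprobe/ffmpeg:
# Probe the technical information of the file's audio and video streams (hypothetical file name)
ffprobe -v quiet -print_format json -show_format -show_streams source.mp4
# Generate a thumbnail by extracting a single frame (the 5-second offset is an arbitrary choice)
ffmpeg -ss 5 -i source.mp4 -frames:v 1 -vf "scale=320:-2" thumbnail.jpg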
On other sites (8338)
-
FFmpeg ignores some HTTP options when using the PUT method
6 March 2020, by mehdi.r — I am using FFmpeg to create a CMAF stream and I upload it to an AWS resource (AWS MediaStore) using FFmpeg's PUT method.
I need to pass the Content-Type header when uploading manifests & segments.
I have 3 types of files:
application/x-mpegURL: m3u8 manifest
application/dash+xml: mpd manifest
video/mp4: video segments
Currently, all the types are set to binary/octet-stream in the AWS resource (AWS MediaStore).
As I will upload a huge number of files, I can't use AWS Lambda functions to set the correct content type after a file has been uploaded.
FFmpeg upload logs
[https @ 0x555fe7a7d1c0] Opening 'https://XXXX.YYYY.amazonaws.com/chunk-stream0-00001.mp4' for writing
[https @ 0x555fe7a7d0c0] request: PUT /chunk-stream0-00001.mp4 HTTP/1.1
Transfer-Encoding: chunked
User-Agent: Lavf/58.28.100
Accept: */*
Connection: keep-alive
Host: XXXXX.YYYY.amazonaws.com
Icy-MetaData: 1
My tries
I tried static builds & the master branch of FFmpeg.
I tried different ways to pass the content type, without success:
-mime_type 1 -headers "Content-type: video/mp4\r\n"
-mime_type "video/mp4,application/dash+xml,application/x-mpegURL"
-content_type application/dash+xml
-multiple_requests 1 -headers "a:b" -icy 0
Upload command:
./ffmpeg -re -i ~/videos/BigBuckBunny.mp4 -loglevel debug \
-map 0 -map 0 -map 0 -c:a aac -c:v libx264 -tune zerolatency \
-b:v:0 2000k -s:v:0 1280x720 -profile:v:0 high -b:v:1 1500k -s:v:1 640x340 -profile:v:1 main -b:v:2 500k -s:v:2 320x170 -profile:v:2 baseline -bf 1 \
-keyint_min 24 -g 24 -sc_threshold 0 -b_strategy 0 -ar:a:1 22050 -use_timeline 1 -use_template 1 -window_size 5 \
-adaptation_sets "id=0,streams=v id=1,streams=a" -hls_playlist 1 -seg_duration 3 -streaming 1 \
-strict experimental -lhls 1 -remove_at_exit 0 -master_m3u8_publish_rate 3 \
-f dash -method PUT -http_persistent 1 https://example.com/manifest.mpd
Any help would be highly appreciated.
Reference:
https://www.ffmpeg.org/ffmpeg-protocols.html#http
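For illustration only, not a verified fix: a minimal sketch of a single-file PUT (placeholder host and file names) where -method and -content_type are set directly on the one output URL; whether the dash muxer forwards these http protocol options to every segment request is precisely the problem described above.
# Hypothetical single-file PUT to a placeholder URL. -method and -content_type are options
# of FFmpeg's http(s) protocol (see the reference above) applied to this one output;
# fragmented MP4 flags are used because the HTTP output is not seekable.
ffmpeg -i input.mp4 -c copy -movflags frag_keyframe+empty_moov \
  -f mp4 -method PUT -content_type video/mp4 \
  https://XXXX.YYYY.amazonaws.com/chunk-test.mp4
-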
FFmpeg segment desktop capture and send over http
18 December 2014, by static — I'm trying to capture the desktop video and segment it in order to send it over HTTP to my node.js server, where I want to encode it to multiple bit-rates in order to serve it to clients for live streaming.
The video has to be received in segments in order to create the manifest for live streaming, because I'm trying to use DASH and to play the video on the client side using dash.js.
The problem is that I can't seem to be able to segment the video properly when I'm sending it to the server.
This is the ffmpeg command that I've tried:
ffmpeg -rtbufsize 1500M -f dshow -r 10 -i video="UScreenCapture" \
  -vcodec libvpx -crf 10 -quality good -cpu-used 3 -b:v 1000k -qmin 10 -qmax 42 \
  -threads 2 -vf scale=-1:480 -bufsize 1500 -flags -global_header -map 0 \
  -f stream_segment -segment_time 2 -segment_format webm - http://localhost:3000/stream/22
I've managed to send it over HTTP without segmenting the video, but on the server side I need to receive segments that have a duration of 2 seconds so that I can create the manifest.
Processing the video on the server side is also going to be done using ffmpeg (the fluent-ffmpeg module).
I'm open to any suggestions.
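As a hedged side note (assumed output file pattern, not a tested fix for the HTTP case): the 2-second segmentation itself can be checked locally by writing the segments to disk with the segment muxer, independently of the HTTP transport.
# Minimal local sketch: same capture source and VP8 settings as above, but segments are
# written to numbered files (out_000.webm, out_001.webm, ...) instead of being sent over HTTP.
ffmpeg -rtbufsize 1500M -f dshow -r 10 -i video="UScreenCapture" \
  -vcodec libvpx -crf 10 -quality good -cpu-used 3 -b:v 1000k -qmin 10 -qmax 42 \
  -threads 2 -vf scale=-1:480 -bufsize 1500 -flags -global_header -map 0 \
  -f segment -segment_time 2 -segment_format webm out_%03d.webm
-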
10 Customer Segments Examples and Their Benefits
9 May 2024, by Erin