
Other articles (69)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
From upload to final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct steps.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
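For illustration, the two extra actions described here map naturally onto ffprobe and ffmpeg calls; a minimal sketch, not necessarily the exact commands SPIPMotion runs (file names are hypothetical):
ffprobe -show_format -show_streams source.mp4
ffmpeg -ss 5 -i source.mp4 -frames:v 1 thumbnail.png
The first call dumps the technical information of the container and its audio/video streams; the second seeks 5 seconds in and extracts a single frame as a thumbnail.
-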
Libraries and binaries specific to video and audio processing
31 January 2010
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries
FFMpeg: the main encoder; it can transcode almost all types of video and audio files into formats playable on the Internet. Cf. this tutorial for its installation;
Oggz-tools: tools for inspecting ogg files;
Mediainfo: retrieves information from most video and audio formats;
Complementary and optional binaries
flvtool2: (...)
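As a concrete illustration of FFMpeg's transcoding role described above, a typical command for producing a web-playable file might look like this (a generic sketch, not SPIPmotion's actual invocation; file names are hypothetical):
ffmpeg -i source.avi -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k -movflags +faststart output.mp4
The -movflags +faststart option moves the index to the front of the MP4 so playback can start before the file is fully downloaded.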
On other sites (11862)
-
Creating personalized video from user submitted pictures and data
18 December 2016, by Rajat Singhal
Can someone guide me towards what technology to use to create personalized videos from user-submitted pictures and data? The process has to be automated: the server must be able to create a downloadable video from photos and text submitted by the user.
So the process will probably be: one template video is created by a hired artist, with placeholders where the user-submitted pictures and text will fit in. Then, with the user-submitted data, the video can be created and downloaded from the website.
An example would be the videos Facebook creates nowadays on your birthday or at year end. They consist of some of your photos and some text, and share a common video theme. You can view one here: http://newsroom.fb.com/news/2016/12/facebook-2016-year-in-review/
One way I've found is to write the video as HTML5 code and then record it with PhantomJS and FFmpeg: http://mindthecode.com/recording-a-website-with-phantomjs-and-ffmpeg/
But that seems an unnatural way of doing it, and I also think there are not many good artists out there who can create a video theme in HTML5.
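For reference, the kind of template compositing described in the question can be sketched with a single FFmpeg invocation, assuming a template video with known placeholder positions (all file names, coordinates, timings and the font path below are hypothetical):
ffmpeg -i template.mp4 -i user_photo.jpg -filter_complex "[1:v]scale=320:240[photo];[0:v][photo]overlay=100:80:enable='between(t,2,8)',drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='Happy Birthday':x=100:y=400:fontsize=48:fontcolor=white:enable='between(t,2,8)'" -c:a copy personalized.mp4
The overlay and drawtext filters place the user's photo and text over the template between seconds 2 and 8; a server could fill in the file names and text per user and run such a command on demand.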
-
Piping FFmpeg output to Unity texture
13 December 2016, by Sincress
I'm working on a networking component where the server provides a Texture, sends it to FFmpeg to be encoded (h264_qsv), and sends the result over the network. The client receives the stream (presumably MP4), decodes it using FFmpeg again and displays it on a Texture.
Currently this works very slowly, since I am saving the texture to disk before encoding it to an MP4 file (also saved to disk), and on the client side I am saving the .png texture to disk after decoding it so that I can use it in Unity.
The server-side FFmpeg process is currently started with
process.StartInfo.Arguments = @" -y -i testimg.png -c:v h264_qsv -q 5 -look_ahead 0 -preset:v faster -crf 0 test.qsv.mp4";
and the client-side one with
process.StartInfo.Arguments = @" -y -i test.qsv.mp4 output.png";
Since this needs to be fast (at least 30 fps) and real-time, I need to pipe the Texture directly to the FFmpeg process. On the client side, I likewise need to pipe the decoded data directly to the displayed Texture (as opposed to saving it and then reading it from disk).
A few days of research showed me that FFmpeg supports various piping options, including data formats such as bmp_pipe (piped bmp sequence), bin (binary text), data (raw data) and image2pipe (piped image2 sequence); however, documentation and examples on how to use these options are very scarce.
Please help me: which format should I use (and how should it be used)?
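For context, one way to avoid the disk round-trip is to feed raw frames to FFmpeg's stdin using the rawvideo demuxer and read the encoded stream from stdout; a minimal sketch of the server-side command line (resolution, pixel format and frame rate are assumptions that must match the bytes you write):
ffmpeg -f rawvideo -pixel_format rgba -video_size 1280x720 -framerate 30 -i - -c:v h264_qsv -f mpegts pipe:1
On the C# side this means redirecting StandardInput and writing each frame's raw bytes to it, while reading the encoded output from StandardOutput. mpegts is chosen here instead of MP4 because the MP4 muxer needs a seekable output and is therefore awkward to write to a pipe.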
-
Generate individual HLS-compatible .ts segments on demand by downloading as few bytes as possible from a remote input file
27 January 2017, by Romain Cointepas
I'm trying to generate individual HLS-compatible .ts segments on demand by downloading/reading as few bytes as possible from a remote input file (hosted on a server supporting byte-range requests).
One application of this would be the ability to transcode and play on Apple TV (via AirPlay) a remote file that is not AirPlay-compatible, without having to download the entire file first.
I am generating the playlist myself, and I have access to the ffprobe results for the remote file (that gives video duration, etc.).
I have something working that plays via AirPlay, but with small video and audio glitches between segments, when I use the following command to generate each segment:
ffmpeg -ss 60 -t 6 -i http://s3.amazonaws.com/misc-12345/avicii.vob -f mpegts -map 0:v:0 -map 0:a:0 -c:v libx264 -bsf:v h264_mp4toannexb -force_key_frames "expr:gte(t,n_forced*6)" -forced-idr 1 -pix_fmt yuv420p -colorspace bt709 -c:a aac -async 1 -preset ultrafast pipe:1
Note: the above command is for segment 11.ts, and in the m3u8 playlist I advertise each segment's duration as 6 seconds.
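One plausible cause of such glitches is that each independently generated segment starts its timestamps at zero; a variation worth trying, sketched below and not a verified fix, shifts the mpegts timestamps of each segment by its playlist start time (60 seconds for this segment) so consecutive segments line up:
ffmpeg -ss 60 -t 6 -i http://s3.amazonaws.com/misc-12345/avicii.vob -f mpegts -map 0:v:0 -map 0:a:0 -c:v libx264 -bsf:v h264_mp4toannexb -force_key_frames "expr:gte(t,n_forced*6)" -forced-idr 1 -pix_fmt yuv420p -colorspace bt709 -c:a aac -async 1 -preset ultrafast -output_ts_offset 60 -muxdelay 0 pipe:1
Here -output_ts_offset is a standard output format option that offsets all muxed timestamps, and -muxdelay 0 removes the default mpegts mux delay.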
Here is a YouTube video showing the audio/video glitches between segments:
https://www.youtube.com/watch?v=0vMwgbSfsu0
The segment and hls muxers of ffmpeg can't be used because they both generate all the segments at once.
I've been struggling with this for some days now and I would really appreciate some help!