
Other articles (21)
-
Images
15 May 2013 -
Contribute to documentation
13 April 2011 - Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including:
- critiques of existing features and functions
- articles contributed by developers, administrators, content producers and editors
- screenshots to illustrate the above
- translations of existing documentation into other languages
To contribute, register for the project users' mailing (...) -
Selection of projects using MediaSPIP
2 May 2011 - The examples below are representative of specific uses of MediaSPIP for specific projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of its kind. Its members (...)
On other sites (5738)
-
Generate individual HLS-compatible .ts segments on demand by downloading as few bytes as possible from a remote input file
27 January 2017, by Romain Cointepas - I'm trying to generate individual HLS-compatible .ts segments on demand by downloading/reading as few bytes as possible from a remote input file (hosted on a server that supports byte-range requests).
One application of this would be the ability to transcode and play on an Apple TV (via AirPlay) a remote file that is not AirPlay-compatible, without having to download the entire file first.
I am generating the playlist myself, and I have access to the ffprobe results for the remote file (which give the video duration, etc.).
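Since the playlist is written by hand from the ffprobe duration, a minimal sketch of that step might look like this (C#; the BuildPlaylist helper is hypothetical, and the 6-second target duration matches the command shown below):

using System;
using System.Globalization;
using System.Text;

static class PlaylistBuilder
{
    // Builds a VOD m3u8 advertising every segment as roughly 6 seconds,
    // from the total duration reported by ffprobe.
    public static string BuildPlaylist(double totalDuration, double segmentLength = 6.0)
    {
        int count = (int)Math.Ceiling(totalDuration / segmentLength);
        var sb = new StringBuilder();
        sb.AppendLine("#EXTM3U");
        sb.AppendLine("#EXT-X-VERSION:3");
        sb.AppendLine("#EXT-X-TARGETDURATION:" + (int)Math.Ceiling(segmentLength));
        sb.AppendLine("#EXT-X-MEDIA-SEQUENCE:0");
        for (int i = 0; i < count; i++)
        {
            // The last segment may be shorter than the nominal 6 seconds.
            double len = Math.Min(segmentLength, totalDuration - i * segmentLength);
            sb.AppendLine("#EXTINF:" + len.ToString("0.000", CultureInfo.InvariantCulture) + ",");
            sb.AppendLine((i + 1) + ".ts"); // 1-based segment names, e.g. 11.ts
        }
        sb.AppendLine("#EXT-X-ENDLIST");
        return sb.ToString();
    }
}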
I have something working that plays via AirPlay, but with small video and audio glitches between segments, when I use the following command to generate each segment:
ffmpeg -ss 60 -t 6 -i http://s3.amazonaws.com/misc-12345/avicii.vob -f mpegts -map 0:v:0 -map 0:a:0 -c:v libx264 -bsf:v h264_mp4toannexb -force_key_frames "expr:gte(t,n_forced*6)" -forced-idr 1 -pix_fmt yuv420p -colorspace bt709 -c:a aac -async 1 -preset ultrafast pipe:1
Note: the above command is for segment 11.ts, and in the m3u8 playlist each segment's duration is advertised as 6 seconds.
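For reference, a hedged sketch of how that command could be parameterized per segment (a 1-based segment n starts at (n - 1) * 6 seconds, so 11.ts maps to -ss 60 as above; error handling is omitted):

using System.Diagnostics;
using System.Globalization;

static class SegmentEncoder
{
    const double SegmentLength = 6.0;

    // Starts ffmpeg for the n-th (1-based) segment; the MPEG-TS bytes
    // arrive on the process's standard output.
    public static Process StartSegment(int segmentNumber, string inputUrl)
    {
        double start = (segmentNumber - 1) * SegmentLength; // 11.ts -> 60 s
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = string.Format(CultureInfo.InvariantCulture,
                "-ss {0} -t {1} -i {2} -f mpegts -map 0:v:0 -map 0:a:0 " +
                "-c:v libx264 -bsf:v h264_mp4toannexb " +
                "-force_key_frames \"expr:gte(t,n_forced*6)\" -forced-idr 1 " +
                "-pix_fmt yuv420p -colorspace bt709 -c:a aac -async 1 " +
                "-preset ultrafast pipe:1",
                start, SegmentLength, inputUrl),
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        return Process.Start(psi);
    }
}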
Here is a YouTube video showing the audio/video glitches between segments:
https://www.youtube.com/watch?v=0vMwgbSfsu0
The segment and hls muxers of ffmpeg can't be used, because they both generate all the segments at once.
I've been struggling with this for a few days now, and I would really appreciate some help!
-
Piping FFmpeg output to Unity texture
1 February 2021, by Sincress - I'm working on a networking component where the server provides a Texture, sends it to FFmpeg to be encoded (h264_qsv), and sends the result over the network. The client receives the stream (presumably mp4), decodes it using FFmpeg again, and displays it on a Texture.
Currently this works very slowly, since I am saving the texture to disk before encoding it to an mp4 file (also saved to disk), and on the client side I am saving the .png texture to disk after decoding, so that I can use it in Unity.
The server-side FFmpeg process is currently started with
process.StartInfo.Arguments = @" -y -i testimg.png -c:v h264_qsv -q 5 -look_ahead 0 -preset:v faster -crf 0 test.qsv.mp4";
and the client-side one with
process.StartInfo.Arguments = @" -y -i test.qsv.mp4 output.png";
Since this needs to be very fast (at least 30 fps) and real-time, I need to pipe the Texture directly to the FFmpeg process. On the client side, I likewise need to pipe the decoded data directly to the displayed Texture (as opposed to saving it and then reading it from disk).
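A minimal sketch of what the server side could look like under that constraint, assuming fixed-size RGBA32 textures are written as raw frames to ffmpeg's stdin (resolution, frame rate and output name are placeholders):

using System.Diagnostics;
using UnityEngine;

class RawFramePipe
{
    Process ffmpeg;

    // Launches ffmpeg reading raw RGBA frames from standard input (pipe:0).
    public void Start(int width, int height)
    {
        ffmpeg = new Process();
        ffmpeg.StartInfo.FileName = "ffmpeg";
        ffmpeg.StartInfo.Arguments =
            "-y -f rawvideo -pix_fmt rgba -s " + width + "x" + height +
            " -r 30 -i pipe:0 -c:v h264_qsv -q 5 -look_ahead 0" +
            " -preset:v faster test.qsv.mp4";
        ffmpeg.StartInfo.UseShellExecute = false;
        ffmpeg.StartInfo.RedirectStandardInput = true;
        ffmpeg.Start();
    }

    // Pushes one frame; the texture must be TextureFormat.RGBA32 so that
    // its raw bytes match -pix_fmt rgba.
    public void PushFrame(Texture2D tex)
    {
        byte[] raw = tex.GetRawTextureData(); // no PNG round-trip, no disk I/O
        var stdin = ffmpeg.StandardInput.BaseStream;
        stdin.Write(raw, 0, raw.Length);
        stdin.Flush();
    }
}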
A few days of research showed me that FFmpeg supports various piping options, including data formats such as bmp_pipe (piped bmp sequence), bin (binary text), data (raw data) and image2pipe (piped image2 sequence); however, documentation and examples on how to use these options are very scarce.
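Of the formats listed, image2pipe is the one most commonly shown with piped image sequences; here is a hedged variant of the raw-frame sketch above, feeding in-memory PNGs instead (the output settings are again placeholders):

using System.Diagnostics;
using UnityEngine;

class Image2PipeEncoder
{
    Process ffmpeg;

    // Launches ffmpeg expecting a piped PNG sequence on standard input.
    public void Start()
    {
        ffmpeg = new Process();
        ffmpeg.StartInfo.FileName = "ffmpeg";
        ffmpeg.StartInfo.Arguments =
            "-y -f image2pipe -vcodec png -r 30 -i pipe:0" +
            " -c:v h264_qsv -q 5 -look_ahead 0 -preset:v faster test.qsv.mp4";
        ffmpeg.StartInfo.UseShellExecute = false;
        ffmpeg.StartInfo.RedirectStandardInput = true;
        ffmpeg.Start();
    }

    public void PushFrame(Texture2D tex)
    {
        byte[] png = tex.EncodeToPNG(); // encoded in memory, never touches disk
        ffmpeg.StandardInput.BaseStream.Write(png, 0, png.Length);
        ffmpeg.StandardInput.BaseStream.Flush();
    }
}

Raw frames skip the per-frame PNG encode, so the rawvideo route above is usually the cheaper of the two at 30 fps.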
Please help me: which format should I use (and how should it be used)?