
Other articles (34)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...)
-
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including:
- critique of existing features and functions
- articles contributed by developers, administrators, content producers and editors
- screenshots to illustrate the above
- translations of existing documentation into other languages
To contribute, register to the project users' mailing (...)
-
Selection of projects using MediaSPIP
2 May 2011
The examples below are representative of how MediaSPIP is used in specific projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of this kind. Its members (...)
On other sites (2552)
-
FFmpeg chromakey libavfilter using C-API - key/replace green with alpha transparency
3 June 2020, by ZeroDefect
I'm trying to use the FFmpeg chromakey libavfilter (via the C API, using C++17) to key out the green pixels in a YUVA422p image and replace them with alpha transparency.

Now, I set up and initialise the graph, connect the filters, and push a frame through; however, the output frame appears unchanged. I suspect one of my configuration parameters is incorrect, but I'm really unsure. Even after reading the pertinent documentation, I still don't understand the problem.

I have published a (minimal) code sample on GitHub: https://github.com/zerodefect/chromakey_test. I have tried to keep the code sample as brief as possible, but it is still a bit lengthy.

The code sample includes a sample image (green_screen.png) for testing purposes.

To run the application, the following parameters are required:

./cb_chroma_key_test ./green_screen.png [OUTPUT_PATH]

The application dumps out a PLANAR image at YUV422p, which I then load via rawpixels.net - a brilliant little online utility for viewing raw image data (packed or planar).
My avfilter graph consists of:

buffersrc -> format -> chromakey -> buffersink

The format filter takes the RGBA (packed) input and converts it to planar YUVA422.
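As a sanity check on the chromakey parameters themselves, independent of the C API plumbing, the same chain can be expressed on the ffmpeg command line; the key colour, similarity and blend values below are guesses of mine, not values taken from the repository:

ffmpeg -i green_screen.png -vf "format=yuva422p,chromakey=green:0.1:0.0,format=rgba" -frames:v 1 keyed.png

If keyed.png comes out transparent where the green was, the filter parameters are sound and the issue lies in how the graph is wired up through the C API.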



- GCC 8.4
- Ubuntu 18.04
- FFmpeg 4.2
-
Clip long video segment quickly
30 January 2020, by PRMan
Let's say I have a video called Concert.mp4. I want to extract a performance from it quickly, with minimal re-encoding. I want to do the equivalent of this, but faster:
ffmpeg -i "Concert.mp4" -ss 00:11:45 -to 00:18:15 -preset veryfast -y artist.mp4
This takes 17 seconds, which is way too long for our needs.
Now, it turns out that 11:45 and 18:15 don't fall on I-frames, so if you try this you will get a 3-second delay at the beginning before the video shows:
ffmpeg -i "Concert.mp4" -ss 00:11:45 -to 00:18:15 -c copy -y artist.mp4
Running this command, we can see where we need to cut:
ffprobe -read_intervals "11:00%19:00" -v error -skip_frame nokey -show_entries frame=pkt_pts_time -select_streams v -of csv=p=0 "Concert.mp4" > frames.txt
So what we need to do is encode the first 3.708 seconds, copy the middle, and then encode the last 5.912 seconds.
I can get the 3 segments to all look perfect (by themselves) like this:
ffmpeg -ss 698.698 -i "Concert.mp4" -ss 6.302 -t 3.708 -c:v libx264 -c:a copy -c:s copy -y clipbegin.mp4
ffmpeg -ss 708.708 -to 1089.088 -i "Concert.mp4" -c copy -y clipmiddle.mp4
ffmpeg -ss 1089.088 -i "Concert.mp4" -t 5.912 -c:v libx264 -c:a copy -c:s copy -y clipend.mp4
ffmpeg -f concat -i segments.txt -c copy -y artist.mp4
segments.txt of course contains the following:
file 'clipbegin.mp4'
file 'clipmiddle.mp4'
file 'clipend.mp4'
I saw this solution presented here, but no amount of tweaking gets it to work for me:
https://superuser.com/a/1039134/73272
As far as I can tell, this method doesn’t work at all. It crashes VLC pretty hard no matter what I try.
The combined video keeps glitching after the first 3 seconds, probably because the PTS values are different or something (with some options, I have seen warning messages to this effect). Is there anything I can add to the commands above to get this to work? The only requirement is that the middle command must not re-encode the video; it must do a fast copy.
Thanks in advance.
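A note on the timestamp glitches: one thing that sometimes helps at concat boundaries - offered as a guess, not a verified fix for this exact case - is to regenerate timestamps when the pieces are joined, for example:

ffmpeg -fflags +genpts -f concat -safe 0 -i segments.txt -c copy -avoid_negative_ts make_zero -y artist.mp4

The middle segment is still a pure stream copy; only container timestamps are rewritten.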
-
ffmpeg encoder streaming issues
8 August 2017, by bobsingh1
I am trying to build an ffmpeg encoder on Linux. I started with a custom-built server with dual 1366 2.6 GHz Xeon CPUs (6 cores) and 16 GB RAM, running a minimal install of Ubuntu 16.04. I built ffmpeg with h264 and aac. I am taking live OTA source channels and encoding/streaming them with the following parameters:
-vcodec libx264 -preset superfast -crf 25 -x264opts keyint=60:min-keyint=60:scenecut=-1 -bufsize 7000k -b:v 6000k -maxrate 6300k -muxrate 6000k -s 1920x1080 -format yuv420p -g 60 -sn -c:a aac -b:a 384k -ar 44100
And I am able to successfully send the output over UDP as MPEG-TS. My problem starts with the 5th stream: the server can handle four streams, but as soon as I introduce a 5th stream I start seeing hiccups in the output. Looking at my CPU usage with top, I still see only 65% to 75% usage, with an occasional spike to 80%. Memory usage is well within acceptable limits. So I am wondering whether top is not giving me accurate CPU usage or something is not right with ffmpeg. The server is isolated for UDP in/out on a 1 Gbps network.
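For reference, the kind of full command line being described might look like the sketch below; the multicast input and output addresses are placeholders of my own, and the pixel format is written as -pix_fmt (the usual ffmpeg spelling) rather than the -format shown above:

ffmpeg -i "udp://239.1.1.1:5000?fifo_size=1000000" -vcodec libx264 -preset superfast -crf 25 -x264opts keyint=60:min-keyint=60:scenecut=-1 -bufsize 7000k -b:v 6000k -maxrate 6300k -muxrate 6000k -s 1920x1080 -pix_fmt yuv420p -g 60 -sn -c:a aac -b:a 384k -ar 44100 -f mpegts "udp://239.2.1.1:5000?pkt_size=1316"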
I decided to increase the CPU power and installed two 3.5 GHz CPUs (6 cores), thinking the CPU clock was perhaps the bottleneck. To my surprise, the results were no different. So now I am wondering whether there is some built-in limit I am hitting when processing at 1080p. If I change the resolution to 720p, the server can handle 8 streams, but 720p is not acceptable.
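One rough way to tell an encoder scaling limit apart from scheduler or NUMA effects on a dual-socket box is to pin each ffmpeg instance to its own physical cores and check whether per-instance output stays clean as instances are added; a sketch of that experiment, with placeholder core ranges and addresses, and with the encoding options above abbreviated as [ENCODING_OPTIONS]:

taskset -c 0-5 ffmpeg -i "udp://239.1.1.1:5000" [ENCODING_OPTIONS] -f mpegts "udp://239.2.1.1:5000" &
taskset -c 6-11 ffmpeg -i "udp://239.1.1.2:5000" [ENCODING_OPTIONS] -f mpegts "udp://239.2.1.2:5000" &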
My target is 10 1080p streams per server.
So my questions are:
1. If I use a quad-socket motherboard and raise the CPU count to 4 (6 or 8 cores each), will I get 10 1080p streams? Is there any theoretical maximum I can reach with ffmpeg per machine?
2. Do cores matter more, or does clock speed matter more?
3. Any suggestions for improving my options? I have tried the ultrafast preset, but the output quality is unacceptable.
Thanks in advance