
Other articles (53)

- Websites made with MediaSPIP (2 May 2011)
This page lists some websites based on MediaSPIP.
- The SPIPmotion queue (28 November 2010)
A queue stored in the database.
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)
- Customizing categories (21 June 2013)
Category creation form
For those who know SPIP well, a category can be thought of as a rubrique (section).
For a document of type category, the fields offered by default are: Texte
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type media, the fields not displayed by default are: Descriptif rapide
It is also in this configuration section that you can indicate the (...)
On other sites (8300)
- Clip long video segment quickly (30 January 2020, by PRMan)
Let's say I have a video called Concert.mp4. I want to extract a performance from it quickly, with minimal re-encoding. I want to do the equivalent of this, but faster:
ffmpeg -i "Concert.mp4" -ss 00:11:45 -to 00:18:15 -preset veryfast -y artist.mp4
This takes 17 seconds, which is way too long for our needs.
Now, it turns out that 11:45 and 18:15 don't fall on I-frames, so if you try this you will get a 3-second delay at the beginning before the video shows:
ffmpeg -i "Concert.mp4" -ss 00:11:45 -to 00:18:15 -c copy -y artist.mp4
Running this command, we can see where we need to cut:
ffprobe -read_intervals "11:00%19:00" -v error -skip_frame nokey -show_entries frame=pkt_pts_time -select_streams v -of csv=p=0 "Concert.mp4" > frames.txt
So what we need to do is encode the first 3.708 seconds, copy the middle, and then encode the last 5.912 seconds.
I can get the 3 segments to all look perfect (by themselves) like this:
ffmpeg -ss 698.698 -i "Concert.mp4" -ss 6.302 -t 3.708 -c:v libx264 -c:a copy -c:s copy -y clipbegin.mp4
ffmpeg -ss 708.708 -to 1089.088 -i "Concert.mp4" -c copy -y clipmiddle.mp4
ffmpeg -ss 1089.088 -i "Concert.mp4" -t 5.912 -c:v libx264 -c:a copy -c:s copy -y clipend.mp4
ffmpeg -f concat -i segments.txt -c copy -y artist.mp4
segments.txt of course contains the following:
file 'clipbegin.mp4'
file 'clipmiddle.mp4'
file 'clipend.mp4'
I saw this solution presented here, but no amount of tweaking gets it to work for me:
https://superuser.com/a/1039134/73272
As far as I can tell, this method doesn’t work at all. It crashes VLC pretty hard no matter what I try.
The combined video keeps glitching after the first 3 seconds, probably because the PTS values are different or something (with some options, I have seen warning messages to this effect). Is there anything I can add to the commands above to get this to work? The only requirement is that the middle command must not re-encode the video, but must do a fast copy.
Thanks in advance.
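(For reference, here is a variation of the concat step that is sometimes tried when concatenated segments have mismatched timestamps. It is only a sketch, assuming the segment files are the .mp4 outputs produced above; the extra flags are not a verified fix for this particular clip.)
# -fflags +genpts asks the demuxer to regenerate missing presentation timestamps;
# -avoid_negative_ts make_zero shifts timestamps so the output starts at zero
ffmpeg -fflags +genpts -f concat -safe 0 -i segments.txt -c copy -avoid_negative_ts make_zero -y artist.mp4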
- ffmpeg encoder streaming issues (8 August 2017, by bobsingh1)
I am trying to build an ffmpeg encoder on Linux. I started with a custom-built server with dual socket-1366 2.6 GHz Xeon CPUs (6 cores) and 16 GB RAM, running a minimal install of Ubuntu 16.04. I built ffmpeg with h264 and aac. I am taking live OTA source channels and encoding/streaming them with the following parameters:
-vcodec libx264 -preset superfast -crf 25 -x264opts keyint=60:min-keyint=60:scenecut=-1 -bufsize 7000k -b:v 6000k -maxrate 6300k -muxrate 6000k -s 1920x1080 -format yuv420p -g 60 -sn -c:a aac -b:a 384k -ar 44100
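(For context, a complete command of this shape, with the mpegts UDP output described below, might look like the following sketch; the input URL and the multicast output address are placeholders rather than the values actually used on this server, and -pix_fmt is used here for the pixel format.)
# pkt_size=1316 keeps each UDP packet aligned to seven 188-byte TS packets
ffmpeg -i "udp://0.0.0.0:5000?fifo_size=1000000" \
  -vcodec libx264 -preset superfast -crf 25 -x264opts keyint=60:min-keyint=60:scenecut=-1 \
  -bufsize 7000k -b:v 6000k -maxrate 6300k -muxrate 6000k -s 1920x1080 -pix_fmt yuv420p -g 60 -sn \
  -c:a aac -b:a 384k -ar 44100 \
  -f mpegts "udp://239.1.1.1:1234?pkt_size=1316"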
And I am able to successfully send the output over UDP as mpegts. My problem starts with the 5th stream. The server can handle four streams, but as soon as I introduce a 5th stream I start seeing hiccups in the output. Looking at my CPU usage with top, I still see only 65% to 75% usage, with an occasional 80% spike. Memory usage is well within acceptable limits. So I am wondering whether top is not giving me accurate CPU usage or something is not right with ffmpeg. The server is isolated for UDP in/out on a 1 Gbps network.
I decided to increase the CPU power and installed two 3.5 GHz CPUs (6 cores), thinking the CPU clock was perhaps the limit. To my surprise, the results were no different. So now I am wondering whether there is some built-in limit I am hitting when I process at 1080p. If I change the resolution to 720p it is able to process 8 streams, but 720p is not acceptable.
My target is 10 1080p streams per server.
So my questions are:
1. If I use a quad-socket motherboard and increase the CPU count to 4 (6 or 8 cores each), will I get 10 1080p streams? Is there any theoretical maximum I can reach with ffmpeg per machine?
2. Do cores matter more, or does clock speed matter more?
3. Any suggestions for improving my options? I have tried the ultrafast preset, but the output quality is unacceptable.
Thanks in advance.
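(One way to get a clearer picture than top of how much encoding headroom the box has is ffmpeg's own -benchmark flag. This is only a rough sketch; the input file name is a placeholder for a recorded sample of one of the channels.)
# encode a sample to a throwaway mpegts output and watch the reported speed=;
# running several of these in parallel shows at what point the machine stops
# keeping up (speed falls below 1.0x), independently of what top reports
ffmpeg -benchmark -i sample_1080p.ts -c:v libx264 -preset superfast -crf 25 \
  -x264opts keyint=60:min-keyint=60:scenecut=-1 -b:v 6000k -maxrate 6300k -bufsize 7000k \
  -s 1920x1080 -g 60 -c:a aac -b:a 384k -ar 44100 -f mpegts -y /dev/null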
- FFMPEG - Struggling to find correct input audio codec parameters on macOS (20 September 2024, by Xavi)
I am trying to read my external stereo microphone with ffmpeg within my Qt Windows+macOS application, but I am struggling to obtain consistent, correct input codec parameters on macOS. My findings and suspicions so far:


The code I'm using on macOS is the following, where every call returns a successful return code:


avdevice_register_all();

// macOS only; the same code on Windows looks for "dshow"
const AVInputFormat *inputFormat = av_find_input_format("avfoundation");

// must start out NULL (or point to a context from avformat_alloc_context())
// before being passed to avformat_open_input
AVFormatContext *inputFormatContext = NULL;
avformat_open_input(&inputFormatContext, inputDevice, inputFormat, NULL);

avformat_find_stream_info(inputFormatContext, NULL);

// ... allocate the codec context for the single input stream and
// copy the parameters from the stream to the context


In my standalone minimal reproducer this always results in the codec ID of the single stream being AV_CODEC_ID_PCM_F32LE, on both macOS and Windows. When I integrate this code into my Qt application on Windows, I get the same result. However, on macOS, the codec ID of the stream usually comes out as AV_CODEC_ID_PCM_S16LE (via AV_CODEC_ID_FIRST_AUDIO) and only sometimes as AV_CODEC_ID_PCM_F32LE. Both sample formats are supported by my microphone.


AV_CODEC_ID_PCM_F32LE always results in correct output. AV_CODEC_ID_PCM_S16LE results in buzzy, noisy audio slowed down to 0.5x; if in that case I decode with AV_CODEC_ID_PCM_F32LE instead of copying the codec parameters from the stream, the output sounds correct again.


I am trying to write generic code, so while enforcing the AV_CODEC_ID_PCM_F32LE codec works, I'd rather understand what is happening.


What am I missing? Is Qt interacting in some way that I can't think of? I am compiling and linking my own ffmpeg libraries (6.1.1) and not using Qt's.
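(One way to cross-check what avfoundation reports for the microphone outside of Qt is the ffmpeg command-line tool. This is only a sketch; the audio device index ":1" is an assumption and should be replaced with whatever index the first command prints for the external microphone.)
# list AVFoundation capture devices and their indices
ffmpeg -f avfoundation -list_devices true -i ""
# open the audio device alone (no video) and discard the output; the input banner
# printed by ffmpeg shows the sample format (e.g. pcm_f32le vs pcm_s16le) and the
# sample rate that AVFoundation actually delivered
ffmpeg -f avfoundation -i ":1" -t 1 -f null -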