
Other articles (101)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, provided your MediaSPIP installation is at version 0.2 or later. If in doubt, contact your MediaSPIP administrator to find out.
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier(), so that visitors can edit their own information on the authors page
-
Submitting bugs and patches
10 April 2011. Unfortunately, no piece of software is ever perfect...
If you think you have found a bug, report it in our ticket system, taking care to include the relevant information: the type and exact version of the browser with which you see the anomaly; as precise a description of the problem as possible; if possible, the steps to reproduce it; and a link to the site or page in question.
If you think you have fixed the bug yourself (...)
On other sites (7354)
-
Using FFMPEG select video filter with -ss on input
16 July 2018, by mwlon

I'm currently using an ffmpeg command like this, where I want to select very particular video frames from between (say) 6 and 8 seconds into the video:

ffmpeg
-t 10
-i test/timer.mp4
-ss 6
-vf "select=eq(ceil(n * 1 / 29.97) + 1\, ceil((n+1) * 1 / 29.97)) * lt(n\, 8 * 29.97)"
tmp/%07d.png

However, this makes ffmpeg decode the entire video up to 6 s, because the -ss comes after the -i. How can I change this command so that the video filter still works from the absolute timestamp into the video? For instance,

ffmpeg
-ss 6
-t 4
-i test/timer.mp4
-vf "select=eq(ceil(n * 1 / 29.97) + 1\, ceil((n+1) * 1 / 29.97)) * lt(n\, 8 * 29.97)"
tmp/%07d.png

is not equivalent, because n now refers to the frame number counted from 6 s into the video, so it ends up selecting different frames.

Is there any way to reference the input video's absolute timestamp or frame number when using -ss on it?
-
Xamarin Android merge audio files with FFMpeg
15 juin 2018, par BogdanI am using this binding library for FFMpeg :
https://github.com/gperozzo/XamarinAndroidFFmpegMy goal is to mix two audio files.
String s = "-i " + "test.wav" + " -i " + test2.mp3 + " -filter_complex amix=inputs=2:duration=first " + "result.mp3";
Device.BeginInvokeOnMainThread(async () =>
{
await FFMpeg.Xamarin.FFmpegLibrary.Run(Forms.Context, s);
});So I have 2 input files : one is .mp3 and another one is .wav.
I’ve tried also next commands :
String s= "-i "+ "test.wav" +" -i "+ "test2.mp3" + " -filter_complex [0:0][1:0]concat=n=2:v=0:a=1[out] -map [out] " + "result.mp3";
String s = "-i " + "test.wav" + " -i " + "test2.mp3" + " -filter_complex [0:a][1:a]amerge=inputs=2[aout] -map [aout] -ac 2 " + "result.mp3";1) Could I mix two different audio formats (in my case .mp3 & .wav) or they should be equivalent ?
2) What is the correct command line for the mixing ?Thanks in advance.
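On question 1: mixing different source formats is fine, because audio filters such as amix operate on decoded PCM, not on the container or codec, and libavfilter inserts resampling automatically when sample rates differ. A plain-CLI sketch of the same arguments the binding would pass:

```shell
# Sketch: mix a .wav and an .mp3 into one stereo .mp3.
# amix sees decoded PCM from both inputs, so the source codecs may differ.
ffmpeg -i test.wav -i test2.mp3 \
  -filter_complex "amix=inputs=2:duration=longest:dropout_transition=3" \
  result.mp3
```

duration=longest keeps the output as long as the longer input; the question's duration=first is equally valid if truncating at the first input's length is intended.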
-
Getting raw h264 packets from USB camera on Raspberry Pi
14 June 2018, by Aninano

I am trying to receive H264 frames from a USB web camera connected to my Raspberry Pi.
Using the RPi Camera Module, I can run the following command to get H264 data written to stdout:

raspivid -t 0 -w 640 -h 320 -fps 15 -o -

with close to zero latency. Is there an equivalent way to do this with a USB camera? I have two USB cameras I would like to do this with.
Using ffprobe /dev/videoX I get the following output (shortened down to the important details):

$ ffprobe /dev/video0
...
Input #0, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 18876.273861, bitrate: 147456 kb/s
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 1280x720, 147456 kb/s, 10 fps, 10 tbr, 1000k tbn, 1000k tbc
$ ffprobe /dev/video1
...
Input #0, video4linux2,v4l2, from '/dev/video1':
Duration: N/A, start: 18980.783228, bitrate: 115200 kb/s
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 800x600, 115200 kb/s, 15 fps, 15 tbr, 1000k tbn, 1000k tbc
$ ffprobe /dev/video2
...
Input #0, video4linux2,v4l2, from '/dev/video2':
Duration: N/A, start: 18998.984143, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, -5 kb/s, 30 fps, 30 tbr, 1000k tbn, 2000k tbc

As far as I can tell, two of them are not H264, so their output would need to be encoded to H264 first, which I understand adds a bit of latency. But the third one (video2) is H264, so I should be able to get data from it? I've tried to just pipe it out with cat, but it says I have invalid arguments.
It seems that using FFMPEG may be the only option here. I would like to use software that is easily available for all RPis (apt install).
Bonus question regarding H264 packets: when I stream the data from the raspivid command to my decoder, it works perfectly. But if I drop the first 10 packets, the decoding process never initializes and just shows a black background. Does anyone know what might be missing from those first packets, which I could recreate in my software so I don't have to restart the stream for every newly connected user?
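A minimal sketch of what the main question might look like with ffmpeg's V4L2 input, assuming /dev/video2 really does expose an H264 pixel format (device path and size taken from the ffprobe output above); -c:v copy passes the encoded packets through without re-encoding, which keeps latency low:

```shell
# Sketch: pull the camera's native H264 stream and write raw Annex-B
# packets to stdout, analogous to "raspivid ... -o -".
ffmpeg \
  -f v4l2 -input_format h264 \
  -video_size 1920x1080 -framerate 30 \
  -i /dev/video2 \
  -c:v copy \
  -f h264 -
```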
EDIT: Bonus Question Answer: After googling around, I see that the first two NAL units raspivid sends me are the sequence and picture parameter sets. By ignoring those two first packets, my decoder won't decode properly. If I save them and send them first to every newly connected user, it works perfectly. They are used in the decoder's initial setup:

0x27 = 0 01 00111 = type 7: sequence parameter set (SPS)
0x28 = 0 01 01000 = type 8: picture parameter set (PPS)
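The type field in those header bytes is just the low five bits of the first byte of each NAL unit (1 bit forbidden_zero_bit, 2 bits nal_ref_idc, 5 bits nal_unit_type), which is easy to verify from a shell:

```shell
# Mask the NAL unit header byte with 0x1F to get nal_unit_type:
# 7 = sequence parameter set (SPS), 8 = picture parameter set (PPS).
printf 'type of 0x27: %d\n' $(( 0x27 & 0x1F ))   # prints 7 (SPS)
printf 'type of 0x28: %d\n' $(( 0x28 & 0x1F ))   # prints 8 (PPS)
```

This is why saving and replaying those two units works: a decoder cannot set up its reference structures until it has seen an SPS and a PPS.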