
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (103)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not activated by default when MediaSPIP is initialized.
After it is activated, a preconfiguration is automatically put in place by MediaSPIP init, allowing the new feature to be operational straight away. It is therefore not necessary to go through a configuration step for this.
-
The farm's regular Cron tasks
1 December 2010
Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of all the instances of the farm on a regular basis. Coupled with a system Cron on the central site of the farm, this makes it possible to simply generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...)
On other sites (10769)
-
Add, remove and tune filters on running ffmpeg
28 May 2023, by Turmund
Preface


I have been fiddling around with ffmpeg and ffplay on the command line, adding and tuning filters and effects.

It quickly becomes rather tiresome to

1. start playback with some audio file,
2. stop,
3. modify the command,
4. go back to 1,

when, for example, fine-tuning noise reduction or adding effects and an equalizer.


I have played around with using zmq to tune filters by executing commands in a different terminal, but this also becomes somewhat cumbersome.

I want some interface where I can add, remove and tune filters at runtime, while listening to the changes take effect.


FFMPEG


I use "filter" to mean effect / filter from here on out, for example afftdn, rubberband, ...

ffmpeg is somewhat intimidating. It's powerful but also complex, at least when starting to dig into it. :0

Looking at the library and its examples, the API example for audio decoding and filtering looks, at least at first, like a promising starting point.
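Roughly, the graph setup in that example boils down to something like the following. This is a condensed sketch rather than the example verbatim: the input format is hard-coded to 44.1 kHz stereo planar float (in the real example it comes from the decoder context), build_graph is a made-up helper name, and error handling is minimal.

#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/mem.h>

/* Sketch: build a graph  abuffer -> <filters_descr> -> abuffersink  */
static int build_graph(const char *filters_descr,   /* e.g. "afftdn=rf=-20:nr=20" */
                       AVFilterGraph **graph,
                       AVFilterContext **src, AVFilterContext **sink)
{
    AVFilterInOut *in  = avfilter_inout_alloc();
    AVFilterInOut *out = avfilter_inout_alloc();
    int ret;

    *graph = avfilter_graph_alloc();

    /* Source: decoded frames are pushed into the graph through this filter.
     * Input format hard-coded here for brevity. */
    ret = avfilter_graph_create_filter(src, avfilter_get_by_name("abuffer"), "in",
              "time_base=1/44100:sample_rate=44100:sample_fmt=fltp:channel_layout=stereo",
              NULL, *graph);
    if (ret < 0)
        goto end;

    /* Sink: filtered frames are pulled back out here. */
    ret = avfilter_graph_create_filter(sink, avfilter_get_by_name("abuffersink"), "out",
                                       NULL, NULL, *graph);
    if (ret < 0)
        goto end;

    /* Wire the user-supplied filter chain between source and sink. */
    out->name       = av_strdup("in");
    out->filter_ctx = *src;
    out->pad_idx    = 0;
    out->next       = NULL;

    in->name        = av_strdup("out");
    in->filter_ctx  = *sink;
    in->pad_idx     = 0;
    in->next        = NULL;

    if ((ret = avfilter_graph_parse_ptr(*graph, filters_descr, &in, &out, NULL)) < 0)
        goto end;

    ret = avfilter_graph_config(*graph, NULL);
end:
    avfilter_inout_free(&in);
    avfilter_inout_free(&out);
    return ret;
}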


Output


I imagine it would be best to have multiple sinks, or some container with multiple audio tracks (a muxer sketch follows the lists below):

- Raw audio
- Audio with effects applied

Optionally:

- Raw audio
- Audio with all filters
- Audio with filter group 1
- Audio with filter group 2
- ... etc.
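For the multi-track container route, here is a rough sketch of my own (not from the question) of opening an output file with two audio tracks. Matroska is chosen because it is relaxed about codec combinations; the two encoder contexts, one per track, are assumed to be set up elsewhere.

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Hypothetical helper: create an output file with two audio tracks,
 * track 0 for the untouched audio and track 1 for the filtered audio.
 * raw_enc / filt_enc are the already-opened encoder contexts. */
static AVFormatContext *open_two_track_output(const char *path,
                                              AVCodecContext *raw_enc,
                                              AVCodecContext *filt_enc)
{
    AVFormatContext *oc = NULL;
    if (avformat_alloc_output_context2(&oc, NULL, "matroska", path) < 0)
        return NULL;

    AVStream *raw_st  = avformat_new_stream(oc, NULL);   /* track 0 */
    AVStream *filt_st = avformat_new_stream(oc, NULL);   /* track 1 */
    if (!raw_st || !filt_st ||
        avcodec_parameters_from_context(raw_st->codecpar,  raw_enc)  < 0 ||
        avcodec_parameters_from_context(filt_st->codecpar, filt_enc) < 0 ||
        avio_open(&oc->pb, path, AVIO_FLAG_WRITE) < 0 ||
        avformat_write_header(oc, NULL) < 0) {
        avio_closep(&oc->pb);
        avformat_free_context(oc);
        return NULL;
    }
    return oc;
}

/* Encoded packets are then routed to the right track before muxing:
 *     pkt->stream_index = filtered ? 1 : 0;
 *     av_interleaved_write_frame(oc, pkt);                            */

Whether mp4 would also work depends on the codec choices; raw PCM in mp4 is possible but less widely supported by players, which is one reason a Matroska file is used in this sketch.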

Processing


I imagine the routine would have to be something like this (a sketch of the loop follows the list):

1. Read a packet from the stream / file / URL
2. Unpack / decode the samples
3. Copy / duplicate the samples for each filter group (or one copy for all filters)
4. Apply the filter(s) to these "effect samples"
5. Write raw audio, filtered audio 1, filtered audio 2, ..., filtered audio N to the output
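As a sketch of what steps 1 to 5 could look like with the library (again my own illustration: dec_ctx, src and sink would come from a setup like the earlier sketch, the encode-and-mux steps are only indicated as comments, and error handling and flushing are left out):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>

/* Per-packet loop, roughly following steps 1-5 above. */
static void run_loop(AVFormatContext *in, int audio_idx, AVCodecContext *dec_ctx,
                     AVFilterContext *src, AVFilterContext *sink)
{
    AVPacket *pkt   = av_packet_alloc();
    AVFrame  *frame = av_frame_alloc();   /* decoded (raw) samples       */
    AVFrame  *filt  = av_frame_alloc();   /* samples after the filter(s) */

    while (av_read_frame(in, pkt) >= 0) {                         /* 1. read packet */
        if (pkt->stream_index == audio_idx &&
            avcodec_send_packet(dec_ctx, pkt) >= 0) {
            while (avcodec_receive_frame(dec_ctx, frame) >= 0) {  /* 2. decode */
                /* 3. KEEP_REF leaves `frame` untouched, so the raw samples
                 *    are still available after feeding the graph. */
                av_buffersrc_add_frame_flags(src, frame, AV_BUFFERSRC_FLAG_KEEP_REF);

                /* 5a. ... encode `frame` and write it to the raw track ... */

                while (av_buffersink_get_frame(sink, filt) >= 0) { /* 4. filtered */
                    /* 5b. ... encode `filt` and write it to the filtered track ... */
                    av_frame_unref(filt);
                }
                av_frame_unref(frame);
            }
        }
        av_packet_unref(pkt);
    }
    av_frame_free(&frame);
    av_frame_free(&filt);
    av_packet_free(&pkt);
}

For multiple filter groups, one buffersrc/buffersink pair per group could be created and each fed from the same decoded frame with AV_BUFFERSRC_FLAG_KEEP_REF.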

Or, for steps 3 to 5, since one would only be listening to one track at a time (though this is perhaps not the best approach if one decides to jump back / forth in the audio stream):

1. Apply the currently active filter(s)
2. Write raw audio and filtered audio to the output

Simultaneously, one would read and check for changes to the filters through some interface, i.e. an input such as

afftdn=rf=-20:nr=20

and then, if afftdn is not present in the filter chain, add it; otherwise set the new values.

The idea is to output "raw audio", i.e. to use this during a sampling and tuning phase, and then, once satisfied, produce a line of filter options that can be used with the ffmpeg tool to process the audio files.
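On the "set new values" part: for options of a filter that is already in the graph, libavfilter exposes avfilter_graph_send_command(), the same mechanism behind the zmq and sendcmd filters. A minimal sketch, assuming the option is one the filter accepts as a runtime command (each filter documents this in its "Commands" section; adding or removing a whole filter is another matter and in practice means rebuilding the graph):

#include <libavfilter/avfilter.h>
#include <libavutil/log.h>

/* Sketch: push a new option value to filters in a running graph.
 * target can be "all", a filter name, or a specific instance name such as
 * "Parsed_afftdn_0"; only filters that support the given command react.
 * Example call: retune(graph, "all", "nr", "20");  (retune is a made-up name) */
static int retune(AVFilterGraph *graph, const char *target,
                  const char *option, const char *value)
{
    char resp[256] = {0};
    int ret = avfilter_graph_send_command(graph, target, option, value,
                                          resp, sizeof(resp), 0);
    if (ret < 0)
        av_log(NULL, AV_LOG_WARNING, "send_command failed: %s\n", resp);
    return ret;
}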

Questions section


- Does something similar exist?

General:

- Does this seem like a reasonable way to do it, using the ffmpeg library?
  - Can one add, remove and change filter values at runtime, or does one have to re-initialize the entire stream for each added / removed filter, etc.?
  - Is the "Processing" part sound?
- Would using a container that supports multiple audio tracks be the likely best solution? E.g. mp4.
  - Any container preferred over others?
  - Any drawbacks (e.g. when jumping back / forth in the stream)?

Sub-note


The dream is to have an Arduino interfacing with this program, with physical rotary switches, incremental rotary encoders, buttons and whistles, tuning the various filter options using physical knobs. But first I need some working sample where I use a FIFO or whatever to test ffmpeg itself.
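For that first working sample, a named pipe is indeed one of the simpler control channels. A rough POSIX-only sketch (the path /tmp/filterctl and the line format are made up for illustration); it could be polled once per loop iteration and the result handed to something like the retune() helper above, or used to rebuild the graph:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Sketch: non-blocking read of one command line, e.g. "afftdn=rf=-20:nr=20",
 * from a FIFO created beforehand with `mkfifo /tmp/filterctl`.
 * Returns the length of the command placed in buf, or 0 if nothing was pending. */
static size_t poll_filter_command(char *buf, size_t bufsize)
{
    int fd = open("/tmp/filterctl", O_RDONLY | O_NONBLOCK);
    if (fd < 0)
        return 0;                       /* FIFO missing */
    ssize_t n = read(fd, buf, bufsize - 1);
    close(fd);
    if (n <= 0)
        return 0;                       /* no writer / nothing to read */
    buf[n] = '\0';
    buf[strcspn(buf, "\n")] = '\0';     /* keep only the first line */
    return strlen(buf);
}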

-
Is it possible to grab a frame from a video stream and save it as png in another stream with ffmpeg ?
9 February 2020, by Lázár Zsolt
I am trying to use FFMpeg with System.IO.Process to convert a seek-able in-memory video stream into a thumbnail. Piping the thumbnail out through stdout isn’t a problem, but piping in the video is tricky.
My current code copies the entire video stream into stdin, which is very slow and unnecessary, because ffmpeg obviously doesn’t need the entire file to get the first frame. Writing the stream to the file system and specifying its path as an input argument is also very slow, because the source video can be several gigabytes.
I have tried accomplishing this using existing libraries, such as AForge, FFMpegCore, Xabe.FFMpeg, xFFMpeg.NET and Accord.FFMPEG.Video, but unfortunately they can only work with actual files, not streams, and my input video is not available as a file.
The stream object that supplies the video fully implements seeking and random access reading functionalities, just like a file stream, so there is literally no valid reason for this to not be possible, besides the limitations of the APIs (or my knowledge).
As a last resort, I could use the Dokan.NET filesystem driver to expose the video stream as a virtual file so ffmpeg can read it, but that would be an extreme overkill and I’m looking for a better solution.
Below is my current code. For the sake of simplicity, I am emulating the input video stream with a FileStream.
using System.Diagnostics;
using System.IO;

var process = new Process();
process.StartInfo.FileName = "ffmpeg.exe";
// Read from stdin ("-i -"), grab one frame at 1 s, encode it as PNG and write it to stdout.
process.StartInfo.Arguments = "-i - -ss 00:00:01 -vframes 1 -q:v 2 -c:v png -f image2pipe -";
process.StartInfo.RedirectStandardInput = true;
process.StartInfo.RedirectStandardOutput = true;
process.StartInfo.UseShellExecute = false;
process.StartInfo.CreateNoWindow = true;
process.Start();

// Emulate the in-memory video stream with a FileStream and copy it all into ffmpeg's stdin.
var stream = File.OpenRead("test.mp4");
stream.CopyTo(process.StandardInput.BaseStream);
process.StandardInput.BaseStream.Flush();
process.StandardInput.BaseStream.Close();

// Copy the PNG produced on ffmpeg's stdout into a file.
var stream2 = File.Create("test.png");
var buffer = new byte[4096];
int read;
while ((read = process.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length)) > 0)
    stream2.Write(buffer, 0, read);

EDIT:
It might be useful to clarify what kind of data the input stream contains. It is basically a video file that can be in any commonly used format (avi, mp4, mov, ts, mkv, wmv, ...). The extension of the video (as if it were a file) is also known.
-
audio and video out of sync streaming usb webcam to RTMP server
2 April 2021, by Mateus Garcia
I'm trying to stream 4 webcams connected to my Raspberry Pi 4 to an nginx RTMP server (same network) using ffmpeg.


This works well for the very first moments, but after some time the video is ahead of the audio, and after some more time they are totally out of sync.


This is the command that I am running:


ffmpeg -r 15 -s 384x288 -f video4linux2 -i /dev/video0 \
-f alsa -thread_queue_size 1024 -ac 1 -i hw:1 \
-f flv rtmp://192.168.0.100/live/cam1



The RTMP server is a Docker image: tiangolo/nginx-rtmp


The client is VLC media player.


Please help me. I don't know what more I can do.


Command output:


[video4linux2,v4l2 @ 0x1a932b0] The V4L2 driver changed the video from 384x288 to 352x288
Input #0, video4linux2,v4l2, from '/dev/video0':
 Duration: N/A, start: 6196.784888, bitrate: 24330 kb/s
 Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 352x288, 24330 kb/s, 15 fps, 15 tbr, 1000k tbn, 1000k tbc
Guessed Channel Layout for Input Stream #1.0 : mono
Input #1, alsa, from 'hw:1':
 Duration: N/A, start: 1617338591.667887, bitrate: 768 kb/s
 Stream #1:0: Audio: pcm_s16le, 48000 Hz, mono, s16, 768 kb/s
Stream mapping:
 Stream #0:0 -> #0:0 (rawvideo (native) -> flv1 (flv))
 Stream #1:0 -> #0:1 (pcm_s16le (native) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
Output #0, flv, to 'rtmp://192.168.0.100/live/cam1':
 Metadata:
 encoder : Lavf58.20.100
 Stream #0:0: Video: flv1 (flv) ([2][0][0][0] / 0x0002), yuv420p(progressive), 352x288, q=2-31, 200 kb/s, 15 fps, 1k tbn, 15 tbc
 Metadata:
 encoder : Lavc58.35.100 flv
 Side data:
 cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
 Stream #0:1: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 48000 Hz, mono, s16p
 Metadata:
 encoder : Lavc58.35.100 libmp3lame
frame= 639 fps= 15 q=2.0 size= 1513kB time=00:00:42.57 bitrate= 291.1kbits/s speed=0.993x