
Media (1)
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (73)
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
Libraries and binaries specific to video and audio processing
31 January 2010
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries:
FFMpeg: the main encoder; transcodes almost all types of video and audio files into formats readable on the Internet. See this tutorial for its installation;
Oggz-tools: tools for inspecting ogg files;
Mediainfo: retrieves information from most video and audio formats;
Complementary, optional binaries:
flvtool2: (...)
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct steps.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: the technical information about the file's audio and video streams is retrieved; a thumbnail is generated by extracting a (...)
On other sites (7778)
Is it possible to grab a frame from a video stream and save it as png in another stream with ffmpeg?
9 February 2020, by Lázár Zsolt

I am trying to use FFMpeg with System.IO.Process to convert a seekable in-memory video stream into a thumbnail. Piping the thumbnail out through stdout isn't a problem, but piping in the video is tricky.
My current code copies the entire video stream into stdin, which is very slow and unnecessary, because ffmpeg obviously doesn’t need the entire file to get the first frame. Writing the stream to the file system and specifying its path as an input argument is also very slow, because the source video can be several gigabytes.
I have tried accomplishing this using existing libraries, such as AForge, FFMpegCore, Xabe.FFMpeg, xFFMpeg.NET and Accord.FFMPEG.Video, but unfortunately they can only work with actual files, not streams, and my input video is not available as a file.
The stream object that supplies the video fully implements seeking and random access reading functionalities, just like a file stream, so there is literally no valid reason for this to not be possible, besides the limitations of the APIs (or my knowledge).
As a last resort, I could use the Dokan.NET filesystem driver to expose the video stream as a virtual file so ffmpeg can read it, but that would be an extreme overkill and I’m looking for a better solution.
Below is my current code. For the sake of simplicity, I am emulating the input video stream with a FileStream.
var process = new Process();
process.StartInfo.FileName = "ffmpeg.exe";
process.StartInfo.Arguments = "-i - -ss 00:00:01 -vframes 1 -q:v 2 -c:v png -f image2pipe -";
process.StartInfo.RedirectStandardInput = true;
process.StartInfo.RedirectStandardOutput = true;
process.StartInfo.UseShellExecute = false;
process.StartInfo.CreateNoWindow = true;
process.Start();
var stream = File.OpenRead("test.mp4");
stream.CopyTo(process.StandardInput.BaseStream);
process.StandardInput.BaseStream.Flush();
process.StandardInput.BaseStream.Close();
var stream2 = File.Create("test.png");
var buffer = new byte[4096];
int read;
while ((read = process.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length)) > 0)
    stream2.Write(buffer, 0, read);

EDIT:
It might be useful to clarify what kind of data the input stream contains. It is basically a video file that can be in any commonly used format (avi, mp4, mov, ts, mkv, wmv, ...). The extension the video would have as a file is also known.
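One way around copying the whole stream is to feed stdin in chunks and treat a broken pipe as the normal stop signal: once ffmpeg has decoded the frame it needs, it exits, and the writer simply stops. Below is a minimal sketch of that pattern in Python (for brevity); pipe_until_consumed is an illustrative name, and head -c stands in for ffmpeg so the sketch runs anywhere. Note that containers whose index sits at the end of the file (some mp4 files) may still force ffmpeg to read most of the stream.

```python
import io
import subprocess

def pipe_until_consumed(source, argv, chunk=4096):
    """Feed 'source' to a child process chunk by chunk, stop as soon as
    the child closes its stdin, and return what it wrote to stdout."""
    proc = subprocess.Popen(argv, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    try:
        while True:
            data = source.read(chunk)
            if not data:
                break
            proc.stdin.write(data)
    except BrokenPipeError:
        pass  # the child stopped reading early -- expected, not an error
    finally:
        try:
            proc.stdin.close()
        except BrokenPipeError:
            pass
    out = proc.stdout.read()
    proc.wait()
    return out

# 'head -c 10' stands in for ffmpeg: it consumes only the first 10 bytes
# of a 100000-byte stream, so only a small prefix is ever written.
result = pipe_until_consumed(io.BytesIO(b"x" * 100000), ["head", "-c", "10"])
print(len(result))
```

With the real ffmpeg argument list from the snippet above, the same loop would stop as soon as ffmpeg closes its stdin after producing the thumbnail.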
Add, remove and tune filters on running ffmpeg
28 May 2023, by Turmund

Preface


I have been fiddling around with ffmpeg and ffplay on the command line, adding and tuning filters and effects.

It quickly becomes rather tiresome to:

1. start playback with some audio file
2. stop
3. modify the command
4. go back to 1.

when, for example, fine-tuning noise reduction or adding effects and an equalizer.


I have played around with using zmq to tune filters by executing commands from another terminal, but this also becomes somewhat cumbersome.

I want some interface where I can add, remove and tune filters at runtime, while listening to the changes take effect.


FFMPEG

From here on I use "filter" to mean effect/filter, for example afftdn, rubberband, ...

ffmpeg is somewhat intimidating. It is powerful but also complex, at least when starting to dig into it.

Looking at the library and its examples, the API example for audio decoding and filtering looks promising, at least at first, as a starting point.


Output

I imagine it would be best to have multiple sinks, or some container with multiple audio tracks:

- Raw audio
- Audio with effects applied

Optionally:

- Raw audio
- Audio with all filters
- Audio with filter group 1
- Audio with filter group 2
- ... etc.

Processing

I imagine the routine would have to be something like:

1. Read a packet from the stream/file/URL
2. Unpack the samples
3. Copy/duplicate the samples, one copy per filter group (or a single copy for all filters)
4. Apply the filter(s) to these "effect samples"
5. Write the raw audio, filtered audio 1, filtered audio 2, ..., filtered audio N to the output

Or, for steps 3-5, since one would only be listening to one track at a time (though this is perhaps not ideal if one decides to jump back and forth in the audio stream):

1. Apply the currently active filter(s)
2. Write the raw audio and the filtered audio to the output
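As a sanity check of the first routine, the tee-and-filter flow can be sketched with toy stand-ins: plain Python lists instead of decoded frames, simple callables instead of libavfilter chains. None of this is the real ffmpeg API; it only shows the structure of steps 1-5.

```python
def process(samples, filter_groups):
    """Tee each block of samples into one raw track plus one track per
    filter group, applying that group's chain to its own copy."""
    tracks = {"raw": []}
    for name in filter_groups:
        tracks[name] = []
    for block in samples:                       # 1-2. read and unpack a block
        tracks["raw"].append(block)             # keep the untouched copy
        for name, chain in filter_groups.items():
            out = block                         # 3. duplicate per group
            for f in chain:                     # 4. apply the group's filters
                out = f(out)
            tracks[name].append(out)            # 5. write to that track
    return tracks

gain   = lambda s: [x * 2 for x in s]           # toy "filter"
invert = lambda s: [-x for x in s]              # toy "filter"

tracks = process([[1, 2], [3, 4]],
                 {"fx1": [gain], "fx2": [gain, invert]})
print(tracks["raw"])   # [[1, 2], [3, 4]]
print(tracks["fx2"])   # [[-2, -4], [-6, -8]]
```

The real implementation would replace the callables with per-group libavfilter graphs fed from a shared decoder.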






Simultaneously, one would read and apply filter changes coming from some interface. For example, given the input:

afftdn=rf=-20:nr=20

then, if afftdn is not present in the chain, add it; otherwise set the new values.

The idea is to also output the raw audio, i.e. use the program during a sampling and tuning phase, and then, once satisfied, produce a line of filter options that can be used with the ffmpeg tool to process the audio files.
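The add-or-update logic and the final export of such a filter line can live in one small structure. A hedged sketch in plain Python (FilterChain and its methods are illustrative names, not an existing API):

```python
class FilterChain:
    """Ordered collection of ffmpeg-style filters with tunable options."""

    def __init__(self):
        self.filters = {}  # name -> {option: value}, insertion-ordered

    def set(self, name, **options):
        """Add the filter if absent, otherwise update its options in place."""
        self.filters.setdefault(name, {}).update(options)

    def remove(self, name):
        self.filters.pop(name, None)

    def to_arg(self):
        """Render the chain as an ffmpeg -af argument string."""
        return ",".join(
            name + "=" + ":".join(f"{k}={v}" for k, v in opts.items())
            if opts else name
            for name, opts in self.filters.items()
        )

chain = FilterChain()
chain.set("afftdn", rf=-20, nr=20)
chain.set("afftdn", nr=30)            # tune an already-present filter
chain.set("rubberband", pitch=1.1)
print(chain.to_arg())  # afftdn=rf=-20:nr=30,rubberband=pitch=1.1
```

The string produced by to_arg() is exactly what would be handed to ffmpeg -af at the end of the tuning session.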

Questions section

- Does something similar exist?

General:

- Does this seem like a reasonable approach, using the ffmpeg libraries?
- Can one add, remove and change filter values at runtime, or does one have to re-initialize the entire stream for each added/removed filter?
- Is the "Processing" part sound?
- Would a container that supports multiple audio tracks (e.g. mp4) likely be the best solution?
  - Is any container preferred over the others?
  - Are there any drawbacks (e.g. when jumping back and forth in the stream)?

Sub-note

The dream is to have an Arduino interfacing with this program, using physical rotary switches, incremental rotary encoders, buttons and whistles to tune the various filter options with physical knobs. But first I need some working sample where I use a FIFO or whatever to test ffmpeg itself.

Forward youtube-dl output to ffmpeg with hardware-accelerated encoding [closed]
16 March 2021, by Yehor

I'm trying to download some videos from YouTube and simultaneously forward them to ffmpeg for hardware-accelerated encoding to H.265 (just to see how it works).


This command works but, as I understand it, uses software encoding:


youtube-dl -o "./%(playlist)s/%(title)s.%(ext)s" --merge-output-format mkv --postprocessor-args "-c:v libx265 -c:a opus -strict experimental" "target_url"



When I specify hardware acceleration, I receive an "Invalid argument" error. The command is:


youtube-dl -o "./%(playlist)s/%(title)s.%(ext)s" --merge-output-format mkv --postprocessor-args "-hwaccel cuda -c:v hevc_nvenc -c:a opus -strict experimental" "target_url"



What I've done wrong ?


System info:

- OS: Windows 10 x64
- GPU: Nvidia GeForce 960M (latest driver and CUDA installed), which should be supported by CUDA
- latest youtube-dl and ffmpeg builds

The ffmpeg -encoders output lists several HEVC (H.265) encoders, so they should also be supported:
V..... libx265 libx265 H.265 / HEVC (codec hevc)
V..... nvenc_hevc NVIDIA NVENC hevc encoder (codec hevc)
V..... hevc_amf AMD AMF HEVC encoder (codec hevc)
V..... hevc_nvenc NVIDIA NVENC hevc encoder (codec hevc)
V..... hevc_qsv HEVC (Intel Quick Sync Video acceleration) (codec hevc)