
Media (1)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (28)
-
Libraries and binaries specific to video and audio processing
31 January 2010, by
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries: FFMpeg: the main encoder, which can transcode almost every type of video and audio file into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
Complementary, optional binaries: flvtool2: (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer flash player is used instead.
The HTML5 player in use was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...) -
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information from the source video
First, you need to create a SPIP article and attach the "source" video document to it.
When this document is attached to the article, two actions beyond the normal behavior are executed: the retrieval of technical information about the file's audio and video streams; and the generation of a thumbnail: the extraction of a (...)
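The two extra actions described above map naturally onto standard command-line tools. As a rough sketch (assuming ffprobe and ffmpeg are on the PATH, and `source.mp4` is a hypothetical attached document):

```shell
# Retrieve technical information about the audio and video streams as JSON
ffprobe -v error -show_streams -show_format -of json source.mp4

# Generate a thumbnail by extracting a single frame (here at the 5-second mark)
ffmpeg -ss 5 -i source.mp4 -frames:v 1 -q:v 2 thumbnail.jpg
```

The exact timestamp and quality settings are arbitrary choices here, not something the article specifies.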
On other sites (4887)
-
How to decode mp3 to raw sample data for FFMpeg using FFMediaToolkit
28 December 2022, by Lee
My objective is to create a video slideshow with audio, using a database as the source. In the final implementation, the video and audio inputs need to be memory streams or byte arrays, not file system paths. The sample code is file-based for portability: it just tries to read a file-based mp3 and then write it to the output.


I've tried a few FFMpeg wrappers and I'm open to alternatives. This code uses FFMediaToolkit. The video portion of the code works; it's the audio that I can't get to work.


The input is described as "A 2D jagged array of multi-channel sample data with NumChannels rows and NumSamples columns." The datatype is float[][].


My mp3 source is mono. I'm using NAudio.Wave to decode the mp3. The decoded audio is then split into chunks equal to the frame size for the sample rate, and each chunk is converted into the jagged float array with the data on channel 0.


The FFMpeg decoder displays a long list of "buffer underflow" and "packet too large, ignoring buffer limits to mux it" messages. C# then throws "Specified argument was out of the range of valid values.", the offending line of code being "file.Audio.AddFrame(frameAudio)".


The source is 16-bit samples. The PCM_S16BE codec is the only one that I could get to accept a 16-bit sample format, and I could only get the MP3 encoder to work with "Signed 32-bit integer (planar)" as the sample format. I'm not certain whether the source data needs to be converted from 16 to 32 bit to use that codec.
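For what it's worth, widening 16-bit samples to a planar 32-bit integer format is just a scaling step, and for mono audio the planar and interleaved layouts are identical. A minimal sketch of that conversion in C (the same arithmetic would apply in C#; note FFMediaToolkit's AddFrame itself takes normalized floats, so this concerns only the codec-side format):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Widen interleaved mono S16 samples to S32, preserving full-scale
   amplitude by placing each sample in the top 16 bits. Multiplication
   by 65536 is used instead of a left shift to stay well-defined for
   negative values in C. */
void s16_to_s32p(const int16_t *in, int32_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (int32_t)in[i] * 65536;
}
```

Whether the wrapper performs this widening internally when SampleFormat.SignedDWordP is selected is an open question; the sketch only shows what the conversion itself would look like.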


`


using FFMediaToolkit;
using FFMediaToolkit.Decoding;
using FFMediaToolkit.Encoding;
using FFMediaToolkit.Graphics;
using System;
using System.Collections.Generic;
using System.Drawing.Imaging;
using System.Drawing;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using FFMediaToolkit.Audio;
using NAudio.Wave;
using FFmpeg.AutoGen;

internal class FFMediaToolkitTest
{
    const int frameRate = 30;
    const int vWidth = 1920;
    const int vHeight = 1080;
    const int aSampleRate = 24_000; // source sample rate
    //const int aSampleRate = 44_100;
    const int aSamplesPerFrame = aSampleRate / frameRate;
    const int aBitRate = 32_000;
    const string dirInput = @"D:\Websites\Vocabulary\Videos\source\";
    const string pathOutput = @"D:\Websites\Vocabulary\Videos\example.mpg";

    public FFMediaToolkitTest()
    {
        try
        {
            FFmpegLoader.FFmpegPath = "."; // FFMpeg DLLs in root project directory
            var settings = new VideoEncoderSettings(width: vWidth, height: vHeight, framerate: frameRate, codec: VideoCodec.H264);
            settings.EncoderPreset = EncoderPreset.Fast;
            settings.CRF = 17;

            //var settingsAudio = new AudioEncoderSettings(aSampleRate, 1, (AudioCodec)AVCodecID.AV_CODEC_ID_PCM_S16BE); // Won't run with low bitrate.
            var settingsAudio = new AudioEncoderSettings(aSampleRate, 1, AudioCodec.MP3); // mpg runs with SampleFormat.SignedDWordP
            settingsAudio.Bitrate = aBitRate;
            //settingsAudio.SamplesPerFrame = aSamplesPerFrame;
            settingsAudio.SampleFormat = SampleFormat.SignedDWordP;

            using (var file = MediaBuilder.CreateContainer(pathOutput).WithVideo(settings).WithAudio(settingsAudio).Create())
            {
                var files = Directory.GetFiles(dirInput, "*.jpg");
                foreach (var inputFile in files)
                {
                    Console.WriteLine(inputFile);
                    var binInputFile = File.ReadAllBytes(inputFile);
                    var memInput = new MemoryStream(binInputFile);
                    var bitmap = Bitmap.FromStream(memInput) as Bitmap;
                    var rect = new System.Drawing.Rectangle(System.Drawing.Point.Empty, bitmap.Size);
                    var bitLock = bitmap.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
                    var bitmapData = ImageData.FromPointer(bitLock.Scan0, ImagePixelFormat.Bgr24, bitmap.Size);

                    for (int i = 0; i < 60; i++)
                        file.Video.AddFrame(bitmapData);
                    bitmap.UnlockBits(bitLock);
                }

                var mp3files = Directory.GetFiles(dirInput, "*.mp3");
                foreach (var inputFile in mp3files)
                {
                    Console.WriteLine(inputFile);
                    var binInputFile = File.ReadAllBytes(inputFile);
                    var memInput = new MemoryStream(binInputFile);

                    foreach (float[][] frameAudio in GetFrames(memInput))
                    {
                        file.Audio.AddFrame(frameAudio); // encode the frame
                    }
                }
                //Console.WriteLine(file.Audio.CurrentDuration);
                Console.WriteLine(file.Video.CurrentDuration);
                Console.WriteLine(file.Video.Configuration);
            }
        }
        catch (Exception e)
        {
            Vocab.LogError("FFMediaToolkitTest", e.StackTrace + " " + e.Message);
            Console.WriteLine(e.StackTrace + " " + e.Message);
        }

        Console.WriteLine();
        Console.WriteLine("Done");
        Console.ReadLine();
    }

    public static List<float[][]> GetFrames(MemoryStream mp3stream)
    {
        List<float[][]> output = new List<float[][]>();

        int frameCount = 0;

        NAudio.Wave.StreamMediaFoundationReader smfReader = new StreamMediaFoundationReader(mp3stream);
        Console.WriteLine(smfReader.WaveFormat);
        Console.WriteLine(smfReader.WaveFormat.AverageBytesPerSecond); // 48000
        Console.WriteLine(smfReader.WaveFormat.BitsPerSample); // 16
        Console.WriteLine(smfReader.WaveFormat.Channels); // 1
        Console.WriteLine(smfReader.WaveFormat.SampleRate); // 24000

        Console.WriteLine("PCM bytes: " + smfReader.Length);
        Console.WriteLine("Total Time: " + smfReader.TotalTime);

        int samplesPerFrame = smfReader.WaveFormat.SampleRate / frameRate;
        int bytesPerFrame = samplesPerFrame * smfReader.WaveFormat.BitsPerSample / 8;
        byte[] byteBuffer = new byte[bytesPerFrame];

        while (smfReader.Read(byteBuffer, 0, bytesPerFrame) != 0)
        {
            float[][] buffer = Convert16BitToFloat(byteBuffer);
            output.Add(buffer);
            frameCount++;
        }
        return output;
    }

    public static float[][] Convert16BitToFloat(byte[] input)
    {
        // Only works with single channel data
        int inputSamples = input.Length / 2;
        float[][] output = new float[1][];
        output[0] = new float[inputSamples];
        int outputIndex = 0;
        for (int n = 0; n < inputSamples; n++)
        {
            short sample = BitConverter.ToInt16(input, n * 2);
            output[0][outputIndex++] = sample / 32768f;
        }
        return output;
    }
}





`


I've tried multiple codecs with various settings. I couldn't get any of the codecs to accept an mp4 output file extension. FFMpeg will run, but errors out with mpg as the output file.
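As a side note, which codecs a given container accepts can be checked from the command line; a quick sketch with the ffmpeg CLI (assuming it is on the PATH):

```shell
# List the default and supported codecs for the mp4 muxer
ffmpeg -hide_banner -h muxer=mp4

# The same for the MPEG-PS muxer behind the .mpg extension
ffmpeg -hide_banner -h muxer=mpeg
```

Comparing the two listings may explain why a codec/extension combination that works for .mpg is rejected for .mp4.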


-
How to blur side of horizontal video with ffmpeg
4 January 2023, by Getro
I would like to find a way to blur the top and the bottom of a horizontal video with the same video.


At the moment I have the code to do the opposite :


ffmpeg -i test.mp4 -lavfi "[0:v]scale=1920*2:1080*2,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[0:v]scale=-1:1080[ov];[bg][ov]overlay=(W-w)/2:(H-h)/2,crop=w=1920:h=1080" outpt.mp4



So I have this: [screenshot 1]
And I would like to have this: [screenshot 2]


-
How to reduce bitrate without changing video quality in FFMPEG
13 July 2016, by Muthu GM
I'm using the FFMPEG C library, with a modified muxing.c example, to encode video. Video quality degrades frame by frame when I control the bitrate (like 1080x720 at a bitrate of 680k). But when I use the FFMPEG command line tool to encode the same images at the same 680k bitrate, the image quality does not change.
What is the reason the same images and bitrate produce degrading quality when I encode with the C API, while quality does not change with the command line tool?
I use:
Command line args:
- ffmpeg -framerate 5 -i image%d.jpg -c:v libx264 -b:v 64k -pix_fmt yuv420p out.mp4
Muxing.c (modified) codec settings:
- fps = 5;
- CODEC_ID = H264 (libx264);
- Pixel_fmt = yuv420;
- Image decoder = MJPEG;
- bitrate = 64000;
The video sizes are the same, but in muxing.c the quality degrades frame by frame, while at the same bitrate the command line video quality is perfect. Please advise how I can reduce the bitrate without losing quality using the FFMPEG C API.
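Quality differences at the same nominal bitrate often come down to rate-control settings the CLI applies by default (encoder preset, GOP size, rate-control buffers) that a stripped-down muxing.c does not set. A hedged sketch making those knobs explicit on the command line, so the same values can be mirrored in the C API:

```shell
# Make the rate-control parameters explicit; in the C API the counterparts
# live on AVCodecContext (bit_rate, rc_max_rate, rc_buffer_size, gop_size)
# plus the libx264 "preset" private option set via av_opt_set.
ffmpeg -framerate 5 -i image%d.jpg -c:v libx264 -preset medium \
  -b:v 64k -maxrate 64k -bufsize 128k -g 10 -pix_fmt yuv420p out.mp4
```

The specific maxrate/bufsize/GOP values here are illustrative assumptions, not values taken from the question.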