
Media (91)
-
MediaSPIP Simple: future default graphic theme?
26 September 2013, by
Updated: October 2013
Language: French
Type: Video
-
with chosen
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
without chosen
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
chosen config
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
SPIP - plugins - embed code - Example
2 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
Other articles (63)
-
User profiles
12 April 2011, by
Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...) -
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, go to the "Administrer" section of the site.
From there, the navigation menu gives access to a "Gestion des langues" area where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language. In that case it becomes greyed out in the configuration and (...) -
A selection of projects using MediaSPIP
29 April 2011, by
The examples listed below are representative of specific uses of MediaSPIP in certain projects.
Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
Ferme MediaSPIP @ Infini
The association Infini develops activities around public reception, internet access points, training, and innovative projects in the field of information and communication technologies, as well as website hosting. In this respect it plays a unique role (...)
On other sites (5846)
-
FFMPEG or FFPLAY, catch FFT signal in real time as floats
25 April 2021, by NVRM
Looking to extract an FFT snapshot of waveform data in real time with ffplay, with a view to creating animations.

This is exactly what I am looking to catch, but this demo uses JavaScript in a browser. (Source: own post)

// Start playback of the <audio> element below.
const audio = document.getElementById('music');
audio.load();
audio.play();

// Route the element through an AnalyserNode so frequency data can be
// read while the sound keeps playing.
const ctx = new AudioContext();
const audioSrc = ctx.createMediaElementSource(audio);
const analyser = ctx.createAnalyser();

audioSrc.connect(analyser);
analyser.connect(ctx.destination);

analyser.fftSize = 256;
const bufferLength = analyser.frequencyBinCount; // fftSize / 2 = 128 bins
const frequencyData = new Uint8Array(bufferLength);

// Poll a frequency-domain snapshot once per second.
setInterval(() => {
  analyser.getByteFrequencyData(frequencyData);
  console.log(frequencyData);
}, 1000);

<audio src="http://strm112.1.fm/reggae_mobile_mp3" crossorigin="use-credentials" controls></audio>

I tried many variations on the method posted at https://trac.ffmpeg.org/wiki/Waveform.

The problem is that the output format is PCM (Pulse Code Modulation), and it is not produced in real time.


More generally, is there a simple way to retrieve this data while the sound is playing?


ffplay -fft file.mp3 > fft.json
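
One workable approach (a sketch, not an existing ffplay flag): have ffmpeg decode the file to raw 32-bit float PCM on stdout and compute the FFT in the reading process. Below is a minimal C version using a naive DFT; it assumes a POSIX popen(), and the file name, sample rate and window size are placeholders.

/* Read raw float PCM from ffmpeg's stdout and print per-window
 * DFT magnitudes while the file is being decoded. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 256  /* analysis window, like analyser.fftSize above */

int main(void)
{
    /* Decode to mono 44.1 kHz float samples on stdout. */
    FILE *pcm = popen("ffmpeg -v quiet -i file.mp3 -ac 1 -ar 44100 -f f32le -", "r");
    if (!pcm)
        return 1;

    float window[N];
    while (fread(window, sizeof(float), N, pcm) == N) {
        /* Naive O(N^2) DFT; fine at N = 256, swap in FFTW for speed. */
        for (int k = 0; k < N / 2; k++) {
            double re = 0.0, im = 0.0;
            for (int n = 0; n < N; n++) {
                double a = 2.0 * M_PI * k * n / N;
                re += window[n] * cos(a);
                im -= window[n] * sin(a);
            }
            printf("%.3f ", sqrt(re * re + im * im));
        }
        putchar('\n');
    }
    pclose(pcm);
    return 0;
}

Note this decodes faster than real time; to stay in sync with audible playback you would pace the reads at sampleRate / N windows per second, or feed the same PCM to the sound device.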

Using C, same idea: Apply FFT on pcm data and convert to a spectrogram


FFMPEG waveform filter documentation


-
real time video streaming in C#
16 June 2016, by Nuwan
I'm developing an application for real-time streaming, which has two parts: I use a capture card to capture a live source that must be streamed in real time, and I also need to stream a local video file. To stream the local video file in real time I use Emgu CV to capture the video frames as bitmaps. I create a bitmap list and save the captured bitmaps to it from one thread, while also displaying those frames in a picture box. The bitmap list stores 1 second of video: if the frame rate is 30, it holds 30 video frames. Once the list is full I start another thread to encode that 1 second chunk of video. For encoding I use an ffmpeg wrapper called NReco: I write the video frames to ffmpeg, start ffmpeg to encode, and after stopping that task I get the encoded data as a byte array, which I send over the LAN using UDP.
This works, but the streaming is not smooth. When I receive the stream in VLC player there is a delay of some milliseconds between packets, and I also noticed frames being lost.
private Capture _capture = null;
Image<Bgr, Byte> frame;

// Capture frames and store them in a list; once one second's worth has
// accumulated, hand the batch to the encoder thread.
private void ProcessFrame(object sender, EventArgs arg)
{
    frame = _capture.QueryFrame();
    frameBmp = frame.ToBitmap();
    twoSecondVideoBitmapFramesForEncode.Add(frameBmp);

    if (twoSecondVideoBitmapFramesForEncode.Count == (int)FrameRate)
    {
        isInitiate = false;
        thread = new Thread(new ThreadStart(encodeTwoSecondVideo));
        thread.IsBackground = true;
        thread.Start();
    }
}
public void encodeTwoSecondVideo()
{
    // Copy the batch so the capture thread can keep filling the list.
    List<Bitmap> copyOfTwoSecondVideo = twoSecondVideoBitmapFramesForEncode.ToList();
    twoSecondVideoBitmapFramesForEncode.Clear();

    int g = (int)FrameRate * 2; // GOP size: one keyframe every 2 seconds
    string outPutFrameSize = frameWidth.ToString() + "x" + frameHeight.ToString();
    ms = new MemoryStream();

    // Create the video encoding task and set the main encode parameters.
    // Note: -qp 0, -crf 25 and -b:v 10000k are competing libx264 rate
    // controls (qp overrides crf, which overrides bitrate); only one
    // of them should be kept.
    ffMpegTask = ffmpegConverter.ConvertLiveMedia(
        Format.raw_video,
        ms,
        Format.h264,
        new ConvertSettings()
        {
            // raw BGR24 bitmaps at the capture size and rate
            CustomInputArgs = " -pix_fmt bgr24 -video_size " + frameWidth + "x" + frameHeight + " -framerate " + FrameRate + " ",
            CustomOutputArgs = " -threads 7 -preset ultrafast -profile:v baseline -level 3.0 -tune zerolatency -qp 0 -pix_fmt yuv420p -g " + g + " -keyint_min " + g + " -flags -global_header -sc_threshold 40 -qscale:v 1 -crf 25 -b:v 10000k -bufsize 20000k -s " + outPutFrameSize + " -r " + FrameRate + " -pass 1 -coder 1 -movflags frag_keyframe -movflags +faststart -c:a libfdk_aac -b:a 128k "
        });
    ffMpegTask.Start();

    // Write the batched frames to ffmpeg, pacing them at the frame rate.
    foreach (var item in copyOfTwoSecondVideo)
    {
        Thread.Sleep((int)(1000.5 / FrameRate));
        BitmapData bd = item.LockBits(
            new Rectangle(0, 0, item.Width, item.Height),
            ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
        byte[] buf = new byte[bd.Stride * item.Height];
        Marshal.Copy(bd.Scan0, buf, 0, buf.Length);
        ffMpegTask.Write(buf, 0, buf.Length);
        item.UnlockBits(bd);
    }
}
This is the process I use to achieve live streaming, but the stream is not smooth. I tried using a queue instead of a list to reduce the latency of filling the list, because I think the latency happens like this: the encoding thread encodes and sends the 2-second chunk very quickly, but when it finishes, the next bitmap list is not yet completely full, so the encoding thread stops until the next chunk of video is ready. If anyone can help me figure this out I would be very grateful, and if the way I'm doing it is wrong, please correct me.
Thank you!
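
The stall described above is typical of restarting the encoder for every chunk: each ConvertLiveMedia task pays process start-up and codec initialisation costs, and the pipeline runs dry in between. A common fix is one long-lived encoder process fed continuously from the capture callback. A minimal sketch of that shape in C (POSIX popen() assumed; the resolution, frame rate, UDP address and the capture stub are placeholder assumptions):

/* Keep one ffmpeg process alive for the whole session and stream raw
 * BGR24 frames into its stdin; ffmpeg encodes and sends over UDP. */
#include <stdio.h>
#include <string.h>

#define W 640
#define H 480

static unsigned char frame[W * H * 3];

int main(void)
{
    FILE *enc = popen(
        "ffmpeg -v quiet -f rawvideo -pix_fmt bgr24 -video_size 640x480"
        " -framerate 30 -i - -c:v libx264 -preset ultrafast"
        " -tune zerolatency -g 60 -f mpegts udp://192.168.1.10:1234", "w");
    if (!enc)
        return 1;

    for (;;) {
        /* A real program would copy the captured bitmap here. */
        memset(frame, 0, sizeof frame);
        if (fwrite(frame, 1, sizeof frame, enc) != sizeof frame)
            break;  /* encoder exited */
    }
    pclose(enc);
    return 0;
}

The same shape should be possible with NReco by creating a single ConvertLiveMedia task for the whole session and calling Write from the capture callback instead of one task per 1-second batch; the Thread.Sleep pacing then becomes unnecessary because the capture event itself arrives at the frame rate.

-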
libavcodec initialization to achieve real time playback with frame dropping when necessary
20 October 2019, by Blake Senftner
I have a C++ computer vision application linking with the ffmpeg libraries that provides frames from video streams to analysis routines. The idea is that one can provide a moderately generic video stream identifier, and that video source will be decompressed and passed frame after frame to an analysis routine (which runs the user's analysis functions). The "moderately generic video identifier" covers 3 generic video stream types: paths to video files on disk, IP video streams (cameras or video streaming services), and USB webcam pins with desired format & rate.
My current video player is as generic as possible: video only, ignoring audio and other streams. It has a switch case for retrieving a stream's frame rate based upon the stream's source and codec, which is used to estimate the delay between decompressing frames. I've had many issues trying to get reliable timestamps from the streams, so I am currently ignoring pts and dts. I know ignoring pts/dts is bad for variable frame rate streams; I plan to special-case them later. The player currently checks whether the last decompressed frame is more than 2 frames late (assuming a constant frame rate), and if so "drops the frame" - does not pass it to the user's analysis routine.
Essentially, the video player's logic determines when to skip frames (not pass them to the time-consuming analysis routine) so the analysis is fed video frames as close to real time as possible; a sketch of that lateness test follows below.
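
A minimal sketch of that player-level test, under the same constant-frame-rate assumption (the names are illustrative, not from the application):

/* A frame is "late" when the wall clock has advanced further than the
 * decoded-frame count implies at the nominal rate. */
#include <time.h>

double frames_late(long frames_decoded, double fps, struct timespec start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    double elapsed = (now.tv_sec - start.tv_sec)
                   + (now.tv_nsec - start.tv_nsec) / 1e9;
    return elapsed * fps - frames_decoded;  /* > 2.0 => drop this frame */
}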
I am looking for examples or discussions of how one can initialize and/or maintain the AVFormatContext, AVStream, and AVCodecContext using (presumably, but not limited to) AVDictionary options such that the frame dropping necessary to maintain real time is performed at the libav library level, and not at my video player level. If achieving this requires separate AVDictionaries (or more) for each stream type and codec, then so be it. I am interested in understanding the pros and cons of both approaches: dropping frames at the player level or at the libav level.
(When some analysis requires every frame, the existing player implementation with frame dropping disabled is fine. I suspect that if I can get frame dropping to occur at the libav level, I'll save the packet-to-frame decompression time as well, reducing processing more than my current version does.)
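
For what it's worth, libavcodec does expose decoder-level skipping through the skip_frame and skip_loop_filter fields of AVCodecContext (the same knobs behind ffmpeg's -skip_frame option). A sketch of a lateness-driven policy; the 2-frame threshold mirrors the player logic above and is an assumption, not a libav default:

/* Ask the decoder itself to skip work when playback falls behind,
 * instead of decoding every frame and discarding it afterwards. */
#include <libavcodec/avcodec.h>

void update_drop_policy(AVCodecContext *dec, double frames_late)
{
    if (frames_late > 2.0) {
        /* Non-reference frames are never decoded at all. */
        dec->skip_frame = AVDISCARD_NONREF;
        dec->skip_loop_filter = AVDISCARD_NONREF;
    } else {
        dec->skip_frame = AVDISCARD_DEFAULT;
        dec->skip_loop_filter = AVDISCARD_DEFAULT;
    }
}

This saves the packet-to-frame decompression time for the skipped pictures, which is the saving anticipated above, but the decoder can only refuse frames nothing else depends on (non-reference frames, or everything but keyframes with AVDISCARD_NONKEY), so player-level dropping is still needed as a fallback when every frame is a reference frame.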