
Media (1)
-
The conservation of net art in the museum. The strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (111)
-
HTML5 audio and video support
10 April 2011 — MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used as a fallback.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
From upload to the final video [standalone version]
31 January 2010 — An audio or video document's path through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First of all, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
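As a rough sketch of those two extra actions (hypothetical file names, not SPIPMotion's actual code), both can be done with the ffprobe and ffmpeg command-line tools:

ffprobe -show_streams source.mp4
ffmpeg -i source.mp4 -frames:v 1 thumbnail.png

The first prints the technical parameters of every audio and video stream in the file; the second decodes a single frame and writes it out as a thumbnail image.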
On other sites (8967)
-
Convert encoded audio file to text with signal values
15 August 2017, by Niccolò Cirone — I've been programming in C for the first time, working with audio files. I found this code, which supposedly reads an audio file and then writes a CSV file containing information for analysing the audio waves, which in this case will be a simple voice: I'm interested in the amplitude of the waves, and in the timbre, pitch, and range of the voice.
#include <stdio.h>
#include <stdint.h>

#define N 882 // 20 ms at Fs = 44.1 kHz (the define is cut off in the snippet; this value is assumed)

int main()
{
    // Create a 20 ms audio buffer (assuming Fs = 44.1 kHz)
    int16_t buf[N] = {0}; // buffer
    int n;                // buffer index

    // Open WAV file with FFmpeg and read raw samples via the pipe.
    FILE *pipein;
    pipein = popen("ffmpeg -i whistle.wav -f s16le -ac 1 -", "r");
    fread(buf, 2, N, pipein);
    pclose(pipein);

    // Print the sample values in the buffer to a CSV file
    FILE *csvfile;
    csvfile = fopen("samples.csv", "w");
    for (n = 0; n < N; n++)
        fprintf(csvfile, "%d\n", buf[n]);
    fclose(csvfile);

    return 0;
}
Could someone explain to me in detail how I can read an audio file so that I can extract the information I need? Referring to this code, could someone explain the meaning of the pipe opened here:
pipein = popen("ffmpeg -i whistle.wav -f s16le -ac 1 -", "r");
p.s. I already know how to read the header of the audio file, which contains a lot of useful info, but I also want to analyse the entire audio file, sample by sample.
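For illustration, a minimal sketch (not part of the original question) of the kind of sample-by-sample analysis described above, computing the peak and RMS amplitude of the buffer the code fills:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Peak and RMS amplitude of n_samples signed 16-bit samples;
 * e.g. call amplitude_stats(buf, N) right after the fread above. */
static void amplitude_stats(const int16_t *buf, int n_samples)
{
    double sum_sq = 0.0;
    int peak = 0;
    for (int n = 0; n < n_samples; n++) {
        int v = buf[n] < 0 ? -buf[n] : buf[n]; /* absolute sample value */
        if (v > peak)
            peak = v;                          /* largest excursion so far */
        sum_sq += (double)buf[n] * buf[n];
    }
    printf("peak = %d, rms = %.1f (full scale = 32767)\n",
           peak, sqrt(sum_sq / n_samples));
}

As for the pipe itself: popen() starts ffmpeg as a child process, the options "-f s16le -ac 1 -" ask it to decode whistle.wav into raw signed 16-bit little-endian mono samples on standard output, and the returned FILE* lets the program read those bytes as if from an ordinary file.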
-
avcodec/v4l2_m2m_dec: remove redundant packet and fix double free
16 July 2020, by Andriy Gelman
avcodec/v4l2_m2m_dec: remove redundant packet and fix double free

v4l2_receive_frame() uses two packets s->buf_pkt and avpkt. If avpkt cannot be enqueued, the packet is buffered in s->buf_pkt and enqueued in the next call. Currently the ownership transfer between the two packets is not properly handled. A double free occurs if ff_v4l2_context_enqueue_packet() returns EAGAIN and v4l2_try_start returns EINVAL.

In fact, having two AVPackets is not needed and everything can be handled by s->buf_pkt.

This commit removes the local avpkt from v4l2_receive_frame(), meaning that the ownership transfer doesn't need to be handled and the double free is fixed.

Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
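As a loose sketch of the flow the commit describes (paraphrased; the wiring around the named functions is assumed, and this is not the actual FFmpeg source):

/* With only s->buf_pkt in play, a packet that cannot be enqueued simply
 * stays buffered for the next call, and it is unreferenced in exactly
 * one place, so no double free can occur. */
static int receive_frame_sketch(AVCodecContext *avctx, V4L2m2mContext *s)
{
    int ret;

    if (!s->buf_pkt.size) {
        /* No leftover packet from a previous call: fetch a new one. */
        ret = ff_decode_get_packet(avctx, &s->buf_pkt);
        if (ret < 0)
            return ret;
    }

    ret = ff_v4l2_context_enqueue_packet(&s->output, &s->buf_pkt);
    if (ret == AVERROR(EAGAIN))
        return ret;               /* keep s->buf_pkt; retry on the next call */

    av_packet_unref(&s->buf_pkt); /* single owner, single unref */
    /* ... dequeue and return a decoded frame ... */
    return ret;
}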
-
JavaCV FFmpegFrameRecorder properties explanation needed
29 December 2014, by Leron — I'm using FFmpegFrameRecorder to get the video input from my webcam and record it into a video file. The problem is that I'm building my application from a few different demo source codes I found, and I use properties some of which are not completely clear to me. First, here is my code snippet:
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(FILENAME, grabber.getImageWidth(), grabber.getImageHeight());
recorder.setVideoCodec(13);                      // video codec id
recorder.setFormat("mp4");                       // container format
recorder.setPixelFormat(avutil.PIX_FMT_YUV420P); // pixel format of the encoded frames
recorder.setFrameRate(30);                       // frames per second
recorder.setVideoBitrate(10 * 1024 * 1024);      // bits per second
recorder.start();

- setVideoCodec(13) - What is the meaning of this (13)? How can I understand what actual codec stands behind any given number?
- setPixelFormat - I just don't get this one; I don't know what it does in general.
- setFrameRate(30) - I think this should be pretty clear, but still, what is the logic behind the frame rate we choose (isn't higher better)?
- setVideoBitrate(10 * 1024 * 1024) - Again, I have almost no idea what this does or what the logic behind the numbers is.
Finally, I just want to mention one last problem I get when recording video like this. If the actual length of the recording is, let's say, 20 seconds, the video file created by the program plays back significantly faster. I can't tell whether it's exactly 2 times faster than it should be, but in general a 20-second recording plays for about 10 seconds. What may cause this, and how can I fix it?
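(For scale, one plausible reading of those numbers, assumed here rather than confirmed: if the webcam actually delivers about 15 fps while the recorder stamps frames at the configured 30 fps, a 20 s capture contains 20 × 15 = 300 frames, which play back in 300 / 30 = 10 s, i.e. exactly twice as fast.)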