
Media (1)
-
SPIP - plugins - embed code - Example
2 September 2013
Updated: September 2013
Language: French
Type: Image
Other articles (60)
-
Participate in its translation
10 April 2011. You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
At the moment MediaSPIP is only available in French and (...) -
Customising categories
21 June 2013. Category creation form
For those who know SPIP well, a category can be thought of as a section (rubrique).
For a document of type "category", the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type "media", the fields not displayed by default are: Descriptif rapide (short description)
It is also in this configuration section that you can indicate the (...) -
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable MediaSPIP release.
Its official release date is June 21, 2013, and it is announced here.
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
On other sites (11012)
-
ffmpeg : Unrecognized option 'alpha_quality'
16 April 2024, by László Monda. I want to make transparent videos work in Safari, which doesn't support WebM for this purpose, only H.265 with alpha transparency.


According to this post, I used Shutter Encoder, but only some of its versions work for this purpose on Mac.


Instead of using Shutter Encoder on Mac, I want to use ffmpeg on my Linux PC. Shutter Encoder uses the following command in the background:


ffmpeg -threads 0 -hwaccel none -i input.mov -c:v hevc_videotoolbox -alpha_quality 1 -b:v 1000k -profile:v main -level 5.2 -map v:0 -an -pix_fmt yuva420p -sws_flags bicubic -tag:v hvc1 -metadata creation_time=2024-04-14T14:53:08.734684Z -y output.mp4


which yields the following output on my PC:


ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
 built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)
 configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
 WARNING: library configuration mismatch
 avcodec configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared --enable-version3 --disable-doc --disable-programs --enable-libaribb24 --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libtesseract --enable-libvo_amrwbenc --enable-libsmbclient
 libavutil 56. 70.100 / 56. 70.100
 libavcodec 58.134.100 / 58.134.100
 libavformat 58. 76.100 / 58. 76.100
 libavdevice 58. 13.100 / 58. 13.100
 libavfilter 7.110.100 / 7.110.100
 libswscale 5. 9.100 / 5. 9.100
 libswresample 3. 9.100 / 3. 9.100
 libpostproc 55. 9.100 / 55. 9.100
Unrecognized option 'alpha_quality'.
Error splitting the argument list: Option not found



When googling for "Unrecognized option 'alpha_quality'.", there are no results, which I find very odd.


What's going on, and how can I make ffmpeg work for this purpose without Shutter Encoder?
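

A hedged note rather than a definitive answer: -alpha_quality is a private option of the hevc_videotoolbox encoder, which wraps Apple's VideoToolbox framework and therefore only exists in macOS builds of ffmpeg. A stock Ubuntu build has no component that defines that option, so argument parsing stops with "Option not found". You can check what your own build exposes with:


ffmpeg -hide_banner -encoders | grep hevc
ffmpeg -hide_banner -h encoder=hevc_videotoolbox


On a VideoToolbox-enabled build the second command should list the encoder's private options (alpha_quality among them); on the Linux build above it will report the encoder as unknown, which matches the error. The usual Linux software encoder, libx265, does not appear to offer an equivalent alpha-channel path, so reproducing Shutter Encoder's behaviour may require running ffmpeg on macOS.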


-
Reading an mp3 file using ffmpeg causes memory leaks, even after freeing it in main
12 August 2020, by leonardltk1. I am continuously reading mp3 files and processing them, but memory keeps building up even though I freed it.


At the bottom of
read_audio_mp3()
, some variables are already being freed.
Why do I still face a memory build-up, and how do I deal with it?

Following this code: https://rodic.fr/blog/libavcodec-tutorial-decode-audio-file/, I read the mp3 using this function:


int read_audio_mp3(string filePath_str, const int sample_rate, 
 double** output_buffer, int &AUDIO_DURATION){
 const char* path = filePath_str.c_str();

 /* Reads the file header and stores information about the file format. */
 AVFormatContext* format = avformat_alloc_context();
 if (avformat_open_input(&format, path, NULL, NULL) != 0) {
 fprintf(stderr, "Could not open file '%s'\n", path);
 return -1;
 }

 /* Check out the stream information in the file. */
 if (avformat_find_stream_info(format, NULL) < 0) {
 fprintf(stderr, "Could not retrieve stream info from file '%s'\n", path);
 return -1;
 }

 /* find an audio stream. */
 int stream_index = -1;
 for (unsigned i = 0; i < format->nb_streams; i++) {
 if (format->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
 stream_index = i;
 break;
 }
 }
 if (stream_index == -1) {
 fprintf(stderr, "Could not retrieve audio stream from file '%s'\n", path);
 return -1;
 }
 AVStream* stream = format->streams[stream_index];

 // find & open codec
 AVCodecContext* codec = stream->codec;
 if (avcodec_open2(codec, avcodec_find_decoder(codec->codec_id), NULL) < 0) {
 fprintf(stderr, "Failed to open decoder for stream #%u in file '%s'\n", stream_index, path);
 return -1;
 }

 // prepare resampler
 struct SwrContext* swr = swr_alloc();
 av_opt_set_int(swr, "in_channel_count", codec->channels, 0);
 av_opt_set_int(swr, "out_channel_count", 1, 0);
 av_opt_set_int(swr, "in_channel_layout", codec->channel_layout, 0);
 av_opt_set_int(swr, "out_channel_layout", AV_CH_LAYOUT_MONO, 0);
 av_opt_set_int(swr, "in_sample_rate", codec->sample_rate, 0);
 av_opt_set_int(swr, "out_sample_rate", sample_rate, 0);
 av_opt_set_sample_fmt(swr, "in_sample_fmt", codec->sample_fmt, 0);
 av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_DBL, 0);
 swr_init(swr);
 if (!swr_is_initialized(swr)) {
 fprintf(stderr, "Resampler has not been properly initialized\n");
 return -1;
 }

 /* Allocate an audio frame. */
 AVPacket packet;
 av_init_packet(&packet);
 AVFrame* frame = av_frame_alloc();
 if (!frame) {
 fprintf(stderr, "Error allocating the frame\n");
 return -1;
 }

 // iterate through frames
 *output_buffer = NULL;
 AUDIO_DURATION = 0;
 while (av_read_frame(format, &packet) >= 0) {
 // decode one frame
 int gotFrame;
 if (avcodec_decode_audio4(codec, frame, &gotFrame, &packet) < 0) {
 // free packet
 av_free_packet(&packet);
 break;
 }
 if (!gotFrame) {
 // free packet
 av_free_packet(&packet);
 continue;
 }
 // resample frames
 double* buffer;
 av_samples_alloc((uint8_t**) &buffer, NULL, 1, frame->nb_samples, AV_SAMPLE_FMT_DBL, 0);
 int frame_count = swr_convert(swr, (uint8_t**) &buffer, frame->nb_samples, (const uint8_t**) frame->data, frame->nb_samples);
 // append resampled frames to output_buffer
 *output_buffer = (double*) realloc(*output_buffer,
 (AUDIO_DURATION + frame->nb_samples) * sizeof(double));
 memcpy(*output_buffer + AUDIO_DURATION, buffer, frame_count * sizeof(double));
 AUDIO_DURATION += frame_count;
 // free buffer & packet
 av_free_packet(&packet);
 av_free( buffer );
 }

 // clean up
 av_frame_free(&frame);
 swr_free(&swr);
 avcodec_close(codec);
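 // note (worth verifying): a context opened with avformat_open_input() is documented
 // to be torn down with avformat_close_input(&format); avformat_free_context() alone
 // does not release the demuxer/IO state opened above, which would accumulate each
 // time this function is called.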
 avformat_free_context(format);

 return 0;
 }



Main Script :
MemoryLeak.cpp


// imports
 #include <fstream>
 #include <cstdio>
 #include <cstdlib>
 #include <cstring>
 #include <string>
 #include <iostream>
 #include <sstream>
 #include <vector>
 #include <sys/time.h>
 extern "C"
 {
 #include <libavutil/opt.h>
 #include <libavcodec/avcodec.h>
 #include <libavformat/avformat.h>
 #include <libswresample/swresample.h>
 }
 using namespace std;

 int main (int argc, char ** argv) {
 string wavpath = argv[1];
 printf("wavpath=%s\n", wavpath.c_str());

 printf("\n==== Params =====\n");
 // Init
 int AUDIO_DURATION;
 int sample_rate = 8000;
 av_register_all();

 printf("\n==== Reading MP3 =====\n");
 while (true) {
 // Read mp3
 double* buffer;
 if (read_audio_mp3(wavpath, sample_rate, &buffer, AUDIO_DURATION) != 0) {
 printf("Cannot read %s\n", wavpath.c_str());
 continue;
 }

 /* 
 Process the buffer for down stream tasks.
 */

 // Freeing the buffer
 free(buffer);
 }

 return 0 ;
 }


Compiling


g++ -o ./MemoryLeak.out -Ofast -Wall -Wextra \
 -std=c++11 "./MemoryLeak.cpp" \
 -lavformat -lavcodec -lavutil -lswresample



Running: normally my input argument is a
wav.scp
file that lists all the mp3s in a text file.
But to make this easy to replicate, I only read one file,
song.mp3
, and keep re-reading it:

./MemoryLeak.out song.mp3



How do I know I have memory leaks?


- I was running 32 jobs in parallel over 14 million files, and when I woke up in the morning they had been abruptly killed.
- I ran
htop
and monitored the progress when I re-ran it, and I saw that the VIRT, RES and Mem values keep increasing (a more direct check is sketched below).
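
A more direct check than htop (a sketch, assuming the same sources and link flags as above) is to run a bounded number of iterations and let the process exit under valgrind, or to rebuild with AddressSanitizer so its leak checker runs at exit:


valgrind --leak-check=full ./MemoryLeak.out song.mp3

g++ -o ./MemoryLeak.out -g -fsanitize=address -std=c++11 "./MemoryLeak.cpp" \
 -lavformat -lavcodec -lavutil -lswresample


Allocations reported as "definitely lost" whose totals grow with the number of iterations point at the exact libav call that is leaking; one-off "still reachable" blocks from ffmpeg's global state can be ignored.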








Edit 1:
My setup:




ffmpeg version 2.8.15-0ubuntu0.16.04.1
built with gcc 5.4.0



-
JSmpeg is not playing audio from websocket stream
5 June 2023, by Nik. I am trying to stream RTSP to a web browser using ffmpeg through a WebSocket relay written in Node.js, taken from https://github.com/phoboslab/jsmpeg. In the browser I am using JSMpeg to display the RTSP stream. The video plays fine, but the audio does not.


The ffmpeg command:


ffmpeg -rtsp_transport tcp -i rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4 
 -f mpegts -c:v mpeg1video -c:a mp2 http://127.0.0.1:8081/stream_from_ffmpeg/
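

One thing worth ruling out first (a quick check, not a definitive diagnosis): make sure the RTSP source actually exposes an audio track, since JSMpeg can only play MP2 audio that is really present in the MPEG-TS. For example:


ffprobe -rtsp_transport tcp -i rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4


If the stream listing shows no audio stream, there is nothing for -c:a mp2 to encode and the browser side will stay silent regardless of the player setup.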



The Node.js WebSocket relay:


// Use the websocket-relay to serve a raw MPEG-TS over WebSockets. You can use
// ffmpeg to feed the relay. ffmpeg -> websocket-relay -> browser
// Example:
// node websocket-relay yoursecret 8081 8082
// ffmpeg -i <some input> -f mpegts http://localhost:8081/yoursecret

var fs = require('fs'),
 http = require('http'),
 WebSocket = require('ws');

if (process.argv.length < 3) {
 console.log(
 'Usage: \n' +
 'node websocket-relay.js <secret> [<stream-port> <websocket-port>]'
 );
 process.exit();
}

var STREAM_SECRET = process.argv[2],
 STREAM_PORT = process.argv[3] || 8081,
 WEBSOCKET_PORT = process.argv[4] || 8082,
 RECORD_STREAM = false;

// Websocket Server
var socketServer = new WebSocket.Server({port: WEBSOCKET_PORT, perMessageDeflate: false});
socketServer.connectionCount = 0;
socketServer.on('connection', function(socket, upgradeReq) {
 socketServer.connectionCount++;
 console.log(
 'New WebSocket Connection: ',
 (upgradeReq || socket.upgradeReq).socket.remoteAddress,
 (upgradeReq || socket.upgradeReq).headers['user-agent'],
 '('+socketServer.connectionCount+' total)'
 );
 socket.on('close', function(code, message){
 socketServer.connectionCount--;
 console.log(
 'Disconnected WebSocket ('+socketServer.connectionCount+' total)'
 );
 });
});
socketServer.broadcast = function(data) {
 socketServer.clients.forEach(function each(client) {
 if (client.readyState === WebSocket.OPEN) {
 client.send(data);
 }
 });
};

// HTTP Server to accept incoming MPEG-TS Stream from ffmpeg
var streamServer = http.createServer( function(request, response) {
 var params = request.url.substr(1).split('/');

 if (params[0] !== STREAM_SECRET) {
 console.log(
 'Failed Stream Connection: '+ request.socket.remoteAddress + ':' +
 request.socket.remotePort + ' - wrong secret.'
 );
 response.end();
 }

 response.connection.setTimeout(0);
 console.log(
 'Stream Connected: ' +
 request.socket.remoteAddress + ':' +
 request.socket.remotePort
 );
 request.on('data', function(data){
 socketServer.broadcast(data);
 if (request.socket.recording) {
 request.socket.recording.write(data);
 }
 });
 request.on('end',function(){
 console.log('close');
 if (request.socket.recording) {
 request.socket.recording.close();
 }
 });

 // Record the stream to a local file?
 if (RECORD_STREAM) {
 var path = 'recordings/' + Date.now() + '.ts';
 request.socket.recording = fs.createWriteStream(path);
 }
})
// Keep the socket open for streaming
streamServer.headersTimeout = 0;
streamServer.listen(STREAM_PORT);

console.log('Listening for incoming MPEG-TS Stream on http://127.0.0.1:'+STREAM_PORT+'/<secret>');
console.log('Awaiting WebSocket connections on ws://127.0.0.1:'+WEBSOCKET_PORT+'/');


The front end code




 
 
 
 
<script src='http://stackoverflow.com/feeds/tag/jsmpeg.min.js'></script>

<script>
 let url;
 let player;
 let canvas = document.getElementById("video-canvas");
 let ipAddr = "127.0.0.1:8082";
 window.onload = async () => {
 url = `ws://${ipAddr}`;
 player = new JSMpeg.Player(url, { canvas: canvas, });
 };
</script>





The above code works fine and plays the video, but no audio is playing.
Things I tried:


I changed the audio context state inside the player object from suspended to running:


player.audioOut.context.onstatechange = async () => {
 console.log("Event triggered by audio");
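 // note: this condition compares the AudioContext object itself to a string; the check
 // was presumably meant to be player.audioOut.context.state === "suspended". Browsers
 // also keep a WebAudio context suspended until a user gesture, so resume() generally
 // has to be called from a click/tap handler rather than from onstatechange.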

 if (player.audioOut.context === "suspended") {
 await player.audioOut.context.resume();
 }
}