
Media (3)
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (23)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...) -
Accepted formats
28 January 2010
The following commands give information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
To begin with, we (...) -
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
To contribute, register to the project users’ mailing (...)
On other sites (2786)
-
C++ ffmpeg lib version 7.0 - export audio to different format
4 September 2024, by Chris P
I want to make a C++ lib named cppdub which will mimic the python module pydub.


One main function is to export the AudioSegment to a file with a specific format (for example: mp3).
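For context, here is a hypothetical round-trip through the two functions shown below; the file names and argument values are my own assumptions, not taken from the question:

AudioSegment seg = AudioSegment::from_file("in.wav", "wav", "", {}, 0, 0);
seg.export_segment("out.mp3", "mp3", "libmp3lame", "128000", {}, {}, "4", "");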


The code is:



AudioSegment AudioSegment::from_file(const std::string& file_path, const std::string& format, const std::string& codec,
 const std::map<std::string, std::string>& parameters, int start_second, int duration) {

 avformat_network_init();
 av_log_set_level(AV_LOG_ERROR); // Adjust logging level as needed

 AVFormatContext* format_ctx = nullptr;
 if (avformat_open_input(&format_ctx, file_path.c_str(), nullptr, nullptr) != 0) {
 std::cerr << "Error: Could not open audio file." << std::endl;
 return AudioSegment(); // Return an empty AudioSegment on failure
 }

 if (avformat_find_stream_info(format_ctx, nullptr) < 0) {
 std::cerr << "Error: Could not find stream information." << std::endl;
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 int audio_stream_index = -1;
 for (unsigned int i = 0; i < format_ctx->nb_streams; i++) {
 if (format_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
 audio_stream_index = i;
 break;
 }
 }

 if (audio_stream_index == -1) {
 std::cerr << "Error: Could not find audio stream." << std::endl;
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 AVCodecParameters* codec_par = format_ctx->streams[audio_stream_index]->codecpar;
 const AVCodec* my_codec = avcodec_find_decoder(codec_par->codec_id);
 AVCodecContext* codec_ctx = avcodec_alloc_context3(my_codec);

 if (!codec_ctx) {
 std::cerr << "Error: Could not allocate codec context." << std::endl;
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 if (avcodec_parameters_to_context(codec_ctx, codec_par) < 0) {
 std::cerr << "Error: Could not initialize codec context." << std::endl;
 avcodec_free_context(&codec_ctx);
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 if (avcodec_open2(codec_ctx, my_codec, nullptr) < 0) {
 std::cerr << "Error: Could not open codec." << std::endl;
 avcodec_free_context(&codec_ctx);
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 SwrContext* swr_ctx = swr_alloc();
 if (!swr_ctx) {
 std::cerr << "Error: Could not allocate SwrContext." << std::endl;
 avcodec_free_context(&codec_ctx);
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }
 codec_ctx->sample_rate = 44100; // NB: hard-codes the rate, overriding whatever the stream actually reported
 // Set up resampling context to convert to S16 format with 2 bytes per sample
 av_opt_set_chlayout(swr_ctx, "in_chlayout", &codec_ctx->ch_layout, 0);
 av_opt_set_int(swr_ctx, "in_sample_rate", codec_ctx->sample_rate, 0);
 av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", codec_ctx->sample_fmt, 0);

 AVChannelLayout dst_ch_layout;
 av_channel_layout_copy(&dst_ch_layout, &codec_ctx->ch_layout);
 av_channel_layout_uninit(&dst_ch_layout);
 av_channel_layout_default(&dst_ch_layout, 2);

 av_opt_set_chlayout(swr_ctx, "out_chlayout", &dst_ch_layout, 0);
 av_opt_set_int(swr_ctx, "out_sample_rate", codec_ctx->sample_rate, 0); // Match input sample rate
 av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", AV_SAMPLE_FMT_S16, 0); // Force S16 format

 if (swr_init(swr_ctx) < 0) {
 std::cerr << "Error: Failed to initialize the resampling context" << std::endl;
 swr_free(&swr_ctx);
 avcodec_free_context(&codec_ctx);
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 AVPacket packet;
 AVFrame* frame = av_frame_alloc();
 if (!frame) {
 std::cerr << "Error: Could not allocate frame." << std::endl;
 swr_free(&swr_ctx);
 avcodec_free_context(&codec_ctx);
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 std::vector<char> output;
 while (av_read_frame(format_ctx, &packet) >= 0) {
 if (packet.stream_index == audio_stream_index) {
 if (avcodec_send_packet(codec_ctx, &packet) == 0) {
 while (avcodec_receive_frame(codec_ctx, frame) == 0) {
 if (frame->pts != AV_NOPTS_VALUE) {
 frame->pts = av_rescale_q(frame->pts, codec_ctx->time_base, format_ctx->streams[audio_stream_index]->time_base);
 }

 uint8_t* output_buffer;
 int output_samples = av_rescale_rnd(
 swr_get_delay(swr_ctx, codec_ctx->sample_rate) + frame->nb_samples,
 codec_ctx->sample_rate, codec_ctx->sample_rate, AV_ROUND_UP);

 int output_buffer_size = av_samples_get_buffer_size(
 nullptr, 2, output_samples, AV_SAMPLE_FMT_S16, 1);

 output_buffer = (uint8_t*)av_malloc(output_buffer_size);

 if (output_buffer) {
 memset(output_buffer, 0, output_buffer_size); // Zero padding to avoid random noise
 int converted_samples = swr_convert(swr_ctx, &output_buffer, output_samples,
 (const uint8_t**)frame->extended_data, frame->nb_samples);

 if (converted_samples >= 0) {
 output.insert(output.end(), output_buffer, output_buffer + output_buffer_size);
 }
 else {
 std::cerr << "Error: Failed to convert audio samples." << std::endl;
 }
 // Make sure output_buffer is valid before freeing
 if (output_buffer != nullptr) {
 av_free(output_buffer);
 output_buffer = nullptr; // Prevent double-free
 }
 }
 else {
 std::cerr << "Error: Could not allocate output buffer." << std::endl;
 }
 }
 }
 else {
 std::cerr << "Error: Failed to send packet to codec context." << std::endl;
 }
 }
 av_packet_unref(&packet);
 }

 int frame_width = av_get_bytes_per_sample(AV_SAMPLE_FMT_S16) * 2; // Use 2 bytes per sample and 2 channels

 std::map<std::string, int> metadata = {
 {"sample_width", 2}, // S16 format has 2 bytes per sample
 {"frame_rate", codec_ctx->sample_rate}, // Use the input sample rate
 {"channels", 2}, // Assuming stereo output
 {"frame_width", frame_width}
 };

 av_frame_free(&frame);
 swr_free(&swr_ctx);
 avcodec_free_context(&codec_ctx);
 avformat_close_input(&format_ctx);

 return AudioSegment(static_cast<const char*>(output.data()), output.size(), metadata);
}

std::ofstream AudioSegment::export_segment(const std::string& out_f,
 const std::string& format,
 const std::string& codec,
 const std::string& bitrate,
 const std::vector<std::string>& parameters,
 const std::map<std::string, std::string>& tags,
 const std::string& id3v2_version,
 const std::string& cover) {

 av_log_set_level(AV_LOG_DEBUG);
 AVCodecContext* codec_ctx = nullptr;
 AVFormatContext* format_ctx = nullptr;
 AVStream* stream = nullptr;
 AVFrame* frame = nullptr;
 AVPacket* pkt = nullptr;
 SwrContext* swr_ctx = nullptr;
 int ret;

 // Initialize format context
 if (avformat_alloc_output_context2(&format_ctx, nullptr, format.c_str(), out_f.c_str()) < 0) {
 throw std::runtime_error("Could not allocate format context.");
 }

 // Find encoder
 const AVCodec* codec_ptr = avcodec_find_encoder_by_name(codec.c_str());
 if (!codec_ptr) {
 throw std::runtime_error("Codec not found.");
 }

 // Add stream
 stream = avformat_new_stream(format_ctx, codec_ptr);
 if (!stream) {
 throw std::runtime_error("Failed to create new stream.");
 }

 // Allocate codec context
 codec_ctx = avcodec_alloc_context3(codec_ptr);
 if (!codec_ctx) {
 throw std::runtime_error("Could not allocate audio codec context.");
 }

 // Set codec parameters
 codec_ctx->bit_rate = std::stoi(bitrate);
 codec_ctx->sample_rate = this->get_frame_rate(); // Ensure this returns the correct sample rate
 av_channel_layout_default(&codec_ctx->ch_layout, 2);
 codec_ctx->sample_fmt = codec_ptr->sample_fmts ? codec_ptr->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;

 // Open codec
 if (avcodec_open2(codec_ctx, codec_ptr, nullptr) < 0) {
 throw std::runtime_error("Could not open codec.");
 }

 // Set codec parameters to the stream
 if (avcodec_parameters_from_context(stream->codecpar, codec_ctx) < 0) {
 throw std::runtime_error("Could not initialize stream codec parameters.");
 }

 // Open output file
 std::ofstream out_file(out_f, std::ios::binary);
 if (!out_file) {
 throw std::runtime_error("Failed to open output file.");
 }

 if (!(format_ctx->oformat->flags & AVFMT_NOFILE)) {
 if (avio_open(&format_ctx->pb, out_f.c_str(), AVIO_FLAG_WRITE) < 0) {
 throw std::runtime_error("Could not open output file.");
 }
 }

 // Write file header
 if (avformat_write_header(format_ctx, nullptr) < 0) {
 throw std::runtime_error("Error occurred when opening output file.");
 }

 // Initialize packet
 pkt = av_packet_alloc();
 if (!pkt) {
 throw std::runtime_error("Could not allocate AVPacket.");
 }

 // Initialize frame
 frame = av_frame_alloc();
 if (!frame) {
 throw std::runtime_error("Could not allocate AVFrame.");
 }
 frame->nb_samples = codec_ctx->frame_size;
 frame->format = codec_ctx->sample_fmt;
 frame->ch_layout = codec_ctx->ch_layout;

 // Allocate data buffer
 if (av_frame_get_buffer(frame, 0) < 0) {
 throw std::runtime_error("Could not allocate audio data buffers.");
 }

 // Initialize SwrContext for resampling
 swr_ctx = swr_alloc();
 if (!swr_ctx) {
 throw std::runtime_error("Could not allocate SwrContext.");
 }

 // Set options for resampling
 av_opt_set_chlayout(swr_ctx, "in_chlayout", &codec_ctx->ch_layout, 0);
 av_opt_set_chlayout(swr_ctx, "out_chlayout", &codec_ctx->ch_layout, 0);
 av_opt_set_int(swr_ctx, "in_sample_rate", codec_ctx->sample_rate, 0);
 av_opt_set_int(swr_ctx, "out_sample_rate", codec_ctx->sample_rate, 0);
 av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0); // Assuming input is S16
 av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", codec_ctx->sample_fmt, 0);

 // Initialize the resampling context
 if (swr_init(swr_ctx) < 0) {
 throw std::runtime_error("Failed to initialize SwrContext.");
 }

 int samples_read = 0;
 int total_samples = data_.size() / (av_get_bytes_per_sample(AV_SAMPLE_FMT_S16) * 2); // Assuming input is stereo

 while (samples_read < total_samples) {
 if (av_frame_make_writable(frame) < 0) {
 throw std::runtime_error("Frame not writable.");
 }

 int num_samples = std::min(codec_ctx->frame_size, total_samples - samples_read);

 // Prepare input data
 const uint8_t* input_data[2] = { reinterpret_cast<const uint8_t*>(data_.data() + samples_read * av_get_bytes_per_sample(AV_SAMPLE_FMT_S16) * 2), nullptr };
 int output_samples = swr_convert(swr_ctx, frame->data, frame->nb_samples,
 input_data, num_samples);

 if (output_samples < 0) {
 throw std::runtime_error("Error converting audio samples.");
 }

 frame->nb_samples = output_samples;

 // Send the frame for encoding
 if (avcodec_send_frame(codec_ctx, frame) < 0) {
 throw std::runtime_error("Error sending frame for encoding.");
 }

 // Receive and write packets
 while (avcodec_receive_packet(codec_ctx, pkt) >= 0) {
 out_file.write(reinterpret_cast<const char*>(pkt->data), pkt->size);
 av_packet_unref(pkt);
 }

 samples_read += num_samples;
 }

 // Flush the encoder
 if (avcodec_send_frame(codec_ctx, nullptr) < 0) {
 throw std::runtime_error("Error flushing the encoder.");
 }

 while (avcodec_receive_packet(codec_ctx, pkt) >= 0) {
 out_file.write(reinterpret_cast<const char*>(pkt->data), pkt->size);
 av_packet_unref(pkt);
 }

 // Write file trailer
 av_write_trailer(format_ctx);

 // Cleanup
 av_frame_free(&frame);
 av_packet_free(&pkt);
 swr_free(&swr_ctx);
 avcodec_free_context(&codec_ctx);

 if (!(format_ctx->oformat->flags & AVFMT_NOFILE)) {
 avio_closep(&format_ctx->pb);
 }
 avformat_free_context(format_ctx);

 out_file.close();
 return out_file;
}

//declaration
/*
std::ofstream export_segment(const std::string& out_f,
 const std::string& format = "mp3",
 const std::string& codec = "libmp3lame",
 const std::string& bitrate = "128000",
 const std::vector<std::string>& parameters = {},
 const std::map<std::string, std::string>& tags = {},
 const std::string& id3v2_version = "4",
 const std::string& cover = "");
*/


This code only works for the mp3 format. I also want to export to aac, ogg, flv, wav and any other popular formats.
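One likely culprit, offered as an assumption rather than a confirmed diagnosis: export_segment writes the encoded packets straight into a std::ofstream, so the muxer that avformat_write_header and av_write_trailer operate on never sees them. An mp3 file happens to survive this because its packets form a self-framing bitstream, but aac, ogg, flv and wav all rely on the muxer to produce the container. A minimal sketch of the muxer path, reusing the contexts already set up in export_segment:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}
#include <stdexcept>

// Drain every pending packet from the encoder and hand it to the muxer
// instead of writing pkt->data to out_file; format_ctx then emits a valid
// container for whatever format avformat_alloc_output_context2 was given.
static void drain_to_muxer(AVCodecContext* codec_ctx, AVFormatContext* format_ctx,
 AVStream* stream, AVPacket* pkt) {
 while (avcodec_receive_packet(codec_ctx, pkt) >= 0) {
 // Rescale timestamps from the encoder time base to the stream time base.
 av_packet_rescale_ts(pkt, codec_ctx->time_base, stream->time_base);
 pkt->stream_index = stream->index;
 // av_interleaved_write_frame takes ownership of the packet reference.
 if (av_interleaved_write_frame(format_ctx, pkt) < 0) {
 throw std::runtime_error("Error writing packet to output.");
 }
 }
}

With that change the std::ofstream (and the function's return value) can be dropped entirely, since avio_open already owns the output file.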


-
fluent-ffmpeg unable to achieve consistent audio bitrate
20 September 2020, by Martin
I am working on an Electron app that runs ffmpeg commands to combine an audio file and an image file into a video file. I'm using a fluent-ffmpeg command I wrote, which works, but the output video files always have a lower audio bitrate than expected, even though I specify 320kbps in my command.


I've been running multiple tests with mp3/flac/wav files, which I logged below, using my fluent-ffmpeg command as follows:


ffmpeg()
 .input(imgPath)
 .loop()
 .addInputOption('-framerate 2')
 .input(audioPath)
 .videoCodec('libx264')
 .audioCodec('aac')
 //.audioCodec('copy')
 .audioBitrate('320k')
 .videoBitrate('8000k', true)
 .size('1920x1080')
 .outputOptions([
 '-preset medium',
 '-tune stillimage',
 '-crf 18',
 '-b:a 320k',
 '-pix_fmt yuv420p',
 '-shortest'
 ])

 .on('progress', function (progress) {
 document.getElementById(updateInfoLocation).innerText = `Generating Video: ${Math.round(progress.percent)}%`
 console.info(`vid() Processing : ${progress.percent} % done`);
 })
 .on('codecData', function (data) {
 console.log('vid() codecData=', data);
 })
 .on('end', function () {
 document.getElementById(updateInfoLocation).innerText = `Video generated.`
 console.log('vid() file has been converted successfully; resolve() promise');
 resolve();
 })
 .on('error', function (err) {
 document.getElementById(updateInfoLocation).innerText = `Error generating video.`
 console.log('vid() an error happened: ' + err.message, ', reject()');
 reject(err);
 })
 .output(vidOutput).run()



1) 320kbps mp3 file


- imgPath = C:\Users\marti\Documents\projects\mp3 audio\img.jpg
- audioPath = C:\Users\marti\Documents\projects\mp3 audio\08. Strange - Gigi D'Agostino.mp3
- vidOutput = C:\Users\marti\Documents\projects\mp3 audio\08. Strange - Gigi D'Agostino.mp4

My audio file has a bit rate of 320kbps.



If I use my above ffmpeg function with .audioCodec('aac'), the output video file has an audio bitrate of 300kbps. If I use .audioCodec('copy') instead, my output video file has an audio bitrate of 317kbps.

2) 1411kbps wav file


- audioPath = C:\Users\marti\Documents\projects\wav audio\01 Nem Kaldi.wav
- imgPath = C:\Users\marti\Documents\projects\wav audio\cover.jpg
- vidOutput = C:\Users\marti\Documents\projects\wav audio\01 Nem Kaldi.mp4

My wav file has an audio bit rate of 1411kbps.

When I run my ffmpeg command using .audioCodec('aac'), my output video file's audio is again only 303kbps.


3) 937kbps flac file


- audioPath = C:\Users\marti\Documents\projects\flac audio\02 - ryokika.flac
- imgPath = C:\Users\marti\Documents\projects\flac audio\R-1042328-1388635895-5022.jpeg.jpg
- vidOutput = C:\Users\marti\Documents\projects\flac audio\02 - ryokika.mp4

When I run my ffmpeg command with .audioCodec('aac'), the output video file's audio is only 304kbps.

Is there any command or additional flag I need to add to ensure my output video files have a high-quality audio bitrate?


-
The problems with wavelets
I have periodically noted in this blog and elsewhere various problems with wavelet compression, but many readers have requested that I write a more detailed post about it, so here it is.
Wavelets have been researched for quite some time as a replacement for the standard discrete cosine transform used in most modern video compression. Their methodology is basically opposite: each coefficient in a DCT represents a constant pattern applied to the whole block, while each coefficient in a wavelet transform represents a single, localized pattern applied to a section of the block. Accordingly, wavelet transforms are usually very large, with the intention of taking advantage of large-scale redundancy in an image. DCTs are usually quite small and are intended to cover areas of roughly uniform patterns and complexity.
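As an illustration of the contrast (standard textbook definitions, not from the original post), a DCT-II coefficient weights a cosine that spans all $N$ samples of the block:

$$X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n+\tfrac{1}{2}\right)k\right], \qquad k = 0,\dots,N-1,$$

so every $X_k$ carries information about the entire block, whereas a wavelet coefficient weights a basis function supported on only part of the signal at some scale.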
Both are complete transforms, offering equally accurate frequency-domain representations of pixel data. I won’t go into the mathematical details of each here; the real question is whether one offers better compression opportunities for real-world video.
DCT transforms, though it isn’t mathematically required, are usually applied as block transforms, handling a single sharp-edged block of data. Accordingly, they usually need a deblocking filter to smooth the edges between DCT blocks. Wavelet transforms typically overlap, avoiding such a need. But because wavelets don’t cover a sharp-edged block of data, they don’t compress well when the predicted data is in the form of blocks.
Thus motion compensation is usually performed as overlapped-block motion compensation (OBMC), in which every pixel is calculated by performing the motion compensation of a number of blocks and averaging the result based on the distance of those blocks from the current pixel. Another option, which can be combined with OBMC, is “mesh MC”, where every pixel gets its own motion vector, which is a weighted average of the closest nearby motion vectors. The end result of either is the elimination of sharp edges between blocks and better prediction, at the cost of greatly increased CPU requirements. For an overlap factor of 2, it’s 4 times the amount of motion compensation, plus the averaging step. With mesh MC, it’s even worse, with SIMD optimizations becoming nearly impossible.
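To make the averaging concrete (the notation here is mine, not from the post), OBMC forms each predicted pixel as a normalized weighted sum of the motion-compensated predictions of every block whose window covers it:

$$\hat{p}(x,y) = \frac{\sum_i w_i(x,y)\,\mathrm{MC}_i(x,y)}{\sum_i w_i(x,y)},$$

where $\mathrm{MC}_i$ is the prediction produced by block $i$'s motion vector and the weight $w_i$ falls off with distance from block $i$'s center. With an overlap factor of 2 in each dimension, four windows cover each pixel, which is exactly the fourfold motion-compensation cost just mentioned.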
At this point, it would seem wavelets would have pretty big advantages: when used with OBMC, they have better inter prediction, eliminate the need for deblocking, and take advantage of larger-scale correlations. Why, then, hasn’t everyone switched over to wavelets? Dirac and Snow offer modern implementations. Yet despite decades of research, wavelets have consistently disappointed for image and video compression. It turns out there are a lot of serious practical issues with wavelets, many of which are open problems.
1. No known method exists for efficient intra coding. H.264’s spatial intra prediction is extraordinarily powerful, but relies on knowing the exact decoded pixels to the top and left of the current block. Since there is no such boundary in overlapped-wavelet coding, such prediction is impossible. Newer intra prediction methods, such as Markov-chain intra prediction, also seem to require an H.264-like situation with exactly-known neighboring pixels. Intra coding in wavelets is in the same state that DCT intra coding was in 20 years ago: the best known method was to simply transform the block with no prediction at all besides DC. NB: as described by Pengvado in the comments, the switching between inter and intra coding is potentially even more costly than the inefficient intra coding.
2. Mixing partition sizes has serious practical problems. Because the overlap between two motion partitions depends on the partitions’ size, mixing block sizes becomes quite difficult to define. While in H.264 a smaller partition always gives equal or better compression than a larger one when one ignores the extra overhead, it is actually possible for a larger partition to win when using OBMC, due to the larger overlap. All of this makes both defining the result of mixed block sizes and making decisions about them very difficult.
Both Snow and Dirac offer variable block sizes, but the overlap amount is constant; larger blocks serve only to save bits on motion vectors, not to offer better overlap characteristics.
3. Lack of spatial adaptive quantization. As shown in x264 with VAQ, and correspondingly in HCEnc’s implementation and Theora’s recent implementation, spatial adaptive quantization has staggeringly impressive (before, after) effects on visual quality. Only Dirac seems to have such a feature, and the encoder doesn’t even use it. No other wavelet format (Snow, JPEG2K, etc.) seems to have such a feature. This results in serious blurring problems in areas with subtle texture (as in the comparison below).
4. Wavelets don’t seem to code visual energy effectively. Remember that a single coefficient in a DCT represents a pattern which applies across an entire block: this makes it very easy to create apparent “detail” with a DCT. Furthermore, the sharp edges of DCT blocks, despite being an apparent weakness, often result in a “fake sharpness” that can actually improve the visual appearance of videos, as was seen with Xvid. Thus wavelet codecs have a tendency to look much blurrier than DCT-based codecs, but since PSNR likes blur, this is often seen as a benefit during video compression research. Some of the consequences of these factors can be seen in this comparison; it is somewhat outdated and not general-case, but it very effectively shows the difference in how wavelets handle sharp edges and subtle textures.
Another problem that periodically crops up is the visual aliasing that tends to be associated with wavelets at lower bitrates. Standard wavelets effectively consist of a recursive function that upscales the coefficients coded by the previous level by a factor of 2 and then adds a new set of coefficients. If the upscaling algorithm is naive — as it often is, for the sake of speed — the result can look quite ugly, as if parts of the image were coded at a lower resolution and then badly scaled up. Of course, it looks like that because they were coded at a lower resolution and then badly scaled up.
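Schematically (again my notation, not the post's), each decoded level is reconstructed as

$$I_k = U(I_{k-1}) + D_k,$$

where $U$ is the 2x upscaling operator and $D_k$ holds the detail coefficients coded at level $k$. When the $D_k$ are quantized away at low bitrates, the output degenerates to $U(U(\cdots U(I_0)))$: a low-resolution image pushed through the (often naive) upscaler several times, which is exactly the artifact described above.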
JPEG2000 is a classic example of wavelet failure: despite having more advanced entropy coding, being designed much later than JPEG, being much more computationally intensive, and having much better PSNR, comparisons have consistently shown it to be visually worse than JPEG at sane file sizes. Here’s an example from Wikipedia. By comparison, H.264’s intra coding, when used for still image compression, can beat JPEG by a factor of 2 or more (I’ll make a post on this later). With the various advancements in DCT intra coding since H.264, I suspect that a state-of-the-art DCT compressor could win by an even larger factor.
Despite the promised benefits of wavelets, a wavelet encoder even close to competitive with x264 has yet to be created. With some tests even showing Dirac losing to Theora in visual comparisons, it’s clear that many problems remain to be solved before wavelets can eliminate the ugliness of block-based transforms once and for all.