
Other articles (94)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
APPENDIX: The plugins used specifically for the farm
5 March 2010. The central/master site of the farm needs several additional plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance when users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
Installation in standalone mode
4 February 2011. Installing the MediaSPIP distribution is done in several steps: retrieving the required files (two methods are possible at this point: installing the ZIP archive containing the whole distribution, or retrieving the sources of each module separately via SVN); pre-configuration; the final installation;
[mediaspip_zip] Installing the MediaSPIP ZIP archive
This installation mode is the simplest method for installing the whole distribution (...)
On other sites (7632)
-
QAudioSink play AVFrame data decoded by ffmpeg
26 May 2022, by liuyulvv. I want to use QAudioSink to play PCM data decoded by ffmpeg.


I first tried to play a PCM file, and that worked:


Here is the overridden readData function of QIODevice:


qint64 PCMDevice::readData(char *data, qint64 maxlen)
{
 int len = m_inputFile.size();
 len = len < maxlen ? len : maxlen;

 m_inputFile.read(data, len);
 return len;
}



And I get the right result: the file plays.


But I want to play AVFrame data from memory:


qint64 PCMDevice::readData(char *data, qint64 maxlen)
{
 AVFrame *res = getFrame();
 if (res == nullptr)
 return 0;

 // Something wrong here
 auto ret = res->nb_samples;
 memcpy(data, res->data[0], ret);

 delete res;
 res = nullptr;
 return ret;
}



How do I convert the AVFrame's data into the data that QAudioSink needs?

I'm pretty sure I'm using the correct sample_fmt, sample_rate etc.
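For context, a minimal sketch (not from the original post), assuming Qt 6 and a decoder output of 44100 Hz, stereo, interleaved signed 16-bit PCM, of how the sink's format has to line up with what readData() delivers:


#include <QAudioFormat>
#include <QAudioSink>
#include <QMediaDevices>

// Hedged sketch: the QAudioSink format must describe exactly the PCM that
// readData() returns, otherwise the sink misinterprets the bytes.
void startPlayback(PCMDevice &pcmDevice) // PCMDevice is the QIODevice subclass from the question
{
 QAudioFormat format;
 format.setSampleRate(44100);                 // must match the decoder/swr output
 format.setChannelCount(2);
 format.setSampleFormat(QAudioFormat::Int16); // packed, interleaved S16

 pcmDevice.open(QIODevice::ReadOnly);         // a QIODevice has to be open before use

 auto *sink = new QAudioSink(QMediaDevices::defaultAudioOutput(), format);
 sink->start(&pcmDevice);                     // pull mode: the sink calls readData()
}
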


Where is the data of a stereo AVFrame, and how do I copy it into data correctly?
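
For comparison, a minimal sketch (my own assumption, not the original code) of what the byte accounting could look like if getFrame() already returns interleaved AV_SAMPLE_FMT_S16 frames; note that nb_samples is a per-channel sample count, not a byte count:


extern "C" {
#include <libavutil/frame.h>
#include <libavutil/samplefmt.h>
}
#include <algorithm>
#include <cstring>

// Hedged sketch: hand one packed S16 frame to the sink. Planar formats such
// as AV_SAMPLE_FMT_FLTP cannot be memcpy'd like this; they would first need
// an swr_convert to packed S16.
qint64 PCMDevice::readData(char *data, qint64 maxlen)
{
 AVFrame *frame = getFrame();
 if (frame == nullptr)
 return 0;

 // Bytes in the interleaved frame: samples * channels * bytes per sample.
 // (frame->ch_layout.nb_channels needs FFmpeg >= 5.1; older builds expose frame->channels.)
 const int bytesPerSample = av_get_bytes_per_sample(static_cast<AVSampleFormat>(frame->format));
 const qint64 frameBytes = static_cast<qint64>(frame->nb_samples)
 * frame->ch_layout.nb_channels * bytesPerSample;

 // A real implementation would keep whatever does not fit into maxlen for the next call.
 const qint64 toCopy = std::min(frameBytes, maxlen);
 std::memcpy(data, frame->data[0], toCopy);

 av_frame_free(&frame); // AVFrames are released with av_frame_free, not delete
 return toCopy;
}
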

-
Nginx-rtmp vod play a list of files from a directory
7 July 2022, by Balazs. I have the following use case:


I would like to play back MP4 files from a mounted directory, one after another, with nginx-rtmp. So far I have managed to play/access only one file at a time.


So instead of this:

rtsp://192.168.1.100/vod/movie01.MP4

and then changing it manually to this:

rtsp://192.168.1.100/vod/movie02.MP4

I would like to have an endpoint like this and play all of the videos one after another (without pushing an ffmpeg stream to the nginx-rtmp endpoint):

rtsp://192.168.1.100/vod/stream


I have a simple config so far:


rtmp {
 server {
 listen 1935;
 chunk_size 4096;

 # video on demand for mp4 files
 application vod {
 allow play all;
 wait_video on;
 play /opt/video/vod;

 }
 }
}



How can I achieve this?


-
C++ ffmpeg - export to wav error: Invalid PCM packet, data has size 2 but at least a size of 4 was expected
9 September 2024, by Chris P. C++ code:


AudioSegment AudioSegment::from_file(const std::string& file_path, const std::string& format, const std::string& codec,
 const std::map& parameters, int start_second, int duration) {

 avformat_network_init();
 av_log_set_level(AV_LOG_ERROR); // Adjust logging level as needed

 AVFormatContext* format_ctx = nullptr;
 if (avformat_open_input(&format_ctx, file_path.c_str(), nullptr, nullptr) != 0) {
 std::cerr << "Error: Could not open audio file." << std::endl;
 return AudioSegment(); // Return an empty AudioSegment on failure
 }

 if (avformat_find_stream_info(format_ctx, nullptr) < 0) {
 std::cerr << "Error: Could not find stream information." << std::endl;
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 int audio_stream_index = -1;
 for (unsigned int i = 0; i < format_ctx->nb_streams; i++) {
 if (format_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
 audio_stream_index = i;
 break;
 }
 }

 if (audio_stream_index == -1) {
 std::cerr << "Error: Could not find audio stream." << std::endl;
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 AVCodecParameters* codec_par = format_ctx->streams[audio_stream_index]->codecpar;
 const AVCodec* my_codec = avcodec_find_decoder(codec_par->codec_id);
 AVCodecContext* codec_ctx = avcodec_alloc_context3(my_codec);

 if (!codec_ctx) {
 std::cerr << "Error: Could not allocate codec context." << std::endl;
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 if (avcodec_parameters_to_context(codec_ctx, codec_par) < 0) {
 std::cerr << "Error: Could not initialize codec context." << std::endl;
 avcodec_free_context(&codec_ctx);
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 if (avcodec_open2(codec_ctx, my_codec, nullptr) < 0) {
 std::cerr << "Error: Could not open codec." << std::endl;
 avcodec_free_context(&codec_ctx);
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 SwrContext* swr_ctx = swr_alloc();
 if (!swr_ctx) {
 std::cerr << "Error: Could not allocate SwrContext." << std::endl;
 avcodec_free_context(&codec_ctx);
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }
 codec_ctx->sample_rate = 44100;
 // Set up resampling context to convert to S16 format with 2 bytes per sample
 av_opt_set_chlayout(swr_ctx, "in_chlayout", &codec_ctx->ch_layout, 0);
 av_opt_set_int(swr_ctx, "in_sample_rate", codec_ctx->sample_rate, 0);
 av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", codec_ctx->sample_fmt, 0);

 AVChannelLayout dst_ch_layout;
 av_channel_layout_copy(&dst_ch_layout, &codec_ctx->ch_layout);
 av_channel_layout_uninit(&dst_ch_layout);
 av_channel_layout_default(&dst_ch_layout, 2);

 av_opt_set_chlayout(swr_ctx, "out_chlayout", &dst_ch_layout, 0);
 av_opt_set_int(swr_ctx, "out_sample_rate", codec_ctx->sample_rate, 0); // Match input sample rate
 av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", AV_SAMPLE_FMT_S16, 0); // Force S16 format

 if (swr_init(swr_ctx) < 0) {
 std::cerr << "Error: Failed to initialize the resampling context" << std::endl;
 swr_free(&swr_ctx);
 avcodec_free_context(&codec_ctx);
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 AVPacket packet;
 AVFrame* frame = av_frame_alloc();
 if (!frame) {
 std::cerr << "Error: Could not allocate frame." << std::endl;
 swr_free(&swr_ctx);
 avcodec_free_context(&codec_ctx);
 avformat_close_input(&format_ctx);
 return AudioSegment();
 }

 std::vector<char> output;
 while (av_read_frame(format_ctx, &packet) >= 0) {
 if (packet.stream_index == audio_stream_index) {
 if (avcodec_send_packet(codec_ctx, &packet) == 0) {
 while (avcodec_receive_frame(codec_ctx, frame) == 0) {
 if (frame->pts != AV_NOPTS_VALUE) {
 frame->pts = av_rescale_q(frame->pts, codec_ctx->time_base, format_ctx->streams[audio_stream_index]->time_base);
 }

 uint8_t* output_buffer;
 int output_samples = av_rescale_rnd(
 swr_get_delay(swr_ctx, codec_ctx->sample_rate) + frame->nb_samples,
 codec_ctx->sample_rate, codec_ctx->sample_rate, AV_ROUND_UP);

 int output_buffer_size = av_samples_get_buffer_size(
 nullptr, 2, output_samples, AV_SAMPLE_FMT_S16, 1);

 output_buffer = (uint8_t*)av_malloc(output_buffer_size);

 if (output_buffer) {
 memset(output_buffer, 0, output_buffer_size); // Zero padding to avoid random noise
 int converted_samples = swr_convert(swr_ctx, &output_buffer, output_samples,
 (const uint8_t**)frame->extended_data, frame->nb_samples);

 if (converted_samples >= 0) {
 output.insert(output.end(), output_buffer, output_buffer + output_buffer_size);
 }
 else {
 std::cerr << "Error: Failed to convert audio samples." << std::endl;
 }
 // Make sure output_buffer is valid before freeing
 if (output_buffer != nullptr) {
 av_free(output_buffer);
 output_buffer = nullptr; // Prevent double-free
 }
 }
 else {
 std::cerr << "Error: Could not allocate output buffer." << std::endl;
 }
 }
 }
 else {
 std::cerr << "Error: Failed to send packet to codec context." << std::endl;
 }
 }
 av_packet_unref(&packet);
 }

 int frame_width = av_get_bytes_per_sample(AV_SAMPLE_FMT_S16) * 2; // Use 2 bytes per sample and 2 channels

 std::map<std::string, int> metadata = {
 {"sample_width", 2}, // S16 format has 2 bytes per sample
 {"frame_rate", codec_ctx->sample_rate}, // Use the input sample rate
 {"channels", 2}, // Assuming stereo output
 {"frame_width", frame_width}
 };

 av_frame_free(&frame);
 swr_free(&swr_ctx);
 avcodec_free_context(&codec_ctx);
 avformat_close_input(&format_ctx);

 return AudioSegment(static_cast<const char*>(output.data()), output.size(), metadata);
}

std::ofstream AudioSegment::export_segment_to_wav_file(const std::string& out_f) {
 std::cout << this->get_channels() << std::endl;
 av_log_set_level(AV_LOG_ERROR);
 AVCodecContext* codec_ctx = nullptr;
 AVFormatContext* format_ctx = nullptr;
 AVStream* stream = nullptr;
 AVFrame* frame = nullptr;
 AVPacket* pkt = nullptr;
 int ret;

 // Initialize format context for WAV
 if (avformat_alloc_output_context2(&format_ctx, nullptr, "wav", out_f.c_str()) < 0) {
 throw std::runtime_error("Could not allocate format context.");
 }

 // Find encoder for PCM
 const AVCodec* codec_ptr = avcodec_find_encoder(AV_CODEC_ID_PCM_S16LE);
 if (!codec_ptr) {
 throw std::runtime_error("PCM encoder not found.");
 }

 // Add stream
 stream = avformat_new_stream(format_ctx, codec_ptr);
 if (!stream) {
 throw std::runtime_error("Failed to create new stream.");
 }

 // Allocate codec context
 codec_ctx = avcodec_alloc_context3(codec_ptr);
 if (!codec_ctx) {
 throw std::runtime_error("Could not allocate audio codec context.");
 }

 // Set codec parameters for PCM
 codec_ctx->bit_rate = 128000; // Bitrate
 codec_ctx->sample_rate = this->get_frame_rate(); // Use correct sample rate
 codec_ctx->ch_layout.nb_channels = this->get_channels(); // Set the correct channel count

 // Set the channel layout: stereo or mono
 if (this->get_channels() == 2) {
 av_channel_layout_default(&codec_ctx->ch_layout, 2); // Stereo layout
 }
 else {
 av_channel_layout_default(&codec_ctx->ch_layout, 1); // Mono layout
 }

 codec_ctx->sample_fmt = AV_SAMPLE_FMT_S16; // PCM 16-bit format

 // Open codec
 if (avcodec_open2(codec_ctx, codec_ptr, nullptr) < 0) {
 throw std::runtime_error("Could not open codec.");
 }

 // Set codec parameters to the stream
 if (avcodec_parameters_from_context(stream->codecpar, codec_ctx) < 0) {
 throw std::runtime_error("Could not initialize stream codec parameters.");
 }

 // Open output file
 std::ofstream out_file(out_f, std::ios::binary);
 if (!out_file) {
 throw std::runtime_error("Failed to open output file.");
 }

 if (!(format_ctx->oformat->flags & AVFMT_NOFILE)) {
 if (avio_open(&format_ctx->pb, out_f.c_str(), AVIO_FLAG_WRITE) < 0) {
 throw std::runtime_error("Could not open output file.");
 }
 }

 // Write file header
 if (avformat_write_header(format_ctx, nullptr) < 0) {
 throw std::runtime_error("Error occurred when writing file header.");
 }

 // Initialize packet
 pkt = av_packet_alloc();
 if (!pkt) {
 throw std::runtime_error("Could not allocate AVPacket.");
 }

 // Initialize frame
 frame = av_frame_alloc();
 if (!frame) {
 throw std::runtime_error("Could not allocate AVFrame.");
 }

 // Set the frame properties
 frame->format = codec_ctx->sample_fmt;
 frame->ch_layout = codec_ctx->ch_layout;

 // Number of audio samples available in the data buffer
 int total_samples = data_.size() / (av_get_bytes_per_sample(AV_SAMPLE_FMT_S16) * codec_ctx->ch_layout.nb_channels);
 int samples_read = 0;

 // Set the number of samples per frame dynamically based on the input data
 while (samples_read < total_samples) {
 // Determine how many samples to read in this iteration (don't exceed the total sample count)
 int num_samples = std::min(codec_ctx->frame_size, total_samples - samples_read);
 if (num_samples == 0) {
 num_samples = 1024;
 codec_ctx->frame_size = 1024;
 }
 // Ensure num_samples is not zero
 if (num_samples <= 0) {
 throw std::runtime_error("Invalid number of samples in frame.");
 }

 // Set the number of samples in the frame
 frame->nb_samples = num_samples;

 // Allocate the frame buffer based on the number of samples
 ret = av_frame_get_buffer(frame, 0);
 if (ret < 0) {
 std::cerr << "Error allocating frame buffer: " << ret << std::endl;
 throw std::runtime_error("Could not allocate audio data buffers.");
 }

 // Copy the audio data into the frame's buffer (interleaving if necessary)
 /*if (codec_ctx->ch_layout.nb_channels == 2) {
 // If stereo, interleave planar data into packed format
 for (int i = 0; i < num_samples; ++i) {
 ((int16_t*)frame->data[0])[2 * i] = ((int16_t*)data_.data())[i]; // Left channel
 ((int16_t*)frame->data[0])[2 * i + 1] = ((int16_t*)data_.data())[total_samples + i]; // Right channel
 }
 }
 else {
 // For mono or packed data, directly copy the samples
 std::memcpy(frame->data[0], data_.data() + samples_read * av_get_bytes_per_sample(AV_SAMPLE_FMT_S16) * codec_ctx->ch_layout.nb_channels,
 num_samples * av_get_bytes_per_sample(AV_SAMPLE_FMT_S16) * codec_ctx->ch_layout.nb_channels);
 }
 */
 std::memcpy(frame->data[0], data_.data() + samples_read * av_get_bytes_per_sample(AV_SAMPLE_FMT_S16) * codec_ctx->ch_layout.nb_channels,
 num_samples * av_get_bytes_per_sample(AV_SAMPLE_FMT_S16) * codec_ctx->ch_layout.nb_channels);

 // Send the frame for encoding
 ret = avcodec_send_frame(codec_ctx, frame);
 if (ret < 0) {
 std::cerr << "Error sending frame for encoding: " << ret << std::endl;
 throw std::runtime_error("Error sending frame for encoding.");
 }

 // Receive and write encoded packets
 while (ret >= 0) {
 ret = avcodec_receive_packet(codec_ctx, pkt);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
 break;
 }
 else if (ret < 0) {
 throw std::runtime_error("Error during encoding.");
 }

 out_file.write(reinterpret_cast<const char*>(pkt->data), pkt->size);
 av_packet_unref(pkt);
 }

 samples_read += num_samples;
 }

 // Flush the encoder
 if (avcodec_send_frame(codec_ctx, nullptr) < 0) {
 throw std::runtime_error("Error flushing the encoder.");
 }

 while (avcodec_receive_packet(codec_ctx, pkt) >= 0) {
 out_file.write(reinterpret_cast<const char*>(pkt->data), pkt->size);
 av_packet_unref(pkt);
 }

 // Write file trailer
 av_write_trailer(format_ctx);

 // Cleanup
 av_frame_free(&frame);
 av_packet_free(&pkt);
 avcodec_free_context(&codec_ctx);

 if (!(format_ctx->oformat->flags & AVFMT_NOFILE)) {
 avio_closep(&format_ctx->pb);
 }
 avformat_free_context(format_ctx);

 out_file.close();
 return out_file;
}



Run code:


#include "audio_segment.h"
#include "effects.h"
#include "playback.h"
#include "cppaudioop.h"
#include "exceptions.h"
#include "generators.h"
#include "silence.h"
#include "utils.h"

#include <iostream>
#include <filesystem>

using namespace cppdub;

int main() {
 try {
 // Load the source audio file
 AudioSegment seg_1 = AudioSegment::from_file("../data/test10.mp3");
 std::string out_file_name = "ah-ah-ah.wav";

 // Export the audio segment to a new file with specified settings
 //seg_1.export_segment(out_file_name, "mp3");
 seg_1.export_segment_to_wav_file(out_file_name);


 // Optionally play the audio segment to verify
 // play(seg_1);

 // Load the exported audio file
 AudioSegment seg_2 = AudioSegment::from_file(out_file_name);

 // Play segments
 //play(seg_1);
 play(seg_2);
 }
 catch (const std::exception& e) {
 std::cerr << "An error occurred: " << e.what() << std::endl;
 }

 return 0;
}


Error in the second call of the from_file function:


[pcm_s16le @ 000002d82ca5bfc0] Invalid PCM packet, data has size 2 but at least a size of 4 was expected


The process continues, and I can hear seg_2 through the play(seg_2) call, but I can't directly play the exported WAV file of seg_2 (from Windows Explorer).


My guess is that the error may be caused by a packed vs. planar format mismatch, but I am not quite sure. Maybe a swr_convert is necessary.
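

For what it's worth, pcm_s16le reporting that a packet "has size 2 but at least a size of 4 was expected" generally means it was handed data whose byte count is not a multiple of the sample-frame size (channels times bytes per sample, i.e. 4 bytes for stereo S16), so a stray half frame ended up in the stream. A minimal sketch of a guard that could be applied to the raw buffer before the encoding loop (the helper name and the choice to pad with silence are assumptions, not part of the original code):


#include <cstdint>
#include <vector>

// Hedged sketch: ensure the interleaved S16 buffer holds a whole number of
// sample frames before it is fed to the pcm_s16le encoder.
static void pad_to_whole_frames(std::vector<char>& pcm, int channels) {
 const size_t bytes_per_frame = static_cast<size_t>(channels) * sizeof(int16_t); // 4 for stereo S16
 const size_t remainder = pcm.size() % bytes_per_frame;
 if (remainder != 0) {
 // Pad the trailing partial frame with silence so every packet size
 // stays a multiple of the frame size.
 pcm.insert(pcm.end(), bytes_per_frame - remainder, '\0');
 }
}


Applied to output in from_file (or to data_ before the encode loop in export_segment_to_wav_file), this would at least rule out a trailing partial frame; whether the samples are interleaved correctly in the first place is a separate question, and that is where a packed vs. planar mismatch would show up.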