
Other articles (79)
-
Websites made with MediaSPIP
2 May 2011 — This page lists some websites based on MediaSPIP.
-
Managing creation and editing rights for objects
8 February 2011 — By default, many features are restricted to administrators, but each remains independently configurable, so the minimum status required to use it can be changed, notably: writing content on the site, adjustable through the form template management; adding notes to articles; adding captions and annotations to images;
-
Supporting all media types
13 April 2011 — Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (10032)
-
Revision a32ba91d53d82039af0910cfb352b45012b669f4: a trig_calculer_langues_rubriques pipeline that was missing from the function ...
4 April 2010, by Cerdic — Log: a trig_calculer_langues_rubriques pipeline that was missing from the calculer_langues_rubriques() function. git-svn-id: svn://trac.rezo.net/spip/branches/spip-2.0@15579 caf5f3e8-d4fe-0310-bb3e-c32d5e47d55d
-
FFmpeg: what exactly is the filtergraph pipeline like during transcoding?
8 September 2017, by Jeff Gong — I have been studying the source code of FFmpeg to try to understand its threading model and how it processes inputs. For example, when I run a command like:
ffmpeg -i video.mp4 -s hd720 -c:v libx264 -preset medium -c:a aac -profile:v main -r 60 -f null /dev/null
The input itself is irrelevant, but I am trying to understand how the transcoding pipeline works. In the source code, I see that the main steps occur in the functions transcode and transcode_step. It seems like, for a single input, a single frame is read in, decoded, encoded, and written out. The process is obviously very complex, but what I am really not understanding is what FFmpeg is doing when it attempts to build out a filtergraph. For example, in transcode_step of ffmpeg.c, there is the following code that runs right after an output stream has been selected:

if (ost->filter && !ost->filter->graph->graph) {
    if (ifilter_has_all_input_formats(ost->filter->graph)) {
        ret = configure_filtergraph(ost->filter->graph);
        if (ret < 0) {
            av_log(NULL, AV_LOG_ERROR, "Error reinitializing filters!\n");
            return ret;
        }
    }
}

Does this only apply if I specify a specific series of filtering options to FFmpeg, like the one in this link? For the sample command I input above, is this code still executed?
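
My current guess (an assumption from browsing ffmpeg.c, not something I have confirmed in the documentation) is that every video output stream is fed through a filtergraph even when no -vf or -filter_complex option is given, because output options such as -s hd720 are translated internally into a scale filter. Under that assumption, the sample command above would behave roughly like this explicit-filter equivalent, so the configure_filtergraph() path should still run:

ffmpeg -i video.mp4 -vf scale=1280:720 -c:v libx264 -preset medium -c:a aac -profile:v main -r 60 -f null /dev/null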
One last question I had is for the case where I run an FFmpeg instance with a single input but multiple outputs (perhaps different variants for transcoding). In this scenario, does a single pass of transcode_step take in an input frame and send that frame through decoding and encoding for only a single one of the outputs? Or does it take a frame at a time and process that frame for each of the outputs I have specified?
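
To make the scenario concrete, a hypothetical command of the kind I mean (the output names and sizes are just placeholders):

ffmpeg -i video.mp4 -s hd720 -c:v libx264 out720.mp4 -s 640x360 -c:v libx264 out360.mp4
-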
avformat_write_header() crashes when I try to save several RGB frames to an output.mp4 file
1 May 2023, by ollydbg23 — I am trying to save several images (in memory, in BGR format) to an output.mp4 file. Here is the C++ code calling the FFmpeg library. The code builds correctly, but it crashes when I call
ret = avformat_write_header(outFormatCtx, nullptr);
Do you know how to solve this crash?

Thanks.


#include <iostream>
#include <vector>
#include <cstring>
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <opencv2/opencv.hpp>
extern "C" {
#include <libavutil/imgutils.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
}

using namespace std;
using namespace cv;

int main()
{
 // Set up input frames as BGR byte arrays
 vector<Mat> frames;

 int width = 640;
 int height = 480;
 int num_frames = 100;
 Scalar black(0, 0, 0);
 Scalar white(255, 255, 255);
 int font = FONT_HERSHEY_SIMPLEX;
 double font_scale = 1.0;
 int thickness = 2;

 for (int i = 0; i < num_frames; i++) {
 Mat frame = Mat::zeros(height, width, CV_8UC3);
 putText(frame, std::to_string(i), Point(width / 2 - 50, height / 2), font, font_scale, white, thickness);
 frames.push_back(frame);
 }


 // Populate frames with BGR byte arrays

 // Initialize FFmpeg
 //av_register_all();

 // Set up output file
 AVFormatContext* outFormatCtx = nullptr;
 //AVCodec* outCodec = nullptr;
 AVCodecContext* outCodecCtx = nullptr;
 //AVStream* outStream = nullptr;
 AVPacket outPacket;

 const char* outFile = "output.mp4";
 int outWidth = frames[0].cols;
 int outHeight = frames[0].rows;
 int fps = 30;

 // Open output file
 avformat_alloc_output_context2(&outFormatCtx, nullptr, nullptr, outFile);
 if (!outFormatCtx) {
 cerr << "Error: Could not allocate output format context" << endl;
 return -1;
 }

 // Set up output codec
 const AVCodec* outCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
 if (!outCodec) {
 cerr << "Error: Could not find H.264 codec" << endl;
 return -1;
 }

 outCodecCtx = avcodec_alloc_context3(outCodec);
 if (!outCodecCtx) {
 cerr << "Error: Could not allocate output codec context" << endl;
 return -1;
 }
 outCodecCtx->codec_id = AV_CODEC_ID_H264;
 outCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
 outCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
 outCodecCtx->width = outWidth;
 outCodecCtx->height = outHeight;
 outCodecCtx->time_base = { 1, fps };

 // Open output codec
 if (avcodec_open2(outCodecCtx, outCodec, nullptr) < 0) {
 cerr << "Error: Could not open output codec" << endl;
 return -1;
 }

 // Create output stream
 AVStream* outStream = avformat_new_stream(outFormatCtx, outCodec);
 if (!outStream) {
 cerr << "Error: Could not allocate output stream" << endl;
 return -1;
 }

 // Configure output stream parameters (e.g., time base, codec parameters, etc.)
 // ...

 // Connect output stream to format context
 outStream->codecpar->codec_id = outCodecCtx->codec_id;
 outStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 outStream->codecpar->width = outCodecCtx->width;
 outStream->codecpar->height = outCodecCtx->height;
 outStream->codecpar->format = outCodecCtx->pix_fmt;
 outStream->time_base = outCodecCtx->time_base;

 int ret = avcodec_parameters_from_context(outStream->codecpar, outCodecCtx);
 if (ret < 0) {
 cerr << "Error: Could not copy codec parameters to output stream" << endl;
 return -1;
 }

 outStream->avg_frame_rate = outCodecCtx->framerate;
 outStream->id = outFormatCtx->nb_streams++;


 ret = avformat_write_header(outFormatCtx, nullptr);
 if (ret < 0) {
 cerr << "Error: Could not write output header" << endl;
 return -1;
 }

 // Convert frames to YUV format and write to output file
 for (const auto& frame : frames) {
 AVFrame* yuvFrame = av_frame_alloc();
 if (!yuvFrame) {
 cerr << "Error: Could not allocate YUV frame" << endl;
 return -1;
 }
 av_image_alloc(yuvFrame->data, yuvFrame->linesize, outWidth, outHeight, AV_PIX_FMT_YUV420P, 32);

 // Convert BGR frame to YUV format
 Mat yuvMat;
 cvtColor(frame, yuvMat, COLOR_BGR2YUV_I420);
 memcpy(yuvFrame->data[0], yuvMat.data, outWidth * outHeight);
 memcpy(yuvFrame->data[1], yuvMat.data + outWidth * outHeight, outWidth * outHeight / 4);
 memcpy(yuvFrame->data[2], yuvMat.data + outWidth * outHeight * 5 / 4, outWidth * outHeight / 4);

 // Set up output packet
 av_init_packet(&outPacket);
 outPacket.data = nullptr;
 outPacket.size = 0;

 // Encode frame and write to output file
 int ret = avcodec_send_frame(outCodecCtx, yuvFrame);
 if (ret < 0) {
 cerr << "Error: Could not send frame to output codec" << endl;
 return -1;
 }
 while (ret >= 0) {
 ret = avcodec_receive_packet(outCodecCtx, &outPacket);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
 break;
 } else if (ret < 0) {
 cerr << "Error: Could not receive packet from output codec" << endl;
 return -1;
 }

 av_packet_rescale_ts(&outPacket, outCodecCtx->time_base, outStream->time_base);
 outPacket.stream_index = outStream->index;

 ret = av_interleaved_write_frame(outFormatCtx, &outPacket);
 if (ret < 0) {
 cerr << "Error: Could not write packet to output file" << endl;
 return -1;
 }
 }

 av_frame_free(&yuvFrame);
 }

 // Write output trailer
 av_write_trailer(outFormatCtx);

 // Clean up
 avcodec_close(outCodecCtx);
 avcodec_free_context(&outCodecCtx);
 avformat_free_context(outFormatCtx);

 return 0;
}


In fact, I am trying to solve my original question here: What is the best way to save an image sequence with different time intervals in a single file in C++? But in that discussion, it was difficult to write C++ code for the FFmpeg library.
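
For anyone reading along, my current suspicion (an assumption from re-reading the FFmpeg headers, not a confirmed fix) is that two things bite here: avformat_new_stream() already registers the stream and updates nb_streams, so the manual outStream->id = outFormatCtx->nb_streams++; makes the muxer believe a second, never-allocated stream exists, and the output AVIOContext is never opened, so the MP4 muxer has nothing to write the header into. A minimal sketch of the corrected setup, under those assumptions:

// Sketch only; assumes outFormatCtx, outCodecCtx, outStream and outFile
// are set up as in the code above.

// MP4 normally wants stream-global extradata; set this BEFORE avcodec_open2().
if (outFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
    outCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

// avformat_new_stream() already incremented nb_streams and registered the
// stream, so drop the manual bookkeeping entirely:
// outStream->id = outFormatCtx->nb_streams++;   // <-- remove this line

// Open the file-backed I/O context; without it, avformat_write_header()
// for a file muxer such as MP4 has no valid pb to write into.
if (!(outFormatCtx->oformat->flags & AVFMT_NOFILE)) {
    if (avio_open(&outFormatCtx->pb, outFile, AVIO_FLAG_WRITE) < 0) {
        cerr << "Error: Could not open " << outFile << " for writing" << endl;
        return -1;
    }
}

ret = avformat_write_header(outFormatCtx, nullptr);

Separately, each yuvFrame would also need its width, height, format and pts fields filled in before avcodec_send_frame(), and the encoder should be flushed by sending a null frame before av_write_trailer(), but those are issues that only show up after the header writes successfully.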