
Other articles (68)
-
Customizing by adding your logo, banner or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Enabling visitor registration
12 April 2011. It is also possible to enable visitor registration, which lets anyone open an account themselves on the channel in question, for example for open projects.
To do so, go to the site's configuration area and choose the "Gestion des utilisateurs" (user management) sub-menu. The first form shown corresponds to this feature.
By default, during its initialisation MediaSPIP created a menu item in the top-of-page menu leading to (...)
Writing a news item
21 June 2013. Present the changes in your MediaSPIP, or news about your projects, using the news section.
In MediaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news item creation form.
News item creation form: for a document of the news type, the fields offered by default are: publication date (customize the publication date) (...)
On other sites (8333)
-
using ffmpeg library to write to an mp4, ffprobe shows there are 100 frames and 100 packets, but av_interleaved_write_frame only called 50 times
2 May 2023, by ollydbg23. Here is my code to generate an mp4 file using the ffmpeg and opencv libraries. The opencv library is only used to generate 100 images (frames), and the ffmpeg library compresses those images into an mp4 file.


Here is the working code:


#include <iostream>
#include <vector>
#include <cstring>
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <opencv2/opencv.hpp>
extern "C" {
#include <libavutil/imgutils.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
}

#include <cstdlib> // to generate time stamps

using namespace std;
using namespace cv;

int main()
{
 // Set up input frames as BGR byte arrays
 vector<Mat> frames;

 int width = 640;
 int height = 480;
 int num_frames = 100;
 Scalar black(0, 0, 0);
 Scalar white(255, 255, 255);
 int font = FONT_HERSHEY_SIMPLEX;
 double font_scale = 1.0;
 int thickness = 2;

 for (int i = 0; i < num_frames; i++) {
 Mat frame = Mat::zeros(height, width, CV_8UC3);
 putText(frame, std::to_string(i), Point(width / 2 - 50, height / 2), font, font_scale, white, thickness);
 frames.push_back(frame);
 }

 // generate a serial of time stamps which is used to set the PTS value
 // suppose they are in ms unit, the time interval is between 30ms to 59ms
 vector<int> timestamps;

 for (int i = 0; i < num_frames; i++) {
 int timestamp;
 if (i == 0)
 timestamp = 0;
 else
 {
 int random = 30 + (rand() % 30);
 timestamp = timestamps[i-1] + random;
 }

 timestamps.push_back(timestamp);
 }

 // Populate frames with BGR byte arrays

 // Initialize FFmpeg
 //av_register_all();

 // Set up output file
 AVFormatContext* outFormatCtx = nullptr;
 //AVCodec* outCodec = nullptr;
 AVCodecContext* outCodecCtx = nullptr;
 //AVStream* outStream = nullptr;
 //AVPacket outPacket;

 const char* outFile = "output.mp4";
 int outWidth = frames[0].cols;
 int outHeight = frames[0].rows;
 int fps = 25;

 // Open the output file context
 avformat_alloc_output_context2(&outFormatCtx, nullptr, nullptr, outFile);
 if (!outFormatCtx) {
 cerr << "Error: Could not allocate output format context" << endl;
 return -1;
 }

 // Open the output file
 if (avio_open(&outFormatCtx->pb, outFile, AVIO_FLAG_WRITE) < 0) {
 cerr << "Error opening output file" << std::endl;
 return -1;
 }

 // Set up output codec
 const AVCodec* outCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
 if (!outCodec) {
 cerr << "Error: Could not find H.264 codec" << endl;
 return -1;
 }

 outCodecCtx = avcodec_alloc_context3(outCodec);
 if (!outCodecCtx) {
 cerr << "Error: Could not allocate output codec context" << endl;
 return -1;
 }
 outCodecCtx->codec_id = AV_CODEC_ID_H264;
 outCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
 outCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
 outCodecCtx->width = outWidth;
 outCodecCtx->height = outHeight;
 //outCodecCtx->time_base = { 1, fps*1000 }; // 25000
 outCodecCtx->time_base = { 1, fps }; // i.e. {1, 25}
 outCodecCtx->framerate = {fps, 1}; // 25
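 // With time_base = {1, 25}, one pts tick corresponds to exactly one frame duration (1/25 s);
 // framerate is simply the inverse of the time base for constant-frame-rate video.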
 outCodecCtx->bit_rate = 4000000;

 //https://github.com/leandromoreira/ffmpeg-libav-tutorial
 //We set the flag AV_CODEC_FLAG_GLOBAL_HEADER which tells the encoder that it can use the global headers.
 if (outFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
 {
 outCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; //
 }

 // Open output codec
 if (avcodec_open2(outCodecCtx, outCodec, nullptr) < 0) {
 cerr << "Error: Could not open output codec" << endl;
 return -1;
 }

 // Create output stream
 AVStream* outStream = avformat_new_stream(outFormatCtx, outCodec);
 if (!outStream) {
 cerr << "Error: Could not allocate output stream" << endl;
 return -1;
 }

 // Configure output stream parameters (e.g., time base, codec parameters, etc.)
 // ...

 // Connect output stream to format context
 outStream->codecpar->codec_id = outCodecCtx->codec_id;
 outStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 outStream->codecpar->width = outCodecCtx->width;
 outStream->codecpar->height = outCodecCtx->height;
 outStream->codecpar->format = outCodecCtx->pix_fmt;
 outStream->time_base = outCodecCtx->time_base;
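 // Note: avcodec_parameters_from_context() below fills all of these codecpar fields
 // from outCodecCtx in a single call, so the manual assignments above are redundant
 // (the stream time_base, however, is not copied by that call).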

 int ret = avcodec_parameters_from_context(outStream->codecpar, outCodecCtx);
 if (ret < 0) {
 cerr << "Error: Could not copy codec parameters to output stream" << endl;
 return -1;
 }

 outStream->avg_frame_rate = outCodecCtx->framerate;
 //outStream->id = outFormatCtx->nb_streams++; <--- We shouldn't modify outStream->id

 ret = avformat_write_header(outFormatCtx, nullptr);
 if (ret < 0) {
 cerr << "Error: Could not write output header" << endl;
 return -1;
 }

 // Convert frames to YUV format and write to output file
 int frame_count = -1;
 for (const auto& frame : frames) {
 frame_count++;
 AVFrame* yuvFrame = av_frame_alloc();
 if (!yuvFrame) {
 cerr << "Error: Could not allocate YUV frame" << endl;
 return -1;
 }
 av_image_alloc(yuvFrame->data, yuvFrame->linesize, outWidth, outHeight, AV_PIX_FMT_YUV420P, 32);
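 // av_image_alloc() allocates one contiguous buffer for the three planes; it is not
 // owned by the AVFrame, so it must later be released with av_freep(&yuvFrame->data[0]);
 // av_frame_free() alone does not free it.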

 yuvFrame->width = outWidth;
 yuvFrame->height = outHeight;
 yuvFrame->format = AV_PIX_FMT_YUV420P;

 // Convert BGR frame to YUV format
 Mat yuvMat;
 cvtColor(frame, yuvMat, COLOR_BGR2YUV_I420);
 memcpy(yuvFrame->data[0], yuvMat.data, outWidth * outHeight);
 memcpy(yuvFrame->data[1], yuvMat.data + outWidth * outHeight, outWidth * outHeight / 4);
 memcpy(yuvFrame->data[2], yuvMat.data + outWidth * outHeight * 5 / 4, outWidth * outHeight / 4);
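 // Note: these memcpy calls assume yuvFrame->linesize[0] == outWidth and
 // yuvFrame->linesize[1] == yuvFrame->linesize[2] == outWidth / 2. That holds here
 // because 640 and 320 are multiples of the 32-byte alignment requested above; for
 // arbitrary widths the planes should be copied row by row (or converted with
 // sws_scale) while honouring the linesize values.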

 // Set up output packet
 //av_init_packet(&outPacket); //error C4996: 'av_init_packet': was declared deprecated
 AVPacket* outPacket = av_packet_alloc();
 // av_packet_alloc() already returns a zero-initialized packet, so no extra initialization is needed here.
 //outPacket->data = nullptr;
 //outPacket->size = 0;

 // set the frame pts, do I have to set the package pts?

 // yuvFrame->pts = av_rescale_q(timestamps[frame_count]*25, outCodecCtx->time_base, outStream->time_base); //Set PTS timestamp
 yuvFrame->pts = av_rescale_q(frame_count*frame_count, outCodecCtx->time_base, outStream->time_base); //Set PTS timestamp
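 // (Regarding the question above: the encoder derives pkt->pts / pkt->dts from the
 // frame's pts, so the packet timestamps normally do not need to be set by hand;
 // av_packet_rescale_ts() would only be needed if the codec time base and the
 // stream time base differ.)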

 // Encode frame and write to output file
 int ret = avcodec_send_frame(outCodecCtx, yuvFrame);
 if (ret < 0) {
 cerr << "Error: Could not send frame to output codec" << endl;
 return -1;
 }
 while (ret >= 0)
 {
 ret = avcodec_receive_packet(outCodecCtx, outPacket);

 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 {
 // EAGAIN: the encoder needs more input before it can produce another packet;
 // AVERROR_EOF: the encoder has been fully flushed.
 break;
 }
 else if (ret < 0)
 {
 cerr << "Error: Could not receive packet from output codec" << endl;
 return -1;
 }

 //av_packet_rescale_ts(&outPacket, outCodecCtx->time_base, outStream->time_base);

 outPacket->stream_index = outStream->index;

 outPacket->duration = av_rescale_q(1, outCodecCtx->time_base, outStream->time_base); // Set packet duration

 ret = av_interleaved_write_frame(outFormatCtx, outPacket);

 static int call_write = 0;

 call_write++;
 printf("av_interleaved_write_frame %d\n", call_write);

 av_packet_unref(outPacket);
 if (ret < 0) {
 cerr << "Error: Could not write packet to output file" << endl;
 return -1;
 }
 }

 av_packet_free(&outPacket); // release the packet allocated for this frame
 av_freep(&yuvFrame->data[0]); // release the buffer from av_image_alloc()
 av_frame_free(&yuvFrame);
 }

 // Flush the encoder
 ret = avcodec_send_frame(outCodecCtx, nullptr);
 if (ret < 0) {
 std::cerr << "Error flushing encoder: " << std::endl;
 return -1;
 }

 while (ret >= 0) {
 AVPacket* pkt = av_packet_alloc();
 if (!pkt) {
 std::cerr << "Error allocating packet" << std::endl;
 return -1;
 }
 ret = avcodec_receive_packet(outCodecCtx, pkt);

 // Write the packet to the output file
 if (ret == 0)
 {
 pkt->stream_index = outStream->index;
 pkt->duration = av_rescale_q(1, outCodecCtx->time_base, outStream->time_base); // <---- Set packet duration
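 // Note: the av_interleaved_write_frame() call below is not counted by the
 // call_write counter used in the per-frame loop above, so packets emitted
 // while flushing never show up in the printed count.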
 ret = av_interleaved_write_frame(outFormatCtx, pkt);
 av_packet_unref(pkt);
 if (ret < 0) {
 std::cerr << "Error writing packet to output file: " << std::endl;
 return -1;
 }
 }
 av_packet_free(&pkt); // free the packet allocated at the top of this iteration
 }


 // Write output trailer
 av_write_trailer(outFormatCtx);

 // Clean up
 avcodec_free_context(&outCodecCtx);
 avio_closep(&outFormatCtx->pb); // close the file opened with avio_open()
 avformat_free_context(outFormatCtx);

 return 0;
}



Note that I have used the ffprobe tool (one of the tools shipped with ffmpeg) to inspect the generated mp4 file.

I see that the mp4 file has 100 frames and 100 packets, but in my code I have these lines:


static int call_write = 0;

 call_write++;
 printf("av_interleaved_write_frame %d\n", call_write);



I see that the av_interleaved_write_frame function is only called 50 times, not the expected 100 times. Can anyone explain this?
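
One thing worth noting when re-reading the code: the flush loop at the end also calls av_interleaved_write_frame for every packet the encoder still had buffered (x264 holds frames back in its lookahead / B-frame queue), but the call_write counter only lives inside the per-frame loop, so those writes are never printed. As a rough check, here is a minimal sketch of a helper that counts every packet actually written, usable both after each avcodec_send_frame call and after sending the null frame to flush; drain_and_count is a hypothetical name and not part of the code above:

static int drain_and_count(AVCodecContext* enc, AVFormatContext* fmt, AVStream* st)
{
    int written = 0;
    AVPacket* pkt = av_packet_alloc();
    if (!pkt)
        return -1;
    for (;;) {
        int ret = avcodec_receive_packet(enc, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break; // nothing (more) buffered right now
        if (ret < 0)
            break; // real error; a complete program would report it
        pkt->stream_index = st->index;
        if (av_interleaved_write_frame(fmt, pkt) >= 0)
            written++; // count every packet that actually reaches the muxer
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    return written;
}

If the per-frame totals plus the flush total add up to the 100 packets that ffprobe reports, the "missing" 50 calls were simply made during the flush, where they were not counted.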

Thanks.


BTW, from the ffmpeg documentation (see here: "For video, it should typically contain one compressed frame"), I understand that a packet normally contains one video frame, so ffprobe's result looks correct.

Here are the commands I used to inspect the mp4 file:


ffprobe -show_frames output.mp4 >> frames.txt
ffprobe -show_packets output.mp4 >> packets.txt



My testing code is derived from an answer to another question here: avformat_write_header() function call crashed when I try to save several RGB data to a output.mp4 file


-
'ffmpeg' or 'handbrake cli' for video conversion on server? [closed]
18 June 2013, by AaronJiang. I want to convert videos on my server using the command line, mainly mp4 -> flv or flv -> mp4. I googled and found these two products, 'ffmpeg' and 'HandBrake CLI'.
https://trac.handbrake.fr/wiki/CLIGuide
Which one is better?
Also, I am running on an Ubuntu server.
-
How to implement HTTP Live Streaming server on Unix ?
11 July 2019, by alex. I just realized that Apple requires HTTP Live Streaming in order to view videos in iPhone apps. I was not aware of this before... I am now trying to understand what this involves, so I can decide whether I want to do the work and make the videos available over 3G, or limit video playback to users who are connected to Wi-Fi.
I read the overview provided by Apple, and now understand that my server needs to segment and index my media files. I also understand that I don't have to host the content to be able to stream it (I can point to a video hosted somewhere else, right?).
What's not clear to me at this point is what to implement on my server (Ubuntu Hardy) to do the actual segmenting and indexing on the fly (once again, I do not host the videos I want to serve).
I found a link explaining how to install FFmpeg and X264, but I don't know if this is the best solution (since I have an Ubuntu server, I can't use the Apple Live Streaming tools, is that correct?). Also, I do not understand at which point my server knows that a video needs to be converted and starts the job...
Any feedback that could help me understand exactly what to do on the server side to be able to stream videos in my iPhone app over 3G would be greatly appreciated! (Oh, and in case it makes any difference, my app back-end is in Rails.)