
Other articles (72)
-
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community. -
Retrieving information from the master site when installing an instance
26 November 2010
Purpose
On the main site, a shared-hosting (mutualisation) instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who is the only one allowed to finalise the creation of the instance.
It can therefore make sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...) -
The farm's recurring Cron tasks
1 December 2010
Managing the farm relies on several repetitive tasks, known as Cron tasks, that run at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance in the shared hosting on a regular basis. Combined with a system Cron on the central site, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
On other sites (8375)
-
FFMPEG and STB_Image Create awful Picture
9 February 2023, by murage kibicho
I was learning how to use the FFmpeg C API and was trying to encode a JPEG into an MPEG file. I load the JPEG into an unsigned char * using the stb_image library, then copy the RGB values into a uint8_t * buffer. Finally, I convert RGB to YUV420 using sws_scale. However, a portion of my image blurs out when I perform the encoding.

This is the original image (image not reproduced here).

Perhaps I allocate my frame buffer incorrectly?

ret = av_frame_get_buffer(frame, 0);




This is my entire program


#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"
#define STB_IMAGE_RESIZE_IMPLEMENTATION
#include "stb_image_resize.h"
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>
//gcc stack.c -lm -o stack.o `pkg-config --cflags --libs libavformat libavcodec libswresample libswscale libavutil` && ./stack.o

/*
int i : pts of current frame
*/
void PictureToFrame(int i, AVFrame *frame, int height, int width)
{
 //Use stb image to get rgb values
 char *fileName = "profil.jpeg";
 int imageHeight = 0;
 int imageWidth = 0;
 int colorChannels = 0;
 int arrayLength = 0;
 unsigned char *image = stbi_load(fileName,&imageWidth,&imageHeight,&colorChannels,0);
 
 printf("(height: %d, width: %d)\n",imageHeight, imageWidth);
 assert(colorChannels == 3 && imageHeight == height && imageWidth == width);
 
 //Convert unsigned char * to uint8_t *
 arrayLength = imageHeight * imageWidth * colorChannels;
 uint8_t *rgb = calloc(arrayLength, sizeof(uint8_t));
 //Plain element copy; the loop index is named j so it does not shadow the pts parameter i
 for(int j = 0; j < arrayLength; j++)
 {
 rgb[j] = (uint8_t) image[j];
 }
 
 //Use SwsContext to scale RGB to YUV420P and write to frame
 const int in_linesize[1] = { 3* imageWidth};
 struct SwsContext *sws_context = NULL;
 sws_context = sws_getCachedContext(sws_context,
 imageWidth, imageHeight, AV_PIX_FMT_RGB24,
 imageWidth, imageHeight, AV_PIX_FMT_YUV420P,
 0, 0, 0, 0);
 sws_scale(sws_context, (const uint8_t * const *)&rgb, in_linesize, 0,
 imageHeight, frame->data, frame->linesize);
 //Save frame pts
 frame->pts = i;
 
 //Free alloc'd data
 stbi_image_free(image);
 sws_freeContext(sws_context);
 free(rgb);
}
static void encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt, FILE *outfile)
{
 int returnValue;
 /* send the frame to the encoder */
 if(frame)
 {
 printf("Send frame %3"PRId64"\n", frame->pts);
 }
 returnValue = avcodec_send_frame(enc_ctx, frame);
 if(returnValue < 0)
 {
 printf("Error sending a frame for encoding\n");
 return;
 }
 while(returnValue >= 0)
 {
 returnValue = avcodec_receive_packet(enc_ctx, pkt);
 if(returnValue == AVERROR(EAGAIN) || returnValue == AVERROR_EOF)
 {
 return;
 }
 else if(returnValue < 0)
 {
 printf("Error during encoding\n");
 return;
 }

 printf("Write packet %3"PRId64" (size=%5d)\n", pkt->pts, pkt->size);
 fwrite(pkt->data, 1, pkt->size, outfile);
 av_packet_unref(pkt);
 }
}


int main(int argc, char **argv)
{
 const char *filename, *codec_name;
 const AVCodec *codec;
 AVCodecContext *c= NULL;
 int i, ret;
 FILE *f;
 AVFrame *frame;
 AVPacket *pkt;
 uint8_t endcode[] = { 0, 0, 1, 0xb7 };

 filename = "outo.mp4";
 codec_name = "mpeg1video";//"mpeg1video";//"libx264";


 /* find the mpeg1video encoder */
 codec = avcodec_find_encoder_by_name(codec_name);
 if(!codec)
 {
 printf("Error finding codec\n");
 return 0;
 }

 c = avcodec_alloc_context3(codec);
 if(!c)
 {
 printf("Error allocating c\n");
 return 0;
 }

 pkt = av_packet_alloc();
 if(!pkt)
 {
 printf("Error allocating pkt\n");
 return 0;
 }

 /* put sample parameters */
 c->bit_rate = 400000;
 /* resolution must be a multiple of two */
 c->width = 800;
 c->height = 800;
 /* frames per second */
 c->time_base = (AVRational){1, 25};
 c->framerate = (AVRational){25, 1};
 c->gop_size = 10;
 c->max_b_frames = 1;
 c->pix_fmt = AV_PIX_FMT_YUV420P;

 if(codec->id == AV_CODEC_ID_H264)
 {
 av_opt_set(c->priv_data, "preset", "slow", 0);
 }
 

 /* open it */
 ret = avcodec_open2(c, codec, NULL);
 if(ret < 0) 
 {
 printf("Error opening codec\n");
 return 0;
 }

 f = fopen(filename, "wb");
 if(!f)
 {
 printf("Error opening file\n");
 return 0;
 }

 frame = av_frame_alloc();
 if(!frame)
 {
 printf("Error allocating frame\n");
 return 0;
 }
 frame->format = c->pix_fmt;
 frame->width = c->width;
 frame->height = c->height;

 //I suspect this is the problem
 ret = av_frame_get_buffer(frame, 0);
 if(ret < 0)
 {
 fprintf(stderr, "Could not allocate the video frame data\n");
 exit(1);
 }

 /* encode 25 frames*/
 for(i = 0; i < 25; i++) 
 {

 /* make sure the frame data is writable */
 ret = av_frame_make_writable(frame);
 if(ret < 0)
 {
 return 0;
 }
 //Fill the frame with picture data
 PictureToFrame(i, frame, c->height, c->width);

 /* encode the image */
 encode(c, frame, pkt, f);
 }

 /* flush the encoder */
 encode(c, NULL, pkt, f);

 /* add sequence end code to have a real MPEG file */
 if (codec->id == AV_CODEC_ID_MPEG1VIDEO || codec->id == AV_CODEC_ID_MPEG2VIDEO)
 fwrite(endcode, 1, sizeof(endcode), f);
 fclose(f);

 avcodec_free_context(&c);
 av_frame_free(&frame);
 av_packet_free(&pkt);

 return 0;
}




-
How AVCodecContext bitrate, framerate and timebase is used when encoding single frame
28 March 2023, by Cyrus
I am trying to learn FFmpeg from examples, as there is a tight schedule. The task is to encode a raw YUV image into JPEG format at a given width and height. I found examples on the official FFmpeg website, which turned out to be quite straightforward. However, there are some fields in AVCodecContext that I thought only make sense when encoding video (e.g. bitrate, framerate, timebase, gop_size, max_b_frames, etc.).


I understand on a high level what those values mean for video, but do I need to care about them when I just want a single image? Currently, for testing, I am setting them to dummy values and it seems to work. But I want to make sure I am not making terrible assumptions that will break in the long run.


EDIT:


Here is the code I have. Most of it is copied and pasted from the examples, with some changes to replace old APIs with newer ones.


#include "thumbnail.h"
#include "libavcodec/avcodec.h"
#include "libavutil/imgutils.h"
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

void print_averror(int error_code) {
 char err_msg[100] = {0};
 av_strerror(error_code, err_msg, 100);
 printf("Reason: %s\n", err_msg);
}

ffmpeg_status_t save_yuv_as_jpeg(uint8_t* source_buffer, char* output_thumbnail_filename, int thumbnail_width, int thumbnail_height) {
 const AVCodec* mjpeg_codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
 if (!mjpeg_codec) {
 printf("Codec for mjpeg cannot be found.\n");
 return FFMPEG_THUMBNAIL_CODEC_NOT_FOUND;
 }

 AVCodecContext* codec_ctx = avcodec_alloc_context3(mjpeg_codec);
 if (!codec_ctx) {
 printf("Codec context cannot be allocated for the given mjpeg codec.\n");
 return FFMPEG_THUMBNAIL_ALLOC_CONTEXT_FAILED;
 }

 AVPacket* pkt = av_packet_alloc();
 if (!pkt) {
 printf("Thumbnail packet cannot be allocated.\n");
 return FFMPEG_THUMBNAIL_ALLOC_PACKET_FAILED;
 }

 AVFrame* frame = av_frame_alloc();
 if (!frame) {
 printf("Thumbnail frame cannot be allocated.\n");
 return FFMPEG_THUMBNAIL_ALLOC_FRAME_FAILED;
 }

 // The part that I don't understand
 codec_ctx->bit_rate = 400000;
 codec_ctx->width = thumbnail_width;
 codec_ctx->height = thumbnail_height;
 codec_ctx->time_base = (AVRational){1, 25};
 codec_ctx->framerate = (AVRational){1, 25};

 codec_ctx->gop_size = 10;
 codec_ctx->max_b_frames = 1;
 codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;
 int ret = av_image_fill_arrays(frame->data, frame->linesize, source_buffer, AV_PIX_FMT_YUV420P, thumbnail_width, thumbnail_height, 32);
 if (ret < 0) {
 print_averror(ret);
 printf("Pixel format: yuv420p, width: %d, height: %d\n", thumbnail_width, thumbnail_height);
 return FFMPEG_THUMBNAIL_FILL_FRAME_DATA_FAILED;
 }

 ret = avcodec_send_frame(codec_ctx, frame);
 if (ret < 0) {
 print_averror(ret);
 printf("Failed to send frame to encoder.\n");
 return FFMPEG_THUMBNAIL_FILL_SEND_FRAME_FAILED;
 }

 ret = avcodec_receive_packet(codec_ctx, pkt);
 if (ret < 0) {
 print_averror(ret);
 printf("Failed to receive packet from encoder.\n");
 return FFMPEG_THUMBNAIL_FILL_SEND_FRAME_FAILED;
 }

 // store the thumbnail in output
 int fd = open(output_thumbnail_filename, O_CREAT | O_RDWR, 0644); // O_CREAT requires a mode argument
 write(fd, pkt->data, pkt->size);
 close(fd);

 // freeing allocated structs
 avcodec_free_context(&codec_ctx);
 av_frame_free(&frame);
 av_packet_free(&pkt);
 return FFMPEG_SUCCESS;
}



-
How to publish a self-made stream with ffmpeg and C++ to an RTMP server?
10 January 2024, by rLino
Have a nice day, people!



I am writing an application for Windows that will capture the screen and send the stream to a Wowza server via RTMP (for broadcasting). My application uses ffmpeg and Qt.
I capture the screen with WinAPI, convert the buffer to YUV444 (because it's simplest) and encode the frame as described in the file decoding_encoding.c (from the FFmpeg examples):



///////////////////////////
//Encoder initialization
///////////////////////////
avcodec_register_all();
codec=avcodec_find_encoder(AV_CODEC_ID_H264);
c = avcodec_alloc_context3(codec);
c->width=scr_width;
c->height=scr_height;
c->bit_rate = 400000;
int base_num=1;
int base_den=1;//for one frame per second
c->time_base= (AVRational){base_num,base_den};
c->gop_size = 10;
c->max_b_frames=1;
c->pix_fmt = AV_PIX_FMT_YUV444P;
av_opt_set(c->priv_data, "preset", "slow", 0);

frame = avcodec_alloc_frame();
frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;

for(int counter=0;counter<10;counter++)
{
///////////////////////////
//Capturing Screen
///////////////////////////
 GetCapScr(shotbuf,scr_width,scr_height);//result: shotbuf is filled by screendata from HBITMAP
///////////////////////////
//Convert buffer to YUV444 (standard formula)
//It's handmade function because of problems with prepare buffer to swscale from HBITMAP
///////////////////////////
 RGBtoYUV(shotbuf,frame->linesize,frame->data,scr_width,scr_height);//result in frame->data
///////////////////////////
//Encode Screenshot
///////////////////////////
 av_init_packet(&pkt);
 pkt.data = NULL; // packet data will be allocated by the encoder
 pkt.size = 0;
 frame->pts = counter;
 avcodec_encode_video2(c, &pkt, frame, &got_output);
 if (got_output) 
 {
 //I think that sending packet by rtmp must be here!
 av_free_packet(&pkt); 

 }

}
// Get the delayed frames
for (int got_output = 1,i=0; got_output; i++)
{
 ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
 if (ret < 0)
 {
 fprintf(stderr, "Error encoding frame\n");
 exit(1);
 }
 if (got_output)
 {
 //I think that sending packet by rtmp must be here!
 av_free_packet(&pkt); 
 }
}

///////////////////////////
//Deinitialize encoder
///////////////////////////
avcodec_close(c);
av_free(c);
av_freep(&frame->data[0]);
avcodec_free_frame(&frame);




I need to send the video stream generated by this code to an RTMP server.
In other words, I need a C/C++ analogue of this command:



ffmpeg -re -i "sample.h264" -f flv rtmp://sample.url.com/screen/test_stream




That command works, but I don't want to save the stream to a file; I want to use the ffmpeg libraries for realtime encoding of the screen capture and for sending the encoded frames to an RTMP server inside my own application.
Please give me a small example of how to initialize AVFormatContext properly and send my encoded video AVPackets to the server.



Thanks.