
Other articles (18)
-
The SPIPmotion queue
28 November 2010 — A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
-
Contribute to documentation
13 April 2011 — Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
To contribute, register to the project users' mailing (...)
-
Selection of projects using MediaSPIP
2 May 2011 — The examples below are representative of specific uses of MediaSPIP for specific projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen such associations. Its members (...)
On other sites (4085)
-
Encoding frames to video with ffmpeg
5 September 2017, by Mher Didaryan — I am trying to encode a video in Unreal Engine 4 with C++. I have access to the separate frames. Below is the code which reads the viewport's displayed pixels and stores them in a buffer.

//Safely get render target resource.
FRenderTarget* RenderTarget = TextureRenderTarget->GameThread_GetRenderTargetResource();
FIntPoint Size = RenderTarget->GetSizeXY();
auto ImageBytes = Size.X * Size.Y * static_cast<int32>(sizeof(FColor));
TArray<uint8> RawData;
RawData.AddUninitialized(ImageBytes);
//Get image raw data.
if (!RenderTarget->ReadPixelsPtr((FColor*)RawData.GetData()))
{
    RawData.Empty();
    UE_LOG(ExportRenderTargetBPFLibrary, Error, TEXT("ExportRenderTargetAsImage: Failed to get raw data."));
    return false;
}
Buffer::getInstance().add(RawData);
Unreal Engine has IImageWrapperModule, with which you can get an image from a frame, but nothing for video encoding. What I want is to encode frames in real time for a live streaming service. I found this post, Encoding a screenshot into a video using FFMPEG, which is close to what I want, but I have problems adapting that solution to my case. The code is outdated (for example avcodec_encode_video changed to avcodec_encode_video2, with different parameters). Below is the code of the encoder.
void Compressor::DoWork()
{
    AVCodec* codec;
    AVCodecContext* c = NULL;
    //uint8_t* outbuf;
    //int /*i, out_size,*/ outbuf_size;
    UE_LOG(LogTemp, Warning, TEXT("encoding"));
    codec = avcodec_find_encoder(AV_CODEC_ID_MPEG1VIDEO); // finding the H264 encoder
    if (!codec) {
        UE_LOG(LogTemp, Warning, TEXT("codec not found"));
        exit(1);
    }
    else UE_LOG(LogTemp, Warning, TEXT("codec found"));
    c = avcodec_alloc_context3(codec);
    c->bit_rate = 400000;
    c->width = 1280;                  // resolution must be a multiple of two: (1280x720), (1920x1080), (720x480)
    c->height = 720;
    c->time_base.num = 1;             // framerate numerator
    c->time_base.den = 25;            // framerate denominator
    c->gop_size = 10;                 // emit one intra frame every ten frames
    c->max_b_frames = 1;              // maximum number of b-frames between non b-frames
    c->keyint_min = 1;                // minimum GOP size
    c->i_quant_factor = (float)0.71;  // qscale factor between P and I frames
    //c->b_frame_strategy = 20;       ///// find out exactly what this does
    c->qcompress = (float)0.6;        ///// find out exactly what this does
    c->qmin = 20;                     // minimum quantizer
    c->qmax = 51;                     // maximum quantizer
    c->max_qdiff = 4;                 // maximum quantizer difference between frames
    c->refs = 4;                      // number of reference frames
    c->trellis = 1;                   // trellis RD quantization
    c->pix_fmt = AV_PIX_FMT_YUV420P;  // universal pixel format for video encoding
    c->codec_id = AV_CODEC_ID_MPEG1VIDEO;
    c->codec_type = AVMEDIA_TYPE_VIDEO;
    if (avcodec_open2(c, codec, NULL) < 0) {
        UE_LOG(LogTemp, Warning, TEXT("could not open codec")); // opening the codec
        //exit(1);
    }
    else UE_LOG(LogTemp, Warning, TEXT("codec opened"));
    FString FinalFilename = FString("C:/Screen/sample.mpg");
    auto &PlatformFile = FPlatformFileManager::Get().GetPlatformFile();
    auto FileHandle = PlatformFile.OpenWrite(*FinalFilename, true);
    if (FileHandle)
    {
        delete FileHandle; // remove when ready
        UE_LOG(LogTemp, Warning, TEXT("file opened"));
        while (true)
        {
            UE_LOG(LogTemp, Warning, TEXT("removing from buffer"));
            int nbytes = avpicture_get_size(AV_PIX_FMT_YUV420P, c->width, c->height); // allocating outbuffer
            uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes * sizeof(uint8_t));
            AVFrame* inpic = av_frame_alloc();
            AVFrame* outpic = av_frame_alloc();
            outpic->pts = (int64_t)((float)1 * (1000.0 / ((float)(c->time_base.den))) * 90); // setting frame pts
            avpicture_fill((AVPicture*)inpic, (uint8_t*)Buffer::getInstance().remove().GetData(),
                           AV_PIX_FMT_PAL8, c->width, c->height); // fill image with input screenshot
            avpicture_fill((AVPicture*)outpic, outbuffer, AV_PIX_FMT_YUV420P, c->width, c->height); // clear output picture for buffer copy
            av_image_alloc(outpic->data, outpic->linesize, c->width, c->height, c->pix_fmt, 1);
            /*
            inpic->data[0] += inpic->linesize[0] * (screenHeight - 1); // flipping frame
            inpic->linesize[0] = -inpic->linesize[0];                  // flipping frame
            struct SwsContext* fooContext = sws_getContext(screenWidth, screenHeight, PIX_FMT_RGB32,
                                                           c->width, c->height, PIX_FMT_YUV420P,
                                                           SWS_FAST_BILINEAR, NULL, NULL, NULL);
            sws_scale(fooContext, inpic->data, inpic->linesize, 0, c->height, outpic->data, outpic->linesize); // converting frame size and format
            out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
            // save in file
            */
        }
        delete FileHandle;
    }
    else
    {
        UE_LOG(LogTemp, Warning, TEXT("Can't open file"));
    }
}

Can someone explain the flipping-frame part (why is it done?) and how to use the avcodec_encode_video2 function instead of avcodec_encode_video?
-
Writing image to RTP with ffmpeg
22 September 2017, by Gaulois94 — I am trying to send real-time images over the network efficiently. For this, I thought that the RTP protocol used for video streaming could be a good way to achieve it.
I actually tried this:
extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/opt.h>
#include <libavutil/channel_layout.h>
#include <libavutil/common.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavutil/samplefmt.h>
}
#include <iostream>
#include
#include
//Mainly based on https://stackoverflow.com/questions/40825300/ffmpeg-create-rtp-stream
int main()
{
    //Init ffmpeg
    avcodec_register_all();
    av_register_all();
    avformat_network_init();

    //Init the codec used to encode our given image
    AVCodecID codecID = AV_CODEC_ID_MPEG4;
    AVCodec* codec;
    AVCodecContext* codecCtx;
    codec = avcodec_find_encoder(codecID);
    codecCtx = avcodec_alloc_context3(codec);
    //codecCtx->bit_rate = 400000;
    codecCtx->width = 352;
    codecCtx->height = 288;
    codecCtx->time_base.num = 1;
    codecCtx->time_base.den = 25;
    codecCtx->gop_size = 25;
    codecCtx->max_b_frames = 1;
    codecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
    codecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
    if (codecID == AV_CODEC_ID_H264)
    {
        av_opt_set(codecCtx->priv_data, "preset", "ultrafast", 0);
        av_opt_set(codecCtx->priv_data, "tune", "zerolatency", 0);
    }
    avcodec_open2(codecCtx, codec, NULL);

    //Init the Frame containing our raw data
    AVFrame* frame;
    frame = av_frame_alloc();
    frame->format = codecCtx->pix_fmt;
    frame->width = codecCtx->width;
    frame->height = codecCtx->height;
    av_image_alloc(frame->data, frame->linesize, frame->width, frame->height, codecCtx->pix_fmt, 32);

    //Init the format context
    AVFormatContext* fmtCtx = avformat_alloc_context();
    AVOutputFormat* format = av_guess_format("rtp", NULL, NULL);
    avformat_alloc_output_context2(&fmtCtx, format, format->name, "rtp://127.0.0.1:49990");
    avio_open(&fmtCtx->pb, fmtCtx->filename, AVIO_FLAG_WRITE);

    //Configure the AVStream for the output format context
    struct AVStream* stream = avformat_new_stream(fmtCtx, codec);
    avcodec_parameters_from_context(stream->codecpar, codecCtx);
    stream->time_base.num = 1;
    stream->time_base.den = 25;

    /* Write the header */
    avformat_write_header(fmtCtx, NULL);

    /* Write an SDP file for VLC */
    char buf[200000];
    AVFormatContext *ac[] = { fmtCtx };
    av_sdp_create(ac, 1, buf, 20000);
    printf("sdp:\n%s\n", buf);
    FILE* fsdp = fopen("test.sdp", "w");
    fprintf(fsdp, "%s", buf);
    fclose(fsdp);

    AVPacket pkt;
    int j = 0;
    for (int i = 0; i < 10000; i++)
    {
        fflush(stdout);
        av_init_packet(&pkt);
        pkt.data = NULL; // packet data will be allocated by the encoder
        pkt.size = 0;

        int R, G, B;
        R = G = B = i % 255;
        int Y = 0.257 * R + 0.504 * G + 0.098 * B + 16;
        int U = -0.148 * R - 0.291 * G + 0.439 * B + 128;
        int V = 0.439 * R - 0.368 * G - 0.071 * B + 128;

        /* prepare a dummy image */
        /* Y */
        for (int y = 0; y < codecCtx->height; y++)
            for (int x = 0; x < codecCtx->width; x++)
                frame->data[0][y * codecCtx->width + x] = Y;
        for (int y = 0; y < codecCtx->height / 2; y++)
            for (int x = 0; x < codecCtx->width / 2; x++)
            {
                frame->data[1][y * frame->linesize[1] + x] = U;
                frame->data[2][y * frame->linesize[2] + x] = V;
            }

        /* Which frame is it? */
        frame->pts = i;

        /* Send the frame to the codec */
        avcodec_send_frame(codecCtx, frame);

        /* Get the encoded data from the codec into the AVPacket */
        switch (avcodec_receive_packet(codecCtx, &pkt))
        {
            case AVERROR_EOF:
                printf("Stream EOF\n");
                break;
            case AVERROR(EAGAIN):
                printf("Stream EAGAIN\n");
                break;
            default:
                printf("Write frame %3d (size=%5d)\n", j++, pkt.size);
                /* Write the data of the packet to the output format */
                av_interleaved_write_frame(fmtCtx, &pkt);
                /* Reset the packet */
                av_packet_unref(&pkt);
                break;
        }
        usleep(1e6 / 25);
    }

    // end
    avcodec_send_frame(codecCtx, NULL);

    //Free everything
    av_free(codecCtx);
    av_free(fmtCtx);
    return 0;
}
With VLC I can see one image, but not a video (I have to reload it to see another image in grayscale).
Does someone know why VLC doesn't play my video properly? Thank you!
-
C++ FFmpeg create mp4 file
1 February 2021, by DDovzhenko — I'm trying to create an mp4 video file with FFmpeg and C++, but the resulting file is broken (Windows player shows "Can't play ... 0xc00d36c4"). If I create a .h264 file instead, it can be played with ffplay and successfully converted to mp4 via the command line.



My code:



int main() {
 char *filename = "tmp.mp4";
 AVOutputFormat *fmt;
 AVFormatContext *fctx;
 AVCodecContext *cctx;
 AVStream *st;

 av_register_all();
 avcodec_register_all();

 //auto detect the output format from the name
 fmt = av_guess_format(NULL, filename, NULL);
 if (!fmt) {
 cout << "Error av_guess_format()" << endl; system("pause"); exit(1);
 }

 if (avformat_alloc_output_context2(&fctx, fmt, NULL, filename) < 0) {
 cout << "Error avformat_alloc_output_context2()" << endl; system("pause"); exit(1);
 }


 //stream creation + parameters
 st = avformat_new_stream(fctx, 0);
 if (!st) {
 cout << "Error avformat_new_stream()" << endl; system("pause"); exit(1);
 }

 st->codecpar->codec_id = fmt->video_codec;
 st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 st->codecpar->width = 352;
 st->codecpar->height = 288;
 st->time_base.num = 1;
 st->time_base.den = 25;

 AVCodec *pCodec = avcodec_find_encoder(st->codecpar->codec_id);
 if (!pCodec) {
 cout << "Error avcodec_find_encoder()" << endl; system("pause"); exit(1);
 }

 cctx = avcodec_alloc_context3(pCodec);
 if (!cctx) {
 cout << "Error avcodec_alloc_context3()" << endl; system("pause"); exit(1);
 }

 avcodec_parameters_to_context(cctx, st->codecpar);
 cctx->bit_rate = 400000;
 cctx->width = 352;
 cctx->height = 288;
 cctx->time_base.num = 1;
 cctx->time_base.den = 25;
 cctx->gop_size = 12;
 cctx->pix_fmt = AV_PIX_FMT_YUV420P;
 if (st->codecpar->codec_id == AV_CODEC_ID_H264) {
 av_opt_set(cctx->priv_data, "preset", "ultrafast", 0);
 }
 if (fctx->oformat->flags & AVFMT_GLOBALHEADER) {
 cctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 }
 avcodec_parameters_from_context(st->codecpar, cctx);

 av_dump_format(fctx, 0, filename, 1);

 //OPEN FILE + WRITE HEADER
 if (avcodec_open2(cctx, pCodec, NULL) < 0) {
 cout << "Error avcodec_open2()" << endl; system("pause"); exit(1);
 }
 if (!(fmt->flags & AVFMT_NOFILE)) {
 if (avio_open(&fctx->pb, filename, AVIO_FLAG_WRITE) < 0) {
 cout << "Error avio_open()" << endl; system("pause"); exit(1);
 }
 }
 if (avformat_write_header(fctx, NULL) < 0) {
 cout << "Error avformat_write_header()" << endl; system("pause"); exit(1);
 }


 //CREATE DUMMY VIDEO
 AVFrame *frame = av_frame_alloc();
 frame->format = cctx->pix_fmt;
 frame->width = cctx->width;
 frame->height = cctx->height;
 av_image_alloc(frame->data, frame->linesize, cctx->width, cctx->height, cctx->pix_fmt, 32);

 AVPacket pkt;
 double video_pts = 0;
 for (int i = 0; i < 50; i++) {
 video_pts = (double)cctx->time_base.num / cctx->time_base.den * 90 * i;

 for (int y = 0; y < cctx->height; y++) {
 for (int x = 0; x < cctx->width; x++) {
 frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
 if (y < cctx->height / 2 && x < cctx->width / 2) {
 /* Cb and Cr */
 frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
 frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
 }
 }
 }

 av_init_packet(&pkt);
 pkt.flags |= AV_PKT_FLAG_KEY;
 pkt.pts = frame->pts = video_pts;
 pkt.data = NULL;
 pkt.size = 0;
 pkt.stream_index = st->index;

 if (avcodec_send_frame(cctx, frame) < 0) {
 cout << "Error avcodec_send_frame()" << endl; system("pause"); exit(1);
 }
 if (avcodec_receive_packet(cctx, &pkt) == 0) {
 //cout << "Write frame " << to_string((int) pkt.pts) << endl;
 av_interleaved_write_frame(fctx, &pkt);
 av_packet_unref(&pkt);
 }
 }

 //DELAYED FRAMES
 for (;;) {
 avcodec_send_frame(cctx, NULL);
 if (avcodec_receive_packet(cctx, &pkt) == 0) {
 //cout << "-Write frame " << to_string((int)pkt.pts) << endl;
 av_interleaved_write_frame(fctx, &pkt);
 av_packet_unref(&pkt);
 }
 else {
 break;
 }
 }

 //FINISH
 av_write_trailer(fctx);
 if (!(fmt->flags & AVFMT_NOFILE)) {
 if (avio_close(fctx->pb) < 0) {
 cout << "Error avio_close()" << endl; system("pause"); exit(1);
 }
 }
 av_frame_free(&frame);
 avcodec_free_context(&cctx);
 avformat_free_context(fctx);

 system("pause");
 return 0;
}




Output of the program:



Output #0, mp4, to 'tmp.mp4':
 Stream #0:0: Video: h264, yuv420p, 352x288, q=2-31, 400 kb/s, 25 tbn
[libx264 @ 0000021c4a995ba0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000021c4a995ba0] profile Constrained Baseline, level 2.0
[libx264 @ 0000021c4a995ba0] 264 - core 152 r2851 ba24899 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=12 keyint_min=1 scenecut=0 intra_refresh=0 rc=abr mbtree=0 bitrate=400 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
[libx264 @ 0000021c4a995ba0] frame I:5 Avg QP: 7.03 size: 9318
[libx264 @ 0000021c4a995ba0] frame P:45 Avg QP: 4.53 size: 4258
[libx264 @ 0000021c4a995ba0] mb I I16..4: 100.0% 0.0% 0.0%
[libx264 @ 0000021c4a995ba0] mb P I16..4: 0.0% 0.0% 0.0% P16..4: 100.0% 0.0% 0.0% 0.0% 0.0% skip: 0.0%
[libx264 @ 0000021c4a995ba0] final ratefactor: 9.11
[libx264 @ 0000021c4a995ba0] coded y,uvDC,uvAC intra: 18.9% 21.8% 14.5% inter: 7.8% 100.0% 15.5%
[libx264 @ 0000021c4a995ba0] i16 v,h,dc,p: 4% 5% 5% 86%
[libx264 @ 0000021c4a995ba0] i8c dc,h,v,p: 2% 9% 6% 82%
[libx264 @ 0000021c4a995ba0] kb/s:264.68




If I try to play the mp4 file with ffplay, it prints:



[mov,mp4,m4a,3gp,3g2,mj2 @ 00000000026bf900] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 352x288, 138953 kb/s): unspecified pixel format
[h264 @ 00000000006c6ae0] non-existing PPS 0 referenced
[h264 @ 00000000006c6ae0] decode_slice_header error
[h264 @ 00000000006c6ae0] no frame!




I've spent a lot of time without success in finding the issue; what could be the reason for it?



Thanks for the help!