
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre (The conservation of net art in the museum: the strategies at work)
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (56)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Possibility of deployment as a farm
12 April 2011. MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This allows, for example: sharing the setup costs between several projects/individuals; quickly deploying a multitude of unique sites; and avoiding having to put all the creations into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)
-
Adding user-specific information and other author-related behaviour changes
12 April 2011. The simplest way to add information to authors is to install the Inscription3 plugin. It also allows certain user-related behaviours to be modified (refer to its documentation for more information).
It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".
On other sites (11395)
-
Encoding frames to video with ffmpeg
5 September 2017, by Mher Didaryan
I am trying to encode a video in Unreal Engine 4 with C++. I have access to the individual frames. Below is the code that reads the viewport's displayed pixels and stores them in a buffer.

//Safely get render target resource.
FRenderTarget* RenderTarget = TextureRenderTarget->GameThread_GetRenderTargetResource();
FIntPoint Size = RenderTarget->GetSizeXY();
auto ImageBytes = Size.X * Size.Y * static_cast<int32>(sizeof(FColor));
TArray<uint8> RawData;
RawData.AddUninitialized(ImageBytes);

//Get image raw data.
if (!RenderTarget->ReadPixelsPtr((FColor*)RawData.GetData()))
{
    RawData.Empty();
    UE_LOG(ExportRenderTargetBPFLibrary, Error, TEXT("ExportRenderTargetAsImage: Failed to get raw data."));
    return false;
}

Buffer::getInstance().add(RawData);
Unreal Engine has IImageWrapperModule, with which you can get an image from a frame, but nothing for video encoding. What I want is to encode frames in real time, for a live streaming service.

I found the post Encoding a screenshot into a video using FFMPEG, which is close to what I want, but I have problems adapting that solution to my case. The code there is outdated (for example, avcodec_encode_video changed to avcodec_encode_video2, with different parameters). Below is the encoder code.
void Compressor::DoWork()
{
    AVCodec* codec;
    AVCodecContext* c = NULL;
    //uint8_t* outbuf;
    //int /*i, out_size,*/ outbuf_size;

    UE_LOG(LogTemp, Warning, TEXT("encoding"));
    codec = avcodec_find_encoder(AV_CODEC_ID_MPEG1VIDEO); // finding the encoder (MPEG-1 video here)
    if (!codec) {
        UE_LOG(LogTemp, Warning, TEXT("codec not found"));
        exit(1);
    }
    else UE_LOG(LogTemp, Warning, TEXT("codec found"));

    c = avcodec_alloc_context3(codec);
    c->bit_rate = 400000;
    c->width = 1280;                 // resolution must be a multiple of two: (1280x720), (1920x1080), (720x480)
    c->height = 720;
    c->time_base.num = 1;            // framerate numerator
    c->time_base.den = 25;           // framerate denominator
    c->gop_size = 10;                // emit one intra frame every ten frames
    c->max_b_frames = 1;             // maximum number of B-frames between non-B-frames
    c->keyint_min = 1;               // minimum GOP size
    c->i_quant_factor = (float)0.71; // qscale factor between P and I frames
    //c->b_frame_strategy = 20;      ///// find out exactly what this does
    c->qcompress = (float)0.6;       ///// find out exactly what this does
    c->qmin = 20;                    // minimum quantizer
    c->qmax = 51;                    // maximum quantizer
    c->max_qdiff = 4;                // maximum quantizer difference between frames
    c->refs = 4;                     // number of reference frames
    c->trellis = 1;                  // trellis RD quantization
    c->pix_fmt = AV_PIX_FMT_YUV420P; // universal pixel format for video encoding
    c->codec_id = AV_CODEC_ID_MPEG1VIDEO;
    c->codec_type = AVMEDIA_TYPE_VIDEO;

    if (avcodec_open2(c, codec, NULL) < 0) { // opening the codec
        UE_LOG(LogTemp, Warning, TEXT("could not open codec"));
        //exit(1);
    }
    else UE_LOG(LogTemp, Warning, TEXT("codec opened"));

    FString FinalFilename = FString("C:/Screen/sample.mpg");
    auto &PlatformFile = FPlatformFileManager::Get().GetPlatformFile();
    auto FileHandle = PlatformFile.OpenWrite(*FinalFilename, true);

    if (FileHandle)
    {
        delete FileHandle; // remove when ready
        UE_LOG(LogTemp, Warning, TEXT("file opened"));
        while (true)
        {
            UE_LOG(LogTemp, Warning, TEXT("removing from buffer"));

            int nbytes = avpicture_get_size(AV_PIX_FMT_YUV420P, c->width, c->height); // allocating outbuffer
            uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes * sizeof(uint8_t));

            AVFrame* inpic = av_frame_alloc();
            AVFrame* outpic = av_frame_alloc();

            outpic->pts = (int64_t)((float)1 * (1000.0 / ((float)(c->time_base.den))) * 90); // setting frame pts
            avpicture_fill((AVPicture*)inpic, (uint8_t*)Buffer::getInstance().remove().GetData(),
                           AV_PIX_FMT_PAL8, c->width, c->height);                            // fill image with input screenshot
            avpicture_fill((AVPicture*)outpic, outbuffer, AV_PIX_FMT_YUV420P, c->width, c->height); // clear output picture for buffer copy
            av_image_alloc(outpic->data, outpic->linesize, c->width, c->height, c->pix_fmt, 1);

            /*
            inpic->data[0] += inpic->linesize[0] * (screenHeight - 1); // flipping frame
            inpic->linesize[0] = -inpic->linesize[0];                  // flipping frame

            struct SwsContext* fooContext = sws_getContext(screenWidth, screenHeight, PIX_FMT_RGB32,
                c->width, c->height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
            sws_scale(fooContext, inpic->data, inpic->linesize, 0, c->height,
                      outpic->data, outpic->linesize); // converting frame size and format

            out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
            // save in file
            */
        }
        delete FileHandle;
    }
    else
    {
        UE_LOG(LogTemp, Warning, TEXT("Can't open file"));
    }
}
}

Can someone explain the frame flipping part (why is it done?) and how can the avcodec_encode_video2 function be used instead of avcodec_encode_video?
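On the two questions above: the "flipping frame" lines are a common libav* idiom. Pointing data[0] at the last row and negating linesize[0] makes sws_scale walk the source rows bottom-to-top, which corrects buffers whose rows are stored bottom-up (as some render-target readbacks are); if the captured buffer is already top-down, the flip can simply be dropped. As for the API, avcodec_encode_video2 has itself been superseded (FFmpeg 3.1+) by the avcodec_send_frame / avcodec_receive_packet pair. Below is a minimal sketch of that interface, an illustration rather than the original poster's code: EncodeFrame is a hypothetical helper, and it assumes an opened AVCodecContext, a matching SwsContext, and a YUV420P AVFrame allocated beforehand.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}
#include <cstdio>

// Hypothetical helper: convert one captured BGRA frame to YUV420P and encode it
// with the send/receive API that replaced avcodec_encode_video()/..._video2().
static bool EncodeFrame(AVCodecContext* c, SwsContext* sws, const uint8_t* bgra,
                        int srcW, int srcH, AVFrame* yuv, int64_t pts, FILE* out)
{
    const uint8_t* srcData[1] = { bgra };
    int srcStride[1] = { srcW * 4 };                    // 4 bytes per BGRA pixel

    // The "flipping frame" trick: for a bottom-up source, start at the last row
    // and use a negative stride so sws_scale reads the rows in reverse.
    //srcData[0]   = bgra + srcStride[0] * (srcH - 1);
    //srcStride[0] = -srcStride[0];

    sws_scale(sws, srcData, srcStride, 0, srcH, yuv->data, yuv->linesize);
    yuv->pts = pts;                                     // in c->time_base units

    if (avcodec_send_frame(c, yuv) < 0)                 // queue frame for encoding
        return false;

    AVPacket* pkt = av_packet_alloc();
    int ret;
    while ((ret = avcodec_receive_packet(c, pkt)) == 0) // drain finished packets
    {
        fwrite(pkt->data, 1, pkt->size, out);
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    return ret == AVERROR(EAGAIN) || ret == AVERROR_EOF; // EAGAIN: wants more input
}

The SwsContext would be created once, e.g. with sws_getContext(srcW, srcH, AV_PIX_FMT_BGRA, c->width, c->height, AV_PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL): UE4's FColor pixels are stored as BGRA, so AV_PIX_FMT_BGRA is a closer match than the palettized AV_PIX_FMT_PAL8 used above. At end of stream, avcodec_send_frame(c, NULL) flushes the encoder. Also note that writing raw packets to a file is only meaningful for an elementary stream such as MPEG-1 video; an .mp4 or a live stream would go through the libavformat muxer instead.
-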
FFmpeg: Multiple x265-params are not recognized
13 September 2022, by zinon
I'm using ffmpeg with x265, and I want to use multiple x265-params in one encoding. When I use more than one parameter, ffmpeg does not recognize them.

My script is:
ffmpeg -s:v 1440x1080 -r 25 -i incident_10d_1440x1080_25.yuv -c:v rawvideo \
-pix_fmt yuv420p -c:v libx265 -x265-params "--qp=16:--preset=medium:--psnr" \
out_1440x1080_qp16.mp4

I set the quantization parameter to 16, but the terminal output contains the following:

x265 [info]: Main profile, Level-4 (Main tier)
x265 [info]: Thread pool created using 4 threads
x265 [info]: Slices : 1
x265 [info]: frame threads / pool features : 2 / wpp(17 rows)
x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
x265 [info]: ME / range / subpel / merge : hex / 57 / 2 / 2
x265 [info]: Keyframe min / max / scenecut / bias: 25 / 250 / 40 / 5.00
x265 [info]: Lookahead / bframes / badapt : 20 / 4 / 2
x265 [info]: b-pyramid / weightp / weightb : 1 / 1 / 0
x265 [info]: References / ref-limit cu / depth : 3 / on / on
x265 [info]: AQ: mode / str / qg-size / cu-tree : 1 / 1.0 / 32 / 1
x265 [info]: Rate Control / qCompress : CRF-28.0 / 0.60

As can be seen, I get Rate Control / qCompress : CRF-28.0 / 0.60. The correct line should be x265 [info]: Rate Control : CQP-16.

When x265-params contains only one parameter, as in -x265-params "--qp=16", it works properly.

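For readers hitting the same problem: inside -x265-params, libx265 expects bare name=value pairs separated by colons, not the --option syntax of the standalone x265 CLI, and boolean switches are enabled with =1; the encoder preset is normally passed through ffmpeg's own -preset option. A rewrite along those lines (a suggestion, not taken from the original post; -f rawvideo is added so ffmpeg knows how to parse the raw .yuv input) would be:

ffmpeg -f rawvideo -s:v 1440x1080 -r 25 -pix_fmt yuv420p -i incident_10d_1440x1080_25.yuv \
-c:v libx265 -preset medium -x265-params "qp=16:psnr=1" \
out_1440x1080_qp16.mp4

With that form the x265 banner should report Rate Control : CQP-16, and a global PSNR figure is printed when the encode finishes.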
-
SIMD opus pvq_search implementation
8 June 2017, by Ivan Kalvachev
SIMD opus pvq_search implementation

An explanation of the workings and methods used by the Pyramid Vector Quantization Search function can be found in the following work-in-progress mail threads:
http://ffmpeg.org/pipermail/ffmpeg-devel/2017-June/212146.html
http://ffmpeg.org/pipermail/ffmpeg-devel/2017-June/212816.html
http://ffmpeg.org/pipermail/ffmpeg-devel/2017-July/213030.html
http://ffmpeg.org/pipermail/ffmpeg-devel/2017-July/213436.html

Signed-off-by: Ivan Kalvachev <ikalvachev@gmail.com>