
Other articles (50)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table contains the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
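Purely as an illustration (SPIPmotion itself is a SPIP plugin, so the real table is SQL, not C++), one row of this queue can be pictured as a record built from the fields named above; the excerpt is truncated, so the actual table has further columns:
#include <cstdint>
#include <string>

// Illustrative mirror of one row of spip_spipmotion_attentes,
// limited to the fields listed in the excerpt above.
struct SpipmotionAttente {
    int64_t     id_spipmotion_attente; // unique numeric id of the pending encoding task
    int64_t     id_document;           // numeric id of the original document to encode
    int64_t     id_objet;              // id of the object the encoded document will be attached to
    std::string objet;                 // type of that object
    // ... further columns omitted, as the excerpt itself is truncated
};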
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (7012)
-
How to create video using avcodec from jpeg images of type OpenCV::Mat?
23 July 2015, by theateist
I have colored jpeg images of OpenCV::Mat type and I create a video from them using avcodec. The video that I get is upside-down, black & white, and each row of each frame is shifted, so I get a diagonal line. What could be the reason for such output?
Follow this link to watch the video I get using avcodec.
I'm using the avpicture_fill function to create an avFrame from the cv::Mat frame!
P.S.
Each cv::Mat cvFrame has width=810, height=610, step=2432.
I noticed that the avFrame (that is filled by avpicture_fill) has linesize[0]=2430.
I tried manually setting avFrame->linesize[0]=2432 instead of 2430, but it still didn't help.
======== CODE =========================================================
AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
AVStream *outStream = avformat_new_stream(outContainer, encoder);
avcodec_get_context_defaults3(outStream->codec, encoder);
outStream->codec->pix_fmt = AV_PIX_FMT_YUV420P;
outStream->codec->width = 810;
outStream->codec->height = 610;
//...
SwsContext *swsCtx = sws_getContext(outStream->codec->width, outStream->codec->height, PIX_FMT_RGB24,
outStream->codec->width, outStream->codec->height, outStream->codec->pix_fmt, SWS_BICUBIC, NULL, NULL, NULL);
for (uint i=0; i < frameNums; i++)
{
// get frame at location I using OpenCV
cv::Mat cvFrame;
myReader.getFrame(cvFrame, i);
cv::Size frameSize = cvFrame.size();
//Each cv::Mat cvFrame has width=810, height=610, step=2432
1. // create AVPicture from cv::Mat frame
2. avpicture_fill((AVPicture*)avFrame, cvFrame.data, PIX_FMT_RGB24, outStream->codec->width, outStream->codec->height);
3. avFrame->width = frameSize.width;
4. avFrame->height = frameSize.height;
// rescale to outStream format
sws_scale(swsCtx, avFrame->data, avFrame->linesize, 0, outStream->codec->height, avFrameRescaledFrame->data, avFrameRescaledFrame->linesize);
avFrameRescaledFrame->pts=i;
avFrameRescaledFrame->width = frameSize.width;
avFrameRescaledFrame->height = frameSize.height;
av_init_packet(&avEncodedPacket);
avEncodedPacket.data = NULL;
avEncodedPacket.size = 0;
// encode rescaled frame
if(avcodec_encode_video2(outStream->codec, &avEncodedPacket, avFrameRescaledFrame, &got_frame) < 0) exit(1);
if(got_frame)
{
if (avEncodedPacket.pts != AV_NOPTS_VALUE)
avEncodedPacket.pts = av_rescale_q(avEncodedPacket.pts, outStream->codec->time_base, outStream->time_base);
if (avEncodedPacket.dts != AV_NOPTS_VALUE)
avEncodedPacket.dts = av_rescale_q(avEncodedPacket.dts, outStream->codec->time_base, outStream->time_base);
// outContainer is "mp4"
av_write_frame(outContainer, &avEncodedPacket);
av_free_packet(&avEncodedPacket);
}
}
UPDATED
As @Alex suggested, I replaced lines 1-4 with the code below:
int width = frameSize.width, height = frameSize.height;
avpicture_alloc((AVPicture*)avFrame, AV_PIX_FMT_RGB24, outStream->codec->width, outStream->codec->height);
for (int h = 0; h < height; h++)
{
memcpy(&(avFrame->data[0][h*avFrame->linesize[0]]), &(cvFrame.data[h*cvFrame.step]), width*3);
}
The video (here) I get now is almost perfect. It's NOT upside-down, NOT black & white, BUT it seems that one of the RGB components is missing. Every brown/red color became blue (in the original images it should be vice versa).
What could be the problem? Could rescaling (sws_scale) to the AV_PIX_FMT_YUV420P format cause this?
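A likely cause (not stated in the question itself, so treat this as a suggestion): OpenCV stores cv::Mat pixels in BGR order, while the conversion context above declares the source as RGB24, which would swap the red and blue channels exactly as described. A minimal sketch under that assumption, reusing the question's variable names and passing cvFrame.step (2432) as the source stride so rows stay aligned:
extern "C" {
#include <libavutil/frame.h>
#include <libswscale/swscale.h>
}
#include <opencv2/opencv.hpp>

// Sketch only: convert one BGR cv::Mat into an already-allocated YUV420P AVFrame.
// dstW/dstH are the encoder's dimensions (810x610 in the question).
static void matToYuvFrame(const cv::Mat &cvFrame, AVFrame *dst, int dstW, int dstH)
{
    SwsContext *swsCtx = sws_getContext(
        cvFrame.cols, cvFrame.rows, AV_PIX_FMT_BGR24,  // source: OpenCV's native BGR layout
        dstW, dstH, AV_PIX_FMT_YUV420P,                // destination: the encoder's format
        SWS_BICUBIC, NULL, NULL, NULL);

    const uint8_t *const srcData[1]   = { cvFrame.data };
    const int            srcStride[1] = { static_cast<int>(cvFrame.step) }; // 2432, not 810*3 = 2430

    sws_scale(swsCtx, srcData, srcStride, 0, cvFrame.rows, dst->data, dst->linesize);
    sws_freeContext(swsCtx);
}
Creating the SwsContext once and reusing it across frames would be cheaper; alternatively, keeping the RGB24 context and converting each frame first with cv::cvtColor(cvFrame, cvFrame, cv::COLOR_BGR2RGB) should address the channel swap equally well.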
-
Overlay two sequences of PNGs and turn them into a movie
12 May 2014, by Lau Llobet
I've found how to turn a PNG sequence into a movie, and I've found how to overlay two movies using transparency, but I don't know how to do both things at once (using the PNGs' transparency).
The bottom layer of PNGs is smaller than the top one and needs to be stretched to a certain resolution and also padded so that it is centered.
The output doesn't have to have alpha (black for alpha is OK). I'm a bit confused by the abundance of filter options.
For the moment I've found:
./ffmpeg -i ./seq1/%d.bmp -vf "movie=./%d.png [a]; [in][a] overlay=0:366" combined.m2v
It works; now I've got to figure out the padding and resizing.
-
Revision 734c5ffa2c: Apply constrained partition search range to non-RD mode decision
23 April 2014, by Jingning Han
Changed Paths:
Modify /vp9/encoder/vp9_encodeframe.c
Apply constrained partition search range to non-RD mode decision
This commit enables a chessboard pattern for partition search. All the black blocks run a regular partition search ranging from 8x8 to 32x32. The remaining white blocks take the nearby blocks' information to adaptively decide the effective search range.
The compression performance for the rtc set at speed -5 is down by 1.5%. For pedestrian 1080p at speed -5, the runtime goes from 41594 ms to 39697 ms, i.e., about 5% faster.
Change-Id: Ia4b96e237abfaada487c743bca08fe1afd298685
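As a rough, purely illustrative sketch of the chessboard scheme described in the commit message (the actual logic lives in vp9_encodeframe.c and differs in detail; the level encoding and the one-level slack below are assumptions), a block could pick between a full and a constrained partition search based on the parity of its position, deriving the constrained range from already-coded neighbours:
#include <algorithm>

// Hypothetical block-size levels: 0 = 8x8, 1 = 16x16, 2 = 32x32.
struct SearchRange { int min_level; int max_level; };

// "Black" squares of the chessboard run the full 8x8..32x32 search;
// "white" squares constrain the range using their neighbours' decisions.
SearchRange partition_search_range(int block_row, int block_col,
                                   int left_level, int above_level) {
    if (((block_row + block_col) & 1) == 0) {
        return {0, 2};                                // full search on black squares
    }
    int lo = std::min(left_level, above_level);
    int hi = std::max(left_level, above_level);
    // Allow one level of slack around what the neighbours chose.
    return {std::max(0, lo - 1), std::min(2, hi + 1)};
}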