
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (100)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, MediaSPIP init automatically sets up a preconfiguration that makes the new feature immediately operational. It is therefore not necessary to go through a configuration step for this.
-
Customizing by adding a logo, a banner, or a background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
On other sites (12592)
-
Greed is Good; Greed Works
25 November 2010, by Multimedia Mike — VP8
Greed, for lack of a better word, is good; greed works. Well, most of the time. Maybe.
Picking Prediction Modes
VP8 uses one of 4 prediction modes to predict a 16x16 luma block or 8x8 chroma block before processing it (for luma, a block can also be broken into 16 4x4 blocks for individual prediction using even more modes). So, how to pick the best predictor mode? I had no idea when I started writing my VP8 encoder. I did not read any literature on the matter; I just sat down and thought of a brute-force approach. According to the comments in my code:
// naive, greedy algorithm:
//   residual = source - predictor
//   mean = mean(residual)
//   residual -= mean
//   find the max diff between the mean and the residual
//   the thinking is that, post-prediction, the best block will
//   be comprised of similar samples
After removing the predictor from the macroblock, individual 4x4 subblocks are put through a forward DCT and quantized. Optimal compression in this scenario results when all samples are the same, since only the DC coefficient will be non-zero. Failing that, when the input samples are at least similar to each other, few of the AC coefficients will be non-zero, which helps compression. When the samples are all over the scale, there are a whole lot of non-zero coefficients unless you crank up the quantizer, which results in poor quality in the reconstructed subblocks.
Thus, my goal was to pick a prediction mode that, when applied to the input block, resulted in a residual in which each element would feature the least deviation from the mean of the residual (relative to other prediction choices).
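For concreteness, here is a minimal C sketch of that scoring as I read the comment above; it is a reconstruction under stated assumptions (blocks of at most 16x16 samples, caller-supplied candidate predictor blocks, and hypothetical names block_score and pick_mode), not the post's actual encoder code.

#include <stdint.h>
#include <stdlib.h>
#include <limits.h>

// Score one candidate predictor: lower means the residual deviates
// less from its own mean, i.e. the post-prediction block is "flatter".
static int block_score(const uint8_t *src, const uint8_t *pred, int n)
{
    int residual[256];  // assumes n <= 256 (a 16x16 block)
    int sum = 0;
    for (int i = 0; i < n; i++) {
        residual[i] = src[i] - pred[i];  // residual = source - predictor
        sum += residual[i];
    }
    int mean = sum / n;  // mean = mean(residual)
    int max_diff = 0;    // max diff between the mean and the residual
    for (int i = 0; i < n; i++) {
        int d = abs(residual[i] - mean);
        if (d > max_diff)
            max_diff = d;
    }
    return max_diff;
}

// Greedy pick: keep whichever of the 4 candidate modes scores lowest.
static int pick_mode(const uint8_t *src, const uint8_t *preds[4], int n)
{
    int best_mode = 0, best_score = INT_MAX;
    for (int mode = 0; mode < 4; mode++) {
        int score = block_score(src, preds[mode], n);
        if (score < best_score) {
            best_score = score;
            best_mode = mode;
        }
    }
    return best_mode;
}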
Greedy Approach
I realized that this algorithm falls into the broad general category of "greedy" algorithms: algorithms that make locally optimal decisions at each stage. There are most likely smarter algorithms, but this one was good enough for making an encoder that just barely works.
Compression Results
I checked the total file compression size on my usual 640x360 Big Buck Bunny logo image while forcing prediction modes vs. using my greedy prediction-picking algorithm. In this very simple test, DC-only actually resulted in slightly better compression than the greedy algorithm (which says nothing about overall quality).

prediction mode    quantizer index = 0 (minimum)    quantizer index = 10
greedy             286260                           98028
DC                 280593                           95378
vertical           297206                           105316
horizontal         295357                           104185
TrueMotion         311660                           113480

As another data point, in both quantizer cases, my greedy algorithm selected a healthy mix of prediction modes:
- quantizer index 0: DC = 521, VERT = 151, HORIZ = 183, TM = 65
- quantizer index 10: DC = 486, VERT = 167, HORIZ = 190, TM = 77
Size vs. Quality
Again, note that this ad-hoc test only measures one property (a highly objective one): compression size. It did not account for quality, which is a far more controversial topic that I have yet to wade into.
-
Converting uint8_t data to AVFrame with FFmpeg
30 October 2017, by J.Lefebvre
I am currently working in C++ with the Autodesk 3DStudio Max 2014 SDK (toolset 100) and the FFmpeg library in Visual Studio 2015, trying to convert a DIB (Device Independent Bitmap) to a uint8_t pointer array and then convert that data to an AVFrame.
I don't have any errors, but my video is still black and has no metadata (no time display, etc.).
I did approximately the same thing with a Visual Studio console application to convert a jpeg image sequence from disk, and that works fine. (The only difference is that instead of converting jpeg to AVFrame with the FFmpeg library, I try to convert raw data to an AVFrame.)
So I think the problem is either in the DIB conversion to uint8_t data or in the uint8_t data conversion to the AVFrame. (The second is more plausible, because I used the SFML library to display a window with my rgb uint8_t* data for debugging and it works fine.)
I first initialize the FFmpeg library:
This function is called once at the beginning.
int Converter::Initialize(AVCodecID codec_id, int width, int height, int fps, const char *filename)
{
    avcodec_register_all();
    av_register_all();

    AVCodec *codec;
    inputFrame = NULL;
    codecContext = NULL;
    pkt = NULL;
    file = NULL;

    outputFilename = new char[strlen(filename) + 1](); // +1 for the terminating null byte
    *outputFilename = '\0';
    strcpy(outputFilename, filename);

    int ret;

    // Initializing AVCodecContext and getting a PixelFormat supported by the encoder
    codec = avcodec_find_encoder(codec_id);
    if (!codec)
        return 1;

    AVPixelFormat pixFormat = codec->pix_fmts[0];
    codecContext = avcodec_alloc_context3(codec);
    if (!codecContext)
        return 1;

    codecContext->bit_rate = 400000;
    codecContext->width = width;
    codecContext->height = height;
    codecContext->time_base.num = 1;
    codecContext->time_base.den = fps;
    codecContext->gop_size = 10;
    codecContext->max_b_frames = 1;
    codecContext->pix_fmt = pixFormat;
    if (codec_id == AV_CODEC_ID_H264)
        av_opt_set(codecContext->priv_data, "preset", "slow", 0);

    // Actually opening the encoder
    if (avcodec_open2(codecContext, codec, NULL) < 0)
        return 1;

    file = fopen(outputFilename, "wb");
    if (!file)
        return 1;

    inputFrame = av_frame_alloc();
    inputFrame->format = codecContext->pix_fmt;
    inputFrame->width = codecContext->width;
    inputFrame->height = codecContext->height;
    ret = av_image_alloc(inputFrame->data, inputFrame->linesize, codecContext->width, codecContext->height, codecContext->pix_fmt, 32);
    if (ret < 0)
        return 1;

    return 0;
}
Then, for each frame, I get the DIB and convert it to a uint8_t* with this function:
uint8_t* Util::ToUint8_t(RGBQUAD *data, int width, int height)
{
    uint8_t* buf = (uint8_t*)data;
    int imageSize = width * height;
    size_t rgbquad_size = sizeof(RGBQUAD);   // 4 bytes per pixel
    size_t total_bytes = imageSize * rgbquad_size;
    uint8_t * pCopyBuffer = new uint8_t[total_bytes];

    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            // DIBs are stored bottom-up, so flip vertically while copying
            int index = (x + width * y) * rgbquad_size;
            int invertIndex = (x + width * (height - y - 1)) * rgbquad_size;

            // BGRA to RGBA
            pCopyBuffer[index] = buf[invertIndex + 2];
            pCopyBuffer[index + 1] = buf[invertIndex + 1];
            pCopyBuffer[index + 2] = buf[invertIndex];
            pCopyBuffer[index + 3] = 0xFF;
        }
    }
    return pCopyBuffer;
}
void GetDIBBuffer(Interface* ip, BITMAPINFO *bmi, uint8_t** outBuffer)
{
    int size;
    ViewExp& view = ip->GetActiveViewExp();
    // First call gets the required size, second call fills the buffer
    view.getGW()->getDIB(NULL, &size);
    bmi = (BITMAPINFO *)malloc(size);
    BITMAPINFOHEADER *bmih = (BITMAPINFOHEADER *)bmi;
    view.getGW()->getDIB(bmi, &size);
    uint8_t * pCopyBuffer = Util::ToUint8_t(bmi->bmiColors, bmih->biWidth, bmih->biHeight);
    *outBuffer = pCopyBuffer;
}
This function is used to get the DIB:
void GetViewportDIB(Interface* ip, BITMAPINFO *bmi, BITMAPINFOHEADER *bmih, BitmapInfo biFile, Bitmap *map)
{
    int size;
    if (!biFile.Name()[0])
        return;
    ViewExp& view = ip->GetActiveViewExp();
    view.getGW()->getDIB(NULL, &size);
    bmi = (BITMAPINFO *)malloc(size);
    bmih = (BITMAPINFOHEADER *)bmi;
    view.getGW()->getDIB(bmi, &size);
    biFile.SetWidth((WORD)bmih->biWidth);
    biFile.SetHeight((WORD)bmih->biHeight);
    biFile.SetType(BMM_TRUE_32);
    map = TheManager->Create(&biFile);
    map->OpenOutput(&biFile);
    map->FromDib(bmi);
    map->Write(&biFile);
    map->Close(&biFile);
}
And then the conversion to AVFrame and the video encoding:
The EncodeFromMem function is called for each frame.
int Converter::EncodeFromMem(const char *outputDir, int frameNumber, uint8_t* data)
{
    int ret;
    inputFrame->pts = frameNumber;
    EncodeFrame(data, codecContext, inputFrame, &pkt, file);
    return 0;
}

static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
{
    struct SwsContext *swsCtx = NULL;
    // RGB stride: 3 bytes per pixel (note the buffer from ToUint8_t is actually 4 bytes per pixel)
    const int in_linesize[1] = { 3 * c->width };
    swsCtx = sws_getCachedContext(swsCtx, c->width, c->height, AV_PIX_FMT_RGB24, c->width, c->height, AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
    sws_scale(swsCtx, (const uint8_t * const *)&rgb, in_linesize, 0, c->height, frame->data, frame->linesize);
}
static void EncodeFrame(uint8_t *rgb, AVCodecContext *c, AVFrame *frame, AVPacket **pkt, FILE *file)
{
    int ret, got_output;
    RgbToYuv(rgb, c, frame);

    *pkt = av_packet_alloc();
    av_init_packet(*pkt);
    (*pkt)->data = NULL;
    (*pkt)->size = 0;

    ret = avcodec_encode_video2(c, *pkt, frame, &got_output);
    if (ret < 0)
    {
        fprintf(stderr, "Error encoding frame\n");
        exit(1);
    }
    if (got_output)
    {
        fwrite((*pkt)->data, 1, (*pkt)->size, file);
        av_packet_unref(*pkt);
    }
}
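Note that avcodec_encode_video2 is deprecated in newer FFmpeg releases (3.1 and later). For reference, here is a rough sketch of the same encode-and-write step with the send/receive API; it assumes the same codec context and output file, and EncodeFrameModern is a hypothetical name, not part of the original code.

// Sketch only: one send, then drain all pending packets.
// Passing frame == NULL flushes the encoder (the delayed frames).
static int EncodeFrameModern(AVCodecContext *c, AVFrame *frame, FILE *file)
{
    int ret = avcodec_send_frame(c, frame);
    if (ret < 0)
        return ret;

    AVPacket *pkt = av_packet_alloc();
    while ((ret = avcodec_receive_packet(c, pkt)) >= 0)
    {
        fwrite(pkt->data, 1, pkt->size, file);
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);

    // AVERROR(EAGAIN) = encoder wants more input; AVERROR_EOF = fully flushed
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}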
To finish, I have a function that writes the packets and frees the memory:
This function is called once, at the end of the time range.
int Converter::Finalize()
{
    int ret, got_output;
    uint8_t endcode[] = { 0, 0, 1, 0xb7 };

    /* get the delayed frames */
    do
    {
        fflush(stdout);
        ret = avcodec_encode_video2(codecContext, pkt, NULL, &got_output);
        if (ret < 0)
        {
            fprintf(stderr, "Error encoding frame\n");
            return 1;
        }
        if (got_output)
        {
            fwrite(pkt->data, 1, pkt->size, file);
            av_packet_unref(pkt);
        }
    } while (got_output);

    fwrite(endcode, 1, sizeof(endcode), file);
    fclose(file);

    avcodec_close(codecContext);
    av_free(codecContext);
    av_frame_unref(inputFrame);
    av_frame_free(&inputFrame);
    //av_freep(&inputFrame->data[0]); //Crash

    delete[] outputFilename; // allocated with new[], so delete[] rather than delete
    outputFilename = 0;
    return 0;
}
EDIT:
I modified my RgbToYuv function and created another one to convert the yuv frame back to an rgb one.
This does not really solve the problem, but it may narrow the problem down to the YuvToRgb conversion.
Here is the result of the conversion from YUV to RGB:
[YuvToRgb result]: https://img42.com/kHqpt+
static void YuvToRgb(AVCodecContext *c, AVFrame *frame)
{
    struct SwsContext *img_convert_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P, c->width, c->height, AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
    AVFrame * rgbPictInfo = av_frame_alloc();
    // Reuses the frame's own first plane as the RGB destination buffer
    avpicture_fill((AVPicture*)rgbPictInfo, *(frame)->data, AV_PIX_FMT_RGB24, c->width, c->height);
    sws_scale(img_convert_ctx, frame->data, frame->linesize, 0, c->height, rgbPictInfo->data, rgbPictInfo->linesize);
    Util::DebugWindow(c->width, c->height, rgbPictInfo->data[0]);
}

static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
{
    AVFrame * rgbPictInfo = av_frame_alloc();
    avpicture_fill((AVPicture*)rgbPictInfo, rgb, AV_PIX_FMT_RGBA, c->width, c->height);
    struct SwsContext *swsCtx = sws_getContext(c->width, c->height, AV_PIX_FMT_RGBA, c->width, c->height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
    // This repoints frame->data at the rgb buffer, replacing the planes allocated in Initialize
    avpicture_fill((AVPicture*)frame, rgb, AV_PIX_FMT_YUV420P, c->width, c->height);
    sws_scale(swsCtx, rgbPictInfo->data, rgbPictInfo->linesize, 0, c->height, frame->data, frame->linesize);
    YuvToRgb(c, frame);
}
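Given the code above, two things look suspect: ToUint8_t fills the buffer with 4 bytes per pixel (RGBQUAD), while the first RgbToYuv declares in_linesize as 3 * c->width with AV_PIX_FMT_RGB24; and the second RgbToYuv repoints frame->data at the rgb buffer via avpicture_fill, discarding the planes allocated by av_image_alloc in Initialize. Here is a minimal sketch of a stride-correct RGBA-to-YUV420P conversion that leaves the frame's planes in place; RgbaToYuv420p is a hypothetical helper, not from the original post.

// Sketch: feed the 4-byte-per-pixel RGBA buffer to swscale with the
// correct stride, writing into the planes Initialize allocated.
static void RgbaToYuv420p(const uint8_t *rgba, AVCodecContext *c, AVFrame *frame)
{
    const uint8_t *src[1] = { rgba };
    const int src_linesize[1] = { 4 * c->width };  // RGBA: 4 bytes per pixel

    struct SwsContext *swsCtx = sws_getContext(
        c->width, c->height, AV_PIX_FMT_RGBA,
        c->width, c->height, AV_PIX_FMT_YUV420P,
        SWS_BICUBIC, NULL, NULL, NULL);

    // frame->data / frame->linesize still point at the av_image_alloc planes
    sws_scale(swsCtx, src, src_linesize, 0, c->height, frame->data, frame->linesize);
    sws_freeContext(swsCtx);
}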
-
x264 Building error - Android
21 April 2016, by Jay Parikh
I am using this repository to build an ffmpeg static library which includes x264, libpng and others; please visit this link: https://github.com/writingminds/ffmpeg-android
I am using Windows 7 as the host and Ubuntu 15.10 (64-bit) as the guest OS, under VMware Workstation 12, with Android-ndk-r11b-linux-x86_64.
I do have prebuilt libraries, but now I want them without PIE support.
I am getting this error in config.log in the x264 folder while building through ./android_build.sh.
Here is the log:
x264 configure script
Command line options: "--cross-prefix=/mnt/hgfs/uShare/ffmpeg-android/toolchain-android/bin/arm-linux
/mnt/hgfs/uShare/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi-gcc
checking whether /mnt/hgfs/uShare/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi-gcc
--sysroot=/mnt/hgfs/uShare/ffmpeg-android/toolchain-android/sysroot works... no
Failed commandline was:
--sysroot=/mnt/hgfs/uShare/ffmpeg-android/toolchain-android/sysroot conftest.c -Wall -I. -I$(SRCPATH) --sysroot=/mnt/hgfs/uShare/ffmpeg-android/toolchain-android/sysroot --sysroot=/mnt/hgfs/uShare/ffmpeg-android/toolchain-android/sysroot -lm -o conftest
/mnt/hgfs/uShare/ffmpeg-android/toolchain-android/bin/../lib/gcc/arm-linux-androideabi/4.9/../../../../arm-linux-androideabi/bin/ld: fatal error:
conftest: Input/output error
Failed program was:
int main (void) { return 0; }
DIED: No working C compiler found.
uShare is my shared folder between Windows and Ubuntu.
I have spent almost a week trying to solve every error I get.
These errors are never ending; one solution gives 10 more errors.
I have researched a LOT for this library. Thanks a lot in advance.
Also, I thought that the x264 library might have a problem, so I tried to disable it, but the next library, libpng, also had the same log error.
I think the problem is the Input/output error (obviously).
This line in the log kind of confuses me (those /../../):
/mnt/hgfs/uShare/ffmpeg-android/toolchain-android/bin/../lib/gcc/arm-linux-androideabi/4.9/../../../../arm-linux-androideabi/bin/ld: fatal error:
It's like two folder paths overlapping...
Thanks a lot in advance.
Please don't go harsh on me; it's my first time, all thanks to this thing...