
Other articles (111)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013 and it is announced here.
The zip file provided here contains only the MediaSPIP sources in the standalone version.
As with the previous version, all of the software dependencies have to be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
Making the files available
14 April 2011
By default, when it is initialised, MediaSPIP does not let visitors download the files, whether they are originals or the result of their transformation or encoding. It only lets them view the files.
However, it is possible and easy to give visitors access to these documents, in various forms.
All of this is done in the template's configuration page. You need to go to the channel's administration area and choose, in the navigation, (...) -
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in the standalone version.
To get a working installation, all of the software dependencies have to be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
On other sites (7373)
-
RGB to YUV conversion with libav (ffmpeg) triplicates image
17 April 2021, by José Tomás Tocino
I'm building a small program to capture the screen (using the X11 MIT-SHM extension) into a video. It works well if I create individual PNG files of the captured frames, but now I'm trying to integrate libav (ffmpeg) to create the video and I'm getting... funny results.


The furthest I've been able to reach is this. The expected result (a PNG created directly from the RGB data of the XImage) is:

[expected screenshot]

However, the result I'm getting is:

[actual output screenshot]

As you can see, the colors are funky and the image appears cropped three times. I have a loop where I capture the screen; first I generate the individual PNG files (currently commented out in the code below), then I try to use libswscale to convert from RGB24 to YUV420:


while (gRunning) {
    printf("Processing frame framecnt=%i \n", framecnt);

    if (!XShmGetImage(display, RootWindow(display, DefaultScreen(display)), img, 0, 0, AllPlanes)) {
        printf("\n Ooops.. Something is wrong.");
        break;
    }

    // PNG generation
    // snprintf(imageName, sizeof(imageName), "salida_%i.png", framecnt);
    // writePngForImage(img, width, height, imageName);

    unsigned long red_mask = img->red_mask;
    unsigned long green_mask = img->green_mask;
    unsigned long blue_mask = img->blue_mask;

    // Write image data
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            unsigned long pixel = XGetPixel(img, x, y);

            unsigned char blue = pixel & blue_mask;
            unsigned char green = (pixel & green_mask) >> 8;
            unsigned char red = (pixel & red_mask) >> 16;

            pixel_rgb_data[y * width + x * 3] = red;
            pixel_rgb_data[y * width + x * 3 + 1] = green;
            pixel_rgb_data[y * width + x * 3 + 2] = blue;
        }
    }

    uint8_t* inData[1] = { pixel_rgb_data };
    int inLinesize[1] = { in_w };

    printf("Scaling frame... \n");
    int sliceHeight = sws_scale(sws_context, inData, inLinesize, 0, height, pFrame->data, pFrame->linesize);

    printf("Obtained slice height: %i \n", sliceHeight);
    pFrame->pts = framecnt * (pVideoStream->time_base.den) / ((pVideoStream->time_base.num) * 25);

    printf("Frame pts: %li \n", pFrame->pts);
    int got_picture = 0;

    printf("Encoding frame... \n");
    int ret = avcodec_encode_video2(pCodecCtx, &pkt, pFrame, &got_picture);

    // int ret = avcodec_send_frame(pCodecCtx, pFrame);

    if (ret != 0) {
        printf("Failed to encode! Error: %i\n", ret);
        return -1;
    }

    printf("Succeeded to encode frame: %5d - size: %5d\n", framecnt, pkt.size);

    framecnt++;

    pkt.stream_index = pVideoStream->index;
    ret = av_write_frame(pFormatCtx, &pkt);

    if (ret != 0) {
        printf("Error writing frame! Error: %i \n", ret);
        return -1;
    }

    av_packet_unref(&pkt);
}



I've placed the entire code at this gist. This question right here looks pretty similar to mine, but not quite, and the solution did not work for me, although I think this has something to do with the way the line stride is calculated.
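
For reference (this sketch is not part of the original post): the packing loop above indexes pixel_rgb_data with y * width, i.e. one byte per pixel per row, while a packed RGB24 row is width * 3 bytes wide, and the same width-in-pixels value is passed as the sws_scale input linesize. A stride-aware version of the loop, assuming pixel_rgb_data is allocated with width * height * 3 bytes, would look roughly like this:

// Sketch (not from the original post): pack the XImage into a tightly
// packed RGB24 buffer, using a byte stride of width * 3 per row.
const int rgb_stride = width * 3;   // bytes per row of packed RGB24

for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        unsigned long pixel = XGetPixel(img, x, y);

        unsigned char blue  = pixel & blue_mask;
        unsigned char green = (pixel & green_mask) >> 8;
        unsigned char red   = (pixel & red_mask) >> 16;

        // Offset of pixel (x, y) in bytes, not in pixels.
        size_t offset = (size_t) y * rgb_stride + (size_t) x * 3;
        pixel_rgb_data[offset]     = red;
        pixel_rgb_data[offset + 1] = green;
        pixel_rgb_data[offset + 2] = blue;
    }
}

uint8_t* inData[1] = { pixel_rgb_data };
int inLinesize[1] = { rgb_stride };   // sws_scale expects the input stride in bytes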


-
How to convert from AV_PIX_FMT_BGRA to PIX_FMT_PAL8?
29 July 2014, by Jona
I'm having a hard time converting my images from AV_PIX_FMT_BGRA to PIX_FMT_PAL8. Unfortunately, sws_getCachedContext doesn't support conversion to PIX_FMT_PAL8.
What I'm trying to do is convert my images into a GIF video with higher-quality output. It seems that PIX_FMT_PAL8 could potentially provide the higher-quality output I'm looking for.
According to this documentation I need to palettize the pixel data, but I have no clue how to do that.
When the pixel format is palettized RGB (PIX_FMT_PAL8), the palettized image data is stored in AVFrame.data[0]. The palette is transported in AVFrame.data[1], is 1024 bytes long (256 4-byte entries) and is formatted the same as in PIX_FMT_RGB32 described above (i.e., it is also endian-specific). Note also that the individual RGB palette components stored in AVFrame.data[1] should be in the range 0..255. This is important as many custom PAL8 video codecs that were designed to run on the IBM VGA graphics adapter use 6-bit palette components.
Any help or direction would be appreciated.
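
Not part of the original question, but as a rough illustration of what "palettizing" means given the layout quoted above (indices in AVFrame.data[0], a 256-entry 4-byte palette in AVFrame.data[1]): build a palette, then store one palette index per pixel. The sketch below uses a naive fixed 6x6x6 colour cube; the helper names and the assumption that the destination planes are already allocated are hypothetical. For real GIF output, FFmpeg's palettegen/paletteuse filters do this job with much better quality.

#include <stdint.h>
#include <stddef.h>

/* Sketch only (hypothetical helper, not an FFmpeg API): map 0..255 to one of
 * six evenly spaced levels. */
static uint8_t quant6(uint8_t v)
{
    return (uint8_t) (v / 51);   /* 0..255 -> 0..5 */
}

/* Naive BGRA -> PAL8 palettization. "indices" would be AVFrame.data[0] and
 * "palette" would be AVFrame.data[1] (256 4-byte entries), both assumed to be
 * allocated already. Alpha is ignored in this sketch. */
static void bgra_to_pal8(const uint8_t *bgra, int width, int height, int bgra_stride,
                         uint8_t *indices, int pal8_stride, uint32_t *palette)
{
    /* Fixed 6x6x6 colour cube in the first 216 entries, opaque black elsewhere. */
    for (int i = 0; i < 256; i++)
        palette[i] = 0xFF000000u;
    for (int r = 0; r < 6; r++)
        for (int g = 0; g < 6; g++)
            for (int b = 0; b < 6; b++)
                palette[r * 36 + g * 6 + b] = 0xFF000000u |
                    (uint32_t) (r * 51) << 16 | (uint32_t) (g * 51) << 8 | (uint32_t) (b * 51);

    /* Map every BGRA pixel to its cube entry. */
    for (int y = 0; y < height; y++) {
        const uint8_t *src = bgra + (size_t) y * bgra_stride;
        uint8_t *dst = indices + (size_t) y * pal8_stride;
        for (int x = 0; x < width; x++) {
            uint8_t b = src[x * 4], g = src[x * 4 + 1], r = src[x * 4 + 2];
            dst[x] = (uint8_t) (quant6(r) * 36 + quant6(g) * 6 + quant6(b));
        }
    }
}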
-
avcodec/mediacodecdec: warn when input buffers are not configured with proper size
5 September 2019, by Aman Gupta
In rare circumstances, if the codec is not configured with the proper parameters, the input buffers can be allocated with a size that's too small to hold an individual packet. Since MediaCodec expects exactly one incoming buffer with a given PTS, it is not valid to split data for a given PTS across two input buffers.
See https://developer.android.com/reference/android/media/MediaCodec#data-processing:
> Do not submit multiple input buffers with the same timestamp
Signed-off-by: Aman Gupta <aman@tmm1.net>
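
For illustration only (this is not the actual FFmpeg code and the names are made up): the situation the commit warns about boils down to a packet, which must be delivered as a single input buffer for its PTS, being larger than the capacity of the dequeued MediaCodec input buffer. A minimal sketch of such a check:

#include <stdio.h>
#include <stddef.h>

/* Hypothetical sketch, not the actual mediacodecdec code: warn when a packet
 * (one PTS worth of data) cannot fit into a single MediaCodec input buffer. */
static void warn_if_input_buffer_too_small(size_t buffer_capacity, int packet_size)
{
    if (packet_size > 0 && (size_t) packet_size > buffer_capacity) {
        fprintf(stderr,
                "warning: MediaCodec input buffer (%zu bytes) is smaller than the "
                "packet (%d bytes); the codec is probably not configured with the "
                "proper parameters\n",
                buffer_capacity, packet_size);
    }
}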