
Other articles (101)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page showcases some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • What exactly does this script do?

    18 January 2011, by

    This script is written in bash, so it can easily be used on virtually any server.
    It is only compatible with a specific list of distributions (see the list of compatible distributions).
    Installing MediaSPIP's dependencies
    Its main role is to install all of the software dependencies needed on the server side, namely:
    The basic tools needed to install the remaining dependencies. The development tools: build-essential (via APT from the official repositories); (...)
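
    As a rough sketch (assuming a Debian/Ubuntu system, which is what the build-essential/APT reference above implies), the development-tools step amounts to:

```shell
# Refresh the package index, then install the base build toolchain
# from the official APT repositories.
sudo apt-get update
sudo apt-get install -y build-essential
```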

On other sites (10119)

  • RGB to YUV conversion with libav (ffmpeg) triplicates image

    17 April 2021, by José Tomás Tocino

    I'm building a small program to capture the screen (using the X11 MIT-SHM extension) to video. It works well if I create individual PNG files of the captured frames, but now I'm trying to integrate libav (ffmpeg) to create the video, and I'm getting... funny results.

    


    The furthest I've been able to get is this. The expected result (a PNG created directly from the RGB data of the XImage) is this:

    


    Expected result

    


    However, the result I'm getting is this :

    


    Obtained result

    


    As you can see, the colors are funky and the image appears cropped three times. I have a loop where I capture the screen; first I generate the individual PNG files (currently commented out in the code below), and then I try to use libswscale to convert from RGB24 to YUV420:

    


    while (gRunning) {
        printf("Processing frame framecnt=%i \n", framecnt);

        if (!XShmGetImage(display, RootWindow(display, DefaultScreen(display)), img, 0, 0, AllPlanes)) {
            printf("\n Ooops.. Something is wrong.");
            break;
        }

        // PNG generation
        // snprintf(imageName, sizeof(imageName), "salida_%i.png", framecnt);
        // writePngForImage(img, width, height, imageName);

        unsigned long red_mask = img->red_mask;
        unsigned long green_mask = img->green_mask;
        unsigned long blue_mask = img->blue_mask;

        // Write image data
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                unsigned long pixel = XGetPixel(img, x, y);

                unsigned char blue = pixel & blue_mask;
                unsigned char green = (pixel & green_mask) >> 8;
                unsigned char red = (pixel & red_mask) >> 16;

                pixel_rgb_data[y * width + x * 3] = red;
                pixel_rgb_data[y * width + x * 3 + 1] = green;
                pixel_rgb_data[y * width + x * 3 + 2] = blue;
            }
        }

        uint8_t* inData[1] = { pixel_rgb_data };
        int inLinesize[1] = { in_w };

        printf("Scaling frame... \n");
        int sliceHeight = sws_scale(sws_context, inData, inLinesize, 0, height, pFrame->data, pFrame->linesize);

        printf("Obtained slice height: %i \n", sliceHeight);
        pFrame->pts = framecnt * (pVideoStream->time_base.den) / ((pVideoStream->time_base.num) * 25);

        printf("Frame pts: %li \n", pFrame->pts);
        int got_picture = 0;

        printf("Encoding frame... \n");
        int ret = avcodec_encode_video2(pCodecCtx, &pkt, pFrame, &got_picture);

//                int ret = avcodec_send_frame(pCodecCtx, pFrame);

        if (ret != 0) {
            printf("Failed to encode! Error: %i\n", ret);
            return -1;
        }

        printf("Succeeded encoding frame: %5d - size: %5d\n", framecnt, pkt.size);

        framecnt++;

        pkt.stream_index = pVideoStream->index;
        ret = av_write_frame(pFormatCtx, &pkt);

        if (ret != 0) {
            printf("Error writing frame! Error: %i \n", ret);
            return -1;
        }

        av_packet_unref(&pkt);
    }


    


    I've placed the entire code at this gist. This question looks pretty similar to mine, but not quite, and its solution did not work for me, although I suspect this has something to do with the way the line stride is calculated.

    


  • FFMPEG : How to overlay a large size png on jpg [closed]

    21 June 2022, by Yoav Mor

    I'm trying to use ffmpeg to achieve the following:
x.jpg = a 540x540 image.
Gradient.png = a 540x960 image with a semi-transparent gradient (solid dark blue at the top, gradually becoming fully transparent at the bottom).
An example of Gradient.png is attached.

    


    The goal is to end up with a jpg that is 540x960, where x.jpg is in the bottom 540x540 of the frame and the Gradient.png is laid on top of it. Because Gradient.png is larger, this seems to pose a problem.

    


    I tried :

    


    ffmpeg.exe -i x.jpg -i Gradient.png -filter_complex overlay=0:480 -c:v png -pix_fmt rgba out.png


    


    Which just creates a 540x540 frame, so it doesn't work. Feels close, though. If I knew how to enlarge the "canvas", this might have worked.

    


    I tried:

    


    ffmpeg.exe -i Gradient.png -i x.jpg -filter_complex overlay=0:480 -c:v png -pix_fmt rgba out.png


    


    Which does create a 540x960 frame, but Gradient.png is not on top of x.jpg; the opposite is happening, so I don't get the effect of the semi-transparency shown on top of x.jpg like I need.

    


    Any help will be highly appreciated

    


    Edit :
I was able to do it in 2 steps.
ffmpeg -i x.jpg -vf "scale=540:960:force_original_aspect_ratio=decrease,pad=540:960:(ow-iw)/2:(oh-ih)" -c:v png -pix_fmt rgba out.jpg
ffmpeg -i out.jpg -i Gradient.png -filter_complex overlay=0:0 OUT2.jpg

    


    If someone can come up with a way to do it in one step, it'll be great.
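
    For the record, the two steps above can most likely be collapsed into a single filtergraph: pad x.jpg to the 540x960 canvas inside the same filter_complex, then overlay Gradient.png on the padded result. An untested sketch, assuming the same file names as above:

```shell
# Pad x.jpg onto a 540x960 canvas (anchored at the bottom via y = oh-ih),
# then overlay Gradient.png on top, all in one invocation.
ffmpeg -i x.jpg -i Gradient.png -filter_complex \
  "[0]pad=540:960:(ow-iw)/2:oh-ih[bg];[bg][1]overlay=0:0" \
  OUT2.jpg
```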

    


  • Wrong colors when converting an AVFrame to QVideoFrame

    2 January 2020, by Michael G.

    I read videos with libav and display them in a QAbstractVideoSurface in QML. This works so far; however, I have not managed to get the colors right.

    My av_read_frame loop looks like this:

    if (frameFinished)
    {
        SwsContext* context = sws_getContext(_frame->width, _frame->height,
                                             (AVPixelFormat)_frame->format,
                                             _frame->width, _frame->height,
                                             AVPixelFormat::AV_PIX_FMT_RGBA,
                                             SWS_BICUBIC, nullptr, nullptr, nullptr);
        QImage img(_frame->width, _frame->height, QImage::Format_RGBA8888);
        uint8_t* dstSlice[] = { img.bits() };
        int dstStride = img.width() * 4;
        sws_scale(context, _frame->data, _frame->linesize,
                  0, _frame->height, dstSlice, &dstStride);

        av_packet_unref(&packet);
        sws_freeContext(context);
    }

    If I save the image to disk at this point, the colors are already wrong (everything looks red).
    Later, I display the images in a video surface with the format QVideoFrame::Format_ARGB32, and the colors are wrong again, but look different from the saved image (everything looks blue).

    I started experimenting with libav/ffmpeg recently, so maybe the problem is something else and I just have no clue. Let me know if you need more information :)