Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (73)

  • Customize by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Publish on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

On other sites (8147)

  • RGB to YUV conversion with libav (ffmpeg) triplicates image

    17 April 2021, by José Tomás Tocino

    I'm building a small program to capture the screen (using the X11 MIT-SHM extension) into a video. It works well if I create individual PNG files of the captured frames, but now I'm trying to integrate libav (ffmpeg) to create the video and I'm getting... funny results.

    The furthest I've been able to reach is this. The expected result (a PNG created directly from the RGB data of the XImage) is this:

    [Image: Expected result]

    However, the result I'm getting is this:

    [Image: Obtained result]
    As you can see, the colors are funky and the image appears cropped three times. I have a loop where I capture the screen; first I generate the individual PNG files (currently commented out in the code below), and then I try to use libswscale to convert from RGB24 to YUV420:

    while (gRunning) {
        printf("Processing frame framecnt=%i \n", framecnt);

        if (!XShmGetImage(display, RootWindow(display, DefaultScreen(display)), img, 0, 0, AllPlanes)) {
            printf("\n Ooops.. Something is wrong.");
            break;
        }

        // PNG generation
        // snprintf(imageName, sizeof(imageName), "salida_%i.png", framecnt);
        // writePngForImage(img, width, height, imageName);

        unsigned long red_mask = img->red_mask;
        unsigned long green_mask = img->green_mask;
        unsigned long blue_mask = img->blue_mask;

        // Write image data
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                unsigned long pixel = XGetPixel(img, x, y);

                unsigned char blue = pixel & blue_mask;
                unsigned char green = (pixel & green_mask) >> 8;
                unsigned char red = (pixel & red_mask) >> 16;

                pixel_rgb_data[y * width + x * 3] = red;
                pixel_rgb_data[y * width + x * 3 + 1] = green;
                pixel_rgb_data[y * width + x * 3 + 2] = blue;
            }
        }

        uint8_t* inData[1] = { pixel_rgb_data };
        int inLinesize[1] = { in_w };

        printf("Scaling frame... \n");
        int sliceHeight = sws_scale(sws_context, inData, inLinesize, 0, height, pFrame->data, pFrame->linesize);

        printf("Obtained slice height: %i \n", sliceHeight);
        pFrame->pts = framecnt * (pVideoStream->time_base.den) / ((pVideoStream->time_base.num) * 25);

        printf("Frame pts: %li \n", pFrame->pts);
        int got_picture = 0;

        printf("Encoding frame... \n");
        int ret = avcodec_encode_video2(pCodecCtx, &pkt, pFrame, &got_picture);

//                int ret = avcodec_send_frame(pCodecCtx, pFrame);

        if (ret != 0) {
            printf("Failed to encode! Error: %i\n", ret);
            return -1;
        }

        printf("Succeed to encode frame: %5d - size: %5d\n", framecnt, pkt.size);

        framecnt++;

        pkt.stream_index = pVideoStream->index;
        ret = av_write_frame(pFormatCtx, &pkt);

        if (ret != 0) {
            printf("Error writing frame! Error: %framecnt \n", ret);
            return -1;
        }

        av_packet_unref(&pkt);
    }


    


    I've placed the entire code at this gist. This question right here looks pretty similar to mine, but not quite, and the solution did not work for me, although I think this has something to do with the way the line stride is calculated.
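
    A plausible cause, consistent with the line-stride suspicion above (my assumption, since the question does not confirm it): the copy loop indexes rows with y * width, but a packed RGB24 row is width * 3 bytes wide, so consecutive rows overlap in the buffer and the image repeats. A minimal sketch of the loop with an explicit row stride, reusing the question's variable names:

    // pixel_rgb_data must hold width * height * 3 bytes (packed RGB24).
    const int stride = width * 3;

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            unsigned long pixel = XGetPixel(img, x, y);

            // Assumes the common 24/32-bit TrueColor layout (0xFF masks).
            pixel_rgb_data[y * stride + x * 3]     = (pixel >> 16) & 0xFF; // R
            pixel_rgb_data[y * stride + x * 3 + 1] = (pixel >> 8) & 0xFF;  // G
            pixel_rgb_data[y * stride + x * 3 + 2] = pixel & 0xFF;         // B
        }
    }

    // The input linesize passed to sws_scale must use the same stride for
    // AV_PIX_FMT_RGB24, i.e. int inLinesize[1] = { width * 3 };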

    


  • bash: receive single frames from ffmpeg pipe

    30 August 2014, by manu

    I'm trying to achieve single-frame handling in a pipe where the j2c encoder "kdu_compress" (Kakadu) only accepts single files, in order to save hard-drive space. I didn't manage to pipe frames directly, so I'm trying to handle them via a bash script by creating each picture, processing it, and overwriting it with the next.

    Here is my approach. Thanks for your advice; I really want to climb this mountain, though I'm a bit fresh here.


    Is it possible to pipe ffmpeg output to a bash script, save each individual frame, and run further commands on the file before the next frame is handled?

    The best result so far is that ALL frames end up appended to the intermediate file, because the end of a frame is never recognized (one possible way to split frames is sketched after this entry).

    I used this ffmpeg setting to pipe, for example with .ppm:

    ffmpeg -y  -i "/path/to/source.mov" -an -c:v ppm -updatefirst 1 -f image2 - \
    | /path/to/receiver.sh

    and this script as receiver.sh

    #!/bin/bash  

    while read a;
    do
       cat /dev/null > "/path/to/tempfile.ppm"; #to empty the file first
       cat $a >> "/path/to/tempfile.ppm";        #to fill one picture

       kdu_compress -i /path/to/tempfile.ppm -otherparams   #to process this intermediate

    done
    exit;

    Thank you very much.
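
    One way to split the stream (a sketch under my own assumptions, not taken from the question): parse each binary PPM frame from stdin by reading its "P6" header, then copy exactly width * height * 3 bytes of pixel data with head -c, writing one temporary file per frame. The paths, frame counter and kdu_compress output name are illustrative:

    #!/bin/bash
    # receiver.sh (sketch): consume P6 frames from stdin one at a time.
    n=0
    while IFS= read -r magic && [ "$magic" = "P6" ]; do
        read -r width height          # dimensions line, e.g. "1920 1080"
        read -r maxval                # usually "255"
        {
            # Rebuild the header, then copy exactly one frame of pixels.
            printf 'P6\n%s %s\n%s\n' "$width" "$height" "$maxval"
            head -c "$((width * height * 3))"
        } > /tmp/tempfile.ppm
        kdu_compress -i /tmp/tempfile.ppm -o "/tmp/frame_$n.j2c"
        n=$((n + 1))
    done

    This assumes ffmpeg emits plain "P6 / width height / maxval" headers with no comment lines, which it does for -c:v ppm. Piping with -f image2pipe rather than -f image2 may also help, since image2pipe is the muxer meant for writing an image sequence to a pipe.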

  • How can I best utilize an AWS service to segment a video into smaller chunks and then combine them back together? [on hold]

    19 April 2018, by Justin Malin

    I am trying to process videos uploaded to AWS S3 using an AWS Lambda function in Python. However, FFmpeg and ffmpeg-python (as far as I am aware) are unable to process objects directly and must work on stored files. Lambda only allows for 500 MB of storage in the /tmp/ folder, which limits the size of video that I can process.

    If there is an alternative to FFmpeg, one I am unaware of, that can work on object files, that would be a reasonable solution, because I can scale up the memory of the Lambda function (although there is still a limit).

    Alternatively, I have looked into segmenting the video using AWS Elastic Transcoder, but I do not think I can dynamically segment the video using that service. If there is a service similar to this that could segment the video into individual frames (and back), that would be even better.

    I have also considered using AWS EC2, but I would only be using the EC2 service to segment videos sporadically, so it would be a waste to keep a server that capable running constantly. If I used AWS Elastic Beanstalk, would it automatically start a more powerful EC2 instance to do the video segmentation (and reassembly) when called, and revert to a much smaller instance when dormant?

    Essentially, I would like to know if there are any services (preferably within AWS) that allow me to segment a video into shorter videos, or into individual frames, at will.
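
    Whichever AWS service ends up running it, the splitting and recombining itself is something ffmpeg can do without re-encoding, using the segment muxer and the concat demuxer. A minimal sketch (the file names and the 10-second chunk length are illustrative assumptions):

    # Split into ~10-second chunks with stream copy (no re-encoding).
    ffmpeg -i input.mp4 -c copy -map 0 -f segment -segment_time 10 \
           -reset_timestamps 1 chunk_%03d.mp4

    # Recombine: list the chunks for the concat demuxer, then copy streams.
    for f in chunk_*.mp4; do echo "file '$f'"; done > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4

    With stream copy, segment boundaries can only fall on keyframes, so chunk lengths are approximate; exact cut points would require re-encoding.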