Advanced search

Media (0)

Keyword: - Tags -/presse-papier

No media matching your criteria is available on the site.

Other articles (47)

  • Submit improvements and additional plugins

    10 April 2011

    If you have developed a new extension adding one or more features useful to MediaSPIP, let us know and its integration into the official distribution will be considered.
    You can use the development mailing list to tell us about it, or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP’s SPIP-zone mailing list to (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

On other sites (7702)

  • bash: receive single frames from ffmpeg pipe

    30 August 2014, by manu

    I’m trying to achieve single-frame handling in a pipe, because the j2c encoder "kdu_compress" (Kakadu) only accepts single files, and I want to save hard-drive space. I didn’t manage to pipe frames directly, so I’m trying to handle them via a bash script: create each picture, process it, and overwrite it with the next.

    Here is my approach. Thanks for your advice; I really want to climb this mountain, even though I’m a bit new here.

    Is it possible to pipe ffmpeg output to a bash script, save each individual frame, and run further commands on that file before the next frame is handled?

    The best result so far is that ALL frames end up appended to the intermediate file, because the end of a frame is never recognized.

    I used this ffmpeg invocation to pipe, here with .ppm:

    ffmpeg -y  -i "/path/to/source.mov" -an -c:v ppm -updatefirst 1 -f image2 - \
    | /path/to/receiver.sh

    and this script as receiver.sh:

    #!/bin/bash  

    while read a;
    do
       cat /dev/null > "/path/to/tempfile.ppm"; #to empty the file first
       cat $a >> "/path/to/tempfile.ppm";        #to fill one picture

       kdu_compress -i /path/to/tempfile.ppm -otherparams   #to process this intermediate

    done
    exit;

    Thank you very much.
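
    For reference, one workable approach is to parse the PPM stream itself rather than reading it line by line: each P6 frame announces its width, height and maxval in its own header, so a reader can consume exactly width*height*3 raster bytes per frame (assuming maxval ≤ 255, which is what ffmpeg’s ppm encoder emits), rewrite a single temp file, and process it before reading the next frame. A minimal sketch in C; the temp file path and the kdu_compress parameters are placeholders:

    /* ppm_split.c - a sketch of a receiver that splits a piped stream of
     * binary PPM (P6) frames, rewriting one temp file per frame and
     * processing it before the next frame is read.
     * Build: cc -O2 -o ppm_split ppm_split.c
     * Use:   ffmpeg -y -i source.mov -an -c:v ppm -f image2pipe - | ./ppm_split
     */
    #include <stdio.h>
    #include <stdlib.h>

    /* Read one whitespace-delimited PPM header token, skipping '#' comments.
     * Also consumes the single whitespace byte that terminates the token. */
    static int read_token(FILE *f, char *buf, size_t n)
    {
        int c;
        size_t i = 0;
        do {
            c = fgetc(f);
            if (c == '#')                      /* comment runs to end of line */
                while (c != '\n' && c != EOF)
                    c = fgetc(f);
        } while (c == ' ' || c == '\t' || c == '\r' || c == '\n');
        while (c != EOF && c > ' ' && i + 1 < n) {
            buf[i++] = (char) c;
            c = fgetc(f);
        }
        buf[i] = '\0';
        return i > 0;
    }

    int main(void)
    {
        char tok[32];
        unsigned char *pix = NULL;
        size_t cap = 0;
        long w, h, maxval, frame = 0;

        while (read_token(stdin, tok, sizeof tok)) {   /* "P6" magic */
            if (tok[0] != 'P' || tok[1] != '6') {
                fprintf(stderr, "not a P6 frame\n");
                return 1;
            }
            if (!read_token(stdin, tok, sizeof tok)) return 1;
            w = atol(tok);
            if (!read_token(stdin, tok, sizeof tok)) return 1;
            h = atol(tok);
            if (!read_token(stdin, tok, sizeof tok)) return 1;
            maxval = atol(tok);
            if (w <= 0 || h <= 0 || maxval > 255) return 1;

            size_t need = (size_t) w * h * 3;          /* 8-bit RGB raster */
            if (need > cap) {
                unsigned char *p = realloc(pix, need);
                if (!p) { perror("realloc"); free(pix); return 1; }
                pix = p;
                cap = need;
            }
            if (fread(pix, 1, need, stdin) != need) {
                fprintf(stderr, "truncated frame %ld\n", frame);
                break;
            }

            /* Overwrite the same temp file for every frame... */
            FILE *out = fopen("/tmp/frame.ppm", "wb");
            if (!out) { perror("fopen"); break; }
            fprintf(out, "P6\n%ld %ld\n%ld\n", w, h, maxval);
            fwrite(pix, 1, need, out);
            fclose(out);

            /* ...then process it before reading the next one.
             * Placeholder: substitute the real kdu_compress parameters. */
            system("kdu_compress -i /tmp/frame.ppm -o /tmp/frame.j2c");
            frame++;
        }
        fprintf(stderr, "processed %ld frames\n", frame);
        free(pix);
        return 0;
    }

    Like the script above, this keeps only one decoded frame on disk at a time; the image2pipe muxer is used on the ffmpeg side because it is the one intended for piped output.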

  • RGB to YUV conversion with libav (ffmpeg) triplicates image

    17 April 2021, by José Tomás Tocino

    I'm building a small program to capture the screen (using X11 MIT-SHM extension) on video. It works well if I create individual PNG files of the captured frames, but now I'm trying to integrate libav (ffmpeg) to create the video and I'm getting... funny results.

    The furthest I’ve been able to reach is this. The expected result (which is a PNG created directly from the RGB data of the XImage) is this:

    [Expected result]

    However, the result I’m getting is this:

    [Obtained result]
    As you can see, the colors are funky and the image appears tripled and cropped. I have a loop where I capture the screen: first I generate the individual PNG files (currently commented out in the code below), and then I try to use libswscale to convert from RGB24 to YUV420:

    while (gRunning) {
        printf("Processing frame framecnt=%i \n", framecnt);

        if (!XShmGetImage(display, RootWindow(display, DefaultScreen(display)), img, 0, 0, AllPlanes)) {
            printf("\n Ooops.. Something is wrong.");
            break;
        }

        // PNG generation
        // snprintf(imageName, sizeof(imageName), "salida_%i.png", framecnt);
        // writePngForImage(img, width, height, imageName);

        unsigned long red_mask = img->red_mask;
        unsigned long green_mask = img->green_mask;
        unsigned long blue_mask = img->blue_mask;

        // Write image data
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                unsigned long pixel = XGetPixel(img, x, y);

                unsigned char blue = pixel & blue_mask;
                unsigned char green = (pixel & green_mask) >> 8;
                unsigned char red = (pixel & red_mask) >> 16;

                pixel_rgb_data[y * width + x * 3] = red;
                pixel_rgb_data[y * width + x * 3 + 1] = green;
                pixel_rgb_data[y * width + x * 3 + 2] = blue;
            }
        }

        uint8_t* inData[1] = { pixel_rgb_data };
        int inLinesize[1] = { in_w };

        printf("Scaling frame... \n");
        int sliceHeight = sws_scale(sws_context, inData, inLinesize, 0, height, pFrame->data, pFrame->linesize);

        printf("Obtained slice height: %i \n", sliceHeight);
        pFrame->pts = framecnt * (pVideoStream->time_base.den) / ((pVideoStream->time_base.num) * 25);

        printf("Frame pts: %li \n", pFrame->pts);
        int got_picture = 0;

        printf("Encoding frame... \n");
        int ret = avcodec_encode_video2(pCodecCtx, &pkt, pFrame, &got_picture);

//                int ret = avcodec_send_frame(pCodecCtx, pFrame);

        if (ret != 0) {
            printf("Failed to encode! Error: %i\n", ret);
            return -1;
        }

        printf("Succeed to encode frame: %5d - size: %5d\n", framecnt, pkt.size);

        framecnt++;

        pkt.stream_index = pVideoStream->index;
        ret = av_write_frame(pFormatCtx, &pkt);

        if (ret != 0) {
            printf("Error writing frame! Error: %framecnt \n", ret);
            return -1;
        }

        av_packet_unref(&pkt);
    }

    I've placed the entire code at this gist. This question right here looks pretty similar to mine, but not quite, and the solution did not work for me, although I think this has something to do with the way the line stride is calculated.

  • How to convert from AV_PIX_FMT_BGRA to PIX_FMT_PAL8?

    29 July 2014, by Jona

    I’m having a hard time converting my images from AV_PIX_FMT_BGRA to PIX_FMT_PAL8. Unfortunately sws_getCachedContext doesn’t support the conversion to PIX_FMT_PAL8.

    What I’m trying to do is convert my images into a GIF video with higher-quality output. It seems that PIX_FMT_PAL8 could provide the higher quality I’m looking for.

    According to this documentation I need to palettize the pixel data, but I have no clue how to do that.

    When the pixel format is palettized RGB (PIX_FMT_PAL8), the palettized
    image data is stored in AVFrame.data[0]. The palette is transported in
    AVFrame.data[1], is 1024 bytes long (256 4-byte entries) and is
    formatted the same as in PIX_FMT_RGB32 described above (i.e., it is
    also endian-specific). Note also that the individual RGB palette
    components stored in AVFrame.data[1] should be in the range 0..255.
    This is important as many custom PAL8 video codecs that were designed
    to run on the IBM VGA graphics adapter use 6-bit palette components.

    Any help or direction would be appreciated.
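
    Since sws_scale won’t produce PAL8, the palettization has to be done by hand. Below is a minimal sketch of that manual route, assuming a fixed 6×6×6 color cube with no dithering; bgra_to_pal8 is a hypothetical helper name, and the newer AV_PIX_FMT_* names are used. For real GIF quality, newer ffmpeg releases also provide the palettegen/paletteuse filters, which build an adaptive palette and dither against it.

    /* A sketch of manual palettization: map each BGRA pixel onto a fixed
     * 6x6x6 color cube and fill a PAL8 AVFrame by hand, since sws_scale
     * cannot output PAL8. No dithering; error handling elided. */
    #include <stdint.h>
    #include <libavutil/frame.h>

    static AVFrame *bgra_to_pal8(const uint8_t *bgra, int w, int h, int stride)
    {
        AVFrame *f = av_frame_alloc();
        f->format = AV_PIX_FMT_PAL8;
        f->width  = w;
        f->height = h;
        av_frame_get_buffer(f, 0);

        /* data[1] holds the 256 four-byte palette entries, formatted like
         * RGB32 (native-endian 0xAARRGGBB), exactly as the quoted
         * documentation describes. */
        uint32_t *pal = (uint32_t *) f->data[1];
        for (int i = 0; i < 216; i++) {        /* 6*6*6 color cube */
            uint32_t r = (i / 36) * 51, g = ((i / 6) % 6) * 51, b = (i % 6) * 51;
            pal[i] = 0xFF000000u | r << 16 | g << 8 | b;
        }
        for (int i = 216; i < 256; i++)
            pal[i] = 0xFF000000u;              /* unused entries: opaque black */

        /* data[0] holds one palette index per pixel. */
        for (int y = 0; y < h; y++) {
            const uint8_t *src = bgra + y * stride;
            uint8_t *dst = f->data[0] + y * f->linesize[0];
            for (int x = 0; x < w; x++) {
                int b = src[4 * x], g = src[4 * x + 1], r = src[4 * x + 2];
                /* Nearest cube cell per channel: 51 = 255 / 5. */
                dst[x] = (uint8_t) (((r + 25) / 51) * 36
                                  + ((g + 25) / 51) * 6
                                  +  (b + 25) / 51);
            }
        }
        return f;
    }

    The sketch fills exactly the layout the quoted documentation describes: palette indices in AVFrame.data[0] and a 256-entry, RGB32-formatted palette in AVFrame.data[1]. An adaptive palette (e.g. median-cut over the colors actually present in the frame) would give noticeably better output than the fixed cube.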