Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (31)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome the difficulties caused mainly by installing server-side software dependencies, an "all-in-one" installation script written in Bash was created to simplify this step on a server running a compatible Linux distribution.
    To use it you must have SSH access to your server and a root account, which the script uses to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation on using this installation script is available here.
    The code of this (...)

  • Frequent problems

    10 March 2010

    PHP with safe_mode enabled
    One of the main sources of problems lies in the PHP configuration, in particular having safe_mode enabled.
    The solution is either to disable safe_mode or to place the script in a directory that Apache can access for the site.
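
    For reference, a one-line sketch of the first option (assuming a pre-5.4 PHP, where safe_mode still exists):

        ; php.ini (safe_mode was removed entirely in PHP 5.4)
        safe_mode = Off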

On other sites (5872)

  • How to get video pixel location from screen pixel location?

    22 February 2024, by AmLearning

    This is a wall of text, so I have tried breaking it up into sections to make it easier to follow; sorry in advance.

    The problem

    I have some video files that I am reading with ffmpeg to get the colors at specific pixels, and all seems well, but I just ran into a problem with finding the right pixel to input. I realized (or mistakenly believe) that a pixel location (x, y) on the screen differs from the video's own "local" pixel location, so to speak (i.e. if I want pixel (50, 0) of the video, that is not my screen's pixel (50, 0), because the resolutions don't match). I was trying to think of a way to convert my screen's pixel location into this local pixel location, and I have two ideas, but I am not sure if either of them is any good. Note that I am currently using cmd+shift+4 on macOS to get the screen coordinates, and the video is playing fullscreen as in the screenshot below.
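
    For reference, if the player scales the video to fit the screen while preserving aspect ratio (centered letterboxing), the conversion is pure arithmetic. Below is a minimal sketch under that assumption; the method name and return convention are illustrative, not any player's API:

        // Map a fullscreen screen coordinate to a video pixel coordinate,
        // assuming aspect-preserving scaling with centered black bars.
        static int[] screenToVideo(double sx, double sy,
                                   int screenW, int screenH,
                                   int videoW, int videoH) {
            double scale = Math.min((double) screenW / videoW,
                                    (double) screenH / videoH); // "fit" scale
            double renderW = videoW * scale;         // drawn video area
            double renderH = videoH * scale;
            double offX = (screenW - renderW) / 2.0; // pillarbox width
            double offY = (screenH - renderH) / 2.0; // letterbox height
            if (sx < offX || sx >= offX + renderW
                    || sy < offY || sy >= offY + renderH) {
                return null; // the point falls on a black bar, not the video
            }
            return new int[] { (int) ((sx - offX) / scale),
                               (int) ((sy - offY) / scale) };
        }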

    


    Ideas

    1. (Screenshot from the original post omitted.) If I manually measure and account for this vertical offset, would it effectively convert the screen coordinate into the "local" one?

    2. If I instead adjust my SwsContext to use my screen's width and height as the destination dimensions, will that remove the need to convert screen coordinates to video coordinates? (A sketch follows this list.)
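
    A sketch of idea 2, assuming javacv's ffmpeg bindings (org.bytedeco.ffmpeg), which mirror the C API; screenW and screenH stand for the measured screen size. Note this only makes screen and buffer coordinates line up if the player really stretches the video over the whole screen, without black bars:

        import org.bytedeco.ffmpeg.avcodec.AVCodecContext;
        import org.bytedeco.ffmpeg.swscale.SwsContext;
        import org.bytedeco.ffmpeg.swscale.SwsFilter;
        import org.bytedeco.javacpp.DoublePointer;
        import static org.bytedeco.ffmpeg.global.avutil.AV_PIX_FMT_RGB0;
        import static org.bytedeco.ffmpeg.global.swscale.*;

        // Build a rescaler whose destination matches the screen, so a
        // screen coordinate indexes the converted buffer directly.
        static SwsContext screenSizedRescaler(AVCodecContext codecContext,
                                              int screenW, int screenH) {
            return sws_getContext(
                    codecContext.width(), codecContext.height(),
                    codecContext.pix_fmt(),            // source: native size/format
                    screenW, screenH, AV_PIX_FMT_RGB0, // destination: screen size
                    SWS_FAST_BILINEAR,
                    (SwsFilter) null, (SwsFilter) null, (DoublePointer) null);
        }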


    


    Problems with the Ideas

    The problem I see with the first idea is that I am assuming there is no hidden horizontal offset (or, conversely, that the full width of the video is actually rendered on screen). It would also only give an approximate result, since I would have to measure the offsets, the screen width, and the screen height by hand with the same method I currently use to get the screen coordinates.

    


    With the second idea, aside from the question of whether it will even work, the problem is that I can no longer measure which screen coordinates I want, because I can't seem to get rid of those black bars in VLC.

    


    Some Testing I did

    Given that my entire problem would (maybe?) be solved if the black bars were part of the video itself, I checked whether they are: the first pixel in the frame data was indeed black. The problem, then, is that if the black bars are entirely part of the video, why are the colors I get for some pixels slightly off (I am checking with ColorSync Utility)? These colors aren't just slightly wrong; they seem to belong to a slightly offset region of the video.

    


    However, this might be partly explained if ffmpeg stored the frame data right to left. When I fed the top-left corner of the video into the program (with the video location again assumed equal to the screen location) and looked at the frame's pixel data for that location, I got not white but a bluish color, much like the glove in the top-right corner.

    


    The Watered Down Code

    


    struct SwsContext *rescaler = NULL;
    rescaler = sws_getContext(codec_context->width, codec_context->height,
                              codec_context->pix_fmt,
                              codec_context->width, codec_context->height,
                              AV_PIX_FMT_RGB0, SWS_FAST_BILINEAR,
                              NULL, NULL, NULL);

    // Read packets (containers for frames, not guaranteed to hold a full
    // frame) and decode them into frames.
    while (av_read_frame(avformatcontext, packet) >= 0)
    {
        // skip packets that are not from the video stream
        if (packet->stream_index != video_index)
        {
            continue;
        }

        // send the packet to the decoder
        if (avcodec_send_packet(codec_context, packet) < 0)
        {
            perror("Failed to decode packet");
        }

        // get a frame back from the decoder
        int response = avcodec_receive_frame(codec_context, frame);
        if (response == AVERROR(EAGAIN))
        {
            continue;
        }
        else if (response < 0)
        {
            perror("Failed to get frame");
        }

        // convert the frame to RGB0 (4 bytes per pixel, one per channel plus padding)
        response = sws_scale_frame(rescaler, scaled_frame, frame);
        if (response < 0)
        {
            perror("Failed to change colorspace");
        }

        // locate the pixel and read its channels
        int pixel_number = y * (scaled_frame->linesize[0] / 4) + x; // linesize is in bytes; /4 = pixels per row
        int byte_number = 4 * (pixel_number - 1); // byte offset of the pixel (note: the -1 steps back one pixel)
        // start of debugging things
        int temp = scaled_frame->data[0][byte_number];          // R
        int one_after = scaled_frame->data[0][byte_number + 1]; // G
        int two_after = scaled_frame->data[0][byte_number + 2]; // B
        int als; // where I put the breakpoint
        // end of debugging things
    }
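
    For comparison, the conventional indexing of a packed RGB0 frame (this is the standard ffmpeg layout, not code from the question; data stands for frame->data[0] and linesize for frame->linesize[0]): rows run top to bottom and pixels left to right, with each row starting at a multiple of linesize bytes:

        // Read pixel (x, y) from plane 0 of a packed RGB0 frame.
        // linesize is the row stride in bytes and may include padding,
        // so rows are indexed in bytes, not in pixels.
        int base = y * linesize + 4 * x; // no off-by-one adjustment
        int r = data[base] & 0xFF;
        int g = data[base + 1] & 0xFF;
        int b = data[base + 2] & 0xFF;
        // data[base + 3] is the unused "0" padding byte of RGB0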


    


    In Summary

    


    I have no idea what is happening.

    


    I take the data for a pixel and compare it to what ColorSync Utility says should be there, but it is always slightly off, as though the pixel I was actually reading were offset from the one I thought I was reading. So I want a way to get the pixel location in the video from a screen coordinate while the video is fullscreen, but I have no idea how (aside from a few ideas that are probably bad at best).

    


    Also, does FFmpeg store the frame data right to left?

    


    A Video Better Showing My Problem

    


    https://www.youtube.com/watch?v=NSEErs2lC3A

    


  • How to resolve pixel jittering in javacv generated video

    16 September 2024, by Vikram

    I am trying to generate a video from images that have some overlaid text and PNG icons, using the javacv library for this. The final output video looks pixelated; I don't understand why, since I have no video-processing domain knowledge and am a beginner at this. I know that the video bitrate and the choice of video encoder are important factors for video quality, and that there are many more factors too.

    


    I am providing two outputs for comparison: one generated with javacv and the other with the moviepy library.

    


    Please watch them in full screen, since the problem I am talking about only shows up there; you will see the pixels dancing in the javacv-generated video, while the Python output looks stable.

    


    https://imgur.com/a/aowNnKg - javacv generated

    


    https://imgur.com/a/eiLXrbk - Moviepy generated

    


    I am using the same encoder in both implementations.

    


    Encoder: libx264
    Bitrate: 800 kbps for javacv, 500 kbps for moviepy
    Frame rate: 24 fps for both
    Output video size: 7 MB (javacv), 5 MB (moviepy)




    


    The output generated by javacv is bigger than the moviepy-generated video.

    


    Here is my Java configuration for FFmpegFrameRecorder:

    


    FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(this.outputPath,
            this.screensizeX, this.screensizeY);
    if ("libx264".equals(this.videoCodecName)) {
        recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
    }
    recorder.setFormat("mp4");
    recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P); // planar 4:2:0
    recorder.setVideoBitrate(800000);
    recorder.setImageWidth(this.screensizeX);
    recorder.setFrameRate(24);
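
    One note: with libx264, an explicit bitrate switches rate control to ABR, and a CRF set alongside it is ignored (which may be why the CRF experiment mentioned below shows no effect); moviepy passes no bitrate, so x264 falls back to its default CRF. A minimal sketch of constant-quality settings through FFmpegFrameRecorder.setVideoOption, with illustrative values:

        recorder.setVideoBitrate(0);               // clear the fixed bitrate so CRF can take effect
        recorder.setVideoOption("crf", "23");      // constant quality; lower = better
        recorder.setVideoOption("preset", "slow"); // slower preset = denser encoding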



    


    And here is the Python configuration for writing the video file:

    


    Full_final_clip.write_videofile(
        f"{video_folder_path}/{FILE_ID}_INTERMEDIATE.mp4",
        codec="libx264",
        audio_codec="aac",
        temp_audiofile=f"{FILE_ID}_INTER_temp-audio.m4a",
        remove_temp=True,
        fps=24,
    )



    


    As you can see, I am not specifying a bitrate in Python, but I checked that the bitrate of the final output is around 500 kbps, which is lower than what I specified in Java; yet the Java-generated video quality seems poor.

    


    I have also tried setting a CRF value, but it seems to have no impact when used.

    


    Increasing the bitrate improves quality somewhat, but at the cost of file size, and the generated output still looks pixelated.

    


    Can someone please point out what the issue might be, and how Python generates better-quality video when both libraries use ffmpeg as the backend?

    


    Edit 1: I am also adding the code used to create the zoom animation over consecutive frames, as I read somewhere that this might be the cause of the pixel jitter. Please take a look and let me know if there is any improvement we can make to remove the jittering.

    


    private Mat applyZoomEffect(Mat frame, int currentFrame, long effectFrames, int width, int height, String mode, String position, double speed) {
        long totalFrames = effectFrames;
        double i = currentFrame;
        if ("out".equals(mode)) {
            i = totalFrames - i;
        }
        // zoom factor grows linearly with the frame index
        double zoom = 1 + (i * ((0.1 * speed) / totalFrames));

        // Translation that keeps the zoom anchored at the requested position
        double x = 0, y = 0;
        switch (position.toLowerCase()) {
            case "center":
                // keep the frame center fixed while scaling
                x = (width - (width * zoom)) / 2.0;
                y = (height - (height * zoom)) / 2.0;
                break;
        }

        // 2x3 affine matrix [zoom 0 x; 0 zoom y]
        double[][] rowData = {{zoom, 0, x}, {0, zoom, y}};
        double[] flatData = flattenArray(rowData); // helper (not shown) flattening row-major

        // Wrap the coefficients in an OpenCV Mat of doubles
        DoublePointer doublePointer = new DoublePointer(flatData);
        Mat mat = new Mat(2, 3, org.bytedeco.opencv.global.opencv_core.CV_64F);
        mat.data().put(doublePointer);

        // Apply the affine transform with Lanczos resampling
        Mat transformedFrame = new Mat();
        opencv_imgproc.warpAffine(frame, transformedFrame, mat,
                new Size(frame.cols(), frame.rows()),
                opencv_imgproc.INTER_LANCZOS4, 0, new Scalar(0, 0, 0, 0));
        return transformedFrame;
    }
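
    One possible way to attack the jitter (an illustrative sketch, not a verified fix): per-frame Lanczos resampling of sharp text and icons shimmers because edge pixels land on a different sample grid each frame; rendering the zoom on an upscaled copy and downscaling once with area interpolation averages several samples per output pixel:

        // Supersample: zoom a 2x upscaled copy, then reduce with INTER_AREA.
        Mat big = new Mat();
        opencv_imgproc.resize(frame, big,
                new Size(frame.cols() * 2, frame.rows() * 2),
                0, 0, opencv_imgproc.INTER_CUBIC);
        Mat zoomedBig = applyZoomEffect(big, currentFrame, effectFrames,
                big.cols(), big.rows(), mode, position, speed);
        Mat smooth = new Mat();
        opencv_imgproc.resize(zoomedBig, smooth,
                new Size(frame.cols(), frame.rows()),
                0, 0, opencv_imgproc.INTER_AREA);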


    

