Advanced search

Media (1)

Keyword: - Tags -/censure

Other articles (35)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: it is fully customisable graphically to match the chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • Configurable image and logo sizes

    9 February 2011, by

    In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can vary from one theme to another, they can be defined directly in the theme, sparing the user from having to configure them manually after changing the appearance of the site.
    These image sizes are also available in the specific configuration of MediaSPIP Core. The maximum size of the site logo in pixels, allowing (...)

  • Farm management

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to accommodate the needs of the different channels.
    To begin with, it uses the "Gestion de mutualisation" plugin

On other sites (5283)

  • ffmpeg: programmatically use libavcodec to encode and decode a raw bitmap, all in just a few milliseconds and with a small compressed size, on Raspberry Pi 4

    15 March 2023, by Jerry Switalski

    We need to compress the 1024x2048 image we produce from raw 32-bit RGBA (8 MB) down to roughly JPEG size (200-500 KB) on a Raspberry Pi 4, all in a C/C++ program.


    The compression needs to take just a few milliseconds, otherwise it is pointless for us.


    We decided to try one of the supported encoders using the ffmpeg dev library and C/C++ code.


    The problem we are facing is that when we edited the encoding example provided by the ffmpeg developers, the timings we are getting are unacceptable.


    Here you can see the edited code where the frames are created:


    for (i = 0; i < 25; i++)
{
#ifdef MEASURE_TIME
        auto start_time = std::chrono::high_resolution_clock::now();
        std::cout << "START Encoding frame...\n";
#endif
    fflush(stdout);

    ret = av_frame_make_writable(frame);
    if (ret < 0)
        exit(1);

    // Here I try to convert our 32-bit RGBA image to the YUV pixel format:

    for (y = 0; y < c->height; y++)
    {
        for (x = 0; x < c->width; x++)
        {
            int imageIndexY = y * frame->linesize[0] + x;

            uint32_t rgbPixel = ((uint32_t*)OutputDataImage)[imageIndexY];

            double Y, U, V;
            // Extract the R, G and B bytes from the packed pixel (R assumed to be in the most significant byte)
            uint8_t R = rgbPixel >> 24;
            uint8_t G = rgbPixel >> 16;
            uint8_t B = rgbPixel >> 8;

            YUVfromRGB(Y, U, V, (double)R, (double)G, (double)B);
            frame->data[0][imageIndexY] = (uint8_t)Y;

            if (y % 2 == 0 && x % 2 == 0)
            {
                int imageIndexU = (y / 2) * frame->linesize[1] + (x / 2);
                int imageIndexV = (y / 2) * frame->linesize[2] + (x / 2);

                frame->data[1][imageIndexU] = (uint8_t)U;
                frame->data[2][imageIndexV] = (uint8_t)V;
            }
        }
    }

    frame->pts = i;

    /* encode the image */
    encode(c, frame, pkt, f);

#ifdef MEASURE_TIME
        auto end_time = std::chrono::high_resolution_clock::now();
        auto time = end_time - start_time;
        std::cout << "FINISHED Encoding frame in: " << time / std::chrono::milliseconds(1) << "ms.\n";

#endif
    }
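
    A likely bottleneck, independent of the codec, is the per-pixel double-precision conversion above. As a minimal sketch (assuming the same c, frame and OutputDataImage as in the snippet, and tightly packed RGBA rows), libswscale can do the RGBA-to-YUV420P conversion in optimized code with a single call per frame:

    // Minimal sketch, not the original code: pixel format conversion via libswscale.
    // The FFmpeg headers are C, so wrap them in extern "C" when building as C++;
    // libavcodec/avcodec.h is assumed to be included already, as in the example.
    extern "C" {
    #include <libswscale/swscale.h>
    }

    static SwsContext *sws_ctx = nullptr;

    void rgba_to_yuv420p(AVCodecContext *c, AVFrame *frame, const uint8_t *OutputDataImage)
    {
        if (!sws_ctx) {
            // Same dimensions in and out: no scaling, only the pixel format changes.
            sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_RGBA,
                                     c->width, c->height, AV_PIX_FMT_YUV420P,
                                     SWS_POINT, nullptr, nullptr, nullptr);
        }
        const uint8_t *src[1]        = { OutputDataImage };
        const int      src_stride[1] = { 4 * c->width };   // 4 bytes per RGBA pixel, rows assumed packed
        sws_scale(sws_ctx, src, src_stride, 0, c->height,
                  frame->data, frame->linesize);
    }

    The context is created once and reused for every frame, so the nested per-pixel loop above collapses into a single call per frame.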


    


    Here are some important parts from earlier in that function:


    codec_name = "mpeg4";

codec = avcodec_find_encoder_by_name(codec_name);

c = avcodec_alloc_context3(codec);
    
c->bit_rate = 1000000;  
c->width = IMAGE_WIDTH;
c->height = IMAGE_HEIGHT;
c->gop_size = 1;
c->max_b_frames = 1;
c->pix_fmt = AV_PIX_FMT_YUV420P;


    IMAGE_WIDTH and IMAGE_HEIGHT are 1024 and 2048 respectively.
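
    One knob worth checking, offered as an assumption rather than something taken from the example: libavcodec encoders run single-threaded unless asked otherwise, and the Pi 4 has four cores. AVCodecContext exposes thread_count and thread_type for this; whether the mpeg4 encoder actually benefits from slice threading here is something to verify on the target build. A minimal sketch, set before avcodec_open2() alongside the other fields:

    // Hedged sketch: request multi-threaded encoding; if the encoder does not
    // support it, libavcodec simply keeps encoding single-threaded.
    c->thread_count = 4;               // Raspberry Pi 4 has four cores
    c->thread_type  = FF_THREAD_SLICE; // slice-level parallelism within a frame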

    


    The results I got when running this on a Raspberry Pi 4 look like this:


    START Encoding frame...
Send frame   0
FINISHED Encoding frame in: 40ms.
START Encoding frame...
Send frame   1
Write packet   0 (size=11329)
FINISHED Encoding frame in: 60ms.
START Encoding frame...
Send frame   2
Write packet   1 (size=11329)
FINISHED Encoding frame in: 58ms.


    Since I am completely green at encoding and using codecs, my question is how to do this the correct and best way, meaning one that reduces the timing to a few milliseconds; I am also not sure whether the codec or the pixel format I chose is best for the job.
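
    On the codec question, a hedged comparison worth trying (these are standard FFmpeg encoder names, but whether each one is compiled into your build and fast enough on the Pi 4 is an assumption to verify): the intra-only mjpeg encoder, since the target size is "about jpeg", and the Pi's hardware H.264 encoder exposed as h264_v4l2m2m:

    // Hypothetical alternatives to "mpeg4", not taken from the question;
    // the rest of the setup stays the same as in the configuration above.
    codec = avcodec_find_encoder_by_name("mjpeg");            // intra-only, JPEG-like packets
    // codec = avcodec_find_encoder_by_name("h264_v4l2m2m");  // Pi hardware H.264 via V4L2 M2M

    c = avcodec_alloc_context3(codec);
    c->width   = IMAGE_WIDTH;
    c->height  = IMAGE_HEIGHT;
    c->pix_fmt = AV_PIX_FMT_YUVJ420P;   // the mjpeg encoder traditionally expects full-range YUV

    If every capture really has to stand alone as one compressed image, an intra-only encoder is a more natural fit than a video codec configured with gop_size = 1.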

    


    You can see the rest of the meaningful code here (the encode() function can be found in the ffmpeg developer example I linked to above):


    void RGBfromYUV(double& R, double& G, double& B, double Y, double U, double V)
{
    Y -= 16;
    U -= 128;
    V -= 128;
    R = 1.164 * Y + 1.596 * V;
    G = 1.164 * Y - 0.392 * U - 0.813 * V;
    B = 1.164 * Y + 2.017 * U;
}
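
    The YUVfromRGB() helper called in the frame loop is not shown; as an assumption, a counterpart to the RGBfromYUV() above using the matching BT.601 limited-range coefficients would look like this:

    // Hypothetical counterpart to RGBfromYUV() above (BT.601, limited range);
    // the question's actual implementation is not shown.
    void YUVfromRGB(double& Y, double& U, double& V, double R, double G, double B)
    {
        Y =  16 + 0.257 * R + 0.504 * G + 0.098 * B;
        U = 128 - 0.148 * R - 0.291 * G + 0.439 * B;
        V = 128 + 0.439 * R - 0.368 * G - 0.071 * B;
    }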


    


  • Scale image with ffmpeg in bash script

    17 June 2014, by Brian Bennett

    I’m playing with jclem’s Gifify bash script as a quick way to make GIFs for documentation. It runs on ffmpeg and ImageMagick, and I’m trying to find a way to add a variable that scales the produced GIF so I don’t have to go back and scale it again afterwards. I thought I added the d (resize) variable correctly, but the script fails and just prints the help contents. It does not show my added variable in that help readout. Any ideas?

    Update

    I solved the problem of it printing the help contents rather than running the script, but now I’m receiving an error about the -scale parameter.

    convert: invalid argument for option `-scale': -vf @ error/convert.c/ConvertImageCommand/2513.

    Is this because of my if statement syntax for the scale parameter below?

    #!/bin/bash

    function printHelpAndExit {
     echo 'Usage:'
     echo '  gifify -conx filename'
     echo ''
     echo 'Options: (all optional)'
     echo '  c CROP:   The x and y crops, from the top left of the image, i.e. 640:480'
     echo '  o OUTPUT: The basename of the file to be output (default "output")'
     echo '  n:        Do not upload the resulting image to CloudApp'
     echo '  r FPS:    Output at this (frame)rate (default 10)'
     echo '  s SPEED:  Output using this speed modifier (default 1)'
     echo '            NOTE: GIFs max out at 100fps depending on platform. For consistency,'
     echo '            ensure that FPSxSPEED is not > ~60!'
     echo '  x:        Remove the original file and resulting .gif once the script is complete'
     echo '  d SCALE:  Scales GIF image to specified dimensions (default no scale)'
     echo ''
     echo 'Example:'
     echo '  gifify -c 240:80 -o my-gif -x my-movie.mov'
     exit $1
    }

    noupload=0
    fps=10
    speed=1

    OPTERR=0

    while getopts "c:o:r:s:d:nx" opt; do
     case $opt in
       c) crop=$OPTARG;;
       h) printHelpAndExit 0;;
       o) output=$OPTARG;;
       n) noupload=1;;
       r) fps=$OPTARG;;
       s) speed=$OPTARG;;
       x) cleanup=1;;
       d) scale=$OPTARG;;
       *) printHelpAndExit 1;;
     esac
    done

    shift $(( OPTIND - 1 ))

    filename=$1

    if [ -z ${output} ]; then
     output=$filename
    fi

    if [ -z $filename ]; then printHelpAndExit 1; fi

    if [ $crop ]; then
     crop="-vf crop=${crop}:0:0"
    else
     crop=
    fi

    if [ $scale ]; then
     scale="-vf scale=${scale}:0:0"
    else
     scale=
    fi

    # -delay uses time per tick (a tick defaults to 1/100 of a second)
    # so 60fps == -delay 1.666666 which is rounded to 2 because convert
    # apparently stores this as an integer. To animate faster than 60fps,
    # you must drop frames, meaning you must specify a lower -r. This is
    # due to the GIF format as well as GIF renderers that cap frame delays
    # < 3 to 3 or sometimes 10. Source:
    # http://humpy77.deviantart.com/journal/Frame-Delay-Times-for-Animated-GIFs-214150546
    echo 'Exporting movie...'
    delay=$(bc -l <<< "100/$fps/$speed")
    temp=$(mktemp /tmp/tempfile.XXXXXXXXX)

    ffmpeg -loglevel panic -i $filename $crop -r $fps -f image2pipe -vcodec ppm - >> $temp

    echo 'Making gif...'
    cat $temp | convert +dither -layers Optimize -delay $delay -scale $scale - ${output}.gif

    if [ $noupload -ne 1 ]; then
     open -a Cloud ${output}.gif

     echo `pbpaste`

     if [ $cleanup ]; then
       rm $filename
       rm ${output}.gif
     fi
    else
     echo ${output}.gif
    fi

  • How to update a byte array in a method, without running it again?

    18 February 2016, by AR792

    I have a class (an AsyncTask) which does image processing and generates yuv bytes continuously, at around 200 ms intervals.

    Now I send these yuv bytes to another method where they are recorded using the FFmpeg frame recorder:

    public void recordYuvData() {

           byte[] yuv = getNV21();
           System.out.println(yuv.length + "  returned yuv bytes  ");
           if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
               startTime = System.currentTimeMillis();
               return;
           }
           if (RECORD_LENGTH > 0) {
               int i = imagesIndex++ % images.length;
               yuvimage = images[i];
               timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
           }
           /* get video data */
           if (yuvimage != null && recording) {
               ((ByteBuffer) yuvimage.image[0].position(0)).put(yuv);

               if (RECORD_LENGTH <= 0) {
                   try {
                       long t = 1000 * (System.currentTimeMillis() - startTime);
                       if (t > recorder.getTimestamp()) {
                           recorder.setTimestamp(t);
                       }
                       recorder.record(yuvimage);
                   } catch (FFmpegFrameRecorder.Exception e) {

                       e.printStackTrace();
                   }
               }
           }
       }

    This method, recordYuvData(), is initiated on a button click.

    1. If I initiate it only once, then only the initial image gets recorded; the rest are not.

    2. If I initiate this each time at the end of the image processing, it records, but this leads to a ’weird’ fps count in the video, and finally to an application crash after some time.

      For the above, what I feel is that at the end of image processing a new instance of recordYuvData() is created without ending the previous one, accumulating many instances of recordYuvData(). [correct me if I am wrong]

    So, how do I update ONLY the yuv bytes in the method, without running it again?

    Thanks....!

    Edit:

    On Click:

       record.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
               recordYuvData();
               startRecording();
           }
       });

    getNV21()

    byte[] getNV21(Bitmap bitmap) {

       int inputWidth = 1024;
       int inputHeight = 640;
       int[] argb = new int[inputWidth * inputHeight];

       bitmap.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
       System.out.println(argb.length + "@getpixels ");


       byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
       encodeYUV420SP(yuv, argb, inputWidth, inputHeight);

       return yuv;

    }

    void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
       final int frameSize = width * height;

       int yIndex = 0;
       int uvIndex = frameSize;
       System.out.println(yuv420sp.length + " @encoding " + frameSize);

       int a, R, G, B, Y, U, V;
       int index = 0;
       for (int j = 0; j < height; j++) {
           for (int i = 0; i < width; i++) {

               a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
               R = (argb[index] & 0xff0000) >> 16;
               G = (argb[index] & 0xff00) >> 8;
               B = (argb[index] & 0xff) >> 0;

               // well known RGB to YUV algorithm

               Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
               U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
               V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

               // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
               //    meaning for every 4 Y pixels there are 1 V and 1 U.  Note the sampling is every other
               //    pixel AND every other scanline.
               yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
               if (j % 2 == 0 && index % 2 == 0) {
                   yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                   yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
               }

               index++;
           }
       }
    }