Media (0)

Word: - Tags -/performance

No media matching your criteria is available on the site.

Other articles (22)

  • Media-specific libraries and software

    10 December 2010, by

    For correct and optimal operation, several things need to be taken into account.
    After installing apache2, mysql and php5, it is important to install the other required software, whose installation is described in the related links: a set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio so as to support as many file types as possible (cf. this tutorial); FFMpeg with the maximum number of decoders and (...)
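    Purely as an illustration of the kind of build this teaser refers to (the linked tutorials remain the reference, and package names differ between distributions), installing those codec libraries on a Debian-like system and then configuring an FFmpeg source build against them might look roughly like this:

       # illustrative sketch only -- follow the MediaSPIP tutorials for the actual procedure
       sudo apt-get install libx264-dev libtheora-dev libvpx-dev
       # from an FFmpeg source checkout, enable the external codec libraries installed above
       ./configure --enable-gpl --enable-libx264 --enable-libtheora --enable-libvpx
       make && sudo make install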

  • Other interesting software

    13 April 2011, by

    We don't claim to be the only ones doing what we do, and we certainly don't claim to be the best; we just try to do it well and to keep getting better.
    The list below covers software that is more or less similar to MediaSPIP, or whose goals MediaSPIP more or less tries to match.
    We don't know these projects and haven't tried them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original downloadable in case the original document cannot be read in a web browser; and retrieving the metadata of the original document in order to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

On other sites (6112)

  • Dreamcast Development Desktop

    28 March 2011, by Multimedia Mike — Sega Dreamcast

    Some people are curious about what kind of equipment is required to program a Sega Dreamcast. This is my setup:



    It’s a bit overcomplicated. The only piece in that picture which doesn’t play a role in the Dreamcast development process is the scanner. The Eee PC does the heavy lifting of development (i.e., text editing and cross compilation) and uploads to the Dreamcast via a special serial cable. Those are the most essential parts and are really the only pieces necessary for a lot of algorithmic stuff (things that can be validated via a serial console). But then I have to go up a level where I output video. That’s where things get messy.



    The Mac Mini and giant monitor really just act as a glorified TV in this case. Ideally, it will be more than that. The DC outputs audio and video via composite cables to a Canopus DV capture bridge. That’s connected via FireWire to the external hard drive underneath the Mac Mini, which is connected to the Mac. Adobe Premiere Pro handles the DV capture / display.

    One day I hope to have something worthwhile to capture.

  • How to update a byte array in a method, without running it again?

    18 February 2016, by AR792

    I have a class (an AsyncTask) which does image processing and generates yuv bytes continuously, at around a 200 ms interval.

    Now I send these yuv bytes to another method where they are recorded using the FFmpeg frame recorder:

    public void recordYuvData() {

           byte[] yuv = getNV21();
           System.out.println(yuv.length + "  returned yuv bytes  ");
           if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
               startTime = System.currentTimeMillis();
               return;
           }
           if (RECORD_LENGTH > 0) {
               int i = imagesIndex++ % images.length;
               yuvimage = images[i];
               timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
           }
           /* get video data */
           if (yuvimage != null && recording) {
               ((ByteBuffer) yuvimage.image[0].position(0)).put(yuv);

               if (RECORD_LENGTH <= 0) {
                   try {
                       long t = 1000 * (System.currentTimeMillis() - startTime);
                       if (t > recorder.getTimestamp()) {
                           recorder.setTimestamp(t);
                       }
                       recorder.record(yuvimage);
                   } catch (FFmpegFrameRecorder.Exception e) {

                       e.printStackTrace();
                   }
               }
           }
       }

    This method, recordYuvData(), is initiated on a button click.

    1. If I initiate it only once, then only the initial image gets recorded; the rest are not.

    2. If I initiate it each time image processing finishes, it records, but this leads to a "weird" fps count in the video, and eventually the application crashes after some time.

      For the above, what I think is happening is that at the end of each image-processing pass a new invocation of recordYuvData() starts without the previous one ending, so many invocations of recordYuvData() accumulate. [correct me if I am wrong]

    So, how do I update ONLY the yuv bytes in the method without running it again?

    Thanks....!

    Edit:

    On Click:

       record.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
               recordYuvData();
               startRecording();
           }
       });

    getNV21()

    byte[] getNV21(Bitmap bitmap) {

       int inputWidth = 1024;
       int inputHeight = 640;
       int[] argb = new int[inputWidth * inputHeight];

       bitmap.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
       System.out.println(argb.length + "@getpixels ");


       byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
       encodeYUV420SP(yuv, argb, inputWidth, inputHeight);

       return yuv;

    }

    void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
       final int frameSize = width * height;

       int yIndex = 0;
       int uvIndex = frameSize;
       System.out.println(yuv420sp.length + " @encoding " + frameSize);

       int a, R, G, B, Y, U, V;
       int index = 0;
       for (int j = 0; j < height; j++) {
           for (int i = 0; i < width; i++) {

               a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
               R = (argb[index] & 0xff0000) >> 16;
               G = (argb[index] & 0xff00) >> 8;
               B = (argb[index] & 0xff) >> 0;

               // well known RGB to YUV algorithm

               Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
               U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
               V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

               // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
               //    meaning for every 4 Y pixels there are 1 V and 1 U.  Note the sampling is every other
               //    pixel AND every other scanline.
               yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
               if (j % 2 == 0 && index % 2 == 0) {
                   yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                   yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
               }

               index++;
           }
       }
    }
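    For what it's worth, one common way to read this question is that the recording method should receive each new frame rather than fetch it itself. A purely hypothetical sketch (class and field names are assumed, not taken from the question) of that shape, where the recorder state is set up once and only the NV21 buffer changes between calls:

       import java.nio.ByteBuffer;

       import org.bytedeco.javacv.FFmpegFrameRecorder;
       import org.bytedeco.javacv.Frame;

       // Hypothetical sketch only: the recorder and the reusable Frame are created once;
       // each call just overwrites the frame's buffer with the latest NV21 bytes.
       class YuvFrameWriter {
           private final FFmpegFrameRecorder recorder; // assumed already configured and started
           private final Frame yuvImage;               // pre-allocated once, reused on every call
           private final long startTime = System.currentTimeMillis();

           YuvFrameWriter(FFmpegFrameRecorder recorder, Frame yuvImage) {
               this.recorder = recorder;
               this.yuvImage = yuvImage;
           }

           // Called once per processed frame, e.g. writer.recordYuvData(getNV21(bitmap))
           void recordYuvData(byte[] yuv) throws FFmpegFrameRecorder.Exception {
               ((ByteBuffer) yuvImage.image[0].position(0)).put(yuv); // overwrite, don't reallocate
               long t = 1000 * (System.currentTimeMillis() - startTime);
               if (t > recorder.getTimestamp()) {
                   recorder.setTimestamp(t);
               }
               recorder.record(yuvImage);
           }
       }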
  • Scale image with ffmpeg in bash script

    17 June 2014, by Brian Bennett

    I'm playing with jclem's Gifify bash script as a quick way to make GIFs for documentation. It runs on ffmpeg and ImageMagick, and I'm trying to add an option that scales the produced GIF so I don't have to go back and resize it afterwards. I thought I had added the d (resize) option correctly, but the script fails and just prints the help contents, and my added option doesn't show up in that help readout. Any ideas?

    Update

    I solved the problem of the script printing the help contents instead of running, but now I'm getting an error about the -scale parameter:

    convert: invalid argument for option `-scale': -vf @ error/convert.c/ConvertImageCommand/2513.

    Is this because of my if statement syntax for the scale parameter below?
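    For context, the error message suggests that by the time convert runs, $scale already contains a full "-vf scale=..." option, so convert sees the literal text -vf as the argument of its own -scale flag. A rough, untested sketch of how the filter handling might be reorganised instead (assuming $crop and $scale hold raw geometry strings such as 640:480 and 640:-1, and letting ffmpeg do the scaling so -scale can be dropped from convert); the full script as it currently stands follows below:

       # hypothetical rework, not the original script: build one ffmpeg -vf filter chain
       filters=()
       [ -n "$crop" ]  && filters+=("crop=${crop}:0:0")
       [ -n "$scale" ] && filters+=("scale=${scale}")

       vf=""
       if [ ${#filters[@]} -gt 0 ]; then
         vf="-vf $(IFS=,; echo "${filters[*]}")"
       fi

       ffmpeg -loglevel panic -i "$filename" $vf -r "$fps" -f image2pipe -vcodec ppm - >> "$temp"

       # no -scale here: the frames piped out of ffmpeg are already at the requested size
       cat "$temp" | convert +dither -layers Optimize -delay "$delay" - "${output}.gif"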

    #!/bin/bash

    function printHelpAndExit {
     echo 'Usage:'
     echo '  gifify -conx filename'
     echo ''
     echo 'Options: (all optional)'
     echo '  c CROP:   The x and y crops, from the top left of the image, i.e. 640:480'
     echo '  o OUTPUT: The basename of the file to be output (default "output")'
     echo '  n:        Do not upload the resulting image to CloudApp'
     echo '  r FPS:    Output at this (frame)rate (default 10)'
     echo '  s SPEED:  Output using this speed modifier (default 1)'
     echo '            NOTE: GIFs max out at 100fps depending on platform. For consistency,'
     echo '            ensure that FPSxSPEED is not > ~60!'
     echo '  x:        Remove the original file and resulting .gif once the script is complete'
     echo '  d SCALE:  Scales GIF image to specified dimensions (default no scale)'
     echo ''
     echo 'Example:'
     echo '  gifify -c 240:80 -o my-gif -x my-movie.mov'
     exit $1
    }

    noupload=0
    fps=10
    speed=1

    OPTERR=0

    while getopts "c:o:r:s:d:nx" opt; do
     case $opt in
       c) crop=$OPTARG;;
       h) printHelpAndExit 0;;
       o) output=$OPTARG;;
       n) noupload=1;;
       r) fps=$OPTARG;;
       s) speed=$OPTARG;;
       x) cleanup=1;;
       d) scale=$OPTARG;;
       *) printHelpAndExit 1;;
     esac
    done

    shift $(( OPTIND - 1 ))

    filename=$1

    if [ -z ${output} ]; then
     output=$filename
    fi

    if [ -z $filename ]; then printHelpAndExit 1; fi

    if [ $crop ]; then
     crop="-vf crop=${crop}:0:0"
    else
     crop=
    fi

    if [ $scale ]; then
     scale="-vf scale=${scale}:0:0"
    else
     scale=
    fi

    # -delay uses time per tick (a tick defaults to 1/100 of a second)
    # so 60fps == -delay 1.666666 which is rounded to 2 because convert
    # apparently stores this as an integer. To animate faster than 60fps,
    # you must drop frames, meaning you must specify a lower -r. This is
    # due to the GIF format as well as GIF renderers that cap frame delays
    # < 3 to 3 or sometimes 10. Source:
    # http://humpy77.deviantart.com/journal/Frame-Delay-Times-for-Animated-GIFs-214150546
    echo 'Exporting movie...'
    delay=$(bc -l <<< "100/$fps/$speed")
    temp=$(mktemp /tmp/tempfile.XXXXXXXXX)

    ffmpeg -loglevel panic -i $filename $crop -r $fps -f image2pipe -vcodec ppm - >> $temp

    echo 'Making gif...'
    cat $temp | convert +dither -layers Optimize -delay $delay -scale $scale - ${output}.gif

    if [ $noupload -ne 1 ]; then
     open -a Cloud ${output}.gif

     echo `pbpaste`

     if [ $cleanup ]; then
       rm $filename
       rm ${output}.gif
     fi
    else
     echo ${output}.gif
    fi