
Other articles (54)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-select fields. See the two images below for a comparison.
    To use it, enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen), enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
    This lets channel administrators configure these menus in detail.
    Menus created at site initialization
    By default, three menus are created automatically when the site is initialized: The main menu; identifier: barrenav; this menu is generally inserted at the top of the page, after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the templates; a page for configuring the site's home page; a page for configuring sectors;
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their specific display and features (...)

On other sites (5560)

  • bitmap to yuv, video recorded has only green pixels

19 January 2016, by UserAx

I am trying to convert a bitmap to yuv and record this yuv with the FFmpeg frame recorder...
    The video output contains only green pixels, though when I check the properties of this video it shows the expected frame rate and resolution...

    The yuv encoding part is correct, but I feel I am making a mistake somewhere else, most likely in returning the yuv bytes to the recording part (getByte(byte[] yuv)), because only there is the yuv.length printed to the console 0; all the other methods print a large value...

    Kindly help...
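
    For reference, the flow I suspect is going wrong, reduced to a minimal sketch (my assumption, not verified):

    // What I think happens: getByte() receives an empty array, and the buffer
    // that getNV21() fills and returns is never captured.
    byte[] yuv = new byte[]{};        // length 0 -- this is what getByte(byte[] yuv) receives
    getNV21(640, 480, bitmap);        // fills and returns an NV21 buffer, but the return value is discarded
    System.out.println(yuv.length);   // prints 0, matching the console output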

    @Override
    public void onCreate(Bundle savedInstanceState) {
       super.onCreate(savedInstanceState);
       setContentView(R.layout.activity_main);
       directory.mkdirs();

       addListenerOnButton();

       play=(Button)findViewById(R.id.buttonplay);
       stop=(Button)findViewById(R.id.buttonstop);
       record=(Button)findViewById(R.id.buttonstart);

       stop.setEnabled(false);
       play.setEnabled(false);


       record.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
               startRecording();
               getByte(new byte[]{});
           }
       });

       stop.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
               stopRecording();
           }
       });


       play.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) throws IllegalArgumentException, SecurityException, IllegalStateException {
               Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(String.valueOf(asmileys)));
               intent.setDataAndType(Uri.parse(String.valueOf(asmileys)), "video/mp4");
               startActivity(intent);
               Toast.makeText(getApplicationContext(), "Playing Video", Toast.LENGTH_LONG).show();
           }
       });

    }

    ......//......



    public void getByte(byte[] yuv) {
       getNV21(640, 480, bitmap);
       System.out.println(yuv.length + " ");
       if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
           startTime = System.currentTimeMillis();
           return;
       }
       if (RECORD_LENGTH > 0) {
           int i = imagesIndex++ % images.length;
           yuvimage = images[i];
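           // elapsed milliseconds * 1000 = microseconds; FFmpegFrameRecorder timestamps are in microseconds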
           timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
       }
       /* get video data */
       if (yuvimage != null && recording) {
           ((ByteBuffer) yuvimage.image[0].position(0)).put(yuv);

           if (RECORD_LENGTH <= 0) {
               try {
                   long t = 1000 * (System.currentTimeMillis() - startTime);
                   if (t > recorder.getTimestamp()) {
                       recorder.setTimestamp(t);
                   }
                   recorder.record(yuvimage);
               } catch (FFmpegFrameRecorder.Exception e) {
                   e.printStackTrace();
               }
           }
       }
    }

    public byte [] getNV21(int inputWidth, int inputHeight, Bitmap bitmap) {

       int[] argb = new int[inputWidth * inputHeight];

       bitmap.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);

       byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
       encodeYUV420SP(yuv, argb, inputWidth, inputHeight);

       bitmap.recycle();
       System.out.println(yuv.length + " ");
       return yuv;

    }

    void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
       final int frameSize = width * height;

       int yIndex = 0;
       // NV21: after the Y plane, V and U bytes are interleaved (V first),
       // so a single running index is used for the chroma plane.
       int uvIndex = frameSize;
       System.out.println(yuv420sp.length + " " + frameSize);

       int a, R, G, B, Y, U, V;
       int index = 0;
       for (int j = 0; j < height; j++) {
           for (int i = 0; i < width; i++) {

               a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
               R = (argb[index] & 0xff0000) >> 16;
               G = (argb[index] & 0xff00) >> 8;
               B = (argb[index] & 0xff) >> 0;

               // well known RGB to YUV algorithm

               Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
               U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
               V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

               // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
               //    meaning for every 4 Y pixels there are 1 V and 1 U.  Note the sampling is every other
               //    pixel AND every other scanline.
               yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
               if (j % 2 == 0 && index % 2 == 0) {
                   yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                   yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
               }

               index++;
           }
       }
    }
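
    As a sanity check on the buffer sizes involved (my own arithmetic, assuming 640x480 frames):

    int w = 640, h = 480;
    int ySize = w * h;                    // 307200 luma bytes, one per pixel
    int vuSize = w * h / 2;               // 153600 interleaved V/U bytes (one V + one U per 2x2 block)
    System.out.println(ySize + vuSize);   // 460800
    System.out.println(w * h * 3 / 2);    // 460800, the length getNV21() allocates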

    .....//.....

public void addListenerOnButton() {
       image = (ImageView) findViewById(R.id.imageView);
       image.setDrawingCacheEnabled(true);
       image.buildDrawingCache();
       bitmap = image.getDrawingCache();
       System.out.println(bitmap.getByteCount() + " ");

       button = (Button) findViewById(R.id.btn1);
       button.setOnClickListener(new OnClickListener() {
           @Override
           public void onClick(View view) {
               image.setImageResource(R.drawable.image1);
           }
       });

    ......//......

EDIT 1:

    I made a few changes to the code above:

    record.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
               startRecording();
               getByte();
           }
       });
    .....//....

public void getByte() {
       byte[] yuv = getNV21(640, 480, bitmap);

So now, in the console, I get the same yuv length in this method as the yuv length from the getNV21 method.

    But now I am getting a half black, half green screen (black above, green below) in the recorded video...

If I add these lines to the onCreate method:

    image = (ImageView) findViewById(R.id.imageView);
    image.setDrawingCacheEnabled(true);
    image.buildDrawingCache();
    bitmap = image.getDrawingCache();

I get distorted frames (each frame is 1/4th of the image, with colors mixed up here and there) in the video...
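
    One thing I plan to try (assuming the drawing-cache bitmap is not exactly 640x480, which could explain a partially filled buffer) is scaling the bitmap before converting it:

    // Sketch, untested: scale the bitmap to the recorder's frame size
    // so the NV21 buffer is filled completely.
    Bitmap scaled = Bitmap.createScaledBitmap(bitmap, 640, 480, true);
    byte[] yuv = getNV21(640, 480, scaled);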

All I am trying to learn is the image processing and the flow of byte[] data from one method to another; I am still a noob...

    Kindly help!

  • ffmpeg "End mismatch 1" warning, jpeg2000 to avi

11 April 2023, by jklebes

Trying to convert a directory of jpeg2000 grayscale images to a video with ffmpeg, I get warnings

    [0;36m[jpeg2000 @ 0x55d8fa1b68c0] [0m[0;33mEnd mismatch 1

    (and lots of

    Last message repeated <n> times

    )

    The command was

    ffmpeg -y -r 10 -start_number 1 -i <path>/surface_30///img_000%01d.jp2 -vcodec msmpeg4 -vf scale=1920:-1 -q:v 8 <path>//surface_30///surface_30.avi

    The output is

    ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
      built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0)
      configuration: --prefix=/home/jklebes001/miniconda3 --cc=/tmp/build/80754af9/ffmpeg_1587154242452/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --enable-avresample --enable-gmp --enable-hardcoded-tables --enable-libfreetype --enable-libvpx --enable-pthreads --enable-libopus --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame --disable-nonfree --enable-gpl --enable-gnutls --disable-openssl --enable-libopenh264 --enable-libx264
      libavutil      56. 31.100 / 56. 31.100
      libavcodec     58. 54.100 / 58. 54.100
      libavformat    58. 29.100 / 58. 29.100
      libavdevice    58.  8.100 / 58.  8.100
      libavfilter     7. 57.100 /  7. 57.100
      libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  5.100 /  5.  5.100
      libswresample   3.  5.100 /  3.  5.100
      libpostproc    55.  5.100 / 55.  5.100
    [0;36m[jpeg2000 @ 0x55cb44144480] [0m[0;33mEnd mismatch 1
    [0m    Last message repeated 1 times
        Last message repeated 2 times
        Last message repeated 3 times

    ...

    Last message repeated 73 times
    Input #0, image2, from '<path>//surface_30///img_000%01d.jp2':
      Duration: 00:00:00.20, start: 0.000000, bitrate: N/A
        Stream #0:0: Video: jpeg2000, gray, 6737x4869, 25 tbr, 25 tbn, 25 tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (jpeg2000 (native) -> msmpeg4v3 (msmpeg4))
    Press [q] to stop, [?] for help
    [0;36m[jpeg2000 @ 0x55cb4418e200] [0m[0;33mEnd mismatch 1
    [0m[0;36m[jpeg2000 @ 0x55cb441900c0] [0m[0;33mEnd mismatch 1

    ...


    (about 600 lines of "end mismatch" and "last message repeated" cut)


    ...

    [0m[0;36m[jpeg2000 @ 0x55cb4418e8c0] [0m[0;33mEnd mismatch 1
    [0mOutput #0, avi, to '<path>/surface_30///surface_30.avi':
      Metadata:
        ISFT            : Lavf58.29.100
        Stream #0:0: Video: msmpeg4v3 (msmpeg4) (MP43 / 0x3334504D), yuv420p, 1920x1388, q=2-31, 200 kb/s, 10 fps, 10 tbn, 10 tbc
        Metadata:
          encoder         : Lavc58.54.100 msmpeg4
        Side data:
          cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
    frame=    2 fps=0.8 q=8.0 size=       6kB time=00:00:00.20 bitrate= 227.1kbits/s speed=0.0844x
    frame=    5 fps=1.7 q=8.0 size=       6kB time=00:00:00.50 bitrate=  90.8kbits/s speed=0.172x
    frame=    5 fps=1.7 q=8.0 Lsize=     213kB time=00:00:00.50 bitrate=3494.7kbits/s speed=0.172x
    video:208kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.732246%

What is the meaning of characters like [0;33m here?

    I thought it might have something to do with bit depth and color format. Setting -pix_fmt gray had no effect, and indeed the format of the jp2 images is already detected as 8-bit gray.


    The output .avi exists and seems fine.


The same command line was previously used on jpeg files and works fine there. With jpeg, the output has the lines

    Input #0, image2, from '<path>/surface_30///img_000%01d.jpeg':
      Duration: 00:00:00.16, start: 0.000000, bitrate: N/A
        Stream #0:0: Video: mjpeg (Baseline), gray(bt470bg/unknown/unknown), 6737x4869 [SAR 1:1 DAR 6737:4869], 25 tbr, 25 tbn, 25 tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (mjpeg (native) -> msmpeg4v3 (msmpeg4))
    Press [q] to stop, [?] for help
    Output #0, avi, to '<path>/surface_30///surface_30.avi':
      Metadata:
        ISFT            : Lavf58.29.100
        Stream #0:0: Video: msmpeg4v3 (msmpeg4) (MP43 / 0x3334504D), yuv420p, 6737x4869 [SAR 1:1 DAR 6737:4869], q=2-31, 200 kb/s, 10 fps, 10 tbn, 10 tbc
        Metadata:
          encoder         : Lavc58.54.100 msmpeg4
        Side data:
          cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
    frame=    2 fps=0.0 q=8.0 size=    6662kB time=00:00:00.20 bitrate=272859.9kbits/s speed=0.334x
    frame=    3 fps=2.2 q=10.0 size=   10502kB time=00:00:00.30 bitrate=286764.2kbits/s speed=0.22x
    frame=    4 fps=1.9 q=12.3 size=   13574kB time=00:00:00.40 bitrate=277987.7kbits/s speed=0.19x
    frame=    4 fps=1.4 q=12.3 size=   13574kB time=00:00:00.40 bitrate=277987.7kbits/s speed=0.145x
    frame=    4 fps=1.4 q=12.3 Lsize=   13657kB time=00:00:00.40 bitrate=279702.3kbits/s speed=0.145x
    video:13652kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.041926%

detecting the mjpeg format and a similar, but more detailed, pixel format: gray(bt470bg/unknown/unknown), 6737x4869 [SAR 1:1 DAR 6737:4869].

What is the difference when switching the input to jp2?

  • Is there a way to crop a video given a videoURL in node js?

30 March 2021, by Radespy

I’m building an electron-react app and need to crop a video to a region [x, y, width, height] in the main process.

The video URL and buffer were generated in a React renderer component using mediaStream / mediaRecorder, with the URL created in the render process via URL.createObjectURL.

    I need to crop the video buffer directly (i.e. select a region of interest within the video) without having to download a file.


    I would then like to create a buffer from the cropped video to save in MongoDB as a base64 encoded string.


    I’ve looked at fluent-ffmpeg but this doesn’t seem to work with a URL or buffer and requires a path to a downloaded video file.


Does anyone know of a way to do this?

    Many thanks
