Other articles (103)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in OGV and WebM (supported by HTML5) and in MP4 (supported by Flash).
    Audio files are encoded in OGG (supported by HTML5) and in MP3 (supported by Flash).
    Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Farm management

    2 March 2010

    The farm as a whole is managed by "super admins".
    Some settings can be adjusted to meet the needs of the different channels.
    Initially, it relies on the "Gestion de mutualisation" plugin.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by (...)

On other websites (10403)

  • How to Read DJI FPV Feed as OpenCV Object?

    24 May 2019, by Walter Morawa

    I’ve officially spent a lot of time looking for a solution to reading DJI’s FPV feed as an OpenCV Mat object. I am probably overlooking something simple, since I am not too familiar with Image Encoding/Decoding.

    I apologize if I am missing something very basic, but I know I’m not the first person to have issues getting DJI’s FPV feed, and answering this question, especially if option 1 is possible, would be extremely valuable to many developers. Please consider upvoting this question, as I’ve thoroughly researched this issue and future developers who come across it will likely run into a bunch of the same issues I had.

    I’m willing to use FFmpeg or JavaCV if necessary, but that’s quite a hurdle for most Android developers, since it means dealing with C++, the NDK, the terminal for testing, and so on. That seems like overkill.

    I believe the issue lies in the fact that we need to decode both the byte array of length 6 (info array) and the byte array with current frame info simultaneously. Thanks in advance for your time.

    Basically, DJI’s FPV feed comes in a number of formats.

    1. Raw H264 (MPEG4) in VideoFeeder.VideoDataListener
       // The callback for receiving the raw H264 video data for camera live view
       mReceivedVideoDataListener = new VideoFeeder.VideoDataListener() {
           @Override
           public void onReceive(byte[] videoBuffer, int size) {
               //Log.d("BytesReceived", Integer.toString(videoStreamFrameNumber));
               if (videoStreamFrameNumber++%30 == 0){
                   //convert video buffer to opencv array
                   OpenCvAndModelAsync openCvAndModelAsync = new OpenCvAndModelAsync();
                   openCvAndModelAsync.execute(videoBuffer);
               }
               if (mCodecManager != null) {
                   mCodecManager.sendDataToDecoder(videoBuffer, size);
               }
           }
       };
    2. DJI also has its own Android decoder sample that uses FFmpeg to convert the stream to YUV format (a YUV-to-Mat conversion sketch follows after this list).
       @Override
       public void onYuvDataReceived(final ByteBuffer yuvFrame, int dataSize, final int width, final int height) {
           //In this demo, we test the YUV data by saving it into JPG files.
           //DJILog.d(TAG, "onYuvDataReceived " + dataSize);
           if (count++ % 30 == 0 && yuvFrame != null) {
               final byte[] bytes = new byte[dataSize];
               yuvFrame.get(bytes);
               AsyncTask.execute(new Runnable() {
                   @Override
                   public void run() {
                       if (bytes.length >= width * height) {
                           Log.d("MatWidth", "Made it");
                           YuvImage yuvImage = saveYuvDataToJPEG(bytes, width, height);
                           Bitmap rgbYuvConvert = convertYuvImageToRgb(yuvImage, width, height);

                           Mat yuvMat = new Mat(height, width, CvType.CV_8UC1);
                           yuvMat.put(0, 0, bytes);
                           //OpenCv Stuff
                       }
                   }
               });
           }
       }
    3. DJI also appears to have a "getRgbaData" function, but there is literally not a single example online or from DJI. Go ahead and Google "DJI getRgbaData"... There’s only the API documentation, which explains the self-explanatory parameters and return values but nothing else. I couldn’t figure out where to call this, and there doesn’t appear to be a callback function as there is with YUV. You can’t call it on the H264 byte array directly, but perhaps you can get it from the YUV data.
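
    As an aside on option 2, my current understanding is that the YUV bytes (I’m assuming NV21 here, which I haven’t confirmed from DJI’s docs) could be wrapped into a single-channel Mat and converted with cvtColor, roughly like this:

       // Sketch only: wrap the NV21 YUV bytes from onYuvDataReceived in a Mat and convert to BGR.
       // Assumes the callback delivers NV21 data of length width * height * 3 / 2 (unverified).
       Mat yuvMat = new Mat(height + height / 2, width, CvType.CV_8UC1);
       yuvMat.put(0, 0, bytes);
       Mat bgrMat = new Mat();
       Imgproc.cvtColor(yuvMat, bgrMat, Imgproc.COLOR_YUV2BGR_NV21);
       // bgrMat is now a height x width, 3-channel image ready for filtering/contours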

    Option 1 is much preferable to option 2, since the YUV output has quality issues. Option 3 would also likely involve a decoder.

    Here’s a screenshot of what DJI’s own YUV conversion produces (image: WalletPhoneYuv).

    I’ve looked at a bunch of things about how to improve the YUV, remove green and yellow colors and whatnot, but at this point if DJI can’t do it right, I don’t want to invest resources there.

    Regarding Option 1, I know FFmpeg and JavaCV seem like good options if I have to go the video-decoding route. However, both seem quite time consuming. This JavaCV H264 conversion seems unnecessarily complex. I found it via this relevant question.

    Moreover, from what I understand, OpenCV can’t read or write video files without FFmpeg, but I’m not trying to read a video file; I’m trying to read an H264/MPEG4 byte[] array. The following code seems to get positive results.

       /* Async OpenCV Code */
       private class OpenCvAndModelAsync extends AsyncTask<byte[], Void, double[]> {
           @Override
           protected double[] doInBackground(byte[]... params) {//Background Code Executing. Don't touch any UI components
               //get fpv feed and convert bytes to mat array
               Mat videoBufMat = new Mat(4, params[0].length, CvType.CV_8UC4);
               videoBufMat.put(0, 0, params[0]);
               //if I add this in it says the bytes are empty.
               //Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_ANYCOLOR);
               //encodeVideoBuf.release();
               Log.d("MatRgba", videoBufMat.toString());
               for (int i = 0; i < videoBufMat.rows(); i++){
                   for (int j = 0; j < videoBufMat.cols(); j++){
                       double[] rgb = videoBufMat.get(i, j);
                       Log.i("Matrix", "red: " + rgb[0] + " green: " + rgb[1] + " blue: " + rgb[2] + " alpha: "
                               + rgb[3] + " Length: " + rgb.length + " Rows: "
                               + videoBufMat.rows() + " Columns: " + videoBufMat.cols());
                   }
               }
               double[] center = openCVThingy(videoBufMat);
               return center;
           }
           @Override
           protected void onPostExecute(double[] center) {
               //handle ui or another async task if necessary
           }
       }

    Rows = 4, Columns > 30k. I get lots of RGB values that seem valid, such as red = 113, green = 75, blue = 90, alpha = 220 (a made-up example); however, I also get a ton of 0,0,0,0 values. That should be somewhat okay, since black is 0,0,0 (although I would have expected the alpha to be higher), and I have a black object in my image.

    However, when I try to compute the contours from this image, I almost always get that the moments (center x, y) are exactly in the center of the image. This error has nothing to do with my color filter or contours algorithm, as I wrote a script in Python and verified that I implemented it correctly on Android by reading a still image and getting exactly the same number of contours, positions, etc. in both Python and Android.
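
    For context, the openCVThingy step is essentially the usual color-filter / contours / moments pipeline; here is a rough sketch of what I mean by it (not my exact code, and the color range shown is just an example):

       // Rough sketch of the contour/centroid step (not my exact openCVThingy code).
       // Assumes a 4-channel RGBA Mat as input; the inRange bounds are placeholders.
       Mat rgb = new Mat();
       Imgproc.cvtColor(videoBufMat, rgb, Imgproc.COLOR_RGBA2RGB);
       Mat hsv = new Mat();
       Imgproc.cvtColor(rgb, hsv, Imgproc.COLOR_RGB2HSV);
       Mat mask = new Mat();
       Core.inRange(hsv, new Scalar(0, 70, 50), new Scalar(10, 255, 255), mask);
       List<MatOfPoint> contours = new ArrayList<>();
       Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
       if (!contours.isEmpty()) {
           MatOfPoint largest = Collections.max(contours,
                   (a, b) -> Double.compare(Imgproc.contourArea(a), Imgproc.contourArea(b)));
           Moments m = Imgproc.moments(largest);
           double centerX = m.m10 / m.m00;   // centroid x of the largest contour
           double centerY = m.m01 / m.m00;   // centroid y of the largest contour
       }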

    I noticed it has something to do with the videoBuffer byte size (bonus points if you can explain why every other length is 6!). The log below shows the sizes I’m seeing; a quick byte-inspection sketch follows it.

    2019-05-23 21:14:29.601 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2425
    2019-05-23 21:14:29.802 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2659
    2019-05-23 21:14:30.004 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:30.263 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6015
    2019-05-23 21:14:30.507 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:30.766 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4682
    2019-05-23 21:14:31.005 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:31.234 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2840
    2019-05-23 21:14:31.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4482
    2019-05-23 21:14:31.664 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:31.927 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4768
    2019-05-23 21:14:32.174 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:32.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4700
    2019-05-23 21:14:32.668 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:32.864 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4740
    2019-05-23 21:14:33.102 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:33.365 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4640
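
    To try to figure out what these buffers actually contain, I put together the quick check below. It assumes the feed is an Annex-B H264 bitstream with 0x00 00 00 01 start codes, which I haven’t confirmed from DJI’s docs, so treat it as a sketch:

       // Sketch: scan a received buffer for Annex-B start codes (00 00 00 01) and log the
       // NAL unit type of each unit. Assumes an Annex-B H264 bitstream (unverified).
       private void inspectH264Buffer(byte[] buf, int size) {
           for (int i = 0; i + 4 < size; i++) {
               if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 0 && buf[i + 3] == 1) {
                   int nalType = buf[i + 4] & 0x1F; // low 5 bits of the NAL header
                   // if this really is H264: 7 = SPS, 8 = PPS, 5 = IDR slice, 1 = non-IDR slice
                   Log.d("NalInspect", "offset " + i + " nalType " + nalType + " bufSize " + size);
               }
           }
       }

    If the 6-byte buffers turn out to begin with a start code, my guess is that they are parameter/info units rather than picture data, but that is only a guess.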

    My questions:

    I. Is this the correct way to read an H264 byte[] as a Mat?
    Assuming the format is RGBA, that would mean rows = 4 and columns = byte[].length, with CvType.CV_8UC4. Do I have the height and width correct? Something tells me the YUV height and width are off too. I was getting some meaningful results, but the contours were exactly in the center, just like with the H264.
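
    For reference, here is how I understand a decoded RGBA frame would normally map onto a Mat if the pixel width and height were known (a sketch of my understanding, not what my code above does):

    // Sketch: how decoded RGBA pixel data would normally map onto a Mat.
    // Assumes rgbaBytes holds decoded pixels of length width * height * 4,
    // not a compressed H264 buffer.
    Mat rgbaMat = new Mat(height, width, CvType.CV_8UC4); // rows = height, cols = width
    rgbaMat.put(0, 0, rgbaBytes);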

    II. Does OpenCV handle MP4 in Android like this? If not, do I need to use FFmpeg or JavaCV?
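
    The alternative I keep coming back to is Android’s own MediaCodec, which in principle can decode H264 into YUV buffers without FFmpeg or JavaCV. Here is a sketch of that path; I haven’t worked out how it interacts with DJI’s DJICodecManager, so this is just the plain Android API:

    // Sketch: decoding the raw H264 feed with Android's MediaCodec instead of FFmpeg/JavaCV.
    // Assumes Annex-B H264 input and a known width/height; unrelated to DJI's DJICodecManager.
    // (createDecoderByType throws IOException; error handling omitted.)
    MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
    MediaCodec decoder = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
    decoder.configure(format, null /* no Surface, so output arrives as YUV ByteBuffers */, null, 0);
    decoder.start();

    // For each videoBuffer received from VideoFeeder:
    int inIndex = decoder.dequeueInputBuffer(10000);
    if (inIndex >= 0) {
        ByteBuffer input = decoder.getInputBuffer(inIndex);
        input.clear();
        input.put(videoBuffer, 0, size);
        decoder.queueInputBuffer(inIndex, 0, size, System.nanoTime() / 1000, 0);
    }

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    int outIndex = decoder.dequeueOutputBuffer(info, 10000);
    if (outIndex >= 0) {
        ByteBuffer yuvOut = decoder.getOutputBuffer(outIndex);
        // yuvOut now holds one decoded YUV frame (layout given by decoder.getOutputFormat());
        // from here, the YUV-to-Mat conversion sketched earlier would apply.
        decoder.releaseOutputBuffer(outIndex, false);
    }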

    III. Does the int size have something to do with it? Why is the int size occasionally 6, and other times 2400 to 6000? I’ve heard about the difference between this frame’s information and information about the next frame, but I’m simply not knowledgeable enough to know how to apply that here.
    I’m starting to think this is where the issue lies. Since I need to get the 6-byte array for info about the next frame, perhaps my modulo 30 is incorrect. So should I pass the 29th or 31st frame as a format byte for each frame? How is that done in OpenCV, or are we doomed to use the complicated FFmpeg route?

    IV. Can I fix this using Imgcodecs? I was hoping OpenCV would natively figure out whether a given byte array holds the current frame’s pixels or info about the next frame. I added the code below, but I am getting an empty Mat back:

    Mat videoBufMat = Imgcodecs.imdecode(new MatOfByte(params[0]), Imgcodecs.IMREAD_UNCHANGED);

    This also is empty:

    Mat encodeVideoBuf = new Mat(4, params[0].length, CvType.CV_8UC4);
    encodeVideoBuf.put(0,0, params[0]);
    Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_UNCHANGED);

    V. Should I try converting the bytes into an Android JPEG and then importing it? Why does DJI’s YUV decoder look so complicated? It makes me wary of trying FFmpeg or JavaCV and inclined to just stick with the Android decoder or the OpenCV decoder.
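
    For what it’s worth, the JPEG route I have in mind looks roughly like the sketch below, assuming the YUV callback really delivers NV21 data (unverified):

    // Sketch of the "convert to an Android JPEG, then import into OpenCV" route.
    // Assumes the YUV callback delivers NV21 data for a width x height frame (unverified).
    YuvImage yuvImage = new YuvImage(bytes, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream jpegStream = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, width, height), 80, jpegStream);
    // imdecode understands JPEG bytes (unlike a raw H264 buffer), so this should give a BGR Mat
    Mat frame = Imgcodecs.imdecode(new MatOfByte(jpegStream.toByteArray()), Imgcodecs.IMREAD_COLOR);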

    VI. At what stage should I resize the frames to speed up calculations?
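
    My instinct is to resize as soon as a decoded Mat exists, before any color filtering or contour work, along these lines (just a sketch; frame is whatever decoded Mat I end up with):

    // Sketch: downscale the decoded frame before any per-pixel work to cut processing time.
    Mat small = new Mat();
    Imgproc.resize(frame, small, new Size(), 0.5, 0.5, Imgproc.INTER_AREA); // half size in each dimension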

  • Separate server for video encoding? [on hold]

    25 December 2018, by Owenimus

    I’m making a website that will handle video upload and encoding. My idea was to have the main server handle both client requests and video processing. But from my understanding, video encoding is CPU intensive, so I’m not sure whether it’s a good idea to have one server do all the work or to have a separate server for the processing. I want to future-proof myself a bit in case I ever get high volumes of traffic, which would add more processing work for the server.

    So my question: is it overkill these days to have a separate server for video encoding, or am I going about this all wrong?

    P.S. I’m using Node.js.

  • avformat/dashenc : Refactored HLS media playlist related code

    12 December 2018, by kjeyapal@akamai.com

    Made it as a separate function, so that it could be reused (in future)

    • [DH] libavformat/dashenc.c