
Other articles (50)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Farm management

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to regulate the needs of the different channels.
    Initially it uses the "Gestion de mutualisation" plugin

On other sites (7182)

  • How to analyse 404 pages

    1 July 2019, by Matomo Core Team — Development, Plugins

    How to analyse “not found” pages (404) in digital analytics

    Have you ever sent out a newsletter with one link that wasn’t active yet? Would you like to know how many users are affected when this happens? Would you like to know whether your visitors are encountering 404 pages?

    In this article we describe an easy way to analyse “not found” pages on your website with Matomo, to improve your visitors’ user experience, your user acquisition, and your SEO (search engine optimization).

    How to know the number of 404s on my website?

    There are different ways to get this information, and depending on how your website is built, you may or may not already be collecting this data.

    The easiest way to answer this question is to fire a 404 on your website, which you do by accessing a wrong URL:

    [screenshot: a deliberately wrong URL leading to a 404 page]

    As you can see here, in our case the page title starts with “Page non trouvée”, which is French for “Page not found” (the website we are considering here is in French):

    [screenshot: the Page Titles report showing the “Page non trouvée” entry]

    In this example 19 page views were fired, with a bounce rate of 67%: as a result, ⅔ of the visits ended there.

    In some cases, the information related to a “not found” page can be found either within the title or within the URL, as some websites redirect you to a specific web page when a page can’t be found.

    If you can’t identify “not found” pages via a page title or a page URL, we strongly advise you to use the specific tracking-code method on your 404 page described in “How to track error pages in Matomo?”

    You can easily set it up with Matomo Tag Manager using a custom HTML tag (a sketch of such a tag follows at the end of this section):

    [screenshot: custom HTML tag in Matomo Tag Manager]

    where the trigger is the following:

    [screenshot: trigger configuration for the 404 tag]

    You will, however, have to define this trigger as an exclusion for all the other tags which may conflict with it (below is the new trigger defined for the generic Matomo tags we insert on all pages):

    [screenshot: exclusion trigger for the generic Matomo tags]

    Once this specific tracking is set, you will be able to track the source of each 404, and all the “not found” pages will be gathered in a specific group within your Page Title report:

    [screenshot: “not found” pages grouped in the Page Title report]

    Here, for example, you can identify that the homepage of this website had a link pointing to a 404; in our case it was https://www.webassoc.org/pro-du-web.

    Note that this is just one technique. You could also create a custom dimension report and send the 404s there instead.
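
    For reference, the custom HTML tag mentioned above essentially overrides the document title before firing the tracking call. Below is a minimal sketch (not the article’s exact code), modelled on the “404/URL = …/From = …” title format used by the alerts later in this article; adapt it to your own site:

       <script>
       // Sketch: rewrite the page title so every 404 lands in one group,
       // recording both the missing URL and the referring page ("From = ...").
       _paq.push(['setDocumentTitle',
           '404/URL = ' + document.location.pathname + document.location.search
           + '/From = ' + document.referrer]);
       _paq.push(['trackPageView']);
       </script>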

    How to get notified when a 404 page is visited?

    Trust us, you’re not going to check every day whether a 404 page has been visited. To avoid checking manually, you can define custom alerts.

    There are three possible scenarios in which “not found” pages can be fired:

    • internal 404: a link within your website points to a wrong URL on the same website.
    • external 404: someone on an external website made an incorrect link to yours.
    • direct access 404: someone directly accesses a page that does not exist on your website.

    You can define alerts for all three within Matomo, but you will only need to focus on the first two: you can’t really fix the third scenario, so alerting on it would only produce irrelevant alerts.

    Custom alert for internal 404

    An internal 404 is a 404 whose source is a page on the same website. As a result, it will look like the following in your report:

    [screenshot: an internal 404 entry in the Page Title report]

    In this example, since we’re using the specific custom implementation above, the title of the page will contain “From = https://www.webassoc.org/”, so we set our custom alert accordingly:

    [screenshot: custom alert configuration for internal 404s]

    Now, every time a 404 page is fired from an internal page, you’ll be notified by email.

    Note that you can also decide not to receive any email and instead track the evolution of alerts with the History of triggered alerts feature.

    Custom alert for external 404

    The external 404 alert is almost the same setup. The only thing to keep in mind is that we want to exclude 404s where the source is not indicated. As a result, your configuration will look like the following:

    [screenshot: custom alert configuration for external 404s]

    Here your regular expression pattern is the following:

    404/URL = .*From = (?!https://www.webassoc.org)[^\s]+

    as you’ll want to match any referrer coming from a website other than your own: the negative lookahead (?!https://www.webassoc.org) rejects your own domain, and [^\s]+ requires a non-empty referrer, which rules out direct 404s.
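
    To make the pattern concrete, here is how it behaves on a few hypothetical page titles (invented for illustration, following the tracking format above):

       404/URL = /old-page/From = https://other-site.example/post   -> match (external referrer)
       404/URL = /old-page/From = https://www.webassoc.org/team     -> no match (internal, rejected by the lookahead)
       404/URL = /old-page/From =                                   -> no match (direct access, no referrer for [^\s]+)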


    You can now be notified every time a 404 is fired from an external link.

    Note that this configuration may differ slightly from website to website, so always double-check your tracking code and the way the values are sent to your reports. Also try triggering those alerts before relying on them.

    How to follow the evolution of your 404s over time?

    It may be interesting to know how well or how badly you are performing in terms of 404s.

    To check this information, you can click on the evolution icon near the 404 title:

    [screenshot: evolution icon next to the 404 group in the report]

    But you may want to access this information more regularly without having to recreate this report each time.

    So, one way to analyse the evolution of your 404s is to create a segment such as:

    [screenshot: segment definition for 404 pages]

    and then to click on the evolution icon:

    [screenshot: evolution view of the segmented report]

    As you can see below, the number of “not found” pages is quite low in general, but we can also notice an increase in 404s on May 27. It may be interesting to investigate it:

    [screenshot: evolution graph showing a spike on May 27]

    You can start from the overview of referrers:

    [screenshot: Referrers overview]

    As you can notice here, the main source of 404s is direct entries, which is the most difficult channel to analyse, as we don’t really know where those visitors are coming from.

    How to perform your analysis even faster?

    As you can see, analysing reports in Matomo to detect 404 pages is a time-consuming activity. To make it faster, you can create a report within the Email reports feature with the following settings:

    • Segment: 404
    • Email schedule: never
    • Selected reports: Visits summary and Page titles

    You will then end up with a saved report listing all the URLs concerned:

    [screenshot: saved email report listing the 404 URLs]

    You can also have a look at the “Custom reports” premium feature. It will provide you with more flexibility, letting you focus on the most important thing: the cause of each 404.

    Good luck and happy analytics !

  • How to Read DJI H264 FPV Feed as OpenCV Mat Object?

    29 May 2019, by Walter Morawa

    TL;DR: All DJI developers would benefit from being able to decode the raw H264 video stream byte arrays to a format compatible with OpenCV.

    I’ve spent a lot of time looking for a solution to reading DJI’s FPV feed as an OpenCV Mat object. I am probably overlooking something fundamental, since I am not too familiar with image encoding/decoding.

    Future developers who come across this will likely run into many of the same issues I did. It would be great if DJI developers could use OpenCV directly without needing a third-party library.

    I’m willing to use FFmpeg or JavaCV if necessary, but that’s quite a hurdle for most Android developers, as we’d have to deal with C++, the NDK, terminal testing, etc. That seems like overkill, and both options seem quite time-consuming. This JavaCV H264 conversion seems unnecessarily complex. I found it via this relevant question.

    I believe the issue lies in the fact that we need to decode both the byte array of length 6 (info array) and the byte array with current frame info simultaneously.

    Basically, DJI’s FPV feed comes in a number of formats.

    1. Raw H264 (MPEG4) in VideoFeeder.VideoDataListener
       // The callback for receiving the raw H264 video data for camera live view
       mReceivedVideoDataListener = new VideoFeeder.VideoDataListener() {
           @Override
           public void onReceive(byte[] videoBuffer, int size) {
               //Log.d("BytesReceived", Integer.toString(videoStreamFrameNumber));
               if (videoStreamFrameNumber++%30 == 0){
                   //convert video buffer to opencv array
                   OpenCvAndModelAsync openCvAndModelAsync = new OpenCvAndModelAsync();
                   openCvAndModelAsync.execute(videoBuffer);
               }
               if (mCodecManager != null) {
                   mCodecManager.sendDataToDecoder(videoBuffer, size);
               }
           }
       };
    2. DJI also has its own Android decoder sample that uses FFmpeg to convert to YUV format.
       @Override
       public void onYuvDataReceived(final ByteBuffer yuvFrame, int dataSize, final int width, final int height) {
           //In this demo, we test the YUV data by saving it into JPG files.
           //DJILog.d(TAG, "onYuvDataReceived " + dataSize);
           if (count++ % 30 == 0 && yuvFrame != null) {
               final byte[] bytes = new byte[dataSize];
               yuvFrame.get(bytes);
               AsyncTask.execute(new Runnable() {
                   @Override
                   public void run() {
                       if (bytes.length >= width * height) {
                           Log.d("MatWidth", "Made it");
                           YuvImage yuvImage = saveYuvDataToJPEG(bytes, width, height);
                           Bitmap rgbYuvConvert = convertYuvImageToRgb(yuvImage, width, height);

                           Mat yuvMat = new Mat(height, width, CvType.CV_8UC1);
                           yuvMat.put(0, 0, bytes);
                           //OpenCv Stuff
                       }
                   }
               });
           }
       }

    Edit: For those who want to see DJI’s YUV-to-JPEG function, here it is from the sample application:

    private YuvImage saveYuvDataToJPEG(byte[] yuvFrame, int width, int height){
           byte[] y = new byte[width * height];
           byte[] u = new byte[width * height / 4];
           byte[] v = new byte[width * height / 4];
           byte[] nu = new byte[width * height / 4]; //
           byte[] nv = new byte[width * height / 4];

           System.arraycopy(yuvFrame, 0, y, 0, y.length);
           Log.d("MatY", y.toString());
           for (int i = 0; i < u.length; i++) {
               v[i] = yuvFrame[y.length + 2 * i];
               u[i] = yuvFrame[y.length + 2 * i + 1];
           }
           int uvWidth = width / 2;
           int uvHeight = height / 2;
           for (int j = 0; j < uvWidth / 2; j++) {
               for (int i = 0; i < uvHeight / 2; i++) {
                   byte uSample1 = u[i * uvWidth + j];
                   byte uSample2 = u[i * uvWidth + j + uvWidth / 2];
                   byte vSample1 = v[(i + uvHeight / 2) * uvWidth + j];
                   byte vSample2 = v[(i + uvHeight / 2) * uvWidth + j + uvWidth / 2];
                   nu[2 * (i * uvWidth + j)] = uSample1;
                   nu[2 * (i * uvWidth + j) + 1] = uSample1;
                   nu[2 * (i * uvWidth + j) + uvWidth] = uSample2;
                   nu[2 * (i * uvWidth + j) + 1 + uvWidth] = uSample2;
                   nv[2 * (i * uvWidth + j)] = vSample1;
                   nv[2 * (i * uvWidth + j) + 1] = vSample1;
                   nv[2 * (i * uvWidth + j) + uvWidth] = vSample2;
                   nv[2 * (i * uvWidth + j) + 1 + uvWidth] = vSample2;
               }
           }
           //nv21test
           byte[] bytes = new byte[yuvFrame.length];
           System.arraycopy(y, 0, bytes, 0, y.length);
           for (int i = 0; i < u.length; i++) {
               bytes[y.length + (i * 2)] = nv[i];
               bytes[y.length + (i * 2) + 1] = nu[i];
           }
           Log.d(TAG,
                 "onYuvDataReceived: frame index: "
                     + DJIVideoStreamDecoder.getInstance().frameIndex
                     + ",array length: "
                     + bytes.length);
           YuvImage yuver = screenShot(bytes,Environment.getExternalStorageDirectory() + "/DJI_ScreenShot", width, height);
           return yuver;
       }

       /**
        * Save the buffered data into a JPG image file
        */
       private YuvImage screenShot(byte[] buf, String shotDir, int width, int height) {
           File dir = new File(shotDir);
           if (!dir.exists() || !dir.isDirectory()) {
               dir.mkdirs();
           }
           YuvImage yuvImage = new YuvImage(buf,
                   ImageFormat.NV21,
                   width,
                   height,
                   null);

           OutputStream outputFile = null;

           final String path = dir + "/ScreenShot_" + System.currentTimeMillis() + ".jpg";

           try {
               outputFile = new FileOutputStream(new File(path));
           } catch (FileNotFoundException e) {
               Log.e(TAG, "test screenShot: new bitmap output file error: " + e);
               //return;
           }
           if (outputFile != null) {
               yuvImage.compressToJpeg(new Rect(0,
                       0,
                       width,
                       height), 100, outputFile);
           }
        try {
            // outputFile stays null if the FileNotFoundException above was hit
            if (outputFile != null) {
                outputFile.close();
            }
        } catch (IOException e) {
            Log.e(TAG, "test screenShot: compress yuv image error: " + e);
            e.printStackTrace();
        }

           runOnUiThread(new Runnable() {
               @Override
               public void run() {
                   displayPath(path);
               }
           });
           return yuvImage;
       }
    3. DJI also appears to have a "getRgbaData" function, but there is literally not a single example online, even from DJI. Google "DJI getRgbaData" and you’ll find only the API reference, which explains the self-explanatory parameters and return values but nothing else. I couldn’t figure out where to call it, and there doesn’t appear to be a callback function as there is for YUV. You can’t call it on the H264 byte array directly, but perhaps you can get it from the YUV data.

    Option 1 is much preferable to option 2, since the YUV output has quality issues. Option 3 would also likely involve a decoder.

    Here’s a screenshot that DJI’s own YUV conversion produces: [screenshot: WalletPhoneYuv]

    I’ve looked at a bunch of things about how to improve the YUV, remove green and yellow colors and whatnot, but at this point if DJI can’t do it right, I don’t want to invest resources there.

    Regarding Option 1, I know there’s FFMPEG and JavaCV that seem like good options if I have to go the video decoding route.

    Moreover, from what I understand, OpenCV can’t read or write video files without FFmpeg; but I’m not trying to read a video file, I’m trying to read an H264/MPEG4 byte[] array. The following code seems to get positive results.

       /* Async OpenCV Code */
       // Typed as AsyncTask<Params, Progress, Result> so doInBackground(byte[]...) actually overrides
       private class OpenCvAndModelAsync extends AsyncTask<byte[], Void, double[]> {
           @Override
           protected double[] doInBackground(byte[]... params) {//Background Code Executing. Don't touch any UI components
               //get fpv feed and convert bytes to mat array
               Mat videoBufMat = new Mat(4, params[0].length, CvType.CV_8UC4);
               videoBufMat.put(0,0, params[0]);
               //if I add this in it says the bytes are empty.
               //Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_ANYCOLOR);
               //encodeVideoBuf.release();
               Log.d("MatRgba", videoBufMat.toString());
               for (int i = 0; i< videoBufMat.rows(); i++){
                   for (int j=0; j< videoBufMat.cols(); j++){
                       double[] rgb = videoBufMat.get(i, j);
                       Log.i("Matrix", "red: "+rgb[0]+" green: "+rgb[1]+" blue: "+rgb[2]+" alpha: "
                               + rgb[3] + " Length: " + rgb.length + " Rows: "
                               + videoBufMat.rows() + " Columns: " + videoBufMat.cols());
                   }
               }
               double[] center = openCVThingy(videoBufMat);
               return center;
           }
           protected void onPostExecute(double[] center) {
               //handle ui or another async task if necessary
           }
       }

    Rows = 4, Columns > 30k. I get lots of RGB values that seem valid, such as red = 113, green = 75, blue = 90, alpha = 220 (a made-up example); however, I also get a ton of 0,0,0,0 values. That should be somewhat okay, since black is 0,0,0 (although I would have expected a higher alpha) and I have a black object in my image. I also don’t seem to get any white values (255, 255, 255), even though there is plenty of white area. I’m not logging the entire buffer, so they could be there, but I have yet to see them.

    However, when I try to compute the contours from this image, I almost always get that the moments (center x, y) are exactly in the center of the image. This error has nothing to do with my color filter or contours algorithm, as I wrote a script in python and tested that I implemented it correctly in Android by reading a still image and getting the exact same number of contours, position, etc in both Python and Android.

    I noticed it has something to do with the videoBuffer byte size (bonus points if you can explain why every other length is 6):

    2019-05-23 21:14:29.601 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2425
    2019-05-23 21:14:29.802 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2659
    2019-05-23 21:14:30.004 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:30.263 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6015
    2019-05-23 21:14:30.507 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:30.766 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4682
    2019-05-23 21:14:31.005 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:31.234 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2840
    2019-05-23 21:14:31.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4482
    2019-05-23 21:14:31.664 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:31.927 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4768
    2019-05-23 21:14:32.174 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:32.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4700
    2019-05-23 21:14:32.668 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:32.864 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4740
    2019-05-23 21:14:33.102 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:33.365 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4640

    My questions:

    I. Is this the correct way to read an H264 byte array as a Mat?
    Assuming the format is RGBA, that means rows = 4, columns = byte[].length, and CvType.CV_8UC4. Do I have height and width correct? Something tells me the YUV height and width are off. I was getting some meaningful results, but the contours were exactly in the center, just as with the H264.

    II. Does OpenCV handle MP4 in Android like this? If not, do we need to use FFmpeg or JavaCV?

    III. Does the int size have something to do with it? Why is the int size occasionally 6, and other times 2400 to 6000? I’ve heard about the difference between this frame’s information and information about the next frame, but I’m simply not knowledgeable enough to know how to apply that here.

    I’m starting to think this is where the issue lies. Since I need the 6-byte array for info about the next frame, perhaps my modulo 30 is incorrect. So should I pass the 29th or 31st frame as a format byte for each frame? How is that done in OpenCV, or are we doomed to the complicated FFmpeg route? How would I go about joining the neighboring frames/byte arrays?

    IV. Can I fix this using Imgcodecs? I was hoping OpenCV would natively handle whether a frame carried this frame’s colors or info about the next frame. I added the code below, but I am getting an empty array:

    Mat videoBufMat = Imgcodecs.imdecode(new MatOfByte(params[0]), Imgcodecs.IMREAD_UNCHANGED);

    This is also empty:

    Mat encodeVideoBuf = new Mat(4, params[0].length, CvType.CV_8UC4);
    encodeVideoBuf.put(0,0, params[0]);
    Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_UNCHANGED);

    V. Should I try converting the bytes into an Android JPEG and then importing it? Why does DJI’s YUV decoder look so complicated? It makes me wary of trying FFmpeg or JavaCV and inclined to just stick with the Android decoder or an OpenCV decoder.

    VI. At what stage should I resize the frames to speed up calculations?
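
    One direction worth sketching for questions II and III: Android’s own MediaCodec can decode H264 without FFmpeg or JavaCV. The sketch below is an assumption, not a verified DJI recipe; it presumes API 21+ and that the onReceive() buffers, including the 6-byte arrays (which look like small NAL units), can be queued as received. Width/height are placeholders; imports are android.media.MediaCodec, android.media.MediaFormat and java.nio.ByteBuffer.

       // Create and start an H264 (AVC) decoder; createDecoderByType throws IOException.
       MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
       decoder.configure(MediaFormat.createVideoFormat("video/avc", 1280, 720),
               null, null, 0);
       decoder.start();

       // Inside onReceive(byte[] videoBuffer, int size):
       int inIndex = decoder.dequeueInputBuffer(10000); // timeout in microseconds
       if (inIndex >= 0) {
           ByteBuffer input = decoder.getInputBuffer(inIndex);
           input.put(videoBuffer, 0, size);
           decoder.queueInputBuffer(inIndex, 0, size, System.nanoTime() / 1000, 0);
       }

       // Drain decoded YUV frames:
       MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
       int outIndex = decoder.dequeueOutputBuffer(info, 10000);
       if (outIndex >= 0) {
           ByteBuffer output = decoder.getOutputBuffer(outIndex);
           byte[] yuv = new byte[info.size];
           output.get(yuv);
           // Build a Mat from the YUV bytes and convert with Imgproc.cvtColor(...,
           // e.g. Imgproc.COLOR_YUV2BGR_NV21), matching the color format the
           // decoder reports via decoder.getOutputFormat().
           decoder.releaseOutputBuffer(outIndex, false);
       }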

    Edit: DJI support got back to me and confirmed they don’t have any samples for doing what I’ve described. This is a time for us, the community, to make this available for everyone!

    Upon further research, I don’t think OpenCV will be able to handle this, as OpenCV’s Android SDK has no functionality for video files/URLs (apart from a homegrown MJPEG codec).

    So is there a way in Android to convert to MJPEG or similar in order to read it? In my application I only need 1 or 2 frames per second, so perhaps I can save each frame as a JPEG.
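
    If one or two frames per second really is enough, one low-effort route (a sketch, reusing the NV21 bytes, width and height from onYuvDataReceived above) is to compress the frame to an in-memory JPEG and let OpenCV’s Imgcodecs do the decoding:

       // Sketch: NV21 frame -> in-memory JPEG -> OpenCV Mat.
       // imdecode works here because a JPEG, unlike a lone H264 NAL unit,
       // is a complete, self-contained image.
       YuvImage yuvImage = new YuvImage(bytes, ImageFormat.NV21, width, height, null);
       ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
       yuvImage.compressToJpeg(new Rect(0, 0, width, height), 90, jpeg); // 90: arbitrary quality
       Mat frame = Imgcodecs.imdecode(new MatOfByte(jpeg.toByteArray()),
               Imgcodecs.IMREAD_COLOR);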

    But for real-time applications we will likely need to write our own decoder. Please help so that we can make this available to everyone! This question seems promising:

  • How to Read DJI FPV Feed as OpenCV Object?

    24 May 2019, by Walter Morawa

    I’ve officially spent a lot of time looking for a solution to reading DJI’s FPV feed as an OpenCV Mat object. I am probably overlooking something simple, since I am not too familiar with image encoding/decoding.

    I apologize if I am missing something very basic, but I know I’m not the first person to have issues getting DJI’s FPV feed, and answering this question, especially if option 1 is possible, would be extremely valuable to many developers. Please consider upvoting this question, as I’ve thoroughly researched this issue and future developers who come across it will likely run into many of the same issues I did.

    I’m willing to use FFmpeg or JavaCV if necessary, but that’s quite a hurdle for most Android developers, as we’d have to deal with C++, the NDK, terminal testing, etc. That seems like overkill.

    I believe the issue lies in the fact that we need to decode both the byte array of length 6 (info array) and the byte array with current frame info simultaneously. Thanks in advance for your time.

    Basically, DJI’s FPV feed comes in a number of formats.

    1. Raw H264 (MPEG4) in VideoFeeder.VideoDataListener
       // The callback for receiving the raw H264 video data for camera live view
       mReceivedVideoDataListener = new VideoFeeder.VideoDataListener() {
           @Override
           public void onReceive(byte[] videoBuffer, int size) {
               //Log.d("BytesReceived", Integer.toString(videoStreamFrameNumber));
               if (videoStreamFrameNumber++%30 == 0){
                   //convert video buffer to opencv array
                   OpenCvAndModelAsync openCvAndModelAsync = new OpenCvAndModelAsync();
                   openCvAndModelAsync.execute(videoBuffer);
               }
               if (mCodecManager != null) {
                   mCodecManager.sendDataToDecoder(videoBuffer, size);
               }
           }
       };
    2. DJI also has its own Android decoder sample that uses FFmpeg to convert to YUV format.
       @Override
       public void onYuvDataReceived(final ByteBuffer yuvFrame, int dataSize, final int width, final int height) {
           //In this demo, we test the YUV data by saving it into JPG files.
           //DJILog.d(TAG, "onYuvDataReceived " + dataSize);
           if (count++ % 30 == 0 && yuvFrame != null) {
               final byte[] bytes = new byte[dataSize];
               yuvFrame.get(bytes);
               AsyncTask.execute(new Runnable() {
                   @Override
                   public void run() {
                       if (bytes.length >= width * height) {
                           Log.d("MatWidth", "Made it");
                           YuvImage yuvImage = saveYuvDataToJPEG(bytes, width, height);
                           Bitmap rgbYuvConvert = convertYuvImageToRgb(yuvImage, width, height);

                           Mat yuvMat = new Mat(height, width, CvType.CV_8UC1);
                           yuvMat.put(0, 0, bytes);
                           //OpenCv Stuff
                       }
                   }
               });
           }
       }
    3. DJI also appears to have a "getRgbaData" function, but there is literally not a single example online, even from DJI. Google "DJI getRgbaData" and you’ll find only the API reference, which explains the self-explanatory parameters and return values but nothing else. I couldn’t figure out where to call it, and there doesn’t appear to be a callback function as there is for YUV. You can’t call it on the H264 byte array directly, but perhaps you can get it from the YUV data.

    Option 1 is much preferable to option 2, since the YUV output has quality issues. Option 3 would also likely involve a decoder.

    Here’s a screenshot that DJI’s own YUV conversion produces: [screenshot: WalletPhoneYuv]

    I’ve looked at a bunch of things about how to improve the YUV, remove green and yellow colors and whatnot, but at this point if DJI can’t do it right, I don’t want to invest resources there.

    Regarding Option 1, I know FFmpeg and JavaCV seem like good options if I have to go the video-decoding route. However, both options seem quite time-consuming. This JavaCV H264 conversion seems unnecessarily complex. I found it via this relevant question.

    Moreover, from what I understand, OpenCV can’t read or write video files without FFmpeg; but I’m not trying to read a video file, I’m trying to read an H264/MPEG4 byte[] array. The following code seems to get positive results.

       /* Async OpenCV Code */
       // Typed as AsyncTask<Params, Progress, Result> so doInBackground(byte[]...) actually overrides
       private class OpenCvAndModelAsync extends AsyncTask<byte[], Void, double[]> {
           @Override
           protected double[] doInBackground(byte[]... params) {//Background Code Executing. Don't touch any UI components
               //get fpv feed and convert bytes to mat array
               Mat videoBufMat = new Mat(4, params[0].length, CvType.CV_8UC4);
               videoBufMat.put(0,0, params[0]);
               //if I add this in it says the bytes are empty.
               //Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_ANYCOLOR);
               //encodeVideoBuf.release();
               Log.d("MatRgba", videoBufMat.toString());
               for (int i = 0; i< videoBufMat.rows(); i++){
                   for (int j=0; j< videoBufMat.cols(); j++){
                       double[] rgb = videoBufMat.get(i, j);
                       Log.i("Matrix", "red: "+rgb[0]+" green: "+rgb[1]+" blue: "+rgb[2]+" alpha: "
                               + rgb[3] + " Length: " + rgb.length + " Rows: "
                               + videoBufMat.rows() + " Columns: " + videoBufMat.cols());
                   }
               }
               double[] center = openCVThingy(videoBufMat);
               return center;
           }
           protected void onPostExecute(double[] center) {
               //handle ui or another async task if necessary
           }
       }

    Rows = 4, Columns > 30k. I get lots of RGB values that seem valid, such as red = 113, green = 75, blue = 90, alpha = 220 (a made-up example); however, I also get a ton of 0,0,0,0 values. That should be somewhat okay, since black is 0,0,0 (although I would have expected a higher alpha) and I have a black object in my image.

    However, when I try to compute the contours from this image, I almost always get that the moments (center x, y) are exactly in the center of the image. This error has nothing to do with my color filter or contours algorithm, as I wrote a script in python and tested that I implemented it correctly in Android by reading a still image and getting the exact same number of contours, position, etc in both Python and Android.

    I noticed it has something to do with the videoBuffer byte size (bonus points if you can explain why every other length is 6!):

    2019-05-23 21:14:29.601 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2425
    2019-05-23 21:14:29.802 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2659
    2019-05-23 21:14:30.004 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:30.263 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6015
    2019-05-23 21:14:30.507 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:30.766 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4682
    2019-05-23 21:14:31.005 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:31.234 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2840
    2019-05-23 21:14:31.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4482
    2019-05-23 21:14:31.664 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:31.927 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4768
    2019-05-23 21:14:32.174 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:32.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4700
    2019-05-23 21:14:32.668 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:32.864 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4740
    2019-05-23 21:14:33.102 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:33.365 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4640

    My questions:

    I. Is this the correct way to read an H264 byte array as a Mat?
    Assuming the format is RGBA, that means rows = 4, columns = byte[].length, and CvType.CV_8UC4. Do I have height and width correct? Something tells me the YUV height and width are off. I was getting some meaningful results, but the contours were exactly in the center, just as with the H264.

    II. Does OpenCV handle MP4 in Android like this? If not, do I need to use FFmpeg or JavaCV?

    III. Does the int size have something to do with it? Why is the int size occasionally 6, and other times 2400 to 6000? I’ve heard about the difference between this frame’s information and information about the next frame, but I’m simply not knowledgeable enough to know how to apply that here.
    I’m starting to think this is where the issue lies. Since I need the 6-byte array for info about the next frame, perhaps my modulo 30 is incorrect. So should I pass the 29th or 31st frame as a format byte for each frame? How is that done in OpenCV, or are we doomed to the complicated FFmpeg route?

    IV. Can I fix this using Imgcodecs? I was hoping OpenCV would natively handle whether a frame carried this frame’s colors or info about the next frame. I added the code below, but I am getting an empty array:

    Mat videoBufMat = Imgcodecs.imdecode(new MatOfByte(params[0]), Imgcodecs.IMREAD_UNCHANGED);

    This is also empty:

    Mat encodeVideoBuf = new Mat(4, params[0].length, CvType.CV_8UC4);
    encodeVideoBuf.put(0,0, params[0]);
    Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_UNCHANGED);

    V. Should I try converting the bytes into an Android JPEG and then importing it? Why does DJI’s YUV decoder look so complicated? It makes me wary of trying FFmpeg or JavaCV and inclined to just stick with the Android decoder or an OpenCV decoder.

    VI. At what stage should I resize the frames to speed up calculations?