
Other articles (96)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    Explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.3. What's new?
    Regarding software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customizing by adding your logo, banner or background image

    5 September 2013, by

    Some themes take into account three customization elements: adding a logo; adding a banner; adding a background image.

On other sites (6435)

  • FFMPEG fails in node exec but succeeds on OSX

    10 April 2015, by rrrkren

    So I'm trying to get a thumbnail of a .mov video using ffmpeg.
    Here is the command:

    ffmpeg -i video.mov -vf scale=-1:100 -r 1 -an -vframes 1 -f mjpeg thumb.jpg

    It works fine when I type it in the terminal, but once I run it from JavaScript (Node):
    (thumbPath, destPath, and thumbname are all defined earlier, and I doubt they are the problem)

    var command = "ffmpeg -i " + destPath + " -vf scale=-1:100 -ss 00:01 -r 1 -an -vframes 1 -f mjpeg " + thumbPath + thumbname;
    exec(command, function (err) {
        if (err) {
            console.log(err);
        }
    });

    The console logs:

    { [Error: Command failed: ffmpeg version 2.4.1-tessus Copyright (c) 2000-2014 the FFmpeg developers
     built on Sep 22 2014 23:16:01 with Apple LLVM version 6.0 (clang-600.0.51) (based on LLVM 3.5svn)
     configuration: --cc=/usr/bin/clang --prefix=/Users/tessus/data/ext/ffmpeg/sw --as=yasm --extra-version=tessus --disable-shared --enable-static --disable-ffplay --enable-gpl --enable-pthreads --enable-postproc --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libx265 --enable-libxvid --enable-libspeex --enable-bzlib --enable-zlib --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libxavs --enable-libsoxr --enable-libwavpack --enable-version3 --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvpx --enable-libgsm --enable-libopus --enable-libmodplug --enable-fontconfig --enable-libfreetype --enable-libass --enable-libbluray --enable-filters --disable-indev=qtkit --enable-runtime-cpudetect
     libavutil      54.  7.100 / 54.  7.100
     libavcodec     56.  1.100 / 56.  1.100
     libavformat    56.  4.101 / 56.  4.101
     libavdevice    56.  0.100 / 56.  0.100
     libavfilter     5.  1.100 /  5.  1.100
     libswscale      3.  0.100 /  3.  0.100
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  0.100 / 53.  0.100
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fe29c817000] moov atom not found
    video.mov: Invalid data found when processing input
    ] killed: false, code: 1, signal: null }

    I've looked it up online, and apparently the "moov atom not found" error is caused by the video being corrupted. But the command works fine when I type it in the terminal. What's wrong with my code?

    Edit: This code works for mp4 videos, and the mov video was from an iPhone. I tried using a .mov file downloaded from elsewhere and it works. It seems to be an issue with .mov files shot on an iPhone?
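    As an aside on the string-built command above: exec passes the whole string to a shell, which re-parses it, so a destPath containing spaces or shell metacharacters gets split apart even though the same command works when quoted in a terminal. A minimal sketch of the pitfall, in Python purely as an illustration (the path is made up); in Node the equivalent fix is execFile("ffmpeg", [args]), which takes an argument list:

    ```python
    import shlex

    # Hypothetical path containing a space: a shell re-splits the concatenated
    # command string on whitespace, so the path arrives as two arguments.
    dest_path = "/tmp/my clip.mov"
    cmd_string = "ffmpeg -i " + dest_path + " -vf scale=-1:100 thumb.jpg"
    parts = shlex.split(cmd_string)
    print(parts[2], parts[3])  # '/tmp/my' 'clip.mov' -- the path fell apart

    # Passing an argument list bypasses shell parsing, so the path stays
    # intact, just as it does when quoted in an interactive terminal.
    cmd_list = ["ffmpeg", "-i", dest_path, "-vf", "scale=-1:100", "thumb.jpg"]
    print(cmd_list[2])         # '/tmp/my clip.mov'
    ```

    This doesn't explain the iPhone-specific failure, but it is a common reason a command behaves differently under exec than in a terminal.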

  • HTML5 video player doesn't work on iPad/iPhone

    16 April 2015, by Avrohom Yisroel

    NOTE This turned out to be a simulator problem, not a video encoding problem, see my edit lower down...

    I’m creating a web site for a local college, and they want to be able to add short videos that people can view online. I’ve spent quite a bit of time trying to work out how to get the videos to play on iDevices, but have failed.

    I’m using Video.js (http://www.videojs.com), and have HTML that looks like this...

    <video class="video-js vjs-default-skin" controls="controls" preload="auto" width="640" height="352" poster="/Content/Images/logobg.png" data-setup="{}">
      <source src="/Content/Videos/video.m4v" type="video/mp4">
      <source src="/Content/Videos/video.webm" type="video/webm">
      <p class="vjs-no-js">
        To view this video please enable JavaScript, and consider upgrading to a web browser that <a href="http://videojs.com/html5-video-support/" target="_blank">supports HTML5 video</a>
      </p>
    </video>

    This works fine on desktop browsers, where it uses the m4v file. However, if I load the page on an iDevice, the video player says "no compatible source was found for this video", which sounds like it doesn’t like either the m4v or the webm file.

    I created the webm file using instructions found at http://daniemon.com/blog/how-to-convert-videos-to-webm-with-ffmpeg/. I tried creating a .mov file using the accepted answer at iPad Doesn’t Render H.264 Video with HTML5, but this gave the same error.

    Does anyone have any ideas on how I can support iDevices? Please don't blind me with science; I'm a real newbie with all this video stuff and need simple instructions!

    Edit: The problem occurred when viewing the site in a mobile simulator. When I uploaded the site to a real server and tried it on a real iPad, it worked fine. So, if anyone is having a similar problem, first use something like Handbrake to encode the videos, as it seems to do that fine, then make sure you're testing on a real mobile device, not a simulator!

  • How to Read DJI FPV Feed as OpenCV Object ?

    24 May 2019, by Walter Morawa

    I’ve officially spent a lot of time looking for a solution to reading DJI’s FPV feed as an OpenCV Mat object. I am probably overlooking something simple, since I am not too familiar with Image Encoding/Decoding.

    I apologize if I am missing something very basic, but I know I'm not the first person to have trouble getting DJI's FPV feed, and answering this question, especially if option 1 is possible, would be extremely valuable to many developers. Please consider upvoting this question: I've thoroughly researched this issue, and future developers who come across it will likely run into many of the same problems I did.

    I'm willing to use FFmpeg or JavaCV if necessary, but that's quite a hurdle for most Android developers, as we'd have to use C++, the NDK, the terminal for testing, etc. That seems like overkill.

    I believe the issue lies in the fact that we need to decode both the byte array of length 6 (info array) and the byte array with current frame info simultaneously. Thanks in advance for your time.

    Basically, DJI’s FPV feed comes in a number of formats.

    1. Raw H264 (MPEG4) in VideoFeeder.VideoDataListener
       // The callback for receiving the raw H264 video data for camera live view
       mReceivedVideoDataListener = new VideoFeeder.VideoDataListener() {
           @Override
           public void onReceive(byte[] videoBuffer, int size) {
               //Log.d("BytesReceived", Integer.toString(videoStreamFrameNumber));
               if (videoStreamFrameNumber++%30 == 0){
                   //convert video buffer to opencv array
                   OpenCvAndModelAsync openCvAndModelAsync = new OpenCvAndModelAsync();
                   openCvAndModelAsync.execute(videoBuffer);
               }
               if (mCodecManager != null) {
                   mCodecManager.sendDataToDecoder(videoBuffer, size);
               }
           }
       };
    2. DJI also has its own Android decoder sample that uses FFmpeg to convert to YUV format.
       @Override
       public void onYuvDataReceived(final ByteBuffer yuvFrame, int dataSize, final int width, final int height) {
           //In this demo, we test the YUV data by saving it into JPG files.
           //DJILog.d(TAG, "onYuvDataReceived " + dataSize);
        if (count++ % 30 == 0 && yuvFrame != null) {
               final byte[] bytes = new byte[dataSize];
               yuvFrame.get(bytes);
               AsyncTask.execute(new Runnable() {
                   @Override
                   public void run() {
                       if (bytes.length >= width * height) {
                           Log.d("MatWidth", "Made it");
                           YuvImage yuvImage = saveYuvDataToJPEG(bytes, width, height);
                           Bitmap rgbYuvConvert = convertYuvImageToRgb(yuvImage, width, height);

                           Mat yuvMat = new Mat(height, width, CvType.CV_8UC1);
                           yuvMat.put(0, 0, bytes);
                           //OpenCv Stuff
                       }
                   }
               });
           }
       }
    3. DJI also appears to have a "getRgbaData" function, but there is literally not a single example of it online or from DJI. Go ahead and Google "DJI getRgbaData": there's only the API documentation, which explains the self-explanatory parameters and return values but nothing else. I couldn't figure out where to call it, and there doesn't appear to be a callback function as there is for YUV. You can't call it on the H264 byte array directly, but perhaps you can get it from the YUV data.

    Option 1 is much preferable to option 2, since the YUV format has quality issues. Option 3 would likely also involve a decoder.
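    One sanity check that bears on the YUV width/height doubts later in this post: a 4:2:0 frame (the layout Android camera and decoder pipelines typically hand back as NV21 or I420) occupies exactly width × height bytes of luma plus half that again of chroma. If a received buffer doesn't match that size for the reported width and height, it isn't a raw 4:2:0 frame of that geometry. A small sketch of the arithmetic (the resolutions are illustrative):

    ```python
    def yuv420_size(width, height):
        """Bytes in one 4:2:0 frame: a full-resolution Y plane plus
        two quarter-resolution chroma planes (1.5 bytes per pixel)."""
        return width * height * 3 // 2

    # Illustrative resolutions: compare against len(bytes) in onYuvDataReceived.
    for w, h in [(1280, 720), (1920, 1080)]:
        print(w, "x", h, "->", yuv420_size(w, h), "bytes")
    ```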

    Here's a screenshot of what DJI's own YUV conversion produces (image: WalletPhoneYuv).

    I’ve looked at a bunch of things about how to improve the YUV, remove green and yellow colors and whatnot, but at this point if DJI can’t do it right, I don’t want to invest resources there.

    Regarding option 1, I know FFmpeg and JavaCV seem like good options if I have to go the video-decoding route. However, both seem quite time-consuming. This JavaCV H264 conversion seems unnecessarily complex; I found it through a related question.

    Moreover, from what I understand, OpenCV can't read or write video files without FFmpeg, but I'm not trying to read a video file; I'm trying to read an H264/MPEG-4 byte[] array. The following code seems to get positive results.

    /* Async OpenCV code */
    private class OpenCvAndModelAsync extends AsyncTask<byte[], Void, double[]> {
        @Override
        protected double[] doInBackground(byte[]... params) { // background thread: don't touch UI components
            // get the FPV feed and convert the bytes to a Mat
            Mat videoBufMat = new Mat(4, params[0].length, CvType.CV_8UC4);
            videoBufMat.put(0, 0, params[0]);
            // if I add this in, it says the bytes are empty:
            // Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_ANYCOLOR);
            // encodeVideoBuf.release();
            Log.d("MatRgba", videoBufMat.toString());
            for (int i = 0; i < videoBufMat.rows(); i++) {
                for (int j = 0; j < videoBufMat.cols(); j++) {
                    double[] rgb = videoBufMat.get(i, j);
                    Log.i("Matrix", "red: " + rgb[0] + " green: " + rgb[1] + " blue: " + rgb[2] + " alpha: "
                            + rgb[3] + " Length: " + rgb.length + " Rows: "
                            + videoBufMat.rows() + " Columns: " + videoBufMat.cols());
                }
            }
            double[] center = openCVThingy(videoBufMat);
            return center;
        }
        @Override
        protected void onPostExecute(double[] center) {
            // handle UI or another async task if necessary
        }
    }

    Rows = 4, columns > 30k. I get lots of RGB values that seem valid, such as red = 113, green = 75, blue = 90, alpha = 220 (as a made-up example); however, I also get a ton of 0,0,0,0 values. That should be somewhat okay, since black is 0,0,0 (although I would have expected the alpha to be higher) and I do have a black object in my image.

    However, when I try to compute the contours from this image, I almost always get moments (center x, y) exactly in the center of the image. This error has nothing to do with my color filter or contour algorithm: I wrote a script in Python and verified that my Android implementation matches it, reading a still image and getting the exact same number of contours, positions, etc., in both Python and Android.

    I noticed it has something to do with the videoBuffer byte size (bonus points if you can explain why every other length is 6!)

    2019-05-23 21:14:29.601 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2425
    2019-05-23 21:14:29.802 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2659
    2019-05-23 21:14:30.004 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:30.263 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6015
    2019-05-23 21:14:30.507 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:30.766 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4682
    2019-05-23 21:14:31.005 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:31.234 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2840
    2019-05-23 21:14:31.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4482
    2019-05-23 21:14:31.664 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:31.927 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4768
    2019-05-23 21:14:32.174 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:32.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4700
    2019-05-23 21:14:32.668 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:32.864 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4740
    2019-05-23 21:14:33.102 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:33.365 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4640
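    Going by the post's own description (a 6-byte info array interleaved with frame data), one way to probe the logged pattern is to skip the 6-byte buffers and check whether the remaining ones begin with an H.264 Annex-B start code (00 00 00 01 or 00 00 01). A hedged sketch over synthetic buffers; the actual DJI layout may differ, and the helper names are hypothetical:

    ```python
    START_CODES = (b"\x00\x00\x00\x01", b"\x00\x00\x01")

    def looks_like_annex_b(buf):
        """True if the buffer begins with an H.264 Annex-B start code."""
        return buf.startswith(START_CODES)

    def frame_buffers(buffers):
        """Drop 6-byte info arrays; keep buffers that look like NAL data."""
        return [b for b in buffers if len(b) != 6 and looks_like_annex_b(b)]

    # Synthetic stand-ins mimicking the logged sizes: frame, info, frame, info.
    stream = [b"\x00\x00\x00\x01" + b"\x65" * 2421,  # ~2425-byte "frame"
              b"\x00" * 6,                           # 6-byte info array
              b"\x00\x00\x00\x01" + b"\x41" * 2655,  # ~2659-byte "frame"
              b"\x00" * 6]
    print(len(frame_buffers(stream)))  # 2
    ```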

    My questions:

    I. Is this the correct way to read an H264 byte array as a Mat?
    Assuming the format is RGBA, that would mean rows = 4, columns = byte[].length, and CvType.CV_8UC4. Do I have the height and width correct? Something tells me the YUV height and width are off. I was getting some meaningful results, but the contours were exactly in the center, just as with the H264.
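    A back-of-the-envelope check on question I: an uncompressed RGBA frame at any plausible FPV resolution needs several megabytes, while the logged buffers are 2-6 KB, which is consistent with compressed H.264 access units rather than raw pixels; a 4 × length CV_8UC4 Mat would then be reinterpreting compressed bits as colors. The arithmetic, with 720p as an assumed resolution:

    ```python
    def rgba_bytes(width, height):
        """Bytes in one uncompressed RGBA frame (4 bytes per pixel)."""
        return width * height * 4

    raw = rgba_bytes(1280, 720)              # assumed 720p feed
    logged = [2425, 2659, 6015, 4682, 2840,  # buffer sizes from the log above
              4482, 4768, 4700, 4740, 4640]
    ratio = raw // max(logged)
    print(raw, max(logged), ratio)           # raw pixels are hundreds of times larger
    ```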

    II. Does OpenCV handle MP4 in Android like this? If not, do I need to use FFmpeg or JavaCV?

    III. Does the int size have something to do with it? Why is the size occasionally 6, and at other times 2400 to 6000? I've heard about the difference between this frame's information and information about the next frame, but I'm simply not knowledgeable enough to know how to apply that here.
    I'm starting to think this is where the issue lies. Since I need the 6-byte array for info about the next frame, perhaps my modulo 30 is incorrect. Should I pass the 29th or 31st frame as a format byte for each frame? How is that done in OpenCV, or are we doomed to use the complicated FFmpeg?

    IV. Can I fix this using Imgcodecs? I was hoping OpenCV would natively handle whether a buffer was color data for this frame or info about the next frame. I added the code below, but I am getting an empty array:

    Mat videoBufMat = Imgcodecs.imdecode(new MatOfByte(params[0]), Imgcodecs.IMREAD_UNCHANGED);

    This is also empty:

    Mat encodeVideoBuf = new Mat(4, params[0].length, CvType.CV_8UC4);
    encodeVideoBuf.put(0,0, params[0]);
    Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_UNCHANGED);

    V. Should I try converting the bytes into an Android JPEG and then importing it? Why does DJI's YUV decoder look so complicated? It makes me wary of trying FFmpeg or JavaCV and inclined to just stick with the Android decoder or the OpenCV decoder.

    VI. At what stage should I resize the frames to speed up calculations?