
Media (91)

Other articles (77)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, further modifications will also be needed (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, further modifications will also be needed (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers - including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

On other sites (4876)

  • bitmap to yuv, video recorded has only green pixels

    19 January 2016, by UserAx

    I am trying to convert a bitmap to YUV and record this YUV with the ffmpeg frame recorder...
    The recorded video contains only green pixels, although when I check the properties of the video it shows the expected frame rate and resolution...

    The YUV encoding part is correct, but I feel I am making a mistake somewhere else, most likely in passing the YUV bytes to the recording part (getByte(byte[] yuv)), because only there does the yuv.length printed in the console show 0; all the other methods print a large value...

    Kindly help...

    @Override
    public void onCreate(Bundle savedInstanceState) {
       super.onCreate(savedInstanceState);
       setContentView(R.layout.activity_main);
       directory.mkdirs();

       addListenerOnButton();

       play=(Button)findViewById(R.id.buttonplay);
       stop=(Button)findViewById(R.id.buttonstop);
       record=(Button)findViewById(R.id.buttonstart);

       stop.setEnabled(false);
       play.setEnabled(false);


       record.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
               startRecording();
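               // NOTE: an empty byte[] is passed to getByte() here, so yuv.length
               // inside getByte() is 0 and no pixel data reaches the recorder.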
               getByte(new byte[]{});
           }
       });

       stop.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
               stopRecording();
           }
       });


       play.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) throws IllegalArgumentException, SecurityException, IllegalStateException {
               Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(String.valueOf(asmileys)));
               intent.setDataAndType(Uri.parse(String.valueOf(asmileys)), "video/mp4");
               startActivity(intent);
               Toast.makeText(getApplicationContext(), "Playing Video", Toast.LENGTH_LONG).show();
           }
       });

    }

    ......//......



    public void getByte(byte[] yuv) {
       getNV21(640, 480, bitmap);
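       // NOTE: the NV21 buffer returned by getNV21() is discarded here; only the
       // (empty) yuv argument is written into the frame below.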
       System.out.println(yuv.length + " ");
       if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
           startTime = System.currentTimeMillis();
           return;
       }
       if (RECORD_LENGTH > 0) {
           int i = imagesIndex++ % images.length;
           yuvimage = images[i];
           timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
       }
           /* get video data */
       if (yuvimage != null && recording) {
               ((ByteBuffer) yuvimage.image[0].position(0)).put(yuv);

               if (RECORD_LENGTH <= 0) {
                   try {
                       long t = 1000 * (System.currentTimeMillis() - startTime);
                       if (t > recorder.getTimestamp()) {
                           recorder.setTimestamp(t);
                       }
                       recorder.record(yuvimage);
                   } catch (FFmpegFrameRecorder.Exception e) {

                       e.printStackTrace();
                   }
               }
           }
    }

    public byte [] getNV21(int inputWidth, int inputHeight, Bitmap bitmap) {

       int[] argb = new int[inputWidth * inputHeight];

       bitmap.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);

       byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
       encodeYUV420SP(yuv, argb, inputWidth, inputHeight);

       bitmap.recycle();
       System.out.println(yuv.length + " ");
       return yuv;

    }

    void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
       final int frameSize = width * height;

       int yIndex = 0;
       int uIndex = frameSize;
       int vIndex = frameSize;
       System.out.println(yuv420sp.length + " " + frameSize);

       int a, R, G, B, Y, U, V;
       int index = 0;
       for (int j = 0; j < height; j++) {
           for (int i = 0; i < width; i++) {

               a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
               R = (argb[index] & 0xff0000) >> 16;
               G = (argb[index] & 0xff00) >> 8;
               B = (argb[index] & 0xff) >> 0;

               // well known RGB to YUV algorithm

               Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
               U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
               V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

               // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
               //    meaning for every 4 Y pixels there are 1 V and 1 U.  Note the sampling is every other
               //    pixel AND every other scanline.
               yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
               if (j % 2 == 0 && index % 2 == 0) {
                   yuv420sp[uIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
                   yuv420sp[vIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
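                    // NOTE: uIndex and vIndex both start at frameSize and advance
                    // independently, so these two writes land on the same bytes and
                    // only half of the chroma plane is ever filled. NV21 interleaves
                    // V and U through a single index (V first, then U).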
               }

               index++;
           }
       }
    }

    .....//.....

    public void addListenerOnButton() {
    image = (ImageView) findViewById(R.id.imageView);
    image.setDrawingCacheEnabled(true);
    image.buildDrawingCache();
    bitmap = image.getDrawingCache();
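     // NOTE: the drawing-cache bitmap has the ImageView's own dimensions, which are
     // unlikely to be exactly the 640x480 that getNV21(640, 480, ...) assumes.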
    System.out.println(bitmap.getByteCount() + " " );

    button = (Button) findViewById(R.id.btn1);
    button.setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View view){
       image.setImageResource(R.drawable.image1);
     }
    });

    ......//......

    EDIT 1 :

    I made a few changes in the above code:

    record.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View v) {
               startRecording();
               getByte();
           }
       });
    .....//....

    public void getByte() {
       byte[] yuv = getNV21(640, 480, bitmap);

    So now in the console I get the same yuv length in this method as the yuv length from the getNV21 method.

    But now I am getting a half black, half green screen (black above and green below) in the recorded video...

    If I add these lines to the onCreate method:

    image = (ImageView) findViewById(R.id.imageView);
    image.setDrawingCacheEnabled(true);
    image.buildDrawingCache();
    bitmap = image.getDrawingCache();

    I get distorted frames (each frame is 1/4th of the displayed image, with colors mixed up here and there) in the video...

    All I am trying to learn is the image processing and the flow of byte[] data from one method to another, but I am still a noob...

    Kindly help..!
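
    For reference, here is a minimal sketch of the data flow the edit seems to be aiming for. It is not from the original post: it reuses the fields and helper from the code above (recorder, yuvimage, recording, startTime and getNV21()), assumes the recorder and the yuvimage Frame were created for 640x480, and adds an explicit scaling step so the NV21 buffer size matches what the frame expects:

    // Sketch only - builds on the fields declared in the question's activity.
    public void recordCurrentBitmap(Bitmap source) {
       int width = 640, height = 480;

       // Scale first so the NV21 buffer is exactly width * height * 3 / 2 bytes;
       // a buffer of a different size leaves part of the frame unfilled, which
       // shows up as green/black areas in the recorded video.
       Bitmap scaled = Bitmap.createScaledBitmap(source, width, height, false);
       byte[] nv21 = getNV21(width, height, scaled); // the question's converter

       if (yuvimage != null && recording) {
           ((ByteBuffer) yuvimage.image[0].position(0)).put(nv21);
           try {
               long t = 1000 * (System.currentTimeMillis() - startTime);
               if (t > recorder.getTimestamp()) {
                   recorder.setTimestamp(t);
               }
               recorder.record(yuvimage);
           } catch (FFmpegFrameRecorder.Exception e) {
               e.printStackTrace();
           }
       }
    }

    The half black/half green and quarter-size symptoms described above are consistent with the NV21 buffer being a different size than the frame the recorder expects, which is why the sketch scales the bitmap to the recording resolution before converting it.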

  • Live streaming: node-media-server + Dash.js configured for real-time low latency

    7 July 2021, by Maoration

    We're working on an app that enables live monitoring of your back yard. Each client has a camera connected to the internet, streaming to our public node.js server.

    



    I'm trying to use node-media-server to publish an MPEG-DASH (or HLS) stream to be available for our app clients, on different networks, bandwidths and resolutions around the world.

    



    Our goal is to get as close as possible to live "real-time" so you can monitor what happens in your backyard instantly.

    



    The technical flow already accomplished is:

    1. An ffmpeg process on our server processes the incoming camera stream (a separate child process for each camera) and publishes the stream via RTMP on the local machine for node-media-server to use as an 'input' (we are also saving segmented files, generating thumbnails, etc.). The ffmpeg command responsible for that is:

      -c:v libx264 -preset ultrafast -tune zerolatency -b:v 900k -f flv rtmp://127.0.0.1:1935/live/office

    2. node-media-server is running with what I found to be the default configuration for 'live-streaming':

      private NMS_CONFIG = {
        server: {
          secret: 'thisisnotmyrealsecret',
        },
        rtmp_server: {
          rtmp: {
            port: 1935,
            chunk_size: 60000,
            gop_cache: false,
            ping: 60,
            ping_timeout: 30,
          },
          http: {
            port: 8888,
            mediaroot: './server/media',
            allow_origin: '*',
          },
          trans: {
            ffmpeg: '/usr/bin/ffmpeg',
            tasks: [
              {
                app: 'live',
                hls: true,
                hlsFlags: '[hls_time=2:hls_list_size=3:hls_flags=delete_segments]',
                dash: true,
                dashFlags: '[f=dash:window_size=3:extra_window_size=5]',
              },
            ],
          },
        },
      };

    3. As I understand it, out of the box NMS (node-media-server) publishes the input stream it gets in multiple output formats: FLV, MPEG-DASH and HLS. With all sorts of online players for these formats I am able to access the stream using the localhost URL. With MPEG-DASH and HLS I am getting anywhere between 10-15 seconds of delay, and more.

    My goal now is to implement a local client-side MPEG-DASH player, using dash.js, and configure it to be as close as possible to live.

    My code for that is:

    <div>
        <video id="videoPlayer" autoplay="" controls=""></video>
    </div>

    <script src="https://cdnjs.cloudflare.com/ajax/libs/dashjs/3.0.2/dash.all.min.js"></script>

    <script>
        (function(){
            // var url = "https://dash.akamaized.net/envivio/EnvivioDash3/manifest.mpd";
            var url = "http://localhost:8888/live/office/index.mpd";
            var player = dashjs.MediaPlayer().create();

            // config
            targetLatency = 2.0;        // Lowering this value will lower latency but may decrease the player's ability to build a stable buffer.
            minDrift = 0.05;            // Minimum latency deviation allowed before activating catch-up mechanism.
            catchupPlaybackRate = 0.5;  // Maximum catch-up rate, as a percentage, for low latency live streams.
            stableBuffer = 2;           // The time that the internal buffer target will be set to post startup/seeks (NOT top quality).
            bufferAtTopQuality = 2;     // The time that the internal buffer target will be set to once playing the top quality.

            player.updateSettings({
                'streaming': {
                    'liveDelay': 2,
                    'liveCatchUpMinDrift': 0.05,
                    'liveCatchUpPlaybackRate': 0.5,
                    'stableBufferTime': 2,
                    'bufferTimeAtTopQuality': 2,
                    'bufferTimeAtTopQualityLongForm': 2,
                    'bufferToKeep': 2,
                    'bufferAheadToKeep': 2,
                    'lowLatencyEnabled': true,
                    'fastSwitchEnabled': true,
                    'abr': {
                        'limitBitrateByPortal': true
                    },
                }
            });

            console.log(player.getSettings());

            setInterval(() => {
                console.log('Live latency= ', player.getCurrentLiveLatency());
                console.log('Buffer length= ', player.getBufferLength('video'));
            }, 3000);

            player.initialize(document.querySelector("#videoPlayer"), url, true);

        })();
    </script>

    With the online test video (https://dash.akamaized.net/envivio/EnvivioDash3/manifest.mpd) I see that the live latency value is close to 2 seconds (but I have no way to actually confirm it, since it is a streamed video file; in my office I have a camera, so there I can actually compare the latency between real life and the stream I get). However, when working locally with my NMS, this value does not seem to go below 20-25 seconds.


    Am I doing something wrong? Is there a configuration on the player (client-side HTML) I'm forgetting, or a missing configuration I should add on the server side (NMS)?


  • yet another screenshot encoding exercise with ffmpeg - stuck at getting AVFrame from ATL::CImage - VC++

    11 September 2013, by sith

    Total AV newbie here, trying to learn the ropes of using FFmpeg functions to encode movies. While searching for tutorials I found a few similar questions, which I have linked here for reference:

    Encoding a screenshot into a video using FFMPEG

    [Libav-user] Encoding a screenshot into a video using FFMPEG

    Save bitmap to video (libavcodec ffmpeg)

    When converting from RGB to YUV using ffmpeg the video file the color is spread why ?

    How to convert RGB from YUV420p for ffmpeg encoder ?

    Encode bmp sequence with libavcodec...Help !

    Not able to encode image with ffmpeg

    For my setup FFMPEG is on VS12 - VC++ with MFC on win7.

    With the help of the above samples, I am able to get "some" output from the encoder, but I am not sure in what format or state the output has been encoded. Neither VLC nor WMP can play this file; they do not even seem to recognize the metadata in the file to display the FPS or video length. What would normally cause that? Also, any pointers on what could be going wrong and how to approach fixing the problems would be great.

    Here is the flow of my code :

    Step 1: capture the desktop into a CImage:

    int W=GetSystemMetrics(SM_CXSCREEN), H=GetSystemMetrics(SM_CYSCREEN), bpp=24;
    CImage cImg; cImg.Create(W, H, bpp);
    HDC hDC = cImg.GetDC();
    CWindowDC winDC(GetDesktopWindow());

    BitBlt(hDC, 0, 0, W, H, winDC.m_hDC, 0, 0, SRCCOPY);

    At this point I am able to dump a screenshot into a BMP file
    using cImg.Save(_T("test.bmp"), Gdiplus::ImageFormatBMP);

    Step 2: extract the BMP bits from the CImage.

    HBITMAP hBitmap = (HBITMAP)cImg;
    HDC memDC = CreateCompatibleDC(NULL);
    SelectObject( memDC, hBitmap );

    BITMAPINFO bmi; // initialized bmi with {W,-H, plane=1, bitCount=24, comp=BI_RGB, size=W*H*3 }
    << removed bmi init code for conciseness >>

    BYTE *rgb24Data = new BYTE[W*H*3]; // 3 for 24bpp. 4 for 32...
    int ret = GetDIBits(memDC, hBitmap, 0, H, rgb24Data, &bmi, DIB_RGB_COLORS);

    At this point I faithfully believe rgb24Data points to pixel data :) - copied out of the cImg bitmap

    Step 3: next I try to create an AVFrame with the rgb24Data obtained from this CImage. This is also where I have a massive knowledge gap that I am going to try and recover from.

    // setup the codecs and contexts here as per mohM's post

    AVCodec *currCodec = avcodec_find_encoder(CODEC_ID_MPEG4);

    AVCodecContext *codeCtxt = avcodec_alloc_context();  // init this with bitrate=400k, W, H,
    << removed codeCtxt init code for conciseness >>      // time base 1/25, gop=10, max_b=1, fmt=YUV420

    avcodec_open(codeCtxt, currCodec);

    SwsContext *currSWSCtxt = sws_getContext( W, H, AV_PIX_FMT_RGB24, // FROM
                                             W, H, AV_PIX_FMT_YUV420P, // TO
                                             SWS_FAST_BILINEAR,
                                             NULL, NULL, NULL);
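    // NOTE: pixel data obtained through GetDIBits() is stored in BGR order, so
    // AV_PIX_FMT_BGR24 may be the closer match for the source format here; using
    // AV_PIX_FMT_RGB24 on BGR data swaps the red and blue channels.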

    // allocate and fill AVFrame
    int numBytes = avpicture_get_size(PIX_FMT_YUV420P, W, H);
    uint8_t *buffer=new uint8_t[numBytes];
    AVFrame *avFrame = avcodec_alloc_frame();
    avpicture_fill( (AVPicture*)avFrame, buffer, PIX_FMT_YUV420P, W, H );
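    // NOTE: avpicture_fill() only points avFrame->data[] into `buffer`; it does not
    // copy any pixel data yet - that happens in the sws_scale() call below.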

    Step 4: transform the data into YUV420P as we fill the frame.

    uint8_t * inData[1] = { rgb24Data };
    int inLinesize[1] = { 3*W }; // RGB stride
    sws_scale( currSWSCtxt, inData, inLinesize, 0, H,
              avFrame->data, avFrame->linesize);

    Step 5: encode the frame and write the output buffer into a file.

    int out_size = avcodec_encode_video( codeCtxt,
                                        outBuf,
                                        outBufSize,
                                        avFrame );

    fwrite(outBuf, 1, outBufSize, outFile );
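    // NOTE: avcodec_encode_video() returns the number of bytes it actually wrote
    // into outBuf (out_size); writing outBufSize bytes per frame pads the file with
    // leftover buffer contents, which by itself would explain a huge, unplayable
    // output. The result is also a raw MPEG-4 elementary stream with no container,
    // so players have no header from which to read an FPS or duration; muxing the
    // packets through libavformat (or at least writing only out_size bytes) is the
    // usual approach.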

    Finally, I close the file off with [0x00 0x00 0x01 0xb7].

    The first hint that things have gone haywire is that 50 screens of 1920x1080 at 24 bpp, encoded at 25 fps, give me a 507 MB unplayable MPEG file.

    As mentioned earlier, neither VLC nor WMP can play this file, nor do they even recognize the metadata in the file to display the FPS or video length. What would normally cause that? Also, any pointers on what could be going wrong and how to approach fixing the problems would be great.

    Any guidance is much appreciated.