
Other articles (41)

  • Authorizations overloaded by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct steps.
    Upload and retrieval of information from the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions beyond the normal behavior are executed: the technical information of the file's audio and video streams is retrieved; a thumbnail is generated: extraction of a (...)

On other sites (6085)

  • How to create video using avcodec from jpeg images of type OpenCV::Mat ?

    23 July 2015, by theateist

    I have colored JPEG images of type OpenCV::Mat, and I create a video from them using avcodec. The video that I get is upside-down and black & white, and each row of each frame is shifted, so I get a diagonal line. What could be the reason for such output?
    Follow this link to watch the video I get using avcodec.
    I'm using the avpicture_fill function to create an avFrame from the cv::Mat frame.

    P.S.
    Each cv::Mat cvFrame has width=810, height=610, step=2432.
    I noticed that avFrame (which is filled by avpicture_fill) has linesize[0]=2430.
    I tried manually setting avFrame->linesize[0]=2432 instead of 2430, but it still didn't help.
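    The shifted rows and the diagonal line are the classic symptom of a stride mismatch: each cv::Mat row occupies step = 2432 bytes (OpenCV pads rows for alignment), while avpicture_fill computes linesize[0] = width*3 = 2430 for packed RGB24 and assumes the source buffer is tightly packed, so every row starts 2 bytes too early. A minimal plain-C sketch (no FFmpeg calls; the helper name is illustrative) of a row-by-row copy that respects both strides:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* Copy a width x height, 3-bytes-per-pixel image between buffers whose
     * rows may be padded to different lengths (src_stride / dst_stride).
     * Copying width*3 payload bytes per row skips the padding on both sides. */
    static void copy_with_stride(uint8_t *dst, int dst_stride,
                                 const uint8_t *src, int src_stride,
                                 int width, int height)
    {
        for (int h = 0; h < height; h++)
            memcpy(dst + h * dst_stride, src + h * src_stride, width * 3);
    }
    ```

    For the question's sizes this would copy 810*3 = 2430 bytes per row, advancing by 2432 in the cv::Mat buffer and by avFrame->linesize[0] in the destination.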

    ======== CODE =========================================================

    AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
    AVStream *outStream = avformat_new_stream(outContainer, encoder);
    avcodec_get_context_defaults3(outStream->codec, encoder);

    outStream->codec->pix_fmt = AV_PIX_FMT_YUV420P;
    outStream->codec->width = 810;
    outStream->codec->height = 610;
    //...

    SwsContext *swsCtx = sws_getContext(outStream->codec->width, outStream->codec->height, PIX_FMT_RGB24,
                                       outStream->codec->width, outStream->codec->height,  outStream->codec->pix_fmt, SWS_BICUBIC, NULL, NULL, NULL);

    for (uint i=0; i < frameNums; i++)
    {
       // get frame at location I using OpenCV
       cv::Mat cvFrame;
       myReader.getFrame(cvFrame, i);
       cv::Size frameSize = cvFrame.size();    
       //Each cv::Mat cvFrame has  width=810, height=610, step=2432


    1.  // create AVPicture from cv::Mat frame
    2.  avpicture_fill((AVPicture*)avFrame, cvFrame.data, PIX_FMT_RGB24, outStream->codec->width, outStream->codec->height);
    3.  avFrame->width = frameSize.width;
    4.  avFrame->height = frameSize.height;

       // rescale to outStream format
       sws_scale(swsCtx, avFrame->data, avFrame->linesize, 0, outStream->codec->height, avFrameRescaledFrame->data, avFrameRescaledFrame ->linesize);
    encoderRescaledFrame->pts=i;
    avFrameRescaledFrame->width = frameSize.width;
       avFrameRescaledFrame->height = frameSize.height;

    av_init_packet(&avEncodedPacket);
       avEncodedPacket.data = NULL;
       avEncodedPacket.size = 0;

       // encode rescaled frame
       if(avcodec_encode_video2(outStream->codec, &avEncodedPacket, avFrameRescaledFrame, &got_frame) < 0) exit(1);
       if(got_frame)
       {
           if (avEncodedPacket.pts != AV_NOPTS_VALUE)
               avEncodedPacket.pts =  av_rescale_q(avEncodedPacket.pts, outStream->codec->time_base, outStream->time_base);
           if (avEncodedPacket.dts != AV_NOPTS_VALUE)
               avEncodedPacket.dts = av_rescale_q(avEncodedPacket.dts, outStream->codec->time_base, outStream->time_base);

           // outContainer is "mp4"
           av_write_frame(outContainer, &avEncodedPacket);

           av_free_packet(&avEncodedPacket);
       }
    }

    UPDATED

    As @Alex suggested, I replaced lines 1-4 with the code below:

    int width = frameSize.width, height = frameSize.height;
    avpicture_alloc((AVPicture*)avFrame, AV_PIX_FMT_RGB24, outStream->codec->width, outStream->codec->height);
    for (int h = 0; h < height; h++)
    {
        memcpy(&(avFrame->data[0][h*avFrame->linesize[0]]), &(cvFrame.data[h*cvFrame.step]), width*3);
    }

    The video (here) I get now is almost perfect. It's NOT upside-down and NOT black & white, BUT it seems that one of the RGB components is missing: every brown/red color became blue (in the original images it should be the other way around).
    What could be the problem? Could rescaling (sws_scale) to the AV_PIX_FMT_YUV420P format cause this?
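    The red/blue swap is most likely not caused by sws_scale itself: OpenCV stores color images in BGR order, but the buffer is declared to the scaler as PIX_FMT_RGB24, so the R and B channels trade places. One fix is to pass PIX_FMT_BGR24 as the source format of sws_getContext and let the scaler handle it; the equivalent manual fix, shown here as a plain-C sketch with no FFmpeg dependency (the function name is illustrative), is to swap the first and third byte of every pixel:

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Convert a packed 3-bytes-per-pixel buffer from BGR to RGB in place
     * by swapping bytes 0 and 2 of each pixel. */
    static void bgr_to_rgb_inplace(uint8_t *buf, int num_pixels)
    {
        for (int i = 0; i < num_pixels; i++) {
            uint8_t tmp = buf[3 * i];        /* save B                */
            buf[3 * i]     = buf[3 * i + 2]; /* R moves into byte 0   */
            buf[3 * i + 2] = tmp;            /* B moves into byte 2   */
        }
    }
    ```

    Declaring the source as BGR24 is the cheaper option, since the swap then happens during the YUV conversion that sws_scale performs anyway.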

  • avformat/mpjpeg: allow processing of MIME parts without Content-Length header

    30 November 2015, by Alex Agranovsky
    avformat/mpjpeg: allow processing of MIME parts without Content-Length header
    

    Fixes ticket 5023

    Signed-off-by: Alex Agranovsky <alex@sighthound.com>

    • [DH] libavformat/mpjpegdec.c
  • Issue in recording video

    16 November 2015, by human123

    I am trying to record video in 480*480 resolution, as in Vine, using javacv. As a starting point I used the sample provided at https://github.com/bytedeco/javacv/blob/master/samples/RecordActivity.java. Video is getting recorded (though not in the desired resolution) and saved.

    But the issue is that 480*480 resolution is not natively supported on Android, so some pre-processing needs to be done to get the video in the desired resolution.

    So once I was able to record video using the code sample provided by javacv, the next challenge was how to pre-process the video. On research, it was found that efficient cropping is possible when the final image width required is the same as the recorded image width. Such a solution was provided in the SO question Recording video on Android using JavaCV (Updated 2014 02 17). I changed the onPreviewFrame method as suggested in that answer.

       @Override
       public void onPreviewFrame(byte[] data, Camera camera) {
           if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
               startTime = System.currentTimeMillis();
               return;
           }
           if (RECORD_LENGTH > 0) {
               int i = imagesIndex++ % images.length;
               yuvImage = images[i];
               timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
           }
           /* get video data */
           imageWidth = 640;
           imageHeight = 480;
           int finalImageHeight = 360;
           if (yuvImage != null && recording) {
               ByteBuffer bb = (ByteBuffer)yuvImage.image[0].position(0); // resets the buffer
               final int startY = imageWidth*(imageHeight-finalImageHeight)/2;
               final int lenY = imageWidth*finalImageHeight;
               bb.put(data, startY, lenY);
               final int startVU = imageWidth*imageHeight + imageWidth*(imageHeight-finalImageHeight)/4;
               final int lenVU = imageWidth* finalImageHeight/2;
               bb.put(data, startVU, lenVU);
               try {
                   long t = 1000 * (System.currentTimeMillis() - startTime);
                   if (t > recorder.getTimestamp()) {
                       recorder.setTimestamp(t);
                   }
                   recorder.record(yuvImage);
               } catch (FFmpegFrameRecorder.Exception e) {
                   Log.e(LOG_TAG, "problem with recorder():", e);
               }
           }


       }
    }
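    The offset arithmetic in the snippet above can be checked outside Android. In the NV21 preview format the Y plane occupies imageWidth*imageHeight bytes and is followed by an interleaved VU plane at half the height; a centered vertical crop skips half of the dropped rows at the top of each plane. A plain-C sketch of the same computation (the helper name and struct are illustrative, not part of any API), hard-coded to nothing and reusable for the question's 640x480 → 640x360 case:

    ```c
    #include <assert.h>

    /* Byte offsets into an NV21 buffer for a centered vertical crop:
     * keep finalH of the original h rows, at full width w. */
    struct crop_offsets { int startY, lenY, startVU, lenVU; };

    static struct crop_offsets nv21_vcrop(int w, int h, int finalH)
    {
        struct crop_offsets c;
        c.startY  = w * (h - finalH) / 2;         /* skip top Y rows          */
        c.lenY    = w * finalH;                   /* kept Y bytes             */
        c.startVU = w * h + w * (h - finalH) / 4; /* VU plane base + top crop */
        c.lenVU   = w * finalH / 2;               /* kept VU bytes            */
        return c;
    }
    ```

    The VU offset divides by 4 because the VU plane has h/2 rows, and the crop drops (h-finalH)/2 of them split evenly between top and bottom.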

    Please also note that this solution was provided for an older version of javacv. The resulting video had a yellowish overlay covering two-thirds of the frame. There was also an empty section on the left side, as the video was not cropped correctly.

    So my question is: what is the most appropriate solution for cropping videos using the latest version of javacv?

    Code after making the change suggested by Alex Cohn:

       @Override
       public void onPreviewFrame(byte[] data, Camera camera) {
           if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
               startTime = System.currentTimeMillis();
               return;
           }
           if (RECORD_LENGTH > 0) {
               int i = imagesIndex++ % images.length;
               yuvImage = images[i];
               timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
           }
           /* get video data */
           imageWidth = 640;
           imageHeight = 480;      
           destWidth = 480;

           if (yuvImage != null && recording) {
               ByteBuffer bb = (ByteBuffer)yuvImage.image[0].position(0); // resets the buffer
               int start = 2*((imageWidth-destWidth)/4); // this must be even
               for (int row=0; row < imageHeight*3/2; row++) {
                   bb.put(data, start, destWidth);
                   start += imageWidth;
               }
               try {
                   long t = 1000 * (System.currentTimeMillis() - startTime);
                   if (t > recorder.getTimestamp()) {
                       recorder.setTimestamp(t);
                   }
                   recorder.record(yuvImage);
               } catch (FFmpegFrameRecorder.Exception e) {
                   Log.e(LOG_TAG, "problem with recorder():", e);
               }
           }


       }

    A screenshot from the video generated with this code (destWidth 480):

    video resolution 480*480

    Next I tried capturing a video with destWidth specified as 639. The result:

    639*480

    When destWidth is 639, the video repeats its contents twice. When it is 480, the contents are repeated 5 times and the green overlay and distortion are worse.

    Also, when destWidth = imageWidth, the video is captured properly; i.e., for 640*480 there is no repetition of the video contents and no green overlay.
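    One plausible explanation for the repetition: the loop advances the source by imageWidth per row but writes rows of only destWidth, so unless the destination Frame is itself allocated destWidth pixels wide (and the recorder configured for destWidth x imageHeight), the narrower rows wrap inside a 640-wide buffer and the content repeats. A plain-C sketch of the same crop with explicit source and destination strides (nv21_hcrop is an illustrative name, not a javacv API):

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* Centered horizontal crop of an NV21 frame from srcW to dstW pixels
     * (same height h). The destination must be sized for dstW, i.e.
     * dstW * h * 3 / 2 bytes; otherwise rows wrap and content repeats. */
    static void nv21_hcrop(uint8_t *dst, const uint8_t *src,
                           int srcW, int dstW, int h)
    {
        int start = 2 * ((srcW - dstW) / 4); /* even left offset, as in the post */
        int rows  = h * 3 / 2;               /* h Y rows + h/2 interleaved VU rows */
        for (int row = 0; row < rows; row++)
            memcpy(dst + row * dstW, src + row * srcW + start, dstW);
    }
    ```

    The same even left offset works for both planes because an NV21 VU row has the same byte width as a Y row, with chroma in byte pairs.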

    Converting frame to IplImage

    When this question was first asked, I failed to mention that the record method in FFmpegFrameRecorder now accepts an object of type Frame, whereas earlier it took an IplImage object. So I tried to apply Alex Cohn's solution by converting the Frame to an IplImage.

    //---------------------------------------
    // initialize ffmpeg_recorder
    //---------------------------------------
    private void initRecorder() {

       Log.w(LOG_TAG,"init recorder");

       imageWidth = 640;
       imageHeight = 480;

       if (RECORD_LENGTH > 0) {
           imagesIndex = 0;
           images = new Frame[RECORD_LENGTH * frameRate];
           timestamps = new long[images.length];
           for (int i = 0; i < images.length; i++) {
               images[i] = new Frame(imageWidth, imageHeight, Frame.DEPTH_UBYTE, 2);
               timestamps[i] = -1;
           }
       } else if (yuvImage == null) {
           yuvImage = new Frame(imageWidth, imageHeight, Frame.DEPTH_UBYTE, 2);
           Log.i(LOG_TAG, "create yuvImage");
           OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();
           yuvIplimage = converter.convert(yuvImage);

       }

       Log.i(LOG_TAG, "ffmpeg_url: " + ffmpeg_link);
       recorder = new FFmpegFrameRecorder(ffmpeg_link, imageWidth, imageHeight, 1);
       recorder.setFormat("flv");
       recorder.setSampleRate(sampleAudioRateInHz);
       // Set in the surface changed method
       recorder.setFrameRate(frameRate);

       Log.i(LOG_TAG, "recorder initialize success");

       audioRecordRunnable = new AudioRecordRunnable();
       audioThread = new Thread(audioRecordRunnable);
       runAudioThread = true;
    }



    @Override
       public void onPreviewFrame(byte[] data, Camera camera) {
           if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
               startTime = System.currentTimeMillis();
               return;
           }
           if (RECORD_LENGTH > 0) {
               int i = imagesIndex++ % images.length;
               yuvImage = images[i];
               timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
           }
           /* get video data */
           int destWidth = 640;

           if (yuvIplimage != null && recording) {
               ByteBuffer bb = yuvIplimage.getByteBuffer(); // resets the buffer
               int start = 2*((imageWidth-destWidth)/4); // this must be even
               for (int row=0; row < imageHeight*3/2; row++) {
                   bb.put(data, start, destWidth);
                   start += imageWidth;
               }
               try {
                   long t = 1000 * (System.currentTimeMillis() - startTime);
                   if (t > recorder.getTimestamp()) {
                       recorder.setTimestamp(t);
                   }
                   recorder.record(yuvImage);
               } catch (FFmpegFrameRecorder.Exception e) {
                   Log.e(LOG_TAG, "problem with recorder():", e);
               }
           }


       }

    But the videos generated with this method contained only green frames.
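    A likely explanation for the all-green frames: the recorder is reading a buffer that was never filled (here, the bytes went into yuvIplimage's buffer while recorder.record(yuvImage) reads the Frame, which after conversion may not share the same memory). In YUV an untouched all-zero buffer does not decode to black but to green, because the chroma components are biased around 128. This can be checked with the common BT.601 integer conversion (a standalone sketch, not javacv code):

    ```c
    #include <assert.h>

    static int clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : v; }

    /* Widely used 8-bit integer approximation of BT.601 YUV -> RGB. */
    static void yuv_to_rgb(int y, int u, int v, int *r, int *g, int *b)
    {
        int c = y - 16, d = u - 128, e = v - 128;
        *r = clamp8((298 * c + 409 * e + 128) >> 8);
        *g = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8);
        *b = clamp8((298 * c + 516 * d + 128) >> 8);
    }
    ```

    With Y=U=V=0 the red and blue terms clamp to 0 while green stays large, so a zeroed frame renders as solid green; true black is Y=16, U=V=128.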