Advanced search

Media (3)

Word: - Tags -/pdf

Other articles (52)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to perform other manual (...)

  • Improvements to the base version

    13 September 2013

    A nicer multiple-select field
    The Chosen plugin improves the usability of multiple-select form fields. See the two images below for a comparison.
    To use it, enable the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

On other sites (5784)

  • How can I create a portrait video using Android's MediaRecorder

    23 March 2015, by urudroid

    I have an Android application which is able to record and play videos in portrait mode; those features work fine on Android phones.

    The issue arises because these videos also need to be playable on iOS devices (after being shared through a server).

    iOS does not display the video correctly: it looks "cropped". Videos recorded on iOS, however, play without issues.

    So the main differences between videos created on Android and on iOS are the size and the rotation.

    I'm using the CWAC-Camera library for preview and recording, and ffmpeg to scale the video down to 320x568 px (the standard size for both the Android and iOS apps).

    Here is the metadata from a video created on Android:

    General
    Complete name                            : android_video.mp4
    Format                                   : MPEG-4
    Format profile                           : Base Media
    Codec ID                                 : isom
    File size                                : 447 KiB
    Duration                                 : 5s 596ms
    Overall bit rate                         : 654 Kbps
    Encoded date                             : UTC 1904-01-01 00:00:00
    Tagged date                              : UTC 1904-01-01 00:00:00
    Writing application                      : Lavf56.4.101

    Video
    ID                                       : 1
    Format                                   : AVC
    Format/Info                              : Advanced Video Codec
    Format profile                           : High@L2.1
    Format settings, CABAC                   : Yes
    Format settings, ReFrames                : 4 frames
    Codec ID                                 : avc1
    Codec ID/Info                            : Advanced Video Coding
    Duration                                 : 5s 406ms
    Bit rate                                 : 536 Kbps
    Width                                    : 568 pixels
    Height                                   : 320 pixels
    Display aspect ratio                     : 16:9
    Original display aspect ratio            : 16:9
    Rotation                                 : 270°
    Frame rate mode                          : Constant
    Frame rate                               : 14.985 fps
    Color space                              : YUV
    Chroma subsampling                       : 4:2:0
    Bit depth                                : 8 bits
    Scan type                                : Progressive
    Bits/(Pixel*Frame)                       : 0.197
    Stream size                              : 354 KiB (79%)
    Writing library                          : x264 core 142
    Encoding settings                        : cabac=1 / ref=3 / deblock=1:0:0 / analyse=0x3:0x113 / me=hex / subme=7 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=6 / lookahead_threads=1 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=3 / b_pyramid=2 / b_adapt=1 / b_bias=0 / direct=1 / weightb=1 / open_gop=0 / weightp=2 / keyint=250 / keyint_min=14 / scenecut=40 / intra_refresh=0 / rc_lookahead=40 / rc=crf / mbtree=1 / crf=23.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00
    Language                                 : English
    Encoded date                             : UTC 1904-01-01 00:00:00
    Tagged date                              : UTC 1904-01-01 00:00:00

    Audio
    ID                                       : 2
    Format                                   : AAC
    Format/Info                              : Advanced Audio Codec
    Format profile                           : LC
    Codec ID                                 : 40
    Duration                                 : 5s 596ms
    Bit rate mode                            : Constant
    Bit rate                                 : 132 Kbps
    Channel(s)                               : 2 channels
    Channel(s)_Original                      : 1 channel
    Channel positions                        : Front: C
    Sampling rate                            : 44.1 KHz
    Compression mode                         : Lossy
    Stream size                              : 89.4 KiB (20%)
    Language                                 : English
    Encoded date                             : UTC 1904-01-01 00:00:00
    Tagged date                              : UTC 1904-01-01 00:00:00

    And here is the metadata from the video created on iOS:

    General
    Complete name                            : ios_video.mp4
    Format                                   : MPEG-4
    Format profile                           : Base Media / Version 2
    Codec ID                                 : mp42
    File size                                : 673 KiB
    Duration                                 : 7s 38ms
    Overall bit rate                         : 783 Kbps
    Encoded date                             : UTC 2015-03-17 19:16:36
    Tagged date                              : UTC 2015-03-17 19:16:37

    Video
    ID                                       : 2
    Format                                   : AVC
    Format/Info                              : Advanced Video Codec
    Format profile                           : Main@L3.0
    Format settings, CABAC                   : Yes
    Format settings, ReFrames                : 2 frames
    Codec ID                                 : avc1
    Codec ID/Info                            : Advanced Video Coding
    Duration                                 : 7s 33ms
    Bit rate                                 : 711 Kbps
    Width                                    : 320 pixels
    Height                                   : 568 pixels
    Display aspect ratio                     : 0.563
    Frame rate mode                          : Constant
    Frame rate                               : 30.000 fps
    Color space                              : YUV
    Chroma subsampling                       : 4:2:0
    Bit depth                                : 8 bits
    Scan type                                : Progressive
    Bits/(Pixel*Frame)                       : 0.130
    Stream size                              : 610 KiB (91%)
    Title                                    : Core Media Video
    Encoded date                             : UTC 2015-03-17 19:16:36
    Tagged date                              : UTC 2015-03-17 19:16:37
    Color primaries                          : BT.709
    Transfer characteristics                 : BT.709
    Matrix coefficients                      : BT.709
    Color range                              : Limited

    Audio
    ID                                       : 1
    Format                                   : AAC
    Format/Info                              : Advanced Audio Codec
    Format profile                           : LC
    Codec ID                                 : 40
    Duration                                 : 7s 38ms
    Source duration                          : 7s 105ms
    Bit rate mode                            : Constant
    Bit rate                                 : 64.0 Kbps
    Channel(s)                               : 2 channels
    Channel(s)_Original                      : 1 channel
    Channel positions                        : Front: C
    Sampling rate                            : 44.1 KHz
    Compression mode                         : Lossy
    Stream size                              : 56.8 KiB (8%)
    Source stream size                       : 57.2 KiB (9%)
    Title                                    : Core Media Audio
    Encoded date                             : UTC 2015-03-17 19:16:36
    Tagged date                              : UTC 2015-03-17 19:16:37

    The width and height values are swapped on Android, and the Rotation parameter is set to 270° (the rotation used for portrait video).

    This is a sketch of how iOS videos look in the iOS app:

    Videos recorded on iOS

    And this is how Android videos look in the iOS app:

    (screenshot)

    So, in order to get the videos displayed correctly on both iOS and Android, I need to set the width to 320 and the height to 568 on Android. I have tried this in several places (both outside and inside the CWAC-Camera library), but I always get a Camera.Parameters error.

    Is it possible to do this on Android?

    EDIT:

    This is the result I get when I set the rotation to 0 with ffmpeg:

    (screenshot)
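    One way to make the Android recordings display correctly everywhere is to rotate the pixels physically and clear the rotation tag, rather than relying on players to honor the metadata. A sketch of the ffmpeg invocation, not a verified fix: the filenames are placeholders, and whether the rotate tag must also be cleared depends on the ffmpeg build (newer builds apply it automatically on decode, so test with and without `-noautorotate`):

```shell
# A rotate tag of 270 maps to transpose=2 (90 degrees counter-clockwise);
# a tag of 90 maps to transpose=1 (90 degrees clockwise).
rotation=270
case "$rotation" in
  90)  vf="transpose=1,scale=320:568" ;;
  270) vf="transpose=2,scale=320:568" ;;
  *)   vf="scale=320:568" ;;
esac
# android_video.mp4 / fixed.mp4 are placeholder names.
echo ffmpeg -i android_video.mp4 -vf "$vf" -metadata:s:v:0 rotate=0 -c:a copy fixed.mp4
```

    After this step the file is a true 320x568 portrait video with no rotation tag, which is the shape the iOS recordings already have.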

  • encode h264 video using ffmpeg library: memory issues

    31 March 2015, by Zeppa

    I’m trying to do screen capture on OS X using ffmpeg’s avfoundation library. I capture frames from the screen and encode them using H264 into an FLV container.

    Here’s the command-line output of the program:

    Input #0, avfoundation, from 'Capture screen 0':
     Duration: N/A, start: 9.253649, bitrate: N/A
       Stream #0:0: Video: rawvideo (UYVY / 0x59565955), uyvy422, 1440x900, 14.58 tbr, 1000k tbn, 1000k tbc
    raw video is inCodec
    FLV (Flash Video)http://localhost:8090/test.flv
    [libx264 @ 0x102038e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 0x102038e00] profile High, level 4.0
    [libx264 @ 0x102038e00] 264 - core 142 r2495 6a301b6 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=1 weightp=2 keyint=50 keyint_min=5 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=400 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    [tcp @ 0x101a5fe70] Connection to tcp://localhost:8090 failed (Connection refused), trying next address
    [tcp @ 0x101a5fe70] Connection to tcp://localhost:8090 failed: Connection refused
    url_fopen failed: Operation now in progress
    [flv @ 0x102038800] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
    encoded frame #0
    encoded frame #1
    ......
    encoded frame #49
    encoded frame #50
    testmee(8404,0x7fff7e05c300) malloc: *** error for object 0x102053e08: incorrect checksum for freed object - object was probably modified after being freed.
    *** set a breakpoint in malloc_error_break to debug
    (lldb) bt
    * thread #10: tid = 0x43873, 0x00007fff95639286 libsystem_kernel.dylib`__pthread_kill + 10, stop reason = signal SIGABRT
     * frame #0: 0x00007fff95639286 libsystem_kernel.dylib`__pthread_kill + 10
       frame #1: 0x00007fff9623742f libsystem_pthread.dylib`pthread_kill + 90
       frame #2: 0x00007fff977ceb53 libsystem_c.dylib`abort + 129
       frame #3: 0x00007fff9ab59e06 libsystem_malloc.dylib`szone_error + 625
       frame #4: 0x00007fff9ab4f799 libsystem_malloc.dylib`small_malloc_from_free_list + 1105
       frame #5: 0x00007fff9ab4d3bc libsystem_malloc.dylib`szone_malloc_should_clear + 1449
       frame #6: 0x00007fff9ab4c877 libsystem_malloc.dylib`malloc_zone_malloc + 71
       frame #7: 0x00007fff9ab4b395 libsystem_malloc.dylib`malloc + 42
       frame #8: 0x00007fff94aa63d2 IOSurface`IOSurfaceClientLookupFromMachPort + 40
       frame #9: 0x00007fff94aa6b38 IOSurface`IOSurfaceLookupFromMachPort + 12
       frame #10: 0x00007fff92bfa6b2 CoreGraphics`_CGYDisplayStreamFrameAvailable + 342
       frame #11: 0x00007fff92f6759c CoreGraphics`CGYDisplayStreamNotification_server + 336
       frame #12: 0x00007fff92bfada6 CoreGraphics`display_stream_runloop_callout + 46
       frame #13: 0x00007fff956eba07 CoreFoundation`__CFMachPortPerform + 247
       frame #14: 0x00007fff956eb8f9 CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 41
       frame #15: 0x00007fff956eb86b CoreFoundation`__CFRunLoopDoSource1 + 475
       frame #16: 0x00007fff956dd3e7 CoreFoundation`__CFRunLoopRun + 2375
       frame #17: 0x00007fff956dc858 CoreFoundation`CFRunLoopRunSpecific + 296
       frame #18: 0x00007fff95792ef1 CoreFoundation`CFRunLoopRun + 97
       frame #19: 0x0000000105f79ff1 CMIOUnits`___lldb_unnamed_function2148$$CMIOUnits + 875
       frame #20: 0x0000000105f6f2c2 CMIOUnits`___lldb_unnamed_function2127$$CMIOUnits + 14
       frame #21: 0x00007fff97051765 CoreMedia`figThreadMain + 417
       frame #22: 0x00007fff96235268 libsystem_pthread.dylib`_pthread_body + 131
       frame #23: 0x00007fff962351e5 libsystem_pthread.dylib`_pthread_start + 176
       frame #24: 0x00007fff9623341d libsystem_pthread.dylib`thread_start + 13

    I’ve attached the code I used below.

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libavdevice/avdevice.h>
    #include <libavutil/opt.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    /* compile using
    gcc -g -o stream test.c -lavformat -lavutil -lavcodec -lavdevice -lswscale
    */

    // void show_av_device() {

    //    inFmt->get_device_list(inFmtCtx, device_list);
    //    printf("Device Info=============\n");
    //    //avformat_open_input(&inFmtCtx,"video=Capture screen 0",inFmt,&inOptions);
    //    printf("===============================\n");
    // }

    void AVFAIL (int code, const char *what) {
       char msg[500];
       av_strerror(code, msg, sizeof(msg));
       fprintf(stderr, "failed: %s\nerror: %s\n", what, msg);
       exit(2);
    }

    #define AVCHECK(f) do { int e = (f); if (e < 0) AVFAIL(e, #f); } while (0)
    #define AVCHECKPTR(p,f) do { p = (f); if (!p) AVFAIL(AVERROR_UNKNOWN, #f); } while (0)

    void registerLibs() {
       av_register_all();
       avdevice_register_all();
       avformat_network_init();
       avcodec_register_all();
    }

    int main(int argc, char *argv[]) {

       //conversion variables
       struct SwsContext *swsCtx = NULL;
       //input stream variables
       AVFormatContext   *inFmtCtx = NULL;
       AVCodecContext    *inCodecCtx = NULL;
       AVCodec           *inCodec = NULL;
       AVInputFormat     *inFmt = NULL;
       AVFrame           *inFrame = NULL;
       AVDictionary      *inOptions = NULL;
       const char *streamURL = "http://localhost:8090/test.flv";
       const char *name = "avfoundation";

    //    AVFrame           *inFrameYUV = NULL;
       AVPacket          inPacket;


       //output stream variables
       AVCodecContext    *outCodecCtx = NULL;
       AVCodec           *outCodec;
       AVFormatContext   *outFmtCtx = NULL;
       AVOutputFormat    *outFmt = NULL;
       AVFrame           *outFrameYUV = NULL;
       AVStream          *stream = NULL;

       int               i, videostream, ret;
       int               numBytes, frameFinished;

       registerLibs();
       inFmtCtx = avformat_alloc_context(); //alloc input context
       av_dict_set(&amp;inOptions, "pixel_format", "uyvy422", 0);
       av_dict_set(&amp;inOptions, "probesize", "7000000", 0);

       inFmt = av_find_input_format(name);
       ret = avformat_open_input(&inFmtCtx, "Capture screen 0:", inFmt, &inOptions);
       if (ret < 0) {
           printf("Could not load the context for the input device\n");
           return -1;
       }
       if (avformat_find_stream_info(inFmtCtx, NULL) < 0) {
           printf("Could not find stream info for screen\n");
           return -1;
       }
       av_dump_format(inFmtCtx, 0, "Capture screen 0", 0);
       // inFmtCtx->streams is an array of pointers of size inFmtCtx->nb_stream

       videostream = av_find_best_stream(inFmtCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &inCodec, 0);
       if (videostream == -1) {
           printf("no video stream found\n");
           return -1;
       } else {
           printf("%s is inCodec\n", inCodec->long_name);
       }
       inCodecCtx = inFmtCtx->streams[videostream]->codec;
       // open codec
       if (avcodec_open2(inCodecCtx, inCodec, NULL) < 0) {
           printf("Couldn't open codec");
           return -1;  // couldn't open codec
       }


           //setup output params
       outFmt = av_guess_format(NULL, streamURL, NULL);
       if(outFmt == NULL) {
           printf("output format was not guessed properly");
           return -1;
       }

       if (!(outFmtCtx = avformat_alloc_context())) {
           printf("output context not allocated. ERROR");
           return -1;
       }

       printf("%s", outFmt->long_name);

       outFmtCtx->oformat = outFmt;

       snprintf(outFmtCtx->filename, sizeof(outFmtCtx->filename), "%s", streamURL);
       printf("%s\n", outFmtCtx->filename);

       outCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
       if(!outCodec) {
           printf("could not find encoder for H264 \n" );
           return -1;
       }

       stream = avformat_new_stream(outFmtCtx, outCodec);
       outCodecCtx = stream->codec;
       avcodec_get_context_defaults3(outCodecCtx, outCodec);

       outCodecCtx->codec_id = AV_CODEC_ID_H264;
       outCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
       outCodecCtx->flags = CODEC_FLAG_GLOBAL_HEADER;
       outCodecCtx->width = inCodecCtx->width;
       outCodecCtx->height = inCodecCtx->height;
       outCodecCtx->time_base.den = 25;
       outCodecCtx->time_base.num = 1;
       outCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
       outCodecCtx->gop_size = 50;
       outCodecCtx->bit_rate = 400000;

       //setup output encoders etc
       if(stream) {
           ret = avcodec_open2(outCodecCtx, outCodec, NULL);
           if (ret < 0) {
               printf("Could not open output encoder");
               return -1;
           }
       }

       if (avio_open(&outFmtCtx->pb, outFmtCtx->filename, AVIO_FLAG_WRITE ) < 0) {
           perror("url_fopen failed");
       }

       avio_open_dyn_buf(&outFmtCtx->pb);
       ret = avformat_write_header(outFmtCtx, NULL);
       if (ret != 0) {
           printf("was not able to write header to output format");
           return -1;
       }
       unsigned char *pb_buffer;
       int len = avio_close_dyn_buf(outFmtCtx->pb, (unsigned char **)(&pb_buffer));
       avio_write(outFmtCtx->pb, (unsigned char *)pb_buffer, len);


       numBytes = avpicture_get_size(PIX_FMT_UYVY422, inCodecCtx->width, inCodecCtx->height);
       // Allocate video frame
       inFrame = av_frame_alloc();

       swsCtx = sws_getContext(inCodecCtx->width, inCodecCtx->height, inCodecCtx->pix_fmt, inCodecCtx->width,
                               inCodecCtx->height, PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
       int frame_count = 0;
       while(av_read_frame(inFmtCtx, &inPacket) >= 0) {
           if(inPacket.stream_index == videostream) {
               avcodec_decode_video2(inCodecCtx, inFrame, &frameFinished, &inPacket);
               // 1 Frame might need more than 1 packet to be filled
               if(frameFinished) {
                   outFrameYUV = av_frame_alloc();

                   uint8_t *buffer = (uint8_t *)av_malloc(numBytes);

                   int ret = avpicture_fill((AVPicture *)outFrameYUV, buffer, PIX_FMT_YUV420P,
                                            inCodecCtx->width, inCodecCtx->height);
                    if(ret < 0){
                       printf("%d is return val for fill\n", ret);
                       return -1;
                   }
                   //convert image to YUV
                   sws_scale(swsCtx, (uint8_t const * const* )inFrame->data,
                             inFrame->linesize, 0, inCodecCtx->height,
                             outFrameYUV->data, outFrameYUV->linesize);
                   //outFrameYUV now holds the YUV scaled frame/picture
                   outFrameYUV->format = outCodecCtx->pix_fmt;
                   outFrameYUV->width = outCodecCtx->width;
                   outFrameYUV->height = outCodecCtx->height;


                   AVPacket pkt;
                   int got_output;
                    av_init_packet(&pkt);
                   pkt.data = NULL;
                   pkt.size = 0;

                   outFrameYUV->pts = frame_count;

                    ret = avcodec_encode_video2(outCodecCtx, &pkt, outFrameYUV, &got_output);
                    if (ret < 0) {
                       fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
                       return -1;
                   }

                   if(got_output) {
                       if(stream->codec->coded_frame->key_frame) {
                           pkt.flags |= AV_PKT_FLAG_KEY;
                       }
                       pkt.stream_index = stream->index;
                       if(pkt.pts != AV_NOPTS_VALUE)
                           pkt.pts = av_rescale_q(pkt.pts, stream->codec->time_base, stream->time_base);
                       if(pkt.dts != AV_NOPTS_VALUE)
                           pkt.dts = av_rescale_q(pkt.dts, stream->codec->time_base, stream->time_base);
                        if(avio_open_dyn_buf(&outFmtCtx->pb)!= 0) {
                           printf("ERROR: Unable to open dynamic buffer\n");
                       }
                        ret = av_interleaved_write_frame(outFmtCtx, &pkt);
                        unsigned char *pb_buffer;
                        int len = avio_close_dyn_buf(outFmtCtx->pb, (unsigned char **)&pb_buffer);
                       avio_write(outFmtCtx->pb, (unsigned char *)pb_buffer, len);

                   } else {
                       ret = 0;
                   }
                   if(ret != 0) {
                       fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
                       exit(1);
                   }

                   fprintf(stderr, "encoded frame #%d\n", frame_count);
                   frame_count++;

                    av_free_packet(&pkt);
                   av_frame_free(&amp;outFrameYUV);
                   av_free(buffer);

               }
           }
       av_free_packet(&inPacket);
       }
       av_write_trailer(outFmtCtx);

       //close video stream
       if(stream) {
           avcodec_close(outCodecCtx);
       }
       for (i = 0; i < outFmtCtx->nb_streams; i++) {
           av_freep(&amp;outFmtCtx->streams[i]->codec);
           av_freep(&amp;outFmtCtx->streams[i]);
       }
       if (!(outFmt->flags & AVFMT_NOFILE))
       /* Close the output file. */
           avio_close(outFmtCtx->pb);
       /* free the output format context */
       avformat_free_context(outFmtCtx);

       // Free the YUV frame populated by the decoder
       av_free(inFrame);

       // Close the video codec (decoder)
       avcodec_close(inCodecCtx);

       // Close the input video file
       avformat_close_input(&amp;inFmtCtx);

       return 1;

    }

    I’m not sure what I’ve done wrong here, but what I’ve observed is that my memory usage grows by about 6 MB for each encoded frame. Backtracking afterwards usually leads to one of the following two culprits:

    1. avf_read_frame function in avfoundation.m
    2. av_dup_packet function in avpacket.h

    Can I also get advice on the way I’m using the avio_open_dyn_buf function to stream over HTTP? I’ve also attached my ffmpeg library versions below:

       ffmpeg version N-70876-g294bb6c Copyright (c) 2000-2015 the FFmpeg developers
     built with Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn)
     configuration: --prefix=/usr/local --enable-gpl --enable-postproc --enable-pthreads --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-libvorbis --disable-mmx --disable-ssse3 --disable-armv5te --disable-armv6 --disable-neon --enable-shared --disable-static --disable-stripping
     libavutil      54. 20.100 / 54. 20.100
     libavcodec     56. 29.100 / 56. 29.100
     libavformat    56. 26.101 / 56. 26.101
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 13.101 /  5. 13.101
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Hyper fast Audio and Video encoder

    The Valgrind analysis is attached here because I exceeded Stack Overflow’s character limit: http://pastebin.com/MPeRhjhN

  • Issue after video rotation: how to fix it

    2 April 2015, by Vahagn

    I have the following code to rotate a video:

    OpenCVFrameConverter.ToIplImage converter2 = new OpenCVFrameConverter.ToIplImage() ;

    for (int i = firstIndex; i <= lastIndex; i++) {
       long t = timestamps[i % timestamps.length] - startTime;
       if (t >= 0) {
           if (t > recorder.getTimestamp()) {
               recorder.setTimestamp(t);
           }
           Frame g = converter2.convert(rotate(converter2.convertToIplImage(images[i % images.length]), 90));
       recorder.record(g);
       }
    }

    images[i] is a Frame in JavaCV.
    After rotating, the video has green lines.

    UPDATE
    The conversion function:

    /*
    * Copyright (C) 2015 Samuel Audet
    *
    * This file is part of JavaCV.
    *
    * JavaCV is free software: you can redistribute it and/or modify
    * it under the terms of the GNU General Public License as published by
    * the Free Software Foundation, either version 2 of the License, or
    * (at your option) any later version (subject to the "Classpath" exception
    * as provided in the LICENSE.txt file that accompanied this code).
    *
    * JavaCV is distributed in the hope that it will be useful,
    * but WITHOUT ANY WARRANTY; without even the implied warranty of
    * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    * GNU General Public License for more details.
    *
    * You should have received a copy of the GNU General Public License
    * along with JavaCV.  If not, see http://www.gnu.org/licenses/.
    */

    package com.example.vvardanyan.ffmpeg;

    import org.bytedeco.javacpp.BytePointer;
    import org.bytedeco.javacpp.Pointer;

    import java.nio.Buffer;

    import static org.bytedeco.javacpp.opencv_core.CV_16S;
    import static org.bytedeco.javacpp.opencv_core.CV_16U;
    import static org.bytedeco.javacpp.opencv_core.CV_32F;
    import static org.bytedeco.javacpp.opencv_core.CV_32S;
    import static org.bytedeco.javacpp.opencv_core.CV_64F;
    import static org.bytedeco.javacpp.opencv_core.CV_8S;
    import static org.bytedeco.javacpp.opencv_core.CV_8U;
    import static org.bytedeco.javacpp.opencv_core.CV_MAKETYPE;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_16S;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_16U;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_32F;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_32S;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_64F;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_8S;
    import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_8U;
    import static org.bytedeco.javacpp.opencv_core.IplImage;
    import static org.bytedeco.javacpp.opencv_core.Mat;

    /**
    * A utility class to map data between {@link Frame} and {@link IplImage} or {@link Mat}.
    * Since this is an abstract class, one must choose between two concrete classes:
    * {@link ToIplImage} or {@link ToMat}.
    *
    * @author Samuel Audet
    */
    public abstract class OpenCVFrameConverter<F> extends FrameConverter<F> {
       IplImage img;
       Mat mat;

       public static class ToIplImage extends OpenCVFrameConverter<IplImage> {
           @Override public IplImage convert(Frame frame) { return convertToIplImage(frame); }
       }

       public static class ToMat extends OpenCVFrameConverter<Mat> {
           @Override public Mat convert(Frame frame) { return convertToMat(frame); }
       }

       public static int getFrameDepth(int depth) {
           switch (depth) {
               case IPL_DEPTH_8U:  case CV_8U:  return Frame.DEPTH_UBYTE;
               case IPL_DEPTH_8S:  case CV_8S:  return Frame.DEPTH_BYTE;
               case IPL_DEPTH_16U: case CV_16U: return Frame.DEPTH_USHORT;
               case IPL_DEPTH_16S: case CV_16S: return Frame.DEPTH_SHORT;
               case IPL_DEPTH_32F: case CV_32F: return Frame.DEPTH_FLOAT;
               case IPL_DEPTH_32S: case CV_32S: return Frame.DEPTH_INT;
               case IPL_DEPTH_64F: case CV_64F: return Frame.DEPTH_DOUBLE;
               default: return -1;
           }
       }

       public static int getIplImageDepth(Frame frame) {
           switch (frame.imageDepth) {
               case Frame.DEPTH_UBYTE:  return IPL_DEPTH_8U;
               case Frame.DEPTH_BYTE:   return IPL_DEPTH_8S;
               case Frame.DEPTH_USHORT: return IPL_DEPTH_16U;
               case Frame.DEPTH_SHORT:  return IPL_DEPTH_16S;
               case Frame.DEPTH_FLOAT:  return IPL_DEPTH_32F;
               case Frame.DEPTH_INT:    return IPL_DEPTH_32S;
               case Frame.DEPTH_DOUBLE: return IPL_DEPTH_64F;
               default:  return -1;
           }
       }
       static boolean isEqual(Frame frame, IplImage img) {
           return img != null && frame != null && frame.image != null && frame.image.length > 0
                   && frame.imageWidth == img.width() && frame.imageHeight == img.height()
                   && frame.imageChannels == img.nChannels() && getIplImageDepth(frame) == img.depth()
                   && new Pointer(frame.image[0]).address() == img.imageData().address()
                   && frame.imageStride * Math.abs(frame.imageDepth) / 8 == img.widthStep();
       }
       public IplImage convertToIplImage(Frame frame) {
           if (frame == null) {
               return null;
           } else if (frame.opaque instanceof IplImage) {
               return (IplImage)frame.opaque;
           } else if (!isEqual(frame, img)) {
               int depth = getIplImageDepth(frame);
           img = depth < 0 ? null : IplImage.createHeader(frame.imageWidth, frame.imageHeight, depth, frame.imageChannels)
                       .imageData(new BytePointer(new Pointer(frame.image[0].position(0)))).widthStep(frame.imageStride * Math.abs(frame.imageDepth) / 8);
           }
           return img;
       }
       public Frame convert(IplImage img) {
           if (img == null) {
               return null;
           } else if (!isEqual(frame, img)) {
               frame = new Frame();
               frame.imageWidth = img.width();
               frame.imageHeight = img.height();
               frame.imageDepth = getFrameDepth(img.depth());
               frame.imageChannels = img.nChannels();
               frame.imageStride = img.widthStep() * 8 / Math.abs(frame.imageDepth);
               frame.image = new Buffer[] { img.createBuffer() };
               frame.opaque = img;
           }
           return frame;
       }

       public static int getMatDepth(Frame frame) {
           switch (frame.imageDepth) {
               case Frame.DEPTH_UBYTE:  return CV_8U;
               case Frame.DEPTH_BYTE:   return CV_8S;
               case Frame.DEPTH_USHORT: return CV_16U;
               case Frame.DEPTH_SHORT:  return CV_16S;
               case Frame.DEPTH_FLOAT:  return CV_32F;
               case Frame.DEPTH_INT:    return CV_32S;
               case Frame.DEPTH_DOUBLE: return CV_64F;
               default:  return -1;
           }
       }
       static boolean isEqual(Frame frame, Mat mat) {
           return mat != null && frame != null && frame.image != null && frame.image.length > 0
                   && frame.imageWidth == mat.cols() && frame.imageHeight == mat.rows()
                   && frame.imageChannels == mat.channels() && getMatDepth(frame) == mat.depth()
                   && new Pointer(frame.image[0]).address() == mat.data().address()
                   && frame.imageStride * Math.abs(frame.imageDepth) / 8 == (int)mat.step();
       }
       public Mat convertToMat(Frame frame) {
           if (frame == null) {
               return null;
           } else if (frame.opaque instanceof Mat) {
               return (Mat)frame.opaque;
           } else if (!isEqual(frame, mat)) {
               int depth = getMatDepth(frame);
           mat = depth < 0 ? null : new Mat(frame.imageHeight, frame.imageWidth, CV_MAKETYPE(depth, frame.imageChannels),
                       new Pointer(frame.image[0].position(0)), frame.imageStride * Math.abs(frame.imageDepth) / 8);
           }
           return mat;
       }
       public Frame convert(Mat mat) {
           if (mat == null) {
               return null;
           } else if (!isEqual(frame, mat)) {
               frame = new Frame();
               frame.imageWidth = mat.cols();
               frame.imageHeight = mat.rows();
               frame.imageDepth = getFrameDepth(mat.depth());
               frame.imageChannels = mat.channels();
               frame.imageStride = (int)mat.step() * 8 / Math.abs(frame.imageDepth);
               frame.image = new Buffer[] { mat.createBuffer() };
               frame.opaque = mat;
           }
           return frame;
       }
    }