
Media (91)

Other articles (57)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is running version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.

  • Adding notes and captions to images

    7 February 2011

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

On other sites (11090)

  • The C++11 Thread Timer is not working

    26 August 2017, by Gathros

    I'm trying to make a video player using SDL2 and the FFmpeg API. The video is being decoded and I can display an image on screen. I can also play audio, but I'm not doing that here (I know it works; I've tried it).

    My problem is that I can't update the image at the right time. I'm able to get the timestamps and work out the delay, then send it to a thread which should trigger a window update once that time has elapsed. But all that happens is that the images flash onto the screen with no delay. I have even set the delay to 1 second and the images still flash, after 1 second of a blank window.
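
    For reference, here is a minimal, self-contained sketch of the mechanism described above (a timer thread pushing a custom SDL user event that the main loop waits on). It is not taken from the player code below; the 40 ms interval and the names are placeholders:

     // Minimal sketch: a timer thread pushes a custom SDL user event and the main
     // loop blocks on SDL_WaitEvent until an event arrives.
     #include <SDL2/SDL.h>
     #include <atomic>
     #include <chrono>
     #include <cstdio>
     #include <thread>

     std::atomic<bool> quitting{false};
     Uint32 UpdateEvent;                 // event id obtained from SDL_RegisterEvents

     void TimerThreadFunc(){
        while(!quitting){
            std::this_thread::sleep_for(std::chrono::milliseconds(40)); // placeholder delay
            SDL_Event ev;
            SDL_zero(ev);
            ev.type = UpdateEvent;
            SDL_PushEvent(&ev);          // safe to call from another thread
        }
     }

     int main(int, char*[]){
        if(SDL_Init(SDL_INIT_VIDEO) != 0){
            fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
            return 1;
        }
        UpdateEvent = SDL_RegisterEvents(1);
        std::thread timer(TimerThreadFunc);

        SDL_Event event;
        while(!quitting){
            SDL_WaitEvent(&event);       // sleeps until an event is available
            if(event.type == UpdateEvent){
                printf("redraw the window here\n");
            }else if(event.type == SDL_QUIT){
                quitting = true;
            }
        }
        timer.join();
        SDL_Quit();
        return 0;
     }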

    Here is my code:

    extern "C"{
       //FFmpeg libraries
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>
        #include <libswscale/swscale.h>

        //SDL2 libraries
        #include <SDL2/SDL.h>
    }
    // compatibility with newer API
     #if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55,28,1)
    #define av_frame_alloc avcodec_alloc_frame
    #define av_frame_free avcodec_free_frame
    #endif

    //C++ libraries
    #include <cstdio>
    #include <chrono>
    #include <thread>
    #include <atomic>
    #include <mutex>
     #include <condition_variable>
     #include <memory>

    typedef struct PacketQueue {
       AVPacketList                *first_pkt, *last_pkt;
       std::mutex                  mutex;
       std::condition_variable     convar;
    } PacketQueue;

    std::atomic<bool>           quitting, decoded;
     std::atomic<int64_t>        delay;
    Uint32                      Update_Window;

    int packet_queue_put(PacketQueue *q, AVPacket *pkt){
       AVPacketList *pkt1;
        if(av_dup_packet(pkt) < 0){
           return -1;
       }
       pkt1 = (AVPacketList*) av_malloc(sizeof(AVPacketList));
       if(!pkt1){
           return -1;
       }
       pkt1->pkt = *pkt;
       pkt1->next = NULL;

        std::lock_guard<std::mutex> lock(q->mutex);

       if (!q->last_pkt){
           q->first_pkt = pkt1;
       }else{
           q->last_pkt->next = pkt1;
       }
       q->last_pkt = pkt1;
       q->convar.notify_all();
       return 0;
    }

    static int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block){
       AVPacketList *pkt1;
       int ret;

        std::unique_lock<std::mutex> lk(q->mutex);
       while(1){
           if(quitting){
               ret = -1;
               break;
           }

           pkt1 = q->first_pkt;
           if(pkt1){
               q->first_pkt = pkt1->next;
               if(!q->first_pkt){
                   q->last_pkt = NULL;
               }
               *pkt = pkt1->pkt;
               av_free(pkt1);
               ret = 1;
               break;
           }else if(decoded){
               ret = 0;
               quitting = true;
               break;
           }else if(block){
               q->convar.wait_for(lk, std::chrono::microseconds(50));
           }else {
               ret = 0;
               break;
           }
       }
       return ret;
    }

    void UpdateEventQueue(){
       SDL_Event event;
       SDL_zero(event);
       event.type = Update_Window;
        SDL_PushEvent(&event);
    }

    void VideoTimerThreadFunc(){
       UpdateEventQueue();

       while(!quitting){
           if(delay == 0){
               std::this_thread::sleep_for(std::chrono::milliseconds(1));
           }else {
               std::this_thread::sleep_for(std::chrono::microseconds(delay));
               UpdateEventQueue();
           }
       }
    }

    int main(int argc, char *argv[]){
       AVFormatContext*                FormatCtx = nullptr;
       AVCodecContext*                 CodecCtxOrig = nullptr;
       AVCodecContext*                 CodecCtx = nullptr;
       AVCodec*                        Codec = nullptr;
       int                             videoStream;
       AVFrame*                        Frame = nullptr;
       AVPacket                        packet;
       struct SwsContext*              SwsCtx = nullptr;

       PacketQueue                     videoq;
       int                             frameFinished;
       int64_t                         last_pts = 0;
       const AVRational                ms = {1, 1000};

       SDL_Event                       event;
       SDL_Window*                     screen;
       SDL_Renderer*                   renderer;
       SDL_Texture*                    texture;
        std::shared_ptr<Uint8>          yPlane, uPlane, vPlane;
       int                             uvPitch;

       if (argc != 2) {
           fprintf(stderr, "Usage: %s <file>\n", argv[0]);
           return -1;
       }

       // Register all formats and codecs
       av_register_all();

       // Initialise SDL2
       if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
           fprintf(stderr, "Couldn't initialise SDL - %s\n", SDL_GetError());
           return -1;
       }

       // Setting things up
       quitting = false;
       decoded = false;
       delay = 0;
       Update_Window = SDL_RegisterEvents(1);
        memset(&videoq, 0, sizeof(PacketQueue));

       // Open video file
        if(avformat_open_input(&FormatCtx, argv[1], NULL, NULL) != 0){
           fprintf(stderr, "Couldn't open file\n");        
           return -1; // Couldn't open file
       }

       // Retrieve stream information
        if(avformat_find_stream_info(FormatCtx, NULL) < 0){
            fprintf(stderr, "Couldn't find stream information\n");

            // Close the video file
            avformat_close_input(&FormatCtx);

           return -1; // Couldn't find stream information
       }

       // Find the video stream
       videoStream = av_find_best_stream(FormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        if(videoStream < 0){
            fprintf(stderr, "Couldn't find video stream\n");

            // Close the video file
            avformat_close_input(&FormatCtx);

           return -1; // Didn't find a video stream
       }

       // Get a pointer to the codec context for the video stream
       CodecCtxOrig = FormatCtx->streams[videoStream]->codec;

       // Find the decoder for the video stream
       Codec = avcodec_find_decoder(CodecCtxOrig->codec_id);
       if(Codec == NULL){
           fprintf(stderr, "Unsupported codec\n");

           // Close the codec
           avcodec_close(CodecCtxOrig);

           // Close the video file
            avformat_close_input(&FormatCtx);

           return -1; // Codec not found
       }

       // Copy context
       CodecCtx = avcodec_alloc_context3(Codec);
       if(avcodec_copy_context(CodecCtx, CodecCtxOrig) != 0){
           fprintf(stderr, "Couldn't copy codec context");

           // Close the codec
           avcodec_close(CodecCtxOrig);

           // Close the video file
            avformat_close_input(&FormatCtx);

           return -1; // Error copying codec context
       }

       // Open codec
        if(avcodec_open2(CodecCtx, Codec, NULL) < 0){
           fprintf(stderr, "Couldn't open codec\n");

           // Close the codec
           avcodec_close(CodecCtx);
           avcodec_close(CodecCtxOrig);

           // Close the video file
            avformat_close_input(&FormatCtx);
           return -1; // Could not open codec
       }

       // Allocate video frame
       Frame = av_frame_alloc();

       // Make a screen to put our video
       screen = SDL_CreateWindow("Video Player", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, CodecCtx->width, CodecCtx->height, 0);
       if(!screen){
           fprintf(stderr, "SDL: could not create window - exiting\n");
           quitting = true;

           // Clean up SDL2
           SDL_Quit();

           // Free the YUV frame
            av_frame_free(&Frame);

            // Close the codec
            avcodec_close(CodecCtx);
            avcodec_close(CodecCtxOrig);

            // Close the video file
            avformat_close_input(&FormatCtx);

           return -1;
       }

       renderer = SDL_CreateRenderer(screen, -1, 0);
       if(!renderer){
           fprintf(stderr, "SDL: could not create renderer - exiting\n");
           quitting = true;

           // Clean up SDL2
           SDL_DestroyWindow(screen);
           SDL_Quit();

           // Free the YUV frame
            av_frame_free(&Frame);

            // Close the codec
            avcodec_close(CodecCtx);
            avcodec_close(CodecCtxOrig);

            // Close the video file
            avformat_close_input(&FormatCtx);
           return -1;
       }

       // Allocate a place to put our YUV image on that screen
       texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING, CodecCtx->width, CodecCtx->height);
       if(!texture){
           fprintf(stderr, "SDL: could not create texture - exiting\n");
           quitting = true;

           // Clean up SDL2
           SDL_DestroyRenderer(renderer);
           SDL_DestroyWindow(screen);
           SDL_Quit();

           // Free the YUV frame
            av_frame_free(&Frame);

            // Close the codec
            avcodec_close(CodecCtx);
            avcodec_close(CodecCtxOrig);

            // Close the video file
            avformat_close_input(&FormatCtx);
           return -1;
       }

       // Initialise SWS context for software scaling
       SwsCtx = sws_getContext(CodecCtx->width, CodecCtx->height, CodecCtx->pix_fmt,
                   CodecCtx->width, CodecCtx->height, PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
       if(!SwsCtx){
           fprintf(stderr, "Couldn't create sws context\n");
           quitting = true;

           // Clean up SDL2
           SDL_DestroyTexture(texture);
           SDL_DestroyRenderer(renderer);
           SDL_DestroyWindow(screen);
           SDL_Quit();

           // Free the YUV frame
            av_frame_free(&Frame);

            // Close the codec
            avcodec_close(CodecCtx);
            avcodec_close(CodecCtxOrig);

            // Close the video file
            avformat_close_input(&FormatCtx);
           return -1;
       }

       // set up YV12 pixel array (12 bits per pixel)
        yPlane = std::shared_ptr<Uint8>((Uint8 *)::operator new (CodecCtx->width * CodecCtx->height, std::nothrow));
        uPlane = std::shared_ptr<Uint8>((Uint8 *)::operator new (CodecCtx->width * CodecCtx->height / 4, std::nothrow));
        vPlane = std::shared_ptr<Uint8>((Uint8 *)::operator new (CodecCtx->width * CodecCtx->height / 4, std::nothrow));
       uvPitch = CodecCtx->width / 2;

       if (!yPlane || !uPlane || !vPlane) {
           fprintf(stderr, "Could not allocate pixel buffers - exiting\n");
           quitting = true;

           // Clean up SDL2
           SDL_DestroyTexture(texture);
           SDL_DestroyRenderer(renderer);
           SDL_DestroyWindow(screen);
           SDL_Quit();

           // Free the YUV frame
            av_frame_free(&Frame);

            // Close the codec
            avcodec_close(CodecCtx);
            avcodec_close(CodecCtxOrig);

            // Close the video file
            avformat_close_input(&FormatCtx);
           return -1;
       }

       std::thread VideoTimer (VideoTimerThreadFunc);

       while (!quitting) {
           // Check for more packets
        if(av_read_frame(FormatCtx, &packet) >= 0){
            // Check what stream it belongs to
            if (packet.stream_index == videoStream) {
                packet_queue_put(&videoq, &packet);
            }else{
                // Free the packet that was allocated by av_read_frame
                av_free_packet(&packet);
               }
           }else {
               decoded = true;
           }

        SDL_PollEvent(&event);

           if(event.type == Update_Window){
               // Getting packet
            if(packet_queue_get(&videoq, &packet, 0)){
                // Decode video frame
                avcodec_decode_video2(CodecCtx, Frame, &frameFinished, &packet);

                   // Did we get a video frame?
                   if (frameFinished) {
                       AVPicture pict;
                       pict.data[0] = yPlane.get();
                       pict.data[1] = uPlane.get();
                       pict.data[2] = vPlane.get();
                       pict.linesize[0] = CodecCtx->width;
                       pict.linesize[1] = uvPitch;
                       pict.linesize[2] = uvPitch;

                       // Convert the image into YUV format that SDL uses
                       sws_scale(SwsCtx, (uint8_t const * const *) Frame->data, Frame->linesize, 0, CodecCtx->height, pict.data, pict.linesize);

                       SDL_UpdateYUVTexture(texture, NULL, yPlane.get(), CodecCtx->width, uPlane.get(), uvPitch, vPlane.get(), uvPitch);

                       SDL_RenderClear(renderer);
                       SDL_RenderCopy(renderer, texture, NULL, NULL);
                       SDL_RenderPresent(renderer);

                       // Calculating delay
                       delay = av_rescale_q(packet.dts, CodecCtx->time_base, ms) - last_pts;
                       last_pts = av_rescale_q(packet.dts, CodecCtx->time_base, ms);
                   }else{
                       //UpdateEventQueue();
                       delay = 1;
                   }

                   // Free the packet that was allocated by av_read_frame
                    av_free_packet(&packet);

               }else{
                   //UpdateEventQueue();
               }
           }

           switch (event.type) {
               case SDL_QUIT:
                   quitting = true;
                   break;

               default:
                   break;
           }
       }

       VideoTimer.join();

       //SDL2 clean up
       SDL_DestroyTexture(texture);
       SDL_DestroyRenderer(renderer);
       SDL_DestroyWindow(screen);
       SDL_Quit();

       // Free the YUV frame
        av_frame_free(&Frame);

       // Free Sws
       sws_freeContext(SwsCtx);

       // Close the codec
       avcodec_close(CodecCtx);
       avcodec_close(CodecCtxOrig);

       // Close the video file
        avformat_close_input(&FormatCtx);

       return 0;
    }
  • ffmpeg - splitting, overlaying and selecting frame range in multiple input/outputs with complex video filter producing empty output

    11 October 2017, by Josep Carner

    I am trying to use ffmpeg to apply multiple filters (single input, multiple outputs). Concretely, I split the video stream in two, apply an overlay patch to one of those streams, split again and select a time range. The command I am using is:

      ffmpeg \
       -i ./dataset_test/videos/jz_10_huron_barstow.mkv \
       -an \
       -sn \
       -loglevel error \
       -filter_complex \
           "[0:v]split=2[L_in][F_in];
            [L_in]
               crop=226:307:21:74,scale=200x200,setdar=200:200,split=2\
            [L_in_0][L_in_1];\
            color=#808080:226x307,setpts=PTS-STARTPTS[L_PATCH];\
            [F_in][L_PATCH]
               overlay=21:74:shortest=1,scale=200x200,setdar=200:200,split=2\
            [F_in_0][F_in_1];\
            [F_in_0]select='between(t\,32\,212)',select='not(mod(n\,20))',setpts=(PTS-STARTPTS)/20[F_0];\
            [F_in_1]select='between(t\,213\,393)',select='not(mod(n\,20))',setpts=(PTS-STARTPTS)/20[F_1];\
            [L_in_0]select='between(t\,32\,212)',select='not(mod(n\,20))',setpts=(PTS-STARTPTS)/20[L_0];\
            [L_in_1]select='between(t\,213\,393)',select='not(mod(n\,20))',setpts=(PTS-STARTPTS)/20[L_1]\
            "\
       -map [F_0] -an -sn ./out/jz_10_huron_barstow/F/0_F.mp4\
       -map [F_1] -an -sn ./out/jz_10_huron_barstow/F/1_F.mp4\
       -map [L_0] -an -sn ./out/jz_10_huron_barstow/L/0_L.mp4\
       -map [L_1] -an -sn ./out/jz_10_huron_barstow/L/1_L.mp4\

    For some reason, 0_F.mp4 is empty and 1_F.mp4 is cropped. My intuition is that it has something to do with combining splitting, overlaying and selecting. I think that at some point a filter before select is changing the PTS of the frames in such a way that select no longer discards them properly, but I honestly don't know how to solve it. Some insights:

    1. L_0 and L_1 are fine (no overlay is used in those two)

    2. If I suppress the L outputs, everything works fine (no "split" is needed at the beginning)

    Could someone shed some light on this? I don't really know what else to try.
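
    For reference only, an untested variant of the four range-selection chains, using trim followed by setpts=PTS-STARTPTS (so each chain restarts its timestamps at zero before the frame-decimating select) in place of select='between(t,...)'; the labels are the same as in the command above:

      [F_in_0]trim=start=32:end=212,setpts=PTS-STARTPTS,select='not(mod(n\,20))',setpts=PTS/20[F_0];
      [F_in_1]trim=start=213:end=393,setpts=PTS-STARTPTS,select='not(mod(n\,20))',setpts=PTS/20[F_1];
      [L_in_0]trim=start=32:end=212,setpts=PTS-STARTPTS,select='not(mod(n\,20))',setpts=PTS/20[L_0];
      [L_in_1]trim=start=213:end=393,setpts=PTS-STARTPTS,select='not(mod(n\,20))',setpts=PTS/20[L_1]
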
    The logs with loglevel 40 are (some repeated lines omitted):

    ffmpeg version 2.8.1 Copyright (c) 2000-2015 the FFmpeg developers
     built with Apple LLVM version 9.0.0 (clang-900.0.37)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/2.8.1_1 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-opencl --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-vda
     libavutil      54. 31.100 / 54. 31.100
     libavcodec     56. 60.100 / 56. 60.100
     libavformat    56. 40.101 / 56. 40.101
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 40.101 /  5. 40.101
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  2.101 /  1.  2.101
     libpostproc    53.  3.100 / 53.  3.100
    Input #0, matroska,webm, from './dataset_test/videos/jz_10_huron_barstow.mkv':
     Metadata:
       ENCODER         : Lavf57.66.104
     Duration: 00:24:04.14, start: 0.000000, bitrate: 2117 kb/s
       Stream #0:0: Video: h264 (Main), 1 reference frame, yuv420p, 1440x900 (1440x912) [SAR 1:1 DAR 8:5], 30 fps, 30 tbr, 1k tbn, 60 tbc (default)
       Metadata:
         ENCODER         : Lavc57.83.100 h264_nvenc
         DURATION        : 00:24:04.102000000
       Stream #0:1: Audio: ac3, 48000 Hz, stereo, fltp, 192 kb/s (default)
       Metadata:
         ENCODER         : Lavc57.83.100 ac3
         DURATION        : 00:24:04.135000000
    [Parsed_scale_2 @ 0x7fdfd2403d20] w:200 h:200 flags:'bilinear' interl:0
    [Parsed_setdar_3 @ 0x7fdfd240ad80] num:den syntax is deprecated, please use num/den or named options instead
    [Parsed_color_5 @ 0x7fdfd24096a0] size:226x307 rate:25/1 duration:-1.000000 sar:1/1
    [Parsed_scale_8 @ 0x7fdfd24092c0] w:200 h:200 flags:'bilinear' interl:0
    [Parsed_setdar_9 @ 0x7fdfd240b360] num:den syntax is deprecated, please use num/den or named options instead
    [Parsed_scale_2 @ 0x7fdfd2509b00] w:200 h:200 flags:'bilinear' interl:0
    [Parsed_setdar_3 @ 0x7fdfd25092c0] num:den syntax is deprecated, please use num/den or named options instead
    [Parsed_color_5 @ 0x7fdfd250a520] size:226x307 rate:25/1 duration:-1.000000 sar:1/1
    [Parsed_scale_8 @ 0x7fdfd250b620] w:200 h:200 flags:'bilinear' interl:0
    [Parsed_setdar_9 @ 0x7fdfd250a960] num:den syntax is deprecated, please use num/den or named options instead
    [graph 0 input from stream 0:0 @ 0x7fdfd2511360] w:1440 h:900 pixfmt:yuv420p tb:1/1000 fr:30/1 sar:1/1 sws_param:flags=2
    [Parsed_crop_1 @ 0x7fdfd2509c80] w:1440 h:900 sar:1/1 -> w:226 h:306 sar:1/1
    [Parsed_scale_2 @ 0x7fdfd2509b00] w:226 h:306 fmt:yuv420p sar:1/1 -> w:200 h:200 fmt:yuv420p sar:113/153 flags:0x2
    [Parsed_setdar_3 @ 0x7fdfd25092c0] w:200 h:200 dar:113/153 sar:113/153 -> dar:1/1 sar:1/1
    [Parsed_select_21 @ 0x7fdfd250a760] TB:0.001000 FRAME_RATE:30.000000 SAMPLE_RATE:nan
    [Parsed_select_18 @ 0x7fdfd250a080] TB:0.001000 FRAME_RATE:30.000000 SAMPLE_RATE:nan
    [Parsed_color_5 @ 0x7fdfd250a520] TB:0.040000 FRAME_RATE:25.000000 SAMPLE_RATE:nan
    [Parsed_overlay_7 @ 0x7fdfd250b020] main w:1440 h:900 fmt:yuv420p overlay w:226 h:306 fmt:yuva420p eof_action:endall
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Selected 1/1000 time base
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Sync level 2
    [Parsed_scale_8 @ 0x7fdfd250b620] w:1440 h:900 fmt:yuv420p sar:1/1 -> w:200 h:200 fmt:yuv420p sar:8/5 flags:0x2
    [Parsed_setdar_9 @ 0x7fdfd250a960] w:200 h:200 dar:8/5 sar:8/5 -> dar:1/1 sar:1/1
    [Parsed_select_15 @ 0x7fdfd250dd00] TB:0.001000 FRAME_RATE:30.000000 SAMPLE_RATE:nan
    [Parsed_select_12 @ 0x7fdfd250ca60] TB:0.001000 FRAME_RATE:30.000000 SAMPLE_RATE:nan
    [libx264 @ 0x7fdfd4005e00] using SAR=1/1
    [libx264 @ 0x7fdfd4005e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
    [libx264 @ 0x7fdfd4005e00] profile High, level 1.2
    [libx264 @ 0x7fdfd4005e00] 264 - core 148 r2601 a0cd7d3 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    [libx264 @ 0x7fdfd400f600] using SAR=1/1
    [libx264 @ 0x7fdfd400f600] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
    [libx264 @ 0x7fdfd400f600] profile High, level 1.2
    [libx264 @ 0x7fdfd400f600] 264 - core 148 r2601 a0cd7d3 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    [libx264 @ 0x7fdfd4018e00] using SAR=1/1
    [libx264 @ 0x7fdfd4018e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
    [libx264 @ 0x7fdfd4018e00] profile High, level 1.2
    [libx264 @ 0x7fdfd4018e00] 264 - core 148 r2601 a0cd7d3 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    [libx264 @ 0x7fdfd4022600] using SAR=1/1
    [libx264 @ 0x7fdfd4022600] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
    [libx264 @ 0x7fdfd4022600] profile High, level 1.2
    [libx264 @ 0x7fdfd4022600] 264 - core 148 r2601 a0cd7d3 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to './out/jz_10_huron_barstow/F/0_F.mp4':
     Metadata:
       encoder         : Lavf56.40.101
       Stream #0:0: Video: h264 (libx264), -1 reference frame ([33][0][0][0] / 0x0021), yuv420p, 200x200 [SAR 1:1 DAR 1:1], q=-1--1, 30 fps, 15360 tbn, 30 tbc (default)
       Metadata:
         encoder         : Lavc56.60.100 libx264
    Output #1, mp4, to './out/jz_10_huron_barstow/F/1_F.mp4':
     Metadata:
       encoder         : Lavf56.40.101
       Stream #1:0: Video: h264 (libx264), -1 reference frame ([33][0][0][0] / 0x0021), yuv420p, 200x200 [SAR 1:1 DAR 1:1], q=-1--1, 30 fps, 15360 tbn, 30 tbc (default)
       Metadata:
         encoder         : Lavc56.60.100 libx264
    Output #2, mp4, to './out/jz_10_huron_barstow/L/0_L.mp4':
     Metadata:
       encoder         : Lavf56.40.101
       Stream #2:0: Video: h264 (libx264), -1 reference frame ([33][0][0][0] / 0x0021), yuv420p, 200x200 [SAR 1:1 DAR 1:1], q=-1--1, 30 fps, 15360 tbn, 30 tbc (default)
       Metadata:
         encoder         : Lavc56.60.100 libx264
    Output #3, mp4, to './out/jz_10_huron_barstow/L/1_L.mp4':
     Metadata:
       encoder         : Lavf56.40.101
       Stream #3:0: Video: h264 (libx264), -1 reference frame ([33][0][0][0] / 0x0021), yuv420p, 200x200 [SAR 1:1 DAR 1:1], q=-1--1, 30 fps, 15360 tbn, 30 tbc (default)
       Metadata:
         encoder         : Lavc56.60.100 libx264
    Stream mapping:
     Stream #0:0 (h264) -> split
     setpts -> Stream #0:0 (libx264)
     setpts -> Stream #1:0 (libx264)
     setpts -> Stream #2:0 (libx264)
     setpts -> Stream #3:0 (libx264)
    Press [q] to stop, [?] for help
    [swscaler @ 0x7fdfd3800000] Warning: data is not aligned! This can lead to a speedloss
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 210 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 262 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 239 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 195 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 211 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 255 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 148 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 174 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 182 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 193 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:00.26 bitrate=   1.4kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 156 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:00.53 bitrate=   0.7kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 154 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:00.76 bitrate=   0.5kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 155 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:01.03 bitrate=   0.4kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 231 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:01.43 bitrate=   0.3kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 260 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:01.86 bitrate=   0.2kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 256 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:02.30 bitrate=   0.2kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 209 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:02.63 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 253 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:03.06 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 160 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:03.33 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 189 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:03.63 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 201 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:03.96 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 251 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:04.40 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 246 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:04.80 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 257 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:05.23 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 258 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:05.66 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 251 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:06.10 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 274 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:06.56 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 282 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:07.03 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Buffer queue overflow, dropping.
       Last message repeated 185 times
    frame=    0 fps=0.0 q=0.0 q=0.0 q=29.0 q=0.0 size=       0kB time=00:00:07.26 bitrate=   0.1kbits/s    
     frame=    0 fps=0.0 q=0.0 q=29.0 q=29.0 q=29.0 size=       0kB time=00:00:07.26 bitrate=   0.1kbits/s    
    […]  
    frame=    0 fps=0.0 q=0.0 q=29.0 q=29.0 q=29.0 size=       0kB time=00:00:07.26 bitrate=   0.1kbits/s    
    [Parsed_overlay_7 @ 0x7fdfd250b020] [framesync @ 0x7fdfd250b708] Sync level 1
    No more output streams to write to, finishing.
    frame=    0 fps=0.0 q=0.0 Lq=-1.0 q=-1.0 q=-1.0 size=       0kB time=00:00:08.93 bitrate=   0.2kbits/s    
    video:620kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    Input file #0 (./dataset_test/videos/jz_10_huron_barstow.mkv):
     Input stream #0:0 (video): 43323 packets read (347030822 bytes); 43323 frames decoded;
     Input stream #0:1 (audio): 4 packets read (3072 bytes);
     Total: 43327 packets (347033894 bytes) demuxed
    Output file #0 (./out/jz_10_huron_barstow/F/0_F.mp4):
     Output stream #0:0 (video): 0 frames encoded; 0 packets muxed (0 bytes);
     Total: 0 packets (0 bytes) muxed
    Output file #1 (./out/jz_10_huron_barstow/F/1_F.mp4):
     Output stream #1:0 (video): 270 frames encoded; 270 packets muxed (309743 bytes);
     Total: 270 packets (309743 bytes) muxed
    Output file #2 (./out/jz_10_huron_barstow/L/0_L.mp4):
     Output stream #2:0 (video): 270 frames encoded; 270 packets muxed (169687 bytes);
     Total: 270 packets (169687 bytes) muxed
    Output file #3 (./out/jz_10_huron_barstow/L/1_L.mp4):
     Output stream #3:0 (video): 270 frames encoded; 270 packets muxed (155502 bytes);
     Total: 270 packets (155502 bytes) muxed
    [libx264 @ 0x7fdfd400f600] frame I:2     Avg QP:25.08  size:  3204
    [libx264 @ 0x7fdfd400f600] frame P:140   Avg QP:26.63  size:  1426
    [libx264 @ 0x7fdfd400f600] frame B:128   Avg QP:28.64  size:   805
    [libx264 @ 0x7fdfd400f600] consecutive B-frames:  5.2% 94.8%  0.0%  0.0%
    [libx264 @ 0x7fdfd400f600] mb I  I16..4:  8.6% 62.7% 28.7%
    [libx264 @ 0x7fdfd400f600] mb P  I16..4:  2.5% 12.5%  3.4%  P16..4: 39.2% 21.4%  9.3%  0.0%  0.0%    skip:11.6%
    [libx264 @ 0x7fdfd400f600] mb B  I16..4:  0.5%  4.0%  1.0%  B16..8: 36.5% 16.7%  4.0%  direct: 5.5%  skip:31.8%  L0:43.4% L1:46.9% BI: 9.7%
    [libx264 @ 0x7fdfd400f600] 8x8 transform intra:68.6% inter:67.7%
    [libx264 @ 0x7fdfd400f600] coded y,uvDC,uvAC intra: 63.1% 48.6% 17.9% inter: 34.5% 21.1% 3.6%
    [libx264 @ 0x7fdfd400f600] i16 v,h,dc,p: 24% 23% 12% 41%
    [libx264 @ 0x7fdfd400f600] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 17% 19% 26%  7%  5%  5%  7%  6%  8%
    [libx264 @ 0x7fdfd400f600] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 29% 18%  6%  6%  5%  5%  4%  5%
    [libx264 @ 0x7fdfd400f600] i8c dc,h,v,p: 63% 21% 14%  3%
    [libx264 @ 0x7fdfd400f600] Weighted P-Frames: Y:14.3% UV:1.4%
    [libx264 @ 0x7fdfd400f600] ref P L0: 49.2% 20.4% 17.3% 11.2%  1.9%
    [libx264 @ 0x7fdfd400f600] ref B L0: 75.4% 24.6%
    [libx264 @ 0x7fdfd400f600] kb/s:274.71
    [libx264 @ 0x7fdfd4018e00] frame I:2     Avg QP:23.87  size:  2294
    [libx264 @ 0x7fdfd4018e00] frame P:133   Avg QP:25.45  size:   864
    [libx264 @ 0x7fdfd4018e00] frame B:135   Avg QP:28.99  size:   367
    [libx264 @ 0x7fdfd4018e00] consecutive B-frames: 17.4% 38.5% 27.8% 16.3%
    [libx264 @ 0x7fdfd4018e00] mb I  I16..4: 14.5% 54.1% 31.4%
    [libx264 @ 0x7fdfd4018e00] mb P  I16..4:  6.4%  9.6%  2.1%  P16..4: 33.3% 16.7%  5.9%  0.0%  0.0%    skip:26.0%
    [libx264 @ 0x7fdfd4018e00] mb B  I16..4:  1.6%  2.0%  0.3%  B16..8: 29.6%  9.3%  1.4%  direct: 2.2%  skip:53.6%  L0:41.9% L1:51.4% BI: 6.7%
    [libx264 @ 0x7fdfd4018e00] 8x8 transform intra:52.8% inter:73.8%
    [libx264 @ 0x7fdfd4018e00] coded y,uvDC,uvAC intra: 40.7% 41.7% 8.9% inter: 20.7% 16.6% 0.2%
    [libx264 @ 0x7fdfd4018e00] i16 v,h,dc,p: 22% 50%  8% 21%
    [libx264 @ 0x7fdfd4018e00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 18% 30% 23%  6%  2%  2%  5%  5%  9%
    [libx264 @ 0x7fdfd4018e00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 29% 29% 14%  5%  3%  4%  5%  6%  5%
    [libx264 @ 0x7fdfd4018e00] i8c dc,h,v,p: 62% 23% 12%  3%
    [libx264 @ 0x7fdfd4018e00] Weighted P-Frames: Y:1.5% UV:0.8%
    [libx264 @ 0x7fdfd4018e00] ref P L0: 56.1% 15.8% 17.7% 10.2%  0.1%
    [libx264 @ 0x7fdfd4018e00] ref B L0: 75.4% 20.8%  3.9%
    [libx264 @ 0x7fdfd4018e00] ref B L1: 92.6%  7.4%
    [libx264 @ 0x7fdfd4018e00] kb/s:150.22
    [libx264 @ 0x7fdfd4022600] frame I:2     Avg QP:23.86  size:  2208
    [libx264 @ 0x7fdfd4022600] frame P:139   Avg QP:24.90  size:   784
    [libx264 @ 0x7fdfd4022600] frame B:129   Avg QP:28.25  size:   322
    [libx264 @ 0x7fdfd4022600] consecutive B-frames: 17.8% 50.4% 15.6% 16.3%
    [libx264 @ 0x7fdfd4022600] mb I  I16..4: 15.7% 53.3% 31.1%
    [libx264 @ 0x7fdfd4022600] mb P  I16..4:  8.3%  9.9%  1.5%  P16..4: 33.8% 14.8%  5.0%  0.0%  0.0%    skip:26.7%
    [libx264 @ 0x7fdfd4022600] mb B  I16..4:  1.6%  1.8%  0.1%  B16..8: 28.8%  8.2%  1.1%  direct: 2.1%  skip:56.4%  L0:45.5% L1:47.6% BI: 6.9%
    [libx264 @ 0x7fdfd4022600] 8x8 transform intra:50.6% inter:75.8%
    [libx264 @ 0x7fdfd4022600] coded y,uvDC,uvAC intra: 34.0% 44.3% 10.0% inter: 18.8% 17.9% 0.4%
    [libx264 @ 0x7fdfd4022600] i16 v,h,dc,p: 18% 54%  8% 20%
    [libx264 @ 0x7fdfd4022600] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 28% 26%  6%  2%  2%  4%  5%  8%
    [libx264 @ 0x7fdfd4022600] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 31% 14%  4%  2%  2%  3%  7%  5%
    [libx264 @ 0x7fdfd4022600] i8c dc,h,v,p: 60% 25% 13%  2%
    [libx264 @ 0x7fdfd4022600] Weighted P-Frames: Y:2.2% UV:0.7%
    [libx264 @ 0x7fdfd4022600] ref P L0: 57.9% 16.8% 16.1%  8.9%  0.2%
    [libx264 @ 0x7fdfd4022600] ref B L0: 76.5% 21.0%  2.5%
    [libx264 @ 0x7fdfd4022600] ref B L1: 95.0%  5.0%
    [libx264 @ 0x7fdfd4022600] kb/s:137.61
  • Trouble generating an RTSP stream as output with ffmpeg from static images as input

    10 August 2013, by Ilya Yevlampiev

    I'm trying to start the rtsp stream via feeding ffmpeg with static images and feeding ffserver with ffmpeg output.

    The first problem appears from the ffserver.config :

    Port 12345
    RTSPPort 8544
    BindAddress 0.0.0.0
    MaxHTTPConnections 2000
    MaxClients 1000
    MaxBandwidth 1000
    CustomLog /var/log/ffserver-access.log
 <Feed videofeed.ffm>
    File /tmp/videofeed.ffm
    FileMaxSize 3M
    #Launch ffmpeg -s 640x480 -f video4linux2 -i /dev/video0
    #Launch ffmpeg http://localhost:8090/videofeed.ffm
    Launch ffmpeg -loop 1 -f image2 -r 20 -b 9600 -i Janalif.jpg -t 30 http://127.0.0.1:8090/videofeed.ffm -report
    ACL allow 127.0.0.1
 </Feed>
 <Stream test1-rtsp.mpg>
    Format rtsp
    #rtsp://localhost:5454/test1-rtsp.mpg
    Feed videofeed.ffm
    #webcam.ffm
    Format flv
    VideoCodec flv
    VideoFrameRate 30
    VideoBufferSize 80000
    VideoBitRate 200
    VideoQMin 1
    VideoQMax 5
    VideoSize 640x480
    PreRoll 1
    NoAudio
 </Stream>
     <stream>
    Format status
     </stream>

    Please ignore the codecs etc. in the stream part. The problem concerns RTSPPort: after starting the server, nmap shows no binding on 8544; only port 12345 is used.

    8090/tcp  open  unknown
    12345/tcp open  netbus

    I can download the MPEG stream over HTTP from http://localhost:12345/test1-rtsp.mpg. How can I get port 8544 working?

    And another question about the Launch part of the stream: am I right that ffserver executes the content of the Launch line? If so, how can I configure ffserver to wait for the stream on some particular port, but start streaming only at the moment I choose?
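
    For what it's worth, if the feed only needs data at the moment streaming should begin, one option (a sketch, not tested here) is to drop the Launch line from the <Feed> block and run the same ffmpeg command by hand, or from an external program, whenever the stream should start:

     ffmpeg -loop 1 -f image2 -r 20 -b 9600 -i Janalif.jpg -t 30 http://127.0.0.1:8090/videofeed.ffm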

    P.S. The solution looks like Säkkijärven polkka; however, the idea behind this construct is to provide a controlled RTSP stream that emulates a camera output. In the future I plan to replace the ffmpeg command line with some Java bindings for it, producing program-controlled images as the camera input to test computer vision; that's why I need a way to launch ffmpeg independently of ffserver.