Other articles (62)

  • Participating in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently only available in French and (...)

  • (De)activating features (plugins)

    18 February 2011

    To manage adding and removing extra features (plugins), MediaSPIP relies on SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To access it, simply go to the configuration area and open the "Gestion des plugins" (plugin management) page.
    By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the shared-hosting farm on a regular basis. Coupled with a system Cron on the farm's central site, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)

On other sites (9340)

  • ffmpeg Bmp to yuv: Crash at sws_scale

    28 December 2015, by the-owner

    The context:
    I have a continuous succession of bitmaps, and I want to encode them into a lightweight video format.
    I use ffmpeg version 2.8.3 (the build here), under Qt 5, the Qt IDE, and msvc2013 for win32.

    The problem:
    My code crashes at sws_scale() (and sometimes at avcodec_encode_video2()). When I explore the stack, the crash occurs in sws_getCachedContext(). (I can only see the stack with these ffmpeg builds.)
    I only use these ffmpeg libraries (from the Qt .pro file):

    LIBS += -lavcodec -lavformat -lswscale -lavutil

    It is swscale that is failing. Here is the code:

    void newVideo ()
    {
       ULONG_PTR gdiplusToken;
       GdiplusStartupInput gdiplusStartupInput;
       GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);

       initBitmap (); //init bmp
       int screenWidth =  bmp.bmiHeader.biWidth;
       int screenHeight = bmp.bmiHeader.biHeight;

       AVCodec * codec;
       AVCodecContext * c = NULL;
       uint8_t * outbuf;
       int i, out_size, outbuf_size;


       avcodec_register_all();

       qDebug () << "Video encoding\n";

       // Find the mpeg1 video encoder
       codec = avcodec_find_encoder(AV_CODEC_ID_H264);
       if (!codec)
       {
           qDebug () << "Codec not found\n";
           avcodec_close(c);
           av_free(c);
           return;
       }
       else
           qDebug () << "H264 codec found\n";

       c = avcodec_alloc_context3(codec);

       c->bit_rate = 1000000;
       c->width = 800; // resolution must be a multiple of two (1280x720),(1900x1080),(720x480)
       c->height = 600;
       c->time_base.num = 1; // framerate numerator
       c->time_base.den = 25; // framerate denominator
       c->gop_size = 30; // emit one intra frame every 30 frames
       c->max_b_frames = 1; // maximum number of b-frames between non b-frames
       c->pix_fmt = AV_PIX_FMT_YUV420P; // conversion RGB to YUV?
       c->codec_id = AV_CODEC_ID_H264;

       struct SwsContext* fooContext = sws_getContext(screenWidth, screenHeight,
                                                      AV_PIX_FMT_RGB32,
                                                      c->width, c->height,
                                                      AV_PIX_FMT_YUV420P,
                                                      SWS_FAST_BILINEAR,
                                                      NULL, NULL, NULL);

       // Open the encoder
       if (avcodec_open2(c, codec, NULL) < 0)
       {
           qDebug () << "Could not open codec\n";
           avcodec_close(c);
           av_free(c);
           return;
       }
       else qDebug () << "H264 codec opened\n";

       outbuf_size = 100000 + c->width*c->height*(32>>3);//*(32>>3); // alloc image and output buffer
       outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
       qDebug() << "Setting buffer size to: " << outbuf_size << "\n";

       FILE* f = fopen("TEST.mpg","wb");
       if(!f) qDebug() << "x - Cannot open video file for writing\n";
       else qDebug() << "Opened video file for writing\n";

       // encode 5 seconds of video
       for (i = 0; i < STREAM_FRAME_RATE*STREAM_DURATION; i++) //the stop condition i < 5.0*5
       {
           qDebug () << "i = " << i;
           fflush(stdout);

           HBITMAP hBmp;
           if (GetScreen(hBmp) == -1) return;
           BYTE * pPixels;// = new BYTE [bmp.bmiHeader.biSizeImage];
           pPixels = getPixels (hBmp);
           DeleteObject (hBmp);

           int nbytes = avpicture_get_size(AV_PIX_FMT_YUV420P, c->width, c->height);
           uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes*sizeof(uint8_t));
           if(!outbuffer) // check if(outbuf) instead
           {
               qDebug () << "Bytes cannot be allocated";
               return;
           }

           AVFrame* inpic = avcodec_alloc_frame(); //av_frame_alloc () ?
           AVFrame* outpic = avcodec_alloc_frame();

           outpic->pts = (int64_t)((float)i * (1000.0/((float)(c->time_base.den))) * 90);
           if (avpicture_fill((AVPicture*) inpic, (uint8_t*) pPixels, AV_PIX_FMT_RGB32,
                          screenWidth, screenHeight) < 0)
               qDebug () <<  "avpicture_fill Fill picture with image failed"; //Fill picture with image

           if(avpicture_fill((AVPicture*) outpic, outbuffer, AV_PIX_FMT_YUV420P,
                          c->width, c->height) < 0)
               qDebug () <<  "avpicture_fill failed";

           if (av_image_alloc(outpic->data, outpic->linesize, c->width, c->height,
                          c->pix_fmt, 1) < 0)
               qDebug () <<  "av_image_alloc failed";

           inpic->data[0] += inpic->linesize[0]*(screenHeight - 1); // flip frame: point at the last row
           inpic->linesize[0] = -inpic->linesize[0]; // negative stride reads the bottom-up DIB top-down

    ////////////////////////////HERE THE BUG////////////////////////////////
           sws_scale(fooContext,
                     inpic->data, inpic->linesize,
                     0, c->height,
                     outpic->data, outpic->linesize); //HERE THE BUG

           av_free_packet((AVPacket *)outbuf);
           // encode the image
           out_size = avcodec_encode_video2 (c, (AVPacket *) outbuf,
                                             (AVFrame *) outbuf_size, (int *) outpic);
     ///////////////////////THE CODE DOES NOT GET PAST THIS POINT/////////////////////////////////

           qDebug () << "Encoding frame" << i <<" (size=" << out_size <<"\n";
           fwrite(outbuf, 1, out_size, f);
           delete [] pPixels;
           av_free(outbuffer);
           av_free(inpic);
           av_freep(outpic);
       }

       // get the delayed frames
       for(; out_size; i++)
       {
           fflush(stdout);
           out_size = avcodec_encode_video2 (c, (AVPacket *) outbuf,
                                             (AVFrame *) outbuf_size, NULL);
           qDebug () << "Writing frame" << i <<" (size=" << out_size <<"\n";
           fwrite(outbuf, 1, out_size, f);
       }

       // add sequence end code to have a real mpeg file
       outbuf[0] = 0x00;
       outbuf[1] = 0x00;
       outbuf[2] = 0x01;
       outbuf[3] = 0xb7;
       fwrite(outbuf, 1, 4, f);
       fclose(f);

       avcodec_close(c);
       free(outbuf);
       av_free(c);
       qDebug () << "Closed codec and Freed\n";
    }

    And the output:

    Video encoding

    H264 codec found

    H264 codec opened

    Setting buffer size to:  2020000

    Opened video file for writing

    i =  0
    **CRASH**

    I thought that my bitmap might be bad, so I crafted a bitmap just for testing; the code was:

       uint8_t* pPixels = new uint8_t[Width * 3 * Height];
       int x = 50;
       for(unsigned int i = 0; i < Width * 3 * Height; i = i + 3) // loop for generating color changing images
       {
           pPixels [i] = x % 255; //R
           pPixels [i + 1] = (x) % 255; //G
           pPixels [i + 2] = (255 - x) % 255; //B
       }
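
    As an aside: this test buffer is packed 3 bytes per pixel, while avpicture_fill() is later told the source is AV_PIX_FMT_RGB32, i.e. 4 bytes per pixel. A test pattern matching the declared format would look like this (a sketch; a B, G, R, X byte order is assumed):

       uint8_t* pPixels = new uint8_t[Width * 4 * Height]; // RGB32: 4 bytes per pixel
       int x = 50;
       for (unsigned int i = 0; i < Width * 4 * Height; i += 4)
       {
           pPixels[i]     = (255 - x) % 255; // B
           pPixels[i + 1] = x % 255;         // G
           pPixels[i + 2] = x % 255;         // R
           pPixels[i + 3] = 0xFF;            // X (padding/alpha)
       }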

    However, the crash continues, which may suggest that the bitmap (pPixels) is not the problem.

    Does anyone know why I get this bug? Maybe I am not setting a parameter correctly? Or am I calling a deprecated ffmpeg function? etc.
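
    (One concrete suspect: avcodec_encode_video2() in the ffmpeg 2.x API takes an AVPacket* and an int* "got packet" flag; the casts of outbuf, outbuf_size, and outpic above do not match that signature. A minimal sketch of the documented calling convention:)

       AVPacket pkt;
       av_init_packet(&pkt);
       pkt.data = NULL; // let the encoder allocate the payload
       pkt.size = 0;

       int got_packet = 0;
       int ret = avcodec_encode_video2(c, &pkt, outpic, &got_packet);
       if (ret >= 0 && got_packet)
       {
           fwrite(pkt.data, 1, pkt.size, f);
           av_free_packet(&pkt); // av_packet_unref() in newer releases
       }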


    EDIT 1 27/12/15

    Thanks to Ronald S. Bultje, the function sws_scale() no longer crashes with the code below; however, it now returns a bad dst image pointers error. My code:

    //DESTINATION FRAME
    if (avpicture_alloc ((AVPicture*) dst_frame, AV_PIX_FMT_YUV420P, c->width, c->height) < 0)
    {
        qDebug () <<  "# avpicture_alloc failed";
        return;
    }
    if (avpicture_fill((AVPicture*) dst_frame, NULL, AV_PIX_FMT_YUV420P,
                       c->width, c->height) < 0)
        qDebug () <<  "avpicture_fill failed";
    avcodec_align_dimensions2 (c, &c->width, &c->height, dst_frame->linesize);

    //SOURCE FRAME
    if (avpicture_fill((AVPicture*) src_frame, (uint8_t *) pPixels, AV_PIX_FMT_RGB32,
                       tmp_screenWidth, tmp_screenHeight) < 0)
        qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image
    avcodec_align_dimensions2 (c, &tmp_screenWidth, &tmp_screenHeight, src_frame->linesize);

    struct SwsContext* conversionContext = sws_getContext(tmp_screenWidth, tmp_screenHeight,
                                                          AV_PIX_FMT_RGB32,
                                                          c->width, c->height,
                                                          AV_PIX_FMT_YUV420P,
                                                          SWS_FAST_BILINEAR,
                                                          NULL, NULL, NULL);

    int output_Height = sws_scale(conversionContext,
                                  src_frame->data, src_frame->linesize,
                                  0, tmp_screenHeight,
                                  dst_frame->data, dst_frame->linesize); // returns 0 -> bad dst image pointers error
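
    (A plausible explanation: avpicture_fill() called with a NULL buffer only computes plane offsets relative to a null pointer, so running it after avpicture_alloc() replaces dst_frame's valid data[] pointers. A minimal sketch that allocates the destination once and then leaves it untouched:)

       AVFrame* dst = av_frame_alloc();
       if (avpicture_alloc((AVPicture*) dst, AV_PIX_FMT_YUV420P, c->width, c->height) < 0)
           return; // allocation failed
       // dst->data / dst->linesize are now valid targets for sws_scale();
       // do not call avpicture_fill(..., NULL, ...) on dst afterwards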

    EDIT 2 28/12/15

    I have tried to follow Ronald S. Bultje's suggestion, and now I get a bad src image pointers error. I have investigated and worked on it for many hours, but I have not found a solution. Here is the new snippet:

    AVFrame* src_frame = av_frame_alloc ();
    AVFrame* dst_frame = av_frame_alloc ();
    AVFrame* tmp_src_frame = av_frame_alloc ();

    /*........I do not use them until this snippet..........*/
    //DESTINATION
    //avpicture_free ((AVPicture*)dst_frame);
    avcodec_align_dimensions2 (c, &c->width, &c->height, dst_frame->linesize);
    if (avpicture_alloc ((AVPicture*) dst_frame, AV_PIX_FMT_YUV420P, c->width, c->height) < 0)
    {
       qDebug () <<  "# avpicture_alloc failed";
       return;
    }

    //SOURCE
    //stride = src_frame->linesize [0] = ((((screenWidth * bitPerPixel) + 31) & ~31) >> 3); do I need to do that ?
    //== stride - I have gotten this formula from : https://msdn.microsoft.com/en-us/library/windows/desktop/dd318229(v=vs.85).aspx
    if (avpicture_fill((AVPicture*) src_frame, (uint8_t *) pPixels, AV_PIX_FMT_RGB32,
                      screenWidth, screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image
    //linesize [0] == 21760 like commented stride

    //Source TO TMP Source
    avcodec_align_dimensions2 (c, &tmp_screenWidth, &tmp_screenHeight, tmp_src_frame->linesize);
    if (avpicture_fill((AVPicture*) tmp_src_frame, NULL, AV_PIX_FMT_RGB32,
                      tmp_screenWidth, tmp_screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image

    av_picture_copy ((AVPicture*) tmp_src_frame, (const AVPicture*) src_frame, AV_PIX_FMT_RGB32,
                    screenWidth, screenHeight);

    struct SwsContext* conversionContext = sws_getContext(tmp_screenWidth, tmp_screenHeight,
                                                         AV_PIX_FMT_RGB32,
                                                         c->width, c->height,
                                                         AV_PIX_FMT_YUV420P,
                                                         SWS_FAST_BILINEAR,
                                                         NULL, NULL, NULL);

    int output_Height = sws_scale(conversionContext,
                                 tmp_src_frame->data, tmp_src_frame->linesize,
                                 0, tmp_screenHeight,
                                 dst_frame->data, dst_frame->linesize);
    //ffmpeg error = bad src image pointers
    // output_Height == 0

    EDIT 3

    For the temporary picture I call avcodec_align_dimensions2(), then avpicture_alloc() to allocate memory, and avpicture_fill() to fill the picture pointers. Below is the updated code:

    //DESTINATION
    //avpicture_free ((AVPicture*)dst_frame);
    avcodec_align_dimensions2 (c, &c->width, &c->height, dst_frame->linesize);
    if (avpicture_alloc ((AVPicture*) dst_frame, AV_PIX_FMT_YUV420P, c->width, c->height) < 0)
    {
       qDebug () <<  "# avpicture_alloc failed";
       return;
    }

    //SOURCE
    //src_frame->linesize [0] = ((((screenWidth * bpp) + 31) & ~31) >> 3);
    //src_frame->linesize [0] = stride;
    if (avpicture_fill((AVPicture*) src_frame, (uint8_t *) pPixels, AV_PIX_FMT_RGB32,
                      screenWidth, screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image

    //Source TO TMP Source
    avcodec_align_dimensions2 (c, &tmp_screenWidth, &tmp_screenHeight, tmp_src_frame->linesize);
    if (avpicture_alloc ((AVPicture*) tmp_src_frame, AV_PIX_FMT_RGB32, tmp_screenWidth, tmp_screenHeight) < 0)
    {
       qDebug () <<  "# avpicture_alloc failed";
       return;
    }
    int outbuf_size = tmp_screenWidth*tmp_screenHeight*4;// alloc image and output buffer
    outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
    if (avpicture_fill((AVPicture*) tmp_src_frame, outbuf, AV_PIX_FMT_RGB32,
                      tmp_screenWidth, tmp_screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image
    av_picture_copy ((AVPicture*) tmp_src_frame, (const AVPicture*) src_frame, AV_PIX_FMT_RGB32,
                    tmp_screenWidth, tmp_screenHeight);

    struct SwsContext* conversionContext = sws_getContext(tmp_screenWidth, tmp_screenHeight,
                                                         AV_PIX_FMT_RGB32,
                                                         c->width, c->height,
                                                         AV_PIX_FMT_YUV420P,
                                                         SWS_FAST_BILINEAR,
                                                         NULL, NULL, NULL);

    int output_Height = sws_scale(conversionContext,
                                 tmp_src_frame->data, tmp_src_frame->linesize,
                                 0, tmp_screenHeight,
                                 dst_frame->data, dst_frame->linesize);

    The call stack is as follows: av_picture_copy() is called, then av_image_copy(), then _VEC_memcpy(), then fastcopy_I(), and then the crash... Could the problem be the dimensions (tmp_screenWidth/Height)? (With av_picture_copy(), can we copy a picture P1 with dimensions W1xH1 to a picture P2 with dimensions W2xH2?)

    EDIT 4

    Crash at av_picture_copy(), which calls _aligned_malloc(), then av_image_copy(), _VEC_memcpy(), and fastcopy_I():

    //SOURCE
    if (avpicture_fill((AVPicture*) src_frame, (uint8_t *) pPixels, AV_PIX_FMT_RGB32,
                      screenWidth, screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image

    //Source TO TMP Source
    avcodec_align_dimensions2 (c, &tmp_screenWidth, &tmp_screenHeight, tmp_src_frame->linesize);
    if (avpicture_alloc ((AVPicture*) tmp_src_frame, AV_PIX_FMT_RGB32, tmp_screenWidth, tmp_screenHeight) < 0)
    {
       qDebug () <<  "# avpicture_alloc failed";
       return;
    }
    av_picture_copy ((AVPicture*) tmp_src_frame, (const AVPicture*) src_frame, AV_PIX_FMT_RGB32,
                    tmp_screenWidth, tmp_screenHeight);
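
    (On the parenthetical question above: av_picture_copy() does not rescale; it copies one image of a single size. Passing the aligned, larger dimensions makes it read past the end of the smaller source buffer, which matches a crash inside the memcpy path. A sketch that copies using the source's own dimensions, assuming the temporary picture is at least that large:)

       av_picture_copy ((AVPicture*) tmp_src_frame, (const AVPicture*) src_frame,
                        AV_PIX_FMT_RGB32, screenWidth, screenHeight);
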
  • FFMPEG to create an MPEG-DASH stream with VP8

    16 September 2019, by Kenny Worden

    I'm trying to use FFMPEG to stream a live video feed from my webcam /dev/video0. Following scattered tutorials and scarce documentation (is this a known problem for the encoding community?), I arrived at the following bash script:

    #!/bin/bash

    ffmpeg \
       -y \
       -f v4l2 \
           -i /dev/video0 \
           -s 640x480 \
           -input_format mjpeg \
           -r 24 \
       -map 0:0 \
       -pix_fmt yuv420p \
       -codec:v libvpx \
           -s 640x480 \
           -threads 4 \
           -b:v 50k \
           -tile-columns 4 \
           -frame-parallel 1 \
           -keyint_min 24 -g 24 \
       -f webm_chunk \
           -header "stream.hdr" \
           -chunk_start_index 1 \
       stream_%d.chk &

    sleep 2

    ffmpeg \
       -f webm_dash_manifest -live 1 \
       -i stream.hdr \
       -c copy \
       -map 0 \
       -f webm_dash_manifest -live 1 \
           -adaptation_sets "id=0,streams=0" \
           -chunk_start_index 1 \
           -chunk_duration_ms 1000 \
           -time_shift_buffer_depth 30000 \
           -minimum_update_period 60000 \
       stream_manifest.mpd

    When I run this script, my webcam light turns on, the stream.hdr and stream_manifest.mpd files are written, and chunks start to be created (i.e. stream_1.chk, stream_2.chk, etc.). However, FFMPEG throws the following error:

    Could not write header for output file #0 (incorrect codec parameters ?): Invalid data found when processing input

    I will explain what I think I am doing with this script, and hopefully this will expose any errors in my thinking.

    First, we invoke FFMPEG to use Video for Linux 2 (v4l2) to read from my webcam (/dev/video0) at a resolution of 640x480. The input format is mjpeg with a framerate of 24 fps.

    I then declare that FFMPEG should "map" (copy) the video stream output by v4l2 to a file. I specify the pixel format (YUV420P) and use libvpx (VP8 encoding) to encode the video stream. I set the size to be 640x480, use 4 threads, set the bitrate to be 50kbps, do some magic with tile-columns and frame-parallel options, and set the I-frames to be 24 frames apart.

    I then create a stream.hdr file. The starting index is 1. This command continues to run infinitely until I kill it, grabbing new video from my webcam and outputting it into chunks.

    I then sleep for 2 seconds to give the previous command time to generate a header file.

    And that’s really it. The next invocation of FFMPEG simply creates the MPEG-DASH manifest file given the header generated in the previous step.

    So what's going on? Why can I not view the video in a web browser (I'm using Dash.js)? I serve the manifest, header, and chunks from a Node.js server, so that trivial issue is not the problem.


    Edit: Here is my full console output.

    ffmpeg version 3.0.7-0ubuntu0.16.10.1 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 6.2.0 (Ubuntu 6.2.0-5ubuntu12) 20161005
     configuration: --prefix=/usr --extra-version=0ubuntu0.16.10.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librubberband --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-chromaprint --enable-libx264
     libavutil      55. 17.103 / 55. 17.103
     libavcodec     57. 24.102 / 57. 24.102
     libavformat    57. 25.100 / 57. 25.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 31.100 /  6. 31.100
     libavresample   3.  0.  0 /  3.  0.  0
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    [video4linux2,v4l2 @ 0x55847e244ea0] The driver changed the time per frame from 1/24 to 1/30
    [mjpeg @ 0x55847e245c00] Changing bps to 8
    Input #0, video4linux2,v4l2, from '/dev/video0':
     Duration: N/A, start: 64305.102081, bitrate: N/A
       Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 640x480, -5 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
    Codec AVOption frame-parallel (Enable frame parallel decodability features) specified for output file #0 (stream_%d.chk) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
    Codec AVOption tile-columns (Number of tile columns to use, log2) specified for output file #0 (stream_%d.chk) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
    [swscaler @ 0x55847e24b720] deprecated pixel format used, make sure you did set range correctly
    [libvpx @ 0x55847e248a20] v1.5.0
    Output #0, webm_chunk, to 'stream_%d.chk':
     Metadata:
       encoder         : Lavf57.25.100
       Stream #0:0: Video: vp8 (libvpx), yuv420p, 640x480, q=-1--1, 50 kb/s, 30 fps, 30 tbn, 30 tbc
       Metadata:
         encoder         : Lavc57.24.102 libvpx
       Side data:
         unknown side data type 10 (24 bytes)
    Stream mapping:
     Stream #0:0 -> #0:0 (mjpeg (native) -> vp8 (libvpx))
    Press [q] to stop, [?] for help
    frame=   21 fps=0.0 q=0.0 size=N/A time=00:00:00.70 bitrate=N/A dup=5 drop=frame=   36 fps= 35 q=0.0 size=N/A time=00:00:01.20 bitrate=N/A dup=5 drop=frame=   51 fps= 33 q=0.0 size=N/A time=00:00:01.70 bitrate=N/A dup=5 drop=ffmpeg version 3.0.7-0ubuntu0.16.10.1 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 6.2.0 (Ubuntu 6.2.0-5ubuntu12) 20161005
     configuration: --prefix=/usr --extra-version=0ubuntu0.16.10.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librubberband --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-chromaprint --enable-libx264
     libavutil      55. 17.103 / 55. 17.103
     libavcodec     57. 24.102 / 57. 24.102
     libavformat    57. 25.100 / 57. 25.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 31.100 /  6. 31.100
     libavresample   3.  0.  0 /  3.  0.  0
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    Input #0, webm_dash_manifest, from 'stream.hdr':
     Metadata:
       encoder         : Lavf57.25.100
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
       Metadata:
         webm_dash_manifest_file_name: stream.hdr
         webm_dash_manifest_track_number: 1
    Output #0, webm_dash_manifest, to 'stream_manifest.mpd':
     Metadata:
       encoder         : Lavf57.25.100
       Stream #0:0: Video: vp8, yuv420p, 640x480 [SAR 1:1 DAR 4:3], q=2-31, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
       Metadata:
         webm_dash_manifest_file_name: stream.hdr
         webm_dash_manifest_track_number: 1
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid data found when processing input
    frame=   67 fps= 33 q=0.0 size
    frame=   82 fps= 32 q=0.0 size=N/A time=00:00:02.73 bitrate=N/A dup=5 drop=
    frame=   97 fps= 32 q=0.0 size=N/A time=00:00:03.23 bitrate=N/A dup=5 drop=
    frame=  112 fps= 32 q=0.0 size=N/A time=00:00:03.73 bitrate=N/A dup=5 ...