Advanced search

Media (91)

Other articles (45)

  • Customising the categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of the "category" type, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of the "media" type, the fields not displayed by default are: Short description (Descriptif rapide)
    It is also in this configuration area that you can specify the (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player has been created specifically for MediaSPIP and can easily be adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

Sur d’autres sites (8689)

  • FFmpeg avcodec_decode_video2 decode RTSP H264 HD-video packet to video picture with error

    29 May 2018, by Nguyen Ba Thi

    I used the FFmpeg library, version 4.0, in a simple C++ program in which a thread receives RTSP H264 video data from an IP camera and displays it in the program window.

    The code of this thread is as follows:

    DWORD WINAPI GrabbProcess(LPVOID lpParam)
    // Grabbing thread
    {
     DWORD i;
     int ret = 0, nPacket=0;
     FILE *pktFile;
     // Open video file
     pFormatCtx = avformat_alloc_context();
     if(avformat_open_input(&pFormatCtx, nameVideoStream, NULL, NULL)!=0)
         fGrabb=-1; // Couldn't open file
     else
     // Retrieve stream information
     if(avformat_find_stream_info(pFormatCtx, NULL)<0)
         fGrabb=-2; // Couldn't find stream information
     else
     {
         // Find the first video stream
         videoStream=-1;
         for(i=0; i<pFormatCtx->nb_streams; i++)
           if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO)
           {
             videoStream=i;
             break;
           }
         if(videoStream==-1)
             fGrabb=-3; // Didn't find a video stream
         else
         {
             // Get a pointer to the codec context for the video stream
             pCodecCtxOrig=pFormatCtx->streams[videoStream]->codec;
             // Find the decoder for the video stream
             pCodec=avcodec_find_decoder(pCodecCtxOrig->codec_id);
             if(pCodec==NULL)
                 fGrabb=-4; // Codec not found
             else
             {
                 // Copy context
                 pCodecCtx = avcodec_alloc_context3(pCodec);
                 if(avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0)
                     fGrabb=-5; // Error copying codec context
                 else
                 {
                     // Open codec
                     if(avcodec_open2(pCodecCtx, pCodec, NULL)<0)
                         fGrabb=-6; // Could not open codec
                     else
                     {
                      // Allocate video frame for input
                      pFrame=av_frame_alloc();
                     // Determine required buffer size and allocate buffer
                     numBytes=avpicture_get_size(pCodecCtx->pix_fmt, pCodecCtx->width,
                         pCodecCtx->height);
                     buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
                     // Assign appropriate parts of buffer to image planes in pFrame
                     // Note that pFrame is an AVFrame, but AVFrame is a superset
                     // of AVPicture
                     avpicture_fill((AVPicture *)pFrame, buffer, pCodecCtx->pix_fmt,
                         pCodecCtx->width, pCodecCtx->height);

                     // Allocate video frame for display
                     pFrameRGB=av_frame_alloc();
                     // Determine required buffer size and allocate buffer
                     numBytes=avpicture_get_size(AV_PIX_FMT_RGB24, pCodecCtx->width,
                         pCodecCtx->height);
                     bufferRGB=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
                     // Assign appropriate parts of buffer to image planes in pFrameRGB
                     // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
                     // of AVPicture
                     avpicture_fill((AVPicture *)pFrameRGB, bufferRGB, AV_PIX_FMT_RGB24,
                         pCodecCtx->width, pCodecCtx->height);
                     // initialize SWS context for software scaling to FMT_RGB24
                     sws_ctx_to_RGB = sws_getContext(pCodecCtx->width,
                         pCodecCtx->height,
                         pCodecCtx->pix_fmt,
                         pCodecCtx->width,
                         pCodecCtx->height,
                         AV_PIX_FMT_RGB24,
                         SWS_BILINEAR,
                         NULL,
                         NULL,
                         NULL);

                     // Allocate video frame (grayscale YUV420P) for processing
                     pFrameYUV=av_frame_alloc();
                     // Determine required buffer size and allocate buffer
                     numBytes=avpicture_get_size(AV_PIX_FMT_YUV420P, pCodecCtx->width,
                         pCodecCtx->height);
                     bufferYUV=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
                     // Assign appropriate parts of buffer to image planes in pFrameYUV
                     // Note that pFrameYUV is an AVFrame, but AVFrame is a superset
                     // of AVPicture
                     avpicture_fill((AVPicture *)pFrameYUV, bufferYUV, AV_PIX_FMT_YUV420P,
                         pCodecCtx->width, pCodecCtx->height);
                     // initialize SWS context for software scaling to FMT_YUV420P
                     sws_ctx_to_YUV = sws_getContext(pCodecCtx->width,
                         pCodecCtx->height,
                         pCodecCtx->pix_fmt,
                         pCodecCtx->width,
                         pCodecCtx->height,
                         AV_PIX_FMT_YUV420P,
                         SWS_BILINEAR,
                         NULL,
                         NULL,
                         NULL);
                      RealBsqHdr.biWidth = pCodecCtx->width;
                      RealBsqHdr.biHeight = -pCodecCtx->height;
                      }
                  }
             }
         }
     }
     while ((fGrabb==1)||(fGrabb==100))
     {
         // Grabb a frame
         if (av_read_frame(pFormatCtx, &packet) >= 0)
         {
           // Is this a packet from the video stream?
           if(packet.stream_index==videoStream)
           {
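                // NOTE: avcodec_decode_video2() and av_free_packet() are
                // deprecated as of FFmpeg 4.0; a sketch of the replacement
                // send/receive decode API follows this listing.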
               // Decode video frame
               int len = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
               nPacket++;
               // Did we get a video frame?
               if(frameFinished)
               {
                   // Convert the image from its native format to YUV
                   sws_scale(sws_ctx_to_YUV, (uint8_t const * const *)pFrame->data,
                       pFrame->linesize, 0, pCodecCtx->height,
                       pFrameYUV->data, pFrameYUV->linesize);
                   // Convert the image from its native format to RGB
                   sws_scale(sws_ctx_to_RGB, (uint8_t const * const *)pFrame->data,
                       pFrame->linesize, 0, pCodecCtx->height,
                       pFrameRGB->data, pFrameRGB->linesize);
                   HDC hdc=GetDC(hWndM);
                   SetDIBitsToDevice(hdc, 0, 0, pCodecCtx->width, pCodecCtx->height,
                       0, 0, 0, pCodecCtx->height,pFrameRGB->data[0], (LPBITMAPINFO)&RealBsqHdr, DIB_RGB_COLORS);
                   ReleaseDC(hWndM,hdc);
                   av_frame_unref(pFrame);
               }
           }
           // Free the packet that was allocated by av_read_frame
           av_free_packet(&packet);
         }
      }
      // Free the original frame
     av_frame_free(&pFrame);
     // Free the RGB frame
     av_frame_free(&pFrameRGB);
     // Free the YUV frame
     av_frame_free(&pFrameYUV);

     // Close the codec
     avcodec_close(pCodecCtx);
     avcodec_close(pCodecCtxOrig);

     // Close the video file
     avformat_close_input(&pFormatCtx);
     avformat_free_context(pFormatCtx);

     if (fGrabb==1)
         sprintf(tmpstr,"Grabbing Completed %d frames", nCntTotal);
     else if (fGrabb==2)
         sprintf(tmpstr,"User break on %d frames", nCntTotal);
     else if (fGrabb==3)
         sprintf(tmpstr,"Can't Grabb at frame %d", nCntTotal);
     else if (fGrabb==-1)
         sprintf(tmpstr,"Couldn't open file");
     else if (fGrabb==-2)
         sprintf(tmpstr,"Couldn't find stream information");
     else if (fGrabb==-3)
         sprintf(tmpstr,"Didn't find a video stream");
     else if (fGrabb==-4)
         sprintf(tmpstr,"Codec not found");
     else if (fGrabb==-5)
         sprintf(tmpstr,"Error copying codec context");
     else if (fGrabb==-6)
         sprintf(tmpstr,"Could not open codec");
     i=(UINT) fGrabb;
     fGrabb=0;
     SetWindowText(hWndM,tmpstr);
     ExitThread(i);
     return 0;
    }
    // End Grabbing thread  

    When the program receives RTSP H264 video data at resolution 704x576, the decoded video pictures are OK. When it receives RTSP H264 HD video data at resolution 1280x720, the first video picture appears to decode correctly, but subsequent pictures always decode with some errors.

    Please help me fix this problem!

    Here is a brief summary of the problem:
    I have an IP camera, model HI3518E_50H10L_S39 (made in China).
    The camera can provide an H264 video stream at either resolution 704x576 (RTSP URI "rtsp://192.168.1.18:554/user=admin_password=tlJwpbo6_channel=1_stream=1.sdp?real_stream") or 1280x720 (RTSP URI "rtsp://192.168.1.18:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream").
    Using the FFplay utility I can access and display both streams with good picture quality.
    To test grabbing from this camera, I have the simple program mentioned above in VC-2005. In the "Grabbing thread", the program uses FFmpeg library version 4.0 to open the camera RTSP stream, retrieve stream information, find the first video stream... and prepare some variables.
    The core of this thread is the loop: grab a frame (av_read_frame), decode it if it is video (avcodec_decode_video2), convert it to RGB (sws_scale), and display it in the program window (GDI function SetDIBitsToDevice).
    When the program runs with the camera RTSP stream at resolution 704x576, I get a good video picture. Here is a sample:
    [screenshot: 704x576 sample]
    When the program runs with the camera RTSP stream at resolution 1280x720, the first video picture is good:
    [screenshot: first frame at 1280x720, good]
    but the following pictures are not:
    [screenshot: later frame at 1280x720, corrupted]
    It seems that my call to avcodec_decode_video2 cannot fully decode certain packets for some reason.
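
    A hedged note on two likely culprits, with a sketch; the helper names below are mine, not from the question, and error handling is abbreviated. First, FFmpeg's RTSP demuxer defaults to UDP transport, and 1280x720 H264 slices span far more datagrams than 704x576 ones, so packet loss tends to corrupt HD frames first; requesting TCP transport avoids the loss. Second, avcodec_decode_video2 is deprecated in FFmpeg 4.0 in favour of the send/receive API:

     // Sketch: open the RTSP stream over TCP instead of the default UDP.
     // Needs libavformat/avformat.h; "MyOpenRtsp" is a hypothetical helper.
     static bool MyOpenRtsp(const char *url, AVFormatContext **fmt)
     {
      AVDictionary *opts = NULL;
      // Interleave RTP over the RTSP TCP connection so no datagrams are dropped.
      av_dict_set(&opts, "rtsp_transport", "tcp", 0);
      int err = avformat_open_input(fmt, url, NULL, &opts);
      av_dict_free(&opts);
      return err == 0;
     }

     // Sketch: FFmpeg 4.x replacement for avcodec_decode_video2().
     // Returns 1 when a frame was produced, 0 when more input is needed, <0 on error.
     static int MyDecode(AVCodecContext *ctx, AVPacket *pkt, AVFrame *frame)
     {
      int ret = avcodec_send_packet(ctx, pkt);
      if (ret < 0)
          return ret;
      ret = avcodec_receive_frame(ctx, frame);
      if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
          return 0;
      return (ret < 0) ? ret : 1;
     }

    If forcing TCP alone fixes the corruption, the decoder was never at fault; the packets were simply arriving incomplete.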

  • Specifying audio/video for a multiple stream/multiple file setup using ffmpeg

    31 May 2018, by Robert Smith

    Folks, I have the following ffmpeg command:

    ffmpeg
       -i video1a -i video2a -i video3a -i video4a
       -i video1b -i video2b -i video3b -i video4b
       -i video1c
       -filter_complex "
           nullsrc=size=640x480 [base];
           [0:v] setpts=PTS-STARTPTS+   0/TB, scale=320x240 [1a];
           [1:v] setpts=PTS-STARTPTS+ 300/TB, scale=320x240 [2a];
           [2:v] setpts=PTS-STARTPTS+ 400/TB, scale=320x240 [3a];
           [3:v] setpts=PTS-STARTPTS+ 400/TB, scale=320x240 [4a];
           [4:v] setpts=PTS-STARTPTS+2500/TB, scale=320x240 [1b];
           [5:v] setpts=PTS-STARTPTS+ 800/TB, scale=320x240 [2b];
           [6:v] setpts=PTS-STARTPTS+ 700/TB, scale=320x240 [3b];
           [7:v] setpts=PTS-STARTPTS+ 800/TB, scale=320x240 [4b];
           [8:v] setpts=PTS-STARTPTS+3000/TB, scale=320x240 [1c];
           [base][1a] overlay=eof_action=pass [o1];
           [o1][1b] overlay=eof_action=pass [o1];
           [o1][1c] overlay=eof_action=pass:shortest=1 [o1];
           [o1][2a] overlay=eof_action=pass:x=320 [o2];
           [o2][2b] overlay=eof_action=pass:x=320 [o2];
           [o2][3a] overlay=eof_action=pass:y=240 [o3];
           [o3][3b] overlay=eof_action=pass:y=240 [o3];
           [o3][4a] overlay=eof_action=pass:x=320:y=240[o4];
           [o4][4b] overlay=eof_action=pass:x=320:y=240"
       -c:v libx264 output.mp4

    I have just found out something about the files I will be processing with the above command: some mp4 files contain both video and audio, some contain audio alone, and some contain video alone. I can already determine which is which using ffprobe. My question is how to modify the above command to state what each file contains (video/audio/both).
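
    For reference, a minimal ffprobe invocation that lists a file's stream types, one line per stream ("video" / "audio"); the file name is a placeholder:

    ffprobe -v error -show_entries stream=codec_type -of csv=p=0 input.mp4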

    This is the scenario showing what each file contains:

    file    contains
    ======= =========
    Area 1:
    video1a    audio
    video1b     both
    video1c    video

    Area 2:
    video2a    video
    video2b    audio

    Area 3:
    video3a    video
    video3b    audio

    Area 4:
    video4a    video
    video4b    both

    My question is how to correctly modify the above command to specify what each file has (audio/video/both). Thank you.
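
    One way to adapt the command, sketched for Area 1 only and hedged accordingly (the video offsets reuse the question's values, the audio time-offsets are omitted for brevity, and the -map labels are mine): reference only the stream types an input actually has, so the audio-only video1a contributes just [0:a], video1b contributes both [1:v] and [1:a], and the video-only video1c contributes just [2:v]; the composited video and mixed audio are then mapped explicitly:

    ffmpeg
       -i video1a -i video1b -i video1c
       -filter_complex "
           nullsrc=size=640x480 [base];
           [1:v] setpts=PTS-STARTPTS+2500/TB, scale=320x240 [1b];
           [2:v] setpts=PTS-STARTPTS+3000/TB, scale=320x240 [1c];
           [base][1b] overlay=eof_action=pass [v1];
           [v1][1c] overlay=eof_action=pass:shortest=1 [vout];
           [0:a] aresample=async=1 [a1a];
           [1:a] aresample=async=1 [a1b];
           [a1a][a1b] amix=inputs=2:duration=longest [aout]"
       -map "[vout]" -map "[aout]" -c:v libx264 -c:a aac output.mp4

    An adelay filter can reintroduce per-stream audio offsets if they are needed.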

    Update #1

    I ran a test as follows:

    -i "video1a.flv"
    -i "video1b.flv"
    -i "video1c.flv"
    -i "video2a.flv"
    -i "video3a.flv"
    -i "video4a.flv"
    -i "video4b.flv"
    -i "video4c.flv"
    -i "video4d.flv"
    -i "video4e.flv"

    -filter_complex

    nullsrc=size=640x480[base];
    [0:v]setpts=PTS-STARTPTS+120/TB,scale=320x240[1a];
    [1:v]setpts=PTS-STARTPTS+3469115/TB,scale=320x240[1b];
    [2:v]setpts=PTS-STARTPTS+7739299/TB,scale=320x240[1c];
    [5:v]setpts=PTS-STARTPTS+4390466/TB,scale=320x240[4a];
    [6:v]setpts=PTS-STARTPTS+6803937/TB,scale=320x240[4b];
    [7:v]setpts=PTS-STARTPTS+8242005/TB,scale=320x240[4c];
    [8:v]setpts=PTS-STARTPTS+9811577/TB,scale=320x240[4d];
    [9:v]setpts=PTS-STARTPTS+10765190/TB,scale=320x240[4e];
    [base][1a]overlay=eof_action=pass[o1];
    [o1][1b]overlay=eof_action=pass[o1];
    [o1][1c]overlay=eof_action=pass:shortest=1[o1];
    [o1][4a]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4b]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4c]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4d]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4e]overlay=eof_action=pass:x=320:y=240;
    [0:a]asetpts=PTS-STARTPTS+120/TB,aresample=async=1,apad[a1a];
    [1:a]asetpts=PTS-STARTPTS+3469115/TB,aresample=async=1,apad[a1b];
    [2:a]asetpts=PTS-STARTPTS+7739299/TB,aresample=async=1[a1c];
    [3:a]asetpts=PTS-STARTPTS+82550/TB,aresample=async=1,apad[a2a];
    [4:a]asetpts=PTS-STARTPTS+2687265/TB,aresample=async=1,apad[a3a];
    [a1a][a1b][a1c][a2a][a3a]amerge=inputs=5

    -c:v libx264 -c:a aac -ac 2 output.mp4

    This is the stream data from ffmpeg:

    Input #0
     Stream #0:0: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
     Stream #0:1: Audio: nellymoser, 11025 Hz, mono, flt

    Input #1
     Stream #1:0: Audio: nellymoser, 11025 Hz, mono, flt
     Stream #1:1: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn

    Input #2
     Stream #2:0: Audio: nellymoser, 11025 Hz, mono, flt
     Stream #2:1: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn

    Input #3
     Stream #3:0: Audio: nellymoser, 11025 Hz, mono, flt

    Input #4
     Stream #4:0: Audio: nellymoser, 11025 Hz, mono, flt

    Input #5
     Stream #5:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn

    Input #6
     Stream #6:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn

    Input #7
     Stream #7:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn

    Input #8
     Stream #8:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn

    Input #9
     Stream #9:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn

    This is the error:

    Stream mapping:
     Stream #0:0 (vp6f) -> setpts
     Stream #0:1 (nellymoser) -> asetpts

     Stream #1:0 (nellymoser) -> asetpts
     Stream #1:1 (vp6f) -> setpts

     Stream #2:0 (nellymoser) -> asetpts
     Stream #2:1 (vp6f) -> setpts

     Stream #3:0 (nellymoser) -> asetpts

     Stream #4:0 (nellymoser) -> asetpts

     Stream #5:0 (vp6f) -> setpts

     Stream #6:0 (vp6f) -> setpts

     Stream #7:0 (vp6f) -> setpts

     Stream #8:0 (vp6f) -> setpts

     Stream #9:0 (vp6f) -> setpts

     overlay -> Stream #0:0 (libx264)
     amerge -> Stream #0:1 (aac)
    Press [q] to stop, [?] for help

    Enter command: <target>|all <time>|-1 <command>[ <argument>]

    Parse error, at least 3 arguments were expected, only 1 given in string 'ho Oscar'
    [Parsed_amerge_39 @ 0aa147c0] No channel layout for input 1
       Last message repeated 1 times
    [AVFilterGraph @ 05e01900] The following filters could not choose their formats: Parsed_amerge_39
    Consider inserting the (a)format filter near their input or output.
    Error reinitializing filters!
    Failed to inject frame into filter network: I/O error
    Error while processing the decoded data for stream #4:0
    Conversion failed!

    Update #2

    Would it be like this, giving each mono input an explicit one-channel layout with pan:

    -i "video1a.flv"
    -i "video1b.flv"
    -i "video1c.flv"
    -i "video2a.flv"
    -i "video3a.flv"
    -i "video4a.flv"
    -i "video4b.flv"
    -i "video4c.flv"
    -i "video4d.flv"
    -i "video4e.flv"

    -filter_complex

    nullsrc=size=640x480[base];
    [0:v]setpts=PTS-STARTPTS+120/TB,scale=320x240[1a];
    [1:v]setpts=PTS-STARTPTS+3469115/TB,scale=320x240[1b];
    [2:v]setpts=PTS-STARTPTS+7739299/TB,scale=320x240[1c];
    [5:v]setpts=PTS-STARTPTS+4390466/TB,scale=320x240[4a];
    [6:v]setpts=PTS-STARTPTS+6803937/TB,scale=320x240[4b];
    [7:v]setpts=PTS-STARTPTS+8242005/TB,scale=320x240[4c];
    [8:v]setpts=PTS-STARTPTS+9811577/TB,scale=320x240[4d];
    [9:v]setpts=PTS-STARTPTS+10765190/TB,scale=320x240[4e];
    [base][1a]overlay=eof_action=pass[o1];
    [o1][1b]overlay=eof_action=pass[o1];
    [o1][1c]overlay=eof_action=pass:shortest=1[o1];
    [o1][4a]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4b]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4c]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4d]overlay=eof_action=pass:x=320:y=240[o4];
    [o4][4e]overlay=eof_action=pass:x=320:y=240;
    [0:a]asetpts=PTS-STARTPTS+120/TB,aresample=async=1,pan=1c|c0=c0,apad[a1a];
    [1:a]asetpts=PTS-STARTPTS+3469115/TB,aresample=async=1,pan=1c|c0=c0,apad[a1b];
    [2:a]asetpts=PTS-STARTPTS+7739299/TB,aresample=async=1,pan=1c|c0=c0[a1c];
    [3:a]asetpts=PTS-STARTPTS+82550/TB,aresample=async=1,pan=1c|c0=c0,apad[a2a];
    [4:a]asetpts=PTS-STARTPTS+2687265/TB,aresample=async=1,pan=1c|c0=c0,apad[a3a];
    [a1a][a1b][a1c][a2a][a3a]amerge=inputs=5

    -c:v libx264 -c:a aac -ac 2 output.mp4

    Update #3

    Now I am getting this error:

    Stream mapping:
     Stream #0:0 (vp6f) -> setpts
     Stream #0:1 (nellymoser) -> asetpts
     Stream #1:0 (nellymoser) -> asetpts
     Stream #1:1 (vp6f) -> setpts
     Stream #2:0 (nellymoser) -> asetpts
     Stream #2:1 (vp6f) -> setpts
     Stream #3:0 (nellymoser) -> asetpts
     Stream #4:0 (nellymoser) -> asetpts
     Stream #5:0 (vp6f) -> setpts
     Stream #6:0 (vp6f) -> setpts
     Stream #7:0 (vp6f) -> setpts
     Stream #8:0 (vp6f) -> setpts
     Stream #9:0 (vp6f) -> setpts
     overlay -> Stream #0:0 (libx264)
     amerge -> Stream #0:1 (aac)
    Press [q] to stop, [?] for help

    Enter command: <target>|all <time>|-1 <command>[ <argument>]

    Parse error, at least 3 arguments were expected, only 1 given in string 'ho Oscar'
    [Parsed_amerge_44 @ 0a9808c0] No channel layout for input 1
    [Parsed_amerge_44 @ 0a9808c0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
    [Parsed_pan_27 @ 07694800] Pure channel mapping detected: 0
    [Parsed_pan_31 @ 07694a80] Pure channel mapping detected: 0
    [Parsed_pan_35 @ 0a980300] Pure channel mapping detected: 0
    [Parsed_pan_38 @ 0a980500] Pure channel mapping detected: 0
    [Parsed_pan_42 @ 0a980780] Pure channel mapping detected: 0
    [libx264 @ 06ad78c0] using SAR=1/1
    [libx264 @ 06ad78c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 06ad78c0] profile High, level 3.0
    [libx264 @ 06ad78c0] 264 - core 155 r2901 7d0ff22 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=15 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'output.mp4':
     Metadata:
       canSeekToEnd    : false
       encoder         : Lavf58.16.100
       Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p(progressive), 640x480 [SAR 1:1 DAR 4:3], q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
       Metadata:
         encoder         : Lavc58.19.102 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
       Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 11025 Hz, stereo, fltp, 128 kb/s (default)
       Metadata:
         encoder         : Lavc58.19.102 aac
    ...
    ...
    Error while processing the decoded data for stream #1:1
    [libx264 @ 06ad78c0] frame I:133   Avg QP: 8.58  size:  6481
    [libx264 @ 06ad78c0] frame P:8358  Avg QP:17.54  size:  1386
    [libx264 @ 06ad78c0] frame B:24582 Avg QP:24.27  size:   105
    [libx264 @ 06ad78c0] consecutive B-frames:  0.6%  0.5%  0.7% 98.1%
    [libx264 @ 06ad78c0] mb I  I16..4: 78.3% 16.1%  5.6%
    [libx264 @ 06ad78c0] mb P  I16..4:  0.3%  0.7%  0.1%  P16..4:  9.6%  3.0%  1.4%  0.0%  0.0%    skip:84.9%
    [libx264 @ 06ad78c0] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8:  2.9%  0.1%  0.0%  direct: 0.2%  skip:96.8%  L0:47.0% L1:49.0% BI: 4.0%
    [libx264 @ 06ad78c0] 8x8 transform intra:35.0% inter:70.1%
    [libx264 @ 06ad78c0] coded y,uvDC,uvAC intra: 36.8% 43.7% 27.3% inter: 1.6% 3.0% 0.1%
    [libx264 @ 06ad78c0] i16 v,h,dc,p: 79%  8%  4%  9%
    [libx264 @ 06ad78c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 20% 12%  3%  6%  8%  6%  5%  7%
    [libx264 @ 06ad78c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 38% 22%  9%  4%  6%  7%  5%  5%  4%
    [libx264 @ 06ad78c0] i8c dc,h,v,p: 62% 15% 16%  7%
    [libx264 @ 06ad78c0] Weighted P-Frames: Y:0.6% UV:0.5%
    [libx264 @ 06ad78c0] ref P L0: 65.4% 12.3% 14.3%  7.9%  0.0%
    [libx264 @ 06ad78c0] ref B L0: 90.2%  7.5%  2.3%
    [libx264 @ 06ad78c0] ref B L1: 96.3%  3.7%
    [libx264 @ 06ad78c0] kb/s:90.81
    [aac @ 06ad8480] Qavg: 65519.970
    [aac @ 06ad8480] 2 frames left in the queue on closing
    Conversion failed!
  • Bring back support for CommonJS

    19 June 2018, by bbc2

    Browserify, which follows CommonJS module conventions, parses the AST
    for `require` calls to determine the dependencies of a project.

    jQuery File Upload used to support this; it was removed in
    e2cda462610c776a1f7856692c98b1baab02231a when a new version of
    jquery.ui.widget.js was vendored.
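
    For context, this is the kind of wrapper such a change restores; the snippet below is a sketch of a typical UMD header, not the actual code vendored in jquery.ui.widget.js. Browserify can only pick up the dependency because require("jquery") is a static call it can find while parsing the AST:

    (function (factory) {
        if (typeof define === "function" && define.amd) {
            // AMD: register as an anonymous module.
            define(["jquery"], factory);
        } else if (typeof exports === "object") {
            // CommonJS: Browserify's AST walk finds this static require()
            // call and bundles jQuery as a dependency.
            factory(require("jquery"));
        } else {
            // Browser globals fallback.
            factory(window.jQuery);
        }
    })(function ($) {
        // widget code runs here with $ bound to jQuery
    });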