Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (71)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection form fields. See the following two images to compare.
    To use it, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune these menus.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu is usually inserted at the top of the page after the header block, and its identifier makes it compatible with Zpip-based templates; (...)

  • Possible deployments

    31 January 2010

    Two types of deployment can be considered, depending on two aspects: the intended installation method (standalone or as a farm); and the expected number of daily encodes and the expected traffic.
    Video encoding is a heavy process that consumes a great deal of system resources (CPU and RAM), so all of this has to be taken into account. This setup is therefore only possible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

On other sites (12384)

  • ffmpeg command failed

    25 October 2015, by Asad kamran

    I am experimenting with an FFmpeg command to convert any video format to mp4.

    The server admin executed the command below, which I created, and showed me the errors:

    ffmpeg -ss 00:03:00  -i  /video/1444107854.mov -c:v libx264 /video/player/1444107854.mp4  -vframes 1 /video/thumb/1444107854.jpg

    This is the error:

    [aac @ 0x2b845a0] The encoder ’aac’ is experimental but experimental
    codecs are not enabled, add ’-strict -2’ if you want to use it.

    How can I avoid this error and the experimental codec? Can anyone let me know what the best codec would be if I specify it explicitly?

    My ffmpeg build and server are as follows:

    ffmpeg version N-75903-g14573b9 Copyright (c) 2000-2015 the FFmpeg developers
     built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
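
    The error message itself points at one way out: enable the experimental codec with -strict -2, or pick a non-experimental AAC encoder. A hedged sketch of the same command with the experimental aac encoder allowed (paths reused from above, not tested on this server):

    ffmpeg -ss 00:03:00 -i /video/1444107854.mov \
      -c:v libx264 -c:a aac -strict -2 /video/player/1444107854.mp4 \
      -vframes 1 /video/thumb/1444107854.jpg

    If the build includes libfdk_aac (as the EDIT 2 command later suggests), -c:a libfdk_aac avoids the experimental flag entirely.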

    EDIT 1:
    I changed the command a bit and added -c:a copy to copy the audio stream as-is, but still no luck:

    New command:

    ffmpeg -ss 00:03:00  -i  /video/1444107854.mov -c:v libx264 -c:a copy /video/player/1444107854.mp4  -vframes 1 /video/thumb/1444107854.jpg

    After executing this I got a file of 23 kB; apparently, as the log shows, only the audio stream is copied and no video is added to the final output.
    Its log is as follows:

    ffmpeg version N-75903-g14573b9 Copyright (c) 2000-2015 the FFmpeg developers
     built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
     configuration: --enable-libx264 --enable-gpl
     libavutil      55.  3.100 / 55.  3.100
     libavcodec     57.  5.100 / 57.  5.100
     libavformat    57.  3.100 / 57.  3.100
     libavdevice    57.  0.100 / 57.  0.100
     libavfilter     6. 10.100 /  6. 10.100
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.100 /  2.  0.100
     libpostproc    54.  0.100 / 54.  0.100
    Input #0, mpeg, from '/video/1444108714.mpg':
     Duration: 00:00:02.14, start: 0.184278, bitrate: 15689 kb/s
       Stream #0:0[0x1e0]: Video: mpeg2video (Main), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], max. 38810 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
       Stream #0:1[0x80]: Audio: ac3, 48000 Hz, 5.1(side), fltp, 448 kb/s
    [swscaler @ 0x347fda0] deprecated pixel format used, make sure you did set range correctly
    [libx264 @ 0x34708a0] using SAR=1/1
    [libx264 @ 0x34708a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
    [libx264 @ 0x34708a0] profile High, level 4.0
    [libx264 @ 0x34708a0] 264 - core 148 r2597 e86f3a1 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=34 lookahead_threads=5 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    [mp4 @ 0x346f6c0] Codec for stream 1 does not use global headers but container format requires global headers
    [mp4 @ 0x346f6c0] track 1: codec frame size is not set
    Output #0, mp4, to '/video/player/1444108714.mp4':
     Metadata:
       encoder         : Lavf57.3.100
       Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 29.97 fps, 30k tbn, 29.97 tbc
       Metadata:
         encoder         : Lavc57.5.100 libx264
       Stream #0:1: Audio: ac3 ([165][0][0][0] / 0x00A5), 48000 Hz, 5.1(side), 448 kb/s
    Output #1, image2, to '/video/thumb/1444108714.jpg':
     Metadata:
       encoder         : Lavf57.3.100
       Stream #1:0: Video: mjpeg, yuvj420p(pc), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 29.97 fps, 29.97 tbn, 29.97 tbc
       Metadata:
         encoder         : Lavc57.5.100 mjpeg
    Stream mapping:
     Stream #0:0 -> #0:0 (mpeg2video (native) -> h264 (libx264))
     Stream #0:1 -> #0:1 (copy)
     Stream #0:0 -> #1:0 (mpeg2video (native) -> mjpeg (native))
    Press [q] to stop, [?] for help
    [mp4 @ 0x346f6c0] Non-monotonous DTS in output stream 0:1; previous: 2208, current: 672; changing to 2209. This may result in incorrect timestamps in the output file.
    frame=    0 fps=0.0 q=0.0 Lq=0.0 size=      23kB time=00:00:00.07 bitrate=2447.5kbits/s video:0kB audio:23kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.457839%
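
    One likely cause is visible in this log: the input /video/1444108714.mpg is only 00:00:02.14 long, while -ss 00:03:00 seeks three minutes in, so no video frames are decoded at all (frame=0, video:0kB) and the 23 kB output contains only a little copied audio. A hedged rework without the out-of-range seek, re-encoding the AC-3 track to AAC so the mp4 muxer stops warning about the copied stream (untested, paths reused from the log):

    ffmpeg -i /video/1444108714.mpg \
      -map 0:v -map 0:a -c:v libx264 -c:a aac -strict -2 /video/player/1444108714.mp4 \
      -map 0:v -vframes 1 /video/thumb/1444108714.jpg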

    EDIT 2:

    ffmpeg -y -i ./1445675270.m4b -c:v libx264  -crf 20 -preset slow -pix_fmt yuv420p -movflags +faststart -c:a libfdk_aac -b:a 128k  ./player/1445675270.mp4 -vframes 1 ./thumb/1445675270.jpg

    The above command outputs the message below; I would appreciate any suggestions.

    ffmpeg version git-2015-10-11-49f4967 Copyright (c) 2000-2015 the FFmpeg developers
     built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
     configuration: --prefix=/root/ffmpeg_build --extra-cflags=-I/root/ffmpeg_build/include --extra-ldflags=-L/root/ffmpeg_build/lib --bindir=/root/bin --pkg-config-flags=--static --enable-gpl --enable-nonfree --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265
     libavutil      55.  3.100 / 55.  3.100
     libavcodec     57.  5.100 / 57.  5.100
     libavformat    57.  3.100 / 57.  3.100
     libavdevice    57.  0.100 / 57.  0.100
     libavfilter     6. 11.100 /  6. 11.100
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.100 /  2.  0.100
     libpostproc    54.  0.100 / 54.  0.100
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x31e07c0] stream 0, timescale not set
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './1445675270.m4b':
     Metadata:
       major_brand     : M4A
       minor_version   : 0
       compatible_brands: M4A mp42isom
       creation_time   : 2005-08-01 07:26:16
       tool            : ?
       title           : MAKE_2005-08-01
       artist          : MAKE Magazine
       composer        : MAKE Magazine - Phillip Torrone
       album           : Interview with Janus Wireless
       grouping        : MAKE Magazine enhanced podcast
       genre           : Podcast
       date            : 2005
       comment         : Interview with Janus wireless and their 5 Wi-Fi card packet capturing Linux box. This is a special enhanced podcast (this file plays images and links in iTunes and on iPod color devices).
     Duration: 00:02:57.54, start: 0.000000, bitrate: 162 kb/s
       Chapter #0:0: start 0.000000, end 17.000000
       Metadata:
         title           : MAKE Magazine @ DEFCON with JANUS
       Chapter #0:1: start 17.000000, end 37.000000
       Metadata:
         title           : Janus
       Chapter #0:2: start 37.000000, end 83.000000
       Metadata:
         title           : Construction
       Chapter #0:3: start 83.000000, end 109.000000
       Metadata:
         title           : The MAKERs
       Chapter #0:4: start 109.000000, end 177.540000
       Metadata:
         title           : Display
       Stream #0:0(eng): Audio: aac (LC) (mp4a / 0x6134706D), 22050 Hz, mono, fltp, 32 kb/s (default)
       Metadata:
         creation_time   : 2005-08-01 07:26:16
         handler_name    : ?Apple Alias Data Handler
       Stream #0:1(eng): Subtitle: mov_text (text / 0x74786574), 0 kb/s
       Metadata:
         creation_time   : 2005-08-01 07:26:16
         handler_name    : ?Apple Alias Data Handler
       Stream #0:2(eng): Video: tiff (tiff / 0x66666974), rgb24, 167x166, 126 kb/s, SAR 206:275 DAR 17201:22825, 0.03 fps, 1 tbr, 22050 tbn, 22050 tbc (default)
       Metadata:
         creation_time   : 2005-08-01 07:26:16
         handler_name    : ?Apple Alias Data Handler
         encoder         : TIFF (Uncompressed)
       Stream #0:3(eng): Subtitle: mov_text (tx3g / 0x67337874), 160x160, 0 kb/s (default)
       Metadata:
         creation_time   : 2005-08-01 07:26:16
         handler_name    : ?Apple Alias Data Handler
       Stream #0:4: Video: mjpeg, yuvj444p(pc, bt470bg/unknown/unknown), 167x166 [SAR 1:1 DAR 167:166], 90k tbr, 90k tbn, 90k tbc
    [swscaler @ 0x3242360] deprecated pixel format used, make sure you did set range correctly
    [libx264 @ 0x3228c40] width not divisible by 2 (167x166)
    Output #0, mp4, to './player/1445675270.mp4':
     Metadata:
       major_brand     : M4A
       minor_version   : 0
       compatible_brands: M4A mp42isom
       comment         : Interview with Janus wireless and their 5 Wi-Fi card packet capturing Linux box. This is a special enhanced podcast (this file plays images and links in iTunes and on iPod color devices).
       tool            : ?
       title           : MAKE_2005-08-01
       artist          : MAKE Magazine
       composer        : MAKE Magazine - Phillip Torrone
       album           : Interview with Janus Wireless
       grouping        : MAKE Magazine enhanced podcast
       genre           : Podcast
       date            : 2005
       Chapter #0:0: start 0.000000, end 17.000000
       Metadata:
         title           : MAKE Magazine @ DEFCON with JANUS
       Chapter #0:1: start 17.000000, end 37.000000
       Metadata:
         title           : Janus
       Chapter #0:2: start 37.000000, end 83.000000
       Metadata:
         title           : Construction
       Chapter #0:3: start 83.000000, end 109.000000
       Metadata:
         title           : The MAKERs
       Chapter #0:4: start 109.000000, end 177.540000
       Metadata:
         title           : Display
       Stream #0:0(eng): Video: h264, none, q=2-31, 128 kb/s, SAR 206:275 DAR 0:0, 1 fps (default)
       Metadata:
         creation_time   : 2005-08-01 07:26:16
         handler_name    : ?Apple Alias Data Handler
         encoder         : Lavc57.5.100 libx264
       Stream #0:1(eng): Audio: aac, 0 channels, 128 kb/s (default)
       Metadata:
         creation_time   : 2005-08-01 07:26:16
         handler_name    : ?Apple Alias Data Handler
         encoder         : Lavc57.5.100 libfdk_aac
    Output #1, image2, to './thumb/1445675270.jpg':
     Metadata:
       major_brand     : M4A
       minor_version   : 0
       compatible_brands: M4A mp42isom
       comment         : Interview with Janus wireless and their 5 Wi-Fi card packet capturing Linux box. This is a special enhanced podcast (this file plays images and links in iTunes and on iPod color devices).
       tool            : ?
       title           : MAKE_2005-08-01
       artist          : MAKE Magazine
       composer        : MAKE Magazine - Phillip Torrone
       album           : Interview with Janus Wireless
       grouping        : MAKE Magazine enhanced podcast
       genre           : Podcast
       date            : 2005
       Chapter #1:0: start 0.000000, end 17.000000
       Metadata:
         title           : MAKE Magazine @ DEFCON with JANUS
       Chapter #1:1: start 17.000000, end 37.000000
       Metadata:
         title           : Janus
       Chapter #1:2: start 37.000000, end 83.000000
       Metadata:
         title           : Construction
       Chapter #1:3: start 83.000000, end 109.000000
       Metadata:
         title           : The MAKERs
       Chapter #1:4: start 109.000000, end 177.540000
       Metadata:
         title           : Display
       Stream #1:0(eng): Video: mjpeg, none, q=2-31, 128 kb/s, SAR 206:275 DAR 0:0, 1 fps (default)
       Metadata:
         creation_time   : 2005-08-01 07:26:16
         handler_name    : ?Apple Alias Data Handler
         encoder         : Lavc57.5.100 mjpeg
    Stream mapping:
     Stream #0:2 -> #0:0 (tiff (native) -> h264 (libx264))
     Stream #0:0 -> #0:1 (aac (native) -> aac (libfdk_aac))
     Stream #0:2 -> #1:0 (tiff (native) -> mjpeg (native))
    Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
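
    The failure in EDIT 2 is explained a few lines earlier in the log: ffmpeg picked the 167x166 TIFF artwork stream as the video input, and libx264 refuses it because the width is not divisible by 2 when encoding yuv420p. A common workaround is to scale (or pad) to even dimensions before encoding; a sketch, not tested against this exact file:

    ffmpeg -y -i ./1445675270.m4b \
      -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -c:v libx264 -crf 20 -preset slow \
      -pix_fmt yuv420p -movflags +faststart -c:a libfdk_aac -b:a 128k ./player/1445675270.mp4 \
      -vframes 1 ./thumb/1445675270.jpg
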
  • How to merge two videos using FFMpeg PHP

    7 August 2015, by Vikas

    Currently I am working on a project where I need to merge two videos using FFmpeg and PHP. I need to make the two videos play at the same time and stop at the same time, and if one video pauses the other should pause as well. I have searched a lot but haven't found anything; I hope someone can help me.
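
    If the underlying goal is just to keep the two clips playing, pausing and stopping together, one possible approach (an assumption about the requirement, sketched here with hypothetical file names and without the PHP wrapper) is to merge them into a single side-by-side file with ffmpeg's hstack filter, so the player keeps them in sync automatically:

    # left.mp4 and right.mp4 are hypothetical inputs; hstack assumes both clips have the same height
    ffmpeg -i left.mp4 -i right.mp4 \
      -filter_complex "[0:v][1:v]hstack=inputs=2[v];[0:a][1:a]amix=inputs=2[a]" \
      -map "[v]" -map "[a]" -c:v libx264 -c:a aac -strict -2 merged.mp4

    From PHP such a command would typically be run through exec() or proc_open(), and the resulting single file served to the player.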

  • Random segmentation fault with avcodec_encode_video2()

    10 August 2015, by Seba Arriagada

    This is my first question, so I hope I did it correctly. If not, please let me know so I can fix it.

    I'm trying to convert a short (10 second) mp4 video file into a gif using the ffmpeg libraries (I'm pretty new to ffmpeg). The program works pretty well converting to gif, but sometimes it randomly crashes.

    This is the version of the ffmpeg libraries I'm using:

    libavutil      54. 27.100
    libavcodec     56. 41.100
    libavformat    56. 36.100
    libavdevice    56.  4.100
    libavfilter     5. 16.101
    libavresample   2.  1.  0
    libswscale      3.  1.101
    libswresample   1.  2.100
    libpostproc    53.  3.100

    I'm using a 1920x1080p video, so in order to generate the gif I'm doing a pixel format conversion from AV_PIX_FMT_YUV420P to AV_PIX_FMT_RGB8, together with a resize from the initial resolution down to 432x240.

    Here is the code:

    int VideoManager::loadVideo(QString filename, bool showInfo)
    {
       if(avformat_open_input(&iFmtCtx, filename.toStdString().c_str(), 0, 0) < 0)
       {
           qDebug() << "Could not open input file " << filename;
           closeInput();
           return -1;
       }
       if (avformat_find_stream_info(iFmtCtx, 0) < 0)
       {
           qDebug() << "Failed to retrieve input stream information";
           closeInput();
           return -2;
       }

       videoStreamIndex = -1;
       for(unsigned int i = 0; i < iFmtCtx->nb_streams; ++i)
           if(iFmtCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
           {
               videoStreamIndex = i;
               break;
           }

       if(videoStreamIndex == -1)
       {
           qDebug() << "Didn't find any video stream!";
           closeInput();
           return -3;
       }
       iCodecCtx = iFmtCtx->streams[videoStreamIndex]->codec;

       iCodec = avcodec_find_decoder(iCodecCtx->codec_id);
       if(iCodec == NULL) // Codec not found
       {
           qDebug() << "Codec not found!";
           closeInput();
           return -4;
       }
       if(avcodec_open2(iCodecCtx, iCodec, NULL) < 0)
       {
           qDebug() << "Could not open codec!";
           closeInput();
           return -1;
       }

       if(showInfo)
           av_dump_format(iFmtCtx, 0, filename.toStdString().c_str(), 0);

       return 0;
    }

    void VideoManager::generateGif(QString filename)
    {
       int ret, frameCount = 0;
       AVPacket packet;
       packet.data = NULL;
       packet.size = 0;
       AVFrame *frame = NULL;
       unsigned int stream_index;
       int got_frame;

       gifHeight = iFmtCtx->streams[videoStreamIndex]->codec->height;
       gifWidth  = iFmtCtx->streams[videoStreamIndex]->codec->width;

       if(gifHeight > MAX_GIF_HEIGHT || gifWidth > MAX_GIF_WIDTH)
       {
           if(gifHeight > gifWidth)
           {
               gifWidth  = (float)gifWidth * ( (float)MAX_GIF_HEIGHT / (float)gifHeight );
               gifHeight = MAX_GIF_HEIGHT;
           }
           else
           {
               gifHeight = (float)gifHeight * ( (float)MAX_GIF_WIDTH / (float)gifWidth );
               gifWidth  = MAX_GIF_WIDTH;
           }
       }


       if(openOutputFile(filename.toStdString().c_str()) < 0)
       {
           qDebug() << "Error openning output file: " << filename;
           return;
       }

       while (1) {
           int ret = av_read_frame(iFmtCtx, &packet);
           if (ret < 0)
           {
               if(ret != AVERROR_EOF)
                   qDebug() << "Error reading frame: " << ret;
               break;
           }
           stream_index = packet.stream_index;

           if(stream_index == videoStreamIndex)
           {
               frame = av_frame_alloc();
               if (!frame) {
                   qDebug() << "Error allocating frame";
                   break;
               }
               av_packet_rescale_ts(&packet,
                                    iFmtCtx->streams[stream_index]->time_base,
                                    iFmtCtx->streams[stream_index]->codec->time_base);

               ret = avcodec_decode_video2(iFmtCtx->streams[stream_index]->codec, frame,
                       &got_frame, &packet);
               if (ret < 0) {
                   qDebug() << "Decoding failed";
                   break;
               }

               if(got_frame)
               {
                   qDebug() << ++frameCount;
                   nframes++;
                   frame->pts = av_frame_get_best_effort_timestamp(frame);

                   ////////////////////////////////////////////////////////////////////////////////
                   /// Pixel format convertion and resize
                   ////////////////////////////////////////////////////////////////////////////////
                   uint8_t *out_buffer = NULL;
                   SwsContext *img_convert_ctx = NULL;
                   AVFrame *pFrameRGB = av_frame_alloc();

                   if(pFrameRGB == NULL)
                   {
                       qDebug() << "Error allocating frameRGB";
                       break;
                   }

                   AVPixelFormat pixFmt;
                   switch (iFmtCtx->streams[stream_index]->codec->pix_fmt)
                   {
                   case AV_PIX_FMT_YUVJ420P : pixFmt = AV_PIX_FMT_YUV420P; break;
                   case AV_PIX_FMT_YUVJ422P : pixFmt = AV_PIX_FMT_YUV422P; break;
                   case AV_PIX_FMT_YUVJ444P : pixFmt = AV_PIX_FMT_YUV444P; break;
                   case AV_PIX_FMT_YUVJ440P : pixFmt = AV_PIX_FMT_YUV440P; break;
                   default:
                       pixFmt = iFmtCtx->streams[stream_index]->codec->pix_fmt;
                   }

                   out_buffer = (uint8_t*)av_malloc( avpicture_get_size( AV_PIX_FMT_RGB8,
                                                     gifWidth,
                                                     gifHeight ));
                   if(!out_buffer)
                   {
                       qDebug() << "Error alocatting out_buffer!";
                   }
                   avpicture_fill((AVPicture *)pFrameRGB, out_buffer, AV_PIX_FMT_RGB8,
                                  gifWidth,
                                  gifHeight);
                   img_convert_ctx = sws_getContext( iFmtCtx->streams[stream_index]->codec->width,
                                                     iFmtCtx->streams[stream_index]->codec->height,
                                                     pixFmt,
                                                     gifWidth,
                                                     gifHeight,
                                                     AV_PIX_FMT_RGB8,
                                                     SWS_ERROR_DIFFUSION, NULL, NULL, NULL );

                   if(!img_convert_ctx)
                   {
                       qDebug() << "error getting sws context";
                   }

                   sws_scale( img_convert_ctx, (const uint8_t* const*)frame->data,
                              frame->linesize, 0,
                              iFmtCtx->streams[stream_index]->codec->height,
                              pFrameRGB->data,
                              pFrameRGB->linesize );

                   pFrameRGB->format = AV_PIX_FMT_RGB8;
                   pFrameRGB->pts = frame->pts;
                   pFrameRGB->best_effort_timestamp = frame->best_effort_timestamp;
                   pFrameRGB->width = gifWidth;
                   pFrameRGB->height = gifHeight;
                   pFrameRGB->pkt_dts = frame->pkt_dts;
                   pFrameRGB->pkt_pts = frame->pkt_pts;
                   pFrameRGB->pkt_duration = frame->pkt_duration;
                   pFrameRGB->pkt_pos = frame->pkt_pos;
                   pFrameRGB->pkt_size = frame->pkt_size;
                   pFrameRGB->interlaced_frame = frame->interlaced_frame;
                   ////////////////////////////////////////////////////////////////////////////////
                   ret = encodeAndWriteFrame(pFrameRGB, stream_index, NULL);
                   //av_frame_free(&frame);
                   //av_free(out_buffer);
                   //sws_freeContext(img_convert_ctx);
                   if (ret < 0)
                   {
                       qDebug() << "Error encoding and writting frame";
                       //av_free_packet(&packet);
                       closeOutput();
                   }
               }
               else {
                   //av_frame_free(&frame);
               }
           }
           av_free_packet(&packet);
       }

       ret = flushEncoder(videoStreamIndex);
       if (ret < 0)
       {
           qDebug() << "Flushing encoder failed";
       }

       av_write_trailer(oFmtCtx);

       //av_free_packet(&packet);
       //av_frame_free(&frame);
       closeOutput();
    }


    void VideoManager::closeOutput()
    {
       if (oFmtCtx && oFmtCtx->nb_streams > 0 && oFmtCtx->streams[0] && oFmtCtx->streams[0]->codec)
           avcodec_close(oFmtCtx->streams[0]->codec);
       if (oFmtCtx && oFmt && !(oFmt->flags & AVFMT_NOFILE))
           avio_closep(&oFmtCtx->pb);
       avformat_free_context(oFmtCtx);
    }

    int VideoManager::openOutputFile(const char *filename)
    {
       AVStream *out_stream;
       AVStream *in_stream;
       AVCodecContext *dec_ctx, *enc_ctx;
       AVCodec *encoder;
       int ret;

       oFmtCtx = NULL;
       avformat_alloc_output_context2(&oFmtCtx, NULL, NULL, filename);
       if (!oFmtCtx) {
           qDebug() << "Could not create output context";
           return AVERROR_UNKNOWN;
       }

       oFmt = oFmtCtx->oformat;

       out_stream = avformat_new_stream(oFmtCtx, NULL);
       if (!out_stream) {
           qDebug() << "Failed allocating output stream";
           return AVERROR_UNKNOWN;
       }

       in_stream = iFmtCtx->streams[videoStreamIndex];
       dec_ctx = in_stream->codec;
       enc_ctx = out_stream->codec;

       encoder = avcodec_find_encoder(AV_CODEC_ID_GIF);
       if (!encoder) {
           qDebug() << "FATAL!: Necessary encoder not found";
           return AVERROR_INVALIDDATA;
       }

       enc_ctx->height = gifHeight;    
       enc_ctx->width = gifWidth;      
       enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
       enc_ctx->pix_fmt = AV_PIX_FMT_RGB8;
       enc_ctx->time_base = dec_ctx->time_base;
       ret = avcodec_open2(enc_ctx, encoder, NULL);
       if (ret < 0) {
           qDebug() << "Cannot open video encoder for gif";
           return ret;
       }

       if (oFmt->flags & AVFMT_GLOBALHEADER)
           enc_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;

       if (!(oFmt->flags & AVFMT_NOFILE)) {
           ret = avio_open(&oFmtCtx->pb, filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               qDebug() << "Could not open output file " << filename;
               return ret;
           }
       }

       ret = avformat_write_header(oFmtCtx, NULL);
       if (ret < 0) {
           qDebug() << "Error occurred when opening output file";
           return ret;
       }

       return 0;
    }


    int VideoManager::encodeAndWriteFrame(AVFrame *frame, unsigned int stream_index, int *got_frame) {
       int ret;
       int got_frame_local;
       AVPacket enc_pkt;

       if (!got_frame)
           got_frame = &got_frame_local;

       enc_pkt.data = NULL;
       enc_pkt.size = 0;
       av_init_packet(&enc_pkt);
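       // Note: stream_index here is the *input* video stream index, but it is used
       // below to index oFmtCtx->streams, and the output context created in
       // openOutputFile() only has one stream (index 0). The two indices only match
       // by coincidence -- this is the mismatch the asker points out in the EDIT below.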
       ret = avcodec_encode_video2(oFmtCtx->streams[stream_index]->codec, &enc_pkt,
               frame, got_frame);
       //av_frame_free(&frame);
       if (ret < 0)
           return ret;
       if (!(*got_frame))
           return 0;

       enc_pkt.stream_index = stream_index;
       av_packet_rescale_ts(&enc_pkt,
                            oFmtCtx->streams[stream_index]->codec->time_base,
                            oFmtCtx->streams[stream_index]->time_base);

       ret = av_interleaved_write_frame(oFmtCtx, &enc_pkt);
       return ret;
    }


    int VideoManager::flushEncoder(unsigned int stream_index)
    {
       int ret;
       int got_frame;

       if (!(oFmtCtx->streams[stream_index]->codec->codec->capabilities &
                   CODEC_CAP_DELAY))
           return 0;

       while (1) {
           ret = encodeAndWriteFrame(NULL, stream_index, &got_frame);
           if (ret < 0)
               break;
           if (!got_frame)
               return 0;
       }
       return ret;
    }

    I know there are a lot of memory leaks. I deleted or commented out most of the free calls intentionally because I thought that was the problem.

    I'm using Qt Creator, so when I debug the program this is the output:

    Level Function                            Line
    0     av_image_copy                       303
    1     frame_copy_video                    650    
    2     av_frame_copy                       687    
    3     av_frame_ref                        384    
    4     gif_encode_frame                    307    
    5     avcodec_encode_video2               2191    
    6     VideoManager::encodeAndWriteFrame   813    
    7     VideoManager::generateGif           375    
    8     qMain                               31    
    9     WinMain*16                          112    
    10    main

    I've checked whether there is a specific frame the program crashes at, but it's a random frame too.

    Any idea what I'm doing wrong? Any help would be much appreciated.

    EDIT:

    After a few days of pain, suffering and frustration I decided to write the whole code from scratch. Both times I started from this example and modified it to work as I described before. And it works perfectly now :D! The only error I could find in the old code (posted above) is that, when trying to access the video stream in the output file, I used videoStreamIndex, but that index refers to the video stream in the input file. Sometimes it happens to be the same index and sometimes not, but that doesn't explain why it crashed randomly: if it were the reason for the crash, it should crash every time I run the code with the same video. So there are probably more errors in that code.
    Note that I have not tested whether fixing that error in the code above actually solves the crashing problem.