Advanced search

Media (1)

Word: - Tags -/censure

Other articles (81)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    For a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

On other sites (8883)

  • How to keep transparency when scaling a WEBM file with ffmpeg

    5 October 2022, by Sonia Kidman

    I'm using ffmpeg to scale my WEBM file, using the command below:
ffmpeg -i in.webm -c:v libvpx -vf scale=100:100 out.webm
The output has the correct resolution, as expected, but the transparent background becomes black.

    Could someone give me a solution for this?

    Thank you so much.

    Below is the log of the operation:

    ffmpeg version 3.4 Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 7.2.0 (GCC)
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth --enable-libmfx
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
Splitting the commandline.
Reading option '-v' ... matched as option 'v' (set logging level) with argument '56'.
Reading option '-i' ... matched as input url with argument 'in.webm'.
Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'libvpx'.
Reading option '-vf' ... matched as option 'vf' (set video filters) with argument 'scale=320:240'.
Reading option 'out.webm' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option v (set logging level) with argument 56.
Successfully parsed a group of options.
Parsing a group of options: input url in.webm.
Successfully parsed a group of options.
Opening an input file: in.webm.
[NULL @ 000002387e6322a0] Opening 'in.webm' for reading
[file @ 000002387e632ea0] Setting default whitelist 'file,crypto'
Probing matroska,webm score:100 size:2048
Probing mp3 score:1 size:2048
[matroska,webm @ 000002387e6322a0] Format matroska,webm probed with size=2048 and score=100
st:0 removing common factor 1000000 from timebase
[matroska,webm @ 000002387e6322a0] Before avformat_find_stream_info() pos: 634 bytes read:32768 seeks:0 nb_streams:1
[matroska,webm @ 000002387e6322a0] All info found
[matroska,webm @ 000002387e6322a0] stream 0: start_time: 0.000 duration: -9223372036854776.000
[matroska,webm @ 000002387e6322a0] format: start_time: 0.000 duration: 0.400 bitrate=1432 kb/s
[matroska,webm @ 000002387e6322a0] After avformat_find_stream_info() pos: 34843 bytes read:65536 seeks:0 frames:1
Input #0, matroska,webm, from 'in.webm':
  Metadata:
    ENCODER         : Lavf57.83.100
  Duration: 00:00:00.40, start: 0.000000, bitrate: 1432 kb/s
    Stream #0:0, 1, 1/1000: Video: vp8, 1 reference frame, yuv420p(progressive), 640x480, 0/1, SAR 1:1 DAR 4:3, 10 fps, 10 tbr, 1k tbn, 1k tbc (default)
    Metadata:
      alpha_mode      : 1
      ENCODER         : Lavc57.107.100 libvpx
      DURATION        : 00:00:00.400000000
Successfully opened the file.
Parsing a group of options: output url out.webm.
Applying option c:v (codec name) with argument libvpx.
Applying option vf (set video filters) with argument scale=320:240.
Successfully parsed a group of options.
Opening an output file: out.webm.
[file @ 000002387e658b40] Setting default whitelist 'file,crypto'
Successfully opened the file.
detected 4 logical cores
Stream mapping:
  Stream #0:0 -> #0:0 (vp8 (native) -> vp8 (libvpx))
Press [q] to stop, [?] for help
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    Last message repeated 4 times
[Parsed_scale_0 @ 000002387e718a60] Setting 'w' to value '320'
[Parsed_scale_0 @ 000002387e718a60] Setting 'h' to value '240'
[Parsed_scale_0 @ 000002387e718a60] Setting 'flags' to value 'bicubic'
[Parsed_scale_0 @ 000002387e718a60] w:320 h:240 flags:'bicubic' interl:0
[graph 0 input from stream 0:0 @ 000002387e743b00] Setting 'video_size' to value '640x480'
[graph 0 input from stream 0:0 @ 000002387e743b00] Setting 'pix_fmt' to value '0'
[graph 0 input from stream 0:0 @ 000002387e743b00] Setting 'time_base' to value '1/1000'
[graph 0 input from stream 0:0 @ 000002387e743b00] Setting 'pixel_aspect' to value '1/1'
[graph 0 input from stream 0:0 @ 000002387e743b00] Setting 'sws_param' to value 'flags=2'
[graph 0 input from stream 0:0 @ 000002387e743b00] Setting 'frame_rate' to value '10/1'
[graph 0 input from stream 0:0 @ 000002387e743b00] w:640 h:480 pixfmt:yuv420p tb:1/1000 fr:10/1 sar:1/1 sws_param:flags=2
[format @ 000002387e7fe1e0] compat: called with args=[yuv420p|yuva420p]
[format @ 000002387e7fe1e0] Setting 'pix_fmts' to value 'yuv420p|yuva420p'
[AVFilterGraph @ 000002387e634e60] query_formats: 4 queried, 3 merged, 0 already done, 0 delayed
[Parsed_scale_0 @ 000002387e718a60] w:640 h:480 fmt:yuv420p sar:1/1 -> w:320 h:240 fmt:yuv420p sar:1/1 flags:0x4
[libvpx @ 000002387e657fe0] v1.6.1
[libvpx @ 000002387e657fe0] --prefix=/Users/kyle/software/libvpx/win64/libvpx-1.6.1-win64 --target=x86_64-win64-gcc
[libvpx @ 000002387e657fe0] vpx_codec_enc_cfg
[libvpx @ 000002387e657fe0] generic settings
  g_usage:                      0
  g_threads:                    0
  g_profile:                    0
  g_w:                          320
  g_h:                          240
  g_bit_depth:                  8
  g_input_bit_depth:            8
  g_timebase:                   {1/30}
  g_error_resilient:            0
  g_pass:                       0
  g_lag_in_frames:              0
[libvpx @ 000002387e657fe0] rate control settings
  rc_dropframe_thresh:          0
  rc_resize_allowed:            0
  rc_resize_up_thresh:          60
  rc_resize_down_thresh:        30
  rc_end_usage:                 0
  rc_twopass_stats_in:          0000000000000000(0)
  rc_target_bitrate:            256
[libvpx @ 000002387e657fe0] quantizer settings
  rc_min_quantizer:             4
  rc_max_quantizer:             63
[libvpx @ 000002387e657fe0] bitrate tolerance
  rc_undershoot_pct:            100
  rc_overshoot_pct:             100
[libvpx @ 000002387e657fe0] decoder buffer model
  rc_buf_sz:                    6000
  rc_buf_initial_sz:            4000
  rc_buf_optimal_sz:            5000
[libvpx @ 000002387e657fe0] 2 pass rate control settings
  rc_2pass_vbr_bias_pct:        50
  rc_2pass_vbr_minsection_pct:  0
  rc_2pass_vbr_maxsection_pct:  400
[libvpx @ 000002387e657fe0] keyframing settings
  kf_mode:                      1
  kf_min_dist:                  0
  kf_max_dist:                  128
[libvpx @ 000002387e657fe0] 
[libvpx @ 000002387e657fe0] vpx_codec_enc_cfg
[libvpx @ 000002387e657fe0] generic settings
  g_usage:                      0
  g_threads:                    0
  g_profile:                    0
  g_w:                          320
  g_h:                          240
  g_bit_depth:                  8
  g_input_bit_depth:            8
  g_timebase:                   {1/10}
  g_error_resilient:            0
  g_pass:                       0
  g_lag_in_frames:              25
[libvpx @ 000002387e657fe0] rate control settings
  rc_dropframe_thresh:          0
  rc_resize_allowed:            0
  rc_resize_up_thresh:          60
  rc_resize_down_thresh:        30
  rc_end_usage:                 0
  rc_twopass_stats_in:          0000000000000000(0)
  rc_target_bitrate:            200
[libvpx @ 000002387e657fe0] quantizer settings
  rc_min_quantizer:             4
  rc_max_quantizer:             63
[libvpx @ 000002387e657fe0] bitrate tolerance
  rc_undershoot_pct:            100
  rc_overshoot_pct:             100
[libvpx @ 000002387e657fe0] decoder buffer model
  rc_buf_sz:                    6000
  rc_buf_initial_sz:            4000
  rc_buf_optimal_sz:            5000
[libvpx @ 000002387e657fe0] 2 pass rate control settings
  rc_2pass_vbr_bias_pct:        50
  rc_2pass_vbr_minsection_pct:  0
  rc_2pass_vbr_maxsection_pct:  400
[libvpx @ 000002387e657fe0] keyframing settings
  kf_mode:                      1
  kf_min_dist:                  0
  kf_max_dist:                  128
[libvpx @ 000002387e657fe0] 
[libvpx @ 000002387e657fe0] vpx_codec_control
[libvpx @ 000002387e657fe0]   VP8E_SET_CPUUSED:             1
[libvpx @ 000002387e657fe0]   VP8E_SET_ARNR_MAXFRAMES:      0
[libvpx @ 000002387e657fe0]   VP8E_SET_ARNR_STRENGTH:       3
[libvpx @ 000002387e657fe0]   VP8E_SET_ARNR_TYPE:           3
[libvpx @ 000002387e657fe0]   VP8E_SET_NOISE_SENSITIVITY:   0
[libvpx @ 000002387e657fe0]   VP8E_SET_TOKEN_PARTITIONS:    0
[libvpx @ 000002387e657fe0]   VP8E_SET_STATIC_THRESHOLD:    0
[libvpx @ 000002387e657fe0] Using deadline: 1000000
Output #0, webm, to 'out.webm':
  Metadata:
    encoder         : Lavf57.83.100
    Stream #0:0, 0, 1/1000: Video: vp8 (libvpx), 1 reference frame, yuv420p, 320x240 [SAR 1:1 DAR 4:3], 0/1, q=-1--1, 200 kb/s, 10 fps, 1k tbn, 10 tbc (default)
    Metadata:
      alpha_mode      : 1
      DURATION        : 00:00:00.400000000
      encoder         : Lavc57.107.100 libvpx
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Clipping frame in rate conversion by 0.000008
[webm @ 000002387e656880] get_metadata_duration returned: 400000
[webm @ 000002387e656880] Write early duration from metadata = 400
[webm @ 000002387e656880] Writing block at offset 3, size 11223, pts 0, dts 0, duration 100, keyframe 1
[webm @ 000002387e656880] Writing block at offset 11233, size 1288, pts 100, dts 100, duration 100, keyframe 0
[webm @ 000002387e656880] Writing block at offset 12528, size 1504, pts 200, dts 200, duration 100, keyframe 0
[webm @ 000002387e656880] Writing block at offset 14039, size 2481, pts 300, dts 300, duration 100, keyframe 0
[out_0_0 @ 000002387e743d60] EOF on sink link out_0_0:default.
No more output streams to write to, finishing.
[webm @ 000002387e656880] end duration = 400
[webm @ 000002387e656880] stream 0 end duration = 400
frame=    4 fps=0.0 q=0.0 Lsize=      17kB time=00:00:00.30 bitrate= 457.8kbits/s speed=4.45x    
video:16kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 4.413191%
Input file #0 (in.webm):
  Input stream #0:0 (video): 4 packets read (34992 bytes); 4 frames decoded; 
  Total: 4 packets (34992 bytes) demuxed
Output file #0 (out.webm):
  Output stream #0:0 (video): 4 frames encoded; 4 packets muxed (16496 bytes); 
  Total: 4 packets (16496 bytes) muxed
4 frames successfully decoded, 0 decoding errors
[AVIOContext @ 000002387e698c20] Statistics: 14 seeks, 10 writeouts
[AVIOContext @ 000002387cc773e0] Statistics: 71649 bytes read, 0 seeks

  • avcodec_receive_packet() doesn't see the output

    1 March 2018, by Eugene Alexeev

    I’m trying to create a converter which will make a video out of a set of images. Everything is in place: AVFormatContext, AVCodecContext, AVCodec. I create a YUV AVFrame out of a UIImage and send it to the encoder with avcodec_send_frame(). Everything goes fine until I try to get an AVPacket with avcodec_receive_packet(). Every time it returns -35, which means "output is not available in the current state - user must try to send input". As I said, I’m sending input before trying to receive anything, and the send succeeds.

    Here’s my code:

    Initializing the ffmpeg entities:

    - (BOOL)setupForConvert:(DummyFVPVideoFile *)videoFile outputPath:(NSString *)path
    {
       if (!videoFile) {
           [self.delegate convertationFailed:@"VideoFile is nil!"];
           return NO;
       }
       currentVideoFile = videoFile;
       outputPath = path;
       BOOL success = NO;

       success = [self initFormatCtxAndCodecs:path];
       if (!success) {
           return NO;
       }

       success = [self addCameraStreams:videoFile];
       if (!success) {
           return NO;
       }

       success = [self openIOContext:path];
       if (!success) {
           return NO;
       }

       return YES;
    }

    - (BOOL)initFormatCtxAndCodecs:(NSString *)path
    {
       //AVOutputFormat *fmt = av_guess_format("mp4", NULL, NULL);
       int ret = avformat_alloc_output_context2(&pFormatCtx, NULL, NULL, [path UTF8String]);
       if (ret < 0) {
           NSLog(@"Couldn't create output context");
           return NO;
       }

       //encoder codec init
       pCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
       if (!pCodec) {
           NSLog(@"Couldn't find a encoder codec!");
           return NO;
       }

       pCodecCtx = avcodec_alloc_context3(pCodec);
       if (!pCodecCtx) {
           NSLog(@"Couldn't alloc encoder codec context!");
           return NO;
       }

       pCodecCtx->codec_tag = AV_CODEC_ID_H264;
       pCodecCtx->bit_rate = 400000;
       pCodecCtx->width = currentVideoFile.size.width;
       pCodecCtx->height = currentVideoFile.size.height;
       pCodecCtx->time_base = (AVRational){1, (int)currentVideoFile.framerate};
       pCodecCtx->framerate = (AVRational){(int)currentVideoFile.framerate, 1};
       pCodecCtx->gop_size = 10;
       pCodecCtx->max_b_frames = 1;
       pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;

       if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
           NSLog(@"Couldn't open the encoder codec!");
           return NO;
       }

       pPacket = av_packet_alloc();

       return YES;
    }

    - (BOOL)addCameraStreams:(DummyFVPVideoFile *)videoFile
    {
       AVCodecParameters *params = avcodec_parameters_alloc();
       if (!params) {
           NSLog(@"Couldn't allocate codec parameters!");
           return NO;
       }

       if (avcodec_parameters_from_context(params, pCodecCtx) < 0) {
           NSLog(@"Couldn't copy parameters from context!");
           return NO;
       }

       for (int i = 0; i < videoFile.idCameras.count - 1; i++)
       {
           NSString *path = [videoFile.url URLByAppendingPathComponent:videoFile.idCameras[i]].path;
           AVStream *stream = avformat_new_stream(pFormatCtx, pCodec);
           if (!stream) {
               NSLog(@"Couldn't alloc stream!");
               return NO;
           }

           if (avcodec_parameters_copy(stream->codecpar, params) < 0) {
               NSLog(@"Couldn't copy parameters into stream!");
               return NO;
           }

           stream->avg_frame_rate.num = videoFile.framerate;
           stream->avg_frame_rate.den = 1;
           stream->codecpar->codec_tag = 0;    //some silly workaround
           stream->index = i;
           streams[path] = [[VideoStream alloc] initWithStream:stream];
       }

       return YES;
    }

    - (BOOL)openIOContext:(NSString *)path
    {
       AVIOContext *ioCtx = nil;
       if (avio_open(&ioCtx, [path UTF8String], AVIO_FLAG_WRITE) < 0) {
           return NO;
       }
       pFormatCtx->pb = ioCtx;

       return YES;
    }

    And here’s the conversion process:

    - (void)launchConvert:(DummyFVPVideoFile *)videoFile
    {
       BOOL convertInProgress = YES;
       unsigned int frameCount = 1;
       unsigned long pts = 0;
       BOOL success = NO;

       success = [self writeHeader];
       if (!success) {
           NSLog(@"Couldn't write header!");
           return;
       }

       AVRational defaultTimeBase;
       defaultTimeBase.num = 1;
       defaultTimeBase.den = videoFile.framerate;
       AVRational streamTimeBase = streams.allValues.firstObject.stream->time_base;

       while (convertInProgress)
       {
           pts += av_rescale_q(1, defaultTimeBase, streamTimeBase);
           for (NSString *path in streams.allKeys)
           {
               UIImage *img = [UIImage imageWithContentsOfFile:[NSString stringWithFormat:@"%@/%u.jpg", path, frameCount]];
               AVPacket *pkt = [self getAVPacket:img withPts:pts];
               if (!pkt->data) {   continue;   }
               pkt->stream_index = streams[path].stream->index;
               //check all settings of pkt

               if (![self writePacket:pkt]) {
                   NSLog(@"Couldn't write packet!");
                   convertInProgress = NO;
                   break;
               }
           }

           frameCount++;
       }

       success = [self writeTrailer];
       if (!success) {
           NSLog(@"Couldn't write trailer!");
           return;
       }

       NSLog(@"Convertation finished!");
       //delegate convertationFinished method
    }

    - (BOOL)writeHeader
    {
       if (avformat_write_header(pFormatCtx, NULL) < 0) {
           return NO;
       }

       return YES;
    }

    - (BOOL)writePacket:(AVPacket *)pkt
    {
       if (av_interleaved_write_frame(pFormatCtx, pkt) != 0) {
           return NO;
       }

       return YES;
    }

    - (BOOL)writeTrailer
    {
       if (av_write_trailer(pFormatCtx) != 0) {
           return NO;
       }

       return YES;
    }


    /**
    This method will create AVPacket out of UIImage.

    @return AVPacket
    */
    - (AVPacket *)getAVPacket:(UIImage *)img withPts:(unsigned long)pts
    {
       if (!img) {
           NSLog(@"imgData is nil!");
           return nil;
       }
       uint8_t *imgData = [self getPixelDataFromImage:img];

       AVFrame *frame_yuv = av_frame_alloc();
       if (!frame_yuv) {
           NSLog(@"frame_yuv is nil!");
           return nil;
       }
       frame_yuv->format = AV_PIX_FMT_YUV420P;
       frame_yuv->width = (int)img.size.width;
       frame_yuv->height = (int)img.size.height;

       int ret = av_image_alloc(frame_yuv->data,
                                  frame_yuv->linesize,
                                  frame_yuv->width,
                                  frame_yuv->height,
                                  frame_yuv->format,
                                  32);
       if (ret < 0) {
           NSLog(@"Couldn't alloc yuv frame!");
           return nil;
       }

       struct SwsContext *sws_ctx = nil;
       sws_ctx = sws_getContext((int)img.size.width, (int)img.size.height, AV_PIX_FMT_RGB24,
                                (int)img.size.width, (int)img.size.height, AV_PIX_FMT_YUV420P,
                                0, NULL, NULL, NULL);
       const uint8_t *scaleData[1] = { imgData };
       int inLineSize[1] = { 4 * img.size.width };
       sws_scale(sws_ctx, scaleData, inLineSize, 0, (int)img.size.height, frame_yuv->data, frame_yuv->linesize);

       frame_yuv->pict_type = AV_PICTURE_TYPE_I;
       frame_yuv->pts = pCodecCtx->frame_number;

       ret = avcodec_send_frame(pCodecCtx, frame_yuv);   //every time everything is fine
       if (ret != 0) {
           NSLog(@"Couldn't send yuv frame!");
           return nil;
       }

       av_init_packet(pPacket);
       pPacket->dts = pPacket->pts = pts;
       do {
           ret = avcodec_receive_packet(pCodecCtx, pPacket);   //every time -35 error
           NSLog(@"ret = %d", ret);
           if (ret == AVERROR_EOF) {
               NSLog(@"AVERROR_EOF!");
           } else if (ret == AVERROR(EAGAIN)) {
               NSLog(@"AVERROR(EAGAIN)");
           } else if (ret == AVERROR(EINVAL)) {
               NSLog(@"AVERROR(EINVAL)");
           }
           if (ret != 0) {
               NSLog(@"Couldn't receive packet!");
               //return nil;
           }
       } while ( ret == 0 );

       free(imgData);
       av_packet_unref(pPacket);
       av_packet_free(&pPacket);
       av_frame_unref(frame_yuv);
       av_frame_free(&frame_yuv);
       //perform other clean up and test dat shit

       return pPacket;
    }

    Any insights would be helpful. Thanks!

  • swscaler bad src image pointers

    7 March 2018, by user1496491

    I’m completely lost. I’m trying to capture 30 screenshots and put them into a video with FFmpeg on Windows 10, and it keeps telling me [swscaler @ 073890a0] bad src image pointers. As a result the video is entirely green. If I change the format to dshow using video=screen-capture-recorder, the video looks to be mostly garbage. Here’s my short code for that. I’m completely stuck and don’t even know in which direction to look.

    MainWindow.h

    #ifndef MAINWINDOW_H
    #define MAINWINDOW_H

    #include <QMainWindow>
    #include <QFuture>
    #include <QFutureWatcher>
    #include <QMutex>
    #include <QMutexLocker>

    extern "C" {
    #include "libavcodec/avcodec.h"
    #include "libavcodec/avfft.h"

    #include "libavdevice/avdevice.h"

    #include "libavfilter/avfilter.h"
    #include "libavfilter/avfiltergraph.h"
    #include "libavfilter/buffersink.h"
    #include "libavfilter/buffersrc.h"

    #include "libavformat/avformat.h"
    #include "libavformat/avio.h"

    #include "libavutil/opt.h"
    #include "libavutil/common.h"
    #include "libavutil/channel_layout.h"
    #include "libavutil/imgutils.h"
    #include "libavutil/mathematics.h"
    #include "libavutil/samplefmt.h"
    #include "libavutil/time.h"
    #include "libavutil/opt.h"
    #include "libavutil/pixdesc.h"
    #include "libavutil/file.h"

    #include "libswscale/swscale.h"
    }

    class MainWindow : public QMainWindow
    {
       Q_OBJECT

    public:
       MainWindow(QWidget *parent = 0);
       ~MainWindow();

    private:
       AVFormatContext *inputFormatContext = nullptr;
       AVFormatContext *outFormatContext = nullptr;

       AVStream* videoStream = nullptr;

       AVDictionary* options = nullptr;

       AVCodec* outCodec = nullptr;
       AVCodec* inputCodec = nullptr;
       AVCodecContext* inputCodecContext = nullptr;
       AVCodecContext* outCodecContext = nullptr;
       SwsContext* swsContext = nullptr;

    private:
       void init();
       void initOutFile();
       void collectFrame();
    };

    #endif // MAINWINDOW_H

    MainWindow.cpp

    #include "MainWindow.h"

    #include <QGuiApplication>
    #include <QLabel>
    #include <QScreen>
    #include <QTimer>
    #include <QLayout>
    #include <QImage>
    #include <QtConcurrent/QtConcurrent>
    #include <QThreadPool>

    #include "ScreenCapture.h"

    MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent)
    {
       resize(800, 600);

       auto label = new QLabel();
       label->setAlignment(Qt::AlignHCenter | Qt::AlignVCenter);

       auto layout = new QHBoxLayout();
       layout->addWidget(label);

       auto widget = new QWidget();
       widget->setLayout(layout);
       setCentralWidget(widget);

       init();
       initOutFile();
       collectFrame();
    }

    MainWindow::~MainWindow()
    {
       avformat_close_input(&inputFormatContext);
       avformat_free_context(inputFormatContext);

       QThreadPool::globalInstance()->waitForDone();
    }

    void MainWindow::init()
    {
       av_register_all();
       avcodec_register_all();
       avdevice_register_all();
       avformat_network_init();

       auto screen = QGuiApplication::screens()[0];
       QRect geometry = screen->geometry();

       inputFormatContext = avformat_alloc_context();

       options = NULL;
       av_dict_set(&options, "framerate", "30", NULL);
       av_dict_set(&options, "offset_x", QString::number(geometry.x()).toLatin1().data(), NULL);
       av_dict_set(&options, "offset_y", QString::number(geometry.y()).toLatin1().data(), NULL);
       av_dict_set(&options, "video_size", QString(QString::number(geometry.width()) + "x" + QString::number(geometry.height())).toLatin1().data(), NULL);
       av_dict_set(&options, "show_region", "1", NULL);

       AVInputFormat* inputFormat = av_find_input_format("gdigrab");
       avformat_open_input(&inputFormatContext, "desktop", inputFormat, &options);

       int videoStreamIndex = av_find_best_stream(inputFormatContext, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);

       inputCodecContext = inputFormatContext->streams[videoStreamIndex]->codec;
       inputCodecContext->width = geometry.width();
       inputCodecContext->height = geometry.height();
       inputCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;

       inputCodec = avcodec_find_decoder(inputCodecContext->codec_id);
       avcodec_open2(inputCodecContext, inputCodec, NULL);
    }

    void MainWindow::initOutFile()
    {
       const char* filename = "C:/Temp/output.mp4";

       avformat_alloc_output_context2(&outFormatContext, NULL, NULL, filename);

       outCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);

       videoStream = avformat_new_stream(outFormatContext, outCodec);
       videoStream->time_base = {1, 30};

       outCodecContext = videoStream->codec;
       outCodecContext->codec_id = AV_CODEC_ID_MPEG4;
       outCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
       outCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
       outCodecContext->bit_rate = 400000;
       outCodecContext->width = inputCodecContext->width;
       outCodecContext->height = inputCodecContext->height;
       outCodecContext->gop_size = 3;
       outCodecContext->max_b_frames = 2;
       outCodecContext->time_base = videoStream->time_base;

       if (outFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
           outCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

       avcodec_open2(outCodecContext, outCodec, NULL);

       if (!(outFormatContext->flags & AVFMT_NOFILE))
           avio_open2(&outFormatContext->pb, filename, AVIO_FLAG_WRITE, NULL, NULL);

       swsContext = sws_getContext(inputCodecContext->width,
                                   inputCodecContext->height,
                                   inputCodecContext->pix_fmt,
                                   outCodecContext->width,
                                   outCodecContext->height,
                                   outCodecContext->pix_fmt,
                                   SWS_BICUBIC, NULL, NULL, NULL);

       avformat_write_header(outFormatContext, &options);
    }

    void MainWindow::collectFrame()
    {
       AVFrame* frame = av_frame_alloc();
       frame->data[0] = NULL;
       frame->width = inputCodecContext->width;
       frame->height = inputCodecContext->height;
       frame->format = inputCodecContext->pix_fmt;

       av_image_alloc(frame->data, frame->linesize, inputCodecContext->width, inputCodecContext->height, (AVPixelFormat)frame->format, 32);

       AVFrame* outFrame = av_frame_alloc();
       outFrame->data[0] = NULL;
       outFrame->width = outCodecContext->width;
       outFrame->height = outCodecContext->height;
       outFrame->format = outCodecContext->pix_fmt;

       av_image_alloc(outFrame->data, outFrame->linesize, outCodecContext->width, outCodecContext->height, (AVPixelFormat)outFrame->format, 32);

       int bufferSize = av_image_get_buffer_size(outCodecContext->pix_fmt,
                                                 outCodecContext->width,
                                                 outCodecContext->height,
                                                 24);

       uint8_t* outBuffer = (uint8_t*)av_malloc(bufferSize);

       avpicture_fill((AVPicture*)outFrame, outBuffer,
                      AV_PIX_FMT_YUV420P,
                      outCodecContext->width, outCodecContext->height);

       int frameCount = 30;
       int count = 0;

       AVPacket* packet = (AVPacket*)av_malloc(sizeof(AVPacket));
       av_init_packet(packet);

       while(av_read_frame(inputFormatContext, packet) >= 0)
       {
           if(packet->stream_index == videoStream->index)
           {
               int frameFinished = 0;
           avcodec_decode_video2(inputCodecContext, frame, &frameFinished, packet);

               if(frameFinished)
               {
                   if(++count > frameCount)
                   {
                    qDebug() << "FINISHED!";
                       break;
                   }

                   sws_scale(swsContext, frame->data, frame->linesize, 0, inputCodecContext->height, outFrame->data, outFrame->linesize);

                   AVPacket outPacket;
                    av_init_packet(&outPacket);
                    outPacket.data = NULL;
                    outPacket.size = 0;

                    int got_picture = 0;
                    avcodec_encode_video2(outCodecContext, &outPacket, outFrame, &got_picture);

                   if(got_picture)
                   {
                       if(outPacket.pts != AV_NOPTS_VALUE) outPacket.pts = av_rescale_q(outPacket.pts, videoStream->codec->time_base, videoStream->time_base);
                       if(outPacket.dts != AV_NOPTS_VALUE) outPacket.dts = av_rescale_q(outPacket.dts, videoStream->codec->time_base, videoStream->time_base);

                        av_write_frame(outFormatContext, &outPacket);
                   }

                    av_packet_unref(&outPacket);
               }
           }
       }

       av_write_trailer(outFormatContext);

       av_free(outBuffer);
    }