Advanced search

Media (1)

Keyword: - Tags - / iphone

Other articles (51)

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    An explanation of the notable changes made when moving MediaSPIP from version 0.1 to version 0.3. What is new?
    Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes to your MediaSPIP, or news from your projects, on your MediaSPIP using the news section.
    In spipeo, MediaSPIP's default theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)

On other sites (4169)

  • Tcl command prompt crashes with the error "ffmpeg.exe has stopped working" with Tcl version 8.0 on Windows 7

    9 February 2019, by M. D. P

    The Tcl command prompt crashes with the error "ffmpeg.exe has stopped working" under Tcl version 8.0 on 32-bit Windows 7. My code file, "live.tcl", is as follows:

    proc live {} {
        exec ffmpeg -f dshow -s 1280x720 -i "video=Logitech HD Webcam C525" -f sdl2 - >& c:/test/temp.txt &
    }
    live

    On the other hand, my similar video-capture code, "videocapture.tcl", works at the same tclsh command prompt on Windows 7. My Tcl code for "videocapture.tcl" is:

    proc videocapture {} {
        exec ffmpeg -f dshow -t 00:00:10 -i "video=Integrated Webcam" c:/test/sample-a.avi >& temp.txt &
    }
    videocapture

    The error report is as follows:

    ffmpeg started on 2017-11-22 at 10:56:04
    Report written to "ffmpeg-20171122-105604.log"
    Command line:
    ffmpeg -f dshow -s 1280x720 -i "video=HD Webcam C525" -report -f sdl2 -
    ffmpeg version N-89127-g8f4702a93f Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 7.2.0 (GCC)
     configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth --enable-libmfx
     libavutil      56.  0.100 / 56.  0.100
     libavcodec     58.  3.103 / 58.  3.103
     libavformat    58.  2.100 / 58.  2.100
     libavdevice    58.  0.100 / 58.  0.100
     libavfilter     7.  2.100 /  7.  2.100
     libswscale      5.  0.101 /  5.  0.101
     libswresample   3.  0.101 /  3.  0.101
     libpostproc    55.  0.100 / 55.  0.100
    Splitting the commandline.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'dshow'.
    Reading option '-s' ... matched as option 's' (set frame size (WxH or abbreviation)) with argument '1280x720'.
    Reading option '-i' ... matched as input url with argument 'video=HD Webcam C525'.
    Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'sdl2'.
    Reading option '-' ... matched as output url.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option report (generate a report) with argument 1.
    Successfully parsed a group of options.
    Parsing a group of options: input url video=HD Webcam C525.
    Applying option f (force format) with argument dshow.
    Applying option s (set frame size (WxH or abbreviation)) with argument 1280x720.
    Successfully parsed a group of options.
    Opening an input file: video=HD Webcam C525.
    [dshow @ 03dfca20] Selecting pin Capture on video
    dshow passing through packet of type video size  1843200 timestamp 664898760000 orig timestamp 664898643970 graph timestamp 664898760000 diff 116030 HD Webcam C525
    [dshow @ 03dfca20] All info found
    Input #0, dshow, from 'video=HD Webcam C525':
     Duration: N/A, start: 66489.876000, bitrate: N/A
    Stream #0:0, 1, 1/10000000: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 1280x720, 10 fps, 10 tbr, 10000k tbn, 10000k tbc
    Successfully opened the file.
    Parsing a group of options: output url -.
    Applying option f (force format) with argument sdl2.
    Successfully parsed a group of options.
    Opening an output file: -.
    Successfully opened the file.
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> rawvideo (native))
    Press [q] to stop, [?] for help
    cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    [rawvideo @ 03e01660] PACKET SIZE: 1843200, STRIDE: 2560
    detected 1 logical cores
    [graph 0 input from stream 0:0 @ 03e06ae0] Setting 'video_size' to value '1280x720'
    [graph 0 input from stream 0:0 @ 03e06ae0] Setting 'pix_fmt' to value '1'
    [graph 0 input from stream 0:0 @ 03e06ae0] Setting 'time_base' to value '1/10000000'
    [graph 0 input from stream 0:0 @ 03e06ae0] Setting 'pixel_aspect' to value '0/1'
    [graph 0 input from stream 0:0 @ 03e06ae0] Setting 'sws_param' to value 'flags=2'
    [graph 0 input from stream 0:0 @ 03e06ae0] Setting 'frame_rate' to value '10000000/1000000'
    [graph 0 input from stream 0:0 @ 03e06ae0] w:1280 h:720 pixfmt:yuyv422 tb:1/10000000 fr:10000000/1000000 sar:0/1 sws_param:flags=2
    [AVFilterGraph @ 03de2780] query_formats: 3 queried, 2 merged, 0 already done, 0 delayed
    dshow passing through packet of type video size  1843200 timestamp 664899770000 orig timestamp 664899643970 graph timestamp 664899770000 diff 126030 HD Webcam C525
    dshow passing through packet of type video size  1843200 timestamp 664900890000 orig timestamp 664900643970 graph timestamp 664900890000 diff 246030 HD Webcam C525
    dshow passing through packet of type video size  1843200 timestamp 664902040000 orig timestamp 664901643970 graph timestamp 664902040000 diff 396030 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664902650000 orig timestamp 664902643970 graph timestamp 664902650000 diff 6030 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664903610000 orig timestamp 664903643970 graph timestamp 664903610000 diff -33970 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664904570000 orig timestamp 664904643970 graph timestamp 664904570000 diff -73970 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664905520000 orig timestamp 664905643970 graph timestamp 664905520000 diff -123970 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664929720000 orig timestamp 664906643970 graph timestamp 664929720000 diff 23076030 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664929760000 orig timestamp 664907643970 graph timestamp 664929760000 diff 22116030 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664929790000 orig timestamp 664908643970 graph timestamp 664929790000 diff 21146030 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664929820000 orig timestamp 664909643970 graph timestamp 664929820000 diff 20176030 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664929870000 orig timestamp 664910643970 graph timestamp 664929870000 diff 19226030 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664929910000 orig timestamp 664911643970 graph timestamp 664929910000 diff 18266030 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664929940000 orig timestamp 664912643970 graph timestamp 664929940000 diff 17296030 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664929980000 orig timestamp 664913643970 graph timestamp 664929980000 diff 16336030 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664930030000 orig timestamp 664914643970 graph timestamp 664930030000 diff 15386030 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
    dshow passing through packet of type video size  1843200 timestamp 664930070000 orig timestamp 664915643970 graph timestamp 664930070000 diff 14426030 HD Webcam C525
    [dshow @ 03dfca20] real-time buffer [HD Webcam C525] [video input] too full or near too full (121% of size: 3041280 [rtbufsize parameter])! frame dropped!
  • ffmpeg error "Could not allocate picture : Invalid argument Found Video Stream Found Audio Stream"

    26 October 2020, by Dinkan

    I am trying to write a C program that streams audio and video over the network with RTP, copying both AV codecs into an rtp_mpegts container, the equivalent of this ffmpeg command:

    


    ffmpeg -re -i Sample_AV_15min.ts -acodec copy -vcodec copy -f rtp_mpegts rtp://192.168.1.1:5004


    


    I am using muxing.c, which uses the ffmpeg libraries, as my example. The ffmpeg command-line application itself works fine.

    


    Stream details

    


    Input #0, mpegts, from 'Weather_Nation_10min.ts':
  Duration: 00:10:00.38, start: 41313.400811, bitrate: 2840 kb/s
  Program 1
    Stream #0:0[0x11]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 1440x1080 [SAR 4:3 DAR 16:9], 29.97 fps, 59.94 tbr, 90k tbn, 59.94 tbc
    Stream #0:1[0x14]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 448 kb/s
Output #0, rtp_mpegts, to 'rtp://192.168.1.1:5004':
  Metadata:
    encoder         : Lavf54.63.104
    Stream #0:0: Video: h264 ([27][0][0][0] / 0x001B), yuv420p, 1440x1080 [SAR 4:3 DAR 16:9], q=2-31, 29.97 fps, 90k tbn, 29.97 tbc
    Stream #0:1: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, 448 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)


    


    However, my application fails with

    


    ./my_test_app Sample_AV_15min.ts rtp://192.168.1.1:5004  
[h264 @ 0x800b30] non-existing PPS referenced                                  
[h264 @ 0x800b30] non-existing PPS 0 referenced                        
[h264 @ 0x800b30] decode_slice_header error                            
[h264 @ 0x800b30] no frame! 

[....snipped...]
[h264 @ 0x800b30] non-existing PPS 0 referenced        
[h264 @ 0x800b30] non-existing PPS referenced  
[h264 @ 0x800b30] non-existing PPS 0 referenced  
[h264 @ 0x800b30] decode_slice_header error  
[h264 @ 0x800b30] no frame!  
[h264 @ 0x800b30] mmco: unref short failure  
[h264 @ 0x800b30] mmco: unref short failure

[mpegts @ 0x800020] max_analyze_duration 5000000 reached at 5024000 microseconds  
[mpegts @ 0x800020] PES packet size mismatch could not find codec tag for codec id 
17075200, default to 0.  could not find codec tag for codec id 86019, default to 0.  
Could not allocate picture: Invalid argument  
Found Video Stream Found Audio Stream


    


    How do I fix this? My complete source code, based on muxing.c, is below:

    


/**
 * @file
 * libavformat API example.
 *
 * Output a media file in any supported libavformat format.
 * The default codecs are used.
 * @example doc/examples/muxing.c
 */

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>

#include <libavutil/mathematics.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>

/* 5 seconds stream duration */
#define STREAM_DURATION   200.0
#define STREAM_FRAME_RATE 25 /* 25 images/s */
#define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
#define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */

static int sws_flags = SWS_BICUBIC;

/**************************************************************/
/* audio output */

static float t, tincr, tincr2;
static int16_t *samples;
static int audio_input_frame_size;
#if 0
/* Add an output stream. */
static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
                            enum AVCodecID codec_id)
{
    AVCodecContext *c;
    AVStream *st;

    /* find the encoder */
    *codec = avcodec_find_encoder(codec_id);
    if (!(*codec)) {
        fprintf(stderr, "Could not find encoder for '%s'\n",
                avcodec_get_name(codec_id));
        exit(1);
    }

    st = avformat_new_stream(oc, *codec);
    if (!st) {
        fprintf(stderr, "Could not allocate stream\n");
        exit(1);
    }
    st->id = oc->nb_streams-1;
    c = st->codec;

    switch ((*codec)->type) {
    case AVMEDIA_TYPE_AUDIO:
        st->id = 1;
        c->sample_fmt  = AV_SAMPLE_FMT_S16;
        c->bit_rate    = 64000;
        c->sample_rate = 44100;
        c->channels    = 2;
        break;

    case AVMEDIA_TYPE_VIDEO:
        c->codec_id = codec_id;

        c->bit_rate = 400000;
        /* Resolution must be a multiple of two. */
        c->width    = 352;
        c->height   = 288;
        /* timebase: This is the fundamental unit of time (in seconds) in terms
         * of which frame timestamps are represented. For fixed-fps content,
         * timebase should be 1/framerate and timestamp increments should be
         * identical to 1. */
        c->time_base.den = STREAM_FRAME_RATE;
        c->time_base.num = 1;
        c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
        c->pix_fmt       = STREAM_PIX_FMT;
        if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
            /* just for testing, we also add B frames */
            c->max_b_frames = 2;
        }
        if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
            /* Needed to avoid using macroblocks in which some coeffs overflow.
             * This does not happen with normal video, it just happens here as
             * the motion of the chroma plane does not match the luma plane. */
            c->mb_decision = 2;
        }
    break;

    default:
        break;
    }

    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= CODEC_FLAG_GLOBAL_HEADER;

    return st;
}
#endif
/**************************************************************/
/* audio output */

static float t, tincr, tincr2;
static int16_t *samples;
static int audio_input_frame_size;

static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
    AVCodecContext *c;
    int ret;

    c = st->codec;

    /* open it */
    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0) {
        fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
        exit(1);
    }

    /* init signal generator */
    t     = 0;
    tincr = 2 * M_PI * 110.0 / c->sample_rate;
    /* increment frequency by 110 Hz per second */
    tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;

    if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)
        audio_input_frame_size = 10000;
    else
        audio_input_frame_size = c->frame_size;
    samples = av_malloc(audio_input_frame_size *
                        av_get_bytes_per_sample(c->sample_fmt) *
                        c->channels);
    if (!samples) {
        fprintf(stderr, "Could not allocate audio samples buffer\n");
        exit(1);
    }
}

/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
 * 'nb_channels' channels. */
static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)
{
    int j, i, v;
    int16_t *q;

    q = samples;
    for (j = 0; j < frame_size; j++) {
        v = (int)(sin(t) * 10000);
        for (i = 0; i < nb_channels; i++)
            *q++ = v;
        t     += tincr;
        tincr += tincr2;
    }
}

static void write_audio_frame(AVFormatContext *oc, AVStream *st)
{
    AVCodecContext *c;
    AVPacket pkt = { 0 }; // data and size must be 0;
    AVFrame *frame = avcodec_alloc_frame();
    int got_packet, ret;

    av_init_packet(&pkt);
    c = st->codec;

    get_audio_frame(samples, audio_input_frame_size, c->channels);
    frame->nb_samples = audio_input_frame_size;
    avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
                             (uint8_t *)samples,
                             audio_input_frame_size *
                             av_get_bytes_per_sample(c->sample_fmt) *
                             c->channels, 1);

    ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
    if (ret < 0) {
        fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
        exit(1);
    }

    if (!got_packet)
        return;

    pkt.stream_index = st->index;

    /* Write the compressed frame to the media file. */
    ret = av_interleaved_write_frame(oc, &pkt);
    if (ret != 0) {
        fprintf(stderr, "Error while writing audio frame: %s\n",
                av_err2str(ret));
        exit(1);
    }
    avcodec_free_frame(&frame);
}

static void close_audio(AVFormatContext *oc, AVStream *st)
{
    avcodec_close(st->codec);

    av_free(samples);
}

/**************************************************************/
/* video output */

static AVFrame *frame;
static AVPicture src_picture, dst_picture;
static int frame_count;

static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
    int ret;
    AVCodecContext *c = st->codec;

    /* open the codec */
    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0) {
        fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
        exit(1);
    }

    /* allocate and init a re-usable frame */
    frame = avcodec_alloc_frame();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }

    /* Allocate the encoded raw picture. */
    ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate picture: %s\n", av_err2str(ret));
        exit(1);
    }

    /* If the output format is not YUV420P, then a temporary YUV420P
     * picture is needed too. It is then converted to the required
     * output format. */
    if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
        ret = avpicture_alloc(&src_picture, AV_PIX_FMT_YUV420P, c->width, c->height);
        if (ret < 0) {
            fprintf(stderr, "Could not allocate temporary picture: %s\n",
                    av_err2str(ret));
            exit(1);
        }
    }

    /* copy data and linesize picture pointers to frame */
    *((AVPicture *)frame) = dst_picture;
}

/* Prepare a dummy image. */
static void fill_yuv_image(AVPicture *pict, int frame_index,
                           int width, int height)
{
    int x, y, i;

    i = frame_index;

    /* Y */
    for (y = 0; y < height; y++)
        for (x = 0; x < width; x++)
            pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;

    /* Cb and Cr */
    for (y = 0; y < height / 2; y++) {
        for (x = 0; x < width / 2; x++) {
            pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
            pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
        }
    }
}

static void write_video_frame(AVFormatContext *oc, AVStream *st)
{
    int ret;
    static struct SwsContext *sws_ctx;
    AVCodecContext *c = st->codec;

    if (frame_count >= STREAM_NB_FRAMES) {
        /* No more frames to compress. The codec has a latency of a few
         * frames if using B-frames, so we get the last frames by
         * passing the same picture again. */
    } else {
        if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
            /* as we only generate a YUV420P picture, we must convert it
             * to the codec pixel format if needed */
            if (!sws_ctx) {
                sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
                                         c->width, c->height, c->pix_fmt,
                                         sws_flags, NULL, NULL, NULL);
                if (!sws_ctx) {
                    fprintf(stderr,
                            "Could not initialize the conversion context\n");
                    exit(1);
                }
            }
            fill_yuv_image(&src_picture, frame_count, c->width, c->height);
            sws_scale(sws_ctx,
                      (const uint8_t * const *)src_picture.data, src_picture.linesize,
                      0, c->height, dst_picture.data, dst_picture.linesize);
        } else {
            fill_yuv_image(&dst_picture, frame_count, c->width, c->height);
        }
    }

    if (oc->oformat->flags & AVFMT_RAWPICTURE) {
        /* Raw video case - directly store the picture in the packet */
        AVPacket pkt;
        av_init_packet(&pkt);

        pkt.flags        |= AV_PKT_FLAG_KEY;
        pkt.stream_index  = st->index;
        pkt.data          = dst_picture.data[0];
        pkt.size          = sizeof(AVPicture);

        ret = av_interleaved_write_frame(oc, &pkt);
    } else {
        /* encode the image */
        AVPacket pkt;
        int got_output;

        av_init_packet(&pkt);
        pkt.data = NULL;    // packet data will be allocated by the encoder
        pkt.size = 0;

        ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
            exit(1);
        }

        /* If size is zero, it means the image was buffered. */
        if (got_output) {
            if (c->coded_frame->key_frame)
                pkt.flags |= AV_PKT_FLAG_KEY;

            pkt.stream_index = st->index;

            /* Write the compressed frame to the media file. */
            ret = av_interleaved_write_frame(oc, &pkt);
        } else {
            ret = 0;
        }
    }
    if (ret != 0) {
        fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
        exit(1);
    }
    frame_count++;
}

static void close_video(AVFormatContext *oc, AVStream *st)
{
    avcodec_close(st->codec);
    av_free(src_picture.data[0]);
    av_free(dst_picture.data[0]);
    av_free(frame);
}

/**************************************************************/
/* media file output */

int main(int argc, char **argv)
{
    const char *filename;
    AVOutputFormat *fmt;
    AVFormatContext *oc;
    AVStream *audio_st, *video_st;
    AVCodec *audio_codec, *video_codec;
    double audio_pts, video_pts;
    int ret;
    char errbuf[50];
    int i = 0;
    /* Initialize libavcodec, and register all codecs and formats. */
    av_register_all();

    if (argc != 3) {
        printf("usage: %s input_file out_file|stream\n"
               "API example program to output a media file with libavformat.\n"
               "This program generates a synthetic audio and video stream, encodes and\n"
               "muxes them into a file named output_file.\n"
               "The output format is automatically guessed according to the file extension.\n"
               "Raw images can also be output by using '%%d' in the filename.\n"
               "\n", argv[0]);
        return 1;
    }

    filename = argv[2];

    /* allocate the output media context */
    avformat_alloc_output_context2(&oc, NULL, "rtp_mpegts", filename);
    if (!oc) {
        printf("Could not deduce output format from file extension: using MPEG.\n");
        avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
    }
    if (!oc) {
        return 1;
    }
    fmt = oc->oformat;
    //Find input stream info.

    video_st = NULL;
    audio_st = NULL;

    avformat_open_input(&oc, argv[1], 0, 0);

    if ((ret = avformat_find_stream_info(oc, 0)) < 0)
    {
        av_strerror(ret, errbuf, sizeof(errbuf));
        printf("Not Able to find stream info::%s ", errbuf);
        ret = -1;
        return ret;
    }
    for (i = 0; i < oc->nb_streams; i++)
    {
        if (oc->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            AVCodecContext *codec_ctx;
            unsigned int tag = 0;

            printf("Found Video Stream ");
            video_st = oc->streams[i];
            codec_ctx = video_st->codec;
            // m_num_frames = oc->streams[i]->nb_frames;
            video_codec = avcodec_find_decoder(codec_ctx->codec_id);
            ret = avcodec_open2(codec_ctx, video_codec, NULL);
            if (ret < 0)
            {
                av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
                return ret;
            }
            if (av_codec_get_tag2(oc->oformat->codec_tag, video_codec->id, &tag) == 0)
            {
                av_log(NULL, AV_LOG_ERROR, "could not find codec tag for codec id %d, default to 0.\n", audio_codec->id);
            }
            video_st->codec = avcodec_alloc_context3(video_codec);
            video_st->codec->codec_tag = tag;
        }

        if (oc->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
        {
            AVCodecContext *codec_ctx;
            unsigned int tag = 0;

            printf("Found Audio Stream ");
            audio_st = oc->streams[i];
            // aud_dts = audio_st->cur_dts;
            // aud_pts = audio_st->last_IP_pts;
            codec_ctx = audio_st->codec;
            audio_codec = avcodec_find_decoder(codec_ctx->codec_id);
            ret = avcodec_open2(codec_ctx, audio_codec, NULL);
            if (ret < 0)
            {
                av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
                return ret;
            }
            if (av_codec_get_tag2(oc->oformat->codec_tag, audio_codec->id, &tag) == 0)
            {
                av_log(NULL, AV_LOG_ERROR, "could not find codec tag for codec id %d, default to 0.\n", audio_codec->id);
            }
            audio_st->codec = avcodec_alloc_context3(audio_codec);
            audio_st->codec->codec_tag = tag;
        }
    }
    /* Add the audio and video streams using the default format codecs
     * and initialize the codecs. */
    /*
    if (fmt->video_codec != AV_CODEC_ID_NONE) {
        video_st = add_stream(oc, &video_codec, fmt->video_codec);
    }
    if (fmt->audio_codec != AV_CODEC_ID_NONE) {
        audio_st = add_stream(oc, &audio_codec, fmt->audio_codec);
    }
    */

    /* Now that all the parameters are set, we can open the audio and
     * video codecs and allocate the necessary encode buffers. */
    if (video_st)
        open_video(oc, video_codec, video_st);
    if (audio_st)
        open_audio(oc, audio_codec, audio_st);

    av_dump_format(oc, 0, filename, 1);

    /* open the output file, if needed */
    if (!(fmt->flags & AVFMT_NOFILE)) {
        ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
        if (ret < 0) {
            fprintf(stderr, "Could not open '%s': %s\n", filename,
                    av_err2str(ret));
            return 1;
        }
    }

    /* Write the stream header, if any. */
    ret = avformat_write_header(oc, NULL);
    if (ret < 0) {
        fprintf(stderr, "Error occurred when opening output file: %s\n",
                av_err2str(ret));
        return 1;
    }

    if (frame)
        frame->pts = 0;
    for (;;) {
        /* Compute current audio and video time. */
        if (audio_st)
            audio_pts = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;
        else
            audio_pts = 0.0;

        if (video_st)
            video_pts = (double)video_st->pts.val * video_st->time_base.num /
                        video_st->time_base.den;
        else
            video_pts = 0.0;

        if ((!audio_st || audio_pts >= STREAM_DURATION) &&
            (!video_st || video_pts >= STREAM_DURATION))
            break;

        /* write interleaved audio and video frames */
        if (!video_st || (video_st && audio_st && audio_pts < video_pts)) {
            write_audio_frame(oc, audio_st);
        } else {
            write_video_frame(oc, video_st);
            frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
        }
    }

    /* Write the trailer, if any. The trailer must be written before you
     * close the CodecContexts open when you wrote the header; otherwise
     * av_write_trailer() may try to use memory that was freed on
     * av_codec_close(). */
    av_write_trailer(oc);

    /* Close each codec. */
    if (video_st)
        close_video(oc, video_st);
    if (audio_st)
        close_audio(oc, audio_st);

    if (!(fmt->flags & AVFMT_NOFILE))
        /* Close the output file. */
        avio_close(oc->pb);

    /* free the stream */
    avformat_free_context(oc);

    return 0;
}

    &#xA;

  • "Amix" and "adelay" combined leads to "Error while filtering: Cannot allocate memory"

    10 février 2016, par Harald Nordgren

    I was trying to add two audio clips together (using amix) while delaying one of them (with adelay). I used the following command

    ffmpeg -i org/onclassical_demo_luisi_chopin_scherzo_2_31_small-version_ii-ending.wav \
    -i org/all_u_had_2_say.wav -filter_complex \
    "[1]adelay=1000[del1];[0][del1]amix" out.wav

    and got the following output:

    ffmpeg version N-77387-g9d38f06 Copyright (c) 2000-2015 the FFmpeg developers
     built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04)
     configuration: --enable-libmp3lame --enable-gpl --enable-libx264 --enable-libx265
     libavutil      55. 11.100 / 55. 11.100
     libavcodec     57. 18.100 / 57. 18.100
     libavformat    57. 20.100 / 57. 20.100
     libavdevice    57.  0.100 / 57.  0.100
     libavfilter     6. 21.100 /  6. 21.100
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    Guessed Channel Layout for  Input Stream #0.0 : stereo
    Input #0, wav, from 'org/onclassical_demo_luisi_chopin_scherzo_2_31_small-version_ii-ending.wav':
     Duration: 00:02:18.26, bitrate: 1411 kb/s
       Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 2 channels, s16, 1411 kb/s
    Guessed Channel Layout for  Input Stream #1.0 : mono
    Input #1, wav, from 'org/all_u_had_2_say.wav':
     Duration: 00:00:03.85, bitrate: 88 kb/s
       Stream #1:0: Audio: pcm_u8 ([1][0][0][0] / 0x0001), 11025 Hz, 1 channels, u8, 88 kb/s
    Output #0, wav, to 'out.wav':
     Metadata:
       ISFT            : Lavf57.20.100
       Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 11025 Hz, mono, s16, 176 kb/s (default)
       Metadata:
         encoder         : Lavc57.18.100 pcm_s16le
    Stream mapping:
     Stream #0:0 (pcm_s16le) -> amix:input0
     Stream #1:0 (pcm_u8) -> adelay
     amix -> Stream #0:0 (pcm_s16le)
    Press [q] to stop, [?] for help
    Error while filtering: Cannot allocate memory
    size=      83kB time=00:00:03.85 bitrate= 176.6kbits/s speed= 213x    
    video:0kB audio:83kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.091808%

    Maybe there is some incompatibility between the streams (pcm_s16le, 44100 Hz, 2 channels vs. pcm_u8, 11025 Hz, 1 channel) that needs to be handled first, but running amix alone works, so that doesn’t actually seem to be the case.
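
    One workaround to try (a sketch, not taken from the original thread, so treat the exact filter parameters as an assumption) is to explicitly convert the second input to the first input's sample format, rate and channel layout with aformat before applying adelay, so amix never has to reconcile mismatched streams itself:

```shell
# Sketch of a workaround (assumed, not from the original post):
# upconvert input 1 to s16 / 44100 Hz / stereo before delaying and mixing.
# Note: once the stream is stereo, adelay takes one delay per channel ("1000|1000").
ffmpeg -i org/onclassical_demo_luisi_chopin_scherzo_2_31_small-version_ii-ending.wav \
       -i org/all_u_had_2_say.wav -filter_complex \
       "[1]aformat=sample_fmts=s16:sample_rates=44100:channel_layouts=stereo,adelay=1000|1000[del1];[0][del1]amix" \
       out.wav
```

    If the error persists, it may also be worth retrying on a newer FFmpeg build, since the message is raised while filtering rather than while muxing.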