
Other articles (108)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, a SPIP article has to be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPMotion in one way or another.
    Required binaries: FFmpeg: the main encoder, able to transcode almost every type of video and audio file into formats playable on the web. See this tutorial for how to install it; Oggz-tools: tools for inspecting Ogg files; MediaInfo: retrieves information from most video and audio formats;
    Complementary and optional binaries: flvtool2: (...)

On other sites (13307)

  • c++ - using FFmpeg encode and UDP with a Webcam

    14 March, by Rendres

    I'm trying to get frames from a Webcam using OpenCV, encode them with FFmpeg and send them using UDP.

    



    I previously did a similar project that, instead of sending the packets over UDP, saved them to a video file.

    



    My code is:

    



    #include <iostream>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mathematics.h>
    #include <libswscale/swscale.h>
    #include <libswresample/swresample.h>
    }

    #include <opencv2/opencv.hpp>

    using namespace std;
    using namespace cv;

    #define WIDTH 640
    #define HEIGHT 480
    #define CODEC_ID AV_CODEC_ID_H264
    #define STREAM_PIX_FMT AV_PIX_FMT_YUV420P

    static AVFrame *frame, *pFrameBGR;

    int main(int argc, char **argv)
    {
        VideoCapture cap(0);
        const char *url = "udp://127.0.0.1:8080";

        AVFormatContext *formatContext;
        AVStream *stream;
        AVCodec *codec;
        AVCodecContext *c;
        AVDictionary *opts = NULL;

        int ret, got_packet;

        if (!cap.isOpened())
        {
            return -1;
        }

        av_log_set_level(AV_LOG_TRACE);

        av_register_all();
        avformat_network_init();

        // raw H.264 muxer, output directed at a UDP URL
        avformat_alloc_output_context2(&formatContext, NULL, "h264", url);
        if (!formatContext)
        {
            av_log(NULL, AV_LOG_FATAL, "Could not allocate an output context for '%s'.\n", url);
        }

        codec = avcodec_find_encoder(CODEC_ID);
        if (!codec)
        {
            av_log(NULL, AV_LOG_ERROR, "Could not find encoder.\n");
        }

        stream = avformat_new_stream(formatContext, codec);

        c = avcodec_alloc_context3(codec);

        stream->id = formatContext->nb_streams - 1;
        stream->time_base = (AVRational){1, 25};

        c->codec_id = CODEC_ID;
        c->bit_rate = 400000;
        c->width = WIDTH;
        c->height = HEIGHT;
        c->time_base = stream->time_base;
        c->gop_size = 12;
        c->pix_fmt = STREAM_PIX_FMT;

        if (formatContext->flags & AVFMT_GLOBALHEADER)
            c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

        av_dict_set(&opts, "preset", "fast", 0);
        av_dict_set(&opts, "tune", "zerolatency", 0);

        ret = avcodec_open2(c, codec, NULL);
        if (ret < 0)
        {
            av_log(NULL, AV_LOG_ERROR, "Could not open video codec.\n");
        }

        pFrameBGR = av_frame_alloc();
        if (!pFrameBGR)
        {
            av_log(NULL, AV_LOG_ERROR, "Could not allocate video frame.\n");
        }

        frame = av_frame_alloc();
        if (!frame)
        {
            av_log(NULL, AV_LOG_ERROR, "Could not allocate video frame.\n");
        }

        frame->format = c->pix_fmt;
        frame->width = c->width;
        frame->height = c->height;

        ret = avcodec_parameters_from_context(stream->codecpar, c);
        if (ret < 0)
        {
            av_log(NULL, AV_LOG_ERROR, "Could not open video codec.\n");
        }

        av_dump_format(formatContext, 0, url, 1);

        ret = avformat_write_header(formatContext, NULL);
        if (ret != 0)
        {
            av_log(NULL, AV_LOG_ERROR, "Failed to connect to '%s'.\n", url);
        }

        Mat image(Size(HEIGHT, WIDTH), CV_8UC3);
        SwsContext *swsctx = sws_getContext(WIDTH, HEIGHT, AV_PIX_FMT_BGR24, WIDTH, HEIGHT, AV_PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
        int frame_pts = 0;

        // capture, convert BGR -> YUV420P, encode and write
        while (1)
        {
            cap >> image;

            int numBytesYUV = av_image_get_buffer_size(STREAM_PIX_FMT, WIDTH, HEIGHT, 1);
            uint8_t *bufferYUV = (uint8_t *)av_malloc(numBytesYUV * sizeof(uint8_t));

            avpicture_fill((AVPicture *)pFrameBGR, image.data, AV_PIX_FMT_BGR24, WIDTH, HEIGHT);
            avpicture_fill((AVPicture *)frame, bufferYUV, STREAM_PIX_FMT, WIDTH, HEIGHT);

            sws_scale(swsctx, (uint8_t const *const *)pFrameBGR->data, pFrameBGR->linesize, 0, HEIGHT, frame->data, frame->linesize);

            AVPacket pkt = {0};
            av_init_packet(&pkt);

            frame->pts = frame_pts;

            ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
            if (ret < 0)
            {
                av_log(NULL, AV_LOG_ERROR, "Error encoding frame\n");
            }

            if (got_packet)
            {
                pkt.pts = av_rescale_q_rnd(pkt.pts, c->time_base, stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                pkt.dts = av_rescale_q_rnd(pkt.dts, c->time_base, stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                pkt.duration = av_rescale_q(pkt.duration, c->time_base, stream->time_base);
                pkt.stream_index = stream->index;

                return av_interleaved_write_frame(formatContext, &pkt);

                cout << "Seguro que si" << endl;
            }
            frame_pts++;
        }

        avcodec_free_context(&c);
        av_frame_free(&frame);
        avformat_free_context(formatContext);

        return 0;
    }

    The code compiles, but it returns a segmentation fault in the function av_interleaved_write_frame(). I've tried several implementations and several codecs (in this case I'm using libopenh264, but mpeg2video returns the same segmentation fault). I also tried av_write_frame(), but it returns the same error.
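    Two details in the code above stand out as plausible culprits (an educated guess, not a confirmed fix): avformat_alloc_output_context2() only allocates the muxer and never opens the UDP output, so formatContext->pb is still NULL when the header and packets are written (the raw h264 muxer does not set AVFMT_NOFILE, so it does expect an opened AVIOContext); and the av_interleaved_write_frame() call is wrapped in a return, so the loop would exit after the first packet in any case. A minimal sketch of what the output setup and the write could look like under those assumptions:

    // Sketch only: open the network output explicitly before writing the header.
    // Muxers that do not set AVFMT_NOFILE expect formatContext->pb to be a valid
    // AVIOContext; writing with pb == NULL is a plausible cause of the crash.
    if (!(formatContext->oformat->flags & AVFMT_NOFILE)) {
        ret = avio_open(&formatContext->pb, url, AVIO_FLAG_WRITE);
        if (ret < 0) {
            av_log(NULL, AV_LOG_ERROR, "Could not open output '%s'.\n", url);
            return ret;
        }
    }

    ret = avformat_write_header(formatContext, NULL);

    // ... inside the capture/encode loop ...
    if (got_packet) {
        pkt.stream_index = stream->index;
        // write the packet instead of returning from main() after the first one
        ret = av_interleaved_write_frame(formatContext, &pkt);
        av_packet_unref(&pkt);
    }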


    As I said before, I only want to grab frames from a webcam connected via USB, encode them to H.264 and send the packets over UDP to another PC.

    My console log when I run the executable is:

    [100%] Built target display
    [OpenH264] this = 0x0x244b4f0, Info:CWelsH264SVCEncoder::SetOption():ENCODER_OPTION_TRACE_CALLBACK callback = 0x7f0c302a87c0.
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:CWelsH264SVCEncoder::InitEncoder(), openh264 codec version = 5a5c4f1
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:iUsageType = 0,iPicWidth= 640;iPicHeight= 480;iTargetBitrate= 400000;iMaxBitrate= 400000;iRCMode= 0;iPaddingFlag= 0;iTemporalLayerNum= 1;iSpatialLayerNum= 1;fFrameRate= 25.000000f;uiIntraPeriod= 12;eSpsPpsIdStrategy = 0;bPrefixNalAddingCtrl = 0;bSimulcastAVC=0;bEnableDenoise= 0;bEnableBackgroundDetection= 1;bEnableSceneChangeDetect = 1;bEnableAdaptiveQuant= 1;bEnableFrameSkip= 0;bEnableLongTermReference= 0;iLtrMarkPeriod= 30, bIsLosslessLink=0;iComplexityMode = 0;iNumRefFrame = 1;iEntropyCodingModeFlag = 0;uiMaxNalSize = 0;iLTRRefNum = 0;iMultipleThreadIdc = 1;iLoopFilterDisableIdc = 0 (offset(alpha/beta): 0,0;iComplexityMode = 0,iMaxQp = 51;iMinQp = 0)
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:sSpatialLayers[0]: .iVideoWidth= 640; .iVideoHeight= 480; .fFrameRate= 25.000000f; .iSpatialBitrate= 400000; .iMaxSpatialBitrate= 400000; .sSliceArgument.uiSliceMode= 1; .sSliceArgument.iSliceNum= 0; .sSliceArgument.uiSliceSizeConstraint= 1500;uiProfileIdc = 66;uiLevelIdc = 41
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:SliceArgumentValidationFixedSliceMode(), unsupported setting with Resolution and uiSliceNum combination under RC on! So uiSliceNum is changed to 6!
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:Setting MaxSpatialBitrate (400000) the same at SpatialBitrate (400000) will make the actual bit rate lower than SpatialBitrate
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:bEnableFrameSkip = 0,bitrate can't be controlled for RC_QUALITY_MODE,RC_BITRATE_MODE and RC_TIMESTAMP_MODE without enabling skip frame.
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:Change QP Range from(0,51) to (12,42)
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WELS CPU features/capacities (0x4007fe3f) detected: HTT: Y, MMX: Y, MMXEX: Y, SSE: Y, SSE2: Y, SSE3: Y, SSSE3: Y, SSE4.1: Y, SSE4.2: Y, AVX: Y, FMA: Y, X87-FPU: Y, 3DNOW: N, 3DNOWEX: N, ALTIVEC: N, CMOV: Y, MOVBE: Y, AES: Y, NUMBER OF LOGIC PROCESSORS ON CHIP: 8, CPU CACHE LINE SIZE (BYTES): 64
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WelsInitEncoderExt() exit, overall memory usage: 4542878 bytes
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WelsInitEncoderExt(), pCtx= 0x0x245a400.
    Output #0, h264, to 'udp://192.168.100.39:8080':
        Stream #0:0, 0, 1/25: Video: h264 (libopenh264), 1 reference frame, yuv420p, 640x480 (0x0), 0/1, q=2-31, 400 kb/s, 25 tbn
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:RcUpdateIntraComplexity iFrameDqBits = 385808,iQStep= 2016,iIntraCmplx = 777788928
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:[Rc]Layer 0: Frame timestamp = 0, Frame type = 2, encoding_qp = 30, average qp = 30, max qp = 33, min qp = 27, index = 0, iTid = 0, used = 385808, bitsperframe = 16000, target = 64000, remainingbits = -257808, skipbuffersize = 200000
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerNum = 2,iFrameSize = 48252
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerId = 0,iNalType = 0,iNalCount = 2, first Nal Length=18,uiSpatialId = 0,uiTemporalId = 0,iSubSeqId = 0
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerId = 1,iNalType = 1,iNalCount = 6, first Nal Length=6057,uiSpatialId = 0,uiTemporalId = 0,iSubSeqId = 0
    [libopenh264 @ 0x244aa00] 6 slices
    ./scriptBuild.sh: line 20: 10625 Segmentation fault      (core dumped) ./display

    As you can see, FFmpeg uses libopenh264 and configures it correctly. However, no matter what I try, it always returns the same segmentation fault error...

    I've used commands like this:

    ffmpeg -s 640x480 -f video4linux2 -i /dev/video0 -r 30 -vcodec libopenh264 -an -f h264 udp://127.0.0.1:8080

    And it works perfectly, but I need to process the frames before sending them. That's why I'm trying to use the libraries.

    My FFmpeg version is:

    ffmpeg version 3.3.6 Copyright (c) 2000-2017 the FFmpeg developers
    built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
    configuration: --disable-yasm --enable-shared --enable-libopenh264 --cc='gcc -fPIC'
    libavutil      55. 58.100 / 55. 58.100
    libavcodec     57. 89.100 / 57. 89.100
    libavformat    57. 71.100 / 57. 71.100
    libavdevice    57.  6.100 / 57.  6.100
    libavfilter     6. 82.100 /  6. 82.100
    libswscale      4.  6.100 /  4.  6.100
    libswresample   2.  7.100 /  2.  7.100

    I tried to get more information about the error using gdb, but it didn't give me any debugging info.

    How can I solve this problem? I don't know what else I can try...

    Thank you!

  • ffmpeg version 2.6.8: Stream specifier ':a' in filtergraph description matches no streams

    16 December 2020, by Ashitaka

    I am not getting why this isn't working. I have tried to get the video streams with [0:v]/[0:1]/[0:v:0] and the audio streams with [0:a]/[0:0]/[0:0:0]; nothing worked.

    Explaining the inputs:

    1. The 1st input is a video that can be of varying resolution, on which the filter adds padding to make it 600:480.

    2. The 2nd input is an overlay PNG which is already at a 5:4 ratio; it is just scaled to 600:480 before being overlaid in the filter.

    3. The 3rd and 4th inputs are also videos which I don't mind getting stretched, and they are stretched to 600:480.

    4. So finally there are 3 streams (1 overlaid video and 2 stretched videos) which need to be concatenated.

    Here's the command:

    ffmpeg
    -i '/home/vidinflux/public_html/assets/temp/2018020116464612/2018020116464612.mp4'
    -i '/home/vidinflux/public_html/assets/temp/2018020116464612/2018020116464612_overlay.png'
    -i '/home/vidinflux/public_html/assets/user/736/video/Lines1.mp4'
    -i '/home/vidinflux/public_html/assets/user/736/video/Lines11.mp4'
    -filter_complex
    "[0:v]trim=0:138,setpts=PTS-STARTPTS[v0];[0:a]atrim=0:138,asetpts=PTS-STARTPTS[a0];[v0]scale='gte(iw/ih\,600/480)*600+lt(iw/ih\,600/480)*((480*iw)/ih):lte(iw/ih\,600/480)*480+gt(iw/ih\,600/480)*((600*ih)/iw)',pad='600:480:(600-gte(iw/ih\,600/480)*600-lt(iw/ih\,600/480)*((480*iw)/ih))/2:(480-lte(iw/ih\,600/480)*480-gt(iw/ih\,600/480)*((600*ih)/iw))/2:black'[x];[1:v]scale=600:480[y];[x][y]overlay=0:0[z];[2:v]scale=600:480,setsar=1:1[x0];[3:v]scale=600:480,setsar=1:1[x1];[x0][2:a][z][a0][x1][3:a]concat=n=3:v=1:a=1[v][a]"
    -map "[v]"
    -map "[a]"
    -c:v libx264
    -shortest /home/vidinflux/public_html/assets/temp/2018020116464612/2018020116464612_final.mp4

    This is the complete error I am getting:

    Stream specifier ':a' in filtergraph description  [0:v]trim=0:138,setpts=PTS-STARTPTS[v0];[0:a]atrim=0:138,asetpts=PTS-STARTPTS[a0];[v0]scale='gte(iw/ih\,600/480)*600+lt(iw/ih\,600/480)*((480*iw)/ih):lte(iw/ih\,600/480)*480+gt(iw/ih\,600/480)*((600*ih)/iw)',pad='600:480:(600-gte(iw/ih\,600/480)*600-lt(iw/ih\,600/480)*((480*iw)/ih))/2:(480-lte(iw/ih\,600/480)*480-gt(iw/ih\,600/480)*((600*ih)/iw))/2:black'[x];[1:v]scale=600:480[y];[x][y]overlay=0:0[z];[2:v]scale=600:480,setsar=1:1[x0];[3:v]scale=600:480,setsar=1:1[x1];[x0][2:a][z][a0][x1][3:a]concat=n=3:v=1:a=1[v][a] matches no streams.

    Also, there are these warnings:

    [Parsed_setsar_9 @ 0x219fba0] num:den syntax is deprecated, please use num/den or named options instead
    [Parsed_setsar_11 @ 0x21a4840] num:den syntax is deprecated, please use num/den or named options instead

    Complete log as requested:

    [root@cloud ~]# ffmpeg -i '/home/vidinflux/public_html/assets/temp/2018020116464612/2018020116464612.mp4' -i '/home/vidinflux/public_html/assets/temp/2018020116464612/2018020116464612_overlay.png' -i '/home/vidinflux/public_html/assets/user/736/video/Lines1.mp4' -i '/home/vidinflux/public_html/assets/user/736/video/Lines11.mp4' -filter_complex \ "[0:v]trim=0:138,setpts=PTS-STARTPTS[v0];[0:a]atrim=0:138,asetpts=PTS-STARTPTS[a0];[v0]scale='gte(iw/ih\,600/480)*600+lt(iw/ih\,600/480)*((480*iw)/ih):lte(iw/ih\,600/480)*480+gt(iw/ih\,600/480)*((600*ih)/iw)',pad='600:480:(600-gte(iw/ih\,600/480)*600-lt(iw/ih\,600/480)*((480*iw)/ih))/2:(480-lte(iw/ih\,600/480)*480-gt(iw/ih\,600/480)*((600*ih)/iw))/2:black'[x];[1:v]scale=600:480[y];[x][y]overlay=0:0[z];[2:v]scale=600:480,setsar=1:1[x0];[3:v]scale=600:480,setsar=1:1[x1];[x0][2:a][z][a0][x1][3:a]concat=n=3:v=1:a=1[v][a]" -map "[v]" -map "[a]" -c:v libx264 -shortest /home/vidinflux/public_html/assets/temp/2018020116464612/2018020116464612_final.mp4
    ffmpeg version 2.6.8 Copyright (c) 2000-2016 the FFmpeg developers
      built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
      configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --disable-crystalhd --enable-gnutls --enable-ladspa --enable-libass --enable-libdc1394 --enable-libfaac --enable-nonfree --disable-indev=jack --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-openal --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libx264 --enable-libx265 --enable-libxvid --enable-x11grab --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
      libavutil      54. 20.100 / 54. 20.100
      libavcodec     56. 26.100 / 56. 26.100
      libavformat    56. 25.101 / 56. 25.101
      libavdevice    56.  4.100 / 56.  4.100
      libavfilter     5. 11.102 /  5. 11.102
      libavresample   2.  1.  0 /  2.  1.  0
      libswscale      3.  1.101 /  3.  1.101
      libswresample   1.  1.100 /  1.  1.100
      libpostproc    53.  3.100 / 53.  3.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/vidinflux/public_html/assets/temp/2018020116464612/2018020116464612.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: isommp42
        creation_time   : 2017-08-21 02:23:24
      Duration: 00:02:17.23, start: 0.000000, bitrate: 417 kb/s
        Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(tv, bt709), 640x360 [SAR 1:1 DAR 16:9], 318 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
        Metadata:
          handler_name    : VideoHandler
        Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 96 kb/s (default)
        Metadata:
          creation_time   : 2017-08-21 02:23:24
          handler_name    : IsoMedia File Produced by Google, 5-11-2011
    Input #1, png_pipe, from '/home/vidinflux/public_html/assets/temp/2018020116464612/2018020116464612_overlay.png':
      Duration: N/A, bitrate: N/A
        Stream #1:0: Video: png, rgba, 600x479, 25 tbr, 25 tbn, 25 tbc
    Input #2, mov,mp4,m4a,3gp,3g2,mj2, from '/home/vidinflux/public_html/assets/user/736/video/Lines1.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 1
        compatible_brands: mp41mp42isom
        creation_time   : 2018-01-31 22:40:09
      Duration: 00:00:04.90, start: 0.103811, bitrate: 846 kb/s
        Stream #2:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(tv, smpte170m/bt470bg/bt709), 1920x1080, 827 kb/s, SAR 1:1 DAR 16:9, 30 fps, 30 tbr, 90k tbn, 180k tbc (default)
        Metadata:
          creation_time   : 2018-01-31 22:40:10
          handler_name    : Core Media Video
    Input #3, mov,mp4,m4a,3gp,3g2,mj2, from '/home/vidinflux/public_html/assets/user/736/video/Lines11.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 1
        compatible_brands: mp41mp42isom
        creation_time   : 2018-01-31 22:40:09
      Duration: 00:00:04.90, start: 0.103811, bitrate: 846 kb/s
        Stream #3:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(tv, smpte170m/bt470bg/bt709), 1920x1080, 827 kb/s, SAR 1:1 DAR 16:9, 30 fps, 30 tbr, 90k tbn, 180k tbc (default)
        Metadata:
          creation_time   : 2018-01-31 22:40:10
          handler_name    : Core Media Video
    [Parsed_setsar_9 @ 0x17c8ba0] num:den syntax is deprecated, please use num/den or named options instead
    [Parsed_setsar_11 @ 0x17cd840] num:den syntax is deprecated, please use num/den or named options instead
    Stream specifier ':a' in filtergraph description  [0:v]trim=0:138,setpts=PTS-STARTPTS[v0];[0:a]atrim=0:138,asetpts=PTS-STARTPTS[a0];[v0]scale='gte(iw/ih\,600/480)*600+lt(iw/ih\,600/480)*((480*iw)/ih):lte(iw/ih\,600/480)*480+gt(iw/ih\,600/480)*((600*ih)/iw)',pad='600:480:(600-gte(iw/ih\,600/480)*600-lt(iw/ih\,600/480)*((480*iw)/ih))/2:(480-lte(iw/ih\,600/480)*480-gt(iw/ih\,600/480)*((600*ih)/iw))/2:black'[x];[1:v]scale=600:480[y];[x][y]overlay=0:0[z];[2:v]scale=600:480,setsar=1:1[x0];[3:v]scale=600:480,setsar=1:1[x1];[x0][2:a][z][a0][x1][3:a]concat=n=3:v=1:a=1[v][a] matches no streams.

  • Why does ffmpeg report different durations?

    18 May 2012, by kenitech

    Source videos: http://www.artworknotavailable.com/tmp/ffmpegtest

    Quicktime Pro 7.7.1 Inspector (Win 7) reports the following for the file 2398.mov

    4.19MB
    H.264
    Movie FPS: 23.98
    Data Rate: 2.35 mbits/Sec
    Duration 14:97

    ffmpeg reports the following (see full ffmpeg version info at bottom of post)

    ffmpeg -i 2398.mov

    Seems stream 1 codec frame rate differs from container frame rate: 47952.00 (47952/1) -> 23.98 (2997/125)
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '2398.mov':
      Metadata:
        major_brand: qt
        minor_version: 537199360
        compatible_brands: qt
      Duration: 00:00:15.97, start: 0.-963005, bitrate: 2210 kb/s
        Stream #0.0(eng): Audio: aac, 48000 Hz, stereo, s16, 152 kb/s
        Stream #0.1(eng): Video: h264, yuv420p, 848x480, 2060 kb/s, 23.98 fps, 23.98 tbr, 23976 tbn, 47952 tbc

    One second longer than what Quicktime reports.

    As an experiment I exported this file from Quicktime Pro using the following settings:

    Frame Rate: Current
    Key Frames: Every 24 frames
    Frame Reordering: On
    Quality: High
    Encoding: Best
    Data Rate: Automatic
    Optimized for Download
    Output file: qtime-export-2398.mov

    Quicktime Inspector reports:

    5.62 MB
    H.264
    Movie FPS: 23.98
    Data Rate: 3.15 mbits/Sec
    Duration 14:97

    ffmpeg now reports:

    ffmpeg -i qtime-export-2398.mov

    Seems stream 1 codec frame rate differs from container frame rate: 1200.00 (1200/1) -> 23.98 (24000/1001)
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'qtime-export-2398.mov':
      Metadata:
        major_brand: qt
        minor_version: 537199360
        compatible_brands: qt
      Duration: 00:00:14.96, start: 0.000000, bitrate: 3153 kb/s
        Stream #0.0(eng): Audio: pcm_s16le, 44100 Hz, 2 channels, s16, 1411 kb/s
        Stream #0.1(eng): Video: h264, yuv420p, 678x384, 1738 kb/s, 23.98 fps, 23.98 tbr, 600 tbn, 1200 tbc

    ffmpeg's report on duration went from 15.97 to 14.96 (I can live with .1)

    Is this duration calculated from the bitrate?

    I need to accurately report the duration of uploaded videos as well as convert them to FLV. Can somebody tell me what is going on here and how I might get around this?
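    For checking what libavformat itself computes for these files, independent of the ffmpeg command-line tool, a small probe along the following lines may help (a sketch, assuming a reasonably recent FFmpeg; the 0.6-era build quoted below used av_open_input_file() rather than avformat_open_input()):

    // Sketch: print the container-level duration that libavformat reports for a file.
    extern "C" {
    #include <libavformat/avformat.h>
    }
    #include <cstdio>

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;

        AVFormatContext *fmt = NULL;
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
            return 1;
        // Reading stream info lets the demuxer refine its duration estimate.
        avformat_find_stream_info(fmt, NULL);

        // fmt->duration is expressed in AV_TIME_BASE units (microseconds).
        printf("duration: %.3f s\n", fmt->duration / (double)AV_TIME_BASE);

        avformat_close_input(&fmt);
        return 0;
    }

    Comparing that number with what QuickTime shows for the same file should at least make it clear whether the extra second comes from the container metadata or from how each tool interprets the start offset.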

    ffmpeg info below. I've tried this on 2 completely different installs/versions of ffmpeg. Same result.

    FFmpeg version 0.6.5, Copyright (c) 2000-2010 the FFmpeg developers
    built on Jan 29 2012 23:55:02 with gcc 4.1.2 20080704 (Red Hat 4.1.2-51)
    configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab
    libavutil 50.15. 1 / 50.15. 1
    libavcodec 52.72. 2 / 52.72. 2
    libavformat 52.64. 2 / 52.64. 2
    libavdevice 52. 2. 0 / 52. 2. 0
    libavfilter 1.19. 0 / 1.19. 0
    libswscale 0.11. 0 / 0.11. 0
    libpostproc 51. 2. 0 / 51. 2. 0