Advanced search

Media (0)

Word: - Tags -/configuration

No media matching your criteria is available on this site.

Other articles (101)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the farm's instances on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
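    The system-Cron side of this setup can be sketched as a crontab entry. This is only an illustration: the URL and the action parameter below are hypothetical placeholders, not MediaSPIP's actual endpoint.

    ```shell
    # Hypothetical crontab entry on the farm's central site (URL and action
    # name are illustrative): hit the super cron once a minute so that
    # gestion_mutu_super_cron can in turn call each instance's own cron.
    * * * * * curl -s "https://farm.example.org/?action=super_cron" > /dev/null 2>&1
    ```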

  • Emballe Médias: easily putting documents online

    29 October 2010, by

    The emballe médias plugin was developed mainly for the MediaSPIP distribution, but it is also used in other related projects, such as géodiversité. Required and compatible plugins
    To work, this plugin requires the following plugins to be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui
    Other plugins can be used alongside it to enhance its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)

On other sites (13748)

  • Matomo Launches Global Partner Programme to Deepen Local Connections and Champion Ethical Analytics

    25 June, by Matomo Core Team — Press Releases

    Matomo introduces a global Partner Programme designed to connect organisations with trusted local experts, advancing its commitment to privacy, data sovereignty, and localisation.

    Wellington, New Zealand, 25 June 2025. Matomo, the leading web analytics platform, is proud to announce the launch of the Matomo Partner Programme. This new initiative marks a significant step in Matomo’s global growth strategy, bringing together a carefully selected network of expert partners to support customers with localised, high-trust analytics services rooted in shared values.

    As privacy concerns rise and organisations seek alternatives to mainstream analytics solutions, the need for regional expertise has never been more vital. The Matomo Partner Programme ensures that customers around the world are supported not just by a world-class platform, but by trusted local professionals who understand their specific regulatory, cultural, and business needs.

    “Matomo is evolving. As privacy regulations become more nuanced and the need for regional
    understanding grows, we’ve made localisation a central pillar of our strategy. Our partners are
    the key to helping customers navigate these complexities with confidence and care,” said
    Adam Taylor, Chief Operating Officer at Matomo.

    Local Experts, Global Values

    At the heart of the Matomo Partner Programme is a commitment to connect clients with local experts who live and breathe their markets. These partners are more than service providers: they’re trusted advisors who bring deep insight into their region’s privacy legislation, cultural norms, sector-specific requirements, and digital trends.

    The programme empowers partners to act as extensions of Matomo’s core teams:

    • As Customer Success allies, delivering personalised training, support, and technical services in local languages and time zones.
    • As Sales ambassadors, raising awareness of ethical analytics in both public and private sectors, where trust, compliance, and transparency are crucial.

    This decentralised, values-aligned approach ensures that every Matomo customer benefits from localised delivery with global consistency.

    A Programme Designed for Impactful Partnerships

    The Matomo Partner Programme is open to organisations that share a commitment to ethical, open-source analytics and can demonstrate:

    • Technical excellence in deploying, configuring, and supporting Matomo Analytics in diverse environments.
    • Deep market understanding, allowing them to tell the Matomo story in ways that resonate locally.
    • Commercial strength to position Matomo across key industries, particularly in sectors with complex compliance and data sovereignty demands.

    Partners who meet these standards will be recognised as ‘Official Matomo Partners’, a symbol of excellence, credibility, and shared purpose. With this status, they gain access to:

    • Brand alignment and trust: strengthen credibility with clients by promoting their connection to Matomo and its globally respected ethical stance.
    • Go-to-market support: access to qualified leads, joint marketing, and tools to scale their business in a privacy-first market.
    • Strategic collaboration: early insights into the product roadmap and direct engagement with Matomo’s core team.
    • Meaningful local impact: help regional organisations reclaim control of their data and embrace ethical analytics with confidence.

    Ethical Analytics for Today’s World

    Matomo was founded in 2007 with the belief that people should have full control over their data. As the first open-source web analytics platform of its kind, Matomo continues to challenge the dominance of opaque, centralised tools by offering a transparent and flexible alternative that puts users first.

    In today’s landscape, marked by increased regulatory scrutiny, data protection concerns, and rapid advancements in AI, Matomo’s approach is more relevant than ever. Open-source technology provides the adaptability organisations need to respond to local expectations while reinforcing digital trust with users.

    Whether it’s a government department, healthcare provider, educational institution, or commercial business, Matomo partners are on the ground, ready to help organisations transition to analytics that are not only powerful but principled.
  • How to generate a fixed duration and fps for a video using FFmpeg C++ libraries? [closed]

    4 November 2024, by BlueSky Light Programmer

    I'm following the official muxing example to write a C++ class that generates a video with a fixed duration (5 s) and a fixed fps (60). For some reason, the duration of the output video is 3-4 seconds, although I call the function that writes frames 300 times and set the fps to 60.

    


    Can you take a look at the code below and spot what I'm doing wrong?

    


    #include "ffmpeg.h"

    #include <iostream>

    static int writeFrame(AVFormatContext *fmt_ctx, AVCodecContext *c,
                          AVStream *st, AVFrame *frame, AVPacket *pkt);

    static void addStream(OutputStream *ost, AVFormatContext *formatContext,
                          const AVCodec **codec, enum AVCodecID codec_id,
                          int width, int height, int fps);

    static AVFrame *allocFrame(enum AVPixelFormat pix_fmt, int width, int height);

    static void openVideo(AVFormatContext *formatContext, const AVCodec *codec,
                          OutputStream *ost, AVDictionary *opt_arg);

    static AVFrame *getVideoFrame(OutputStream *ost,
                                  const std::vector<GLubyte>& pixels,
                                  int duration);

    static int writeVideoFrame(AVFormatContext *formatContext,
                               OutputStream *ost,
                               const std::vector<GLubyte>& pixels,
                               int duration);

    static void closeStream(AVFormatContext *formatContext, OutputStream *ost);

    static void fillRGBImage(AVFrame *frame, int width, int height,
                             const std::vector<GLubyte>& pixels);

    #ifdef av_err2str
    #undef av_err2str
    #include <string>
    av_always_inline std::string av_err2string(int errnum) {
      char str[AV_ERROR_MAX_STRING_SIZE];
      return av_make_error_string(str, AV_ERROR_MAX_STRING_SIZE, errnum);
    }
    #define av_err2str(err) av_err2string(err).c_str()
    #endif  // av_err2str

    FFmpeg::FFmpeg(int width, int height, int fps, const char *fileName)
    : videoStream{ 0 }
    , formatContext{ nullptr } {
      const AVOutputFormat *outputFormat;
      const AVCodec *videoCodec{ nullptr };
      AVDictionary *opt{ nullptr };
      int ret{ 0 };

      av_dict_set(&opt, "crf", "17", 0);

      /* Allocate the output media context. */
      avformat_alloc_output_context2(&this->formatContext, nullptr, nullptr, fileName);
      if (!this->formatContext) {
        std::cout << "Could not deduce output format from file extension: using MPEG." << std::endl;
        avformat_alloc_output_context2(&this->formatContext, nullptr, "mpeg", fileName);

        if (!formatContext)
          exit(-14);
      }

      outputFormat = this->formatContext->oformat;

      /* Add the video stream using the default format codecs
       * and initialize the codecs. */
      if (outputFormat->video_codec == AV_CODEC_ID_NONE) {
        std::cout << "The output format doesn't have a default codec video." << std::endl;
        exit(-15);
      }

      addStream(
        &this->videoStream,
        this->formatContext,
        &videoCodec,
        outputFormat->video_codec,
        width,
        height,
        fps
      );
      openVideo(this->formatContext, videoCodec, &this->videoStream, opt);
      av_dump_format(this->formatContext, 0, fileName, 1);

      /* open the output file, if needed */
      if (!(outputFormat->flags & AVFMT_NOFILE)) {
        ret = avio_open(&this->formatContext->pb, fileName, AVIO_FLAG_WRITE);
        if (ret < 0) {
          std::cout << "Could not open '" << fileName << "': " << std::string{ av_err2str(ret) } << std::endl;
          exit(-16);
        }
      }

      /* Write the stream header, if any. */
      ret = avformat_write_header(this->formatContext, &opt);
      if (ret < 0) {
        std::cout << "Error occurred when opening output file: " << av_err2str(ret) << std::endl;
        exit(-17);
      }

      av_dict_free(&opt);
    }

    FFmpeg::~FFmpeg() {
      if (this->formatContext) {
        /* Close codec. */
        closeStream(this->formatContext, &this->videoStream);

        if (!(this->formatContext->oformat->flags & AVFMT_NOFILE)) {
          /* Close the output file. */
          avio_closep(&this->formatContext->pb);
        }

        /* free the stream */
        avformat_free_context(this->formatContext);
      }
    }

    void FFmpeg::Record(
      const std::vector<GLubyte>& pixels,
      unsigned frameIndex,
      int duration,
      bool isLastIndex
    ) {
      static bool encodeVideo{ true };
      if (encodeVideo)
        encodeVideo = !writeVideoFrame(this->formatContext,
                                       &this->videoStream,
                                       pixels,
                                       duration);

      if (isLastIndex) {
        av_write_trailer(this->formatContext);
        encodeVideo = false;
      }
    }

    int writeFrame(AVFormatContext *fmt_ctx, AVCodecContext *c,
                   AVStream *st, AVFrame *frame, AVPacket *pkt) {
      int ret;

      // send the frame to the encoder
      ret = avcodec_send_frame(c, frame);
      if (ret < 0) {
        std::cout << "Error sending a frame to the encoder: " << av_err2str(ret) << std::endl;
        exit(-2);
      }

      while (ret >= 0) {
        ret = avcodec_receive_packet(c, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
          break;
        else if (ret < 0) {
          std::cout << "Error encoding a frame: " << av_err2str(ret) << std::endl;
          exit(-3);
        }

        /* rescale output packet timestamp values from codec to stream timebase */
        av_packet_rescale_ts(pkt, c->time_base, st->time_base);
        pkt->stream_index = st->index;

        /* Write the compressed frame to the media file. */
        ret = av_interleaved_write_frame(fmt_ctx, pkt);
        /* pkt is now blank (av_interleaved_write_frame() takes ownership of
         * its contents and resets pkt), so that no unreferencing is necessary.
         * This would be different if one used av_write_frame(). */
        if (ret < 0) {
          std::cout << "Error while writing output packet: " << av_err2str(ret) << std::endl;
          exit(-4);
        }
      }

      return ret == AVERROR_EOF ? 1 : 0;
    }

    void addStream(OutputStream *ost, AVFormatContext *formatContext,
                   const AVCodec **codec, enum AVCodecID codec_id,
                   int width, int height, int fps) {
      AVCodecContext *c;
      int i;

      /* find the encoder */
      *codec = avcodec_find_encoder(codec_id);
      if (!(*codec)) {
        std::cout << "Could not find encoder for " << avcodec_get_name(codec_id) << "." << std::endl;
        exit(-5);
      }

      ost->tmpPkt = av_packet_alloc();
      if (!ost->tmpPkt) {
        std::cout << "Could not allocate AVPacket." << std::endl;
        exit(-6);
      }

      ost->st = avformat_new_stream(formatContext, nullptr);
      if (!ost->st) {
        std::cout << "Could not allocate stream." << std::endl;
        exit(-7);
      }

      ost->st->id = formatContext->nb_streams-1;
      c = avcodec_alloc_context3(*codec);
      if (!c) {
        std::cout << "Could not alloc an encoding context." << std::endl;
        exit(-8);
      }
      ost->enc = c;

      switch ((*codec)->type) {
      case AVMEDIA_TYPE_VIDEO:
        c->codec_id = codec_id;
        c->bit_rate = 6000000;
        /* Resolution must be a multiple of two. */
        c->width    = width;
        c->height   = height;
        /* timebase: This is the fundamental unit of time (in seconds) in terms
         * of which frame timestamps are represented. For fixed-fps content,
         * timebase should be 1/framerate and timestamp increments should be
         * identical to 1. */
        ost->st->time_base = { 1, fps };
        c->time_base       = ost->st->time_base;
        c->framerate       = { fps, 1 };

        c->gop_size      = 0; /* emit one intra frame every twelve frames at most */
        c->pix_fmt       = AV_PIX_FMT_YUV420P;
        if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
          /* just for testing, we also add B-frames */
          c->max_b_frames = 2;
        }
        if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
          /* Needed to avoid using macroblocks in which some coeffs overflow.
           * This does not happen with normal video, it just happens here as
           * the motion of the chroma plane does not match the luma plane. */
          c->mb_decision = 2;
        }
        break;

      default:
        break;
      }

      /* Some formats want stream headers to be separate. */
      if (formatContext->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    AVFrame *allocFrame(enum AVPixelFormat pix_fmt, int width, int height) {
      AVFrame *frame{ av_frame_alloc() };
      int ret;

      if (!frame)
        return nullptr;

      frame->format = pix_fmt;
      frame->width  = width;
      frame->height = height;

      /* allocate the buffers for the frame data */
      ret = av_frame_get_buffer(frame, 0);
      if (ret < 0) {
        std::cout << "Could not allocate frame data." << std::endl;
        exit(-8);
      }

      return frame;
    }

    void openVideo(AVFormatContext *formatContext, const AVCodec *codec,
                   OutputStream *ost, AVDictionary *opt_arg) {
      int ret;
      AVCodecContext *c{ ost->enc };
      AVDictionary *opt{ nullptr };

      av_dict_copy(&opt, opt_arg, 0);

      /* open the codec */
      ret = avcodec_open2(c, codec, &opt);
      av_dict_free(&opt);
      if (ret < 0) {
        std::cout << "Could not open video codec: " << av_err2str(ret) << std::endl;
        exit(-9);
      }

      /* Allocate and init a re-usable frame. */
      ost->frame = allocFrame(c->pix_fmt, c->width, c->height);
      if (!ost->frame) {
        std::cout << "Could not allocate video frame." << std::endl;
        exit(-10);
      }

      /* If the output format is not RGB24, then a temporary RGB24
       * picture is needed too. It is then converted to the required
       * output format. */
      ost->tmpFrame = nullptr;
      if (c->pix_fmt != AV_PIX_FMT_RGB24) {
        ost->tmpFrame = allocFrame(AV_PIX_FMT_RGB24, c->width, c->height);
        if (!ost->tmpFrame) {
          std::cout << "Could not allocate temporary video frame." << std::endl;
          exit(-11);
        }
      }

      /* Copy the stream parameters to the muxer. */
      ret = avcodec_parameters_from_context(ost->st->codecpar, c);
      if (ret < 0) {
        std::cout << "Could not copy the stream parameters." << std::endl;
        exit(-12);
      }
    }

    AVFrame *getVideoFrame(OutputStream *ost,
                           const std::vector<GLubyte>& pixels,
                           int duration) {
      AVCodecContext *c{ ost->enc };

      /* check if we want to generate more frames */
      if (av_compare_ts(ost->nextPts, c->time_base,
                        duration, { 1, 1 }) > 0) {
        return nullptr;
      }

      /* when we pass a frame to the encoder, it may keep a reference to it
       * internally; make sure we do not overwrite it here */
      if (av_frame_make_writable(ost->frame) < 0) {
        std::cout << "It wasn't possible to make frame writable." << std::endl;
        exit(-12);
      }

      if (c->pix_fmt != AV_PIX_FMT_RGB24) {
          /* as we only generate a YUV420P picture, we must convert it
           * to the codec pixel format if needed */
          if (!ost->swsContext) {
            ost->swsContext = sws_getContext(c->width, c->height,
                                             AV_PIX_FMT_RGB24,
                                             c->width, c->height,
                                             c->pix_fmt,
                                             SWS_BICUBIC, nullptr, nullptr, nullptr);
            if (!ost->swsContext) {
              std::cout << "Could not initialize the conversion context." << std::endl;
              exit(-13);
            }
          }

          fillRGBImage(ost->tmpFrame, c->width, c->height, pixels);
          sws_scale(ost->swsContext, (const uint8_t * const *) ost->tmpFrame->data,
                    ost->tmpFrame->linesize, 0, c->height, ost->frame->data,
                    ost->frame->linesize);
      } else
        fillRGBImage(ost->frame, c->width, c->height, pixels);

      ost->frame->pts = ost->nextPts++;

      return ost->frame;
    }

    int writeVideoFrame(AVFormatContext *formatContext,
                        OutputStream *ost,
                        const std::vector<GLubyte>& pixels,
                        int duration) {
      return writeFrame(formatContext,
                        ost->enc,
                        ost->st,
                        getVideoFrame(ost, pixels, duration),
                        ost->tmpPkt);
    }

    void closeStream(AVFormatContext *formatContext, OutputStream *ost) {
      avcodec_free_context(&ost->enc);
      av_frame_free(&ost->frame);
      av_frame_free(&ost->tmpFrame);
      av_packet_free(&ost->tmpPkt);
      sws_freeContext(ost->swsContext);
    }

    static void fillRGBImage(AVFrame *frame, int width, int height,
                             const std::vector<GLubyte>& pixels) {
      // Copy pixel data into the frame
      int inputLineSize{ 3 * width };  // 3 bytes per pixel for RGB
      for (int y{ 0 }; y < height; ++y) {
        memcpy(frame->data[0] + y * frame->linesize[0],
               pixels.data() + y * inputLineSize,
               inputLineSize);
      }
    }
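    As a sanity check on the numbers in the question, the expected frame/duration arithmetic can be sketched in shell. The stop condition mirrors the av_compare_ts call in getVideoFrame; this only verifies the expected maths, it does not by itself diagnose the bug.

    ```shell
    #!/bin/sh
    # Expected timing for the question's parameters: 300 frames at 60 fps.
    fps=60
    frames_written=300
    expected_s=$(( frames_written / fps ))
    echo "expected duration: ${expected_s}s"      # 5s

    # getVideoFrame stops once av_compare_ts(nextPts, {1,fps}, duration, {1,1}) > 0,
    # i.e. once nextPts/fps > duration, so pts values 0..duration*fps are accepted.
    duration=5
    last_pts=$(( duration * fps ))
    echo "last accepted pts: ${last_pts}"         # 300
    ```

    If the arithmetic checks out but the file still plays short, a reasonable first step is to compare c->time_base after avcodec_open2 with the 1/fps value set in addStream: some encoders adjust the time base on open, which changes the scale of the pts values written out.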


  • ffmpeg got stuck while trying to crossfade merge two videos

    30 June 2017, by Jeflopo

    I’m trying to do a one-second crossfade merge between two videos: an intro video (39 seconds long) and the main video. When I executed the command it started working without throwing errors, but at some frame ffmpeg gets stuck.

    I read a lot of Q&A here on Stack Overflow, and the official docs, but I can’t solve this, so:

    This is the command:

    ffmpeg -i "inputs/intro.mp4" -i "inputs/240p.mp4" -an -filter_complex \
       "[0:v]trim=start=0:end=38,setpts=PTS-STARTPTS[firstclip]; \
       [0:v]trim=start=38:end=39,setpts=PTS-STARTPTS[fadeoutsrc]; \
       [1:v]trim=start=1,setpts=PTS-STARTPTS[secondclip]; \
       [1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc]; \
       [fadeinsrc]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[fadein]; \
       [fadeoutsrc]format=pix_fmts=yuva420p, fade=t=out:st=0:d=1:alpha=1[fadeout]; \
       [fadein]fifo[fadeinfifo]; \
       [fadeout]fifo[fadeoutfifo]; \
       [fadeoutfifo][fadeinfifo]overlay[crossfade]; \
       [firstclip][crossfade][secondclip]concat=n=3[output]; \
       [0:a][1:a] acrossfade=d=1 [audio]" -vcodec libx264 -map "[output]" -map "[audio]" "outputs/240p.mp4"

    Here’s the raw command (the exact command I used):

    ffmpeg -i "inputs/intro.mp4" -i "inputs/240p.mp4" -an -filter_complex "[0:v]trim=start=0:end=38,setpts=PTS-STARTPTS[firstclip]; [0:v]trim=start=38:end=39,setpts=PTS-STARTPTS[fadeoutsrc]; [1:v]trim=start=1,setpts=PTS-STARTPTS[secondclip]; [1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc]; [fadeinsrc]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[fadein]; [fadeoutsrc]format=pix_fmts=yuva420p, fade=t=out:st=0:d=1:alpha=1[fadeout]; [fadein]fifo[fadeinfifo]; [fadeout]fifo[fadeoutfifo]; [fadeoutfifo][fadeinfifo]overlay[crossfade]; [firstclip][crossfade][secondclip]concat=n=3[output]; [0:a][1:a] acrossfade=d=1 [audio]" -vcodec libx264 -map "[output]" -map "[audio]" "outputs/240p.mp4"
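    For reference, the timeline implied by that filtergraph can be worked out with quick arithmetic, assuming the ~6:24 (≈384 s), 25 fps inputs shown in the output below:

    ```shell
    #!/bin/sh
    # Rough output length implied by the filtergraph (25 fps inputs, ~384 s each).
    intro=38                 # [firstclip]: intro trimmed to 0-38 s
    fade=1                   # 1 s crossfade overlay
    main=$(( 384 - 1 ))      # [secondclip]: main video minus its first second
    total=$(( intro + fade + main ))
    echo "expected length: ~${total}s"           # ~422 s, i.e. ~7 minutes
    echo "expected frames: ~$(( total * 25 ))"   # ~10550 frames at 25 fps
    ```

    At 25 fps, frame 10369 corresponds to roughly 415 s, so the freeze reported below happens close to the expected end of the output rather than mid-stream.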

    The "error" is reproducible with and without the -an flag and the acrossfade filter.

    This is the output:

    PS C:\scripts\ffmpeg> ffmpeg -i "inputs/intro.mp4" -i "inputs/240p.mp4" -an -filter_complex "[0:v]trim=start=0:end=38,setpts=PTS-STARTPTS[firstclip]; [0:v]trim=start=38:end=39,setpts=PTS-STARTPTS[fadeoutsrc]; [1:v]trim=start=1,setpts=PTS-STARTPTS[secondclip]; [1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc]; [fadeinsrc]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[fadein]; [fadeoutsrc]format=pix_fmts=yuva420p, fade=t=out:st=0:d=1:alpha=1[fadeout]; [fadein]fifo[fadeinfifo]; [fadeout]fifo[fadeoutfifo]; [fadeoutfifo][fadeinfifo]overlay[crossfade]; [firstclip][crossfade][secondclip]concat=n=3[output]; [0:a][1:a] acrossfade=d=1 [audio]" -map "[output]" -map "[audio]" "outputs/240p.mp4"
    ffmpeg version N-86669-gc1d1274 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 7.1.0 (GCC)
     configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
     libavutil      55. 67.100 / 55. 67.100
     libavcodec     57.100.102 / 57.100.102
     libavformat    57. 75.100 / 57. 75.100
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 94.100 /  6. 94.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'inputs/intro.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf57.72.101
     Duration: 00:06:24.45, start: 0.000000, bitrate: 491 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 426x240 [SAR 1:1 DAR 71:40], 353 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 130 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
    Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'inputs/240p.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf56.40.101
     Duration: 00:06:24.43, start: 0.000000, bitrate: 375 kb/s
       Stream #1:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 426x240 [SAR 1:1 DAR 71:40], 243 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #1:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
    Stream mapping:
     Stream #0:0 (h264) -> trim
     Stream #0:0 (h264) -> trim
     Stream #0:1 (aac) -> acrossfade:crossfade0
     Stream #1:0 (h264) -> trim
     Stream #1:0 (h264) -> trim
     Stream #1:1 (aac) -> acrossfade:crossfade1
     concat -> Stream #0:0 (libx264)
     acrossfade -> Stream #0:1 (aac)
    Press [q] to stop, [?] for help
    [libx264 @ 00000000026b2240] using SAR=1/1
    [libx264 @ 00000000026b2240] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 00000000026b2240] profile High, level 2.1
    [libx264 @ 00000000026b2240] 264 - core 152 r2851 ba24899 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=7 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'outputs/240p.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf57.75.100
       Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 426x240 [SAR 1:1 DAR 71:40], q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
       Metadata:
         encoder         : Lavc57.100.102 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
       Stream #0:1: Audio: aac (LC) ([64][0][0][0] / 0x0040), 44100 Hz, stereo, fltp, 128 kb/s (default)
       Metadata:
         encoder         : Lavc57.100.102 aac
    frame=10369 fps=503 q=28.0 size=   24064kB time=00:06:55.68 bitrate= 474.2kbits/s speed=20.2x

    Around frame 10000 it gets stuck... I waited for an hour, but it stays stuck.

    I’ve updated ffmpeg:

    ffmpeg -version
    ffmpeg version N-86669-gc1d1274 Copyright (c) 2000-2017 the FFmpeg developers
    built with gcc 7.1.0 (GCC)
    configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
    libavutil      55. 67.100 / 55. 67.100
    libavcodec     57.100.102 / 57.100.102
    libavformat    57. 75.100 / 57. 75.100
    libavdevice    57.  7.100 / 57.  7.100
    libavfilter     6. 94.100 /  6. 94.100
    libswscale      4.  7.101 /  4.  7.101
    libswresample   2.  8.100 /  2.  8.100
    libpostproc    54.  6.100 / 54.  6.100

    I used these references: