Media (91)

Other articles (82)

  • Support for all media types

    10 April 2011

    Unlike many other modern document-sharing platforms and software, MediaSPIP aims to handle as many different document formats as possible, whether of type: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not listed above, or send us the fixes needed to add (...)

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome the difficulties caused mainly by the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to make this step easier on a server running a compatible Linux distribution.
    To use it, you must have SSH access to your server and a root account, which the script uses to install the dependencies. Contact your provider if you do not have these.
    The documentation for using this installation script is available here.
    The code of this (...)
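
    Purely as an illustration of the prerequisites described above (SSH access with a root account on a compatible distribution), a session could look roughly like the sketch below; the server address and script name are placeholders, and the real script name and download location are given in the documentation mentioned above.

    # Placeholder session -- see the MediaSPIP documentation for the actual script name and URL
    ssh root@your-server.example.org      # root access over SSH is required
    bash ./mediaspip_install.sh           # the "all-in-one" bash installation script (placeholder name)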

On other sites (12879)

  • adding overlay with ffmpeg is very slow

    15 July 2024, by Exorcismus

    I am trying to overlay some PNGs on a video. I used the overlay filter for that, but it increased processing time about 10x. I am not sure whether I am doing something wrong or whether this is expected.

// fluent-ffmpeg is the Node wrapper around the ffmpeg CLI used below
const ffmpeg = require("fluent-ffmpeg");
// extractJsonFromFile and convertTimestampsForFfmpeg are the author's own helpers (not shown here)

async function processVideo3() {
  const ffmpegProcess = new ffmpeg("./data/input.mp4")
    // Output codec / container / preset options
    .withVideoCodec("libx264")
    .withAudioCodec("copy")
    .withOutputFormat("mp4")
    .outputOptions("-preset faster");

  // Base filter chain: downscale the main video, then burn in the subtitles
  const filterChain = [
    {
      filter: "scale",
      options: "-1:720",
      inputs: "0", outputs: "a1"
    },
    {
      filter: "subtitles",
      options: "./data/1721070513371.ass",
      inputs: "a1", outputs: "a2"
    }
  ];

  let lastOutput = "a2";

  const emojiJson = await extractJsonFromFile("./data/1721070513371.ass");
  const timedEmojis = convertTimestampsForFfmpeg(emojiJson.emojis);

  // Add filters for each PNG with its respective timestamps
  timedEmojis.forEach((pngWithTimestamp, index) => {
    const pngInputIndex = index + 1;
    const { emoji, start, end } = pngWithTimestamp;

    // Register the PNG as an additional input
    ffmpegProcess.input(emoji);

    // Scale filter for the PNG
    filterChain.push({
      filter: "scale",
      options: { w: 50, h: 50 },
      inputs: pngInputIndex.toString(), outputs: `b${pngInputIndex}`
    });

    // Overlay the PNG on the running chain, enabled only between its timestamps
    const outputLabel = `c${pngInputIndex}`;
    filterChain.push({
      filter: "overlay",
      options: {
        eval: "init",
        format: "auto",
        x: "(main_w-overlay_w)/2",
        y: "(main_h-overlay_h)/2",
        enable: `between(t,${start},${end})`
      },
      inputs: [lastOutput, `b${pngInputIndex}`],
      outputs: outputLabel
    });
    lastOutput = outputLabel;
  });

  // Leave the last filter unlabeled so its output becomes the mapped video stream
  filterChain[filterChain.length - 1].outputs = undefined;

  // Attach the event handlers before starting the run and wrap it in a Promise,
  // so the caller can await completion and errors are not silently lost
  return new Promise((resolve, reject) => {
    ffmpegProcess
      .complexFilter(filterChain)
      .on("progress", (progress) => console.log(progress))
      .on("error", (err, stdout, stderr) => {
        console.log(err.message);
        console.log("stdout:\n" + stdout);
        console.log("stderr:\n" + stderr);
        reject(err);
      })
      .on("end", () => {
        console.log("Finished processing");
        resolve();
      })
      .save("output.mp4");
  });
}

    Can someone advise on this?

    

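    The JavaScript above essentially builds one big ffmpeg invocation. As a rough hand-written sketch of the resulting command (with two hypothetical PNG inputs, emoji1.png and emoji2.png, and made-up timestamps; the real file names and times come from the .ass/JSON data), it would look like this:

    ffmpeg -i ./data/input.mp4 -i emoji1.png -i emoji2.png \
      -filter_complex "[0:v]scale=-1:720[a1];[a1]subtitles=./data/1721070513371.ass[a2];[1:v]scale=50:50[b1];[2:v]scale=50:50[b2];[a2][b1]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2:enable='between(t,5,8)'[c1];[c1][b2]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2:enable='between(t,10,12)'[out]" \
      -map "[out]" -map 0:a? -c:v libx264 -preset faster -c:a copy output.mp4

    Each overlay is only enabled between its own start and end timestamps via enable='between(t,start,end)', but every frame still passes through the whole scale/subtitles/overlay chain before being re-encoded with libx264.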

  • Extracting thumbnails with FFMPEG is super slow on large video files? [duplicate]

    5 October 2015, by vaid

    I extract thumbnails from a .MOV file using FFMPEG on Linux (Debian 64-bit).

    The file I extract the thumbnail from is about 430 megabytes in size.

    I use the following command to do so:

    ffmpeg -i 'largeVideoFile.mov' -ss 00:14:37 -vframes 1 'thumbnail.jpg'

    It takes well over 3 minutes for a single frame to be extracted.

    How can I speed it up?
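
    For comparison, and as a general property of ffmpeg rather than anything stated in the post above, placing -ss before -i makes ffmpeg seek within the input instead of decoding every frame from the start up to 00:14:37, which is usually much faster for a request like this:

    ffmpeg -ss 00:14:37 -i 'largeVideoFile.mov' -vframes 1 'thumbnail.jpg'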

  • FFmpeg decodes H264 video from RTP stream very slowly

    3 October 2013, by Dmitry Bakhtiyarov

    I have an H264 RTP stream over the network and I need to decode it. I use libx264 to encode the stream and ffmpeg to decode it. When I connect to the server using VLC, it plays the video correctly without any problem. But when I connect to the server using my application, there is a long period during which the widget that draws video from this stream shows only a single image.

    I checked the log file and found that avcodec_decode_video2() sets got_picture to 1 very rarely! The decoder gives me a new decoded frame on average every 1-2 seconds, but the encoder on the server produces 12-15 fps!

    What are the reasons for these delays in the decoder, and how can I fix them?

    avcodec_register_all();
    av_init_packet(&m_packet);
    m_decoder = avcodec_find_decoder(CODEC_ID_H264);
    if (!m_decoder)
    {
       QString str = QString("Can't find H264 decoder!");
       emit criticalError(str);
    }
    m_decoderContext = avcodec_alloc_context3(m_decoder);

    m_decoderContext->flags |= CODEC_FLAG_LOW_DELAY;
    m_decoderContext->flags2 |= CODEC_FLAG2_CHUNKS;
    AVDictionary* dictionary = nullptr;
    if (avcodec_open2(m_decoderContext, m_decoder, &dictionary) < 0)
    {
       QString str = QString("Failed to open decoder!");
       emit criticalError(str);
    }
    qDebug() << "H264 Decoder successfully opened";
    m_picture = avcodec_alloc_frame();
    ...
       while(m_packet.size > 0)
       {
           int got_picture;
           int len = avcodec_decode_video2(m_decoderContext, m_picture, &got_picture, &m_packet);
           if (len < 0)
           {
               QString err("Decoding error");
               qDebug() << err;
               //emit criticalError(err);
               return false;
           }
           if (got_picture)
           {
                   //create from frame QImage and send it into GUI thread
               qDebug() << "H264Decoder: frame decoded!";

    I tried changing some options of m_decoderContext (e.g. thread_count) but it did not change anything.