Other articles (53)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, as long as your MediaSPIP installation is at version 0.2 or later. If in doubt, contact the administrator of your MediaSPIP to check.

  • No talk of markets, clouds, etc.

    10 April 2011

    The vocabulary used on this site tries to avoid any reference to the buzzwords that flourish so freely
    on the web 2.0 scene and in the companies that live off it.
    You are therefore invited to avoid using terms such as "Brand", "Cloud", "Market", etc.
    Our motivation is above all to create a simple tool, accessible to everyone, that encourages
    the sharing of creations on the Internet and lets authors keep as much autonomy as possible.
    No "Gold or Premium contract" is therefore planned, no (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
    images: png, gif, jpg, bmp and more
    audio: MP3, Ogg, Wav and more
    video: AVI, MP4, OGV, mpg, mov, wmv and more
    text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (6267)

  • RuntimeError: abort(OOM). Build with -s ASSERTIONS=1 for more info

    26 April 2021, by KhoPhi

    I'm using ffmpeg.wasm 0.9.7 (at the time of this question).
    The browser is Brave (Version 1.23.71 Chromium: 90.0.4430.72 (Official Build) (64-bit)).

    I get this error when running an overlay ffmpeg command in the browser, slapping a PNG over an HEVC MP4 4K video of almost 40 MB.

    RuntimeError: abort(OOM). Build with -s ASSERTIONS=1 for more info.

    Here's my code in Angular, stripped down for brevity:

    async generatePreviews(event: any) {
      if (!ffmpeg.isLoaded()) {
        await ffmpeg.load();
      }

      for (let index = 0; index < this.files.length; index++) {
        this.video = this.files[index];
        const file_name = this.video.name.split('.')[0] + '-preview.mp4';

        // write the watermark and the current video into ffmpeg.wasm's in-memory FS
        ffmpeg.FS('writeFile', 'tempLogo.png', await fetchFile(this.watermark));
        ffmpeg.FS('writeFile', 'tempVideo.mp4', await fetchFile(this.video));

        // working command in a normal terminal:
        // ffmpeg -i tempVideo.mp4 -i tempLogo.png -filter_complex "overlay=(W-w)/2:(H-h)/2" temp.mp4
        await ffmpeg.run(
          '-i', 'tempVideo.mp4',
          '-i', 'tempLogo.png',
          '-filter_complex', 'overlay=(W-w)/2:(H-h)/2',
          'temp.mp4'
        );

        // read the result back from the in-memory FS and hand it to the browser
        const data = ffmpeg.FS('readFile', 'temp.mp4');
        const blob = new Blob([data.buffer], { type: 'video/mp4' });
        saveAs(blob, file_name); // using FileSaver.js to save the blob
      }
    }

    In Chrome, I read that files of up to 2 GB can be converted, so I'm not sure why the OOM occurs. Are there any settings I need to set or changes I need to make?

    Update (4/26/2021)

    This thread offered a solution: building ffmpeg.wasm with a few tweaks. I am able to build, but using the built files triggers the OOM even faster than the npm package built from the ffmpeg.wasm repo.

  • Decoder return of av_find_best_stream vs. avcodec_find_decoder

    7 October 2016, by Jason C

    The docs for libav’s av_find_best_stream function (libav 11.7, Windows, i686, GPL) specify a parameter that can be used to receive a pointer to an appropriate AVCodec:

    decoder_ret - if non-NULL, returns the decoder for the selected stream

    There is also the avcodec_find_decoder function which can find an AVCodec given an ID.

    However, the official demuxing + decoding example uses av_find_best_stream to find a stream, but chooses to use avcodec_find_decoder to find the codec in lieu of av_find_best_stream’s codec return parameter:

    ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
    ...
    stream_index = ret;
    st = fmt_ctx->streams[stream_index];
    ...
    /* find decoder for the stream */
    dec = avcodec_find_decoder(st->codecpar->codec_id);

    As opposed to something like:

    ret = av_find_best_stream(fmt_ctx, type, -1, -1, &dec, 0);

    My question is pretty straightforward: Is there a difference between using av_find_best_stream’s return parameter vs. using avcodec_find_decoder to find the AVCodec?

    The reason I ask is because the example chose to use avcodec_find_decoder rather than the seemingly more convenient return parameter, and I can’t tell if the example did that for a specific reason or not. The documentation itself is a little spotty and disjoint, so it’s hard to tell if things like this are done for a specific important reason or not. I can’t tell if the example is implying that it "should" be done that way, or if the example author did it for some more arbitrary personal reason.
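
    For reference, a minimal sketch (not part of the question or the official example) showing the two lookups side by side. It is written against a recent FFmpeg; older libav/FFmpeg releases declare decoder_ret as AVCodec ** rather than const AVCodec **. fmt_ctx is assumed to be an AVFormatContext that has already been opened and probed.

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    static int pick_video_decoder(AVFormatContext *fmt_ctx)
    {
        /* Variant 1: let av_find_best_stream hand back the decoder directly. */
        const AVCodec *dec = NULL;
        int stream_index = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO,
                                                -1, -1, &dec, 0);
        if (stream_index < 0)
            return stream_index; /* AVERROR_STREAM_NOT_FOUND or AVERROR_DECODER_NOT_FOUND */

        /* Variant 2: look the decoder up by codec id, as the official example does. */
        AVStream *st = fmt_ctx->streams[stream_index];
        const AVCodec *dec2 = avcodec_find_decoder(st->codecpar->codec_id);

        /* In the normal case both variants yield a decoder for the same codec id. */
        return (dec && dec2) ? stream_index : AVERROR_DECODER_NOT_FOUND;
    }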

  • avutil/dovi_meta: add dolby vision extension blocks

    23 March 2024, by Niklas Haas
    avutil/dovi_meta: add dolby vision extension blocks
    

    As well as accessors, plus a function for allocating this struct with
    extension blocks. (A short usage sketch of this side data follows the
    file list below.)

    Definitions generously taken from quietvoid/dovi_tool, which is
    assembled as a collection of various patent fragments, as well as output
    by the official Dolby Vision bitstream verifier tool.

    • [DH] doc/APIchanges
    • [DH] libavutil/dovi_meta.c
    • [DH] libavutil/dovi_meta.h
    • [DH] libavutil/version.h
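
    For context, and not part of the commit above: a minimal sketch of how Dolby Vision metadata side data is reached through libavutil's existing dovi_meta accessors. The extension-block accessors this commit introduces are consumed from the same AVDOVIMetadata payload; frame is assumed to come from a decoder that exports AV_FRAME_DATA_DOVI_METADATA.

    #include <libavutil/frame.h>
    #include <libavutil/dovi_meta.h>

    static void inspect_dovi(const AVFrame *frame)
    {
        /* Dolby Vision RPU metadata, if present, rides along as frame side data. */
        AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_DOVI_METADATA);
        if (!sd)
            return; /* no Dolby Vision metadata on this frame */

        const AVDOVIMetadata *dovi = (const AVDOVIMetadata *) sd->data;

        /* The existing accessors return pointers into the variable-size payload;
         * the new extension-block accessors follow the same pattern. */
        const AVDOVIRpuDataHeader *hdr = av_dovi_get_header(dovi);
        const AVDOVIDataMapping *mapping = av_dovi_get_mapping(dovi);
        const AVDOVIColorMetadata *color = av_dovi_get_color(dovi);
        (void) hdr; (void) mapping; (void) color;
    }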