Advanced search

Media (0)

Keyword: - Tags -/presse-papier

No media matching your criteria is available on the site.

Other articles (20)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable MediaSPIP release.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

On other sites (4530)

  • RuntimeError: abort(OOM). Build with -s ASSERTIONS=1 for more info

    26 April 2021, by KhoPhi

    I'm using ffmpeg.wasm 0.9.7 (at the time of this question).
    The browser is Brave (Version 1.23.71, Chromium: 90.0.4430.72 (Official Build) (64-bit)).

    I get this error when running an overlay ffmpeg command in the browser, placing a PNG over a 4K HEVC MP4 video of almost 40 megabytes.

    RuntimeError: abort(OOM). Build with -s ASSERTIONS=1 for more info.

    Here's my code in Angular, stripped down for brevity:

    async generatePreviews(event: any) {

      if (!ffmpeg.isLoaded()) {
        await ffmpeg.load();
      }

      for (let index = 0; index < this.files.length; index++) {

        this.video = this.files[index];
        const file_name = this.video.name.split('.')[0] + '-preview.mp4';

        // write the watermark and the current video into ffmpeg.wasm's in-memory filesystem
        ffmpeg.FS('writeFile', 'tempLogo.png', await fetchFile(this.watermark));
        ffmpeg.FS('writeFile', 'tempVideo.mp4', await fetchFile(this.video));

        // working command in a normal terminal:
        // ffmpeg -i tempVideo.mp4 -i tempLogo.png -filter_complex "overlay=(W-w)/2:(H-h)/2" temp.mp4
        await ffmpeg.run(
          '-i',
          'tempVideo.mp4',
          '-i',
          'tempLogo.png',
          '-filter_complex',
          'overlay=(W-w)/2:(H-h)/2',
          'temp.mp4'
        );

        const data = ffmpeg.FS('readFile', 'temp.mp4');

        const blob = new Blob([data.buffer], { type: 'video/mp4' });
        saveAs(blob, file_name); // using FileSaver.js to save the blob
      }
    }

    In Chrome, I read that files of up to 2 GB can be converted. I'm not sure why the OOM occurs. Are there any settings I need to set or changes I need to make?

    Update (4/26/2021)

    This thread offered a solution: building ffmpeg.wasm with a few tweaks. I am able to build it, but using the built files triggers the OOM even faster than the npm package built from the ffmpeg.wasm repo.
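
    Below is a minimal sketch of one mitigation to try, not taken from the original post. It assumes the @ffmpeg/ffmpeg 0.9.x API (createFFmpeg, fetchFile and the FS('unlink', ...) passthrough); the watermarkOne helper name is hypothetical. Unlinking the in-memory files after each video keeps MEMFS from holding every input, logo and output of the batch at once, although a single large 4K HEVC source can still exhaust the wasm heap on its own.

    import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';
    import { saveAs } from 'file-saver';

    const ffmpeg = createFFmpeg({ log: true });

    // Hypothetical per-file helper: write, overlay, read back, then unlink the
    // MEMFS copies so the next iteration starts with an empty in-memory filesystem.
    async function watermarkOne(video: File, watermark: string | File): Promise<void> {
      if (!ffmpeg.isLoaded()) {
        await ffmpeg.load();
      }

      ffmpeg.FS('writeFile', 'tempLogo.png', await fetchFile(watermark));
      ffmpeg.FS('writeFile', 'tempVideo.mp4', await fetchFile(video));

      await ffmpeg.run(
        '-i', 'tempVideo.mp4',
        '-i', 'tempLogo.png',
        '-filter_complex', 'overlay=(W-w)/2:(H-h)/2',
        'temp.mp4'
      );

      const data = ffmpeg.FS('readFile', 'temp.mp4');
      saveAs(new Blob([data.buffer], { type: 'video/mp4' }),
             video.name.split('.')[0] + '-preview.mp4');

      // Free the temporary files; MEMFS otherwise keeps them alive for the
      // lifetime of the ffmpeg instance.
      ffmpeg.FS('unlink', 'tempVideo.mp4');
      ffmpeg.FS('unlink', 'tempLogo.png');
      ffmpeg.FS('unlink', 'temp.mp4');
    }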

  • Decoder return of av_find_best_stream vs. avcodec_find_decoder

    7 October 2016, by Jason C

    The docs for libav’s av_find_best_stream function (libav 11.7, Windows, i686, GPL) specify a parameter that can be used to receive a pointer to an appropriate AVCodec:

    decoder_ret - if non-NULL, returns the decoder for the selected stream

    There is also the avcodec_find_decoder function which can find an AVCodec given an ID.

    However, the official demuxing + decoding example uses av_find_best_stream to find a stream, but chooses to use avcodec_find_decoder to find the codec in lieu of av_find_best_stream’s codec return parameter:

    ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
    ...
    stream_index = ret;
    st = fmt_ctx->streams[stream_index];
    ...
    /* find decoder for the stream */
    dec = avcodec_find_decoder(st->codecpar->codec_id);

    As opposed to something like:

    ret = av_find_best_stream(fmt_ctx, type, -1, -1, &dec, 0);

    My question is pretty straightforward: Is there a difference between using av_find_best_stream’s return parameter vs. using avcodec_find_decoder to find the AVCodec?

    The reason I ask is that the example chose to use avcodec_find_decoder rather than the seemingly more convenient return parameter, and I can’t tell whether it did so for a specific reason. The documentation itself is a little spotty and disjointed, so it’s hard to tell whether things like this are done for an important reason. I can’t tell if the example is implying that it "should" be done that way, or if its author did it for some more arbitrary personal reason.
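
    For comparison, here is a minimal sketch, not from the original post, of the decoder_ret route: letting av_find_best_stream() hand back the decoder and opening it directly. The open_best_stream() wrapper and its error handling are hypothetical additions; on newer FFmpeg versions the decoder pointer must be declared const AVCodec *.

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Sketch: pick the best stream of the given type and open its decoder,
     * using the decoder returned by av_find_best_stream() itself. */
    static int open_best_stream(AVFormatContext *fmt_ctx, enum AVMediaType type,
                                AVCodecContext **dec_ctx_out)
    {
        AVCodec *dec = NULL;   /* const AVCodec * on FFmpeg >= 5.0 */
        int ret, stream_index;

        stream_index = av_find_best_stream(fmt_ctx, type, -1, -1, &dec, 0);
        if (stream_index < 0)
            return stream_index;    /* no such stream, or no decoder for it */

        AVCodecContext *dec_ctx = avcodec_alloc_context3(dec);
        if (!dec_ctx)
            return AVERROR(ENOMEM);

        /* copy the stream parameters into the context, then open the decoder */
        ret = avcodec_parameters_to_context(dec_ctx,
                                            fmt_ctx->streams[stream_index]->codecpar);
        if (ret >= 0)
            ret = avcodec_open2(dec_ctx, dec, NULL);
        if (ret < 0) {
            avcodec_free_context(&dec_ctx);
            return ret;
        }

        *dec_ctx_out = dec_ctx;
        return stream_index;
    }

    One practical difference, at least in recent FFmpeg sources, is that with a non-NULL decoder_ret av_find_best_stream() skips streams for which no decoder is available, which a separate avcodec_find_decoder() call made afterwards cannot do.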

  • avutil/dovi_meta: add Dolby Vision extension blocks

    23 March 2024, by Niklas Haas
    avutil/dovi_meta: add Dolby Vision extension blocks

    As well as accessors, plus a function for allocating this struct with
    extension blocks.

    Definitions generously taken from quietvoid/dovi_tool, which is
    assembled as a collection of various patent fragments, as well as output
    by the official Dolby Vision bitstream verifier tool.

    • [DH] doc/APIchanges
    • [DH] libavutil/dovi_meta.c
    • [DH] libavutil/dovi_meta.h
    • [DH] libavutil/version.h