Other articles (12)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP means:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being XML-based, it handles a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to record, as an XML document, information relating to a file: title, author, history (...)

  • Emballe Médias: Putting documents online simply

    29 October 2010, by

    The emballe médias plugin was developed primarily for the MediaSPIP distribution, but it is also used in other related projects, such as géodiversité. Required and compatible plugins
    To work, this plugin requires other plugins to be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui
    Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

On other sites (2759)

  • How can I make Windows "like" the mp4 files I create in Linux and sync with rsync

    17 July 2019, by Geoff Fox

    I am a TV meteorologist working remotely from a studio I built. My control room uses a TriCaster, an amazing studio-in-a-box which runs on a Windows 7 variant. I make my weather maps myself on a CentOS 7 machine, around 40,000 a day.

    I don’t entirely understand the problem, but here’s a quote from someone helping me at NewTek (the TriCaster company):

    Rsync is built on a *nix-based environment where all the file permissions and attributes are based on the Linux environment. There is no meaning for this in NTFS and Windows. The result is that you get files that will most likely have the read-only flag set or no flags at all. Other attributes will be delivered as null. I’m sure from your own programming experience, programs don’t like null values and they generally have to be accounted for very specifically.

    And so the finely tuned TriCaster stumbles, meaning lost frames or other playback problems with my short weather animations.

    Here are some samples of the rsync commands I use:

    rsync -r -t -s -v --no-p --chmod=ugo=rwX /var/www/html/output/loops/mp4/conus*.mp4 /mnt/tricaster/Clips/Import
    rsync -r -t -s -v --no-p --chmod=ugo=rwX /var/www/html/output/loops/mp4/nebraska*.mp4 /mnt/tricaster/Clips/Import
    rsync -r -t -s -v --no-p --chmod=ugo=rwX /var/www/html/output/loops/mp4/northernplains*.mp4 /mnt/tricaster/Clips/Import

    These are mp4 files, used only locally. I really don’t care what flags are set or how the permissions are filled in, as long as Windows 7 doesn’t care.

    At this point I always like to tell folks that, although I do write some code, my last computer class was in high school, the ’67-68 semester. Thanks in advance for your help.
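
    A possible direction, offered only as a hedged sketch rather than a verified fix: the path /mnt/tricaster/Clips/Import in the commands above suggests the TriCaster share is mounted over CIFS/SMB. If so, the Windows-visible permissions can be pinned once at mount time instead of being negotiated per file by rsync. The share name //TRICASTER/Media, the credentials file and the modes below are assumptions, not values taken from the question.

    # share name and credentials path below are placeholders
    mount -t cifs //TRICASTER/Media /mnt/tricaster -o credentials=/root/.tricaster-creds,file_mode=0644,dir_mode=0755,noperm

    With the modes fixed by the mount options, the --chmod and --no-p flags in the rsync commands should no longer influence what Windows 7 sees for the copied mp4 files.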

  • Play HLS segments through Media Source Extensions

    11 February 2018, by ler

    I got a list of .m4s segments and an init.mp4 from this FFmpeg command:

    ffmpeg -i bunny.mp4 -f hls -hls_segment_type fmp4 -c:v copy playlist.m3u8

    I send those chunks using Socket.IO and try to play them through MSE (Media Source Extensions).
    When I send them in this order:

    init.mp4 + playlist0.m4s + playlist1.m4s ...

    They play without any problem. But when I want to start from chunk number 3, meaning init.mp4 + playlist3.m4s for example, I get this error:

    video frame with PTS 0us has negative DTS -80000us after applying timestampOffset, handling any discontinuity, and filtering against append window.

    I want to be able to start from any chunk. Currently the only way to play the video is to start with init.mp4 + playlist0.m4s, i.e. with playlist0.m4s, because init.mp4 contains just the headers of the video. This is the client code I’m using:

    var socket = io();
    var video = document.querySelector('video');
    // Codec string matching the fMP4 segments produced by the ffmpeg command above.
    var mimeCodec = 'video/mp4; codecs="avc1.64000d,mp4a.40.2"';

    if ('MediaSource' in window && MediaSource.isTypeSupported(mimeCodec)) {
        var mediaSource = new MediaSource();
        video.src = URL.createObjectURL(mediaSource);

        mediaSource.addEventListener('sourceopen', function () {
            var sourceBuffer = this.addSourceBuffer(mimeCodec);
            sourceBuffer.mode = 'segments';

            // Try to start playback every time an append finishes; ignore autoplay rejections.
            sourceBuffer.addEventListener('updateend', function () {
                video.play().catch(function (error) { });
            });

            // Each broadcast message carries the URI of the next chunk to fetch and append.
            socket.on('broadcast', function (chunk) {
                downloadData(chunk.uri, function (arrayBuffer) {
                    sourceBuffer.appendBuffer(arrayBuffer);
                });
            });
        });
    } else {
        console.error('Unsupported MIME type or codec: ', mimeCodec);
    }

    // Fetch a segment as an ArrayBuffer and hand it to the callback.
    function downloadData(url, cb) {
        var xhr = new XMLHttpRequest();
        xhr.open('get', url);
        xhr.responseType = 'arraybuffer';
        xhr.onload = function () {
            cb(new Uint8Array(xhr.response));
        };
        xhr.send();
    }
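
    One direction worth sketching, not a verified fix: SourceBuffer.timestampOffset shifts the timestamps of appended fragments, so a later segment can be mapped back toward time zero instead of keeping its original media time. The handler below would replace the sourceopen handler above; SEG_DURATION and startIndex are assumptions (SEG_DURATION must match the actual segment length ffmpeg produced), and whether this alone clears the negative-DTS error depends on how the fragment timestamps were written.

    // Sketch only: start playback from an arbitrary .m4s segment.
    var SEG_DURATION = 2;  // assumed seconds per segment; must match the actual hls_time
    var startIndex = 3;    // e.g. begin with playlist3.m4s

    mediaSource.addEventListener('sourceopen', function () {
        var sourceBuffer = this.addSourceBuffer(mimeCodec);
        // Map segment N's media time back to roughly zero on the presentation timeline.
        sourceBuffer.timestampOffset = -startIndex * SEG_DURATION;

        downloadData('init.mp4', function (initData) {
            sourceBuffer.appendBuffer(initData);
            sourceBuffer.addEventListener('updateend', function onInit() {
                sourceBuffer.removeEventListener('updateend', onInit);
                downloadData('playlist' + startIndex + '.m4s', function (segment) {
                    sourceBuffer.appendBuffer(segment);
                });
            });
        });
    });

    Another knob worth experimenting with is sourceBuffer.mode = 'sequence', which makes the browser re-time appended fragments in arrival order instead of trusting their embedded timestamps.
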
  • Generating a P-frame based on an I-frame

    17 October 2016, by Navid Ahmadi

    Say I have 5 images that are quite similar. I’d like to compress images 2, 3, 4 and 5 based on the first image, somewhat similar to the way P-frames are generated from an I-frame.

    • In general, what’s the best way/tool to do so?
    • For instance, using FFmpeg, is it possible to generate P-frames and store them in a separate file?

    Edit:
    Although similar, I am not looking for simply generating a diff between the two images. My goal is to somehow use the information in the first image to make the consecutive images much smaller. If I simply do a diff, the diff itself is about the same size (only about 10% smaller), which is not as much of a reduction as I expect. If I generate an mp4 video containing these 5 frames, the video size is much less than putting the 5 frames in a file, which probably has to do with frame prediction based on the I-frames. Is there a way to generate those predicted frames one by one and store them individually?
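
    A rough sketch of the second bullet, under stated assumptions: with libx264, ffmpeg can be told to emit a single I-frame followed only by P-frames by fixing the GOP size, disabling B-frames and turning off scene-cut detection. The filenames img1.png to img5.png and out.mp4 are hypothetical; the predicted frames still end up inside one container, so storing each one as a separate file would need an additional demuxing step.

    # img%d.png and out.mp4 are placeholder names for the five stills and the result
    ffmpeg -framerate 1 -start_number 1 -i img%d.png -c:v libx264 -g 5 -bf 0 -sc_threshold 0 out.mp4

    Running ffprobe -show_frames out.mp4 on the result should report pict_type I for the first frame and P for the remaining four, which is the structure the question is after.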