
Media (91)

Other articles (53)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (5957)

  • Web-based video editor

    13 April 2021, by Danny

    We currently have a web-based editor that allows users to build animated web apps. The apps are made up of shapes, text, images, and videos. Except for videos, all other elements can also be animated around the screen. The result of building an animated app is basically a big blob of JSON.

    



    The playback code for the web app is web-based as well. It takes the JSON blob and constructs the HTML, which ends up playing back in some sort of browser environment. The problem is that most of the time this playback happens on lower-end hardware such as televisions and set-top boxes, which struggle to render the animations smoothly.

    



    These performance issues go away if there is some way to convert a digital sign to video. Then the STB/smart TV simply plays a video, which is much more performant than playing back animations in a web view.

    



    Given a blob of JSON describing each layer, how to draw each type of object, its animation points, etc., how could I take that and convert it to video on the server?

    



    My first attempt at this was using PhantomJS to load the playback page in a headless browser, take a series of screenshots, and then use ffmpeg to merge those screenshots into a video. That worked great as long as there was no video. But it does not work with video, since there is no HTML5 video tag support in PhantomJS, and even if there were, I would lose any audio.
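
    For reference, the screenshot-merging step itself is straightforward; assuming a 30 fps capture and hypothetical shot_%04d.png filenames (both are illustrative values, not from the setup above), the merge could look something like:

    ffmpeg -framerate 30 -i shot_%04d.png -c:v libx264 -pix_fmt yuv420p animation.mp4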

    



    The other way I was thinking of doing it would be to again load the playback page in PhantomJS, but turn off the video layers and leave them transparent, then take screenshots as a series of PNGs with transparency. I would then combine these with the video layers.
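
    A minimal sketch of that compositing step, assuming the transparent screenshots are saved as ui_%04d.png and there is a single pre-rendered video layer (both names are hypothetical), might be:

    ffmpeg -i video_layer.mp4 -framerate 30 -i ui_%04d.png -filter_complex "[0:v][1:v]overlay[v]" -map "[v]" -map 0:a? -c:v libx264 -pix_fmt yuv420p -c:a copy composited.mp4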

    



    None of this feels very elegant, though. I know there are web-based video editors out there that basically do what I'm trying to accomplish, so how do they do it?

    


  • Revision 513157e093: Scatter-based scantables. This gains about 0.2% on derf, 0.1% on hd and 0.4% on

    25 March 2013, by Ronald S. Bultje

    Changed Paths: Modify /configure, Modify /vp9/common/vp9_entropy.c, Modify /vp9/decoder/vp9_dequantize.c. Scatter-based scantables. This gains about 0.2% on derf, 0.1% on hd and 0.4% on stdhd. I can put this under an experimental flag if wanted, just trying to get my patch queue in shape. Change-Id: (...)

  • Extracting frame out of video based on time in seconds

    20 April 2024, by Vicky

    I'm developing a web-based video editing tool where users can pause a video and draw circles or lines on it using canvas. When a user pauses the video, I retrieve the current playback time in seconds using the HTML5 video.currentTime property. I then send this time value along with the shape details to the server. On the server side, we use FFmpeg to extract the specific paused frame from the video. The issue I'm encountering is that the frame displayed in the browser does not always match the one generated in the backend by FFmpeg.

    


    I've experimented with various approaches for this process.

    


    Extracting the frame based on time. Example: in this case the time is 3.360 seconds.

    ffmpeg -i input.mp4 -ss 00:00:03.360 -frames:v 1 frame.jpg
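
    A variant worth comparing is passing the same timestamp as an input option (before -i); FFmpeg then seeks to the nearest keyframe and decodes forward to the requested time, which should normally land on the same frame while being faster on long inputs:

    ffmpeg -ss 00:00:03.360 -i input.mp4 -frames:v 1 frame.jpg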

    Converting the time to a frame number using the following logic: Math.round(video.currentTime * fps)
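
    As a worked example (assuming, purely for illustration, a 30 fps video): Math.round(3.360 * 30) = Math.round(100.8) = 101, which is where a frame index such as the eq(n,101) in the command below comes from.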

    ffmpeg -i input.mp4 -vf "select='eq(n,101)'" -vsync vfr frame.jpg

    ffmpeg -i input.mp4 -vf "select='lt(t,3.360)*lt(3.360-t,1/31.019)',setpts=N/(31.019*TB)" -vsync 0 frame.jpg

    The challenge I'm facing is that sometimes the frame I see in the browser at the pause time doesn't match the one generated in the backend using FFmpeg. How can I solve this problem? If it's an issue with currentTime, are there any other approaches I can try?
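
    One way to narrow this down is to have FFmpeg log the timestamp of the frame it actually extracts, using the showinfo filter (the frame number 101 here is just the example value from above), and compare the reported pts_time against the currentTime value captured in the browser:

    ffmpeg -i input.mp4 -vf "select='eq(n,101)',showinfo" -vsync vfr -frames:v 1 frame.jpg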