Other articles (101)

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu entry is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    Users can also reach profile editing from their author page; a "Modifier votre profil" link in the navigation is (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform (XMP) is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)

On other sites (10249)

  • Web-based video editor

    13 April 2021, by Danny

    We currently have a web-based editor that allows users to build animated web apps. The apps are made up of shapes, text, images, and videos. Except for videos, all other elements can also be animated around the screen. The result of building an animated app is basically a big blob of JSON.

    



    The playback code for the web app is web-based as well. It takes the JSON blob and constructs the HTML, which ends up playing back in some sort of browser environment. The problem is that most of the time this playback occurs on lower-end hardware like televisions and set-top boxes.

    



    These performance issues go away if there is some way to convert a digital sign to video. Then the STB/smart TV simply plays a video, which is much more performant than playing back animations in a web view.

    



    Given a blob of JSON describing each layer, how to draw each type of object, its animation points, and so on, how could I take that and convert it to video on the server?

    



    My first attempt at this was to use PhantomJS to load the playback page in a headless browser, take a series of screenshots, and then use ffmpeg to merge those screenshots into a video. That worked great as long as there was no video. But it does not work with video, since PhantomJS has no HTML5 video tag support, and even if it did, I would lose any audio.
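    For reference, the screenshots-to-video step can look roughly like this (the frame naming, frame rate, and output name are illustrative assumptions, not the exact setup):

    # assuming PhantomJS saved numbered screenshots (frame_0001.png, frame_0002.png, ...)
    # captured at 30 fps; stitch them into an H.264 MP4
    ffmpeg -framerate 30 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p screenshots.mp4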

    



    The other way I was thinking of doing it would be to again load the playback page in PhantomJS, but turn off the video layers and leave them transparent, then take screenshots as a series of PNGs with transparency. I would then combine these with the video layers.
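    A rough sketch of that compositing step (file names, frame rate, and stream layout are assumptions; the real command depends on how the layers line up):

    # assuming the transparent PNG sequence matches the source video's size and frame rate:
    # overlay the rendered layers on top of the video and keep the original audio
    ffmpeg -i source_video.mp4 -framerate 30 -i layer_%04d.png \
           -filter_complex "[0:v][1:v]overlay=0:0[v]" \
           -map "[v]" -map 0:a? -c:v libx264 -c:a copy composited.mp4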

    



    None of this feels very elegant, though. I know there are web-based video editors out there that basically do what I'm trying to accomplish, so how do they do it?

    


  • FFmpeg throwing error during video render process using Remotion

    31 May 2022, by Vince

    I'm using a tool called Remotion, which allows you to create videos using JavaScript (Node and React). I have the server-side rendering process working, unless I try to include an mp3. Per their docs, I have my audio imported like this:

    


    import goodtimes from '../../music/GoodTimes.mp3';

    ...

    <Audio src={goodtimes} />


    I'm currently trying to troubleshoot this using the CLI, because the error messaging is better. The command runs, and seems to generate the full video – 2400 out of 2400 frames. But then it fails with this output:


    (2/3) [====================] Rendered frames (6x) 176540ms
        + [====================] Downloading http://localhost:3000/1ba90857580e6291.mp3
    (3/3) [====================] Encoding video 2400/2400
    An error occurred:
    Error: Command failed with exit code 1: ffprobe -v error -show_entries stream=channels:format=duration -of default=nw=1 /var/folders/6n/_nfb38t53dgdj092_pcnqqmw0000gn/T/remotion-assets-dir2jc2oppruh/7744374580215663.mp3
    [mp3 @ 0x7fa78152ec40] Failed to read frame size: Could not seek to 5244.
    /var/folders/6n/_nfb38t53dgdj092_pcnqqmw0000gn/T/remotion-assets-dir2jc2oppruh/7744374580215663.mp3: Invalid argument
        at makeError (/Users/voverson/clm-video/server/node_modules/execa/lib/error.js:60:11)
        at handlePromise (/Users/voverson/clm-video/server/node_modules/execa/index.js:118:26)
        at runMicrotasks (<anonymous>)
        at processTicksAndRejections (internal/process/task_queues.js:93:5)
        at async getAudioChannelsAndDuration (/Users/voverson/clm-video/server/node_modules/@remotion/renderer/dist/assets/get-audio-channels.js:17:18)
        at async preprocessAudioTrackUnlimited (/Users/voverson/clm-video/server/node_modules/@remotion/renderer/dist/preprocess-audio-track.js:14:36)


    I'm hoping some ffmpeg experts might recognize this error and have some idea what might be causing it.
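    For what it's worth, the failing ffprobe invocation is visible in the log above, so the same probe can be run by hand against the original file (the path below is an assumption based on the import) to check whether the mp3 itself is readable or whether only Remotion's temporary copy is broken:

    ffprobe -v error -show_entries stream=channels:format=duration \
            -of default=nw=1 music/GoodTimes.mp3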


  • Burning subtitles into video with ffmpeg with specific margin not working

    17 May 2024, by Anthony

    I am trying to build a tool where people can position their subtitles on an HTML5 video and then I will burn the subtitles into the video for them.


    I can easily get the position of the subtitles in whatever format I need. However, no combination of values that I feed to ffmpeg is working.


    My holy grail is an x and y offset (e.g. 5% from the left and 10% from the bottom, expressed in terms of the video width/height).


    To achieve this, I would like the subtitles to start in the very bottom-left corner.


    ffmpeg -y -i english.mp4 -vf "subtitles=english.srt:force_style='Alignment=1,OutlineColour=&H100000000,BorderStyle=3,Outline=1,Shadow=0,FontName=Arial,FontSize=24,MarginL=140,MarginV=0'" -c:v libx264 -crf 23 -c:a copy output_video.mp4


    The command above comes close to working, but it behaves very strangely. The video is 850x480, but with a left margin of 140 the subtitles end up way past the midway point of the video.


    Am I missing something? How can I anchor the subtitles in the bottom-left corner and then apply the margins I want to push them away from the left and away from the bottom? I could do it as a percentage of the video width/height if that is supported, or just in pixels (but pixels clearly don't seem to be working here).
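    One hedged guess rather than a confirmed fix: with an .srt input, libass interprets force_style values such as MarginL, MarginV, and FontSize in its default script resolution (384x288), not in output pixels, which would explain why MarginL=140 lands so far to the right on an 850-pixel-wide frame. A sketch under that assumption is to convert the SRT to ASS, set the script resolution to the real video size, and then give the margins in video pixels (here roughly 5% from the left and 10% from the bottom):

    # 1) convert the SRT to ASS
    ffmpeg -i english.srt english.ass
    # 2) in english.ass, set these under [Script Info] so styles are measured
    #    against the real frame size:
    #      PlayResX: 850
    #      PlayResY: 480
    # 3) burn it in; MarginL/MarginV are now in the 850x480 coordinate space
    ffmpeg -y -i english.mp4 \
           -vf "subtitles=english.ass:force_style='Alignment=1,BorderStyle=3,Outline=1,Shadow=0,FontName=Arial,FontSize=24,MarginL=42,MarginV=48'" \
           -c:v libx264 -crf 23 -c:a copy output_video.mp4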
