
Other articles (71)

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require specific knowledge since SPIP's usual private area is no longer used.
    To begin with, you must have installed the same files as the installation (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all kinds.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "media" article;

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
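
    For illustration only (a hedged sketch, not MediaSPIP's actual encoding pipeline; input names, codecs and quality settings are placeholders), conversions of the kind described above can be done with ffmpeg along these lines:

    # Video: WebM and OGV for HTML5 playback, MP4 (H.264/AAC) as the Flash-era fallback.
    ffmpeg -i upload.mov -c:v libvpx -b:v 1M -c:a libvorbis upload.webm
    ffmpeg -i upload.mov -c:v libtheora -q:v 7 -c:a libvorbis upload.ogv
    ffmpeg -i upload.mov -c:v libx264 -crf 23 -c:a aac upload.mp4

    # Audio: Ogg Vorbis for HTML5, MP3 as the fallback.
    ffmpeg -i upload.wav -c:a libvorbis -q:a 5 upload.ogg
    ffmpeg -i upload.wav -c:a libmp3lame -q:a 2 upload.mp3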

On other sites (13517)

  • Smart y coordinate to make vertical alignment for the text with typewriting effect

    31 August 2021, by Макс Шульдинер

    I'm animating a typewriting effect. Here is my ffmpeg command:

    


    -i ffmpeg_inputs/output-onlinegiftools.gif -vf "[in]drawtext=fonts/RobotoMono-Regular.ttf:text='h':fontcolor=orange:fontsize=35:x=(w-text_w)/2-200+0:y=h-th-400:enable='between(t,0.00,7.80)', drawtext=fonts/RobotoMono-Regular.ttf:text='i':fontcolor=orange:fontsize=35:x=(w-text_w)/2-200+25:y=h-th-400:enable='between(t,0.80,7.80)', drawtext=fonts/RobotoMono-Regular.ttf:text='g':fontcolor=orange:fontsize=35:x=(w-text_w)/2-200+75:y=h-th-400:enable='between(t,1.60,7.80)', drawtext=fonts/RobotoMono-Regular.ttf:text='u':fontcolor=orange:fontsize=35:x=(w-text_w)/2-200+100:y=h-th-400:enable='between(t,2.40,7.80)', drawtext=fonts/RobotoMono-Regular.ttf:text='y':fontcolor=orange:fontsize=35:x=(w-text_w)/2-200+125:y=h-th-400:enable='between(t,3.20,7.80)', drawtext=fonts/RobotoMono-Regular.ttf:text='s':fontcolor=orange:fontsize=35:x=(w-text_w)/2-200+150:y=h-th-400:enable='between(t,4.00,7.80)'[out]" ffmpeg_outputs/test2.gif -y


    


    Here are the results with different y values:

    [screenshot: result with one y value]

    [screenshot: result with another y value]
    As I understand it, to make the sentence look smooth, some letters need top vertical alignment while others need bottom vertical alignment. How can I achieve this "smart alignment", or is the only option to hardcode y values for the different letters?
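
    One possible approach (a sketch, not taken from the original thread): rather than bottom-aligning each glyph's bounding box with y=h-th-400, where th changes from letter to letter, anchor every letter to a fixed baseline using drawtext's ascent (max_glyph_a) variable. Assuming y positions the top of the rendered text, y=(h-400)-ascent should keep the baseline at h-400 for every glyph, with or without ascenders or descenders. The coordinates and timings below reuse the question's values; only the first three letters are shown, the rest follow the same pattern.

    # Hedged sketch: common baseline at h-400 for every letter.
    ffmpeg -i ffmpeg_inputs/output-onlinegiftools.gif -vf "
      drawtext=fontfile=fonts/RobotoMono-Regular.ttf:text='h':fontcolor=orange:fontsize=35:x=(w-text_w)/2-200+0:y=(h-400)-ascent:enable='between(t,0.00,7.80)',
      drawtext=fontfile=fonts/RobotoMono-Regular.ttf:text='i':fontcolor=orange:fontsize=35:x=(w-text_w)/2-200+25:y=(h-400)-ascent:enable='between(t,0.80,7.80)',
      drawtext=fontfile=fonts/RobotoMono-Regular.ttf:text='g':fontcolor=orange:fontsize=35:x=(w-text_w)/2-200+75:y=(h-400)-ascent:enable='between(t,1.60,7.80)'
    " ffmpeg_outputs/test_baseline.gif -y

    If the baseline assumption does not hold on a given ffmpeg build, hardcoding per-letter y offsets remains the fallback the question mentions.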

    


  • Restreaming and transcoding a hls stream with FFMPEG.WASM fails due to tcp connection

    13 July 2021, by Hoang Nguyen

    I'm trying to implement live transcoding of an HLS stream (H.265) into another HLS stream (H.264) so the video can be played with an HTML5 video player, since H.265 is not supported in browsers. A quick summary of my tech stack:

    


    - Electron desktop app as the client.
    - FFMPEG.WASM library: https://ffmpegwasm.github.io/

    


    (*) The on-the-fly transcoding is supposed to happen right on the client side.

    


    My dev environment:

    


    "devDependencies": { "electron": "^13.1.5", "electron-packager": "^13.0.1", "electron-winstaller": "^2.7.0" }

    


    There are two ways to use this library: as a normal HTML script, or in Node.js style. I have tried both and get different errors (though they are all about the connection):

    


    (*) Regular JS way

    


    <script async defer src="https://unpkg.com/@ffmpeg/ffmpeg@0.10.1/dist/ffmpeg.min.js"></script>

    <script>
      async function loadPlayerHEVC() {
        var resource = 'http://10.70.39.32:80/streams/60dd68fdc88f570012526657/stream/60dd68fdc8....526657.m3u8';
        const { createFFmpeg } = FFmpeg;
        const ffmpeg = createFFmpeg({ log: true });
        const { fetchFile } = FFmpeg;
        await ffmpeg.load();
        await ffmpeg.run('-re', '-i', resource, '-vcodec', 'libx264', '-acodec', 'copy',
                         '-f', 'hls', '-hls_list_size', '3', '-hls_wrap', '5', 'playlist.m3u8');
        // ffmpeg.exit(0);
      }
    </script>

    [screenshot: the resulting error in the browser case]

    (*) Node.js way

    async function tester(url) {
        const { createFFmpeg, fetchFile } = require('@ffmpeg/ffmpeg');
        const ffmpeg = createFFmpeg({ log: true });
        await ffmpeg.load();
        await ffmpeg.run('tcp', '-re', '-i', url, '-vcodec', 'libx264', '-acodec', 'aac',
                         '-f', 'flv', '-hls_list_size', '3', '-hls_wrap', '5', 'playlist.m3u8');
        // ffmpeg.exit(0);
    }

    [screenshot: the resulting error in the Node.js case]

    URL for testing


    You guys can use this public stream to reproduce the scenario: http://113.163.94.245/hls-live/livepkgr/_definst_/liveevent/thbt.m3u8


    Any help would be much appreciated.
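
    For comparison only (a hedged sketch, not from the question; the output name and segment length are illustrative), the same transcode with a native ffmpeg binary, where network input is available, would look roughly like this:

    ffmpeg -re -i "http://113.163.94.245/hls-live/livepkgr/_definst_/liveevent/thbt.m3u8" \
           -c:v libx264 -c:a aac \
           -f hls -hls_time 4 -hls_list_size 3 \
           playlist.m3u8

    If the connection errors come from ffmpeg.wasm being unable to open the URL itself, the usual pattern in its documentation is to download the input first (for example with fetchFile) and write it to the in-memory filesystem with ffmpeg.FS('writeFile', ...) before calling ffmpeg.run.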


  • How to extract tiles from ffmpeg thumbnails by cutting them to the length of the video

    22 March 2021, by JeeNi

    I have searched for a solution in many ways but couldn't find an answer, so I'm asking here.


    ffmpeg -i thumb_test.mp4 -filter_complex "select='isnan(prev_selected_t)+gte(t-prev_selected_t\,5)',scale=120:-1,tile=layout=60x60" -vframes 1 -q:v 2 thumb.jpg


    The result of the command I used is as follows.

    [screenshot: the generated tile sheet, with a large unused black area]

    The remaining space stays black, which makes the file larger.
    To avoid this, I would like to size the tile layout to match the length of the video when extracting the thumbnails.
    Please let me know if there is a solution.


    Thank you.
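
    One possible direction (a sketch under assumptions, not tested against the asker's file): read the duration with ffprobe, work out how many frames the select filter will emit at one frame every 5 seconds, and size the tile grid just large enough to hold them. The column count of 10 is an arbitrary choice.

    #!/bin/sh
    # Hedged sketch: derive the tile layout from the video duration instead of a fixed 60x60 grid.
    IN=thumb_test.mp4
    STEP=5    # one thumbnail every 5 seconds, as in the original select expression
    COLS=10   # arbitrary number of columns for the sheet

    # Duration in whole seconds (ffprobe prints a float; cut keeps the integer part).
    DUR=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$IN" | cut -d. -f1)

    # Number of selected frames, then enough rows to hold them (ceiling division).
    FRAMES=$(( DUR / STEP + 1 ))
    ROWS=$(( (FRAMES + COLS - 1) / COLS ))

    ffmpeg -i "$IN" -filter_complex \
      "select='isnan(prev_selected_t)+gte(t-prev_selected_t\,$STEP)',scale=120:-1,tile=layout=${COLS}x${ROWS}" \
      -vframes 1 -q:v 2 thumb.jpg

    With a grid sized this way, at most part of one row of the sheet stays black.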
