Advanced search


Other articles (30)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • Customizing categories

    21 June 2013, by

    The category-creation form
    For those who know SPIP well, a category can be thought of as a rubrique (SPIP section).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration section that you can specify the (...)

On other sites (5898)

  • Image sequence to video: faster way to render "hold" frames in FFmpeg (without duplicating images)

    11 September 2014, by dprogramz

    I’m working on a project that converts a Flash movie with dynamic input to the Apple Animation codec.

    It’s to generate messaging, so there is some animation, then the message holds, then animation resumes, etc. The hold times are user variable.

    I’m first converting the SWF to PNGs by sending a bitstream via the ActionScript 3 PNGEncoder library, and using a PHP script to save the bitstream to the server as a series of temporary PNG files. This is working very fast.

    From Flash I’ll get an image sequence like this:

    frame000.png-frame014.png

    frame200.png-frame250.png

    frame350.png-frame400.png

    Frames 15-199 all hold the same image. To save bandwidth and time, I’m not sending bitstreams for the held frames; it would be redundant to send the same image file 185 times. Right now I am copying frame014.png with incrementing file numbers, and I have to do this for every hold in the sequence.

    After all the hold-frame gaps are filled, I run FFmpeg on the complete image sequence to generate the video.

    Is there any faster way to do this? The longer the hold (which the user controls), the longer it takes to render. I understand there will be some extra render time, but it’s a repeated image, and other conversion programs don’t take nearly as long. Or is my current approach the only way in this case?

    When rendering out of a program like After Effects, the Animation codec seems to adjust the frame rate dynamically for hold frames.
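One way to avoid duplicating files on disk: FFmpeg's concat demuxer accepts a text list with per-entry `duration` directives, so a held frame can be listed once with a long duration instead of being copied 185 times. A minimal sketch, assuming the frame names, the 30 fps rate, and the 185-frame hold described above:

```shell
# Build a concat-demuxer list where each image appears exactly once.
# Assumed inputs: frame000.png .. frame014.png exist; frames 15-199
# simply hold frame014 at 30 fps.
fps=30
frame_dur=$(awk "BEGIN { printf \"%.6f\", 1/$fps }")
hold_dur=$(awk "BEGIN { printf \"%.6f\", 185/$fps }")   # 185 held frames

{
  # frames 000-014 play normally, one frame duration each
  for i in $(seq -w 0 14); do
    printf "file 'frame0%s.png'\nduration %s\n" "$i" "$frame_dur"
  done
  # one entry covers the entire hold instead of 185 duplicated files
  printf "file 'frame014.png'\nduration %s\n" "$hold_dur"
} > frames.txt
```

The list can then drive the encode, e.g. `ffmpeg -f concat -safe 0 -i frames.txt -c:v qtrle out.mov` (qtrle is FFmpeg's encoder for the Apple Animation codec).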

    Thanks for the help and insight!

  • ffmpeg: combining files where the first has no audio mutes the entire output

    1 March 2017, by rodrigo-silveira

    I have multiple ts files that I need to combine before transcoding. Typically, I just sort the list of files, then concat them. Works great, in most cases.

    The problem

    The first file in the series of ts files has no audio. The second file does have audio (as do all of the subsequent files in this list). The output is a file with all of the video combined, but zero audio throughout.

    How can I combine multiple video files and preserve the audio of the files with audio?

    $: ffmpeg -i file-01.ts

    Input #0, mpegts, from 'file-01.ts':
     Duration: 00:03:18.63, start: 11.743000, bitrate: 2044 kb/s
     Program 1
       Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc


    $: ffmpeg -i file-02.ts

    Input #0, mpegts, from 'file-02.ts':
     Duration: 00:00:10.02, start: 23124.795000, bitrate: 2251 kb/s
     Program 1
       Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
       Stream #0:1[0x101](und): Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 155 kb/s


    $: cat file-01.ts file-02.ts > out-01.ts
    $: ffmpeg -i file-01.ts -i file-02.ts -c:v copy -c:a copy out-02.ts

    In the first case (cat [files] > out), out-01.ts has the videos in the right order but zero audio. In the second case (ffmpeg [-i files] out), I only get the video from the first file, with the audio from the second on top, and no video from the subsequent files. I’ve also tried that command with and without -c:a copy and -c:v copy.
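One common workaround, sketched here under the assumption that the remaining files share file-02's 48 kHz stereo AAC layout: give the audio-less first file a silent track before concatenating, so every segment presents the same set of streams to the concat demuxer.

```shell
# Add a silent AAC track to file-01.ts, matching file-02's audio
# (48 kHz stereo); -shortest trims the silence to the video's length.
ffmpeg -i file-01.ts -f lavfi \
       -i anullsrc=channel_layout=stereo:sample_rate=48000 \
       -c:v copy -c:a aac -shortest file-01-audio.ts

# Concatenate via the concat demuxer now that streams are consistent.
printf "file 'file-01-audio.ts'\nfile 'file-02.ts'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy out.ts
```

This is a sketch, not a verified fix: the silent track's sample rate and channel layout must match the real audio of the later segments, or the concatenated audio will still break.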

  • How to create transparent hevc / h265 mp4 video with libx265 / FFmpeg without hardware acceleration?

    1 October 2024, by Igor Stepin

    As a source, there is a series of transparent PNG files.

    It's clear how to create a transparent webm:

    ffmpeg -framerate 30 -i 'frame-%d.png' -f webm -vcodec libvpx-vp9 -b:v 0 -pix_fmt yuva420p my.webm

    It's also clear how to create a transparent mp4 using Apple hardware (the videotoolbox API):

    ffmpeg -framerate 30 -i 'frame-%d.png' -f mp4 -vcodec hevc_videotoolbox -allow_sw 1 -q:v 75 -alpha_quality 0.75 -tag:v hvc1 my.mp4

    It's unclear how to do this in a Linux-based Docker image without special hardware. In such cases libx265 is used quite often. The latest release, 4.0.0, supports transparency, but that support is not yet integrated into FFmpeg.

    Any hardware-independent solution for creating a transparent mp4 would solve this issue. It could involve ffmpeg, libx265, a webm-to-mp4 converter, or anything else.

    I tried to use libx265 4.0.0 as an external tool:

    ffmpeg -framerate 30 -i 'frame-%d.png' -pix_fmt yuva420p my.yuv
    x265 --alpha -o my.hevc --fps 30 --input-res 150x360 --input my.yuv
    ffmpeg -i my.hevc -c:v copy -pix_fmt yuva420p -tag:v hvc1 my.mp4

    The last command produces warnings and creates a file with yuv420p in its metadata, so it looks like alpha isn't supported this way.

    The yuv file itself is correct, since a webm can be created from it:

    ffmpeg -f rawvideo -vcodec rawvideo -s 150x360 -r 30 -pix_fmt yuva420p -i my.yuv -b:v 0 my2.webm

    So it's unclear how to combine the x265 and ffmpeg CLIs to produce an mp4, if that's possible at all. I expect that the x265 output can be encapsulated into mp4 somehow; I just don't know how.
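One avenue worth trying, offered as an assumption rather than a verified solution: GPAC's MP4Box can mux a raw HEVC bitstream into an mp4 container without re-encoding, which may carry over stream data that the ffmpeg remux appears to drop.

```shell
# Hypothetical alternative muxer: wrap the raw x265 output (my.hevc,
# produced by the x265 step above) in an mp4 container at 30 fps
# without touching the bitstream.
MP4Box -add my.hevc:fps=30 -new my.mp4
```

Whether the alpha layer survives depends on the muxer understanding x265 4.0.0's alpha signaling, which would need to be checked against the produced file's metadata.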