
Media (91)

Other articles (101)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)
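
    As a hedged illustration of the system Cron mentioned above (the hostname and schedule are assumptions, not the farm's actual configuration), a crontab entry on the central site only needs to fetch the site regularly so that the super Cron gets a chance to run:

    # Hypothetical system crontab entry on the farm's central site:
    # one fetch per minute provides the regular visit that lets the
    # super Cron run and, in turn, call each instance's Cron.
    * * * * * wget -q -O /dev/null https://central-site.example.org/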

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to MP4, Ogv and WebM (supported by HTML5), with MP4 also readable by Flash.
    Audio files are encoded to MP3 and Ogg (supported by HTML5), with MP3 also readable by Flash.
    Where possible, text documents are analyzed to extract the data needed for indexing by search engines, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
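
    As a hedged illustration of the kind of conversions described above (the teaser does not show MediaSPIP's actual encoding pipeline; file names and codec options below are assumptions), the equivalent ffmpeg invocations would look roughly like this:

    # Illustrative only: one video upload encoded to the three video targets
    ffmpeg -i upload.mov -c:v libx264 -c:a aac upload.mp4
    ffmpeg -i upload.mov -c:v libtheora -c:a libvorbis upload.ogv
    ffmpeg -i upload.mov -c:v libvpx -c:a libvorbis upload.webm
    # ...and one audio upload encoded to MP3 and Ogg
    ffmpeg -i voice.wav -c:a libmp3lame voice.mp3
    ffmpeg -i voice.wav -c:a libvorbis voice.ogg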

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with MediaSPIP's automated installation script:

    Distribution   Version name            Version number
    Debian         Squeeze                 6.x.x
    Debian         Wheezy                  7.x.x
    Debian         Jessie                  8.x.x
    Ubuntu         The Precise Pangolin    12.04 LTS
    Ubuntu         The Trusty Tahr         14.04

    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

On other sites (10138)

  • How to stop ffmpeg and continue batch running

    7 November 2015, by Wayne

    I have this small batch script. Its purpose is to save a small portion of our RTMP live stream:

    @echo off
    echo Live Stream Cropper v0.1
    echo **************************
    echo video codec : libx264
    echo audio codec : mp3
    echo **************************
    echo Recording...
    echo.

    for /f "tokens=2 delims==" %%I in ('wmic os get localdatetime /format:list') do set datetime=%%I

    set output=%datetime:~0,8%-%datetime:~8,4%

    ffmpeg -nostats -loglevel 0 -i rtmp://my.streaming.path/public/appname -acodec mp3 -vcodec libx264 "X:\PATH\%output%.avi"

    echo.
    echo "Recorded video is saved on X:\PATH\%output%.avi"
    pause

    The ffmpeg process works as expected. My problem is that after I press CTRL + C to stop ffmpeg, the echo command that shows where the recorded file was saved is not executed.

    How can I stop the ffmpeg process and then continue running the rest of the script?

    Thanks
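
    A hedged aside, not part of the original post: ffmpeg can be made to stop on its own so the batch keeps running, either by pressing q in its console instead of CTRL + C, or by capping the capture length with -t. A minimal sketch of the recording line, assuming a hypothetical one-hour limit:

    rem Assumed variant: record for at most one hour (-t 3600), after which
    rem the script falls through to the remaining echo/pause lines on its own.
    ffmpeg -nostats -loglevel 0 -i rtmp://my.streaming.path/public/appname -acodec mp3 -vcodec libx264 -t 3600 "X:\PATH\%output%.avi"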

  • Bash: loop ffmpeg command through sets of subfolders and direct it to files in the folders for processing

    7 December 2018, by Katman

    I am playing around with embedding captions into mp4 video files, and want to find a way to do this across large sets of directories containing the .mp4 and .srt files. Each pair of .mp4 and .srt files sits together in its own subdirectory, and the two share the same basename. Example:

    Video1
     Video1.mp4
     Video1.srt
    Video2
     Video2.mp4
     Video2.srt

    I've tried several things, but I'm a novice at this and only write very simple bash scripts for much more straightforward processes. For this I need to figure out how to write a bash script that runs an ffmpeg command in every subfolder, grabbing the mp4 and srt file and outputting a new mp4 of the merged data. The basic ffmpeg command to do this is:

    ffmpeg -i filename.mp4 -i filename.srt -c copy -c:s mov_text output.mp4

    I've tried to add:

    for dir in ./*/; do ffmpeg -i *.mp4 -i *.srt -c copy -c:s move_text "$file".mp4

    …and several variations of this, but ffmpeg always stops with a "*.mp4: No such file or directory" error. Then I tried to add "for file in..." after the "for dir in" statement but didn't have any positive results. The following is closest to what I need - it at least goes to each folder and processes the files - but it does them independently and doesn't combine the mp4 and srt source files as the ffmpeg command should. It outputs a video.mp4.mp4 and a video.srt.mp4, and fails to combine them in either case.

    for dir in ./**/*;
    do ffmpeg -i "$dir" -i "$dir" -c copy -c:s mov_text "$dir".mp4

    I tried "$dir".mp4 and "$dir".srt but that just results in an error. I tried to pull out just the directory names:

    for dir in ./**/*;
    do ffmpeg -i "$(basename $dir)" -i "$(basename $dir)" -c copy -c:s mov_text "$dir".mp4

    and my attempts using "$(basename $dir).extension" have resulted in errors - it looks for video.mp4.mp4 or video.srt.mp4. Any tips on what to add to get this process to work, or another approach entirely, would be greatly appreciated! I figure it's a simple bash thing I'm just ignorant of, but certainly need to learn how to do! Thanks!
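
    A hedged sketch of one way to do this (not from the post), assuming the layout shown above where each subdirectory holds one .mp4/.srt pair sharing the directory's basename; the _subbed output suffix is arbitrary:

    #!/usr/bin/env bash
    # For each subdirectory, pair name/name.mp4 with name/name.srt and mux
    # the subtitles in as a mov_text track, writing name/name_subbed.mp4.
    for dir in ./*/; do
        name=$(basename "$dir")
        if [[ -f "$dir$name.mp4" && -f "$dir$name.srt" ]]; then
            ffmpeg -i "$dir$name.mp4" -i "$dir$name.srt" \
                   -c copy -c:s mov_text "$dir${name}_subbed.mp4"
        fi
    done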

  • Is there a direct way to render/encode Vulkan output as an ffmpeg video file?

    13 September 2022, by Luiz Ventura

    I'm about to generate 2D and 3D music animations and render them to video using C++. I was thinking about using OpenGL, but I've read that, unfortunately, it is being discontinued in favour of Vulkan, which seems to offer higher performance using a GPU, but is also a lower-level API, making it more difficult to learn. I still have almost no knowledge of either OpenGL or Vulkan, and am only beginning to learn them now.

    My question is: is there a way to encode the Vulkan render output (with or without showing a window) into a video file, preferably through FFmpeg? If so, how could I do that?

    Requirements:

    • Speed: the performance cost should be roughly that of encoding the video only, not much more than that (e.g. not having to save lossless frames as images first and then encode a video from them).
    • Controllable FPS and resolution: the video frame rate and frame resolution can be freely chosen.
    • Reliability and reproducibility: running code that produces the same Vulkan output twice should result in two identical videos regardless of the system, i.e. no dropped frames or async problems (I want to sync with audio). The chosen video fps should stay fixed (e.g. 60 fps), no matter whether the computer can render 300 fps or 3 fps.

    What I found out so far:

    • An example of taking "screenshots" from Vulkan output: it writes a ppm image at the end, which is an uncompressed binary image file.
    • An encoder for rendering videos from OpenGL output, which is what I want, but using OpenGL in that case.
    • That Khronos includes a video subset in the Vulkan API.
    • A video tool to decode, demux and process videos using FFmpeg and Vulkan.
    • That it is possible to render the output into a buffer without needing a screen to display it.
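
    A hedged sketch of one common approach (not from the post): read each rendered frame back from a Vulkan buffer and pipe the raw pixels into ffmpeg on stdin, letting ffmpeg do the encoding at a fixed frame rate. The renderer name, resolution, pixel format and output file below are assumptions:

    # my_vulkan_renderer is hypothetical; it must write raw RGBA frames to
    # stdout, matching the -pix_fmt/-s/-r values declared to ffmpeg.
    ./my_vulkan_renderer --frames 3600 \
      | ffmpeg -f rawvideo -pix_fmt rgba -s 1920x1080 -r 60 -i - \
               -c:v libx264 -pix_fmt yuv420p output.mp4

    Because every frame is handed to ffmpeg explicitly and the rate is declared with -r, the output keeps a fixed 60 fps regardless of how fast the renderer actually runs, which matches the reproducibility requirement above.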