Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (78)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image;

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.3. What's new?
    Software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer (...)

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, bringing it to new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    At the moment MediaSPIP is only available in French and (...)

On other sites (12394)

  • How can I use both vaapi acceleration and video overlays with ffmpeg

    10 November 2019, by nrdxp

    I am fairly new to ffmpeg and I am trying to capture both my webcam and my screen, all with vaapi acceleration (without it, it's too slow). I want to overlay the webcam in the bottom right corner using ffmpeg. I need to use kmsgrab so I can record a Wayland session on Linux.

    What I am doing to work around this is simply opening the webcam in a window using the sdl backend, and then calling another instance of ffmpeg to record the screen. This isn't ideal, however, since the window with the webcam gets covered up by other windows in fullscreen or when switching workspaces. I would much rather encode the webcam on top of the screencast so it is always visible, no matter what I am doing.

    Here is the workaround script I am using right now:

    #!/usr/bin/env zsh

    # record webcam and open it in sdl window
    ffmpeg -v quiet -hide_banner -re -video_size 640x480 -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -i /dev/video0 -vf 'format=nv12,hwupload' -c:v hevc_vaapi -f hevc -threads $(nproc) - | ffmpeg -v quiet -i - -f sdl2 - &

    # wait for webcam window to open
    until swaymsg -t get_tree | grep 'pipe:' &>/dev/null; do
     sleep 0.5
    done

    # position webcam in the bottom right corner of screen using sway
    swaymsg floating enable
    swaymsg resize set width 320 height 240
    swaymsg move position 1580 795
    swaymsg focus tiling

    #screencast
    ffmpeg -format bgra -framerate 60 -f kmsgrab -thread_queue_size 1024 -i - \
     -f alsa -ac 2 -thread_queue_size 1024 -i hw:0 \
     -vf 'hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' \
     -c:v h264_vaapi -g 120 -b:v 3M -maxrate 3M -pix_fmt vaapi_vld -c:a aac -ab 96k -threads $(nproc) \
     output.mkv

    kill %1

    So far, I've tried adding the webcam as a second input to the screencast and using:

    -filter_complex '[1] scale=w=320:h=240,hwupload,format=nv12 [tmp]; \
    [0][tmp] overlay=x=1580:y=795,hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' \

    But I get the error:

    Impossible to convert between the formats supported by the filter 'Parsed_hwupload_1' and the filter 'auto_scaler_0'
    Error reinitializing filters!
    Failed to inject frame into filter network: Function not implemented
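For reference, here is an untested sketch of what a single-process version might look like: map the kmsgrab frames to VAAPI, upload the webcam frames, and composite on the GPU with the `overlay_vaapi` filter (assuming a build recent enough to include it; the sizes and offsets mirror the script above). Note that getting both inputs onto the same VAAPI context can additionally require `-filter_hw_device` juggling depending on the ffmpeg version:

```shell
#!/usr/bin/env zsh
# Untested sketch - one ffmpeg process, overlay done entirely in hardware.
ffmpeg -vaapi_device /dev/dri/renderD128 \
  -framerate 60 -f kmsgrab -thread_queue_size 1024 -i - \
  -video_size 640x480 -i /dev/video0 \
  -f alsa -ac 2 -thread_queue_size 1024 -i hw:0 \
  -filter_complex '[0]hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12[bg];[1]format=nv12,hwupload,scale_vaapi=w=320:h=240[cam];[bg][cam]overlay_vaapi=x=1580:y=795' \
  -c:v h264_vaapi -g 120 -b:v 3M -maxrate 3M -c:a aac -b:a 96k \
  output.mkv
```

The key difference from the failing `-filter_complex` above is the order of operations on the webcam branch: `format=nv12` must come before `hwupload`, and the overlay itself runs as `overlay_vaapi` on hardware frames rather than as the software `overlay` filter sandwiched between hardware stages.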
  • batch store %%f as variable

    22 June 2017, by RyanMe321

    I've been trying to use %%f to run a script on each file in a directory; however, when referenced through the script, %%f and %%~nf don't seem to be working. I have limited programming experience, and I'm trying to make a more useful version of this video formatting tutorial.

    So I'd like to store %%f and %%~nf as variables to reference for the rest of the script, though I can't work out how.

    @echo off
    set /p FORMAT="Enter File Format: "
    FOR %%f IN (*.%FORMAT%) DO  echo %%f
    set TEST=%%f
    echo %TEST%
    cmd /k
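The likely culprit in the snippet above: `%%f` only has a value inside the `FOR` body, and `%TEST%` is expanded once, when the line is parsed, not on each iteration. A hedged sketch of the usual fix, using delayed expansion (assuming the goal is simply to reuse the name inside the loop):

```batch
@echo off
setlocal enabledelayedexpansion
set /p FORMAT="Enter File Format: "
FOR %%f IN (*.%FORMAT%) DO (
    rem %%f only exists inside this body; copy it into ordinary variables here
    set "FILE=%%f"
    set "NAME=%%~nf"
    rem !VAR! expands per iteration; %VAR% would show the stale parse-time value
    echo !FILE! has base name !NAME!
)
endlocal
```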

    If I could store these, it would resolve my issue. Below is the longer form of what I'm trying to do; this script works if I have the user enter the file manually into a variable (%VIDEO% in place of %%f and %%~nf), though this is far from ideal.

    @echo off
    set /p FORMAT="Enter File Format: "

    FOR %%f IN (*.%FORMAT%) DO (
    ::IDFILE
    for /F "delims=" %%I in ('C:\ffmpeg\bin\ffprobe.exe -v error -show_entries format^=filename -of default^=noprint_wrappers^=1:nokey^=1 "%%f"') do set "FILENAME=%%I"

    for /F "delims=" %%I in ('C:\ffmpeg\bin\ffprobe.exe -v error -select_streams v:0 -show_entries stream^=codec_name -of default^=noprint_wrappers^=1:nokey^=1 "%%f"') do set "Vcodec=%%I"

    for /F "delims=" %%I in ('C:\ffmpeg\bin\ffprobe.exe -v error -select_streams a:0 -show_entries stream^=codec_name -of default^=noprint_wrappers^=1:nokey^=1 "%%f"') do set "Acodec=%%I"

    echo %FILENAME% is using %Vcodec% and %Acodec% codecs

    if %Vcodec% == h264 (echo DO NOT CONVERT VIDEO) else (echo CONVERT VIDEO)
    if %Acodec% == ac3 (echo DO NOT CONVERT AUDIO) else (echo CONVERT AUDIO)
    timeout /t 5

    :: COPY V FIX A
    if %Vcodec% == h264 if not %Acodec% == ac3 (echo Copying Video, Converting Audio
    timeout /t 5
    C:\ffmpeg\bin\ffmpeg.exe -i "%%f" -map 0 -vcodec copy -scodec copy -acodec ac3 -b:a 640K "%%~nf"-AC3.mkv)

    :: FIX V COPY A
    if not %Vcodec% == h264 if  %Acodec% == ac3 (echo Converting Video, Copying Audio
    timeout /t 5
    C:\ffmpeg\bin\ffmpeg.exe -i "%%f" -map 0 -vcodec libx264 -scodec copy -acodec copy "%%~nf"-h264.mkv)

    :: FIX V FIX A
    if not %Vcodec% == h264 if not %Acodec% == ac3 (echo Converting Video, Converting Audio
    timeout /t 5
    C:\ffmpeg\bin\ffmpeg.exe -i "%%f" -map 0 -vcodec libx264 -scodec copy -acodec ac3 -b:a 640K "%%~nf"-h264-AC3.mkv)

    :: COPY V COPY A
    if %Vcodec% == h264 if %Acodec% == ac3 (echo "Doesn't require any Conversion")
    )
    pause
    cmd /k
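An alternative sketch that sidesteps delayed expansion entirely: have the `FOR` loop `call` a subroutine once per file, so `%%f` arrives as `%1` and every `%VAR%` inside the subroutine is expanded fresh on each call. (Also worth noting: `::`-style comments inside a parenthesized block, as in the script above, can confuse `cmd`; `rem` is safer there.) The echo body stands in for the ffprobe/ffmpeg logic:

```batch
@echo off
set /p FORMAT="Enter File Format: "
FOR %%f IN (*.%FORMAT%) DO call :process "%%f"
pause
goto :eof

:process
rem %~1 is the filename, %~n1 the base name, available for the whole routine
set "FILE=%~1"
set "NAME=%~n1"
echo Processing %FILE% (base name: %NAME%)
rem ...run ffprobe here, set Vcodec/Acodec, and branch on them as above...
goto :eof
```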
  • How to pipe live video frames from ffmpeg to PIL?

    30 January 2017, by Ryan Martin

    I need to use ffmpeg/avconv to pipe jpg frames to a Python PIL (Pillow) Image object, using gst as an intermediary*. I've been searching everywhere for this answer without much luck. I think I'm close, but I'm stuck. I'm using Python 2.7.

    My ideal pipeline, launched from python, looks like this:

    1. ffmpeg/avconv (as h264 video)
    2. Piped ->
    3. gst-streamer (frames split into jpg)
    4. Piped ->
    5. Pil Image Object

    I have the first few steps under control as a single command that writes .jpgs to disk as furiously fast as the hardware will allow.

    That command looks something like this:

    command = [
           "ffmpeg",
           "-f video4linux2",
           "-r 30",
           "-video_size 1280x720",
           "-pixel_format 'uyvy422'",
           "-i /dev/video0",
           "-vf fps=30",
           "-f H264",
           "-vcodec libx264",
           "-preset ultrafast",
           "pipe:1 -",
           "|", # Pipe to GST
           "gst-launch-1.0 fdsrc !",
           "video/x-h264,framerate=30/1,stream-format=byte-stream !",
           "decodebin ! videorate ! video/x-raw,framerate=30/1 !",
           "videoconvert !",
           "jpegenc quality=55 !",
           "multifilesink location=" + Utils.live_sync_path + "live_%04d.jpg"
         ]

    This will successfully write frames to disk if run with popen or os.system.
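One caveat about that command list: the `"|"` element is shell syntax, so it only works when a shell is involved (`os.system`, or `Popen` with `shell=True`); passed to `Popen` as a plain argument list, ffmpeg receives `|` as a literal argument. The two stages can instead be chained explicitly. A generic sketch (the `chain` helper is my own name, not a stdlib function):

```python
import subprocess as sp

def chain(first_cmd, second_cmd):
    """Run first_cmd | second_cmd without a shell; return second_cmd's stdout."""
    p1 = sp.Popen(first_cmd, stdout=sp.PIPE)
    p2 = sp.Popen(second_cmd, stdin=p1.stdout, stdout=sp.PIPE)
    p1.stdout.close()  # so p1 sees a broken pipe if p2 exits first
    out, _ = p2.communicate()
    p1.wait()
    return out

# e.g. chain(ffmpeg_cmd, gst_cmd), each a list of separate argument tokens
```

Note that each token must be its own list element ("-f", "video4linux2", ...); strings like "-f video4linux2" would be passed to ffmpeg as single arguments and rejected.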

    But instead of writing frames to disk, I want to capture the output in my subprocess pipe and read the frames, as they are written, in a file-like buffer that can then be read by PIL.

    Something like this:

       import subprocess as sp
       import shlex
       import StringIO

       clean_cmd = shlex.split(" ".join(command))
       pipe = sp.Popen(clean_cmd, stdout = sp.PIPE, bufsize=10**8)

       while pipe:

           raw = pipe.stdout.read()
           buff = StringIO.StringIO()
           buff.write(raw)
           buff.seek(0)

           # Open or do something clever...
           im = Image.open(buff)
           im.show()

           pipe.flush()

    This code doesn't work - I'm not even sure I can use "while pipe" in this way. I'm fairly new to working with buffers and pipes.

    I’m not sure how I would know that an image has been written to the pipe or when to read the ’next’ image.

    Any help would be greatly appreciated in understanding how to read the images from a pipe rather than to disk.

    • This is ultimately a Raspberry Pi 3 pipeline, and in order to increase my frame rates I can't (A) read/write to/from disk or (B) use a frame-by-frame capture method - as opposed to running H264 video directly from the camera chip.
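On the reading side, one common approach (sketched below in Python 3 syntax, untested against a live camera) is to have the pipeline emit MJPEG - e.g. ending the ffmpeg command with `-f mjpeg pipe:1`, or keeping gst's `jpegenc` but replacing `multifilesink` with `fdsink` - so that every frame is a complete, self-delimiting JPEG. Because JPEG byte-stuffs `0xFF` inside entropy-coded data, the end-of-image marker `FF D9` cannot occur mid-frame in the plain JPEGs these encoders produce, so the stream can be split on markers:

```python
import io

SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def iter_jpeg_frames(stream, chunk_size=4096):
    """Yield one complete JPEG (as bytes) at a time from a file-like byte stream."""
    buf = b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            return
        buf += chunk
        while True:
            start = buf.find(SOI)
            if start < 0:
                # no frame start yet; keep a trailing 0xFF in case the
                # SOI marker was split across two reads
                buf = buf[-1:] if buf.endswith(b"\xff") else b""
                break
            end = buf.find(EOI, start + 2)
            if end < 0:
                buf = buf[start:]  # frame started but not finished yet
                break
            yield buf[start:end + 2]
            buf = buf[end + 2:]

# Usage sketch (command names assumed, not verified):
#   pipe = sp.Popen(mjpeg_cmd, stdout=sp.PIPE)
#   for frame in iter_jpeg_frames(pipe.stdout):
#       im = Image.open(io.BytesIO(frame))  # PIL decodes one full JPEG
```

This also answers the "when do I read the next image" question: each yield is exactly one finished frame, so `Image.open` is only ever handed complete JPEG data.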