
Other articles (106)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, you must install all of the software dependencies on the server manually.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Customising by adding a logo, a banner or a background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two following images for a comparison.
    To use it, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (14613)

  • How to use FFMPEG on Python/Windows 10 with Pipe for Screen recording?

    20 September 2020, by Trmotta

    I'd like to record the screen with ffmpeg, as it seems to be the only tool out there that can record a region of the screen along with the mouse cursor.

    The following code was adapted from "i want to display mouse pointer in my recording", but it doesn't work on a Windows 10 (x64) setup (using Python 3.6).

    #!/usr/bin/env python3

# ffmpeg -y -pix_fmt bgr0 -f avfoundation -r 20 -t 10 -i 1 -vf scale=w=3840:h=2160 -f rawvideo /dev/null

import sys
import cv2
import time
import subprocess
import numpy as np

w,h = 100, 100

def ffmpegGrab():
    """Generator to read frames from ffmpeg subprocess"""

    #ffmpeg -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 640x480 -show_region 1 -i desktop output.mkv #CODE THAT ACTUALLY WORKS WITH FFMPEG CLI

    cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -show_region 1 -i desktop -f image2pipe, -pix_fmt bgr24 -vcodec rawvideo -an -sn' 

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    #out, err = proc.communicate()
    while True:
        frame = proc.stdout.read(w*h*3)
        yield np.frombuffer(frame, dtype=np.uint8).reshape((h,w,3))

# Get frame generator
gen = ffmpegGrab()

# Get start time
start = time.time()

# Read video frames from ffmpeg in loop
nFrames = 0
while True:
    # Read next frame from ffmpeg
    frame = next(gen)
    nFrames += 1

    cv2.imshow('screenshot', frame)

    if cv2.waitKey(1) == ord("q"):
        break

    fps = nFrames/(time.time()-start)
    print(f'FPS: {fps}')


cv2.destroyAllWindows()


    By using 'cmd' as stated above, I get the following error:

    b"ffmpeg version git-2020-08-31-4a11a6f Copyright (c) 2000-2020 the FFmpeg developers\r\n  built with gcc 10.2.1 (GCC) 20200805\r\n  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --enable-libsvtav1 --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf\r\n  libavutil      56. 58.100 / 56. 58.100\r\n  libavcodec     58.101.101 / 58.101.101\r\n  libavformat    58. 51.101 / 58. 51.101\r\n  libavdevice    58. 11.101 / 58. 11.101\r\n  libavfilter     7. 87.100 /  7. 87.100\r\n  libswscale      5.  8.100 /  5.  8.100\r\n  libswresample   3.  8.100 /  3.  8.100\r\n  libpostproc    55.  8.100 / 55.  8.100\r\nTrailing option(s) found in the command: may be ignored.\r\n[gdigrab @ 0000017ab857f100] Capturing whole desktop as 100x100x32 at (10,20)\r\nInput #0, gdigrab, from 'desktop':\r\n  Duration: N/A, start: 1599021857.538752, bitrate: 9612 kb/s\r\n    Stream #0:0: Video: bmp, bgra, 100x100, 9612 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc\r\n**At least one output file must be specified**\r\n"

    This is the content read from proc (and also from proc.communicate()). The program crashes right after, when trying to reshape this message into an image of size 100x100.

    I do not want an output file. I need to use a Python subprocess with a pipe so that the screen frames are delivered directly to my Python code, with no file I/O at all.
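
    For reference, here is a minimal sketch of such a pipe-only invocation (an assumption, not the poster's code: it presumes ffmpeg.exe is on the PATH and keeps the 100x100 region from above). The gdigrab options come before -i desktop, the raw-video output options come before an explicit output target, and pipe:1 (stdout) is that target, so no file is ever written.

import subprocess
import numpy as np

w, h = 100, 100
cmd = [
    "ffmpeg",                                    # assumed to be on the PATH
    "-f", "gdigrab", "-framerate", "30",
    "-offset_x", "10", "-offset_y", "20",
    "-video_size", f"{w}x{h}", "-show_region", "1",
    "-i", "desktop",
    "-pix_fmt", "bgr24", "-f", "rawvideo",       # raw BGR frames, no container
    "pipe:1",                                    # explicit output target: stdout
]
# keep stderr separate so ffmpeg's log text never mixes into the frame bytes
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
raw = proc.stdout.read(w * h * 3)                # exactly one 100x100 BGR frame
frame = np.frombuffer(raw, dtype=np.uint8).reshape((h, w, 3))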

    If I try the following:

    cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -i desktop -pix_fmt bgr24 -vcodec rawvideo -an -sn -f image2pipe'

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)

    Then 'frame', inside 'while True', is filled with b''.

    I tried the following libraries with no success, as I couldn't find how to capture the mouse cursor, or how to capture the screen at all: https://github.com/abhiTronix/vidgear, https://github.com/kkroening/ffmpeg-python

    What am I missing? Thank you.

  • How to pipe multiple images, being created in parallel with an index, to ffmpeg so that it can match the speed of image creation?

    23 September 2020, by vishwas.mittal

    We have a system that spews out 4-channel PNG images frame by frame (we control the output format of these images as well, so we can use something else as long as it supports transparency). Right now we wait for all the images and then encode them with ffmpeg into a WebM video file with VP8 (the libvpx encoder). But we now want to pipe these images to FFmpeg so that it encodes the WebM video while the images are being produced, instead of waiting for ffmpeg to encode them all afterwards.

    This is the current command, in Python syntax:

    ['/usr/bin/ffmpeg', '-hide_banner', '-y', '-loglevel', 'info', '-f', 'rawvideo', '-pix_fmt', 'bgra', '-s', '1573x900', '-framerate', '30', '-i', '-', '-i', 'audio.wav', '-c:v', 'libvpx', '-b:v', '0', '-crf', '30', '-tile-columns', '2', '-quality', 'good', '-speed', '4', '-threads', '16', '-auto-alt-ref', '0', '-g', '300000', '-map', '0:v:0', '-map', '1:a:0', '-shortest', 'video.webm']
# for ease of read:
# /usr/bin/ffmpeg -hide_banner -y -loglevel info -f rawvideo -pix_fmt bgra -s 1573x900 -framerate 30 -i - -i audio.wav -c:v libvpx -b:v 0 -crf 30 -tile-columns 2 -quality good -speed 4 -threads 16 -auto-alt-ref 0 -g 300000 -map 0:v:0 -map 1:a:0 -shortest video.webm

proc = subprocess.Popen(args, stdin=subprocess.PIPE)


    Here is a sample of how an image is passed to the FFmpeg proc's stdin:

# wait for the next frame to become available on disk
for frame_path in frame_path_list:
    while not os.path.exists(frame_path):
        time.sleep(0.25)
    # decode the 4-channel PNG (keeps the alpha channel)
    frame = cv2.imread(frame_path, cv2.IMREAD_UNCHANGED)

    # write the raw frame bytes to ffmpeg's stdin
    proc.stdin.write(frame.astype(np.uint8).tobytes())

    The current speed of this process is 0.135x, which is a huge bottleneck for us. Earlier, when we took the input as -pattern_type glob -i images/*.png, we got around 1x-1.2x on a single core. So our conclusion is that we're bottlenecked by stdin, and we're looking for ways to pass input through multiple sources or somehow help ffmpeg parallelize this effort. A few options we're thinking of:

    • Somehow feed it to different pipes and make ffmpeg read from them.
    • Append a new image to ffmpeg without re-encoding the whole video, but we didn't find a way to do this by giving input images directly.

    But we haven't been able to get either of these working, and we're open to any other solutions as well (one possible variant is sketched below). We'd really appreciate help with this. Thanks!
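
    As an illustration only, a hedged sketch of one such variant (not taken from the question, and not benchmarked): pipe the PNG files themselves to ffmpeg with -f image2pipe -c:v png, so the Python side only copies bytes and ffmpeg's own PNG decoder does the decoding. The paths, frame rate and encoder flags below are assumptions carried over from the command above.

import glob
import subprocess

frame_path_list = sorted(glob.glob("images/*.png"))   # hypothetical: the per-frame PNGs

cmd = [
    "/usr/bin/ffmpeg", "-hide_banner", "-y",
    "-f", "image2pipe", "-framerate", "30", "-c:v", "png", "-i", "-",  # PNG bytes arrive on stdin
    "-i", "audio.wav",
    "-c:v", "libvpx", "-b:v", "0", "-crf", "30",
    "-auto-alt-ref", "0",                              # carried over from the original command
    "-map", "0:v:0", "-map", "1:a:0", "-shortest",
    "video.webm",
]
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)

for frame_path in frame_path_list:
    with open(frame_path, "rb") as f:
        proc.stdin.write(f.read())                     # raw PNG bytes; no cv2 decode in Python

proc.stdin.close()
proc.wait()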

  • FFMpeg fails to detect input stream when outputting to pipe's stdout

    27 September 2020, by La bla bla

    We have H.264 frames as individual files; we read them in a Python wrapper and pipe them to ffmpeg.

    The ffmpeg subprocess is launched using:

        command = ["ffmpeg",
               "-hide_banner",
               "-vcodec", "h264",
               "-i", "pipe:0",
               "-video_size", "5120x3072",
               '-an', '-sn',  # we want to disable audio processing (there is no audio)
               '-pix_fmt', 'bgr24',
               "-vcodec", "rawvideo",
               '-f', 'image2pipe', '-']
    pipe = sp.Popen(command, stdin=sp.PIPE, stdout=sp.PIPE, bufsize=10 ** 8)


    Our goal is to use ffmpeg to convert the individual h264 frames into raw BGR data that we can manipulate using OpenCV.
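
    As a side note, assuming the pipe did produce data, turning one chunk of that raw BGR output into an OpenCV-compatible array would look roughly like this (a sketch only; width and height are the 5120x3072 values from the command above, and pipe is the sp.Popen object created there):

import numpy as np

width, height = 5120, 3072                          # matches -video_size above
raw_image = pipe.stdout.read(width * height * 3)    # one bgr24 frame's worth of bytes
frame = np.frombuffer(raw_image, dtype=np.uint8).reshape((height, width, 3))
# frame is now an ordinary height x width x 3 BGR array that OpenCV functions accept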

    The files are read in a background thread and piped using:

        ...
    for path in files:
        with open(path, "rb") as f:
            data = f.read()
            pipe.stdin.write(data)


    When we try to read ffmpeg's output pipe using:

    while True:
        # Capture frame-by-frame
        raw_image = pipe.stdout.read(width * height * 3)


    we get:

    [h264 @ 0x1c31000] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
pipe:0: could not find codec parameters
Input #0, h264, from 'pipe:0':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: h264, none, 25 tbr, 1200k tbn, 50 tbc
Output #0, image2pipe, to 'pipe:':
Output file #0 does not contain any stream


    However, when I change the sp.Popen command to be:

    f = open('ffmpeg_output.log', 'wt')
    pipe = sp.Popen(command, stdin=sp.PIPE, stdout=f, bufsize=10 ** 8)  # note: stdout is now the file f, not a pipe

    we get gibberish (i.e., binary data) in the ffmpeg_output.log file, and the console reads:

    [h264 @ 0xf20000] Stream #0: not enough frames to estimate rate; consider increasing probesize
[h264 @ 0xf20000] decoding for stream 0 failed
Input #0, h264, from 'pipe:0':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: h264 (Baseline), yuv420p, 5120x3072, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Output #0, image2pipe, to 'pipe:':
  Metadata:
    encoder         : Lavf56.40.101
    Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 5120x3072, q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
    Metadata:
      encoder         : Lavc56.60.100 rawvideo
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Invalid UE golomb code
    Last message repeated 89 times
Invalid UE golomb code
    Last message repeated 29 times
Invalid UE golomb code
    Last message repeated 29 times
Invalid UE golomb code
    Last message repeated 29 times
Invalid UE golomb code
    Last message repeated 29 times
Invalid UE golomb code
    Last message repeated 29 times
Invalid UE golomb code
    Last message repeated 29 times
Invalid UE golomb code
    Last message repeated 29 times
Invalid UE golomb code


    Why does ffmpeg care whether its stdout is a file or a pipe?