
Media (91)

Other articles (62)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to perform other manual (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you wish to use this archive for a "farm mode" installation, you will also need to make other modifications (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images below for a comparison.
    To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (6311)

  • How to overlay sequence of frames on video using ffmpeg-python ?

    19 November 2022, by Yogesh Yadav

    I tried the code below, but the output only shows the background video.

    


import ffmpeg

background_video = ffmpeg.input("input.mp4")
overlay_video = ffmpeg.input(f'{frames_folder}*.png', pattern_type='glob', framerate=25)
subprocess = ffmpeg.overlay(
    background_video,
    overlay_video,
).filter("setsar", sar=1)


    


    I also tried assembling the sequence of frames into a .webm/.mov video, but transparency is lost: the video uses black as the background.

    


    P.S. The frame size is the same as the background video's, so no scaling is needed.

    


    Edit

    


    I tried @Rotem's suggestions:

    


    


    Try using a single PNG image first

    


    


    overlay_video =  ffmpeg.input('test-frame.png')


    


    It's not working for frames generated by OpenCV, but it works for any other PNG image. This is weird: when I view the frames folder manually, it shows blank images (link to my frames folder).
But if I convert these frames into a video (see below), it correctly shows what I drew on each frame.

    


output_options = {
    'crf': 20,
    'preset': 'slower',
    'movflags': 'faststart',
    'pix_fmt': 'yuv420p'
}
ffmpeg.input(f'{frames_folder}*.png', pattern_type='glob', framerate=25, reinit_filter=0).output(
    'movie.avi',
    **output_options
).global_args('-report').run()


    


    


    Try creating a video from all the PNG images without overlay

    


    


    It works as expected; the only issue is transparency. Is there a way to create a video with a transparent background? I tried .webm/.mov/.avi, but no luck.
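On the transparency question: containers/codecs like .avi or .mp4 with yuv420p cannot carry an alpha channel, but a couple of combinations can. As a hedged sketch (paths and frame pattern are placeholders), qtrle (QuickTime Animation) in a .mov and libvpx-vp9 with yuva420p in a .webm both preserve alpha:

```python
# Build (without running) two ffmpeg command lines that keep transparency.
frames = "./frames/%05d.png"  # placeholder pattern for the PNG sequence

mov_cmd = [
    "ffmpeg", "-framerate", "25", "-i", frames,
    "-c:v", "qtrle",            # QuickTime Animation codec: supports alpha
    "alpha.mov",
]

webm_cmd = [
    "ffmpeg", "-framerate", "25", "-i", frames,
    "-c:v", "libvpx-vp9",
    "-pix_fmt", "yuva420p",     # the extra 'a' plane carries alpha
    "alpha.webm",
]

print(" ".join(mov_cmd))
print(" ".join(webm_cmd))
```

Either output can then be used as the overlay input, since the alpha plane survives the intermediate encode.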

    


    


    Add .global_args('-report') and check the log file

    


    


    Report written to "ffmpeg-20221119-110731.log"
Log level: 48
ffmpeg version 5.1 Copyright (c) 2000-2022 the FFmpeg developers
  built with Apple clang version 13.1.6 (clang-1316.0.21.2.5)
  configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/5.1 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-neon
  libavutil      57. 28.100 / 57. 28.100
  libavcodec     59. 37.100 / 59. 37.100
  libavformat    59. 27.100 / 59. 27.100
  libavdevice    59.  7.100 / 59.  7.100
  libavfilter     8. 44.100 /  8. 44.100
  libswscale      6.  7.100 /  6.  7.100
  libswresample   4.  7.100 /  4.  7.100
  libpostproc    56.  6.100 / 56.  6.100
Input #0, image2, from './frames/*.png':
  Duration: 00:00:05.00, start: 0.000000, bitrate: N/A
  Stream #0:0: Video: png, rgba(pc), 1920x1080, 25 fps, 25 tbr, 25 tbn
Codec AVOption crf (Select the quality for constant quality mode) specified for output file #0 (movie.avi) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
Codec AVOption preset (Configuration preset) specified for output file #0 (movie.avi) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
Stream mapping:
  Stream #0:0 -> #0:0 (png (native) -> mpeg4 (native))
Press [q] to stop, [?] for help
Output #0, avi, to 'movie.avi':
  Metadata:
    ISFT            : Lavf59.27.100
  Stream #0:0: Video: mpeg4 (FMP4 / 0x34504D46), yuv420p(tv, progressive), 1920x1080, q=2-31, 200 kb/s, 25 fps, 25 tbn
    Metadata:
      encoder         : Lavc59.37.100 mpeg4
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: N/A
frame=  125 fps= 85 q=31.0 Lsize=     491kB time=00:00:05.00 bitrate= 804.3kbits/s speed=3.39x    
video:482kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.772174%


    


    To draw the frames I used the code below.

    


for i in range(num_frames):
    transparent_img = np.zeros((height, width, 4), dtype=np.uint8)
    cv2.line(transparent_img, (x1, y1), (x2, y2), (255, 255, 255), thickness=1, lineType=cv2.LINE_AA)
    self.frames.append(transparent_img)


# To save each frame of the video in the given folder
for i, f in enumerate(frames):
    cv2.imwrite("{}/{:0{n}d}.png".format(path_to_frames, i, n=num_digits), f)





    


  • Stream RAW8 video from camera using openCV and Python [Windows]

    14 November 2022, by awin

    We have a camera that streams RAW8 video at 1920x1080. The GUID used is GREY. We are able to stream video from this camera using ffmpeg on Windows with the command below:

    


    ffmpeg -f dshow -pix_fmt gray -video_size 1920x1080 -i video="CAM0" -f nut - | ffplay -


    


    We are now trying to grab images from this camera with OpenCV, using the code snippet below, but it is unable to grab any frames (frame_grabbed is always False).

    


import cv2
import numpy as np

# Reading the video from CAM0
source = cv2.VideoCapture(1)

width = 1920
height = 1080

source.set(cv2.CAP_PROP_FRAME_WIDTH, width)
source.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

image = np.zeros([height, width, 3], np.uint8)

while True:
    # Extracting the frames
    frame_grabbed, image = source.read()

    if frame_grabbed:
        colour1 = cv2.cvtColor(image, cv2.COLOR_BayerRG2BGR)
        cv2.imshow("Demosaiced image", colour1)
    else:
        print("No images grabbed")

    # Exit on q
    key = cv2.waitKey(1)
    if key == ord("q"):
        break

# Closing the window
cv2.destroyAllWindows()
source.release()



    


    Are we missing something here?

    


    We then came across this post about piping ffmpeg output to Python (link). However, when we pass the command as below:

    


command = ['ffmpeg.exe',
           '-f', 'dshow',
           '-i', 'video="CAM0"',
           '-pix_fmt', 'gray',
           '-video_size', '1920x1080',
           '-f', 'nut', '-']


    


    it's throwing:

    


    


    Could not find video device with name ["CAM0"] among source devices
of type video. video="CAM0" : I/O error

    


    


    I have verified that the camera is present using the command below:

    


    command = [ 'ffmpeg.exe',
            '-list_devices', 'true',
            '-f', 'dshow',
            '-i', 'dummy']


    


    This detects CAM0, as shown below:

    


    ffmpeg version 5.0.1-full_build-www.gyan.dev Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 11.2.0 (Rev7, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab 
--enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
  libavutil      57. 17.100 / 57. 17.100
  libavcodec     59. 18.100 / 59. 18.100
  libavformat    59. 16.100 / 59. 16.100
  libavdevice    59.  4.100 / 59.  4.100
  libavfilter     8. 24.100 /  8. 24.100
  libswscale      6.  4.100 /  6.  4.100
  libswresample   4.  3.100 /  4.  3.100
  libpostproc    56.  3.100 / 56.  3.100
[dshow @ 000001ea39e40600] "HP HD Camera" (video)
[dshow @ 000001ea39e40600]   Alternative name "@device_pnp_\\?\usb#vid_04f2&pid_b6bf&mi_00#6&1737142c&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"
[dshow @ 000001ea39e40600] "CAM0" (video)
[dshow @ 000001ea39e40600]   Alternative name "@device_pnp_\\?\usb#vid_0400&pid_0011&mi_00#7&1affbd5b&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"


    


    In short, we are able to capture video using the ffmpeg command line, but unable to grab any frames using OpenCV VideoCapture or ffmpeg in OpenCV. Any pointers?
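One detail worth checking in the list-style command: when arguments are passed to subprocess as a list, each element reaches ffmpeg verbatim, so the shell-style quotes in 'video="CAM0"' become part of the device name, which would explain dshow reporting that it cannot find ["CAM0"]. A sketch with the quotes dropped, reading raw GREY frames from ffmpeg's stdout (the generator is illustrative; it is not run here since it needs the camera):

```python
import subprocess
import numpy as np

WIDTH, HEIGHT = 1920, 1080

# Note: in list form the device name must NOT carry embedded quotes.
COMMAND = [
    "ffmpeg",
    "-f", "dshow",
    "-pix_fmt", "gray",
    "-video_size", f"{WIDTH}x{HEIGHT}",
    "-i", "video=CAM0",
    "-f", "rawvideo", "-pix_fmt", "gray", "-",
]

def read_frames():
    """Yield GREY frames from ffmpeg's stdout as (HEIGHT, WIDTH) uint8 arrays."""
    proc = subprocess.Popen(COMMAND, stdout=subprocess.PIPE)
    frame_size = WIDTH * HEIGHT  # one byte per pixel for 8-bit grey
    while True:
        raw = proc.stdout.read(frame_size)
        if len(raw) < frame_size:
            break
        yield np.frombuffer(raw, np.uint8).reshape(HEIGHT, WIDTH)
```

Requesting -f rawvideo instead of nut also sidesteps container parsing on the Python side: each read is exactly one frame.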

    


    Thanks!

    


  • Combining PNG and MP3: why doesn't using -loop 1 on the image and -shortest cut the output length when the audio ends?

    9 November 2022, by Alexander

    I have several image and audio files, and I want to combine each pair into a separate video file where the image is stretched to the audio file's length. For example, a lion image should be displayed for as long as the lion sound is playing. The output file should end when the lion audio ends.

    


    I also try to normalize the output clips' resolution, since the image files don't all have the same width/height.

    


    Here is the command I'm currently running. The command is on one line, but I added some line breaks here for readability.

    


    -loop 1 
-i "C:\Temp\screenshot\clip_1.png"
-i "C:\Temp\audio\clip_1.mp3" 
-shortest 
-filter_complex "[0:v]scale=3000:-2:force_original_aspect_ratio=decrease" 
-c:v h264_nvenc 
-c:a aac 
-b:a 192k 
-y 
C:\Temp\clips\clip_1.mp4


    


    To my understanding, -loop 1 before the image input should make it loop indefinitely, and -shortest should end the video when the shortest input (the audio, in this case) ends. In the following example the audio file is 6 seconds long, but the output is always 20 seconds.
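A known quirk that may apply here (a commonly suggested workaround, not a guaranteed fix for this case): -shortest acts at the muxing stage, and with -loop 1 the muxer can buffer many video packets ahead of the audio before the cut-off is noticed. Two frequent mitigations are to add -fflags +shortest together with -max_interleave_delta to cap that buffering, or to drop -loop/-shortest and set the duration explicitly with -t. A sketch of the first, mirroring the command from the question as an argument list:

```python
# Paths are the ones from the question; the last three option pairs before
# '-y' are the commonly suggested additions.
cmd = [
    "ffmpeg",
    "-loop", "1",
    "-i", r"C:\Temp\screenshot\clip_1.png",
    "-i", r"C:\Temp\audio\clip_1.mp3",
    "-filter_complex", "[0:v]scale=3000:-2:force_original_aspect_ratio=decrease",
    "-c:v", "h264_nvenc",
    "-c:a", "aac",
    "-b:a", "192k",
    "-shortest",
    "-fflags", "+shortest",           # let the format layer stop early too
    "-max_interleave_delta", "100M",  # cap muxer-side packet buffering
    "-y", r"C:\Temp\clips\clip_1.mp4",
]
print(" ".join(cmd))
```

If the output still overshoots, replacing -shortest with an explicit "-t" set to the audio duration is the blunt but reliable fallback.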

    


    Here is the full output of ffmpeg:

    


    ffmpeg version 2022-11-03-git-5ccd4d3060-full_build-www.gyan.dev Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 12.1.0 (Rev2, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enabl
e-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --ena
ble-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --e
nable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
  libavutil      57. 40.100 / 57. 40.100
  libavcodec     59. 51.101 / 59. 51.101
  libavformat    59. 34.101 / 59. 34.101
  libavdevice    59.  8.101 / 59.  8.101
  libavfilter     8. 49.101 /  8. 49.101
  libswscale      6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc    56.  7.100 / 56.  7.100
Input #0, png_pipe, from 'C:\Temp\screenshot\clip_1.png':
  Duration: N/A, bitrate: N/A
  Stream #0:0: Video: png, rgba(pc), 640x148, 25 fps, 25 tbr, 25 tbn
[mp3 @ 0000016c26784a80] Estimating duration from bitrate, this may be inaccurate
Input #1, mp3, from 'C:\Temp\audio\clip_1.mp3':
  Duration: 00:00:06.10, start: 0.000000, bitrate: 32 kb/s
  Stream #1:0: Audio: mp3, 24000 Hz, mono, fltp, 32 kb/s
Stream mapping:
  Stream #0:0 (png) -> scale:default (graph 0)
  scale:default (graph 0) -> Stream #0:0 (h264_nvenc)
  Stream #1:0 -> #0:1 (mp3 (mp3float) -> aac (native))
Press [q] to stop, [?] for help
[aac @ 0000016c267ab180] Too many bits 8192.000000 > 6144 per frame requested, clamping to max
Output #0, mp4, to 'C:\Temp\clips\clip_1.mp4':
  Metadata:
    encoder         : Lavf59.34.101
  Stream #0:0: Video: h264 (Main) (avc1 / 0x31637661), rgba(pc, gbr/unknown/unknown, progressive), 3000x694, q=2-31, 2000 kb/s, 25 fps, 12800 tbn
    Metadata:
      encoder         : Lavc59.51.101 h264_nvenc
    Side data:
      cpb: bitrate max/min/avg: 0/0/2000000 buffer size: 4000000 vbv_delay: N/A
  Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 24000 Hz, mono, fltp, 144 kb/s
    Metadata:
      encoder         : Lavc59.51.101 aac
frame=  504 fps=149 q=11.0 Lsize=     925kB time=00:00:20.00 bitrate= 378.8kbits/s speed=5.91x
video:852kB audio:67kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.645776%
[aac @ 0000016c267ab180] Qavg: 65496.695