Media (91)

Other articles (101)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to perform other manual (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)

  • Improvement of the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select fields. See the following two images for a comparison.
    To do so, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)

On other sites (11516)

  • Python: What's the fastest way to load a video file into memory?

    8 July 2017, by michaelh

    First some background

    I am trying to write my own set of tools for video analysis, mainly for detecting render errors like flashing frames and possibly some other stuff in the future.

    The (obvious) goal is to write a script that is faster and more accurate than watching the file in real time.

    Using OpenCV, I have something that looks like this:

    import cv2


    vid = cv2.VideoCapture("Video/OpenCV_Testfile.mov", cv2.CAP_FFMPEG)
    width = 1024
    height = 576
    length = int(vid.get(cv2.CAP_PROP_FRAME_COUNT))  # the property is returned as a float

    for f in range(length):
       blue_values = []
       vid.set(cv2.CAP_PROP_POS_FRAMES, f)
       is_read, frame = vid.read()
       if is_read:
           for row in range(height):
               for col in range(width):
                   blue_values.append(frame[row][col][0])
       print(blue_values)
    vid.release()

    This just prints out a list of all the blue values of every frame
    - just for simplicity (my actual script compares a few values across each frame and only saves the frame number when all are equal).

    Although this works, it is not a very fast operation (nested loops, but most importantly, the read() method has to be called for every frame, which is rather slow).
    I tried to use multiprocessing but basically ended up with the same crashes as described here:

    how to get frames from video in parallel using cv2 & multiprocessing in python

    I have a 20 s long 1024x576@25fps test file which performs as follows:

    • mov, ProRes : 15s
    • mp4, h.264 : 30s (too slow)

    My machine is capable of playing back h.264 at 1920x1080@50fps with mplayer (which uses ffmpeg to decode), so I should be able to get more out of this. Which leads me to

    my question

    How can I decode a video and simply dump all pixel values into a list for further (possibly multithreaded) operations? Speed is really all that matters. Note: I'm not fixated on OpenCV. Whatever works best.

    Thanks!
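    One detail worth noting about the code above: OpenCV hands back each decoded frame as a NumPy array, so (as a sketch, independent of which decoder ends up being used) the blue channel of a whole frame can be extracted with a single vectorized slice instead of the two nested per-pixel Python loops. The `blue_channel` helper name and the synthetic frame are illustrative, not from the original code.

    ```python
    import numpy as np

    def blue_channel(frame):
        # Sketch: assumes `frame` is an HxWx3 uint8 array in OpenCV's
        # default BGR channel order, so channel 0 is blue.  One NumPy
        # slice replaces the nested per-pixel Python loops.
        return frame[:, :, 0].ravel()

    # Hypothetical usage with a synthetic 576x1024 frame:
    frame = np.zeros((576, 1024, 3), dtype=np.uint8)
    frame[:, :, 0] = 42                   # fill the blue channel
    blue_values = blue_channel(frame)     # flat array of 576*1024 values
    ```

    On a real frame this turns the innermost million-iteration Python loop into one C-level copy, which usually matters far more than the choice of decoder.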

  • How to re-encode to rgb pixel_format properly in ffmpeg

    1 February 2020, by captain_majid

    I’m using this command to record from multiple inputs:

    ffmpeg -y
    -f dshow -rtbufsize 1024M -thread_queue_size 1024 -probesize 64M -i video="screen-capture-recorder" -framerate 30
    -f dshow -rtbufsize 16M -i audio="virtual-audio-capturer"
    -f dshow -rtbufsize 16M -i audio="Microphone (DroidCam Virtual Audio)"
    -f dshow -rtbufsize 512M -thread_queue_size 512 -probesize 50M -i video="DroidCam Source 3"
    -stream_loop -1 -i ".\media\background sounds\blue.mp4"
    -i ".\media\pictures\Webcam Overlay\blue_.png"
    -i ".\media\pictures\Webcam Overlay\red.png"
    -f gdigrab -rtbufsize 512M -thread_queue_size 512 -probesize 64M -itsoffset 0.80 -i title="NohBoard v1.2.2" -framerate 60 -draw_mouse 0

    -filter_complex "
    [0:v] scale=1366x768 [desktop];
    [3:v] hue=s=-5, scale=240x160 [webcam];
    [desktop][webcam] overlay=x=W-w-285:y=H-h-7:format=rgb [deskCam];
    [4:v] format=rgba,colorchannelmixer=aa=0.5, scale=240x160 [vid];
    [deskCam][vid] overlay=x=W-w-5:y=H-h-245:format=rgb [deskCamVid];
    [deskCamVid][5:v] overlay=x=W-w-280:y=H-h-0:format=rgb [deskCamVidBlue];
    [deskCamVidBlue][6:v] overlay=x=W-w-0:y=H-h-238:format=rgb [deskCamVidBlueRed];
    [7:v] chromakey=0x00FF00:similarity=.200, scale=420x140 [kb];
    [deskCamVidBlueRed][kb] overlay=x=W-w-945:y=H-h-285:format=rgb [final];
    [1][2] amix [aud1]; [1][2][4] amix=inputs=3 [aud2]"
    -map "[final]" -map "[aud1]" -map "[aud2]" -metadata:s:a:0 title="No Music" -metadata:s:a:1 title="All sounds" out.mkv

    The problem is that the colors are not as bright as I want unless I add :format=rgb to all overlays, as seen above, but this slows my encoding down a lot; also, when I press 'Q', only a small part of the video (like 1 minute of a 3-minute recording) is produced.

    Also, if you see any unnecessary or non-optimal switches, please advise.

  • running ffmpeg from Popen inside (twisted) timer.LoopingCall() stalls

    14 February 2014, by user1913115

    I have an RTSP stream which I need to re-stream as HLS. When the RTSP stream goes down (e.g. the camera disconnects), I put up a blue screen to let the user know that the camera went offline. The HLS segmenter runs separately, listening on port 22200 for incoming packets.

    In Python, the code essentially boils down to this:

    import psutil
    import subprocess as sb
    from twisted.internet import reactor, task
    from cameraControls import camStatus, camURL

    ffOn = False
    psRef = None

    def monitor():
        global ffOn, psRef
        print "TIMER TICK"
        if camStatus() == 'ON' and not ffOn:  # camera just came online
            cmd = ["ffmpeg", "-i", camURL, "-codec", "copy",
                   "-f", "mpegts", "udp://127.0.0.1:22200"]
            psRef = sb.Popen(cmd, stderr=sb.PIPE)
            ffOn = True
        # check the stream:
        if psRef is not None:
            psmon = psutil.Process(psRef.pid)
            if psmon.status != psutil.STATUS_RUNNING:
                print "FFMPEG STOPPED"

    tmr = task.LoopingCall(monitor)
    tmr.start(2)
    reactor.run()

    It works fine for 5-6 minutes; then I see the video stall, and if I check the CPU usage of ffmpeg it shows 0, and the ffmpeg output doesn't change, as if paused. However, psmon.status shows it as running, and the timer is still going (I see the "TIMER TICK" message pop up every 2 seconds in the command line).

    If I simply run the ffmpeg command from the command line (not from Python), it works for hours with no problem.

    Does anybody know if the twisted reactor is stalling the process? Or is it an issue with subprocess.Popen itself? Or is the timer itself somehow glitching (even though it does get into the monitor function)? I have other timers running in the same reactor (same thread); could that be an issue?
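    One thing worth checking in the snippet above, offered as a guess rather than a confirmed diagnosis: stderr=sb.PIPE creates a pipe that nothing ever reads. ffmpeg writes progress output to stderr continuously, and once the OS pipe buffer (typically around 64 KB) fills up, the child blocks on its next write, which would match the symptom of ffmpeg "pausing" after a few minutes while the parent's timer keeps ticking. A minimal sketch of the workaround, using modern-Python syntax and a hypothetical spawn_quiet helper name:

    ```python
    import subprocess
    import sys

    def spawn_quiet(cmd):
        # Discard the child's stderr instead of attaching an unread PIPE.
        # An unread stderr=PIPE fills the OS pipe buffer and then blocks
        # the child's writes; DEVNULL (or a separate thread that drains
        # the pipe) avoids that.
        return subprocess.Popen(cmd, stderr=subprocess.DEVNULL)

    # Hypothetical usage: a child that writes far more than one pipe
    # buffer's worth of stderr still runs to completion.
    child = spawn_quiet([sys.executable, "-c",
                         "import sys\n"
                         "for _ in range(5000):\n"
                         "    sys.stderr.write('x' * 80 + '\\n')"])
    rc = child.wait()
    ```

    With stderr=subprocess.PIPE and no reader, the same child would block once its output exceeded the pipe buffer; with DEVNULL it exits normally.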