
Media (91)

Other articles (63)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Customizable form

    21 June 2013, by

    This page presents the fields available in the publication form for a media item and lists the additional fields that can be added. Media creation form
    For a media-type document, the default fields are: Text, Enable/Disable the forum (the invitation to comment can be disabled for each article), Licence, Add/remove authors, Tags
    This form can be modified under:
    Administration > Configuration des masques de formulaire. (...)

  • The farm's recurring Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the shared-hosting farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)

On other sites (9695)

  • VLC cannot play RTSP stream while mpv and ffplay can

    15 November 2018, by Harshil Makwana

    I developed an ffserver-based RTSP server that streams live video. I can stream and display the video in mpv and ffplay, but in the VLC player I can only see the first image; it won't show any further frames.

    Here are the logs of the VLC -vvv output:

    [0x7846e8] main playlist debug: art not found for rtsp://127.0.0.1:1234/test.h264
    Received 92 new bytes of response data.
    Received a complete PLAY response:
    RTSP/1.0 200 OK
    CSeq: 5
    Date: Thu, 15 Nov 2018 04:13:15 GMT
    Session: 522d937eb678c50a


    [0x7f9684000e38] live555 demux debug: play start: 0.000000 stop:0.000000
    [0x7f9684000e38] main demux debug: using access_demux module "live555"
    [0x7f96840246b8] main decoder debug: looking for decoder module matching "any": 39 candidates
    [0x7f96840246b8] avcodec decoder debug: trying to use direct rendering
    [0x7f96840246b8] avcodec decoder debug: allowing 3 thread(s) for decoding
    [0x7f96840246b8] avcodec decoder debug: avcodec codec (H264 - MPEG-4 AVC (part 10)) started
    [0x7f96840246b8] avcodec decoder debug: using frame thread mode with 3 threads
    [0x7f96840246b8] main decoder debug: using decoder module "avcodec"
    [0x7f9684034f68] main packetizer debug: looking for packetizer module matching "any": 21 candidates
    [0x7f9684034f68] main packetizer debug: using packetizer module "packetizer_h264"
    [0x7f9684150308] main demux meta debug: looking for meta reader module matching "any": 2 candidates
    [0x7f9684150308] lua demux meta debug: Trying Lua scripts in /home/hashmak/.local/share/vlc/lua/meta/reader
    [0x7f9684150308] lua demux meta debug: Trying Lua scripts in /usr/lib/vlc/lua/meta/reader
    [0x7f9684150308] lua demux meta debug: Trying Lua playlist script /usr/lib/vlc/lua/meta/reader/filename.luac
    [0x7f9684150308] lua demux meta debug: Trying Lua scripts in /usr/share/vlc/lua/meta/reader
    [0x7f9684150308] main demux meta debug: no meta reader modules matched
    [0x7f968c0009b8] main input debug: `rtsp://127.0.0.1:1234/test.h264' successfully opened
    [0x7f9684000e38] live555 demux debug: tk->rtpSource->hasBeenSynchronizedUsingRTCP()
    [0x7f968c0009b8] main input error: ES_OUT_RESET_PCR called
    [0x7f968c0009b8] main input debug: Buffering 0%
    [0x7f968c0009b8] main input debug: Buffering 0%
    [0x7f9684034f68] packetizer_h264 packetizer warning: waiting for SPS/PPS
    [0x7f968c0009b8] main input debug: Buffering 0%
    [0x7f9684034f68] packetizer_h264 packetizer warning: waiting for SPS/PPS
    (the two lines above alternate several more times)
    [0x7f9684034f68] packetizer_h264 packetizer debug: found NAL_SPS (sps_id=0)
    [0x7f9684034f68] packetizer_h264 packetizer debug: found NAL_PPS (pps_id=0 sps_id=0)
    [0x7f968c0009b8] main input debug: Buffering 0%
    [0x7f968c0009b8] main input debug: Buffering 0%
    [0x7f9694002718] main spu text debug: looking for text renderer module matching "any": 2 candidates
    [0x7f9694002718] freetype spu text debug: Building font databases.
    [0x7f9694002718] freetype spu text debug: Took 0 microseconds
    Fontconfig warning: FcPattern object size does not accept value "0"
    Fontconfig warning: FcPattern object size does not accept value "0"
    [0x7f9694002718] freetype spu text debug: Using Serif Bold as font from file /usr/share/fonts/truetype/dejavu/DejaVuSans.ttf
    [0x7f9694002718] freetype spu text debug: using fontsize: 2
    [0x7f9694002718] main spu text debug: using text renderer module "freetype"
    [0x7f969400e408] main scale debug: looking for video filter2 module matching "any": 55 candidates
    [0x7f969400e408] swscale scale debug: 32x32 chroma: YUVA -> 16x16 chroma: RGBA with scaling using Bicubic (good quality)
    [0x7f969400e408] main scale debug: using video filter2 module "swscale"
    [0x7f9694025cc8] main scale debug: looking for video filter2 module matching "any": 55 candidates
    [0x7f9694025cc8] yuvp scale debug: YUVP to YUVA converter
    [0x7f9694025cc8] main scale debug: using video filter2 module "yuvp"
    [0x7f9694001428] main video output debug: Deinterlacing available
    [0x7f9694001428] main video output debug: deinterlace 0, mode blend, is_needed 0
    [0x7f968c0009b8] main input debug: Buffering 0%
    [0x7f9694001428] main video output debug: Opening vout display wrapper
    [0x7f9674001248] main vout display debug: looking for vout display module matching "any": 12 candidates
    [0x7f9674002618] main window debug: looking for vout window xid module matching "qt4,any": 4 candidates
    [0x7f9674002618] qt4 window debug: requesting video window...
    [0x6f3208] qt4 interface debug: Video was requested 0, 0
    [0x7f968c0009b8] main input debug: Buffering 0%
    [0x7f9674002618] main window debug: using vout window xid module "qt4"
    [0x7f96740027e8] main inhibit debug: looking for inhibit module matching "any": 2 candidates
    [0x7f96740027e8] dbus_screensaver inhibit debug: found service org.freedesktop.ScreenSaver
    [0x7f96740027e8] main inhibit debug: using inhibit module "dbus_screensaver"
    [0x7f9674001248] xcb_glx vout display debug: connected to X11.0 server
    [0x7f9674001248] xcb_glx vout display debug:  vendor : The X.Org Foundation
    [0x7f9674001248] xcb_glx vout display debug:  version: 11702000
    [0x7f9674001248] xcb_glx vout display debug: using screen 0x73
    [0x7f9674001248] xcb_glx vout display debug: using GLX extension version 1.4
    [0x7f9674001248] xcb_glx vout display debug: using X11 window 05400000
    shader program 1: WARNING: Output of vertex shader 'TexCoord1' not read by fragment shader
    WARNING: Output of vertex shader 'TexCoord2' not read by fragment shader

    [0x7f9674001248] main vout display debug: VoutDisplayEvent 'fullscreen' 0
    [0x7f9674001248] main vout display debug: VoutDisplayEvent 'resize' 1215x724 window
    [0x7f9674001248] main vout display debug: using vout display module "xcb_glx"
    [0x7f9694001428] main video output debug: original format sz 640x480, of (0,0), vsz 640x480, 4cc I420, sar 1:1, msk r0x0 g0x0 b0x0
    [0x7f9694002718] main spu text debug: removing module "freetype"
    [0x7f9694002718] main spu text debug: looking for text renderer module matching "any": 2 candidates
    [0x7f9694002718] freetype spu text debug: Building font databases.
    [0x7f9694002718] freetype spu text debug: Took 0 microseconds
    Fontconfig warning: FcPattern object size does not accept value "0"
    Fontconfig warning: FcPattern object size does not accept value "0"
    [0x7f9694002718] freetype spu text debug: Using Serif Bold as font from file /usr/share/fonts/truetype/dejavu/DejaVuSans.ttf
    [0x7f9694002718] freetype spu text debug: using fontsize: 2
    [0x7f9694002718] main spu text debug: using text renderer module "freetype"
    [0x7f96840246b8] avcodec decoder debug: using direct rendering
    [0x7f968c0009b8] main input debug: Buffering 0%
    [0x7f96840246b8] main decoder debug: End of video preroll
    [0x7f96840246b8] main decoder debug: Received first picture
    [0x7f9674001248] xcb_glx vout display debug: display is visible
    [0x7f9674001248] main vout display error: Failed to resize display
    [0x7f968c0009b8] main input debug: Buffering 0%
    [h264 @ 0x7f9684037e40] illegal short term buffer state detected
    [0x7f968c0009b8] main input debug: Buffering 0%
    [0x7f9674001248] main vout display debug: auto hiding mouse cursor
    [0x7f968c0009b8] main input debug: Buffering 0%
    (the "Buffering 0%" line repeats from here to the end of the log)
    As you can see, VLC is stuck waiting on buffering, while the other players receive buffers from my RTSP server without trouble; in Wireshark I can also see that the server is continuously sending RTP packets to VLC.

    Can someone help me with this issue?

    Many thanks in advance.

  • Developing MobyCAIRO

    26 May 2021, by Multimedia Mike — General

    I recently published a tool called MobyCAIRO. The ‘CAIRO’ part stands for Computer-Assisted Image ROtation, while the ‘Moby’ prefix refers to its role in helping process artifact image scans to submit to the MobyGames database. The tool is meant to provide an accelerated workflow for rotating and cropping image scans. It works on both Windows and Linux. Hopefully, it can solve similar workflow problems for other people.

    As of this writing, MobyCAIRO has not been tested on Mac OS X yet– I expect some issues there that should be easily solvable if someone cares to test it.

    The rest of this post describes my motivations and how I arrived at the solution.

    Background
    I have scanned well in excess of 2100 images for MobyGames and other purposes in the past 16 years or so. The workflow looks like this:


    Workflow diagram

    Image workflow


    It should be noted that my original workflow featured me manually rotating the artifact on the scanner bed in order to ensure straightness, because I guess I thought that rotate functions in image editing programs constituted dark, unholy magic or something. So my workflow used to be even more arduous:


    Longer workflow diagram

    I can’t believe I had the patience to do this for hundreds of scans


    Sometime last year, I was sitting down to perform some more scanning and found myself dreading the oncoming tedium of straightening and cropping the images. This prompted a pivotal question:


    Why can’t a computer do this for me?

    After all, I have always been a huge proponent of making computers handle the most tedious, repetitive, mind-numbing, and error-prone tasks. So I did some web searching to find if there were any solutions that dealt with this. I also consulted with some like-minded folks who have to cope with the same tedious workflow.

    I came up empty-handed. So I endeavored to develop my own solution.

    Problem Statement and Prior Work

    I want to develop a workflow that can automatically rotate an image so that it is straight, and also find the most likely crop rectangle, uniformly whitening the area outside of the crop area (in the case of circles).

    As mentioned, I checked to see if any other programs can handle this, starting with my usual workhorse, Photoshop Elements. But I can’t expect the trimmed-down version to do everything. I tried to find out if its big brother could handle the task, but couldn’t find a definitive answer on that. Nor could I find any other tools that seem to take an interest in optimizing this particular workflow.

    When I brought this up to some peers, I received some suggestions, including an idea that the venerable GIMP had a feature like this, but I could not find any evidence. Further, I would get responses of “Program XYZ can do image rotation and cropping.” I had to tamp down on the snark to avoid saying “Wow! An image editor that can perform rotation AND cropping? What a game-changer!” Rotation and cropping features have been table stakes for any halfway competent image editor for at least the last 25 years. I am hoping to find or create a program which can lend a bit of programmatic assistance to the task.

    Why can’t other programs handle this? The answer seems fairly obvious: image editing tools are general tools, and I want a highly customized workflow. It’s not reasonable to expect a turnkey solution to do this.

    Brainstorming An Approach
    I started with the happiest of happy cases— a disc that needed archiving (a marketing/press assets CD-ROM from a video game company, contents described here) which appeared to have some pretty clear straight lines:


    Ubisoft 2004 Product Catalog CD-ROM

    My idea was to try to find straight lines in the image and then rotate the image so that the image is parallel to the horizontal based on the longest single straight line detected.

    I just needed to figure out how to find a straight line inside of an image. Fortunately, I quickly learned that this is very much a solved problem thanks to something called the Hough transform. As a bonus, I read that this is also the tool I would want to use for finding circles, when I got to that part. The nice thing about knowing the formal algorithm to use is being able to find efficient, optimized libraries which already implement it.
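    To make the Hough idea concrete, here is a minimal, NumPy-only sketch of the line transform (my own illustration, not MobyCAIRO code): every edge pixel votes for all (angle, distance) line parameterizations passing through it, and the bin with the most votes identifies the dominant line.

```python
import numpy as np

def hough_line_angle(edges, n_theta=180):
    """Return the dominant line angle (in degrees, measured as the
    line's normal direction) in a binary edge map, using a
    brute-force Hough accumulator over (rho, theta) space."""
    ys, xs = np.nonzero(edges)
    thetas = np.deg2rad(np.arange(n_theta))      # one bin per degree
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    diag = int(np.ceil(np.hypot(*edges.shape)))  # largest possible |rho|
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    cols = np.arange(n_theta)
    for x, y in zip(xs, ys):
        # each edge pixel votes once per theta, for the rho of the
        # line through it whose normal points in that direction
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, cols] += 1
    _, best_theta = np.unravel_index(np.argmax(acc), acc.shape)
    return float(best_theta)

# a horizontal run of edge pixels has a vertical normal: theta = 90
img = np.zeros((50, 50), dtype=bool)
img[25, 5:45] = True
```

    The optimized library implementations (scikit-image's hough_line, OpenCV's HoughLines) compute the same accumulator far more efficiently, which matters on a 3300×3300 scan.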

    Early Prototype
    A little searching for how to perform a Hough transform in Python led me first to scikit. I was able to rapidly produce a prototype that did some basic image processing. However, running the Hough transform directly on the image and rotating according to the longest line segment discovered turned out not to yield expected results.


    Sub-optimal rotation

    It also took a very long time to chew on the 3300×3300 raw image– certainly longer than I care to wait for an accelerated workflow concept. The key, however, is that you are apparently not supposed to run the Hough transform on a raw image– you need to compute the edges first, and then attempt to determine which edges are ‘straight’. The recommended algorithm for this step is the Canny edge detector. After applying this, I get the expected rotation:


    Perfect rotation

    The algorithm also completes in a few seconds. So this is a good early result and I was feeling pretty confident. But, again– happiest of happy cases. I should also mention at this point that I had originally envisioned a tool that I would simply run against a scanned image and it would automatically/magically make the image straight, followed by a perfect crop.

    Along came my MobyGames comrade Foxhack to disabuse me of the hope of ever developing a fully automated tool. Just try and find a usefully long straight line in this:


    Nascar 07 Xbox Scan, incorrectly rotated

    Darn it, Foxhack…

    There are straight edges, to be sure. But my initial brainstorm of rotating according to the longest straight edge looks infeasible. Further, it’s at this point that we start brainstorming that perhaps we could match on ratings badges such as the standard ESRB badges omnipresent on U.S. video games. This gets into feature detection and complicates things.

    This Needs To Be Interactive
    At this point in the effort, I came to terms with the fact that the solution will need to have some element of interactivity. I will also need to get out of my safe Linux haven and figure out how to develop this on a Windows desktop, something I am not experienced with.

    I initially dreamed up an impressive beast of a program written in C++ that leverages Windows desktop GUI frameworks, OpenGL for display and real-time rotation, GPU acceleration for image analysis and processing tricks, and some novel input concepts. I thought GPU acceleration would be crucial since I have a fairly good GPU on my main Windows desktop and I hear that these things are pretty good at image processing.

    I created a list of prototyping tasks on a Trello board and made a decent amount of headway on prototyping all the various pieces that I would need to tie together in order to make this a reality. But it was ultimately slow going when you can only grab an hour or 2 here and there to try to get anything done.

    Settling On A Solution
    Recently, I was determined to get a set of old shareware discs archived. I ripped the data a year ago but I was blocked on the scanning task because I knew that would also involve tedious straightening and cropping. So I finally got all the scans done, which was reasonably quick. But I was determined to not manually post-process them.

    This was fairly recent, but I can’t quite recall how I managed to come across the OpenCV library and its Python bindings. OpenCV is an amazing library that provides a significant toolbox for performing image processing tasks. Not only that, it provides “just enough” UI primitives to be able to quickly create a basic GUI for your program, including image display via multiple windows, buttons, and keyboard/mouse input. Furthermore, OpenCV seems to be plenty fast enough to do everything I need in real time, just with (accelerated where appropriate) CPU processing.

    So I went to work porting the ideas from the simple standalone Python/scikit tool. I thought of a refinement to the straight line detector– instead of just finding the longest straight edge, it creates a histogram of 360 rotation angles, and builds a list of lines corresponding to each angle. Then it sorts the angles by cumulative line length and allows the user to iterate through this list, which will hopefully provide the most likely straightened angle up front. Further, the tool allows making fine adjustments by 1/10 of an angle via the keyboard, not the mouse. It does all this while highlighting in red the straight line segments that are parallel to the horizontal axis, per the current candidate angle.
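    That angle-histogram refinement might be sketched like this (a hypothetical helper of my own; the real tool feeds it the segments found by its Hough line detector):

```python
from collections import defaultdict
import math

def rank_candidate_angles(segments):
    """Bucket line segments (x1, y1, x2, y2) into 0.1-degree angle
    bins and rank the angles by cumulative supporting segment length,
    most-supported angle first."""
    buckets = defaultdict(float)
    for x1, y1, x2, y2 in segments:
        # fold direction into [0, 180) so opposite headings agree
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        buckets[round(angle, 1)] += math.hypot(x2 - x1, y2 - y1)
    return sorted(buckets, key=buckets.get, reverse=True)
```

    The first entry of the returned list is the angle with the greatest cumulative line length behind it, i.e. the most likely straightening candidate to present to the user first.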


    MobyCAIRO - rotation interface

    The tool draws a light-colored grid over the frame to aid the user in visually verifying the straightness of the image. Further, the program has a mode that allows the user to see the algorithm’s detected edges:


    MobyCAIRO - show detected lines

    For the cropping phase, the program uses the Hough circle transform in a similar manner, finding the most likely circles (if the image to be processed is supposed to be a circle) and allowing the user to cycle among them while making precise adjustments via the keyboard, again, rather than the mouse.


    MobyCAIRO - assisted circle crop

    Running the Hough circle transform is a significantly more intensive operation than the line transform. When I ran it on a full 3300×3300 image, it ran for a long time. I didn’t let it run longer than a minute before forcibly ending the program. Is this approach unworkable? Not quite– it turns out that the transform is just as effective when shrinking the image to 400×400, and completes in under 2 seconds on my Core i5 CPU.
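    To illustrate why the circle transform is so much heavier: for every candidate radius, each edge pixel must vote across a two-dimensional space of possible centres. A minimal fixed-radius version (my own sketch, not the OpenCV implementation) looks like this; shrinking the image shrinks both the number of voting pixels and the accumulator, which is why the 400×400 downscale helps so much.

```python
import numpy as np

def hough_circle_center(edges, radius, n_angles=90):
    """Vote for circle centres at one fixed radius; a real detector
    repeats this over a whole range of radii, multiplying the cost."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=int)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # each edge pixel votes for every centre 'radius' away from it
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # handles duplicate votes
    y0, x0 = np.unravel_index(np.argmax(acc), acc.shape)
    return int(x0), int(y0)
```

    A circle found on the downscaled image maps back to the original simply by multiplying its centre coordinates and radius by the scale factor.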

    For rectangular cropping, I just settled on using OpenCV’s built-in region-of-interest (ROI) facility. I tried to intelligently find the best candidate rectangle and allow fine adjustments via the keyboard, but I wasn’t having much success, so I took a path of lesser resistance.

    Packaging and Residual Weirdness
    I realized that this tool would be more useful to a broader Windows-using base of digital preservationists if they didn’t have to install Python, establish a virtual environment, and install the prerequisite dependencies. Thus, I made the effort to figure out how to wrap the entire thing up into a monolithic Windows EXE binary. It is available from the project’s GitHub release page (another thing I figured out for the sake of this project!).

    The binary is pretty heavy, weighing in at a bit over 50 megabytes. You might advise using compression– it IS compressed! Before I figured out the --onefile option for pyinstaller.exe, the generated dist/ subdirectory was 150 MB. Among other things, there’s a 30 MB FORTRAN BLAS library packaged in!

    Conclusion and Future Directions
    Once I got it all working with a simple tkinter UI up front in order to select between circle and rectangle crop modes, I unleashed the tool on 60 or so scans in bulk, using the Windows forfiles command (another learning experience). I didn’t put a clock on the effort, but it felt faster. Of course, I was beaming with pride the whole time because I was using my own tool. I just wish I had thought of it sooner. But, really, with 2100+ scans under my belt, I’m just getting started– I literally have thousands more artifacts to scan for preservation.

    The tool isn’t perfect, of course. Just tonight, I threw another scan at MobyCAIRO. Just go ahead and try to find straight lines in this specimen:


    Reading Who? Reading You! CD-ROM

    I eventually had to use the text left and right of center to line up against the grid with the manual keyboard adjustments. Still, I’m impressed by how these computer vision algorithms can see patterns I can’t, highlighting lines I never would have guessed at.

    I’m eager to play with OpenCV some more, particularly the video processing functions, perhaps even some GPU-accelerated versions.

    The post Developing MobyCAIRO first appeared on Breaking Eggs And Making Omelettes.

  • Trying to get the current FPS and Frametime value into Matplotlib title

    16 June 2022, by TiSoBr

    I'm trying to turn an exported CSV of benchmark logs into an animated graph. It works so far, but I can't get the titles on top of both plots to show the current FPS and frametime (in ms) values as the animation advances.

    


    That's the output I'm getting. It looks like the code simply dumps all of the values into the title instead of updating it with the current one?

    


    Screengrab of CLI output
    Screengrab of the final output (inverted)

    


    from __future__ import division
import sys, getopt
import time
import matplotlib
import numpy as np
import subprocess
import math
import re
import argparse
import os
import glob

import matplotlib.animation as animation
import matplotlib.pyplot as plt


def check_pos(arg):
    ivalue = int(arg)
    if ivalue <= 0:
        raise argparse.ArgumentTypeError("%s Not a valid positive integer value" % arg)
    return True
    
def moving_average(x, w):
    return np.convolve(x, np.ones(w), 'valid') / w
    

parser = argparse.ArgumentParser(
    description = "Example Usage python frame_scan.py -i mangohud -c '#fff' -o mymov",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("-i", "--input", help = "Input data set from mangohud", required = True, nargs='+', type=argparse.FileType('r'), default=sys.stdin)
parser.add_argument("-o", "--output", help = "Output file name", required = True, type=str, default = "")
parser.add_argument("-r", "--framerate", help = "Set the desired framerate", required = False, type=float, default = 60)
parser.add_argument("-c", "--colors", help = "Colors for the line graphs; must be in quotes", required = True, type=str, nargs='+', default = 60)
parser.add_argument("--fpslength", help = "Configures how long the data will be shown on the FPS graph", required = False, type=float, default = 5)
parser.add_argument("--fpsthickness", help = "Changes the line width for the FPS graph", required = False, type=float, default = 3)
parser.add_argument("--frametimelength", help = "Configures how long the data will be shown on the frametime graph", required = False, type=float, default = 2.5)
parser.add_argument("--frametimethickness", help = "Changes the line width for the frametime graph", required = False, type=float, default = 1.5)
parser.add_argument("--graphcolor", help = "Changes all of the line colors on the graph; expects hex value", required = False, default = '#FFF')
parser.add_argument("--graphthicknes", help = "Changes the line width of the graph", required = False, type=float, default = 1)
parser.add_argument("-ts","--textsize", help = "Changes the size of the numbers marking the ticks", required = False, type=float, default = 23)
parser.add_argument("-fsM","--fpsmax", help = "Upper limit of the FPS graph", required = False, type=float, default = 180)
parser.add_argument("-fsm","--fpsmin", help = "Lower limit of the FPS graph", required = False, type=float, default = 0)
parser.add_argument("-fss","--fpsstep", help = "Step between ticks on the FPS graph", required = False, type=float, default = 30)
parser.add_argument("-ftM","--frametimemax", help = "Upper limit of the frametime graph", required = False, type=float, default = 50)
parser.add_argument("-ftm","--frametimemin", help = "Lower limit of the frametime graph", required = False, type=float, default = 0)
parser.add_argument("-fts","--frametimestep", help = "Step between ticks on the frametime graph", required = False, type=float, default = 10)

arg = parser.parse_args()
status = False


if arg.input:
    status = True
if arg.output:
    status = True
if arg.framerate:
    status = check_pos(arg.framerate)
if arg.fpslength:
    status = check_pos(arg.fpslength)
if arg.fpsthickness:
    status = check_pos(arg.fpsthickness)
if arg.frametimelength:
    status = check_pos(arg.frametimelength)
if arg.frametimethickness:
    status = check_pos(arg.frametimethickness)
if arg.colors:
    # There must be exactly one color per input file
    if len(arg.input) == len(arg.colors):
        for i in arg.colors:
            if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", i):
                status = True
            else:
                print('{} : Isn\'t a valid hex value!'.format(i))
                status = False
    else:
        print('You must have the same amount of colors as files in input!')
        status = False
if arg.graphcolor:
    if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", arg.graphcolor):
        status = True
    else:
        print('{} : Isn\'t a valid hex value!'.format(arg.graphcolor))
        status = False
if arg.graphthicknes:
    status = check_pos(arg.graphthicknes)
if arg.textsize:
    status = check_pos(arg.textsize)
if not status:
    print("For a list of arguments try -h or --help") 
    exit()


# Empty output folder
files = glob.glob('/output/*')
for f in files:
    os.remove(f)


# We need to know the longest recording out of all inputs so we know when to stop the video
longest_data = 0

# Format the raw data into a list of tuples (fps, frame time in ms, time from start in micro seconds)
# The first three lines of our data are setup so we ignore them
data_formated = []
for li, i in enumerate(arg.input):
    t = 0
    sublist = []
    for line in i.readlines()[3:]:
        x = line[:-1].split(',')
        fps = float(x[0])
        frametime = int(x[1])/1000 # convert from microseconds to milliseconds
        elapsed = int(x[11])/1000 # convert from nanosecond to microseconds
        data = (fps, frametime, elapsed)
        sublist.append(data)
    # Compare the last entry of each list with the longest recording seen so far
    if sublist[-1][2] >= longest_data:
        longest_data = sublist[-1][2]
    data_formated.append(sublist)


max_blocksize = max(arg.fpslength, arg.frametimelength) * arg.framerate
blockSize = arg.framerate * arg.fpslength


# Get step time in microseconds
step = (1/arg.framerate) * 1000000 # 1000000 is one second in microseconds
frame_size_fps = (arg.fpslength * arg.framerate) * step
frame_size_frametime = (arg.frametimelength * arg.framerate) * step


# Total frames will have to be updated for more than one source
total_frames = int(int(longest_data) / step)


if True: # Gonna be honest, this only exists so I can collapse this block of code

    # Sets up our figures to be next to each other (horizontally) and with a ratio 3:1 to each other
    fig, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]})

    # Size of whole output 1920x360 1080/3=360
    fig.set_size_inches(19.20, 3.6)

    # Make the background transparent
    fig.patch.set_alpha(0)


    # Loop through all active axes; saves a lot of lines in ax1.do_thing(x) ax2.do_thing(x)
    for axes in fig.axes:

        # Set all splines to the same color and width
        for loc, spine in axes.spines.items():
            axes.spines[loc].set_color(arg.graphcolor)
            axes.spines[loc].set_linewidth(arg.graphthicknes)

        # Make sure we don't render any data points as this will be our background
        axes.set_xlim(-(max_blocksize * step), 0)
        

        # Make both plots transparent as well as the background
        axes.patch.set_alpha(.5)
        axes.patch.set_color('#020202')

        # Change the Y axis info to be on the right side
        axes.yaxis.set_label_position("right")
        axes.yaxis.tick_right()

        # Add the white lines across the graphs; the location of the lines are based off set_{}ticks
        axes.grid(alpha=.8, b=True, which='both', axis='y', color=arg.graphcolor, linewidth=arg.graphthicknes)

        # Remove X axis info
        axes.set_xticks([])

    # Add a another Y axis so ticks are on both sides
    tmp_ax1 = ax1.secondary_yaxis("left")
    tmp_ax2 = ax2.secondary_yaxis("left")

    # Set both to the same values
    ax1.set_yticks(np.arange(arg.fpsmin, arg.fpsmax + 1, step=arg.fpsstep))
    ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))
    tmp_ax1.set_yticks(np.arange(arg.fpsmin , arg.fpsmax + 1, step=arg.fpsstep))
    tmp_ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))

    # Change the "ticks" to be white and correct size also change font size
    ax1.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    ax2.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    tmp_ax1.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=8, labelsize=0) # Label size of 0 disables the fps/frame numbers
    tmp_ax2.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=8, labelsize=0)


    # Limits Y scale
    ax1.set_ylim(arg.fpsmin,arg.fpsmax + 1)
    ax2.set_ylim(arg.frametimemin,arg.frametimemax + 1)

    # Add an empty plot
    line = ax1.plot([], lw=arg.fpsthickness)
    line2 = ax2.plot([], lw=arg.frametimethickness)

    # Sets all the data for our benchmark
    for benchmarks, color in zip(data_formated, arg.colors):
        y = moving_average([x[0] for x in benchmarks], 25)
        y2 = [x[1] for x in benchmarks]
        x = [x[2] for x in benchmarks]
        line += ax1.plot(x[12:-12],y, c=color, lw=arg.fpsthickness)
        line2 += ax2.step(x,y2, c=color, lw=arg.frametimethickness)
    
    # Add titles. set_title() runs once here, before any frame renders, so formatting
    # the whole y2 list prints every value; show summary averages instead. A live
    # per-frame value would have to be updated (and redrawn) inside render_frame.
    ax1.set_title("Avg. frames per second: {:.1f}".format(np.mean(y)), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')
    ax2.set_title("Avg. frametime in ms: {:.1f}".format(np.mean(y2)), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')

    # Removes unwanted white space; also controls the space between the two graphs
    plt.tight_layout(pad=0, h_pad=0, w_pad=2.5)
    
    fig.canvas.draw()

    # Cache the background
    axbackground = fig.canvas.copy_from_bbox(ax1.bbox)
    ax2background = fig.canvas.copy_from_bbox(ax2.bbox)


# Create an ffmpeg instance as a subprocess; we will pipe each finished frame into ffmpeg,
# encoded as Apple QuickTime Animation (qtrle) for small(ish) file size and alpha support.
# There are free and open-source codecs that can also do this, but with much larger file sizes
canvas_width, canvas_height = fig.canvas.get_width_height()
outf = '{}.mov'.format(arg.output)
cmdstring = ('ffmpeg',
                '-stats', '-hide_banner', '-loglevel', 'error', # reduce ffmpeg's console output
                '-y', '-r', '60', # set the fps of the video
                '-s', '%dx%d' % (canvas_width, canvas_height), # size of image string
                '-pix_fmt', 'argb', # format can't be changed since this is what `fig.canvas.tostring_argb()` outputs
                '-f', 'rawvideo',  '-i', '-', # tell ffmpeg to expect raw video from the pipe
                '-vcodec', 'qtrle', outf) # output encoding must support alpha channel
pipe = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

def render_frame(frame : int):

    # Set the bounds of the graph for each frame to render the correct data
    start = (frame * step) - frame_size_fps
    end = start + frame_size_fps
    ax1.set_xlim(start,end)
     
     
    start = (frame * step) - frame_size_frametime
    end = start + frame_size_frametime
    ax2.set_xlim(start,end)
    

    # Restore background
    fig.canvas.restore_region(axbackground)
    fig.canvas.restore_region(ax2background)

    # Redraw just the points will only draw points with in `axes.set_xlim`
    for i in line:
        ax1.draw_artist(i)
        
    for i in line2:
        ax2.draw_artist(i)

    # Fill in the axes rectangle
    fig.canvas.blit(ax1.bbox)
    fig.canvas.blit(ax2.bbox)
    
    fig.canvas.flush_events()

    # Converts the finished frame to ARGB
    string = fig.canvas.tostring_argb()
    return string




#import multiprocessing
#p = multiprocessing.Pool()
#for i, _ in enumerate(p.imap(render_frame, range(0, int(total_frames + max_blocksize))), 20):
#    pipe.stdin.write(_)
#    sys.stderr.write('\rdone {0:%}'.format(i/(total_frames + max_blocksize)))
#p.close()

# Single-threaded; not much slower than multi-threading
if __name__ == "__main__":
    for i, _ in enumerate(range(0, int(total_frames + max_blocksize))):
        # Render each frame exactly once and pipe it to ffmpeg
        frame_bytes = render_frame(_)
        pipe.stdin.write(frame_bytes)
        sys.stderr.write('\rdone {0:%}'.format(i/(total_frames + max_blocksize)))
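
As for why the titles never animate: `ax.set_title(...)` is only called once, before the render loop, and the blitting setup only restores and blits `ax1.bbox`/`ax2.bbox`, which do not contain the title text. A minimal, standalone sketch of updating a title per frame (the FPS values here are made up; the real script would index into its parsed CSV data at the current frame, and when blitting would either blit a region covering the title or call `ax.draw_artist(title)` explicitly):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; the script above also renders off-screen
import matplotlib.pyplot as plt

fig, ax = plt.subplots()

# Keep a reference to the title's Text artist so it can be updated each frame
title = ax.set_title("Avg. frames per second: ", loc="left")

fps_per_frame = [60.0, 58.3, 61.7]  # hypothetical per-frame values
for frame, fps in enumerate(fps_per_frame):
    title.set_text("Avg. frames per second: {:.1f}".format(fps))
    # A full draw() repaints everything, including the title; with blitting you
    # would instead restore the background, draw the lines AND the title artist,
    # then blit a bbox large enough to include the title (e.g. fig.bbox)
    fig.canvas.draw()

print(title.get_text())  # Avg. frames per second: 61.7
```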