Other articles (52)

  • The farm's recurring Cron tasks

    1 December 2010

    Managing the farm relies on running several repetitive tasks, called Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the farm's instances on a regular basis. Coupled with a system Cron on the central site of the farm, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)
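
    The system Cron mentioned above is not detailed in the article; here is a minimal sketch, assuming the central site only needs to receive a regular visit to wake up gestion_mutu_super_cron (the URL is a placeholder):

    # Illustrative crontab entry on the central server (not from the article):
    # one visit per minute is enough for the super Cron to run and, in turn,
    # trigger the Cron of every hosted instance.
    * * * * * curl -s http://ferme.example.org/ > /dev/null 2>&1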

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, as long as your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several plugins in addition to those used by the channels in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualised instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (8837)

  • FFmpeg error splitting into individual encoded frames

    19 April 2021, by Vainmonde De Courtenay

    I have a folder of .png images that I want to turn into a series of .h264 frames (one encoded frame per .png). The images are named frame001.png, frame002.png, ...

    


    First, inside the folder containing the .png files, I ran

    


    ffmpeg -r 10 -i frame%3d.png -codec libx264 -r 10 video.h264 -y


    


    which did its job, generating a single video.h264. But now I want to split that back into many small .h264 files, one per frame. Following this advice I tried

    


    ffmpeg -i video.h264 -f image2 -vcodec copy -bsf h264_mp4toannexb frame%03d.h264


    


    but I hit this error:

    


    [image2 @ 0x55d2fc1f7b20] Application provided invalid, non monotonically increasing dts to muxer in stream 0: -2 >= -2


    


    Full console output:

    


    # ffmpeg -i video.h264 -f image2 -vcodec copy -bsf h264_mp4toannexb fr%03d.h264
ffmpeg version 3.4.8-0ubuntu0.2 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
  configuration: --prefix=/usr --extra-version=0ubuntu0.2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
Input #0, h264, from 'video.h264':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: h264 (High 4:4:4 Predictive), yuv444p(progressive), 480x852, 10 fps, 10 tbr, 1200k tbn, 20 tbc
Output #0, image2, to 'fr%03d.h264':
  Metadata:
    encoder         : Lavf57.83.100
    Stream #0:0: Video: h264 (High 4:4:4 Predictive), yuv444p(progressive), 480x852, q=2-31, 10 fps, 10 tbr, 10 tbn, 10 tbc
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[image2 @ 0x55d2fc1f7b20] Application provided invalid, non monotonically increasing dts to muxer in stream 0: -2 >= -2
frame=   28 fps=0.0 q=-1.0 Lsize=N/A time=00:00:02.50 bitrate=N/A speed=1.58e+03x    
video:142kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown


    


    I have tried this with multiple videos and the same thing happens. When I check, the new files do appear, but they aren't really valid .h264 files (each is only a few bytes, apparently an empty stub), and I'm guessing this is down to the error shown above.
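
    One possible workaround, as a rough sketch only (not the asker's solution): skip the intermediate video.h264 and encode each .png straight into its own single-frame raw H.264 file, so the image2/copy step is not needed at all:

    # Illustrative only: one encoded .h264 file per input image.
    for f in frame*.png; do
        ffmpeg -y -i "$f" -c:v libx264 -frames:v 1 "${f%.png}.h264"
    done

    Each output is then an independently decodable keyframe rather than a frame pulled out of one continuous encode, but it does produce one valid .h264 file per image.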

    


  • Separate simultaneously changing regions of video into individual videos

    17 July 2019, by Elle Fie

    Given a single video stream (up to 4K resolution), where only small displayed portions may change, I’d like to identify these changing sections and create separate video streams, one for each changing section of the input video stream, in real time.

    Note that this is spatial extraction, not time slicing!

    Q1: Is there a better name for this process?

    Q2: Is this an already solved problem?

    It seems ImageMagick's Compare program supports diffing two images, which I can process to identify changed regions as coordinates for an ffmpeg crop (launched in parallel for each discovered diff region; see the crop sketch after this question), but this method relies on having a PNG stream to avoid false-positive diffs caused by lossy encoding. It is also too slow to run in real time.

    Q3: Is there any way ffmpeg can dump out the regions that drive its scene-change detection?
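
    The crop step of the approach described above might look roughly like this once a changing region has been located; the coordinates, codec, and file names below are placeholders, not values given in the question:

    # Illustrative only: extract one detected 320x240 region at (x=100, y=50);
    # one such command would be launched per detected region.
    ffmpeg -i input.mp4 -filter:v "crop=320:240:100:50" -c:v libx264 region1.mp4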

  • How to add additional metadata to individual frames (DDBs) when creating an AVI file with ffmpeg

    6 December 2019, by Totte Karlsson

    I'm creating AVI videos from device-dependent bitmaps (DDBs).

    The pipeline is quite simple: a GigE camera delivers frames one by one, and each frame, a DDB, is piped to an ffmpeg process that produces the final AVI file using H.264 compression (a rough sketch of this pipeline follows the question).

    These videos are scientific in nature, and we would like to store/embed experimental hardware information, such as the states of a few digital lines, with each frame.
    This information needs to be available in the final AVI video.

    The question is: is this possible?

    Looking at this: https://docs.microsoft.com/en-us/windows/win32/api/wingdi/ns-wingdi-bitmap it does not seem that adding additional data to the DDBs themselves is possible, but I'm not sure.
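
    The capture pipeline described above, as a minimal sketch: it assumes the frames arrive on stdin as raw BGR bitmaps of a known, fixed size; the capture program name, resolution, pixel format, and frame rate are all placeholders:

    # Illustrative only: raw frames on stdin -> H.264 video in an AVI container.
    your_capture_app | ffmpeg -f rawvideo -pix_fmt bgr24 -video_size 1920x1080 \
        -framerate 30 -i - -c:v libx264 output.avi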