
Other articles (37)
-
The farm's regular Cron tasks
1 December 2010 — Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance of the mutualised farm on a regular basis. Combined with a system Cron on the central site of the mutualisation, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
Encoding and processing into web-friendly formats
13 April 2011 — MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed to extract the data needed by search engines, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
On other sites (5951)
-
avutil/eval: Use even better PRNG
2 January 2024, by Michael Niedermayer
avutil/eval: Use even better PRNG
This is the 64-bit version of Chris Doty-Humphrey's SFC64.
Compared to the LCGs, these produce much better quality numbers.
Compared to LFGs, this needs less state (our LFG has 224 bytes of
state for its 32-bit version); this has 32 bytes of state.
Also, the initialization for our LFG is slower.
This is also much faster than KISS or PCG.
This commit replaces the broken LCG used before (broken as it had only a period of 200M due to being put in a double).
This changes the output from random(), which is why libswresample.mak is updated; the update was done using the command in libswresample.mak.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
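
For context, the SFC64 step itself is tiny. The sketch below follows the constants from Chris Doty-Humphrey's PractRand description (shifts 11 and 3, rotate by 24); it is not a copy of FFmpeg's implementation. Its four 64-bit words correspond to the 32 bytes of state mentioned in the commit message.

/* Sketch of an SFC64 step, per Chris Doty-Humphrey's PractRand constants;
   not FFmpeg's actual code. Four 64-bit words = 32 bytes of state. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t a, b, c, counter; } sfc64;

static uint64_t sfc64_get(sfc64 *s)
{
    uint64_t tmp = s->a + s->b + s->counter++;
    s->a = s->b ^ (s->b >> 11);
    s->b = s->c + (s->c << 3);
    s->c = ((s->c << 24) | (s->c >> 40)) + tmp;  /* rotl(c, 24) + tmp */
    return tmp;
}

int main(void)
{
    sfc64 s = { 1, 2, 3, 1 };            /* arbitrary seed for the sketch */
    for (int i = 0; i < 12; i++)         /* a few warm-up rounds */
        sfc64_get(&s);
    printf("%016" PRIx64 "\n", sfc64_get(&s));
    return 0;
}
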
-
avformat/matroskadec: Improve invalid length error handling
17 May 2019, by Andreas Rheinhardt
avformat/matroskadec: Improve invalid length error handling
1. Up until now, the error message for EBML numbers whose length exceeds the limits imposed upon them because of the element's type did not distinguish between known-length and unknown-length elements. As a consequence, the numerical value of the define constant EBML_UNKNOWN_LENGTH was emitted as part of the error message, which is of course not appropriate. This commit changes this by adding error messages designed for unknown-length elements.
2. We impose some (arbitrary) sanity checks on the lengths of certain element types; these checks were conducted before the checks depending on whether the element exceeds its containing master element. Now the order has been reversed, because a failure at the (formerly) latter check implies that the file is truly erroneous and not only fails our arbitrary length limit. Moreover, this increases the informativeness of the error messages.
3. Furthermore, the error message in general has been changed by replacing the type of the element (something internal to this demuxer and therefore suitable as debug output at best, not as an error message intended for ordinary users) with the element ID. The element's position has been added, too.
4. Finally, the length limit for EBML_NONE elements has been changed so that all unknown-length elements of EBML_NONE type trigger an error. This is done because unknown-length elements can't be skipped and need to be parsed, but there is no syntax to parse available for EBML_NONE elements. This is done in preparation for a further patch which allows more unknown-length elements than just clusters and segments.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
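
For readers unfamiliar with the container: an EBML number encodes its own length in the leading bits of its first byte, which is where the length limits discussed above come in. The reader below is an illustrative, hypothetical helper, not matroskadec's actual code; it strips the length marker, as is done when reading element sizes.

/* Illustrative EBML variable-length number reader; not matroskadec's code.
   The count of leading zero bits in the first byte gives the total length
   (1..8 bytes). Returns bytes consumed, or -1 on malformed input. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static int ebml_read_num(const uint8_t *buf, int buf_size, uint64_t *out)
{
    if (buf_size < 1)
        return -1;

    int len = 1;
    while (len <= 8 && !(buf[0] & (0x80 >> (len - 1))))
        len++;
    if (len > 8 || len > buf_size)
        return -1;                          /* invalid or truncated number */

    uint64_t num = buf[0] & (0xff >> len);  /* strip the length marker */
    for (int i = 1; i < len; i++)
        num = (num << 8) | buf[i];

    *out = num;
    return len;
}

int main(void)
{
    const uint8_t sample[] = { 0x42, 0x86 };  /* two-byte number, value 0x286 */
    uint64_t val;
    int n = ebml_read_num(sample, sizeof(sample), &val);
    printf("consumed %d bytes, value 0x%" PRIx64 "\n", n, val);
    return 0;
}
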
-
How to extract motion vectors from h264 without a full decode on the CPU
25 September 2020, by Adrian May
I'm trying to use my nose as a pointing device. The plan is to encode the video stream from a webcam pointed at my face as h264 or the like, get the motion vectors, cook the numbers a bit and chuck them into /dev/uinput to make the mouse pointer move about. The uinput bit was easy.


This has to work with zero discernible latency. This, for instance:


#!/bin/bash
[ -p pipe.mkv ] || mkfifo pipe.mkv
ffmpeg -y -rtbufsize 1M -s 640x360 -vcodec mjpeg -i /dev/video0 -c h264_nvenc pipe.mkv &
ffplay -flags2 +export_mvs -vf codecview=mv=pf+bf+bb pipe.mkv



shows that the vectors are there, but with a latency of several seconds, which is unusable for a mouse. I know that the first ffmpeg step is working very fast by using the GPU, so either the pipe or the h264 decode in the second step is introducing the latency.


I tried MV Tractus (same as mpegflow, I think) in a similar pipe arrangement and it was also very slow. They do a full h264 decode on the CPU and I think that's the problem, because I can see them imposing a lot of load on one CPU. If the pipe had caused the delay by buffering badly, then the CPU wouldn't have been loaded. I guess ffplay also did the decoding on the CPU and I couldn't persuade it not to, but it only wants to draw arrows, which are of no use to me.


I think there are several approaches, and I'd like advice on which would be best, or if there's something even better I don't know about. I could:


- Decode in hardware and get the motion vectors. So far this has failed. I tried combining ffmpeg's extract_mvs.c and hw_decode.c samples but no motion vectors turn up. vdpau is the only decoder I got working on my linux box. I have a nvidia gpu.
- Do a minimal parse of the h264 to fish out the motion vectors only, ignoring all the other data. I think this would mean putting some kind of "motion only" option in libav's parser, but I'm not at all familiar with that code.
- Find some other h264 parsing library that has said option and also unpacks the container.
- Forget about hardware accelerated encoding and use a stripped down encoder to make only the motion vectors on either CPU or GPU. I suspect this would be slow because I think calculating the motion vectors is the hardest part of the algorithm.

I'm tending towards the second option but I need some help figuring out where in the libav code to do it.
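
For what it's worth, the side-data route used by the extract_mvs.c sample mentioned in the first option looks roughly like the sketch below: open the software decoder with flags2 set to +export_mvs, then read AV_FRAME_DATA_MOTION_VECTORS off each decoded frame. Hardware decoders generally don't fill this side data, which is consistent with nothing turning up when combined with hw_decode.c. Demuxing, the decode loop and error handling are omitted here.

/* Sketch of the motion-vector side-data path (as in FFmpeg's
   doc/examples/extract_mvs.c); software decoders only. */
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavutil/dict.h>
#include <libavutil/motion_vector.h>

/* Ask the decoder to export motion vectors when opening it. */
static int open_decoder_with_mvs(AVCodecContext *dec_ctx, const AVCodec *dec)
{
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "flags2", "+export_mvs", 0);
    int ret = avcodec_open2(dec_ctx, dec, &opts);
    av_dict_free(&opts);
    return ret;
}

/* After avcodec_receive_frame(): vectors arrive as frame side data. */
static void dump_motion_vectors(const AVFrame *frame)
{
    AVFrameSideData *sd =
        av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
    if (!sd)
        return;                              /* e.g. intra-only frames */

    const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
    size_t nb = sd->size / sizeof(*mvs);
    for (size_t i = 0; i < nb; i++)
        printf("src=%2d block=%2dx%-2d (%4d,%4d) -> (%4d,%4d)\n",
               mvs[i].source, mvs[i].w, mvs[i].h,
               mvs[i].src_x, mvs[i].src_y, mvs[i].dst_x, mvs[i].dst_y);
}

This still pays for a full software decode, which is exactly what the question is trying to avoid; it mainly shows where the vectors surface in the libav API before any "motion only" shortcut is attempted.
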

