
Other articles (111)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors can modify their own information on the authors page -
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out. -
Changing your graphic theme
22 February 2011. The graphic theme does not change the actual layout of the elements on the page; it only modifies their appearance.
The placement can indeed be modified, but that modification is purely visual and does not affect the semantic representation of the page.
Changing the graphic theme in use
To change the graphic theme in use, the zen-garden plugin must be activated on the site.
Then simply go to the configuration area of the (...)
On other sites (6551)
-
Fastest way to extract raw Y' plane data from Y'CbCr encoded video?
20 February 2024, by memeko. I have a use-case where I'm extracting I-Frames from videos and turning them into perceptual hashes for later analysis.



I'm currently using ffmpeg to do this with a command akin to:

ffmpeg -skip_frame nokey -i 'in%~1.mkv' -vsync vfr -frame_pts true -vf 'keyframes/_Y/out%~1/%%06d.bmp'


and then reading in the data from the resulting images.




This is a bit wasteful as, to my understanding, ffmpeg does an implicit YUV -> RGB colour-space conversion, and I'm also needlessly saving intermediate data to disk.

Most modern video codecs utilise chroma subsampling and have frames encoded in a Y'CbCr colour-space, where Y' is the luma component, and Cb and Cr are the blue-difference and red-difference chroma components.


In something like YUV420p, as used with the H.264/H.265 video codecs, the image is stored as separate Y', Cb and Cr planes, where each Y' value is 8 bits long and corresponds to one pixel.
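
As a concrete illustration (assuming a 1920×1080 frame in 8-bit yuv420p, which the question does not specify): the Y' plane is 1920 × 1080 = 2 073 600 bytes, while Cb and Cr are each subsampled to 960 × 540 = 518 400 bytes, so the luma plane alone accounts for two thirds of the frame's pixel data.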



As I use gray-scale data for generating the perceptual hashes anyway, I was wondering if there is a way to simply grab just the raw Y' values from any given I-Frame into an array and skip all of the unnecessary conversions and extra steps?

(as the luma component is essentially equivalent to the grayscale data I need for generating hashes)


I came across the -vf 'extractplanes=y' filter in ffmpeg that seems like it might do just that, but according to a source:



"...what is extracted by 'extractplanes' is not raw data of the (for example) Y plane. Each extracted is converted to grayscale. That is, the converted video data has YUV (or RGB) which is different from the input."




which makes it seem like it is touching the chroma components and doing some conversion anyway; in testing, applying this filter didn't affect the processing time of the I-Frame extraction either.
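
For reference, a hedged sketch of one way to keep the extracted luma off disk entirely, assuming piping raw frames into a consumer process is acceptable (your_hash_tool is a placeholder, not a real tool):

ffmpeg -skip_frame nokey -i input.mkv -vsync vfr -vf extractplanes=y -f rawvideo pipe:1 | your_hash_tool

Each decoded keyframe then arrives on stdout as width × height bytes of 8-bit luma, with no BMP encoding or RGB conversion step in between.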



My script is currently written in Python, but I am in the process of migrating it to C++, so I would prefer any solutions pertaining to the latter.

ffmpeg seems like the ideal candidate for this task, but I really am looking for whatever solution would ingest the data fastest, preferably saving directly to RAM, as I'll be processing a large number of video files and discarding the I-Frame luma pixel data once a hash has been generated.

I would also like to associate each I-Frame with its corresponding frame number in the video.
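
For the C++ side, here is a minimal sketch of the kind of thing the libav* API allows, assuming FFmpeg 5.x or later headers and a software-decodable planar YUV stream; error handling, end-of-stream flushing, and the actual hashing step are left out:

// Sketch: decode only keyframes and copy each frame's raw Y' plane into a
// contiguous buffer. Compile with e.g.: g++ grab_luma.cpp -lavformat -lavcodec -lavutil
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

int main(int argc, char **argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s <video>\n", argv[0]); return 1; }

    AVFormatContext *fmt = nullptr;
    if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) < 0) return 1;
    if (avformat_find_stream_info(fmt, nullptr) < 0) return 1;

    const int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    if (vidx < 0) return 1;

    const AVStream *st = fmt->streams[vidx];
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, st->codecpar);
    ctx->skip_frame = AVDISCARD_NONKEY;            // decode I-Frames only
    if (avcodec_open2(ctx, dec, nullptr) < 0) return 1;

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    std::vector<uint8_t> luma;                     // reused Y'-plane buffer in RAM

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vidx && avcodec_send_packet(ctx, pkt) >= 0) {
            while (avcodec_receive_frame(ctx, frame) >= 0) {
                // For planar YUV formats, data[0] is the Y' plane; rows are
                // linesize[0] bytes apart, which can be wider than the image,
                // so copy row by row into a tightly packed buffer.
                luma.resize(static_cast<size_t>(frame->width) * frame->height);
                for (int y = 0; y < frame->height; ++y)
                    std::memcpy(luma.data() + static_cast<size_t>(y) * frame->width,
                                frame->data[0] + static_cast<size_t>(y) * frame->linesize[0],
                                frame->width);
                // frame->pts (in st->time_base units) identifies the frame;
                // the perceptual hash of `luma` would be computed here.
                std::printf("keyframe pts=%lld %dx%d\n",
                            static_cast<long long>(frame->pts), frame->width, frame->height);
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}

Reusing a single std::vector keeps the Y' data in RAM and avoids per-frame allocations; the pts printed above (in the stream's time base) can be converted to a frame number if needed, though the exact mapping depends on the container and frame rate.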

-
Android Video color effect issue using FFMPEG [on hold]
8 June 2016, by umesh mishra. I'm facing one problem: when we use the library for effects that I have mentioned below, the video is created at only 1/3 of the actual video size. Please tell me what the issue is. It also doesn't work on Marshmallow.
https://github.com/krazykira/VidEffects/wiki/Permanent-video-effects -
What are all the command `options` to execute with?
25 January 2023, by Phil Lucks. I'd like to be able to compress the video in a way that helps improve upload times.


In reading the docs for FFmpeg Kit, using React Native, there is a basic execute command string of '-i file1.mp4 -c:v mpeg4 file2.mp4' ... I can guess at what some of this means, in terms of input and output file names, based on the FFmpeg docs; however, some of these options I am not sure of.

Like, why is there a -i flag prefix? Is this "input"?
Why is there -c:v? Is this "convert video"?
What if I want to reduce the frame rate, or change the size of the video?
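
For reference, a hedged guess at what such an option string could look like (the file names and values are placeholders, not taken from the FFmpeg Kit docs):

'-i input.mp4 -r 24 -vf scale=1280:-2 -c:v mpeg4 output.mp4'

Here -i does mark the input file, -c:v selects the video codec ("codec: video" rather than "convert video"), -r sets the output frame rate, and -vf scale=1280:-2 resizes to 1280 pixels wide while preserving the aspect ratio. The string passed to execute is essentially an ffmpeg command line without the leading ffmpeg, so the regular ffmpeg documentation applies to it.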

The TS def is just a string...


Is there a good place to understand how the options in the official docs map to these strings? I think