
Media (3)
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
Other articles (49)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name    Version name            Version number
Debian               Squeeze                 6.x.x
Debian               Wheezy                  7.x.x
Debian               Jessie                  8.x.x
Ubuntu               The Precise Pangolin    12.04 LTS
Ubuntu               The Trusty Tahr         14.04

If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...)
-
Selection of projects using MediaSPIP
2 May 2011
The examples below are representative of specific projects that use MediaSPIP.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hospitality activities, an Internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of its kind. Its members (...)
On other sites (7979)
-
What's the difference between crf and qp in ffmpeg?
12 November 2024, by Nova
I read https://trac.ffmpeg.org/wiki/Encode/H.264 about H.264 encoding and discovered qp.

Q1: What are the differences between crf and qp?

Q2: Is it better to use qp over crf in general, or is it only worthwhile when using qp 0 for lossless output?

Q3: Does qp have a known sensible setting, if it is the preferred option? So far, I know crf has a default value of 23 while 18 is a sensible choice for higher quality, although I don't understand why 18 isn't the default if it is the sensibly better setting.

Q4: Would changing either of them cause incompatibility with non-ffmpeg players, or would only qp do so?

I'm converting from webm to mp4.


I was going to test crf 23 and 18 and pick whichever looks best, but I can't seem to find any concrete information on this comparison, or about qp.
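
For reference, a minimal pair of commands one might compare side by side, assuming the libx264 encoder and placeholder file names:

ffmpeg -i input.webm -c:v libx264 -crf 18 output_crf.mp4
ffmpeg -i input.webm -c:v libx264 -qp 18 output_qp.mp4

Roughly speaking, -qp fixes the quantizer for every frame, while -crf lets the encoder's rate control vary the quantizer from frame to frame to hold a roughly constant perceived quality, which is why the wiki recommends crf for general use and mentions qp mainly for the -qp 0 lossless case.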

-
Processing yuv4mpeg by hand
15 April 2014, by user3534466
Theoretical question.
I have a named pipe (Windows) carrying uncompressed yuv4mpeg video and uncompressed PCM audio. I need to read this stream in my program and render it to a bitmap.
If I understood the yuv4mpeg description at http://wiki.multimedia.cx/index.php?title=YUV4MPEG2 correctly, there are plain YCbCr images after the header.
Is there a simple way to process and render this data with my own C++ code, without any libraries (ffmpeg)?
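
For illustration, here is a rough sketch of how such a stream could be parsed by hand in plain C++. It assumes a 4:2:0 stream and that the pipe carries only the yuv4mpeg video (the y4m format itself has no audio, so the PCM would have to arrive separately); the pipe name is a placeholder, and only one pixel is converted to RGB to keep the example short.

    #include <cstdint>
    #include <cstdio>
    #include <sstream>
    #include <string>
    #include <vector>

    // Reads one '\n'-terminated header line from the stream.
    static std::string read_line(std::FILE* f) {
        std::string line;
        int c;
        while ((c = std::fgetc(f)) != EOF && c != '\n')
            line.push_back(static_cast<char>(c));
        return line;
    }

    int main() {
        // Placeholder pipe name; a Windows named pipe can be opened like a file.
        std::FILE* f = std::fopen("\\\\.\\pipe\\video", "rb");
        if (!f) return 1;

        // Stream header: "YUV4MPEG2 W<width> H<height> F<rate> ...".
        std::istringstream header(read_line(f));
        std::string tag, token;
        header >> tag;                               // "YUV4MPEG2"
        int width = 0, height = 0;
        while (header >> token) {
            if (token[0] == 'W') width  = std::stoi(token.substr(1));
            if (token[0] == 'H') height = std::stoi(token.substr(1));
        }

        // Assuming 4:2:0: a full-size Y plane, then quarter-size Cb and Cr planes.
        const size_t y_size = static_cast<size_t>(width) * height;
        const size_t c_size = y_size / 4;
        std::vector<uint8_t> y(y_size), cb(c_size), cr(c_size);

        // Each frame is a "FRAME..." line followed by the three raw planes.
        while (read_line(f).compare(0, 5, "FRAME") == 0) {
            if (std::fread(y.data(),  1, y_size, f) != y_size) break;
            if (std::fread(cb.data(), 1, c_size, f) != c_size) break;
            if (std::fread(cr.data(), 1, c_size, f) != c_size) break;

            // Example: convert just the top-left pixel to RGB (BT.601, full range).
            const double Y = y[0], Cb = cb[0] - 128.0, Cr = cr[0] - 128.0;
            const int r = static_cast<int>(Y + 1.402 * Cr);
            const int g = static_cast<int>(Y - 0.344136 * Cb - 0.714136 * Cr);
            const int b = static_cast<int>(Y + 1.772 * Cb);
            std::printf("first pixel RGB: %d %d %d\n", r, g, b);
            // A real renderer would loop over every pixel (sampling Cb/Cr at
            // half resolution) and write the result into a bitmap buffer here.
        }

        std::fclose(f);
        return 0;
    }

In a real program the RGB values would also need clamping to 0-255, and if the source is limited-range (16-235) the conversion constants change slightly.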
-
Fastest way to extract raw Y' plane data from Y'CbCr encoded video?
20 February 2024, by memeko
I have a use-case where I'm extracting I-frames from videos and turning them into perceptual hashes for later analysis.

I'm currently using ffmpeg to do this, with a command akin to:

ffmpeg -skip_frame nokey -i 'in%~1.mkv' -vsync vfr -frame_pts true 'keyframes/_Y/out%~1/%%06d.bmp'

and then reading in the data from the resulting images.

This is a bit wasteful as, to my understanding, ffmpeg does an implicit YUV -> RGB colour-space conversion, and I'm also needlessly saving intermediate data to disk.

Most modern video codecs utilise chroma subsampling and have frames encoded in a Y'CbCr colour-space, where Y' is the luma component and Cb and Cr are the blue-difference and red-difference chroma components.


In something like YUV420p, as used by the H.264/H.265 video codecs, each frame is stored as a full-resolution plane of Y' samples followed by quarter-resolution Cb and Cr planes, where each Y' value is 8 bits long and corresponds to one pixel.
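(For example, a 1920x1080 yuv420p frame is 1920*1080 = 2,073,600 luma bytes plus two 960*540 = 518,400-byte chroma planes, i.e. width*height*3/2 = 3,110,400 bytes in total.)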


As I use grayscale data for generating the perceptual hashes anyway, I was wondering if there is a way to simply grab just the raw Y' values from any given I-frame into an array and skip all of the unnecessary conversions and extra steps (as the luma component is essentially equivalent to the grayscale data I need for generating the hashes)?


I came across the -vf 'extractplanes=y' filter in ffmpeg that seems like it might do just that, but according to a source:



"...what is extracted by 'extractplanes' is not raw data of the (for example) Y plane. Each extracted is converted to grayscale. That is, the converted video data has YUV (or RGB) which is different from the input."




which makes it seem like it's touching chroma components and doing some conversion anyway; in testing, applying this filter didn't affect the processing time of the I-frame extraction either.

My script is currently written in Python, but I am in the process of migrating it to C++, so I would prefer solutions pertaining to the latter.

ffmpeg seems like the ideal candidate for this task, but I am really looking for whatever solution ingests the data fastest, preferably going straight to RAM, as I'll be processing a large number of video files and discarding the I-frame luma pixel data once a hash has been generated.

I would also like to associate each I-frame with its corresponding frame number in the video.
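
For illustration, one way to skip the intermediate .bmp files is to have ffmpeg decode only the keyframes and write them as raw 8-bit grayscale frames to stdout, reading each frame straight into a buffer in RAM. The sketch below is only one possible approach, not a definitive answer: the file name in.mkv and the 1920x1080 dimensions are placeholders (they could be queried with ffprobe first), and popen is used for brevity (_popen/_pclose on Windows).

    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        // Hypothetical input and dimensions; query them with ffprobe in practice.
        const int width = 1920, height = 1080;
        const size_t frame_size = static_cast<size_t>(width) * height;

        // Decode keyframes only and emit one raw 8-bit gray image per I-frame on stdout.
        const std::string cmd =
            "ffmpeg -v error -skip_frame nokey -i in.mkv "
            "-vsync vfr -f rawvideo -pix_fmt gray -";

        std::FILE* pipe = popen(cmd.c_str(), "r");
        if (!pipe) return 1;

        std::vector<uint8_t> luma(frame_size);
        size_t index = 0;
        while (std::fread(luma.data(), 1, frame_size, pipe) == frame_size) {
            // 'luma' now holds the grayscale plane of one keyframe, entirely in RAM;
            // this is where the perceptual-hash routine would be called.
            std::printf("got keyframe %zu\n", index++);
        }

        pclose(pipe);
        return 0;
    }

Two caveats: -pix_fmt gray still goes through a (cheap) pixel-format conversion, so for bit-exact Y' data one could instead request the stream's native planar format (e.g. -pix_fmt yuv420p) and hash only the first width*height bytes of each width*height*3/2-byte frame; and this approach only yields a sequential keyframe index, so mapping back to the original frame numbers would still need a separate pass over the frame metadata (for example with ffprobe).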