Advanced search

Media (91)

Other articles (87)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the farm on a regular basis. Coupled with a system Cron on the farm's central site, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for indexing by search engines, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • MediaSPIP Player: potential problems

    22 February 2011, by

    The player does not work in Internet Explorer
    In Internet Explorer (8 and 7 at least), the plugin uses the Flash player flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
    If the configuration of that Apache module contains a line resembling the following, try removing it or commenting it out to see whether the player then works correctly: (...)

On other sites (12162)

  • Fastest way to extract raw Y' plane data from Y'Cb'Cr encoded video?

    20 February 2024, by memeko

    I have a use-case where I'm extracting I-Frames from videos and turning them into perceptual hashes for later analysis.

    I'm currently using ffmpeg to do this with a command akin to:

    ffmpeg -skip_frame nokey -i 'in%~1.mkv' -vsync vfr -frame_pts true 'keyframes/_Y/out%~1/%%06d.bmp'

    and then reading in the data from the resulting images.

    This is a bit wasteful as, to my understanding, ffmpeg does an implicit YUV -> RGB colour-space conversion, and I'm also needlessly saving intermediate data to disk.

    Most modern video codecs utilise chroma subsampling and have frames encoded in a Y'CbCr colour-space, where Y' is the luma component and Cb and Cr are the blue-difference and red-difference chroma components.

    In something like YUV420p, as used by the H.264/H.265 video codecs, a frame is laid out as follows:

    [image: single YUV420p encoded frame]

    Each Y' value is 8 bits long and corresponds to one pixel.
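
    For example (illustrative numbers only, ignoring any row padding/stride a real decoder may add), the plane sizes of an 8-bit YUV420p frame work out like this:

    #include <cstddef>
    #include <cstdio>

    int main() {
        // 8-bit YUV420p: full-resolution Y' plane, Cb and Cr subsampled 2x2.
        const std::size_t w = 1920, h = 1080;
        const std::size_t y_size  = w * h;             // one Y' byte per pixel
        const std::size_t cb_size = (w / 2) * (h / 2); // quarter-resolution Cb
        const std::size_t cr_size = (w / 2) * (h / 2); // quarter-resolution Cr
        std::printf("Y: %zu  Cb: %zu  Cr: %zu  total: %zu bytes\n",
                    y_size, cb_size, cr_size, y_size + cb_size + cr_size);
        return 0;
    }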

    As I use gray-scale data for generating the perceptual hashes anyway, I was wondering if there is a way to simply grab just the raw Y' values from any given I-Frame into an array and skip all of the unnecessary conversions and extra steps?

    (As the luma component is essentially equivalent to the grayscale data I need for generating the hashes.)

    I came across the -vf 'extractplanes=y' filter in ffmpeg, which seems like it might do just that, but according to one source:

    "...what is extracted by 'extractplanes' is not raw data of the (for example) Y plane. Each extracted is converted to grayscale. That is, the converted video data has YUV (or RGB) which is different from the input."

    which makes it seem like it is touching the chroma components and doing some conversion anyway. In testing, applying this filter didn't affect the processing time of the I-Frame extraction either.

    My script is currently written in Python, but I am in the process of migrating it to C++, so I would prefer any solutions pertaining to the latter.

    ffmpeg seems like the ideal candidate for this task, but I am really looking for whatever solution ingests the data fastest, preferably straight into RAM, as I'll be processing a large number of video files and discarding the I-Frame luma data once a hash has been generated.

    I would also like to associate each I-Frame with its corresponding frame number in the video.
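
    For reference, the kind of approach I have in mind is to use the libav* C API directly; a rough, untested sketch (assuming a recent FFmpeg 5+ API, with error handling and decoder flushing omitted) would look something like this:

    // Sketch: decode only keyframes and copy each frame's Y' plane into memory.
    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct LumaFrame {
        int64_t pts = 0;          // presentation timestamp in the stream time_base
        int width = 0, height = 0;
        std::vector<uint8_t> y;   // tightly packed Y' plane, width*height bytes
    };

    int main(int argc, char** argv) {
        if (argc < 2) return 1;

        AVFormatContext* fmt = nullptr;
        if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) < 0) return 1;
        if (avformat_find_stream_info(fmt, nullptr) < 0) return 1;

        const AVCodec* dec = nullptr;
        int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
        if (vstream < 0) return 1;

        // Drop everything except keyframes: the libav* equivalent of -skip_frame nokey.
        fmt->streams[vstream]->discard = AVDISCARD_NONKEY;

        AVCodecContext* ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, fmt->streams[vstream]->codecpar);
        ctx->skip_frame = AVDISCARD_NONKEY;
        if (avcodec_open2(ctx, dec, nullptr) < 0) return 1;

        AVPacket* pkt = av_packet_alloc();
        AVFrame* frame = av_frame_alloc();
        std::vector<LumaFrame> keyframes;

        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == vstream)
                avcodec_send_packet(ctx, pkt);
            av_packet_unref(pkt);

            while (avcodec_receive_frame(ctx, frame) == 0) {
                LumaFrame lf;
                lf.pts = frame->pts;   // convert via the stream time_base / frame rate
                                       // to recover a frame number if needed
                lf.width = frame->width;
                lf.height = frame->height;
                lf.y.reserve(static_cast<std::size_t>(frame->width) * frame->height);
                // data[0] / linesize[0] hold the Y' plane for planar YUV formats;
                // copy row by row to strip any stride padding.
                for (int row = 0; row < frame->height; ++row) {
                    const uint8_t* src = frame->data[0] + row * frame->linesize[0];
                    lf.y.insert(lf.y.end(), src, src + frame->width);
                }
                keyframes.push_back(std::move(lf));
                av_frame_unref(frame);
            }
        }

        std::printf("collected %zu keyframes\n", keyframes.size());
        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return 0;
    }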

  • Processing yuv4mpeg by hand

    15 April 2014, by user3534466

    Theoretical question.

    I have a named pipe (Windows) carrying uncompressed yuv4mpeg video and uncompressed PCM audio. I need to read this stream in my program and render it to a bitmap.

    If I have understood the description of yuv4mpeg (http://wiki.multimedia.cx/index.php?title=YUV4MPEG2) correctly, the stream is simply a sequence of YCbCr images after the header.

    Is there a simple way to process and render this data with my own C++ code, without any libraries (ffmpeg)?
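
    For instance, would a hand-rolled reader along these lines be a reasonable starting point? (Untested sketch; it assumes an 8-bit 4:2:0 stream, ignores the optional parameters after each FRAME marker, and leaves out the actual YCbCr-to-RGB conversion and bitmap rendering.)

    #include <cstddef>
    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    int main(int argc, char** argv) {
        if (argc < 2) return 1;
        // e.g. the path of the named pipe the stream is written to
        std::ifstream in(argv[1], std::ios::binary);

        // Stream header: "YUV4MPEG2 W<w> H<h> F<num>:<den> ..." terminated by '\n'.
        std::string header;
        if (!std::getline(in, header) || header.rfind("YUV4MPEG2", 0) != 0) return 1;

        int w = 0, h = 0;
        std::istringstream tags(header);
        std::string tag;
        while (tags >> tag) {
            if (tag[0] == 'W') w = std::stoi(tag.substr(1));
            else if (tag[0] == 'H') h = std::stoi(tag.substr(1));
        }
        if (w <= 0 || h <= 0) return 1;

        // 4:2:0 frame: w*h luma bytes followed by two (w/2)*(h/2) chroma planes.
        const std::size_t frame_bytes = static_cast<std::size_t>(w) * h * 3 / 2;
        std::vector<uint8_t> frame(frame_bytes);

        // Each frame is a "FRAME..." line followed by the raw planes.
        std::string frame_line;
        std::size_t count = 0;
        while (std::getline(in, frame_line) && frame_line.rfind("FRAME", 0) == 0) {
            if (!in.read(reinterpret_cast<char*>(frame.data()),
                         static_cast<std::streamsize>(frame.size()))) break;
            // frame now holds the planar YCbCr data; this is where it would be
            // converted to RGB and blitted to a bitmap.
            ++count;
        }
        std::cout << "read " << count << " frames of " << w << "x" << h << "\n";
        return 0;
    }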

  • What's the difference between crf and qp in ffmpeg?

    12 November 2024, by Nova

    I read https://trac.ffmpeg.org/wiki/Encode/H.264 about h264 encoding and discovered qp.

    Q1: What are the differences between crf and qp?
    Q2: Is it better to use qp than crf in general, or is qp only worth using at qp 0 for lossless output?
    Q3: If qp is preferred, does it have a known sensible setting? So far I know that crf defaults to 23 and that 18 is a sensible choice for higher quality, although I don't understand why 18 isn't the default if it is a sensibly better setting.
    Q4: Would changing either of them cause incompatibility with non-ffmpeg players, or would only qp?

    I'm converting from webm to mp4.

    I was going to test crf 23 and 18 and pick whichever looks best, but I can't seem to find any concrete information on this comparison, or about qp.
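
    For context, my current (possibly mistaken) understanding is that both are rate-control options of the same libx264 encoder: crf varies the quantiser per frame to hit a quality target, while qp pins it to a constant value. Sketched below through FFmpeg's libavcodec API (untested, illustrative values only):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>
    }

    int main() {
        const AVCodec* codec = avcodec_find_encoder_by_name("libx264");
        if (!codec) return 1;

        AVCodecContext* enc = avcodec_alloc_context3(codec);
        enc->width = 1280;
        enc->height = 720;
        enc->time_base = AVRational{1, 25};
        enc->pix_fmt = AV_PIX_FMT_YUV420P;

        // Constant Rate Factor: quality-targeted, the quantiser varies per frame.
        av_opt_set(enc->priv_data, "crf", "18", 0);

        // Alternative: a constant quantiser for every frame (use instead of crf).
        // av_opt_set(enc->priv_data, "qp", "18", 0);

        int ret = avcodec_open2(enc, codec, nullptr);
        // ... frames would then be fed with avcodec_send_frame()/avcodec_receive_packet() ...
        avcodec_free_context(&enc);
        return ret < 0 ? 1 : 0;
    }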