
Media (91)

Other articles (112)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes to your MediaSPIP, or news about the projects on your MediaSPIP, using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news item creation form.
    News item creation form. For a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    The user can access profile editing from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

On other sites (12708)

  • Fallback input for ffmpeg

    22 September 2018, by Daniel Cantarin

    I’m doing some transcoding from a third-party remote input stream that I do not control.

    This input stream has errors from time to time, which I would like to mitigate before sending the stream to my transcoding pipeline, thereby avoiding some possible problems in the output.

    I have several ideas regarding different problems. But the most basic scenario I would like to set up is as follows: when the stream is down, or it somehow loses some frames, I want to fill that video gap with a secondary input (a blank screen, for example).

    For this simple task, I would like to use ffmpeg. I know it can mix, let's say, an input stream with a fullscreen black static image. However, I have to deal with another condition: ffmpeg would run on the same infrastructure as the actual transcoding pipeline. That infrastructure must use its computing power for rendering the output, so whatever ffmpeg command I end up using should use the minimum possible computing power.

    My actual problem: if I use -vcodec copy in order to use minimal CPU, I can't alter the original stream. But if I alter the original stream (by mixing it with some other stream), the operation uses CPU.

    My question: is there a way to use -vcodec copy, but with a fallback input (instead of a mixed one) for when there are video gaps in the primary stream?
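
    For reference, the kind of mixing command I mean looks something like this (resolution, frame rate, and the input URL are placeholders): lavfi generates a black base layer, the live input is overlaid on top, and overlay's eof_action=pass shows the black base once the input stops.

    # black base layer from lavfi; the live input sits on top of it,
    # and eof_action=pass reveals the base once the input ends
    ffmpeg -f lavfi -i "color=c=black:s=1280x720:r=25" \
           -i "https://example.com/input_stream" \
           -filter_complex "[0:v][1:v]overlay=eof_action=pass" \
           -c:v libx264 -preset ultrafast output.mp4

    But this re-encodes (exactly the CPU cost I'm trying to avoid), it only covers the case where the input ends rather than mid-stream frame drops, and, since the lavfi source never ends, it needs a stop condition such as -t in practice.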

    Thanks in advance.

  • Android Video color effect issue using FFMPEG [on hold]

    8 June 2016, by umesh mishra

    I'm facing a problem: when we use the effects library mentioned below, the video is created at only 1/3 of the actual video's size. Please tell me what the issue is. It also doesn't work on Marshmallow.
    https://github.com/krazykira/VidEffects/wiki/Permanent-video-effects

  • Fastest way to extract raw Y' plane data from Y'CbCr-encoded video?

    20 February 2024, by memeko

    I have a use-case where I'm extracting I-Frames from videos and turning them into perceptual hashes for later analysis.

    I'm currently using ffmpeg to do this with a command akin to:

    ffmpeg -skip_frame nokey -i 'in%~1.mkv' -vsync vfr -frame_pts true 'keyframes/_Y/out%~1/%%06d.bmp'

    and then reading in the data from the resulting images.

    This is a bit wasteful as, to my understanding, ffmpeg does an implicit YUV -> RGB colour-space conversion, and I'm also needlessly saving intermediate data to disk.

    Most modern video codecs utilise chroma subsampling and encode frames in a Y'CbCr colour-space, where Y' is the luma component and Cb, Cr are the blue-difference and red-difference chroma components.

    In something like YUV420p, as used by the h.264/h.265 video codecs, a frame is laid out as follows:

    [image: single YUV420p encoded frame]

    Each Y' value is 8 bits long and corresponds to one pixel, so a 1920x1080 YUV420p frame carries a 2,073,600-byte Y plane followed by two 518,400-byte chroma planes.

    As I use grayscale data for generating the perceptual hashes anyway, I was wondering if there is a way to simply grab just the raw Y' values from any given I-Frame into an array and skip all of the unnecessary conversions and extra steps?

    (As the luma component is essentially equivalent to the grayscale data I need for generating the hashes.)

    I came across the -vf 'extractplanes=y' filter in ffmpeg, which seems like it might do just that, but according to a source:

    "...what is extracted by 'extractplanes' is not raw data of the (for example) Y plane. Each extracted is converted to grayscale. That is, the converted video data has YUV (or RGB) which is different from the input."

    This makes it seem like the filter touches the chroma components and does some conversion anyway; in testing, applying it didn't affect the processing time of the I-Frame extraction either.
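
    In principle, I could skip the BMP step entirely and have ffmpeg write the Y plane as raw bytes to stdout instead of images, something along these lines (file names are placeholders; this assumes an 8-bit source):

    # dump each keyframe's luma plane as raw bytes to stdout,
    # one contiguous width x height block of 8-bit samples per I-Frame
    ffmpeg -skip_frame nokey -i in.mkv -vsync vfr \
           -vf "extractplanes=y" \
           -f rawvideo -pix_fmt gray pipe:1

    Each I-Frame would then arrive on stdout as a block that can be read straight into memory with no intermediate files, though given the quote above I'm not sure this actually sidesteps the conversion work.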

    My script is currently written in Python, but I am in the process of migrating it to C++, so I would prefer any solutions pertaining to the latter.

    ffmpeg seems like the ideal candidate for this task, but I am really looking for whatever solution ingests the data fastest, preferably straight into RAM, as I'll be processing a large number of video files and discarding each I-Frame's luma pixel data once its hash has been generated.

    I would also like to associate each I-Frame with its corresponding frame number in the video.
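
    One option that might cover this without writing images is ffmpeg's showinfo filter, which prints a log line per frame (including pts and pts_time) to stderr; that log can be paired one-to-one with the raw frames read from stdout. A sketch along those lines (file names are placeholders):

    # raw luma frames go to stdout; per-frame metadata from showinfo
    # (n, pts, pts_time, ...) goes to stderr for later pairing
    ffmpeg -skip_frame nokey -i in.mkv -vsync vfr \
           -vf "extractplanes=y,showinfo" \
           -f rawvideo -pix_fmt gray pipe:1 2> keyframe_info.log

    Note that showinfo's n field counts frames reaching the filter (here, the decoded keyframes), so the pts/pts_time values are what tie each hash back to a position in the source video.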