Advanced search

Media (91)

Other articles (87)

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared hosting farm on a regular basis. Combined with a system Cron on the central site of the farm, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)
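
    As a rough sketch (the URL and the use of curl are my assumptions, not taken from the article), the system Cron entry on the central site could look like this:

     # Hypothetical crontab entry: hit the central site every minute so that
     # the super Cron task (gestion_mutu_super_cron) wakes the other instances.
     * * * * * curl -s http://central-site.example.org/ > /dev/null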

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to MP4, Ogv and WebM (supported by HTML5), with MP4 also supported by Flash.
    Audio files are encoded to MP3 and Ogg (supported by HTML5), with MP3 also supported by Flash.
    Where possible, text is analyzed to retrieve the data needed by search engines, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
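
    MediaSPIP performs these conversions internally; as a rough sketch (assuming FFmpeg as the conversion tool, with hypothetical file names), roughly equivalent command lines would be:

     # Hypothetical FFmpeg equivalents of the conversions described above
     ffmpeg -i input.mov -c:v libvpx -c:a libvorbis output.webm     # WebM (HTML5)
     ffmpeg -i input.mov -c:v libtheora -c:a libvorbis output.ogv   # Ogv (HTML5)
     ffmpeg -i input.mov -c:v libx264 -c:a aac output.mp4           # MP4 (Flash)
     ffmpeg -i input.wav -c:a libvorbis output.ogg                  # Ogg audio (HTML5)
     ffmpeg -i input.wav -c:a libmp3lame output.mp3                 # MP3 (Flash)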

  • MediaSPIP Player: potential problems

    22 February 2011, by

    The player does not work on Internet Explorer
    On Internet Explorer (8 and 7 at least), the plugin uses the Flash player flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate.
    If the configuration of this Apache module contains a line that looks like the following, try removing it or commenting it out to see whether the player then works correctly: (...)
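
    The article's example line is cut off above; a typical mod_deflate directive of the kind it refers to (my assumption, not the article's exact line) would be:

     # Hypothetical example: commenting this out disables on-the-fly
     # compression for the listed MIME types.
     AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript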

On other sites (11684)

  • How do video encoding standards (like H.264) then serialize motion prediction?

    12 August 2019, by Nephilim

    Brute-force motion prediction algorithms work, in a nutshell, like this (if I'm not mistaken):

    1. Search every possible macroblock in the search window.
    2. Compare each of them with the reference macroblock.
    3. Take the one that is the most similar and encode the DIFFERENCE between the frames instead of the actual frame.
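
    As a rough sketch of that brute-force search (hypothetical C with fixed 16x16 luma blocks and SAD as the similarity measure; real encoders are far more sophisticated):

     #include <limits.h>

     /* Sum of absolute differences between two 16x16 luma blocks.
        `stride` is the width in bytes of one full frame row. */
     static long block_sad(const unsigned char *cur, const unsigned char *ref,
                           int stride)
     {
         long sad = 0;
         for (int y = 0; y < 16; y++)
             for (int x = 0; x < 16; x++) {
                 int d = cur[y * stride + x] - ref[y * stride + x];
                 sad += d < 0 ? -d : d;
             }
         return sad;
     }

     /* Exhaustive search in a +/-`range` pixel window around the block's own
        position; frame-boundary checks are omitted for brevity. */
     static void full_search(const unsigned char *cur, const unsigned char *ref,
                             int stride, int range, int *best_dx, int *best_dy)
     {
         long best = LONG_MAX;
         for (int dy = -range; dy <= range; dy++)
             for (int dx = -range; dx <= range; dx++) {
                 long sad = block_sad(cur, ref + dy * stride + dx, stride);
                 if (sad < best) { best = sad; *best_dx = dx; *best_dy = dy; }
             }
     }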

    Now this in theory makes sense to me. But when it comes to the actual serializing, I'm lost. We've found the most similar block. We know where it is, and from that we can calculate its distance vector. Let's say it's about 64 pixels to the right.

    Basically, when serializing this block, we do the following (see the sketch after this list):
    • Ignore everything but luminosity (encode only Y; I think I saw this somewhere?) and take note of the difference between it and the reference block
    • Encode the motion as a distance vector
    • Encode the MSE, so we can reconstruct it
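
    As a toy illustration of those three items (hypothetical C; this is not any codec's actual bitstream format):

     typedef struct {
         int    mv_x, mv_y;        /* motion (distance) vector */
         int    residual[16][16];  /* Y-plane difference block */
         double mse;               /* the similarity score that was minimised */
     } ToyMotionRecord;

     static void fill_record(const unsigned char cur[16][16],
                             const unsigned char ref[16][16],
                             int mv_x, int mv_y, ToyMotionRecord *out)
     {
         long sq = 0;
         out->mv_x = mv_x;
         out->mv_y = mv_y;
         for (int y = 0; y < 16; y++)
             for (int x = 0; x < 16; x++) {
                 int d = (int)cur[y][x] - (int)ref[y][x];
                 out->residual[y][x] = d;   /* store the difference, not the pixel */
                 sq += (long)d * d;
             }
         out->mse = (double)sq / 256.0;     /* 16 * 16 samples */
     }

    The compression itself happens after this step: because the residual is mostly near-zero, a transform, quantization and entropy coding squeeze it into far fewer bits than raw pixels; chroma (UV) is typically subsampled and predicted the same way rather than thrown away.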

    Is the output of this a simple 2D array of luminosity values, with an appended/prepended MSE value and distance vector? Where is the compression in this? Do we get to take out the UV component? There seem to be many resources that cover video encoders at a surface level, but it's very hard to find actual in-depth explanations of modern video encoders. Feel free to correct me on my above statements.

  • "Format" or style the output of showwaves/showwavespic in ffmpeg

    9 April 2020, by flomei

    I'm trying to wrap my head around ffmpeg and its functions and filters.

    showwaves and showwavespic already create nice output, but I'm looking to style it even more. Lots of audio players, for example, create a "waveform" like the following, which would be a job for showwavespic, I think. (I think SoundCloud, for example, creates a form like this from actual data.)

    [image: pretty waveform]

    I wonder if I can use ffmpeg to create something like this directly from my raw input data. I thought I might need to split my audio track into X parts, calculate the average distance from the Y axis for each part, and then draw a bar. But I'm not sure whether I can manage that with ffmpeg or whether I need to build more of a toolchain for it.

    If I could make the output of showwaves look like the one above, that would be great. On the other hand, I'd already be happy if I could just increase the stroke width of the showwaves output.

    I didn't find anything about this in the documentation, or I looked in the wrong places, because I don't yet get the big picture of ffmpeg.
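
    A minimal sketch of the kind of styling showwavespic and showwaves allow through their built-in options (file names are hypothetical):

     # Single-image waveform with a custom size and colour
     ffmpeg -i input.mp3 -filter_complex "showwavespic=s=1024x200:colors=#3399ff" -frames:v 1 waveform.png

     # Animated waveform; mode=cline draws filled, centered lines,
     # which look bolder than the default mode
     ffmpeg -i input.mp3 -filter_complex "showwaves=s=1024x200:mode=cline" output.mp4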

  • FFDEC_H264 dropping non-key frames

    15 April 2013, by Kranti

    I am working on a sample GStreamer application to play MPEG2-TS video.

    My pipeline is:

    appsrc ! h264parse ! ffdec_h264 ! ffmpegcolorspace ! ximagesink

    If I pump the data without setting any timestamp, all the frames get played:

    videoBuffer = gst_app_buffer_new (rawVideo, bufSize, test_free_video, rawVideo);

    But if I set the timestamp on the buffer, only I-frames get played:

    videoBuffer = gst_app_buffer_new (rawVideo, bufSize, test_free_video, rawVideo);
    GST_BUFFER_TIMESTAMP(videoBuffer)  = calc_timestamp(rawVideo);

    calc_timestamp() is a function that calculates the timestamp from the PES header info.
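
    (For reference, a hypothetical sketch of such a conversion, not the poster's actual code; a unit mismatch here is a common pitfall, since PES PTS values tick at 90 kHz while GStreamer buffer timestamps are in nanoseconds.)

     #include <gst/gst.h>

     /* Hypothetical: convert a 90 kHz PES PTS into the nanosecond
        timestamps that GStreamer buffers expect. */
     static GstClockTime pts_to_gst_time (guint64 pes_pts)
     {
         return gst_util_uint64_scale (pes_pts, GST_SECOND, 90000);
     }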

    From the GST logs:

    Dropping non-keyframe (seek/init)
    Dropping non-keyframe (seek/init)
    Dropping non-keyframe (seek/init)

    The above logs keep repeating. I have no clue why this is happening. Any input would be appreciated.

    Thanks in advance,
    Kranti