
Other articles (58)

  • Customizing categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Texte
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide
    It is also in this configuration section that you can specify the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (8635)

  • libavutil: Use an intermediate variable in AV_COPY*U

    28 July 2016, by Martin Storsjö

    If AV_RN and AV_WN are macros with multiple individual reads and
    writes, the previous version of the AV_COPYU macro would fail if
    the reads and writes overlap.

    This should not be any less efficient in any case, given a
    sensibly optimizing compiler.

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DBH] libavutil/intreadwrite.h
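
    A minimal C sketch of the failure mode and of the fix (simplified stand-ins, not the actual intreadwrite.h macros): if the unaligned read/write helpers are each composed of two 32-bit accesses, nesting the read inside the write re-evaluates the source after part of the destination has already been written, which corrupts overlapping copies; reading into an intermediate variable first avoids that.

    #include <stdint.h>
    #include <string.h>

    /* Stand-in unaligned helpers (assumption: built from two 32-bit halves,
     * as they can be on targets without fast unaligned access). */
    static inline uint32_t rn32(const void *p) { uint32_t v; memcpy(&v, p, 4); return v; }
    static inline void     wn32(void *p, uint32_t v) { memcpy(p, &v, 4); }

    #define RN64(p)    ((uint64_t)rn32((const uint8_t *)(p) + 4) << 32 | rn32(p))
    #define WN64(p, v) do { wn32((p), (uint32_t)(v));                              \
                            wn32((uint8_t *)(p) + 4, (uint32_t)((v) >> 32)); } while (0)

    /* Old pattern: RN64(s) is evaluated again inside the second wn32 call,
     * after the first wn32 may already have overwritten part of s. */
    #define COPY64U_OLD(d, s) WN64(d, RN64(s))

    /* Fixed pattern: read everything into an intermediate variable first. */
    #define COPY64U_NEW(d, s) do {           \
            uint64_t tmp_ = RN64(s);         \
            WN64(d, tmp_);                   \
        } while (0)

    With overlapping arguments (for example d = s + 1), COPY64U_OLD re-reads bytes that were already overwritten and corrupts the second half of the copy, while COPY64U_NEW produces the expected result.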
  • FFMPEG: How to extract a PNG sequence from a video, remove duplicate frames in the process and keep the original frame number?

    16 May 2020, by Simon

    I have a recording of an old game which has a variable framerate. Since I want to process individual frames to upscale and modernize the footage, I would like to avoid any duplicate frames. I know that I can use this command to extract all frames from a video:

    ffmpeg -i input.mov -r 60/1 out%04d.png

    And I know that I can remove duplicate frames using this command:

    ffmpeg -i input.mov -vf mpdecimate,setpts=N/FRAME_RATE/TB output.mov

    However, the above command removes the duplicate frames and packs the remaining frames next to each other, whereas in order to keep a timecode of sorts it would be a lot more useful to be able to extract PNGs named by their original frame number (the video is progressive 60 fps) but without all of the duplicates.

    So, the question is: what if I want to extract PNG files BUT maintain the original corresponding frame number within the sequence? So, if we have a video with 10 frames and frames 2-8 are duplicates, it spits out 1.png, 2.png, 9.png and 10.png? How do I combine both bits of code listed above?

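    One commonly suggested way to combine the two (an untested sketch, not from the original post; check it against your FFmpeg version) is to let mpdecimate drop the duplicates, keep variable-frame-rate output, and use the image2 muxer's frame_pts option so each PNG is named after its packet timestamp rather than a consecutive counter:

    ffmpeg -i input.mov -vf mpdecimate -vsync vfr -frame_pts 1 out%06d.png

    The resulting numbers reflect each surviving frame's position on the original timeline; whether they equal the exact original frame indices depends on the output timebase, so for a constant 60 fps source they may still need to be rescaled to frame units.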

  • Greed is Good; Greed Works

    25 November 2010, by Multimedia Mike — VP8

    Greed, for lack of a better word, is good; Greed works. Well, most of the time. Maybe.

    Picking Prediction Modes
    VP8 uses one of 4 prediction modes to predict a 16x16 luma block or 8x8 chroma block before processing it (for luma, a block can also be broken into 16 4x4 blocks for individual prediction using even more modes).

    So, how to pick the best predictor mode? I had no idea when I started writing my VP8 encoder. I did not read any literature on the matter; I just sat down and thought of a brute-force approach. According to the comments in my code:

    // naive, greedy algorithm:
    //   residual = source - predictor
    //   mean = mean(residual)
    //   residual -= mean
    //   find the max diff between the mean and the residual
    // the thinking is that, post-prediction, the best block will
    // be comprised of similar samples
    

    After removing the predictor from the macroblock, individual 4x4 subblocks are put through a forward DCT and quantized. Optimal compression in this scenario results when all samples are the same, since only the DC coefficient will be non-zero. Failing that, when the input samples are at least similar to each other, few of the AC coefficients will be non-zero, which helps compression. When the samples are all over the scale, there are a whole lot of non-zero coefficients unless you crank up the quantizer, which results in poor quality in the reconstructed subblocks.

    Thus, my goal was to pick a prediction mode that, when applied to the input block, resulted in a residual in which each element would feature the least deviation from the mean of the residual (relative to other prediction choices).
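
    As a rough illustration of that criterion (a hypothetical C sketch, not the author's actual encoder code), the score for one candidate 16x16 predictor could be computed like this, and the mode with the smallest score would be selected:

    #include <stdlib.h>

    /* Hypothetical sketch of the greedy scoring described above: subtract the
     * predictor from the source block, compute the mean of the residual, and
     * return the largest absolute deviation from that mean. Assumes source and
     * predictor share the same stride. */
    static int block_score(const unsigned char *src, const unsigned char *pred,
                           int stride, int size)
    {
        int residual[16 * 16];
        int i, j, n = size * size;
        int sum = 0, mean, max_dev = 0;

        for (j = 0; j < size; j++)
            for (i = 0; i < size; i++) {
                int r = src[j * stride + i] - pred[j * stride + i];
                residual[j * size + i] = r;
                sum += r;
            }

        mean = sum / n;

        for (i = 0; i < n; i++) {
            int dev = abs(residual[i] - mean);
            if (dev > max_dev)
                max_dev = dev;
        }
        return max_dev;
    }

    The encoder would then evaluate this score for the DC, vertical, horizontal and TrueMotion predictions of a block and keep the mode with the lowest value.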

    Greedy Approach
    I realized that this algorithm falls into the broad general category of "greedy" algorithms — one that makes locally optimal decisions at each stage. There are most likely smarter algorithms. But this one was good enough for making an encoder that just barely works.

    Compression Results
    I checked the total file compression size on my usual 640x360 Big Buck Bunny logo image while forcing prediction modes vs. using my greedy prediction picking algorithm. In this very simple test, DC-only actually resulted in slightly better compression than the greedy algorithm (which says nothing about overall quality).

    prediction mode    quantizer index = 0 (minimum)    quantizer index = 10
    greedy             286260                           98028
    DC                 280593                           95378
    vertical           297206                           105316
    horizontal         295357                           104185
    TrueMotion         311660                           113480

    As another data point, in both quantizer cases, my greedy algorithm selected a healthy mix of prediction modes:

    • quantizer index 0: DC = 521, VERT = 151, HORIZ = 183, TM = 65
    • quantizer index 10: DC = 486, VERT = 167, HORIZ = 190, TM = 77

    Size vs. Quality
    Again, note that this ad-hoc test only measures one property (a highly objective one) — compression size. It did not account for quality, which is a far more controversial topic that I have yet to wade into.