
Media (1)
-
Somos millones 1
21 July 2014, by
Updated: June 2015
Language: French
Type: Video
Other articles (56)
-
Support for all media types
10 April 2011
Unlike many modern software packages and document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other data (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Changing the graphic theme
22 February 2011, by
The graphic theme does not touch the actual layout of the elements on the page. It only modifies the appearance of the elements.
The placement can indeed be modified, but such a modification is only visual and does not change the semantic representation of the page.
Changing the graphic theme in use
To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
Then simply go to the configuration area of the (...)
On other sites (9806)
-
ffmpeg - is it possible to increase a clip duration?
18 November 2011, by dpigera
I currently have a jpeg file which I converted to an flv using the following command:
ffmpeg -r 10 -b 180000 -i test.jpg test.mp4
Now, I want to increase the duration of this .mp4 clip, so the picture stays on the screen for more than a split second. Eventually, I hope to merge a stream of these files to create a slide show out of jpeg files.
Does anyone know how to increase the duration of a clip in ffmpeg?
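For what it's worth, one common approach is to loop the still image and cap the output duration; a hedged sketch, assuming a reasonably recent ffmpeg build where the image2 demuxer's -loop 1 option repeats the input and -t sets the length in seconds:
ffmpeg -loop 1 -i test.jpg -r 10 -t 30 test.mp4
Each slide can then be rendered with its own -t value and the resulting clips concatenated into the slide show.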
-
The Big VP8 Debug
20 November 2010, by Multimedia Mike — VP8
I hope my previous walkthrough of the VP8 4x4 intra coding process was educational. Today, I'll be walking through an example of what happens when my toy VP8 encoder encodes an intra 16x16 block. This may prove educational to those who have never been exposed to the deep details of this or related algorithms. Also, I wanted to illustrate where I think my VP8 encoder process is going bad and generating such grotesque results.
Before I start, let me give a shout-out to Google Docs’ Drawing tool which I used to generate these diagrams. It works quite well.
Results
(Always cut to the chase in a blog post; results first.) I'm glad I composed this post. In the course of doing so, I found the problem, fixed it, and am now able to present this image that was decoded from the bitstream encoded by my toy, now working, VP8 encoder:
Yeah, I know that image doesn’t look like anything you haven’t seen before. The difference is that it has made a successful trip through my VP8 encoder.
Follow along through the encoding process and learn of the mistake...
Original Block and Subblocks
Here is the 16x16 block to be encoded:
The block is broken down into 16 4x4 subblocks for further encoding:
Prediction
The first step is to pick a prediction mode, generate a prediction block, and subtract the predictors from the macroblock. In this case, we will use DC prediction, which means the predictor will be the same for each element. In 4x4 VP8 DC intra prediction, samples outside of the frame are assumed to be 128. It's a little different in 16x16 DC intra prediction— samples above the top row are assumed to be 127 while samples left of the leftmost column are assumed to be 129. For the top left macroblock, this still works out to 128.
Subtract 128 from each of the samples:
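To make the arithmetic concrete, here is a minimal C sketch of the idea (my own function names, not libvpx code): average the 16 samples above and the 16 samples to the left, with the stated out-of-frame assumptions, then subtract the flat predictor from the whole macroblock.

#include <stdint.h>

/* Sketch of 16x16 DC intra prediction as described above.  Out-of-frame
 * samples are assumed to be 127 (above) and 129 (left), so for the
 * top-left macroblock: (16*127 + 16*129 + 16) >> 5 == 128. */
static uint8_t dc_predict_16x16(const uint8_t above[16], const uint8_t left[16])
{
    int sum = 16;                       /* rounding term, half of 32 */
    for (int i = 0; i < 16; i++)
        sum += above[i] + left[i];
    return (uint8_t)(sum >> 5);         /* average of the 32 samples */
}

/* Subtract the flat predictor from every sample of the macroblock. */
static void subtract_predictor(const uint8_t *src, int stride,
                               int16_t residual[16][16], uint8_t dc)
{
    for (int row = 0; row < 16; row++)
        for (int col = 0; col < 16; col++)
            residual[row][col] = (int16_t)src[row * stride + col] - dc;
}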
Forward Transform
Run each of the 16 prediction-removed subblocks through the forward transform. This example uses the forward transform from libvpx 0.9.5:
I have highlighted the DC coefficients in each subblock. That’s because those receive special consideration in 16x16 intra coding.
Quantization
The Y plane AC quantizer is 4 in this example, the minimum allowed. (The Y plane DC quantizer is also 4 but doesn't come into play for intra 16x16 coding since the DC coefficients follow a different process.) Thus, quantize (integer divide) each AC element in each subblock (we'll ignore the DC coefficient for this part):
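In code, this step is as simple as it sounds; a minimal sketch (my naming, using the plain integer division the post describes, whereas production encoders typically use reciprocal multiplies to the same effect):

#include <stdint.h>

/* Quantize the 15 AC coefficients of one 4x4 subblock by integer
 * division (C division truncates toward zero).  Index 0, the DC
 * coefficient, is skipped: in intra 16x16 coding the DCs are gathered
 * into the Y2 block and handled separately. */
static void quantize_ac(int16_t coeffs[16], int ac_q)   /* ac_q == 4 here */
{
    for (int i = 1; i < 16; i++)
        coeffs[i] /= ac_q;
}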
The Y2 Round Trip
Those highlighted DC coefficients from each of the 16 subblocks comprise the Y2 block. This block is transformed with a slightly different algorithm called the Walsh-Hadamard Transform (WHT). The results of this transform are then quantized (using 8 for both Y2 DC and AC in this example, as those are the smallest Y2 quantizers that VP8 allows), then zigzagged and entropy-coded along with the rest of the macroblock coefficients.
On the decoder side, the Y2 coefficients are decoded, de-zigzagged, dequantized and run through the inverse WHT.
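To illustrate what such a round trip looks like, here is a didactic C sketch. To be clear, this is not the VP8 spec's WHT or the libvpx code (the real transform folds in different scaling), but this textbook variant has the same rows-then-columns butterfly structure, and its matrix is symmetric, so applying it twice returns the input scaled by 16.

/* Didactic 4x4 Walsh-Hadamard sketch, NOT the VP8/libvpx version.
 * The 1D butterfly implements a symmetric Hadamard matrix. */
static void wht1d(const int in[4], int out[4])
{
    int a = in[0] + in[3], d = in[0] - in[3];
    int b = in[1] + in[2], c = in[1] - in[2];
    out[0] = a + b;   /* +1 +1 +1 +1 */
    out[1] = d + c;   /* +1 +1 -1 -1 */
    out[2] = a - b;   /* +1 -1 -1 +1 */
    out[3] = d - c;   /* +1 -1 +1 -1 */
}

static void wht4x4(const int in[4][4], int out[4][4])
{
    int tmp[4][4], col[4], res[4];

    for (int r = 0; r < 4; r++)        /* transform the rows... */
        wht1d(in[r], tmp[r]);

    for (int c = 0; c < 4; c++) {      /* ...then the columns */
        for (int r = 0; r < 4; r++) col[r] = tmp[r][c];
        wht1d(col, res);
        for (int r = 0; r < 4; r++) out[r][c] = res[r];
    }
}

/* Round trip: after wht4x4(x, y) and wht4x4(y, z), z[r][c] == 16 * x[r][c],
 * so the inverse is the same routine followed by >> 4.  In the codec, any
 * further round-trip error should come only from quantization. */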
And this is where I suspect that most of the error is creeping into my VP8 encoder. Observe the round-trip through the Y2 process:
As intimated, this part causes me consternation due to the wide discrepancy between the original and the reconstructed Y2 blocks. Observe the absolute difference between the 2 vectors:
That’s really significant and leads me to believe that this is where the big problem is.
What's Wrong?
My first suspicion is that the quantization is throwing off the process. I was disabused of this idea when I removed quantization from the equation and immediately reversed the transform:
So perhaps there is a problem with the forward WHT. Just like with the usual subblock transform, the VP8 spec doesn't define how to perform the forward WHT, only the inverse WHT. Do I need to audition different forward WHTs from various versions of libvpx, similar to what I did with the other transform? That doesn't make a lot of sense— libvpx doesn't seem to have so much trouble with basic encoding.
The Punchline
I reviewed the forward WHT code, the stuff that I plagiarized from libvpx 0.9.0. The function takes, among other parameters, a pitch value. There are 2 loops in the code. The first iterates through the rows of the input matrix— which I assumed was a 4x4 matrix. I was puzzled that during each iteration of the row loop, the input pointer was only being advanced by (pitch/2). I removed the division by 2 and the problem went away. I.e., the encoded image looks correct.

What's up with the (pitch/2), anyway? It seems that the encoder likes to pack 2 4x4 subblocks into an 8x4 block data structure. In fact, the forward DCTs in the libvpx encoder have the same artifact. Remember how I surveyed several variations of forward DCT from different versions of libvpx? The one that proved most accurate in that test was the one I had already modified to advance the input pointer properly. Fixing the other 2 candidates yields similar results:

input:       92 91 89 86 91 90 88 86 89 89 89 88 89 87 88 93
short 0.9.0: -311 6 2 0 0 11 -6 1 2 -3 3 0 0 0 -2 1
inverse:     92 91 89 86 91 90 88 87 90 89 89 88 89 87 88 93
fast 0.9.0:  -313 5 1 0 1 11 -6 1 3 -3 4 0 0 0 -2 1
inverse:     91 91 89 86 90 90 88 86 89 89 89 88 89 87 88 93
short 0.9.5: -312 7 1 0 1 12 -5 2 2 -3 3 -1 1 0 -2 1
inverse:     92 91 89 86 91 90 88 86 89 89 89 88 89 87 88 93
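To spell out the convention that tripped me up, a sketch in my own words and code (not the libvpx source):

#include <stdint.h>

/* libvpx packs two 4x4 subblocks side by side in an 8x4 array of
 * int16_t and passes pitch in BYTES: 8 coefficients = 16 bytes per
 * packed row, so the element stride is pitch/2 == 8. */
static void walk_subblock_rows(int16_t *input, int pitch /* in bytes */)
{
    int16_t *ip = input;
    for (int row = 0; row < 4; row++) {
        /* ... transform one row: ip[0], ip[1], ip[2], ip[3] ... */
        ip += pitch / 2;   /* pitch bytes == pitch/2 int16_t elements */
    }
}

My buffer was a plain, contiguous 4x4 matrix and I was passing the element stride directly, so the extra division by 2 made every row after the first read from the wrong place; exactly the kind of corruption the Y2 round trip exposed.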
Code cribber beware!
Corrected Y2 Round Trip
Let's look at that Y2 round trip one more time:
And another look at the error between the original and the reconstruction:
Better.
Dequantization, Prediction, Inverse Transforms, and Reconstruction
To be honest, now that I solved the major problem, I'm getting a little tired of making these pictures. Long story short, all elements of the original 16 subblocks are dequantized and their DC coefficients are filled in with the appropriate item from the reconstructed Y2 block. A base predictor block is generated (all 128 values in this case). And each Y block is run through the inverse transform and added to the predictor block. The following is the reconstruction:
And if you compare that against the original luma macroblock (I don’t feel like doing it right now), you’ll find that it’s pretty close.
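In rough code form, the per-subblock step reads something like the sketch below; the names are mine, and inverse_transform_4x4 is a stand-in for the spec's inverse DCT, which I won't reproduce here.

#include <stdint.h>

/* Assumed helper, not shown: the VP8 inverse 4x4 transform. */
static void inverse_transform_4x4(const int16_t coeffs[16], int16_t pixels[4][4]);

static uint8_t clamp255(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Rebuild one 4x4 subblock: the DC term comes from the reconstructed
 * Y2 block, the ACs are dequantized by plain multiplication, and the
 * flat predictor (128 here) is added back after the inverse transform. */
static void reconstruct_subblock(const int16_t qcoeffs[16], int ac_q,
                                 int16_t y2_dc, int predictor,
                                 uint8_t out[4][4])
{
    int16_t coeffs[16];
    int16_t pixels[4][4];

    coeffs[0] = y2_dc;
    for (int i = 1; i < 16; i++)
        coeffs[i] = (int16_t)(qcoeffs[i] * ac_q);

    inverse_transform_4x4(coeffs, pixels);

    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            out[r][c] = clamp255(pixels[r][c] + predictor);
}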
I can’t believe how close I was all this time, and how long that pitch bug held me up.
-
Stop doing this in your encoder comparisons
14 June 2010, by Dark Shikari — Uncategorized
I'll do a more detailed post later on how to properly compare encoders, but lately I've seen a lot of people doing something in particular that demonstrates they have no idea what they're doing.
PSNR is not a very good metric. But it's useful for one thing: if every encoder optimizes for it, you can effectively measure how good those encoders are at optimizing for PSNR. Certainly this doesn't tell you everything you want to know, but it can give you a good approximation of “how good the encoder is at optimizing for SOMETHING”. The hope is that this is decently close to the visual results. This of course can fail to be the case if one encoder has psy optimizations and the other does not.
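(For reference, PSNR on 8-bit video is just 10*log10(255^2 / MSE): a pure function of the mean squared error between source and reconstruction, which is exactly what makes it so mechanical to optimize for.)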
But it only works to begin with if both encoders are optimized for PSNR. If one optimizes for, say, SSIM, and one optimizes for PSNR, comparing PSNR numbers is completely meaningless. If anything, it’s worse than meaningless — it will bias enormously towards the encoder that is tuned towards PSNR, for obvious reasons.
And yet people keep doing this.
They keep comparing x264 against other encoders which are tuned against PSNR. But they don't tell x264 to also tune for PSNR (--tune psnr, it's not hard!), and surprise surprise, x264 loses. Of course, these people never bother to actually look at the output; if they did, they'd notice that x264 usually looks quite a bit better despite having lower PSNR.
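A fair head-to-head therefore looks something like this (a sketch; the file names are placeholders):
x264 --tune psnr -o output.264 input.y4m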
This happens so often that I suspect this is largely being done intentionally in order to cheat in encoder comparisons. Or perhaps it’s because tons of people who know absolutely nothing about video coding insist on doing comparisons without checking their methodology. Whatever it is, it clearly demonstrates that the person doing the test doesn’t understand what PSNR is or why it is used.
Another victim of this is Theora Ptalarbvorm, which optimizes for SSIM at the expense of PSNR — an absolutely great decision for visual quality. And of course if you just blindly compare Ptalarbvorm (1.2) and Thusnelda (1.1), you'll notice Ptalarbvorm has much lower PSNR! Clearly, it must be a worse encoder, right?
Stop doing this. And call out the people who insist on cheating.