
Media (2)
-
Granite de l’Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
Other articles (70)
-
List of compatible distributions
26 April 2011. The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04
If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the fixes needed to add (...)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Multilang: improving the interface for multilingual blocks
18 February 2011. Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. No separate configuration step is therefore required.
On other sites (8245)
-
Construct fictitious P-frames from just I-frames [closed]
25 July 2024, by nilgirian
Some context: I recently saw this video, https://youtu.be/zXTpASSd9xE?si=5alGvZ_e13w0Ahmb, a continuous zoom into a fractal.


I've been thinking a lot about how they created this video 9 years ago. The problem is that these frames were mathematically intensive to compute back then, and they are still fairly hard to compute today.


He states in the video that it took him 33 hours to generate one keyframe.


I was wondering how I would replicate that work. I know that by brute force I can generate the individual image files (essentially, each image would be an I-frame) and then ask ffmpeg to compress them into an MP4, where it will convert most of those images into P-frames. But if I did it that way, I calculated it would take me 6.5 years to render that 9-minute video (at 30 fps, on hardware from 9 years ago).
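For concreteness, a minimal sketch of that brute-force route (the filenames are hypothetical; the post doesn't give an actual command). The -g 30 option asks x264 to place an I-frame every 30 frames and encode the frames in between as predicted frames, which is exactly the I-frame/P-frame split described above:

ffmpeg -framerate 30 -i img_%06d.png -c:v libx264 -g 30 -pix_fmt yuv420p zoom.mp4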


So I imagine he generated only I-frames to cut down on time, and then somehow created fictitious P-frames in between. Given that consecutive frames are similar (you're just zooming in), this seems like it should be doable. If he generated only one I-frame per second of video (at 30 fps), the work could be cut down to just 82 days.


So if I only want to generate the images that will be used as I-frames, can ffmpeg or some other program automatically make a best guess and generate the fictitious P-frames for me?
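The closest thing I know of in ffmpeg itself (a suggestion, not something from the post) is the minterpolate filter, which synthesizes intermediate frames by motion-compensated interpolation. A minimal sketch, assuming the rendered keyframes are numbered PNGs at one frame per second (filenames are hypothetical):

ffmpeg -framerate 1 -i keyframe_%04d.png \
       -vf "minterpolate=fps=30:mi_mode=mci" \
       -c:v libx264 -pix_fmt yuv420p zoom.mp4

Whether block-based motion estimation copes well with a continuous zoom, where every block changes scale rather than simply translating, is another question; it may produce visible artifacts compared to truly rendered frames.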


-
images->video->web canvas: RGB/YUV issues
5 February 2016, by nrob
We've written a web app which:
- takes 3D, time dependent weather data
- tiles each 3D time point to make a 2D frame (written out as a png image)
- stitches these frames together into a video (using ffmpeg/avconv; a possible invocation is sketched after this list)
- streams this video into a web app
- polls the canvas for frames
- sends the frames to the GPU where they are converted back to 3D and ray traced
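For reference, a minimal sketch of what that stitching step might look like (the actual command isn't in the post; filenames and codec are assumptions). The -pix_fmt yuv420p conversion is precisely where RGB information is lost, since 4:2:0 chroma subsampling keeps only a quarter of the chroma samples:

ffmpeg -framerate 30 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p tiles.mp4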
You can see the app here, code here and you can see the data video here
Currently the PNGs are written as RGB images, the video codec works in YUV, and reading frames back from the canvas returns RGB. As a result, there is a significant loss of information in the conversions between colour spaces.
Does anyone have suggestions for the best way around this?
I've tried a bunch of RGB video codecs, but I can't get any of them to work, and I don't know whether web browsers would support them anyway. Can anyone suggest a good RGB codec? (Both lossy and lossless suggestions would be great.)
Also, is it possible to write YUV images, or read them back from a video canvas, in HTML5?
Ultimately, I don't even want anything to do with images or videos; I'm just hacking the codecs to stream/compress large animated 3D data volumes.
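On the encoding side, one thing that might be worth trying (my suggestion, not something from the post, and browser support remains the real constraint) is x264's RGB variant, which skips the YUV conversion entirely; with -crf 0 it is lossless:

ffmpeg -framerate 30 -i frame_%04d.png -c:v libx264rgb -crf 0 tiles_rgb.mkv

Note that browsers generally decode only 4:2:0 H.264 profiles, so a stream like this would likely need a different playback path (e.g. a WebAssembly decoder) rather than a native video element.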
-
Why does ffplay read both video and keyboard input from stdin?
27 January 2016, by cxrodgers
I'm trying to compress a video feed from a webcam while simultaneously displaying it, using ffmpeg and ffplay. I do actually have this working, but I want to stop the ffplay window from interpreting keypresses.
It took me a while to figure this out, but here's what I'm using:
ffmpeg -f video4linux2 -i /dev/video0 -vcodec mpeg4 -f rawvideo - \
 | tee output.mkv \
 | ffplay -fflags nobuffer -
(Actually I am doing all of this from a Python script using the subprocess module. Here I have represented it as a straightforward terminal command because the result is the same.)
So this actually works and does everything I want. The only thing is that if the ffplay window is active, it interprets keypresses (like "F" for fullscreen). Instead I want it to completely ignore all keypresses.
My questions:
- How is this even possible? I thought I was redirecting video to stdin, and then telling ffplay to read video from stdin. How can keypresses be multiplexed on the same pipe?
- How can I disable this behavior? I tried "-nostdin" but it doesn't work with my version.
# ffplay -nostdin output.mkv
ffplay version N-77455-g4707497 Copyright (c) 2003-2015 the FFmpeg developers
  built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1 14.04)
...
Failed to set value 'output.mkv' for option 'nostdin': Option not found
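A side note on the first question (my reading, not from the post): ffplay receives keypresses as SDL window events, so they never travel down the pipe at all; the video data and the keyboard input arrive by entirely separate routes, which is why the behavior depends on the window being active. A quick way to confirm this is to detach stdin completely and see that the window still reacts to keys:

ffplay output.mkv < /dev/null

The "Option not found" error above is consistent with this: -nostdin is an option of the ffmpeg tool (where it stops ffmpeg reading interactive commands from the terminal), and this build of ffplay simply doesn't have it.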