
Other articles (52)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully modifiable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player was created specifically for MediaSPIP and can easily be adapted to fit a chosen theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions beyond the normal behaviour are executed: the technical information of the file’s audio and video streams is retrieved, and a thumbnail is generated by extracting a (...)
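The excerpt doesn’t show SPIPMotion’s actual implementation, but the two extra actions map naturally onto ffprobe and ffmpeg. A minimal Python sketch under that assumption (the file names and the one-second seek point are illustrative, not SPIPMotion’s real values):

    import json
    import subprocess

    def probe_streams(source):
        # Retrieve the technical information of the file's audio and
        # video streams (codec, resolution, duration, ...) via ffprobe.
        out = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json",
             "-show_streams", source],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)["streams"]

    def extract_thumbnail(source, thumb="vignette.jpg", at_seconds=1.0):
        # Generate a thumbnail by extracting a single frame of the video.
        subprocess.run(
            ["ffmpeg", "-y", "-ss", str(at_seconds), "-i", source,
             "-frames:v", "1", thumb],
            check=True,
        )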
On other sites (8151)
-
Breaking a video into frames with Python
30 July 2012, by user1481112
I am trying to write a program that deletes frames of a video that don’t have a particular symbol in them. My general plan:
- Split the audio from the video
- Split the video into frames
- Run the frames through a subroutine that looks for the symbol, by checking the pixels where it should be for being the correct color, and logging the ones that don't.
- Delete those frames and corresponding audio seconds
- Splice it all back together.
I need some help finding libraries that can do this. I was wondering if wxpython could do the detection of pixel color. I have no idea which library could split audio and video, or which could edit audio. I know ffmpeg can split the video into frames, but after two days of work I still have not been able to install it for Python 2.7, so I either need a way to install it or a different library to do it. Any ideas?
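Not an authoritative answer, but one workable combination is to drive the ffmpeg command-line tool from Python via subprocess (sidestepping the broken Python bindings entirely) and to use Pillow for the pixel checks. A sketch, assuming ffmpeg is on the PATH; the coordinates, colour, and file names below are placeholders:

    import glob
    import os
    import subprocess

    from PIL import Image  # Pillow

    def split_media(video):
        # Dump every frame as a numbered PNG and the audio as its own file.
        if not os.path.isdir("frames"):
            os.makedirs("frames")
        subprocess.check_call(["ffmpeg", "-i", video, "frames/%06d.png"])
        # "-acodec copy" assumes the source audio is already AAC; transcode if not.
        subprocess.check_call(
            ["ffmpeg", "-i", video, "-vn", "-acodec", "copy", "audio.m4a"])

    def has_symbol(frame, x, y, expected_rgb, tolerance=16):
        # True if the pixel at (x, y) is within tolerance of the expected colour.
        pixel = Image.open(frame).convert("RGB").getpixel((x, y))
        return all(abs(a - b) <= tolerance for a, b in zip(pixel, expected_rgb))

    # Frames to delete: those whose checked pixel isn't the symbol's colour.
    bad = [f for f in sorted(glob.glob("frames/*.png"))
           if not has_symbol(f, 10, 10, (255, 0, 0))]

Re-splicing is ffmpeg again (feed the surviving frames back in as an image sequence); cutting the corresponding audio seconds is the fiddly part.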
-
Why does my Blink-based browser play hide and seek?
21 January 2016, by Caius Jard
We have a C# tool (that I wrote) that records online broadcasts taking place in a custom-written Flash app (that we wrote). (There are no DRM or copyright issues here.)
We’ve coded up a system whereby this tool is installed on a Windows Server 2012 R2 Amazon AWS instance. After we boot the instance, the tool loads, waits for the right time to start recording, launches a browser, and passes it the URL of the broadcast as a command-line argument. The browser then loads the Flash app, and the interview audio and video start arriving at the browser instance on AWS.
By way of a virtual audio cable driver, screen/audio capture DirectShow filters, and ffmpeg, a screen recording is taken. The C# tool calls ffmpeg, which records the screen reliably for the entire interview; then the tool shuts the whole thing down.
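(For reference, a capture pipeline like this is typically a single ffmpeg DirectShow invocation along the following lines. The filter names are the common third-party ones and an assumption on my part, not necessarily what this setup uses:)

    import subprocess

    # Screen + audio capture via DirectShow filters on Windows.
    # "screen-capture-recorder" and "virtual-audio-capturer" are widely
    # used third-party filters; substitute whatever is actually installed.
    subprocess.check_call([
        "ffmpeg",
        "-f", "dshow", "-i", "video=screen-capture-recorder",
        "-f", "dshow", "-i", "audio=virtual-audio-capturer",
        "-c:v", "libx264", "-preset", "veryfast",
        "recording.mp4",
    ])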
The problem I’m having is that both Chrome and the Electron browser sometimes simply don’t draw themselves on the screen, so all ffmpeg ends up recording is a blank desktop plus the audio of the broadcast (hence the browser IS running).
We found this out when recordings started turning up with X hours of merely the Windows desktop and the tool’s main window with a countdown timer.
A screenshotting facility was built into the tool and added to its web control interface, and this way we can test whether the browser is visible: a human looks at the screenshot of every broadcast just after recording has started (the browser is supposed to be on show by this time).
We notice that 50% of the time the browser isn’t drawing itself on screen. By 50% I mean that every other recording the AWS instance carries out will be blank: AWS starts, records OK, shuts down. AWS starts again an hour later for a different broadcast, the recording is blank, it shuts down. Starts/ok/shutdown. Starts/blank/shutdown. Repeat ad infinitum.
What’s even more strange is that if I run VNC viewer on my dev machine and connect up to an instance that is having the problem, the instant the VNC connection is up and the remote desktop is showing on my screen, the browser suddenly appears as if nothing was ever wrong. A screenshot from before the VNC connect shows a blank desktop; connect VNC, take another screenshot, and the browser is there. All through it the audio is fine, so the browser connected to the broadcast is fine, for sure.
It’s as though Chrome/Electron thinks "you know what, no one is looking at me, so I’m not going to bother drawing myself". No screen saver is set, though the power plan has the setting "turn off the display after 15 minutes".
Perhaps Chrome/Electron have a test that amounts to "if the display is off, don’t draw". I can’t explain the inconsistency though: the recorder launches at least 1 hour before it’s needed and sits there idle until it’s time to start the browser. You’d hence imagine that the "turn off the display after 15 minutes" setting would reliably have ensured the "monitor" is "off" by the time every recording start comes around.
This behaviour doesn’t happen with any of the other browsers (but unfortunately the app doesn’t and cannot work in them, because it uses some weird Chrome-only technology/API).
Can anyone suggest anything to look at to help debug this, or anything I can build into the C# tool to overcome the problem? Coding it up to connect to itself via VNC for a few seconds after it has launched the browser... well, that just tastes nasty.
Naturally, as soon as I connect to the machine via VNC (rather than RDP; RDP isn’t usable because the recording context is in a logged-on session for a particular user) the problem goes away, which makes it frustratingly hard to debug.
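One avenue to try, short of the self-VNC hack: Chromium has command-line switches intended to stop it from throttling or skipping work for windows it believes nobody can see. Whether a 2016-era build honours all of them is an assumption to verify, so treat this as a sketch rather than a known fix (shown as a Python launcher for brevity; the same arguments apply from C#’s Process.Start):

    import subprocess

    CHROME = r"C:\Program Files\Google\Chrome\Application\chrome.exe"  # assumed path

    # Switches that ask Chromium not to deprioritise or skip rendering for
    # occluded/backgrounded windows. Availability varies by Chromium
    # version, so check each one against the build actually deployed.
    flags = [
        "--disable-backgrounding-occluded-windows",
        "--disable-renderer-backgrounding",
        "--disable-background-timer-throttling",
        "--disable-features=CalculateNativeWinOcclusion",  # Windows occlusion test
    ]

    subprocess.Popen([CHROME] + flags + ["https://example.com/broadcast"])  # placeholder URL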
-
Fighting with the VP8 Spec
4 June 2010, by Multimedia Mike — VP8
As stated in a previous blog post on the matter, FFmpeg’s policy is to reimplement codecs rather than adopt other codebases wholesale. And so it is with Google’s recently open-sourced VP8 codec, the video portion of their WebM initiative. I happen to know that the new FFmpeg implementation is in the capable hands of several of my co-developers, so I’m not even worrying about that angle.
Instead, I thought of another of my characteristically useless exercises: create an independent VP8 decoder implementation entirely in pure Python. Silly? Perhaps. But it has one very practical application: by attempting to write a new decoder based on the official bitstream documentation, this could serve as a mechanism for validating said spec, something near and dear to my heart.
What is the current state of the spec? Let me reiterate that I’m glad it exists. As I stated during the initial open sourcing event, everything that Google produced for the initial event went well beyond my wildest expectations. Having said that, the documentation does fall short in a number of places. Fortunately, I am on the WebM mailing lists and am sending in corrections and ideas for general improvement. For the most part, I have been able to understand the general ideas behind the decoding flow based on the spec and am even able to implement certain pieces correctly. Then I usually instrument the libvpx source code with output statements in order to validate that I’m doing everything right.
Token Blocker
Unfortunately, I’m quite blocked right now on the chapter regarding token/DCT coefficient decoding (chapter 13 in the current document iteration). In his seminal critique of the codec, Dark Shikari complained that large segments of the spec are just C code fragments copy and pasted from the official production decoder. As annoying as that is, the biggest insult comes at the end of section 13.3: "While we have in fact completely described the coefficient decoding procedure, the reader will probably find it helpful to consult the reference implementation, which can be found in the file detokenize.c."
The reader most certainly will not find it helpful to consult the file detokenize.c. The file in question implements the coefficient residual decoding with an unholy sequence of C macros that contain goto statements. Honestly, I thought I did understand the coefficient decoding procedure based on the spec’s description. But my numbers don’t match up with the official decoder. Instrumenting or tracing macro’d code is obviously painful and studying the same code is making me think I don’t understand the procedure after all. To be fair, entropy decoding often occupies a lot of CPU time for many video decoders and I have little doubt that the macro/goto approach is much faster than clearer, more readable methods. It’s just highly inappropriate to refer to it for pedagogical purposes.
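For the curious, the machinery underneath chapter 13 is at least easy to state: every symbol comes out of the boolean arithmetic decoder, and multi-bit symbols such as the coefficient tokens are read by walking a binary tree with one probability per internal node. Here is roughly what that looks like in Python, transcribed from my reading of the spec’s C fragments (the names are mine, and this is a sketch, not validated output):

    class BoolDecoder:
        # The VP8 boolean (arithmetic) decoder from the spec, in Python.
        def __init__(self, data):
            self.data = data
            self.value = (data[0] << 8) | data[1]  # first two coded bytes
            self.pos = 2
            self.range = 255     # kept in [128, 255] by renormalization
            self.bit_count = 0   # bits shifted out since the last byte load

        def read_bool(self, prob):
            # Decode one bit that is 0 with probability prob/256.
            split = 1 + (((self.range - 1) * prob) >> 8)
            big_split = split << 8
            if self.value >= big_split:
                bit = 1
                self.range -= split
                self.value -= big_split
            else:
                bit = 0
                self.range = split
            while self.range < 128:        # renormalize
                self.value <<= 1
                self.range <<= 1
                self.bit_count += 1
                if self.bit_count == 8:    # pull in fresh bits a byte at a time
                    self.bit_count = 0
                    self.value |= self.data[self.pos]
                    self.pos += 1
            return bit

    def read_tree(decoder, tree, probs):
        # Walk a VP8 token tree: positive entries index further into the
        # tree array, non-positive entries are negated leaf symbols.
        i = 0
        while True:
            i = tree[i + decoder.read_bool(probs[i >> 1])]
            if i <= 0:
                return -i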
Aside: For comparison, check out the reference implementation for the VC-1 codec. It was written so clearly and naively that the implementors used an O(n) Huffman decoder. That’s commitment to clarity.
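To make the aside concrete, an "O(n) Huffman decoder" means one that simply tries every codeword in turn for each symbol. A hypothetical Python rendering of the idea (production decoders use lookup tables instead):

    def decode_symbol(bits, pos, codebook):
        # Naive O(n) Huffman decode: linearly compare each codeword
        # against the upcoming bits. Maximally clear, minimally fast.
        for codeword, symbol in codebook:      # n = number of codewords
            if bits[pos:pos + len(codeword)] == codeword:
                return symbol, pos + len(codeword)
        raise ValueError("no codeword matches the bitstream")

    # Example with a tiny prefix-free codebook and a 0/1 string bitstream:
    codebook = [("0", "A"), ("10", "B"), ("11", "C")]
    print(decode_symbol("10110", 0, codebook))  # -> ('B', 2)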
I wonder if my FFmpeg cohorts are having better luck with the DCT residue decoding in their new libavcodec implementation? Maybe if I can get this Python decoder working, it can serve as a more appropriate reference decoder.
Update: Almost immediately after I posted this entry, I figured out a big problem that was holding me back, and then several more small ones, and finally decoded my first correct DCT coefficient from the stream (I’ve never been so happy to see the number -448). I might be back on track now. Even better was realizing that my original understanding of the spec was correct.
Unrelated
I found this image on the Doom9 forums. I ROFL’d:
It’s probably unfair and inaccurate, but you have to admit it’s funny. Luckily, quality nitpicking isn’t my department. I’m just interested in getting codecs working, tested, and documented so that more people can use them reliably.