Advanced search

Other articles (47)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct steps.
    Upload and retrieval of information about the source video
    First of all, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are carried out in addition to the normal behavior: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (10757)

  • Feeding a series of images to ffmpeg as each image is created [closed]

    5 February 2013, by Mark Schneider

    I'm trying to use ffmpeg to build a 1280x720 slideshow from a sequence of pictures and videos, but I have concerns about a potential disk I/O bottleneck.

    I expect a typical slideshow to have about 50 pictures and 2-3 videos (10-15 seconds each at 30 fps). I would like to show each picture for 3-4 seconds (possibly with a Ken Burns effect) with a smooth 2-second crossfade between each set of pictures (or, for pictures adjacent to videos, between the picture and the first/last frame of the video).

    Given about 50 pictures, the crossfades alone would amount to about 3,000 images (50 transitions x 2 secs/transition x 30 fps). And I suppose if I implement a Ken Burns effect during each picture's 3-4 second showing, I'd have to provide ffmpeg with individual images for each of those frames. (I'm writing a script in Ruby that will pull a list of images from a database and in turn call ImageMagick to create the individual images for each frame. As I understand it, the RMagick library interfaces with ImageMagick such that the output images come back as in-memory objects without needing to write to disk. FWIW, I'm developing in Windows 8 and will deploy to Heroku.)

    All of the slideshow examples I've found online feed ffmpeg a set of images which have already been created. However, in an effort to avoid waiting on considerable disk I/O, I'd like to feed each image to ffmpeg as the image is created rather than create them all in advance.

    Is there a way to send each image file to ffmpeg on the fly as the file is created in memory?
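
    (Not part of the original question: one commonly suggested way to avoid the intermediate files is to pipe each encoded frame straight into ffmpeg's stdin through the image2pipe demuxer. The question mentions Ruby/RMagick; the minimal sketch below uses Python with Pillow purely for illustration, and the output name, frame rate and placeholder frame generator are assumptions.)

        import subprocess

        from PIL import Image  # stands in for RMagick; any in-memory image source works

        # Sketch: stream generated frames into ffmpeg's stdin via the image2pipe
        # demuxer so no intermediate frame files are ever written to disk.
        ffmpeg = subprocess.Popen(
            [
                "ffmpeg", "-y",
                "-f", "image2pipe",      # read a concatenated stream of images from stdin
                "-framerate", "30",
                "-i", "-",               # "-" means stdin
                "-c:v", "libx264",
                "-pix_fmt", "yuv420p",
                "slideshow.mp4",         # placeholder output name
            ],
            stdin=subprocess.PIPE,
        )

        # Placeholder frames: in the real script these would be the crossfade /
        # Ken Burns frames produced in memory.
        for i in range(90):
            frame = Image.new("RGB", (1280, 720), (i % 256, 0, 0))
            frame.save(ffmpeg.stdin, format="PNG")  # PNG bytes go straight down the pipe

        ffmpeg.stdin.close()
        ffmpeg.wait()

    The point of the sketch is only that the frames never need to exist as files on disk; whether the crossfades are better pre-blended like this or built with ffmpeg's own filters is a separate question.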

  • Gstreamer: Hauppauge HD PVR and Multi-video file output

    7 June 2014, by user3716978

    I have very specific requirements for a Gstreamer pipeline that I can’t seem to create. I’m running Linux Mint Mate 14 (Nadia).

    I have an HD PVR, which records in MPEG TS. It presents, as its interface, a V4L2 device at /dev/video0. What I need is to somehow have it output the captured video to multiple files. That is, like dvgrab’s autosplit, it would output, say, 1800 frames, then create a new output file, then capture another 1800, and on and on.

    I’ve tried numerous methods. First, using multifilesink with the keyframe next-file option does what I want, but it doesn’t seem to add stream headers to the segment files, so they do not play properly and/or are missing their initial keyframe.

    I’ve tried limiting each individual capture length using num-buffers, and just restarting the capture after the previous one ends. This works for maybe 30 or 40 files but all the switching on and off eventually locks up the HD PVR, and it has to be power-cycled.

    I could also have it dump images to the disk and work with the individual frames, but this is very slow with MPEG TS since it has to demux, decode, and reencode every frame. It eats up 100% cpu and drops about 60% of the frames on my computer.

    ffmpeg doesn’t work because the HD PVR driver doesn’t support ioctl. I can’t seem to get mencoder to stream it this way either, but maybe it’s possible?

    What I need is to:

    • Have a single capture stream, to avoid pissing off the HD PVR
    • Have it split the stream into multiple files which can be individually analyzed
    • Have those multiple files be valid videos
    • Not eat up 100% of my CPU (although high utilization is ok, it needs to run at full speed). Since the stream is 1920x1080x60fps, anything to do with reencoding won’t work. It pretty much needs to be a stream copy.

    Thank you
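
    (Not part of the original question: since the complaint about ffmpeg is that it cannot drive the device through V4L2, one workaround sometimes suggested is to bypass V4L2 entirely, read the MPEG-TS byte stream straight from the device node as a plain file, and pipe it into a segmenting stream copy. The sketch below is a rough Python illustration under that assumption; the segment length, output pattern and chunk size are placeholders, and whether the driver tolerates being read this way is itself an assumption.)

        import subprocess

        DEVICE = "/dev/video0"      # the HD PVR exposes its MPEG-TS stream here
        SEGMENT_SECONDS = 30        # roughly 1800 frames at 60 fps

        # Remux the TS stream into fixed-length segments with stream copy,
        # so nothing is decoded or re-encoded.
        ffmpeg = subprocess.Popen(
            [
                "ffmpeg",
                "-f", "mpegts", "-i", "-",            # read the TS stream from stdin
                "-c", "copy",                          # stream copy: no re-encode
                "-f", "segment",
                "-segment_time", str(SEGMENT_SECONDS),
                "-reset_timestamps", "1",
                "capture_%04d.ts",                     # placeholder output pattern
            ],
            stdin=subprocess.PIPE,
        )

        with open(DEVICE, "rb") as src:
            try:
                while True:
                    chunk = src.read(188 * 1024)       # multiples of the 188-byte TS packet size
                    if not chunk:
                        break
                    ffmpeg.stdin.write(chunk)
            finally:
                ffmpeg.stdin.close()
                ffmpeg.wait()

    Because this is a remux rather than a re-encode, CPU use should stay low, and ffmpeg's segment muxer only cuts at keyframes by default, which should keep each segment an independently playable file.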

  • Export frames/images from compressed video

    24 March 2015, by Jan Viehweger

    I have a compressed movie (mp4) and I want to extract every single frame/image from it. I know that, because of the video compression, each individual frame of the video only contains the pixels that changed relative to the last keyframe. But that is exactly what I want. I just want to see those differences. I want to visually see how the compressor works.

    Is there some tool like ImageMagick out there that can do things like that?
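
    (Not part of the original question: one rough way to get this kind of visualization is to let ffmpeg decode every frame to an image and then difference consecutive frames. The Python sketch below assumes ffmpeg and Pillow are available; the file names and directories are placeholders.)

        import glob
        import os
        import subprocess

        from PIL import Image, ImageChops

        # Step 1: decode every frame of the (placeholder) input file to numbered PNGs.
        os.makedirs("frames", exist_ok=True)
        subprocess.run(["ffmpeg", "-i", "input.mp4", "frames/frame_%05d.png"], check=True)

        # Step 2: the decoded frames are full pictures, so the "changed pixels" view
        # is reconstructed by differencing each frame against the previous one.
        frames = sorted(glob.glob("frames/frame_*.png"))
        for prev_path, curr_path in zip(frames, frames[1:]):
            with Image.open(prev_path) as prev, Image.open(curr_path) as curr:
                diff = ImageChops.difference(curr.convert("RGB"), prev.convert("RGB"))
                diff.save(curr_path.replace("frame_", "diff_"))

    Note that the decoder has already applied the stored differences, so these diff images only approximate what the encoder kept; recent ffmpeg builds can also overlay the actual motion vectors with the codecview filter for a closer look.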