Advanced search

Media (0)

Keyword: - Tags -/flash

No media matching your criteria is available on the site.

Other articles (60)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (7885)

  • Multiple video sources combined into one

    28 September 2011, by Oded

    I am looking for an efficient way to do the following:

    Using several source videos (of approximately the same length), I need to generate an output video that is composed of all of the original sources, each running in its own area (like a bunch of PIPs in several different sizes). So, the end result is that all the originals are running side by side, each in its own area/box.

    The source and output need to be flv, and the platform I am using is Windows (dev on Windows 7 64-bit, deployment to Windows Server 2008).

    I have looked at AviSynth, but unfortunately it can't handle flv, and none of the plugins and flv splitters I have tried worked.

    My current process uses ffmpeg in the following manner:

    1. Use ffmpeg to generate 25 PNGs per second per video, resizing the original as needed.
    2. Use the System.Drawing namespace to combine each set of frames into a new image, starting with a static background, then loading each frame into an Image and drawing to the background Graphics object - this gives me the combined frames.
    3. Use ffmpeg to combine the generated images into a video.
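
    For concreteness, steps 1 and 3 might look roughly like the following when scripted, shown here as a thin Python wrapper around the ffmpeg command line. This is only a sketch: the file names, frame size and helper names are made-up placeholders, and step 2 (the System.Drawing compositing) is omitted.

        import subprocess

        FPS = 25  # 25 PNGs per second, matching step 1 above

        def extract_frames(src_flv, out_dir, width, height):
            # Step 1: decode one source video to numbered PNGs, resizing on the fly.
            subprocess.run([
                "ffmpeg", "-i", src_flv,
                "-vf", f"fps={FPS},scale={width}:{height}",
                f"{out_dir}/frame_%05d.png",
            ], check=True)

        def encode_frames(frames_dir, dst_flv):
            # Step 3: re-encode the combined PNG frames back into an flv.
            subprocess.run([
                "ffmpeg", "-framerate", str(FPS),
                "-i", f"{frames_dir}/frame_%05d.png",
                "-c:v", "flv", dst_flv,
            ], check=True)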

    All this is very I/O intensive (which is my processing bottleneck at the moment), and I feel there must be a more efficient way to reach my goal. I do not have much experience with video processing and don't know what options are out there.

    Can anyone suggest a more efficient way of processing these?

  • How to write a video encoder with ffmpeg?

    27 December 2013, by SunnyShah

    I want to write an encoder with ffmpeg which can put I-frames (keyframes) at positions I want. Where can I find tutorials or reference material for it?

    P.S.
    Is it possible to do this with mencoder or any open-source encoder? I want to encode an H.263 file. I am writing under and for Linux.
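
    As a side note on the keyframe part only: the ffmpeg command-line tool has a -force_key_frames option that forces I-frames at listed timestamps, which is a much simpler route than writing a custom encoder against libavcodec. A minimal sketch, with made-up file names and timestamps:

        import subprocess

        def encode_with_keyframes(src, dst, keyframe_times):
            # Force an I-frame at each listed timestamp (in seconds) while re-encoding.
            times = ",".join(str(t) for t in keyframe_times)
            subprocess.run([
                "ffmpeg", "-i", src,
                "-force_key_frames", times,  # e.g. "0,2.5,10"
                "-c:v", "libx264",           # ffmpeg's h263 encoder also exists, but only accepts a few fixed frame sizes
                dst,
            ], check=True)

        encode_with_keyframes("input.avi", "output.mp4", [0, 2.5, 10])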

  • Which is better for pixel-level analysis of television (TV) video, OpenCV or ffmpeg? [closed]

    5 December 2011, by Randall Cook

    I need to do some pixel-level analysis of television (TV) video. I have used ffmpeg in the past for analyzing video from files, but it wasn't exactly easy. I am thinking of giving OpenCV a try. Any recommendations or advice ?

    Let's assume that I am starting with an MPEG-2 transport stream, and the analysis needs to run in real-time on Linux. I was also planning on using Intel's IPP library for some of the number crunching.
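
    For reference, a minimal sketch of what per-pixel access looks like through OpenCV's Python bindings. The input name is a placeholder; a live MPEG-2 transport stream would normally be opened from a UDP or HTTP URL rather than a file, and this assumes OpenCV was built with FFmpeg support.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("capture.ts")  # placeholder; e.g. "udp://..." for a live stream
        while True:
            ok, frame = cap.read()            # frame is a NumPy array (height x width x 3, BGR, uint8)
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            print(float(np.mean(gray)))       # example per-frame statistic: mean brightness
        cap.release()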