
Other articles (70)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses the HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows media playback on the major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance can be fully adapted to match a chosen theme.
    These technologies make it possible to deliver video and audio both on conventional computers (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFMpeg: the main encoder, able to transcode almost any type of video or audio file into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Additional, optional binaries flvtool2: (...)

On other sites (5191)

  • FFMpeg video clipping

    8 March 2012, by integra753

    I would like to use the ffmpeg APIs (not the command line) to clip videos to a specific range (e.g. given a 1 hr video, create a new video starting at 10 minutes and ending at 30 minutes). Are there any examples of doing this out there?

    I have used the APIs to stream and record video, so I have a bit of background knowledge.

    Thanks.
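
    One way to do this with the FFmpeg libraries alone is to seek to the start point with libavformat and then remux packets until the end point, without re-encoding (the same idea as stream-copying a time range on the command line). The sketch below is a minimal illustration against the current libavformat/libavcodec API; most error handling is omitted, edge cases such as missing or negative timestamps after the shift are ignored, and the function name clip_copy is just a placeholder.

    /* Minimal sketch: copy the packets between start_sec and end_sec
     * from in_name into out_name without re-encoding.
     * Assumes the current libavformat API; most error handling omitted. */
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/mathematics.h>

    static int clip_copy(const char *in_name, const char *out_name,
                         double start_sec, double end_sec)
    {
        AVFormatContext *in = NULL, *out = NULL;
        AVPacket *pkt = av_packet_alloc();
        int64_t start_ts = (int64_t)(start_sec * AV_TIME_BASE);
        unsigned i;

        if (avformat_open_input(&in, in_name, NULL, NULL) < 0)
            return -1;
        avformat_find_stream_info(in, NULL);

        avformat_alloc_output_context2(&out, NULL, NULL, out_name);

        /* One output stream per input stream, codec parameters copied
         * verbatim (no transcoding). */
        for (i = 0; i < in->nb_streams; i++) {
            AVStream *os = avformat_new_stream(out, NULL);
            avcodec_parameters_copy(os->codecpar, in->streams[i]->codecpar);
            os->codecpar->codec_tag = 0;
        }
        if (!(out->oformat->flags & AVFMT_NOFILE))
            avio_open(&out->pb, out_name, AVIO_FLAG_WRITE);
        avformat_write_header(out, NULL);

        /* Jump to the keyframe at or before the requested start time. */
        av_seek_frame(in, -1, start_ts, AVSEEK_FLAG_BACKWARD);

        while (av_read_frame(in, pkt) >= 0) {
            AVStream *ist = in->streams[pkt->stream_index];
            AVStream *ost = out->streams[pkt->stream_index];

            /* Stop once this stream's packets pass the requested end. */
            if (pkt->pts != AV_NOPTS_VALUE &&
                pkt->pts * av_q2d(ist->time_base) >= end_sec) {
                av_packet_unref(pkt);
                break;
            }
            /* Shift timestamps so the clip starts near zero, then
             * rescale them into the output stream's time base. */
            int64_t shift = av_rescale_q(start_ts, AV_TIME_BASE_Q,
                                         ist->time_base);
            pkt->pts -= shift;
            pkt->dts -= shift;
            av_packet_rescale_ts(pkt, ist->time_base, ost->time_base);
            pkt->pos = -1;

            av_interleaved_write_frame(out, pkt);
            av_packet_unref(pkt);
        }

        av_write_trailer(out);
        avformat_close_input(&in);
        if (!(out->oformat->flags & AVFMT_NOFILE))
            avio_closep(&out->pb);
        avformat_free_context(out);
        av_packet_free(&pkt);
        return 0;
    }

    Because the seek lands on the keyframe at or before the requested start, the clip may begin slightly early; frame-accurate cutting would require decoding and re-encoding rather than stream copy.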

  • How to create a video from png images using ffmpeg

    28 December 2011, by Rajat

    I want to create a video from a sequence of png images. The command I am using is:

    ffmpeg -r 20 -f image2 -i slideshow/%d.png -y -s 320x240 -aspect 4:3 out.mp4

    and I receive this output:

    FFmpeg version SVN-r26400, Copyright (c) 2000-2011 the FFmpeg developers
     built on Sep 27 2011 00:47:07 with gcc 4.1.2 20080704 (Red Hat 4.1.2-50)
     configuration: --enable-avfilter --enable-filter=fade
     libavutil     50.36. 0 / 50.36. 0
     libavcore      0.16. 1 /  0.16. 1
     libavcodec    52.108. 0 / 52.108. 0
     libavformat   52.93. 0 / 52.93. 0
     libavdevice   52. 2. 3 / 52. 2. 3
     libavfilter    1.74. 0 /  1.74. 0
     libswscale     0.12. 0 /  0.12. 0
    Input #0, image2, from 'slideshow/%d.png':
     Duration: 00:00:00.25, start: 0.000000, bitrate: N/A
       Stream #0.0: Video: png, rgb24, 720x471, 20 fps, 20 tbr, 20 tbn, 20 tbc
    [buffer @ 0x9687230] w:720 h:471 pixfmt:rgb24
    [scale @ 0x9687600] w:720 h:471 fmt:rgb24 -> w:320 h:240 fmt:yuv420p flags:0xa0000004
    Output #0, mp4, to 'out.mp4':
     Metadata:
       encoder         : Lavf52.93.0
       Stream #0.0: Video: mpeg4, yuv420p, 320x240 [PAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 20 tbn, 20 tbc
    Stream mapping:
     Stream #0.0 -> #0.0
    Press [q] to stop encoding
    Segmentation fault

    What might be the problem? Please help...
    I am currently running this on a CentOS 5 server.
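
    For what it is worth, the conversion the log reports just before the crash (720x471 rgb24 input scaled to 320x240 yuv420p output) corresponds to a libswscale step like the one sketched below. This is only an illustration of that pipeline stage written against a current libswscale/libavutil, not a diagnosis of the segfault in this particular SVN build; the function name convert_example and the bilinear scaler choice are arbitrary.

    /* Sketch: the rgb24 720x471 -> yuv420p 320x240 conversion that the
     * scale step performs, written directly against libswscale.
     * Error checks are minimal; this only shows the API calls involved. */
    #include <libswscale/swscale.h>
    #include <libavutil/imgutils.h>

    int convert_example(void)
    {
        const int src_w = 720, src_h = 471;   /* decoded PNG size from the log */
        const int dst_w = 320, dst_h = 240;   /* requested output size */
        uint8_t *src_data[4], *dst_data[4];
        int src_linesize[4], dst_linesize[4];
        struct SwsContext *sws;

        sws = sws_getContext(src_w, src_h, AV_PIX_FMT_RGB24,
                             dst_w, dst_h, AV_PIX_FMT_YUV420P,
                             SWS_BILINEAR, NULL, NULL, NULL);
        if (!sws)
            return -1;

        /* Allocate a source picture (rgb24) and a destination picture
         * (yuv420p); in a real pipeline the source pixels would come
         * from the PNG decoder. */
        av_image_alloc(src_data, src_linesize, src_w, src_h,
                       AV_PIX_FMT_RGB24, 16);
        av_image_alloc(dst_data, dst_linesize, dst_w, dst_h,
                       AV_PIX_FMT_YUV420P, 16);

        /* Scale and convert one frame. */
        sws_scale(sws, (const uint8_t * const *)src_data, src_linesize,
                  0, src_h, dst_data, dst_linesize);

        av_freep(&src_data[0]);
        av_freep(&dst_data[0]);
        sws_freeContext(sws);
        return 0;
    }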

  • Recommendations for real-time pixel-level analysis of television (TV) video

    6 December 2011, by Randall Cook

    [Note: This is a rewrite of an earlier question that was considered inappropriate and closed.]

    I need to do some pixel-level analysis of television (TV) video. The exact nature of this analysis is not pertinent, but it basically involves looking at every pixel of every frame of TV video, starting from an MPEG-2 transport stream. The host platform will be server-class, multiprocessor 64-bit Linux machines.

    I need a library that can handle the decoding of the transport stream and present me with the image data in real time. OpenCV and ffmpeg are two libraries I am considering for this work. OpenCV is appealing because I have heard it has easy-to-use APIs and rich image analysis support, but I have no experience using it. I have used ffmpeg in the past for extracting video frame data from files for analysis, but it lacks image analysis support (though Intel's IPP can supplement it).

    In addition to general recommendations for approaches to this problem (excluding the actual image analysis), I have some more specific questions that would help me get started:

    1. Are ffmpeg or OpenCV commonly used in industry as a foundation for real-time
      video analysis, or is there something else I should be looking at?
    2. Can OpenCV decode video frames in real time and still leave enough
      CPU headroom to do nontrivial image analysis, also in real time?
    3. Is it sufficient to use ffmpeg for MPEG-2 transport stream decoding, or
      is it preferable to use an MPEG-2 decoding library directly (and if so, which one)?
    4. Are there particular pixel formats for the output frames that ffmpeg
      or OpenCV is particularly efficient at producing (RGB, YUV, YUV422, etc.)?
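
    As one point of reference for questions 1 and 3: decoding an MPEG-2 transport stream down to raw frames with FFmpeg's own libraries takes only a handful of calls, and the pixel-level analysis would then operate on each decoded AVFrame (for MPEG-2 video this is typically planar yuv420p, so an RGB representation for question 4 would mean an extra libswscale conversion). The sketch below uses the current libavformat/libavcodec API with error handling cut to a minimum; it shows the shape of the decode loop, not a tuned real-time implementation, and decode_ts and analyze are placeholder names.

    /* Sketch: open an MPEG-2 transport stream, decode its best video
     * stream, and hand each raw frame to an analysis callback.
     * Current FFmpeg API; most error handling omitted for brevity. */
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    static void analyze(const AVFrame *f)
    {
        /* Placeholder: pixel-level analysis would read f->data[] and
         * f->linesize[] here (for MPEG-2 this is normally yuv420p). */
        (void)f;
    }

    int decode_ts(const char *url)
    {
        AVFormatContext *fmt = NULL;
        AVCodecContext *dec = NULL;
        const AVCodec *codec = NULL;
        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        int vstream;

        if (avformat_open_input(&fmt, url, NULL, NULL) < 0)
            return -1;
        avformat_find_stream_info(fmt, NULL);

        /* Pick the best video stream and open its decoder. */
        vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);
        if (vstream < 0)
            return -1;
        dec = avcodec_alloc_context3(codec);
        avcodec_parameters_to_context(dec, fmt->streams[vstream]->codecpar);
        avcodec_open2(dec, codec, NULL);

        /* Demux packets, feed the decoder, and collect decoded frames. */
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == vstream) {
                avcodec_send_packet(dec, pkt);
                while (avcodec_receive_frame(dec, frame) == 0)
                    analyze(frame);
            }
            av_packet_unref(pkt);
        }
        /* (A full implementation would also flush the decoder by
         * sending a NULL packet at end of stream.) */

        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&dec);
        avformat_close_input(&fmt);
        return 0;
    }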