Other articles (103)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents a few of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams (a rough sketch of this kind of probe is given below); generation of a thumbnail: extraction of a (...)
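
    The excerpt above is truncated, but as a rough illustration of what the first stage (probing the source file's audio and video streams) can look like, here is a minimal sketch against the libavformat API (post-3.1 codecpar style). It is not SPIPMotion's actual code, just the general technique.

    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Print basic technical information about every stream of a media file.
     * Illustrative only; SPIPMotion performs an equivalent probe server-side. */
    int probe_file(const char *path)
    {
        AVFormatContext *fmt = NULL;

        if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
            return -1;
        if (avformat_find_stream_info(fmt, NULL) < 0) {
            avformat_close_input(&fmt);
            return -1;
        }

        for (unsigned i = 0; i < fmt->nb_streams; i++) {
            const AVCodecParameters *par = fmt->streams[i]->codecpar;
            if (par->codec_type == AVMEDIA_TYPE_VIDEO)
                printf("video stream %u: %dx%d, codec %s\n",
                       i, par->width, par->height, avcodec_get_name(par->codec_id));
            else if (par->codec_type == AVMEDIA_TYPE_AUDIO)
                printf("audio stream %u: %d Hz, codec %s\n",
                       i, par->sample_rate, avcodec_get_name(par->codec_id));
        }

        avformat_close_input(&fmt);
        return 0;
    }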

On other sites (11147)

  • iOS SDK avcodec_decode_video Optimization

    6 August 2013, by Johny Cage

    I've recently started a project that relies on streaming FLV directly to an iOS device. Since ffmpeg is the best-known option, I went with it (and an iOS wrapper, kxmovie). To my surprise, the iPhone 4 is incapable of playing even SD low-bitrate FLV videos. The current implementation decodes the video/audio/subtitle frames in a dispatch_async while loop and copies the YUV frame data to an object, which is then split into three textures, Y/U/V (in the case of an RGB color space, the data is simply parsed), and rendered on screen; a minimal sketch of this kind of loop follows the profiler output below. After much trial and error, I decided to kill the whole rendering pipeline and leave only the avcodec_decode_video2 function running. Surprisingly, the FPS did not improve and the videos are still unplayable.

    My question is: what can I do to improve the performance of avcodec_decode_video2?

    Note:
    I've tried a few commercial apps and they play the same file perfectly fine with no more than 50-60% CPU usage.

    The library is based on the 1.2 branch, and these are the build args:

    '--arch=arm',
    '--cpu=cortex-a8',
    '--enable-pic',
    "--extra-cflags='-arch armv7'",
    "--extra-ldflags='-arch armv7'",
    "--extra-cflags='-mfpu=neon -mfloat-abi=softfp -mvectorize-with-neon-quad'",
    '--enable-neon',
    '--enable-optimizations',
    '--disable-debug',
    '--disable-armv5te',
    '--disable-armv6',
    '--disable-armv6t2',
    '--enable-small',
    '--disable-ffmpeg',
    '--disable-ffplay',
    '--disable-ffserver',
    '--disable-ffprobe',
    '--disable-doc',
    '--disable-bzlib',
    '--target-os=darwin',
    '--enable-cross-compile',
    #'--enable-nonfree',
    '--enable-gpl',
    '--enable-version3',

    According to Instruments, the following functions each take about 30% of the CPU time:

    Running Time    Self        Symbol Name
    37023.9ms   32.3%   13874,8                   ff_h264_decode_mb_cabac
    34626.2ms   30.2%   9194,7                    loop_filter
    29430.0ms   25.6%   173,8                     ff_h264_hl_decode_mb
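
    For reference, here is a minimal sketch of the kind of decode loop described above, written against the FFmpeg 1.2-era API the question uses (avcodec_decode_video2 has since been removed from newer releases). The demuxer/decoder setup and the YUV-to-texture upload are assumed and omitted.

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Read packets from an already-opened input and decode the video stream.
     * 'dec' is the opened AVCodecContext for 'video_stream'. */
    static void decode_loop(AVFormatContext *fmt, AVCodecContext *dec, int video_stream)
    {
        AVPacket pkt;
        AVFrame *frame = avcodec_alloc_frame();   /* av_frame_alloc() in newer FFmpeg */
        int got_frame;

        while (av_read_frame(fmt, &pkt) >= 0) {
            if (pkt.stream_index == video_stream) {
                /* The call the Instruments trace above is dominated by. */
                if (avcodec_decode_video2(dec, frame, &got_frame, &pkt) >= 0 && got_frame) {
                    /* frame->data[0..2] hold the Y/U/V planes that the player
                     * uploads as three textures; consume them here. */
                }
            }
            av_free_packet(&pkt);                 /* av_packet_unref() in newer FFmpeg */
        }
        avcodec_free_frame(&frame);               /* av_frame_free() in newer FFmpeg */
    }
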
  • Sub-pixel rendering with imagettftext()

    8 March 2016, by user1661677

    I'm creating an image sequence and encoding it to a video using PHP, the GD library and ffmpeg. You'll see I'm animating the two text layers in opposite directions along the X axis, and with the simple operators $i/2 and $i/3 I'm trying to make their movement slower in the final animation.

    The problem I'm having is that when the video is rendered out, each layer's text only moves every second and third frame, respectively. This makes the animation a bit 'jerky' and not as smooth as Adobe After Effects, with its ability to support sub-pixel rendering of elements.

    Is there any way to get imagettftext(), or some other method of drawing on images, to support sub-pixel rendering?

    Thank you.

    for ($i = 1; $i <= 125; $i++) {

       // Start from the background frame for each iteration
       $front = imagecreatefromjpeg('front.jpg');
       $white = imagecolorallocate($front, 255, 255, 255);
       $text = 'some text';
       $text2 = 'some other text';
       $font = '/var/www/html/test/OpenSans-Bold.ttf';

       // Add text; imagettftext() takes integer pixel coordinates, so the
       // fractional offsets $i/2 and $i/3 are truncated to whole pixels
       imagettftext($front, 60, 0, 1340-($i/2), 720, $white, $font, $text);
       imagettftext($front, 35, 0, 1240+($i/3), 800, $white, $font, $text2);

       // Write the frame to file and free the GD resource
       imagejpeg($front, "images/".$i.".jpg", 100);
       imagedestroy($front);
    }
  • FFMPEG: Reversing only a segment of a video file

    25 July 2019, by N. Johnson

    I am writing a script that takes a piece of video, reverses that same piece, and then concatenates the two together so that the final video plays forwards and then loops backwards. I should note that eventually I want to be able to pull an unequal length for the reverse part.

    I can get the entire file to do this, but getting just a segment is not working as expected.

    See the code below.

    I've tried:

    %1 = timecode to seek to (the video file is only 20 seconds and never any longer)

    %2 = length of segment to pull out

    %3 = usually the same as %2, but may be different if we want to reverse only 2 seconds instead of the full 5, for example.

    ffmpeg -ss 00:00:%1 -an -i test.mp4 -t 00:00:%2 out.mp4
    :: the above works as expected

    :: this doesn't, no matter what I put into -ss. I've also tried moving -ss in front of the -i, as suggested in the documentation; it gives me the right length of segment but never starts in the right place.

    ffmpeg  -an -i test.mp4 -ss 00:00:xx -t 00:00:%3 -vf reverse reversed2.mp4

    :: the below works fine
    ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4

    When I run this with, say, %1 = 05 and %2 = 05, I get a segment starting 5 seconds in that lasts 5 seconds. Then I get a seemingly random starting point and 5 seconds of reversed video. I've tried a number of values for "xx", from 10 (which I think is right) to 0 to 19, and all of them produce output, but it's all wrong.