
Media (0)

Keyword: - Tags - /flash

No media matching your criteria is available on this site.

Other articles (100)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (7842)

  • x86/tx_float: remove vgatherdpd usage

    20 May 2022, by Lynne
    x86/tx_float: remove vgatherdpd usage
    

    The gather's performance ranges from just as fast as individual loads
    (Skylake), to a few percent slower (Alderlake), to 8% slower (Zen 3), to
    completely disastrous (older/other CPUs).

    Sadly, gathers never panned out fast on x86, even with the benefit of time and
    implementation experience.

    This also saves a register, as there's no need to fill out an additional
    register mask.

    Zen 3 (16384-point transform):
    Before: 1561050 decicycles in av_tx (fft), 131072 runs, 0 skips
    After:  1449621 decicycles in av_tx (fft), 131072 runs, 0 skips

    Alderlake:
    2% slower on big transforms (65536), 1% slower at 131072, and a few percent
    slower for smaller sizes.

    • [DH] libavutil/x86/tx_float.asm
    • [DH] libavutil/x86/tx_float_init.c
  • RGB-frame encoding - FFmpeg/libav

    4 February 2014, by learner

    I am learning video encoding & decoding in FFmpeg. I tried the code sample on this page (only the video encoding & decoding part). There, the dummy image being created is in YCbCr format. How do I achieve similar encoding by creating RGB frames? I am stuck at:

    Firstly, how to create this RGB dummy frame?

    Secondly, how to encode it? Which codec should I use? Most of them work with YUV420p only...

    EDIT: I have a YCbCr encoder and decoder as given on this page. The thing is, I have an RGB frame sequence in my database and I need to encode it, but the encoder is for YCbCr. So I am wondering how to convert the RGB frames to YCbCr (or YUV420P) somehow and then encode them.
    At the decoding end, I get decoded YCbCr frames and convert them back to RGB. How do I go about this?

    I did try the SwsContext approach, but the converted frames lose color information and also show scaling errors. I thought of doing it manually using two for loops and the colorspace conversion formulae,
    but I am not able to access individual pixels of a frame using the FFmpeg/libav library! In OpenCV we can easily access a pixel with something like Mat img(x,y), but there is no such thing here! I am totally a newcomer to this area...

    Can someone help me?

    Many thanks!
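
    A hedged command-line sketch of the round trip the question describes, letting ffmpeg's own libswscale do the rgb24 <-> yuv420p conversion; the file names, frame size and frame rate below are assumptions, not details from the thread:

# encode: raw RGB24 frames -> YUV420P -> H.264 in MP4 (hypothetical paths and sizes)
$ ffmpeg -f rawvideo -pix_fmt rgb24 -s 640x480 -framerate 25 -i frames.rgb \
         -pix_fmt yuv420p -c:v libx264 out.mp4

# decode back to raw RGB24 frames
$ ffmpeg -i out.mp4 -f rawvideo -pix_fmt rgb24 decoded.rgb

    At the API level this corresponds to sws_getContext() with AV_PIX_FMT_RGB24 as the source format and AV_PIX_FMT_YUV420P as the destination, followed by sws_scale(); for direct pixel access in a packed RGB24 AVFrame, the pixel at (x, y) starts at data[0] + y * linesize[0] + 3 * x.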

  • Convert sequence of h264 frames into video with ffmpeg

    12 August 2022, by utnd03

    I'm testing a hardware video encoder module, and for each input frame (raw YUV in NV12 format) the encoder generates an h264 output frame. How can I "concatenate" these individual h264 output frames into a video (e.g. mp4) with ffmpeg?

    


$ ls
dst0000.h264  dst0002.h264  dst0004.h264  dst0006.h264  dst0008.h264 
dst0001.h264  dst0003.h264  dst0005.h264  dst0007.h264

$ file dst0000.h264
dst0000.h264: JVT NAL sequence, H.264 video, main @ L 31


    


    I tried ffmpeg -i dst0001.h264 -c:v copy output.mp4, which lets me view a single h264 frame (and the frames look normal, so the encoder is working properly).

    


    I also tried ffmpeg -f image2 -s 640x480 -r 30 -i dst%04d.h264 -c:v copy out.mp4, but this gave the following error, which I had no idea how to fix:

    


    [image2 @ 0x55bf32735780] Could not find codec parameters for stream 0 (Video: none, none, 640x480): unknown codec
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2, from 'dst%04d.h264':
  Duration: 00:00:00.30, start: 0.000000, bitrate: N/A
  Stream #0:0: Video: none, none, 640x480, 30 fps, 30 tbr, 30 tbn
[mp4 @ 0x55bf3273fc80] Could not find tag for codec none in stream #0, codec not currently supported in container
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:0 -- 
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
    Last message repeated 1 times
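
    The log shows the image2 demuxer could not determine a codec for the .h264 files ("Video: none ... unknown codec"), so the stream copy into MP4 fails. A hedged sketch of one way to do what the question asks, assuming the files are plain Annex-B H.264 byte streams (as the file output above suggests) and reusing the 30 fps rate from the attempted command:

# Annex-B elementary streams can simply be byte-concatenated,
# then remuxed into MP4 without re-encoding
$ cat dst*.h264 > all.h264
$ ffmpeg -framerate 30 -f h264 -i all.h264 -c:v copy out.mp4

    If plain concatenation misbehaves with a particular encoder's output, ffmpeg's concat demuxer (a text file listing the inputs, read with -f concat) is an alternative route to the same stream copy.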