Advanced search

Media (0)

Word: - Tags - /signalement

No media matching your criteria is available on this site.

Other articles (36)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages
    To contribute, register to the project users’ mailing (...)

On other sites (10966)

  • Trouble with hardware-assisted encoding/decoding via FFmpeg on Azure GPU VMs (Ubuntu 16.04)

    3 May 2017, by user3776020

    I am trying to use NVIDIA hardware acceleration with FFmpeg/libav, but can’t get it to work correctly on Azure VMs running Ubuntu 16.04. For a sample case, I am trying to do a simple decoding of an h264 video into a raw YUV file (as detailed here: https://developer.nvidia.com/ffmpeg).

    So far, I’ve tried it on NC-6, NC-12, and NV-6 machines (in different regions). In each of these instances, it would take about 30-45 seconds to process a single video frame. As a comparison, I also tried it on a P2.xlarge VM on AWS (which has very similar specs to the NC-6), which was able to process about 3000 frames in about 5 seconds. Has anyone else run into this issue with Azure machines, or does anyone have an idea why this would be the case?

    Here are the commands I used to install the necessary drivers/libraries/etc. (I also verified that each machine has the same NVIDIA driver version installed - 375.51):

    CUDA_REPO_PKG=cuda-repo-ubuntu1604_8.0.61-1_amd64.deb

    wget -O /tmp/$CUDA_REPO_PKG http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/$CUDA_REPO_PKG

    sudo dpkg -i /tmp/$CUDA_REPO_PKG

    sudo apt-get update

    sudo apt-get install -y cuda-drivers

    sudo apt-get install -y cuda

    sudo apt-get install -y nvidia-cuda-toolkit

    [reboot]

    sudo apt-get update

    sudo apt-get upgrade -y

    sudo apt-get dist-upgrade -y

    [reboot]

    git clone https://github.com/FFmpeg/FFmpeg.git

    [download the latest Video Codec SDK from NVIDIA at: https://developer.nvidia.com/designworks/video_codec_sdk/downloads/v7.1]

    [unzip the SDK and copy the header files from /Video_Codec_SDK_7.1.9/Samples/common/inc/ into /usr/include/]

    cd /FFmpeg

    ./configure --enable-nonfree --disable-shared --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-Ilocal/include --extra-cflags=-I../nv_sdk --extra-ldflags=-L../nv_sdk

    sudo make && sudo make install

    To decode a sample movie file, I used the following FFmpeg command:

    sudo ffmpeg -vsync 0 -c:v h264_cuvid -i sample_vid.mp4 -f rawvideo outputvid.yuv
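
    As a sanity check before timing anything, something along these lines should confirm that the driver is loaded and that this build actually exposes the NVIDIA decoders and encoders (a rough sketch; exact output varies by machine):

    # driver loaded and GPU visible?
    nvidia-smi
    # does this build list the cuvid decoders and the nvenc encoder?
    ffmpeg -hide_banner -decoders | grep cuvid
    ffmpeg -hide_banner -encoders | grep nvenc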

  • FFmpeg: adding a watermark with QSV hardware acceleration and optimizing performance

    11 February 2020, by Ksilon

    I am adding a text watermark to a video with ffmpeg, but I’m new to ffmpeg and am trying to optimize the performance of this operation.

    My test setup has an i5-7500 and an Intel HD 630. I tried the command below to add the watermark. If I do not set -hwaccel_output_format to yuv420p or nv12, it gives an error.

    ffmpeg -threads 4 -hwaccel qsv -hwaccel_output_format yuv420p -i "input.mp4" -vf "drawtext=text='TEST':x=(W-tw)/2:y=(H-th)/2:fontfile=arial.ttf:fontsize=250:fontcolor=white@0.4:shadowcolor=black@0.4:shadowx=2:shadowy=2" -c:v h264_qsv "output.mp4"

    When I run this command: CPU usage 53% / fps = 90-95 / GPU load (GPU-Z) = 35-38%.
    When I change to -threads 1: CPU usage 35% / fps = 68-72 / GPU load (GPU-Z) = 28-30%.

    I found the -async_depth option on the Internet and tried it with a value of 5, but nothing changed, or I used it wrong.
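
    If the placement was the problem: as far as I can tell, -async_depth is a private option of the QSV codecs, so in this command it would have to go after -c:v h264_qsv as an output option. A sketch based on the command above (assuming this build’s h264_qsv encoder supports the option):

    ffmpeg -threads 4 -hwaccel qsv -hwaccel_output_format yuv420p -i "input.mp4" -vf "drawtext=text='TEST':x=(W-tw)/2:y=(H-th)/2:fontfile=arial.ttf:fontsize=250:fontcolor=white@0.4:shadowcolor=black@0.4:shadowx=2:shadowy=2" -c:v h264_qsv -async_depth 5 "output.mp4"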

    How can I use more GPU and less CPU for this operation?

  • AVCodec h264_v4l2m2m hardware decoding unable to find device

    26 July 2023, by nathansizemore

    Using a custom-compiled FFmpeg:

     $ ./ffmpeg -codecs | grep h264
ffmpeg version n6.0 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 7 (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04)
  configuration: --arch=aarch64 --enable-cross-compile --target-os=linux --cross-prefix=aarch64-linux-gnu- --prefix=/builds/dronesense/rust/ffmpeg-build/ffmpeg/out --pkgconfigdir= --pkg-config=pkg-config --extra-libs='-ldl -lpthread' --enable-libvpx --enable-libx264 --enable-libx265 --enable-decklink --enable-gpl --enable-nonfree --enable-shared --disable-static
  libavutil      58.  2.100 / 58.  2.100
  libavcodec     60.  3.100 / 60.  3.100
  libavformat    60.  3.100 / 60.  3.100
  libavdevice    60.  1.100 / 60.  1.100
  libavfilter     9.  3.100 /  9.  3.100
  libswscale      7.  1.100 /  7.  1.100
  libswresample   4. 10.100 /  4. 10.100
  libpostproc    57.  1.100 / 57.  1.100
 DEV.LS h264                 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_v4l2m2m ) (encoders: libx264 libx264rgb h264_v4l2m2m )

    /dev/video32 seems to have H.264 decoding support:

    $ v4l2-ctl --list-formats-out -d /dev/video32
ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Output Multiplanar
    Pixel Format: 'MPG2' (compressed)
    Name        : MPEG-2 ES

    Index       : 1
    Type        : Video Output Multiplanar
    Pixel Format: 'H264' (compressed)
    Name        : H.264

    Index       : 2
    Type        : Video Output Multiplanar
    Pixel Format: 'HEVC' (compressed)
    Name        : HEVC

    Index       : 3
    Type        : Video Output Multiplanar
    Pixel Format: 'VP80' (compressed)
    Name        : VP8

    Index       : 4
    Type        : Video Output Multiplanar
    Pixel Format: 'VP90' (compressed)
    Name        : VP9
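
    For completeness, the capture (decoded-frame) side of the same device can be listed as well; a sketch, using the same tool as above:

    v4l2-ctl --list-formats -d /dev/video32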

    I've tried two approaches (Rust with bindgen):

    Approach 1:

fn init_decoder() -> Arc<ContextWrapper> {
    let name = CString::new("h264_v4l2m2m").unwrap();
    let codec = unsafe { ffmpeg::avcodec_find_decoder_by_name(name.as_ptr()) };
    if codec.is_null() {
        error!("finding codec");
        process::exit(1);
    }

    let ctx = unsafe { ffmpeg::avcodec_alloc_context3(codec) };
    if ctx.is_null() {
        error!("creating context");
        process::exit(1);
    }

    let r = unsafe { ffmpeg::avcodec_open2(ctx, codec, ptr::null_mut()) };
    if r < 0 {
        error!("opening codec: {r}");
        process::exit(1);
    }

    Arc::new(ContextWrapper(ctx))
}

    Results in:

[h264_v4l2m2m @ 0x7f1c001600] Could not find a valid device
[h264_v4l2m2m @ 0x7f1c001600] can't configure decoder
[ERROR] [decoder] [webrtc::codec] opening codec: -1

    Approach 2:

fn init_decoder() -> Arc<ContextWrapper> {
    let name = CString::new("h264_v4l2m2m").unwrap();
    let codec = unsafe { ffmpeg::avcodec_find_decoder_by_name(name.as_ptr()) };
    if codec.is_null() {
        error!("finding codec");
        process::exit(1);
    }

    let mut i = 0;
    let mut hw_pix_fmt: AVPixelFormat = unsafe { mem::zeroed() };
    loop {
        let config = unsafe { ffmpeg::avcodec_get_hw_config(codec, i) };
        if config.is_null() {
            error!("decoder not supported");
            process::exit(1);
        }

        unsafe {
            info!("device type: {:?}", (*config).device_type);
            if ((*config).methods & ffmpeg::AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX as i32) > 0 {
                hw_pix_fmt = (*config).pix_fmt;
                break;
            }
        }
    }

    info!("pixel format: {:?}", hw_pix_fmt);

    let ctx = unsafe { ffmpeg::avcodec_alloc_context3(codec) };
    if ctx.is_null() {
        error!("creating context");
        process::exit(1);
    }

    let r = unsafe { ffmpeg::avcodec_open2(ctx, codec, ptr::null_mut()) };
    if r < 0 {
        error!("opening codec: {r}");
        process::exit(1);
    }

    Arc::new(ContextWrapper(ctx))
}

    Results in:

error!("decoder not supported");

    I feel like there is a major step missing: FFmpeg's Hardware Decode Example looks up a device type (AVHWDeviceType), and v4l2 is not one of the values in that enum, so I do not know what functions to call to set this up.
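
    One way to separate a bindings problem from a driver/device problem would be to try the same decoder from the ffmpeg CLI built above (sample.h264 is just a placeholder input file here); if this prints the same "Could not find a valid device" error, the issue lies outside the Rust code:

    ./ffmpeg -hide_banner -c:v h264_v4l2m2m -i sample.h264 -f null -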

    What is the proper way to set up an AVCodec decoder with H.264 hardware acceleration via v4l2m2m?
