
Media (91)

Other articles (42)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as Ogv and WebM (supported by HTML5) and as MP4 (supported by Flash).
    Audio files are encoded as Ogg (supported by HTML5) and as MP3 (supported by Flash).
    Where possible, text is analyzed to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
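
    MediaSPIP drives standard transcoding tools for these conversions; purely as an illustration (this is not MediaSPIP's actual invocation), a WebM conversion of the kind described might look like:

    ffmpeg -i upload.mov -c:v libvpx -b:v 1M -c:a libvorbis upload.webm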

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all types online.
    It creates "médias", namely: a "média" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a so-called "média" article;

  • Use it, talk about it, critique it

    10 April 2011

    The first thing to do is to talk about it, either directly with the people involved in its development, or with those around you to convince new people to use it.
    The bigger the community, the faster the project will evolve...
    A discussion list is available for any exchange between users.

On other sites (5310)

  • Why is OpenCV VideoWriter so slow?

    22 February 2021, by user2267367

    Hi Stack Overflow community,
I have a tricky problem and I need your help to understand what is going on here.
My program captures frames from a video grabber card (Blackmagic), which works fine so far. At the same time I display the captured images with OpenCV (cv::imshow), which also works well (but is fairly CPU-intensive).
The captured images are also supposed to be stored on disk; for this I push the captured frames (cv::Mat) onto a stack and write them asynchronously with OpenCV:

    #include <opencv2/opencv.hpp>  // needed for cv::VideoWriter / cv::Mat

    cv::VideoWriter videoWriter(path, cv::CAP_FFMPEG, fourcc, fps, *size);
    videoWriter.set(cv::VIDEOWRITER_PROP_QUALITY, 100);

    int id = metaDataWriter.insertNow(path);

    while (this->isRunning) {
        // Drain all pending frames; note that this inner loop busy-waits
        // whenever the stack is empty.
        while (!this->stackFrames.empty()) {
            cv::Mat m = this->stackFrames.pop();
            videoWriter << m;
        }
    }

    videoWriter.release();

    This code runs in an additional thread and is stopped from outside.
It works so far, but writing is sometimes so slow that my stack keeps growing until the system runs out of RAM and the process is killed by the OS.
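
    For what it's worth, one pattern that avoids both the busy-waiting inner loop and the unbounded memory growth is a bounded queue that blocks the producer once a limit is reached. The following is only a sketch with hypothetical names (BoundedFrameQueue is not part of the asker's code), and blocking the capture thread can drop grabber frames, which may still be preferable to being killed by the OS:

    #include <condition_variable>
    #include <deque>
    #include <mutex>
    #include <opencv2/opencv.hpp>

    // Bounded frame queue: push() blocks once maxFrames are pending, so
    // memory use stays flat; pop() sleeps instead of busy-waiting.
    class BoundedFrameQueue {
    public:
        explicit BoundedFrameQueue(size_t maxFrames) : maxFrames_(maxFrames) {}

        void push(cv::Mat frame) {
            std::unique_lock<std::mutex> lock(mutex_);
            notFull_.wait(lock, [&] { return frames_.size() < maxFrames_; });
            frames_.push_back(std::move(frame));
            notEmpty_.notify_one();
        }

        // Returns false once close() has been called and the queue is empty.
        bool pop(cv::Mat &out) {
            std::unique_lock<std::mutex> lock(mutex_);
            notEmpty_.wait(lock, [&] { return !frames_.empty() || closed_; });
            if (frames_.empty()) return false;
            out = std::move(frames_.front());
            frames_.pop_front();
            notFull_.notify_one();
            return true;
        }

        void close() {
            std::lock_guard<std::mutex> lock(mutex_);
            closed_ = true;
            notEmpty_.notify_all();
        }

    private:
        std::deque<cv::Mat> frames_;
        const size_t maxFrames_;
        bool closed_ = false;
        std::mutex mutex_;
        std::condition_variable notEmpty_, notFull_;
    };

    The writer thread then reduces to cv::Mat m; while (queue.pop(m)) videoWriter << m; and the capture side calls queue.close() on shutdown.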

    Currently it is running on my development system:

    • Ubuntu 18.04.05
    • OpenCV 4.4.0 compiled with CUDA
    • Intel i7 10th generation, 32 GB RAM, Nvidia P620 GPU, M.2 SSD

    Depending on the codec (fourcc), this produces a high CPU load. So far I have mainly used "MJPG" and "x264". Sometimes even MJPG drives one CPU core to 100% load, and my stack grows until the program runs out of RAM. After a restart this problem is sometimes gone, and the load seems to be distributed over all cores.

    According to the Intel documentation for my CPU, it has integrated hardware encoding/decoding for several codecs, but I guess OpenCV is not using it. OpenCV even uses its own bundled FFmpeg rather than my system's. Here is my OpenCV build command:

    cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D WITH_TBB=ON \
-D WITH_CUDA=ON \
-D BUILD_opencv_cudacodec=OFF \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D WITH_CUBLAS=1 \
-D WITH_V4L=ON \
-D WITH_QT=OFF \
-D WITH_OPENGL=ON \
-D WITH_GSTREAMER=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_ENABLE_NONFREE=ON \
-D WITH_FFMPEG=1 \
-D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
-D WITH_CUDNN=ON \
-D OPENCV_DNN_CUDA=ON \
-D CUDA_ARCH_BIN=6.1 ..
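
    Whether OpenCV really is using its own bundled FFmpeg, and which video-I/O backends were compiled in, can be checked at runtime; a minimal sketch:

    #include <iostream>
    #include <opencv2/core/utility.hpp>

    int main() {
        // Prints the compile-time configuration; the "Video I/O" section
        // lists the FFmpeg and GStreamer versions OpenCV was linked against.
        std::cout << cv::getBuildInformation() << std::endl;
        return 0;
    }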

    I have only recently started developing on Linux with C++ (before that I worked with Java/Maven), so my use of CMake is still a work in progress; please go easy on me.

    Basically my question is: how can I make the video encoding/writing faster and make the best use of hardware acceleration?
Or, if you think something else is fishy, please let me know.
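
    Since the build above enables GStreamer (WITH_GSTREAMER=ON), one direction worth trying is to hand the encoding to a hardware element through a GStreamer pipeline, which cv::VideoWriter accepts when opened with cv::CAP_GSTREAMER. This is a sketch, not a verified fix, and the vaapih264enc element assumes the gstreamer-vaapi plugins are installed:

    // Offload H.264 encoding to the Intel iGPU via VA-API; the fourcc
    // argument is ignored for GStreamer pipelines, so 0 is passed.
    cv::VideoWriter gstWriter(
        "appsrc ! videoconvert ! vaapih264enc ! h264parse ! "
        "matroskamux ! filesink location=capture.mkv",
        cv::CAP_GSTREAMER, 0, fps, *size);
    if (!gstWriter.isOpened()) {
        // Pipeline could not be built; fall back to the FFmpeg backend.
    }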

    BR Michael

  • FFmpeg: CUDA_ERROR_NOT_SUPPORTED on Ubuntu 20.04

    22 December 2020, by Superminaren

    I've been trying to get CUDA working on Ubuntu 20.04 for a while now.

    ffmpeg -vsync 0 -hwaccel cuvid -c:v h264_cuvid -i input.mp4 -c:a copy -c:v h264_nvenc -b:v 5M output.mp4

    Running the sample command shown above, I get the following error:

    [AVHWDeviceContext @ 0x5618466f7f80] cu->cuCtxCreate(&hwctx->cuda_ctx, desired_flags, hwctx->internal->cuda_device) failed -> CUDA_ERROR_NOT_SUPPORTED: operation not supported
    Device creation failed: -1313558101.
    [h264_cuvid @ 0x561846731940] No device available for decoder: device type cuda needed for codec h264_cuvid.

    I am not sure what causes this.

    All the relevant environment variables look like this:

    CUDADIR=/usr/local/cuda-11.2/
    LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64
    CUDA_HOME=/usr/local/cuda-11.2/
    PATH=/usr/local/cuda-11.2/bin:/opt/ffmpeg/bin/

    Running 'nvidia-smi' gives a normal-looking response, as far as I can tell.

    If someone with more experience could help me, that'd be greatly appreciated.
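
    One quick check that narrows this down (a suggestion, not part of the original question) is to ask this particular ffmpeg build which hardware accelerators it knows about:

    ffmpeg -hide_banner -hwaccels

    If cuda is missing from that list, the binary was built without CUDA support; if it is listed, the failure is happening at the driver level when the CUDA context is created.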

  • FFmpeg: decode video to lossless frames with CUDA hardware acceleration

    29 December 2020, by Seva Safris

    I have 1-second H.265-encoded videos at 30 fps coming into a server for processing. The server needs to decode the videos into individual frames (lossless). These videos are coming in very quickly, so performance is of the utmost importance. The server has an H.265-compatible Nvidia GPU, and I have built ffmpeg with support for CUDA. The following is the configuration output from ffmpeg:

    ffmpeg version N-100479-gd67c6c7f6f Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 8 (Ubuntu 8.4.0-3ubuntu2)
  configuration: --enable-nonfree --enable-cuda-nvcc --enable-nvenc --enable-opencl --enable-shared
                 --enable-pthreads --enable-version3 --enable-avresample --enable-ffplay --enable-gnutls
                 --enable-gpl --disable-libaom --disable-libbluray --disable-libdav1d 
                 --disable-libmp3lame --enable-libopus --disable-librav1e --enable-librubberband
                 --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora
                 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264
                 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig
                 --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb
                 --enable-libopencore-amrwb --disable-libopenjpeg --enable-librtmp --enable-libspeex
                 --enable-libsoxr --disable-videotoolbox --disable-libjack --disable-indev=jack
                 --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64


    I decode the videos into PNGs using the following command:

    ffmpeg -y -vsync 0 -hwaccel cuvid -hwaccel_output_format cuda -hwaccel_device 0 -c:v hevc_cuvid \
       -i 0.mp4 -vf hwdownload,format=nv12 -q:v 1 -qmin 1 -qmax 1 -start_number 0 f%d.png

    This command successfully leverages hardware acceleration for the H.265 decode, but the PNG encoding is done on the CPU.

    Does CUDA have support for encoding lossless images? The format does not need to be PNG, but it does need to be lossless. CUDA has an nvJPEG library, but JPEG is a lossy format. Is there a similar image-encoding library in CUDA for a lossless format (that is also integrated with ffmpeg)?
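
    If a lossless video stream (from which frames can be extracted later) would be acceptable upstream, one alternative worth benchmarking is to keep the decoded frames on the GPU and re-encode them losslessly with NVENC. This is a sketch; whether -preset lossless or -tune lossless applies depends on the ffmpeg/NVENC SDK version:

    ffmpeg -y -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -c:v hevc_cuvid \
       -i 0.mp4 -c:v hevc_nvenc -preset lossless lossless.mkv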

    Edit: Some more context...

    I am currently using PNGs because of their compressibility. These images are 2560x1280, btw. On the one hand, it is this compression that costs the CPU cycles. On the other hand, I am also limited by the throughput of how fast (and how much aggregate data) I can upload these frames to the upstream consumer. So it's basically a tradeoff between:

    1. We want to extract these frames as quickly as possible.
    2. We want efficiency regarding the image size.
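
    Given that tradeoff, another option to benchmark (an untested suggestion, not from the original question) is FFV1, a lossless codec that often compresses better than PNG and parallelizes across slices, although the encode itself still runs on the CPU:

    ffmpeg -y -vsync 0 -hwaccel cuvid -c:v hevc_cuvid -i 0.mp4 \
       -c:v ffv1 -level 3 -slices 16 -slicecrc 1 frames.mkv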