On other sites (12705)

  • FFmpeg decode video to lossless frames hardware acceleration CUDA

    29 December 2020, by Seva Safris

    I have 1-second H.265-encoded videos at 30 fps coming into a server for processing. The server needs to decode the videos into individual lossless frames. These videos are coming in very quickly, so performance is of utmost importance. The server has an H.265-compatible Nvidia GPU, and I have built ffmpeg with support for CUDA. The following is the configuration output from ffmpeg:

    ffmpeg version N-100479-gd67c6c7f6f Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 8 (Ubuntu 8.4.0-3ubuntu2)
  configuration: --enable-nonfree --enable-cuda-nvcc --enable-nvenc --enable-opencl --enable-shared
                 --enable-pthreads --enable-version3 --enable-avresample --enable-ffplay --enable-gnutls
                 --enable-gpl --disable-libaom --disable-libbluray --disable-libdav1d 
                 --disable-libmp3lame --enable-libopus --disable-librav1e --enable-librubberband
                 --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora
                 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264
                 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig
                 --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb
                 --enable-libopencore-amrwb --disable-libopenjpeg --enable-librtmp --enable-libspeex
                 --enable-libsoxr --disable-videotoolbox --disable-libjack --disable-indev=jack
                 --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64

    I decode the videos into PNGs using the following command:

    ffmpeg -y -vsync 0 -hwaccel cuvid -hwaccel_output_format cuda -hwaccel_device 0 -c:v hevc_cuvid \
       -i 0.mp4 -vf hwdownload,format=nv12 -q:v 1 -qmin 1 -qmax 1 -start_number 0 f%d.png

    This command successfully leverages hardware acceleration for the H.265 decode, but the PNG encode is done by the CPU.

    Does CUDA have support for encoding lossless images? The format does not need to be PNG, but it does need to be lossless. CUDA has an nvJPEG library, but JPEG is a lossy format. Is there a similar image-encoding library in CUDA for a lossless format (that is also integrated with ffmpeg)?

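    One idea I have been toying with (untested, so treat it as a sketch) is to skip still images entirely and keep everything on the GPU by re-encoding to a lossless HEVC stream with hevc_nvenc, which newer builds expose via -tune lossless (older ones via -preset lossless). Something along these lines:

    # untested sketch: decode and losslessly re-encode entirely on the GPU,
    # so no frame has to be compressed by the CPU
    # (on older ffmpeg builds, replace "-tune lossless" with "-preset lossless")
    ffmpeg -y -vsync 0 -hwaccel cuvid -hwaccel_output_format cuda -hwaccel_device 0 -c:v hevc_cuvid \
       -i 0.mp4 -c:v hevc_nvenc -tune lossless frames_lossless.mp4

    The obvious downside is that the upstream consumer would then receive a lossless video stream rather than individual image files.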

    Edit: Some more context...

    I am currently using PNGs because of their compressibility. These images are 2560x1280, by the way. On one hand, it is this compression that costs the CPU cycles. On the other hand, I am also limited by how fast (and how much aggregate data) I can upload these frames to the upstream consumer. So it's basically a tradeoff between:

    1. We want to extract these frames as quickly as possible.
    2. We want efficiency regarding the image size.
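
    If I stay with PNG, the only knob I have found so far for trading point 2 against point 1 is the encoder's compression level (the generic -compression_level option, which as far as I understand maps to the zlib effort for PNG). A rough, untested variant of the command above:

    # untested: cheaper (lower-effort) zlib compression for the CPU-side PNG encode,
    # at the cost of somewhat larger, but still lossless, files
    ffmpeg -y -vsync 0 -hwaccel cuvid -hwaccel_output_format cuda -hwaccel_device 0 -c:v hevc_cuvid \
       -i 0.mp4 -vf hwdownload,format=nv12 -compression_level 1 -start_number 0 f%d.png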

  • ffplay - Two videos (.mp4), one display screen and just a few seconds to display them together [closed]

    1 July 2024, by Gabe Mata

    I have two videos (.mp4), one display screen and just a few seconds to display them together.

    I am able to display them together on a split screen via ffmpeg and then opening the output file. The problem is that it takes a long time (3 minutes).

    Here is the code (first code):

    ffmpeg -i _20180114094126_flightvideo_cam1.mp4      \
       -i _20180114094126_flightvideo_cam2.mp4       \
       -filter_complex "                              \
               [0:v]crop=1280:360:0:0[v0];             \
               [1:v]scale=1280:-1,crop=1280:360:0:0[v1];\
               [v0] [v1]vstack[v]" \
       -map [v]                     \
       -vcodec libx264               \
       -pix_fmt yuv420p               \
       -preset ultrafast               \
        6000screen_take1.mkv  

    On the other hand, when using ffplay I am able to modify one video at a time and play it right away:

    $ ffplay -i _20180114094126_flightvideo_cam1.mp4 -vf scale=425:-2 

    How can I have the same outcome as the first code above, but display it on my screen right away (without waiting for the output file to be created, which takes 3 minutes in this case)?

    Please let me know if this is not clear.
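
    In case it helps to make the goal concrete, the kind of thing I am imagining (rough, untested sketch) is piping the stacked output straight into ffplay instead of writing a file:

    # rough, untested idea: send the stacked video to stdout as a NUT stream
    # and let ffplay read it from stdin, so nothing is written to disk first
    ffmpeg -i _20180114094126_flightvideo_cam1.mp4 \
       -i _20180114094126_flightvideo_cam2.mp4 \
       -filter_complex "[0:v]crop=1280:360:0:0[v0]; \
                        [1:v]scale=1280:-1,crop=1280:360:0:0[v1]; \
                        [v0][v1]vstack[v]" \
       -map "[v]" -c:v rawvideo -f nut - | ffplay -

    (The nut muxer is only there because rawvideo needs a container to travel over the pipe; I have no idea yet whether this is actually faster than the libx264 ultrafast route.)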

  • Using ffmpeg output to HLS and Image Stills

    27 November 2018, by yomateo

    I want to combine the output from an RTSP stream into both an HLS stream and several image stills. I can do this fine separately (obviously) but I'm having trouble combining things. Can I get a quick hand?

    Here are my outputs (which work):

    Outputting HLS streams:

    ffmpeg -rtsp_transport tcp -i '$RTSP_URL' \
       -c:v copy -b:v 64K -f flv rtmp://localhost/hls/stream_low \
       -c:v copy -b:v 512K -f flv rtmp://localhost/hls/stream_high

    Outputting image stills:

    ffmpeg -hide_banner -i '$(RTSP_URL)' -y  \
       -vframes 1 -vf "scale=1920:-1" -q:v 10 out/screenshot_1920x1080.jpeg \
       -vframes 1 -vf "scale=640:-1" -q:v 10 out/screenshot_640x360.jpeg \
       -vframes 1 -vf "scale=384:-1" -q:v 10 out/screenshot_384x216.jpeg \
       -vframes 1 -vf "scale=128:-1" -q:v 10 out/screenshot_128x72.jpeg
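
    In case it clarifies what I am after, this is roughly the single invocation I have been trying to build (untested sketch, assuming $RTSP_URL is a shell variable): one input feeding both RTMP outputs plus the four one-shot stills. I dropped the -b:v flags since they have no effect with -c:v copy.

    # untested sketch: one input, two stream-copied RTMP outputs for HLS,
    # and four single-frame JPEG stills, all from the same invocation
    ffmpeg -hide_banner -y -rtsp_transport tcp -i "$RTSP_URL" \
       -c:v copy -f flv rtmp://localhost/hls/stream_low \
       -c:v copy -f flv rtmp://localhost/hls/stream_high \
       -vframes 1 -vf "scale=1920:-1" -q:v 10 out/screenshot_1920x1080.jpeg \
       -vframes 1 -vf "scale=640:-1" -q:v 10 out/screenshot_640x360.jpeg \
       -vframes 1 -vf "scale=384:-1" -q:v 10 out/screenshot_384x216.jpeg \
       -vframes 1 -vf "scale=128:-1" -q:v 10 out/screenshot_128x72.jpeg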

    Any help is appreciated (I also posted a bounty ^_^)

    Thanks guys!