Other articles (55)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Customising a site by adding a logo, banner, or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects or individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (12302)

  • How to use ffmpeg on hardware acceleration with multiple inputs?

    20 March 2019, by Cole

    I’m trying to speed up the rendering of a video by using the GPU instead of the CPU. This code works, but I don’t know if I’m doing it correctly.

    ffmpeg -hwaccel cuvid -c:v hevc_cuvid \
    -i video.mp4 \
    -i logo.png \
    -i text.mov \
    -c:v h264_nvenc \
    -filter_complex " \
    [0]scale_npp=1920:1080,hwdownload,format=nv12[bg0]; \
    [bg0]trim=0.00:59.460,setpts=PTS-STARTPTS[bg0]; \
    [1]scale=150:-1[logo1];[bg0][logo1]overlay=(W-w)-10:(H-h)-10[bg0]; \
    [2]scale=500:-1[logo2];[logo2]setpts=PTS-STARTPTS[logo2]; \
    [bg0][logo2]overlay=-150:-100[bg0]; \
    [bg0]fade=in:00:30,fade=out:1750:30[bg0]" \
    -map "[bg0]" -preset fast -y output.mp4

    I feel like I need to be using hwupload somewhere in there, but I'm not totally sure. Any help would be appreciated.

    Log from the run:

    ffmpeg version 4.0 Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.11) 20160609
     configuration: --enable-gpl --enable-libx264 --enable-cuda --enable-nvenc --enable-cuvid --enable-nonfree --enable-libnpp --extra-cflags=-I/usr/local/cuda/include/ --extra-ldflags=-L/usr/local/cuda/lib64/
     libavutil      56. 14.100 / 56. 14.100
     libavcodec     58. 18.100 / 58. 18.100
     libavformat    58. 12.100 / 58. 12.100
     libavdevice    58.  3.100 / 58.  3.100
     libavfilter     7. 16.100 /  7. 16.100
     libswscale      5.  1.100 /  5.  1.100
     libswresample   3.  1.100 /  3.  1.100
     libpostproc    55.  1.100 / 55.  1.100
    [hevc @ 0x3eb04c0] vps_num_hrd_parameters -1 is invalid
    [hevc @ 0x3eb04c0] VPS 0 does not exist
    [hevc @ 0x3eb04c0] SPS 0 does not exist.
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2mp41
       encoder         : Lavf58.7.100
     Duration: 00:00:59.46, start: 0.000000, bitrate: 5894 kb/s
       Stream #0:0(und): Video: hevc (Main) (hev1 / 0x31766568), yuv420p(tv, progressive), 3840x2160 [SAR 1:1 DAR 16:9], 5891 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 29.97 tbc (default)
       Metadata:
         handler_name    : VideoHandler
    Input #1, png_pipe, from 'logo.png':
     Duration: N/A, bitrate: N/A
       Stream #1:0: Video: png, rgba(pc), 528x128 [SAR 11339:11339 DAR 33:8], 25 tbr, 25 tbn, 25 tbc
    Input #2, mov,mp4,m4a,3gp,3g2,mj2, from 'text.mov':
     Metadata:
       major_brand     : qt  
       minor_version   : 512
       compatible_brands: qt  
       encoder         : Lavf57.56.100
     Duration: 00:00:06.00, start: 0.000000, bitrate: 1276 kb/s
       Stream #2:0(eng): Video: qtrle (rle  / 0x20656C72), bgra, 1920x1080, 1274 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 12800 tbn, 12800 tbc (default)
       Metadata:
         handler_name    : DataHandler
    Stream mapping:
     Stream #0:0 (hevc_cuvid) -> scale_npp
     Stream #1:0 (png) -> scale
     Stream #2:0 (qtrle) -> scale
     fade -> Stream #0:0 (h264_nvenc)
    Press [q] to stop, [?] for help
    Output #0, mp4, to 'output.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2mp41
       encoder         : Lavf58.12.100
       Stream #0:0: Video: h264 (h264_nvenc) (Main) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 2000 kb/s, 29.97 fps, 30k tbn, 29.97 tbc (default)
       Metadata:
         encoder         : Lavc58.18.100 h264_nvenc
       Side data:
         cpb: bitrate max/min/avg: 0/0/2000000 buffer size: 4000000 vbv_delay: -1
    frame= 1783 fps=151 q=30.0 Lsize=   15985kB time=00:00:59.45 bitrate=2202.4kbits/s dup=4 drop=0 speed=5.04x    
    video:15977kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.050389%

    Not sure what to make of this, pretty new to ffmpeg.
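    The asker's hunch about hwupload can be sketched as follows. This is an untested rework under the same build assumptions as the command above (CUDA, NPP, and NVENC enabled); it downloads frames once for the CPU-only filters (trim, overlay, fade), then re-uploads with hwupload_cuda so h264_nvenc receives CUDA surfaces. Distinct pad labels replace the reused [bg0], which ffmpeg may otherwise reject.

    ```shell
    # Untested sketch: GPU decode -> scale_npp -> hwdownload once for the
    # CPU-side trim/overlay/fade -> hwupload_cuda back to the GPU for NVENC.
    ffmpeg -hwaccel cuvid -c:v hevc_cuvid -i video.mp4 -i logo.png -i text.mov \
      -filter_complex \
      "[0:v]scale_npp=1920:1080,hwdownload,format=nv12,trim=0:59.46,setpts=PTS-STARTPTS[bg]; \
       [1:v]scale=150:-1[logo]; \
       [2:v]scale=500:-1,setpts=PTS-STARTPTS[txt]; \
       [bg][logo]overlay=W-w-10:H-h-10[tmp]; \
       [tmp][txt]overlay=-150:-100,fade=in:0:30,fade=out:1750:30,hwupload_cuda[out]" \
      -map "[out]" -c:v h264_nvenc -preset fast -y output.mp4
    ```

    Whether the re-upload is a net win depends on the filters: the CPU overlay/fade still run in system memory, so only the encoder hand-off avoids an extra copy.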

  • Lost in converting a hardware-accelerated ffmpeg decoder to a raw loopback device

    5 September 2019, by Mikeynl

    Before reaching out, I googled extensively and tried many different Docker containers, building, compiling, and so on.

    What I am looking for is a hardware-accelerated conversion from an RTSP stream to a /dev/video0 device. I have a working v4l2loopback kernel module, and the following command works:

    ffmpeg -loglevel panic -hide_banner -i "rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream" -f v4l2 -pix_fmt yuv420p /dev/video0

    I can test the /dev/video0 device by taking a screenshot:

    ffmpeg -f video4linux2 -i /dev/video0 -ss 0:0:2 -frames 1 /var/www/html/out.png

    The above works, but takes around 30 to 50% CPU to decode and encode.

    I have a fully working test environment with a GeForce GTX 1050. All CUDA/NVIDIA related drivers are in place. ffmpeg is compiled with the following options:

    configuration: --enable-nonfree --disable-shared --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/cuda/lib64

    My last attempt was:

    ffmpeg -hwaccel cuvid -c:v h264_cuvid -i "rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream" -c:v rawvideo -pix_fmt yuv420p -f v4l2 /dev/video0

    It gives me an error:

    Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scaler_0'
    Error reinitializing filters!
    Failed to inject frame into filter network: Function not implemented
    Error while processing the decoded data for stream #0:0

    At the moment I have absolutely no idea how to solve this.

    Thanks in advance!

    // Added information

    root@localhost:~# ffmpeg -i "rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream" -c copy -f v4l2 /dev/video0
    ffmpeg version N-91067-g1c2e5fc Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.9) 20160609
     configuration: --enable-nonfree --disable-shared --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/cuda/lib64
     libavutil      56. 18.102 / 56. 18.102
     libavcodec     58. 19.101 / 58. 19.101
     libavformat    58. 13.102 / 58. 13.102
     libavdevice    58.  4.100 / 58.  4.100
     libavfilter     7. 22.100 /  7. 22.100
     libswscale      5.  2.100 /  5.  2.100
     libswresample   3.  2.100 /  3.  2.100
    Input #0, rtsp, from 'rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream':
     Metadata:
       title           : RTSP Session
     Duration: N/A, start: 2.300000, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1920x1080, 10 fps, 10 tbr, 90k tbn, 20 tbc
    [v4l2 @ 0x31a7f00] V4L2 output device supports only a single raw video stream
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
       Last message repeated 1 times
    root@localhost:~#

    root@localhost:~# ffmpeg -i "rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream" -f v4l2 -pix_fmt yuv420p /dev/video0
    ffmpeg version N-91067-g1c2e5fc Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.9) 20160609
     configuration: --enable-nonfree --disable-shared --enable-nvenc --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/cuda/lib64
     libavutil      56. 18.102 / 56. 18.102
     libavcodec     58. 19.101 / 58. 19.101
     libavformat    58. 13.102 / 58. 13.102
     libavdevice    58.  4.100 / 58.  4.100
     libavfilter     7. 22.100 /  7. 22.100
     libswscale      5.  2.100 /  5.  2.100
     libswresample   3.  2.100 /  3.  2.100
    Input #0, rtsp, from 'rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream':
     Metadata:
       title           : RTSP Session
     Duration: N/A, start: 2.300000, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1920x1080, 10 fps, 10 tbr, 90k tbn, 20 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
    Press [q] to stop, [?] for help
    [swscaler @ 0x2821240] deprecated pixel format used, make sure you did set range correctly
    Output #0, v4l2, to '/dev/video0':
     Metadata:
       title           : RTSP Session
       encoder         : Lavf58.13.102
       Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 1920x1080, q=2-31, 248832 kb/s, 10 fps, 10 tbn, 10 tbc
       Metadata:
         encoder         : Lavc58.19.101 rawvideo
    frame=   85 fps= 13 q=-0.0 Lsize=N/A time=00:00:08.50 bitrate=N/A dup=0 drop=4 speed=1.31x
    video:258188kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
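    The "Impossible to convert between the formats" error is consistent with -hwaccel cuvid keeping decoded frames in CUDA device memory, which the software rawvideo/v4l2 path cannot consume. Two hypothetical, untested fixes under the same build and stream assumptions:

    ```shell
    # Option 1 (untested): use the CUVID decoder without -hwaccel cuvid.
    # h264_cuvid still decodes on the GPU but returns frames in system
    # memory, so the v4l2 writer can take them directly.
    ffmpeg -c:v h264_cuvid \
      -i "rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream" \
      -pix_fmt yuv420p -f v4l2 /dev/video0

    # Option 2 (untested): keep -hwaccel cuvid but download frames
    # explicitly before handing them to the software output.
    ffmpeg -hwaccel cuvid -c:v h264_cuvid \
      -i "rtsp://192.168.0.17/user=admin&password=&channel=1&stream=0.dsp?real_stream" \
      -vf hwdownload,format=nv12 -pix_fmt yuv420p -f v4l2 /dev/video0
    ```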

  • Hardware accelerated decoding with FFmpeg falls back to software decoding

    9 February 2024, by iexav

    So I have followed the FFmpeg example for hardware accelerated decoding exactly as it is (I am referring to this example).

    https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/hw_decode.c#L76

    But I still seem to be decoding with the software decoder: when I open the Task Manager on Windows, the GPU isn't being used. Before calling av_hwframe_transfer_data() I check whether the frame is in the relevant hw_pix_fmt format, and it is. Everything works, no errors, nothing, except the GPU is doing nothing. As an example, I tried decoding a video that uses VP9 as a codec. If I specify the hardware-accelerated codec I want by name, it actually does work.

    vidCodec = avcodec_find_decoder_by_name("vp9_cuvid");

    When I do this and look at the Task Manager, the CPU does much less work and my GPU actually does Video Decode work. Having to specify the hardware-accelerated decoder for every single video I decode is ridiculous, though.

    Edit: as per user4581301's answer, here are the relevant pieces of code. (It's actually in Java because I am using the Java FFmpeg wrapper, but it's basically just making a bunch of calls to FFmpeg functions.)

    ArrayList<String> deviceTypes = new ArrayList<>();
    int type = AV_HWDEVICE_TYPE_NONE;
    while ((type = av_hwdevice_iterate_types(type)) != AV_HWDEVICE_TYPE_NONE) {
        BytePointer p = av_hwdevice_get_type_name(type);
        deviceTypes.add(CString(p));
    }
    boolean flag = false;

    for(int j=0;j

    /* Allocate a codec context for the decoder */
    if ((video_c = avcodec_alloc_context3(vidCodec)) == null) {
        throw new Exception("avcodec_alloc_context3() error: Could not allocate video decoding context.");
    }

    /* copy the stream parameters from the muxer */
    if ((ret = avcodec_parameters_to_context(video_c, video_st.codecpar())) < 0) {
        releaseUnsafe();
        throw new Exception("avcodec_parameters_to_context() error " + ret + ": Could not copy the video stream parameters.");
    }

    video_c.get_format(AvFormatGetter.getInstance());
    AVBufferRef hardwareDeviceContext = av_hwdevice_ctx_alloc(type);

    if ((ret = av_hwdevice_ctx_create(hardwareDeviceContext, type, (String) null, null, 0)) < 0) {
        System.err.println("Failed to create specified HW device. error " + ret);
    } else {
        video_c.hw_device_ctx(av_buffer_ref(hardwareDeviceContext));
    }

    // The function that gets called for get_format
    @Override
    public int call(AVCodecContext context, IntPointer format) {
        int p;
        for (int i = 0; ; i++) {
            if ((p = format.get(i)) == hw_pix_fmt) {
                return p;
            }
            if (p == -1) {
                break;
            }
        }
        System.out.println(hw_pix_fmt + " is not found in the codec context");
        // Error
        return AV_PIX_FMT_NONE;
    }

    // The method that's used for decoding video frames
    public Optional<Boolean> decodeVideoFrame(AVPacket pkt, boolean readPacket, boolean keyFrames) throws Exception {
        int ret;
        // Decode video frame
        if (readPacket) {
            ret = avcodec_send_packet(video_c, pkt);
            if (ret < 0) {
                System.out.println("error during decoding");
                return Optional.empty();
            }
            if (pkt.data() == null && pkt.size() == 0) {
                pkt.stream_index(-1);
            }
        }

        // Did we get a video frame?
        while (true) {
            ret = avcodec_receive_frame(video_c, picture_hw);
            if (ret == AVERROR_EAGAIN() || ret == AVERROR_EOF()) {
                if (pkt.data() == null && pkt.size() == 0) {
                    return Optional.empty();
                } else {
                    return Optional.of(true);
                }
            } else if (ret < 0) {
                // Ignore errors to emulate the behavior of the old API
                // throw new Exception("avcodec_receive_frame() error " + ret + ": Error during video decoding.");
                return Optional.of(true);
            }

            if (!keyFrames || picture.pict_type() == AV_PICTURE_TYPE_I) {
                if (picture_hw.format() == hw_pix_fmt) {
                    if (av_hwframe_transfer_data(
                            picture,     // The frame that will contain the usable data.
                            picture_hw,  // Frame returned by avcodec_receive_frame()
                            0) < 0) {
                        throw new Exception("Could not transfer data from gpu to cpu.");
                    }
                }
                // ... The rest of the method here
                return Optional.of(false);
            }
        }
    }
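    For reference, the hardware path in hw_decode.c depends on two things being wired up before avcodec_open2() is called: a get_format callback that picks the hardware pixel format, and hw_device_ctx pointing at a created device context. If either is missing or set too late, FFmpeg silently falls back to software decoding, which may be what is happening in the wrapper code above. A minimal C sketch of that wiring (attach_hw_device is a hypothetical helper; the other names are from the FFmpeg C API; not compiled here):

    ```c
    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    static enum AVPixelFormat hw_pix_fmt;

    /* Called by the decoder; must return the hardware surface format
     * for the GPU path to be taken. */
    static enum AVPixelFormat get_hw_format(AVCodecContext *ctx,
                                            const enum AVPixelFormat *fmts)
    {
        for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++)
            if (*p == hw_pix_fmt)
                return *p;
        return AV_PIX_FMT_NONE;  /* no hardware format offered */
    }

    /* Hypothetical helper: must run BEFORE avcodec_open2(ctx, ...). */
    static int attach_hw_device(AVCodecContext *ctx, enum AVHWDeviceType type)
    {
        AVBufferRef *hw_device_ctx = NULL;
        int err = av_hwdevice_ctx_create(&hw_device_ctx, type, NULL, NULL, 0);
        if (err < 0)
            return err;
        ctx->get_format    = get_hw_format;                  /* step 1 */
        ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);   /* step 2 */
        av_buffer_unref(&hw_device_ctx);
        return 0;
    }
    ```

    In the C API, av_hwdevice_ctx_create both allocates and initializes the device reference, so a separate av_hwdevice_ctx_alloc call (as in the Java snippet above) is not normally needed; checking where the wrapper calls avcodec_open2 relative to this wiring may explain the fallback.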