Advanced search

Media (91)

Other articles (65)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    The user can also reach the profile editor from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (administration) area of the site.
    From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
    Each newly added language can be deactivated again as long as no object has been created in that language; after that, it is greyed out in the configuration and (...)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it handles a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
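
    As a concrete illustration (not taken from the article, which presumably handles this from PHP), the XMP packet embedded in a file is itself an XML document and can be inspected or edited with a tool such as exiftool; photo.jpg is a placeholder file name:

    # Dump the raw XMP packet (an XML document) embedded in the file:
    exiftool -xmp -b photo.jpg
    # Read some of the fields XMP typically carries:
    exiftool -XMP-dc:Title -XMP-dc:Creator -XMP-xmpMM:History photo.jpg
    # Write or update the title stored in the XMP block:
    exiftool -XMP-dc:Title="Sunset over the harbour" photo.jpg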

On other sites (6324)

  • Cannot build ffmpeg with GPU acceleration on macOS

    21 February 2018, by Kirill Serebriakov

    I’m trying to use my GPU for video encoding/decoding operations on macOS.

    • OS: macOS 10.12.5 (Sierra) (Hackintosh, if it matters)
    • CUDA Toolkit 8.0 installed
    • NVIDIA GTX 1080 with the latest web driver

    I followed these guides:

    Config:

    ./configure --enable-cuda --enable-cuvid --enable-nvenc \
    --enable-nonfree --enable-libnpp \
    --extra-cflags=-I/Developer/NVIDIA/CUDA-8.0/include \
    --extra-ldflags=-L/Developer/NVIDIA/CUDA-8.0/lib

    I got this error:

    ERROR: cuvid requested, but not all dependencies are satisfied: cuda

    config.log - full configure log

    I did not install the Video Codec SDK (I'm not sure how to do that on macOS; I just thought it might come with the CUDA toolkit), and according to this page there are a lot of limitations on OS X.

    Is this possible on macOS, or will it only work on Linux/Windows?
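
    A couple of quick checks can narrow down whether configure's CUDA test even sees the toolkit it was pointed at; this is only a sketch, the paths are the ones from the configure flags above, and the location of the configure log varies between FFmpeg versions:

    # Does the header the CUDA check compiles against actually exist there?
    ls /Developer/NVIDIA/CUDA-8.0/include/cuda.h
    # Is there a CUDA library in the directory passed via --extra-ldflags?
    ls /Developer/NVIDIA/CUDA-8.0/lib | grep -i cuda
    # The exact compiler/linker error behind the failed "cuda" dependency is
    # recorded in the configure log (ffbuild/config.log in newer source trees):
    grep -n -A 10 "cuda" ffbuild/config.log | tail -n 40

    As far as I know, the Video Codec SDK that cuvid/nvenc rely on has only ever been shipped by NVIDIA for Windows and Linux, so the platform itself may well be the real blocker here.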

  • Desktop grabbing with FFmpeg at 60 fps using NVENC codec

    30 April 2023, by Akatosh

    I'm having trouble recording my desktop at 60 FPS using the latest Windows build of FFmpeg with the NVENC codec. The metadata says the file is 60 fps, but when I play it back I can clearly see that it is not.

    The command line I use is the following:

    ffmpeg -y -rtbufsize 2000M -f gdigrab -framerate 60 -offset_x 0 -offset_y 0 -video_size 1920x1080 -i desktop -c:v h264_nvenc -preset:v fast -pix_fmt nv12 out.mp4

    I tried using a real-time buffer, using another DirectShow device, changing the profile or forcing a bitrate, but the video always seems to be at 30 fps.

    Recording the screen with NVIDIA's ShadowPlay works well, so I know it's feasible on my machine.

    Using FFprobe to check ShadowPlay's output file, I can see:

    Stream #0:0(und) : Video : h264 (High) (avc1 / 0x31637661), yuv420p(tv,
 smpte170m/smpte170m/bt470m), 1920x1080 [SAR 1:1 DAR 16:9], 4573 kb/s,
 59.38 fps, 240 tbr, 60k tbn, 120 tbc (default)

    But if I force my output to have the same bitrate and profile, I get:

    Stream #0:0(und) : Video : h264 (High) (avc1 / 0x31637661), yuv420p,
 1920x1080 [SAR 1:1 DAR 16:9], 5519 kb/s, 60 fps, 60 tbr, 15360 tbn,
 120 tbc (default)

    I can see that tbr and tbn are different, so I know my output is duplicating frames.
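
    For reference, the same rate fields can be read from either file with ffprobe; a sketch, assuming out.mp4 is the recording produced by the command above:

    # Container-declared frame rate, average frame rate and stream time base:
    ffprobe -v error -select_streams v:0 \
        -show_entries stream=r_frame_rate,avg_frame_rate,time_base \
        -of default=noprint_wrappers=1 out.mp4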


    For testing, all of my recordings had this 60 frame rate test page running in the background, and I could clearly see the difference.

    I know ShadowPlay probably does a lot more under the hood than FFmpeg with the same codec. I know OBS can do this quite easily, but I want to understand what I am doing wrong. Maybe it's some FFmpeg limitation?
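
    One way to check, on the file itself, how many of those 60 frames per second are actually distinct images is to let the mpdecimate filter drop near-duplicate frames and look at the final frame count; a sketch, again assuming out.mp4 is the recording from the command above:

    # Decode the recording, drop frames that are (nearly) identical to the
    # previous one, and discard the output; the last "frame=" counter shows
    # how many unique frames survived. Only about half surviving would point
    # to an effective 30 fps capture.
    ffmpeg -i out.mp4 -vf mpdecimate -an -f null - 2>&1 | grep "frame=" | tail -n 1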


    Full console output

    Using the -v trace option:

    [gdigrab @ 0000000002572cc0] Capturing whole desktop as 1920x1080x32 at (0,0)
[gdigrab @ 0000000002572cc0] Cursor pos (1850,750) -> (1842,741)
[gdigrab @ 0000000002572cc0] Probe buffer size limit of 5000000 bytes reached
[gdigrab @ 0000000002572cc0] Stream #0: not enough frames to estimate rate; consider increasing probesize
[gdigrab @ 0000000002572cc0] stream 0: start_time: 1467123648.275 duration: -9223372036854.775
[gdigrab @ 0000000002572cc0] format: start_time: 1467123648.275 duration: -9223372036854.775 bitrate=3981337 kb/s
Input #0, gdigrab, from 'desktop':
  Duration: N/A, start: 1467123648.275484, bitrate: 3981337 kb/s
    Stream #0:0, 1, 1/1000000: Video: bmp, 1 reference frame, bgra, 1920x1080 (0x0), 0/1, 3981337 kb/s, 60 fps, 1000k tbr, 1000k tbn, 1000k tbc
Successfully opened the file.
Parsing a group of options: output file out.mp4.
Applying option c:v (codec name) with argument h264_nvenc.
Applying option pix_fmt (set pixel format) with argument nv12.
Successfully parsed a group of options.
Opening an output file: out.mp4.
[file @ 0000000000e3a7c0] Setting default whitelist 'file,crypto'
Successfully opened the file.
detected 8 logical cores
[graph 0 input from stream 0:0 @ 000000000257ec00] Setting 'video_size' to value '1920x1080'
[graph 0 input from stream 0:0 @ 000000000257ec00] Setting 'pix_fmt' to value '30'
[graph 0 input from stream 0:0 @ 000000000257ec00] Setting 'time_base' to value '1/1000000'
[graph 0 input from stream 0:0 @ 000000000257ec00] Setting 'pixel_aspect' to value '0/1'
[graph 0 input from stream 0:0 @ 000000000257ec00] Setting 'sws_param' to value 'flags=2'
[graph 0 input from stream 0:0 @ 000000000257ec00] Setting 'frame_rate' to value '60/1'
[graph 0 input from stream 0:0 @ 000000000257ec00] w:1920 h:1080 pixfmt:bgra tb:1/1000000 fr:60/1 sar:0/1 sws_param:flags=2
[format @ 000000000257ffc0] compat: called with args=[nv12]
[format @ 000000000257ffc0] Setting 'pix_fmts' to value 'nv12'
[auto-inserted scaler 0 @ 00000000025802c0] Setting 'flags' to value 'bicubic'
[auto-inserted scaler 0 @ 00000000025802c0] w:iw h:ih flags:'bicubic' interl:0
[format @ 000000000257ffc0] auto-inserting filter 'auto-inserted scaler 0' between the filter 'Parsed_null_0' and the filter 'format'
[AVFilterGraph @ 0000000000e373c0] query_formats: 4 queried, 2 merged, 1 already done, 0 delayed
[auto-inserted scaler 0 @ 00000000025802c0] w:1920 h:1080 fmt:bgra sar:0/1 -> w:1920 h:1080 fmt:nv12 sar:0/1 flags:0x4
[h264_nvenc @ 0000000000e3ca20] Nvenc initialized successfully
[h264_nvenc @ 0000000000e3ca20] 1 CUDA capable devices found
[h264_nvenc @ 0000000000e3ca20] [ GPU #0 - < GeForce GTX 670 > has Compute SM 3.0 ]
[h264_nvenc @ 0000000000e3ca20] supports NVENC
[mp4 @ 0000000000e3b580] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
Output #0, mp4, to 'out.mp4':
  Metadata:
    encoder         : Lavf57.40.101
    Stream #0:0, 0, 1/15360: Video: h264 (h264_nvenc) (Main), 1 reference frame ([33][0][0][0] / 0x0021), nv12, 1920x1080, 0/1, q=-1--1, 2000 kb/s, 60 fps, 15360 tbn, 60 tbc
    Metadata:
      encoder         : Lavc57.47.100 h264_nvenc
    Side data:
      cpb: bitrate max/min/avg: 0/0/2000000 buffer size: 4000000 vbv_delay: -1
Stream mapping:
  Stream #0:0 -> #0:0 (bmp (native) -> h264 (h264_nvenc))
Press [q] to stop, [?] for help
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
Clipping frame in rate conversion by 0.000008
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[gdigrab @ 0000000002572cc0] Cursor pos (1850,750) -> (1842,741)
*** 35 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1850,750) -> (1842,741)
*** 7 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1850,649) -> (1850,649)
*** 1 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1858,535) -> (1858,535)
*** 3 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1859,454) -> (1859,454)
*** 2 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1865,384) -> (1865,384)
*** 2 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1846,348) -> (1846,348)
*** 3 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1770,347) -> (1770,347)
*** 2 dup!
[gdigrab @ 0000000002572cc0] Cursor pos (1545,388) -> (1545,388)
*** 4 dup!
frame=   69 fps=0.0 q=35.0 size=     184kB time=00:00:00.63 bitrate=2384.0kbits/[gdigrab @ 0000000002572cc0] Cursor pos (1523,389) -> (1519,378)

  • ffmpeg: avfilter's anull says "Rematrix is needed between stereo and 0 channels but there is not enough information to do it"

    12 July 2017, by kuanyui

    I'm trying to write a transcoder based on FFmpeg's official example, using ffmpeg 3.2.4 (the official prebuilt Win32 package), and to transcode a video whose audio source stream is stereo (from avformat's dshow).

    In the example code, anull is passed to avfilter_graph_parse_ptr() for the audio stream, and "time_base=1/44100:sample_rate=44100:sample_fmt=s16:channels=2:channel_layout=0x3" is passed to avfilter_graph_create_filter(); the error below then occurs in the subsequent avfilter_graph_config() call:

    [auto-inserted scaler 0 @ 32f77600] w:iw h:ih flags:'bilinear' interl:0
    [Parsed_null_0 @ 2e9d79a0] auto-inserting filter 'auto-inserted scaler 0' between the filter 'in' and the filter 'Parsed_null_0'
    [swscaler @ 3331bfe0] deprecated pixel format used, make sure you did set range correctly
    [auto-inserted scaler 0 @ 32f77600] w:1920 h:1080 fmt:yuvj422p sar:1/1 -> w:1920 h:1080 fmt:yuv420p sar:1/1 flags:0x2
    [libmp3lame @ 2e90a360] Channel layout not specified
    [in @ 3866e8a0] tb:1/44100 samplefmt:s16 samplerate:44100 chlayout:0x3
    [Parsed_anull_0 @ 330e8820] auto-inserting filter 'auto-inserted resampler 0' between the filter 'in' and the filter 'Parsed_anull_0'
    [auto-inserted resampler 0 @ 330e8dc0] [SWR @ 3809b620] Rematrix is needed between stereo and 0 channels but there is not enough information to do it
    [auto-inserted resampler 0 @ 330e8dc0] Failed to configure output pad on auto-inserted resampler 0

    I've googled for days but haven't found any clue. Isn't anull supposed to simply "pass the audio source unchanged to the output"? Why does libav want to resample stereo to 0 channels? What's going wrong?
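
    Judging from the log, the "0 channels" side is the output of the graph rather than the dshow source: the "Channel layout not specified" line from libmp3lame suggests that the encoder context, and hence the buffersink fed from it, has no channel layout set, so the auto-inserted resampler has no target layout to rematrix to. A command-line analogue of pinning that layout explicitly, as a sketch with placeholder file names (input.avi, out.mp3) that are not from the question:

    # Appending aformat after anull pins the channel layout that the
    # auto-inserted resampler would otherwise have to guess:
    ffmpeg -i input.avi -af "anull,aformat=channel_layouts=stereo" -c:a libmp3lame out.mp3

    In the API-based transcoder, the analogous change would be making sure the encoder context's channel layout is non-zero before the filter graph is configured.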