Advanced search

Media (0)

Keyword: - Tags -/flash

No media matching your criteria is available on this site.

Other articles (48)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

  • Adding notes and captions to images

    7 February 2011, by

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (9346)

  • Files created with "ffmpeg hevc_nvenc" do not play on TV. (with video codec SDK 9.1 of nvidia)

    29 January 2020, by Dashhh

    Problem

    • Files created with hevc_nvenc do not play on the TV (Samsung smart TV, model unknown).
      My ffmpeg build configuration is below.

    FFmpeg build conf

    $ ffmpeg -buildconf
       --enable-cuda
       --enable-cuvid
       --enable-nvenc
       --enable-nonfree
       --enable-libnpp
       --extra-cflags=-I/path/cuda/include
       --extra-ldflags=-L/path/cuda/lib64
       --prefix=/prefix/ffmpeg_build
       --pkg-config-flags=--static
       --extra-libs='-lpthread -lm'
       --extra-cflags=-I/prefix/ffmpeg_build/include
       --extra-ldflags=-L/prefix/ffmpeg_build/lib
       --enable-gpl
       --enable-nonfree
       --enable-version3
       --disable-stripping
       --enable-avisynth
       --enable-libass
       --enable-libfontconfig
       --enable-libfreetype
       --enable-libfribidi
       --enable-libgme
       --enable-libgsm
       --enable-librubberband
       --enable-libshine
       --enable-libsnappy
       --enable-libssh
       --enable-libtwolame
       --enable-libwavpack
       --enable-libzvbi
       --enable-openal
       --enable-sdl2
       --enable-libdrm
       --enable-frei0r
       --enable-ladspa
       --enable-libpulse
       --enable-libsoxr
       --enable-libspeex
       --enable-avfilter
       --enable-postproc
       --enable-pthreads
       --enable-libfdk-aac
       --enable-libmp3lame
       --enable-libopus
       --enable-libtheora
       --enable-libvorbis
       --enable-libvpx
       --enable-libx264
       --enable-libx265
       --disable-ffplay
       --enable-libopenjpeg
       --enable-libwebp
       --enable-libxvid
       --enable-libvidstab
       --enable-libopenh264
       --enable-zlib
       --enable-openssl

    ffmpeg Command

    • The FFmpeg encoding command:
    ffmpeg -ss 1800 -vsync 0 -hwaccel cuvid -hwaccel_device 0 \
    -c:v h264_cuvid -i /data/input.mp4 -t 10 \
    -filter_complex "\
    [0:v]hwdownload,format=nv12,format=yuv420p,\
    scale=iw*2:ih*2" -gpu 0 -c:v hevc_nvenc -pix_fmt yuv444p16le -preset slow -rc cbr_hq -b:v 5000k -maxrate 7000k -bufsize 1000k -acodec aac -ac 2 -dts_delta_threshold 1000 -ab 128k -flags global_header ./makevideo_nvenc_hevc.mp4

    Full log for this command: check this full log.

    The reason for adding the "-color_*" options to the command is as follows.

    • To produce HDR video (bt2020 + smpte2084) using the NVIDIA hardware encoder. (I'm studying how to make HDR videos; I'm not sure this is the right approach.)
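    For reference, a minimal sketch (not the author's exact command; the input path, bitrate and output name are placeholders) of where the -color_* flags are typically attached when encoding with hevc_nvenc:

    # hedged example: tag the output stream with BT.2020 / SMPTE 2084 color metadata
    ffmpeg -i input.mp4 -c:v hevc_nvenc -preset slow -b:v 5000k \
    -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc \
    -color_range tv -c:a copy output_hdr.mp4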

    How can I make a video using ffmpeg hevc_nvenc and have it play on the TV?


    Things I've done

    Here's what I've found about why it doesn't work:
    - The header information is not properly included in the resulting video file, so I used a program called nvhsp to add SEI and VUI information inside the video. See below for the commands and logs used.

    nvhsp is an open-source tool for writing VUI and SEI bitstrings into raw video. nvhsp link

    # make rawvideo for nvhsp
    $  ffmpeg -vsync 0 -hwaccel cuvid -hwaccel_device 0 -c:v h264_cuvid \
    -i /data/input.mp4 -t 10 \
    -filter_complex "[0:v]hwdownload,format=nv12,\
    format=yuv420p,scale=iw*2:ih*2" \
    -gpu 0 -c:v hevc_nvenc -f rawvideo output_for_nvhsp.265

    # use nvhsp
    $ python nvhsp.py ./output_for_nvhsp.265 -colorprim bt2020 \
    -transfer smpte-st-2084 -colormatrix bt2020nc \
    -maxcll "1000,300" -videoformat ntsc -full_range tv \
    -masterdisplay "G (13250,34500) B (7500,3000 ) R (34000,16000) WP (15635,16450) L (10000000,1)" \
    ./after_nvhsp_proc_output.265

    Parsing the infile:

    ==========================

    Prepending SEI data
    Starting new SEI NALu ...
    SEI message with MaxCLL = 1000 and MaxFall = 300 created in SEI NAL
    SEI message Mastering Display Data G (13250,34500) B (7500,3000) R (34000,16000) WP (15635,16450) L (10000000,1) created in SEI NAL
    Looking for SPS ......... [232, 22703552]
    SPS_Nals_addresses [232, 22703552]
    SPS NAL Size 488
    Starting reading SPS NAL contents
    Reading of SPS NAL finished. Read 448 of SPS NALu data.

    Making modified SPS NALu ...
    Made modified SPS NALu-OK
    New SEI prepended
    Writing new stream ...
    Progress: 100%
    =====================
    Done!

    File nvhsp_after_output.mp4 created.

    # after process
    $ ffmpeg -y -f rawvideo -r 25 -s 3840x2160 -pix_fmt yuv444p16le -color_primaries bt2020 -color_trc smpte2084  -colorspace bt2020nc -color_range tv -i ./1/after_nvhsp_proc_output.265 -vcodec copy  ./1/result.mp4 -hide_banner

    Truncating packet of size 49766400 to 3260044
    [rawvideo @ 0x40a6400] Estimating duration from bitrate, this may be inaccurate
    Input #0, rawvideo, from './1/nvhsp_after_output.265':
     Duration: N/A, start: 0.000000, bitrate: 9953280 kb/s
       Stream #0:0: Video: rawvideo (Y3[0][16] / 0x10003359), yuv444p16le(tv, bt2020nc/bt2020/smpte2084), 3840x2160, 9953280 kb/s, 25 tbr, 25 tbn, 25 tbc
    [mp4 @ 0x40b0440] Could not find tag for codec rawvideo in stream #0, codec not currently supported in container
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
       Last message repeated 1 times
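    A possible reading of that failure, offered as an assumption rather than a verified fix: -f rawvideo tells ffmpeg the .265 file contains uncompressed frames, which cannot be stream-copied into MP4. A hedged sketch of the remux step using ffmpeg's raw HEVC demuxer instead (filenames follow the example above):

    # hedged example: treat the input as a raw HEVC elementary stream and remux it
    ffmpeg -y -f hevc -framerate 25 -i ./1/after_nvhsp_proc_output.265 \
    -c:v copy -tag:v hvc1 ./1/result.mp4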

    Goal

    • I want metadata to be generated correctly when encoding a video with hevc_nvenc.

    • I want to create a video with hevc_nvenc and play it as HDR video on a smart TV with 10-bit color depth support.


    Additional

    • Is it normal for ffmpeg hevc_nvenc not to generate metadata in the resulting video file, or is it a bug?

    • Please refer to the image below. ('알 수 없음' means 'unknown'.)

      • If you need more detailed file info, check this Gist link (from ffprobe):
        hevc_nvenc metadata
    • However, if you encode a file with libx265, the attribute information is set correctly, as shown below.

      • If you need more detailed file info, check this Gist link:
        libx265 metadata
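    For comparison, a hedged illustration (not the author's command; filenames are placeholders and a 10-bit libx265 build is assumed) of how the mastering-display and MaxCLL values used above are typically passed to libx265:

    # hedged example: HDR10 metadata via -x265-params
    ffmpeg -i input.mp4 -c:v libx265 -pix_fmt yuv420p10le \
    -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc \
    -x265-params "master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):max-cll=1000,300" \
    -c:a copy output_x265_hdr.mp4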

    However, when using hevc_nvenc, all information is missing.

    • I used the options -show_streams -show_programs -show_format -show_data -of json -show_frames -show_log 56 with ffprobe.
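      For reference, the ffprobe invocation implied by that option list would look roughly like this (assuming the file being inspected is the one produced by the command above):

      # hedged example: the full ffprobe call
      ffprobe -show_streams -show_programs -show_format -show_data \
      -of json -show_frames -show_log 56 ./makevideo_nvenc_hevc.mp4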
  • "Missing reference picture" error when saving rtsp stream with ffmpeg

    4 March 2020, by Cédric Kamermans

    I want to record 10 seconds of video from an IP camera with ffmpeg. The output video looks fine, but I get a bunch of "Missing reference picture" errors in the log. This only happens at the beginning of the process. I also get the warning "circular_buffer_size is not supported on this build".

    I started off with the following command:

    -y -i rtsp://username:password@IP:88/videoMain -t 10 ffmpeg_capture.mp4

    But this resulted in the output being corrupted at the beginning.
    I found the following command on a forum, and it seems to fix that problem. The errors still remain, though.

    -y -i rtsp://username:password@IP:88/videoMain -b 900k -vcodec copy -r 60 -t 10 ffmpeg_capture.mp4
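    A hedged variant of that command (only a sketch, not a confirmed fix): forcing RTSP over TCP is a common way to avoid the UDP packet loss that often produces "Missing reference picture" messages, and the bitrate/framerate options are omitted here since the video is stream-copied.

    ffmpeg -y -rtsp_transport tcp -i rtsp://username:password@IP:88/videoMain \
    -c:v copy -t 10 ffmpeg_capture.mp4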

    One thing to note: we're currently using a C2 V3 IP camera. This model is just for testing; we will upgrade to a better model once we get this working.

    I want to clarify that I'm just beginning to use ffmpeg, so I don't quite understand it yet. It would be greatly appreciated if someone could provide an example of how to fix this problem.

    Thanks in advance!

  • How to enable hardware support for H.264 encoding on raspberry Pi 4B

    24 March 2021, by MSD Paul

    I am trying to enable hardware support for H.264 encoding on a Raspberry Pi 4B. I compiled the FFmpeg source with the following configuration:

    sudo ./configure --arch=armel --target-os=linux --enable-gpl --enable-omx --enable-omx-rpi --enable-nonfree

    following this guide: https://github.com/legotheboss/YouTube-files/wiki/(RPi)-Compile-FFmpeg-with-the-OpenMAX-H.264-GPU-acceleration

    After building and installing ffmpeg with that configuration, I get the following error when running the encoding command:
    [h264_omx @ 0x156b6e0] Using OMX.broadcom.video_encode
[h264_omx @ 0x156b6e0] OMX error 80001000
[h264_omx @ 0x156b6e0] err 80001018 (-2147479528) on line 561
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!

    Command used:

    ffmpeg -i /media/pi/pic_1_org.png -c:v h264_omx -c:a copy -b:v 1500k outputfile.mp4

    I just want to encode a single 4K image into an .mp4 file using the H.264 encoder.
    Please let me know how to resolve this issue.
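    A hedged workaround sketch, not a confirmed fix: the Pi's hardware H.264 encoder is commonly reported to be limited to roughly 1080p, so one thing to try is scaling the 4K image down and converting it to yuv420p before handing it to h264_omx (the scale target below is an assumption). If that still fails, the gpu_mem split in /boot/config.txt is another factor often mentioned alongside OMX error 80001000.

    # hedged example: downscale to a 1080p-class resolution before hardware encoding
    ffmpeg -i /media/pi/pic_1_org.png -vf "scale=1920:-2,format=yuv420p" \
    -c:v h264_omx -b:v 1500k outputfile.mp4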