
Media (21)

Keyword: - Tags - / Nine Inch Nails

Other articles (30)

  • Add notes and captions to images

    7 February 2011

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries:
    - FFmpeg: the main encoder; it transcodes almost every type of video and audio file into formats playable on the Internet. See this tutorial for its installation.
    - Oggz-tools: inspection tools for Ogg files.
    - Mediainfo: retrieves information from most video and audio formats.
    Additional, optional binaries: flvtool2: (...)
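    As a quick sanity check before processing, the presence of these binaries can be verified from a script. A minimal sketch (the helper name and the exact executable names are our own assumptions; adjust them to your installation):

```python
import shutil

def missing_binaries(names):
    """Return the subset of `names` not found on the current PATH."""
    return [name for name in names if shutil.which(name) is None]

# Executable names assumed from the list above; oggz-tools typically
# installs several oggz-* commands, so check the ones you actually use.
required = ["ffmpeg", "mediainfo"]
print(missing_binaries(required))
```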

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flash player Flowplayer is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (6450)

  • Video size optimization

    21 January 2020, by Heba Gamal Eldin

    I'm working on a task that should optimize a video's size before it is uploaded to the server in a web application.
    So I want my model to automatically optimize the size of each input video.

    I have tried different approaches; with FFmpeg, for example:
    I used libx265 and h264 as codecs; with some videos it increases the video size, with others it shrinks it only by a small ratio, and it takes very long to generate the output file.

    For example, with a video of size 8 MB:

    input = {input_name: None}
    output = {output_name: '-n -preset faster -vcodec libx265 -crf 28'}

    The output file is 10 MB.

    I also tried OpenCV, but the output videos aren't written; the output file appears with a size of 0 KB.

    For example, with an input video of resolution 1280×544, I want to downscale it with:

    cap = cv2.VideoCapture(file_name)
    cap.set(3, 640)   # CAP_PROP_FRAME_WIDTH
    cap.set(4, 480)   # CAP_PROP_FRAME_HEIGHT
    codec = cv2.VideoWriter_fourcc(*'XDIV')
    out = cv2.VideoWriter(output_file, codec, 28.0, (640, 480))
    while cap.isOpened():
        bol, frame = cap.read()
        if not bol:
            break
        out.write(frame)
        cv2.imshow('video', frame)

    I'm a little bit confused: which parameters of the input and output videos should I consider to get a meaningful size change for each specific video? Is it only the codec, width and height?

    What is the most effective approach to do this?

    Should I build a predictive model to estimate the suitable output file parameters, or is there a method that adjusts them automatically?

    If there's an illustrative example, please share it.
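    For what it's worth, cap.set(3, ...) / cap.set(4, ...) only affect capture devices, not decoding of a file, and cv2.VideoWriter silently drops frames whose size differs from the one it was opened with, which matches the 0 KB symptom above. A more dependable route for size reduction is usually to downscale and re-encode with ffmpeg at a fixed CRF. A hedged sketch that only assembles such a command line (the file names, the 480-line target and the helper name are illustrative, not from the question):

```python
def build_shrink_cmd(src, dst, target_height=480, crf=28):
    """Build an ffmpeg command that downscales and re-encodes with libx265.

    scale=-2:H keeps the aspect ratio while forcing an even width;
    a higher CRF means a smaller file at lower quality.
    """
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{target_height}",
        "-c:v", "libx265", "-preset", "faster", "-crf", str(crf),
        "-c:a", "copy",   # leave the audio stream untouched
        dst,
    ]

# Run it with subprocess.run(build_shrink_cmd(...), check=True).
print(" ".join(build_shrink_cmd("input.mp4", "smaller.mp4")))
```

    A higher -crf value (e.g. 30-32) trades more quality for smaller files; note that re-encoding at the same resolution can legitimately produce a file larger than an already heavily compressed source, which may explain the 10 MB result above.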

  • How to solve javacv ExceptionInInitializerError [closed]

    6 June, by shihui wei

    I want to combine multiple images into a GIF. I chose FFmpeg, and since there are Java libraries that bundle the FFmpeg binaries, I chose JavaCV.

    First, I added the JavaCV dependency to my pom file:

        <dependency>
            <groupId>org.bytedeco</groupId>
            <artifactId>javacv-platform</artifactId>
            <version>1.5.11</version>
        </dependency>

    Then I wrote code to assemble a GIF from multiple images:

    public static byte[] encodeGif(List<BufferedImage> frames, int frameDelayMs, boolean loopForever) {

        if (frames == null || frames.isEmpty()) {
            throw new IllegalArgumentException("frames must not be empty");
        }
        int width = frames.get(0).getWidth();
        int height = frames.get(0).getHeight();

        // ByteArrayOutputStream + FFmpegFrameRecorder
        try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {

            try (FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(baos, width, height)) {
                recorder.setFormat("gif");
                recorder.setFrameRate(1000.0 / frameDelayMs);

                recorder.setOption("loop", loopForever ? "0" : "1");

                recorder.start();

                Java2DFrameConverter converter = new Java2DFrameConverter();
                for (BufferedImage img : frames) {
                    // convert the BufferedImage into a Frame
                    Frame frame = converter.convert(img);
                    recorder.record(frame);
                }

                recorder.stop();
            }
            return baos.toByteArray();
        } catch (Exception e) {
            log.error("FastGifUtil.encodeGif failed", e);
            throw new RuntimeException("Failed to generate GIF", e);
        }
    }

    Finally, I prepared data for a test. Below is my test code:

    public void testGenerateGif() {
        log.info(">>>>>>>>>>> start get bufferedImage <<<<<<<<<<<<");
        List<BufferedImage> bufferedImages = batchTaskUtils.batchIOTask(urls, url -> {
            byte[] byteData = imageClientUtils.byteData(url);
            return OpencvUtils.byteToBufferedImage(byteData);
        });
        log.info(">>>>>>>>>>> start generate gif <<<<<<<<<<<<");
        long time = System.currentTimeMillis();
        byte[] bytes = GifUtils.encodeGif(bufferedImages, 50, true);
        log.info("{}", System.currentTimeMillis() - time);
        log.info(">>>>>>>>>>> start upload gif <<<<<<<<<<<<");
        String upload = upload(bytes);
        log.info("{}", upload);
    }

    However, I encountered a difficult problem: I cannot load the FFmpegFrameRecorder class. The exception is:

    java.lang.ExceptionInInitializerError
        at org.bytedeco.javacv.FFmpegFrameRecorder.<clinit>(FFmpegFrameRecorder.java:356)
        at com.kuaishou.qa.utils.GifUtils.encodeGif(GifUtils.java:29)
        at com.kuaishou.qa.AnimationDiffTest.testGenerateGif(AnimationDiffTest.java:88)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)

  • Files created with "ffmpeg hevc_nvenc" do not play on TV. (with video codec SDK 9.1 of nvidia)

    29 January 2020, by Dashhh

    Problem

    • Files created with hevc_nvenc do not play on TV. (samsung smart tv, model unknown)
      My FFmpeg build configuration is below.

    FFmpeg build conf

    $ ffmpeg -buildconf
       --enable-cuda
       --enable-cuvid
       --enable-nvenc
       --enable-nonfree
       --enable-libnpp
       --extra-cflags=-I/path/cuda/include
       --extra-ldflags=-L/path/cuda/lib64
       --prefix=/prefix/ffmpeg_build
       --pkg-config-flags=--static
       --extra-libs='-lpthread -lm'
       --extra-cflags=-I/prefix/ffmpeg_build/include
       --extra-ldflags=-L/prefix/ffmpeg_build/lib
       --enable-gpl
       --enable-nonfree
       --enable-version3
       --disable-stripping
       --enable-avisynth
       --enable-libass
       --enable-libfontconfig
       --enable-libfreetype
       --enable-libfribidi
       --enable-libgme
       --enable-libgsm
       --enable-librubberband
       --enable-libshine
       --enable-libsnappy
       --enable-libssh
       --enable-libtwolame
       --enable-libwavpack
       --enable-libzvbi
       --enable-openal
       --enable-sdl2
       --enable-libdrm
       --enable-frei0r
       --enable-ladspa
       --enable-libpulse
       --enable-libsoxr
       --enable-libspeex
       --enable-avfilter
       --enable-postproc
       --enable-pthreads
       --enable-libfdk-aac
       --enable-libmp3lame
       --enable-libopus
       --enable-libtheora
       --enable-libvorbis
       --enable-libvpx
       --enable-libx264
       --enable-libx265
       --disable-ffplay
       --enable-libopenjpeg
       --enable-libwebp
       --enable-libxvid
       --enable-libvidstab
       --enable-libopenh264
       --enable-zlib
       --enable-openssl

    ffmpeg Command

    • Command about FFmpeg encoding
    ffmpeg -ss 1800 -vsync 0 -hwaccel cuvid -hwaccel_device 0 \
    -c:v h264_cuvid -i /data/input.mp4 -t 10 \
    -filter_complex "\
    [0:v]hwdownload,format=nv12,format=yuv420p,\
    scale=iw*2:ih*2" -gpu 0 -c:v hevc_nvenc -pix_fmt yuv444p16le -preset slow -rc cbr_hq -b:v 5000k -maxrate 7000k -bufsize 1000k -acodec aac -ac 2 -dts_delta_threshold 1000 -ab 128k -flags global_header ./makevideo_nvenc_hevc.mp4

    Full log about This Command - check this full log

    The reason for adding the "-color_" options to the command is as follows.

    • HDR video, after creating a bt2020 + smpte2084 video using the NVIDIA hardware encoder. (I'm studying how to make HDR videos; I'm not sure if this is right.)

    How can I make a video using ffmpeg hevc_nvenc and have it play on TV?
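    One avenue worth trying first (hedged: whether hevc_nvenc writes these values into the bitstream's VUI depends on the ffmpeg version, and older builds did not) is to request the color metadata as output options at encode time, rather than patching the stream afterwards with nvhsp. A sketch that only assembles the command line; the paths are illustrative:

```python
def build_hdr_nvenc_cmd(src, dst):
    """Assemble an ffmpeg hevc_nvenc command that requests BT.2020/PQ
    color metadata on the output stream. The flag combination mirrors
    the color properties used later in the question; whether the
    encoder honors them must be verified with ffprobe on your build."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "hevc_nvenc", "-preset", "slow",
        "-color_primaries", "bt2020",
        "-color_trc", "smpte2084",
        "-colorspace", "bt2020nc",
        "-color_range", "tv",
        dst,
    ]

print(" ".join(build_hdr_nvenc_cmd("input.mp4", "hdr_out.mp4")))
```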


    Things I've done

    Here's what I've researched about why it doesn't work:
    - The header information is not properly included in the resulting video file, so I used a program called nvhsp to add SEI and VUI information inside the video. See below for the commands and logs used.

    nvhsp is an open-source tool for writing VUI and SEI bit strings into raw video. nvhsp link

    # make rawvideo for nvhsp
    $  ffmpeg -vsync 0 -hwaccel cuvid -hwaccel_device 0 -c:v h264_cuvid \
    -i /data/input.mp4 -t 10 \
    -filter_complex "[0:v]hwdownload,format=nv12,\
    format=yuv420p,scale=iw*2:ih*2" \
    -gpu 0 -c:v hevc_nvenc -f rawvideo output_for_nvhsp.265

    # use nvhsp
    $ python nvhsp.py ./output_for_nvhsp.265 -colorprim bt2020 \
    -transfer smpte-st-2084 -colormatrix bt2020nc \
    -maxcll "1000,300" -videoformat ntsc -full_range tv \
    -masterdisplay "G (13250,34500) B (7500,3000 ) R (34000,16000) WP (15635,16450) L (10000000,1)" \
    ./after_nvhsp_proc_output.265

    Parsing the infile:

    ==========================

    Prepending SEI data
    Starting new SEI NALu ...
    SEI message with MaxCLL = 1000 and MaxFall = 300 created in SEI NAL
    SEI message Mastering Display Data G (13250,34500) B (7500,3000) R (34000,16000) WP (15635,16450) L (10000000,1) created in SEI NAL
    Looking for SPS ......... [232, 22703552]
    SPS_Nals_addresses [232, 22703552]
    SPS NAL Size 488
    Starting reading SPS NAL contents
    Reading of SPS NAL finished. Read 448 of SPS NALu data.

    Making modified SPS NALu ...
    Made modified SPS NALu-OK
    New SEI prepended
    Writing new stream ...
    Progress: 100%
    =====================
    Done!

    File nvhsp_after_output.mp4 created.

    # after process
    $ ffmpeg -y -f rawvideo -r 25 -s 3840x2160 -pix_fmt yuv444p16le -color_primaries bt2020 -color_trc smpte2084  -colorspace bt2020nc -color_range tv -i ./1/after_nvhsp_proc_output.265 -vcodec copy  ./1/result.mp4 -hide_banner

    Truncating packet of size 49766400 to 3260044
    [rawvideo @ 0x40a6400] Estimating duration from bitrate, this may be inaccurate
    Input #0, rawvideo, from './1/nvhsp_after_output.265':
     Duration: N/A, start: 0.000000, bitrate: 9953280 kb/s
       Stream #0:0: Video: rawvideo (Y3[0][16] / 0x10003359), yuv444p16le(tv, bt2020nc/bt2020/smpte2084), 3840x2160, 9953280 kb/s, 25 tbr, 25 tbn, 25 tbc
    [mp4 @ 0x40b0440] Could not find tag for codec rawvideo in stream #0, codec not currently supported in container
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
       Last message repeated 1 times
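    The mux failure above is expected: "-f rawvideo" declares the input to be uncompressed frames, and raw video cannot be stored in MP4 with "-vcodec copy". Since the nvhsp output is actually an HEVC elementary stream, one possible fix (an untested sketch; file names are taken from the commands above) is to read it with ffmpeg's raw HEVC demuxer and copy the bitstream unchanged:

```python
def build_mux_cmd(src_265, dst_mp4, fps=25):
    """Wrap a raw HEVC elementary stream into MP4 without re-encoding.

    "-f hevc" selects ffmpeg's raw HEVC demuxer; "-c:v copy" keeps the
    bitstream (including any SEI/VUI added by nvhsp) untouched.
    """
    return [
        "ffmpeg", "-y",
        "-f", "hevc", "-r", str(fps), "-i", src_265,
        "-c:v", "copy",
        dst_mp4,
    ]

print(" ".join(build_mux_cmd("after_nvhsp_proc_output.265", "result.mp4")))
```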

    Goal

    • I want metadata to be generated normally when encoding a video through hevc_nvenc.

    • I want to create a video through hevc_nvenc and play HDR video on a smart TV with 10-bit color depth support.


    Additional

    • Is it normal for ffmpeg hevc_nvenc not to generate metadata in the resulting video file, or is it a bug?

    • Please refer to the image below. ('알 수 없음' means 'unknown')

      • if you need more detail file info, check this Gist Link (by ffprobe)
        hevc_nvenc metadata
    • However, if you encode a file in libx265, the attribute information is entered correctly as shown below.

      • if you need more detail file info, check this Gist Link
        libx265 metadata

    However, when using hevc_nvenc, all information is missing.

    • I used the options -show_streams -show_programs -show_format -show_data -of json -show_frames -show_log 56 with ffprobe.