Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (28)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Making files available

    14 April 2011, by

    By default, when first set up, MediaSPIP does not let visitors download files, whether they are originals or the result of their transformation or encoding; it only lets them be viewed.
    However, it is possible, and easy, to allow visitors to access these documents in various forms.
    All of this is handled on the skeleton configuration page: go to the channel's administration area and choose, in the navigation, (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    For a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

On other sites (2218)

  • Docker Image for Torchvision and Torchaudio with FFmpeg and Nvidia headers

    22 July 2024, by Felipe Marra

    I'm trying to perform GPU video encoding/decoding using PyTorch. This means compiling FFmpeg from source along with Nvidia codec headers.

    


    Currently, my Docker image looks like this:

    


    FROM nvcr.io/nvidia/pytorch:24.06-py3

RUN apt-get -yqq update && \
    apt-get install -yq --no-install-recommends ca-certificates expat libgomp1 && \
    apt-get autoremove -y && \
    apt-get clean -y

RUN apt-get install -y bash \
    autoconf \
    automake \
    build-essential \
    cmake \
    git-core \
    libass-dev \
    libfreetype6-dev \
    libgnutls28-dev \
    libmp3lame-dev \
    libnuma1 \
    libnuma-dev \
    libsdl2-dev \
    libtool \
    libva-dev \
    libvdpau-dev \
    libvorbis-dev \
    libxcb1-dev \
    libxcb-shm0-dev \
    libxcb-xfixes0-dev \
    libc6 \
    libc6-dev \
    meson \
    ninja-build \
    pkg-config \
    texinfo \
    unzip \
    wget \
    yasm \
    zlib1g-dev && \
    apt-get -yqq update

# Miniconda
COPY .devcontainer/env.yml .

RUN mkdir -p ~/miniconda3 && \
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh && \
    bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3 && \
    rm -rf ~/miniconda3/miniconda.sh && \
    ~/miniconda3/bin/conda init bash && \
    ~/miniconda3/bin/conda init zsh && \
    ~/miniconda3/bin/conda env update --file env.yml

# FFMPEG
RUN git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git && \
    cd nv-codec-headers && \
    make install

RUN git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg/ && \
    cd ffmpeg && \
    ./configure \
    --prefix="$CONDA_PREFIX" \
    --enable-nonfree \
    --enable-cuda-nvcc \
    --enable-libnpp \
    --extra-cflags=-I/usr/local/cuda/include \
    --extra-ldflags=-L/usr/local/cuda/lib64 \
    --disable-static \
    --enable-shared && \
    make -k -j 8 && \
    make install && \
    ldconfig

ENV LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH


    


    The env.yml contains the following:

    


    name: base
channels:
  - pytorch
  - nvidia
  - conda-forge
  - defaults
dependencies:
  - pytorch=2.3.1=py3.12_cuda12.1_cudnn8.9.2_0
  - torchvision=0.18.1=py312_cu121
  - torchaudio=2.3.1=py312_cu121
  - pytorch-cuda=12.1=ha16c6d3_5


    


    To test the image, I'm running:

    


    import torchvision
from torchaudio.utils import ffmpeg_utils

print("Library versions:")
print(ffmpeg_utils.get_versions())
print("\nBuild config:")
print(ffmpeg_utils.get_build_config())
print("\nDecoders:")
print([k for k in ffmpeg_utils.get_video_decoders().keys() if "cuvid" in k])
print("\nEncoders:")
print([k for k in ffmpeg_utils.get_video_encoders().keys() if "nvenc" in k])

torchvision.set_video_backend('cuda')
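
    As a quick way to exercise that check against the built image, something like the following could work; the image tag is a placeholder, the Dockerfile is assumed to live under .devcontainer/, and --gpus all requires the NVIDIA Container Toolkit on the host:

# Build from the repository root so that COPY .devcontainer/env.yml resolves.
docker build -t torch-ffmpeg-test -f .devcontainer/Dockerfile .

# Run a version check inside the container with GPU access.
# Note: which "python" this resolves to depends on whether the conda
# environment from env.yml is actually on PATH in a non-interactive shell.
docker run --rm --gpus all torch-ffmpeg-test \
    python -c "from torchaudio.utils import ffmpeg_utils; print(ffmpeg_utils.get_versions())"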


    


    I've also created this repo so that other people will be able to just run this image once its problems get solved.

    


    What is happening is that, outside the Conda environment, FFmpeg is configured as expected, but inside it I'm getting the following error:

    


    libopenh264.so.5: cannot open shared object file: No such file or directory


    


    By following this comment on a torchvision issue, I was able to get ffprobe -hide_banner -decoders | grep h264 and ffmpeg -hide_banner -encoders | grep 264 to yield the expected outputs inside the Conda environment, as shown in torchaudio's documentation. But then torchaudio wasn't able to find FFmpeg.
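
    For reference, a small diagnostic sketch for this kind of missing-soname error; it assumes the Conda environment is active and that its FFmpeg libraries live under $CONDA_PREFIX/lib, and the final symlink is only a stopgap that can mask a genuine ABI mismatch:

# Which library is actually asking for libopenh264.so.5?
ldd "$CONDA_PREFIX"/lib/libavcodec.so.* | grep -i openh264

# Which openh264 sonames does the environment actually ship?
ls -l "$CONDA_PREFIX"/lib | grep -i openh264

# Option 1: install/refresh openh264 from conda-forge
conda install -c conda-forge openh264

# Option 2 (stopgap): point the missing soname at the library that is present
ln -s "$CONDA_PREFIX"/lib/libopenh264.so "$CONDA_PREFIX"/lib/libopenh264.so.5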

    


    I'm new to the whole ecosystem (Linux, Docker and Torch) and would appreciate it if someone could help me build this image, as I think something like this should really be provided officially by PyTorch.
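
    One more thing worth checking here: torchaudio loads the FFmpeg libraries dynamically at runtime, so it only finds builds whose libav* libraries are visible to the loader (and whose major versions it supports). A rough sketch of what to inspect inside the environment:

# Which libav* runtime libraries can the dynamic loader see, and from where?
ldconfig -p | grep -E 'libav(util|codec|format|filter|device)'

# Libraries installed into the Conda prefix itself
ls "$CONDA_PREFIX"/lib | grep -E '^libav(util|codec|format)'

# The extra search path the custom build relies on
echo "$LD_LIBRARY_PATH"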

    


  • Use Nvidia hardware acceleration to merge WebMs with PNGs in FFmpeg

    2 February, by Joshi234

    I need to merge around 18000 WebMs with PNGs, but with software encoding it's really slow, so I'm trying to use hardware acceleration to speed this up.

    


    I've tried a lot of different approaches, but none of them seems to work, and I get generic errors for which I couldn't find anything relevant.

    


    This is the most "successful" attempt I've had:
ffmpeg -hwaccel cuvid -c:v vp9_cuvid -i lightray.webm -i card.png -filter_complex "[1]format=argb,colorchannelmixer=aa=0.35[ol];[0][ol]overlay" -colorspace 5 -c:a copy output.webm

    


    Which gives me this error:

    


    ffmpeg version n7.0.1-ffmpeg-windows-build-helpers Copyright (c) 2000-2024 the FFmpeg developers
  built with gcc 10.2.0 (GCC)
  configuration: --pkg-config=pkg-config --pkg-config-flags=--static --extra-version=ffmpeg-windows-build-helpers --enable-version3 --disable-debug --disable-w32threads --arch=x86_64 --target-os=mingw32 --cross-prefix=/home/runner/work/ffmpeg-stable-autobuild/ffmpeg-stable-autobuild/sandbox/cross_compilers/mingw-w64-x86_64/bin/x86_64-w64-mingw32- --enable-libcaca --enable-gray --enable-libtesseract --enable-fontconfig --enable-gmp --enable-libass --enable-libbluray --enable-libbs2b --enable-libflite --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libvorbis --enable-libwebp --enable-libzimg --enable-libzvbi --enable-libmysofa --enable-libopenjpeg --enable-libopenh264 --enable-libvmaf --enable-libsrt --enable-libxml2 --enable-opengl --enable-libdav1d --enable-gnutls --enable-libsvtav1 --enable-libvpx --enable-libaom --enable-nvenc --enable-nvdec --extra-libs=-lz --extra-libs=-lpng --extra-libs=-lm --extra-libs=-lfreetype --extra-libs=-lshlwapi --extra-libs=-lmpg123 --extra-libs=-lpthread --extra-cflags=-DLIBTWOLAME_STATIC --extra-cflags=-DMODPLUG_STATIC --extra-cflags=-DCACA_STATIC --enable-amf --enable-libmfx --enable-libaribcaption --enable-gpl --enable-frei0r --enable-librubberband --enable-libvidstab --enable-libx264 --enable-libx265 --enable-avisynth --enable-libaribb24 --enable-libxvid --enable-libdavs2 --enable-libxavs2 --enable-libxavs --extra-cflags='-mtune=generic' --extra-cflags=-O3 --enable-static --disable-shared --prefix=/home/runner/work/ffmpeg-stable-autobuild/ffmpeg-stable-autobuild/sandbox/cross_compilers/mingw-w64-x86_64/x86_64-w64-mingw32 --enable-nonfree --enable-libfdk-aac --enable-decklink
  libavutil      59.  8.100 / 59.  8.100
  libavcodec     61.  3.100 / 61.  3.100
  libavformat    61.  1.100 / 61.  1.100
  libavdevice    61.  1.100 / 61.  1.100
  libavfilter    10.  1.100 / 10.  1.100
  libswscale      8.  1.100 /  8.  1.100
  libswresample   5.  1.100 /  5.  1.100
  libpostproc    58.  1.100 / 58.  1.100
[vp9 @ 000001baa2891400] Invalid frame marker
    Last message repeated 3 times
[vp9 @ 000001baa2891400] Not all references are available
[vp9 @ 000001baa2891400] Invalid frame marker
    Last message repeated 1 times
[vp9 @ 000001baa2891400] Requested reference 6 not available
[vp9 @ 000001baa2891400] Invalid frame marker
    Last message repeated 4 times
[vp9 @ 000001baa2891400] Requested reference 6 not available
[vp9 @ 000001baa2891400] Invalid frame marker
    Last message repeated 3 times
[vp9 @ 000001baa2891400] Not all references are available
[vp9 @ 000001baa2891400] Invalid frame marker
    Last message repeated 12 times
[vp9 @ 000001baa2891400] Requested reference 6 not available
[vp9 @ 000001baa2891400] Invalid frame marker
    Last message repeated 4 times
[vp9 @ 000001baa2891400] Not all references are available
[vp9 @ 000001baa2891400] Invalid frame marker
    Last message repeated 1 times
[vp9 @ 000001baa2891400] Not all references are available
    Last message repeated 1 times
[vp9 @ 000001baa2891400] Invalid frame marker
[vp9 @ 000001baa2891400] Requested reference 6 not available
[vp9 @ 000001baa2891400] Invalid frame marker
    Last message repeated 1 times
[vp9 @ 000001baa2891400] Requested reference 6 not available
[vp9 @ 000001baa2891400] Invalid frame marker
    Last message repeated 1 times
[vp9 @ 000001baa2891400] Not all references are available


    


    I'm by no means an expert in ffmpeg or in video encoding in general, so I have no idea what this is supposed to mean.
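
    For comparison, a hedged sketch of a variant that keeps the VP9 decode in software (the part that appears to be failing here) and only moves the encode to the GPU. It assumes H.264 output in an MKV container is acceptable, since NVENC does not encode VP9, that the FFmpeg build has h264_nvenc enabled, and that a POSIX shell is available for the loop:

# Software decode, overlay, NVENC encode; "-preset p1" is the fastest NVENC preset.
ffmpeg -i lightray.webm -i card.png \
    -filter_complex "[1]format=argb,colorchannelmixer=aa=0.35[ol];[0][ol]overlay" \
    -c:v h264_nvenc -preset p1 -c:a copy output.mkv

# With ~18000 files, looping (or running a few instances in parallel, within the
# GPU's NVENC session limit) may matter more than per-file speed.
for f in *.webm; do
    ffmpeg -i "$f" -i card.png \
        -filter_complex "[1]format=argb,colorchannelmixer=aa=0.35[ol];[0][ol]overlay" \
        -c:v h264_nvenc -preset p1 -c:a copy "${f%.webm}_merged.mkv"
done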

    


  • I Need Help Making Our FIRST Robotics Competition Driver Station Video Feed Faster [closed]

    30 March, by Joshua Green

    I am currently using FFmpeg on a Raspberry Pi 4 Model B with an ArduCam UC-844 Rev. B as the camera. We do not need any audio, and I don't care about the video quality; all we need is for the stream to be as fast as possible. The video from the camera is streamed to the driver station via FFmpeg and picked up there with FFplay. Right now we are getting a delay that we would like to eliminate or significantly shorten. These are the commands we are using.

    


      

    • Raspberry Pi: ffmpeg -i /dev/video0 -c:v libx264 -crf 45 -maxrate 1M -bufsize 1.2M -preset ultrafast -tune zerolatency -filter:v fps=30 -f mpegts -omit_video_pes_length 0 udp://10.2.33.5:554

    • Driver Station: ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 -f mpegts -vf setpts=0 udp://10.2.33.5:554
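
    For context, a sketch of the kind of sender/receiver tweaks that usually shave latency off a pipeline like this; the V4L2 input options, resolution and framerate are assumptions, and the right values depend on the camera and the field network:

# Sender (Raspberry Pi): capture small frames, keep a short GOP with no B-frames,
# and push packets out of the muxer immediately.
# (h264_v4l2m2m, the Pi's hardware encoder, is also worth testing in place of libx264.)
ffmpeg -f v4l2 -framerate 30 -video_size 640x480 -i /dev/video0 \
    -c:v libx264 -preset ultrafast -tune zerolatency \
    -g 30 -bf 0 -crf 35 -maxrate 1M -bufsize 1.2M \
    -f mpegts -muxdelay 0 -muxpreload 0 -flush_packets 1 \
    udp://10.2.33.5:554

# Receiver (driver station): drop late frames instead of waiting for them.
ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 \
    -framedrop -sync ext -f mpegts -vf setpts=0 udp://10.2.33.5:554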