
Media (91)

Other articles (42)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    On installation, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Automatic backup of SPIP channels

    1 April 2010

    When setting up an open platform, it is important for hosts to have reasonably regular backups in order to guard against any potential problem.
    This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (documents, elements (...)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the skeleton; a page for the configuration of the site's home page; a page for the configuration of sectors.
    It also provides an additional page, shown only when certain plugins are enabled, for controlling their specific display and features (...)

On other sites (4277)

  • PowerShell script to Embed Album art in MP3 files - issues with accelerated audio files in ffmpeg

    4 March, by moshe baruchi

    I am trying to embed album art into an MP3 file using a PowerShell script along with the TagLib library. The MP3 file has been speed-boosted using FFmpeg. When I run the script on regular audio files, it works fine, but with the accelerated audio files, the album art is only visible when I open the file in MP3Tag or in VLC. It's not showing up in Windows audio players and definitely not in File Explorer.

    


    PowerShell code

    


param (
    [string]$audio,
    [string]$picture
)

# Load the TagLib# library and open the MP3 file
Add-Type -Path "TagLib-Sharp.dll"
$audioFilePath = $audio
$albumArtPath = $picture
$audioFile = [TagLib.File]::Create($audioFilePath)
$tag = $audioFile.Tag

# Replace any existing embedded pictures with the new album art
$newPictures = New-Object System.Collections.Generic.List[TagLib.Picture]
$newPictures.Add((New-Object TagLib.Picture($albumArtPath)))
$tag.Pictures = $newPictures.ToArray()
$audioFile.Save()


    


    Commands in cmd

    


    ffmpeg -i song.mp3 -i cover.jpg -map 0:a -map 1 -c:a copy -c:v mjpeg -metadata:s:v title="Album Cover" -metadata:s:v comment="Cover (front)" output_with_cover.mp3
del song.mp3
rename output_with_cover.mp3 song.mp3
powershell -ExecutionPolicy Bypass -File "albumart.ps1" -audio "song.mp3" -picture "picture.jpg"
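
    One thing worth checking (an assumption on my part, not something stated in the question): ffmpeg writes ID3v2.4 tags by default, and Windows File Explorer and some Windows players only read ID3v2.3. Forcing version 3 at the embedding step might make the art show up system-wide without any post-processing:

```shell
# Sketch: the same embedding command, but forcing ID3v2.3 tags,
# which Windows File Explorer can read (ffmpeg defaults to ID3v2.4).
ffmpeg -i song.mp3 -i cover.jpg -map 0:a -map 1 -c:a copy -c:v mjpeg \
  -id3v2_version 3 \
  -metadata:s:v title="Album Cover" -metadata:s:v comment="Cover (front)" \
  output_with_cover.mp3
```

    If this works, the separate TagLib# pass may be unnecessary for these files.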


    


  • How to prevent gray overlays, transparency issues, and similar shader defects when using gl-transition filters

    29 January 2021, by Soren Wray

    I compiled and installed a local build of ffmpeg with support for gl-transition adapted from the official guide for Ubuntu. The build is configured with all relevant packages and seems to be working as intended. See the code samples at the end.

    


    I know the gl-transition filter is installed because ./ffmpeg -v 0 -filters | grep gltransition outputs:

    


    T.. gltransition      VV->V      OpenGL blend transitions

    


    All sources were tested with the custom command string: ./ffmpeg -i ~/PATH/TO/INPUT1.mp4 -i ~/PATH/TO/INPUT2.mp4 -filter_complex "gltransition=duration=3:offset=1:source=/PATH/TO/EFFECT.glsl" -y ~/PATH/TO/OUTPUT.mp4, which gives a 3-second transition effect (duration=3) starting at 1 second (offset=1).
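
    One diagnostic worth trying (my assumption, untested; INPUT1.mp4, INPUT2.mp4, EFFECT.glsl and OUTPUT.mp4 are placeholders, not paths from the question): if the gray overlay comes from an alpha or pixel-format mismatch, explicitly converting both inputs to RGBA before the filter, and back to yuv420p afterwards for the encoder, would rule that out:

```shell
# Sketch (untested assumption): normalize both inputs to RGBA before
# gltransition, then convert back to yuv420p for the x264 encoder.
./ffmpeg -i INPUT1.mp4 -i INPUT2.mp4 -filter_complex \
  "[0:v]format=rgba[a];[1:v]format=rgba[b];[a][b]gltransition=duration=3:offset=1:source=EFFECT.glsl,format=yuv420p" \
  -y OUTPUT.mp4
```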

    


    I've been testing the code sources for various transition effects listed in the gl-transition gallery and have encountered some unusual gray overlays at the transition points, likely having to do with alpha channel transparency. In many cases, there are also shader or animation defects, e.g. with windowslice.glsl rendering only 1 slice when there are supposed to be 10, or again with WaterDrop.glsl, which simply fades out the clip in place of the intended ripple effect. Most complex animations seem to default to this monotonous gray overlay. I provide a gif example below for the GlitchedMemories.glsl transition.

    


    Example of Glitched Memories

    


    I couldn't locate any other reports of this particular issue online. The documentation for gl-transitions is sorely lacking and the Stack Exchange network has very little information about this custom filter. I don't know how to fix the problem. It could have something to do with the codec or pixel format used, or some quirk of my build, but the technical details are beyond me.

    


    Please note my compilation and configuration steps; perhaps the error is there:

    


    sudo apt-get update -qq && sudo apt-get -y install autoconf automake build-essential cmake git-core libass-dev libfreetype6-dev libgnutls28-dev libsdl2-dev libtool libunistring-dev libva-dev libvdpau-dev libvorbis-dev libxcb1-dev libxcb-shm0-dev libxcb-xfixes0-dev pkg-config texinfo wget yasm zlib1g-dev

mkdir -p ~/ffmpeg_sources ~/bin

sudo apt-get install nasm libx264-dev libx265-dev libnuma-dev libvpx-dev libfdk-aac-dev libmp3lame-dev libopus-dev

cd ~/ffmpeg_sources && wget -O ffmpeg-snapshot.tar.bz2 https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2 && tar xjvf ffmpeg-snapshot.tar.bz2

cd ~/ && git clone https://github.com/transitive-bullshit/ffmpeg-gl-transition.git


    


    Open ~/ffmpeg-gl-transition/vf_gltransition.c in an editor and delete the line: #define GL_TRANSITION_USING_EGL // remove this line if you don't want to use EGL
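
    That manual edit can also be done non-interactively; this sed one-liner is an equivalent sketch (GNU sed; a backup is kept as vf_gltransition.c.bak):

```shell
# Delete the line containing the EGL define; keep a .bak backup.
sed -i.bak '/GL_TRANSITION_USING_EGL/d' ~/ffmpeg-gl-transition/vf_gltransition.c
```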

    


    cd ~/ffmpeg_sources/ffmpeg && cp ~/ffmpeg-gl-transition/vf_gltransition.c libavfilter/

git apply ~/ffmpeg-gl-transition/ffmpeg.diff

PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
    --prefix="$HOME/ffmpeg_build" \
    --pkg-config-flags="--static" \
    --extra-cflags="-I$HOME/ffmpeg_build/include" \
    --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
    --extra-libs="-lpthread -lm" \
    --bindir="$HOME/bin" \
    --enable-gpl \
    --enable-opengl \
    --enable-gnutls \
    --enable-libass \
    --enable-libfdk-aac \
    --enable-libfreetype \
    --enable-libmp3lame \
    --enable-libopus \
    --enable-libvorbis \
    --enable-libvpx \
    --enable-libx264 \
    --enable-libx265 \
    --disable-shared \
    --enable-static \
    --enable-runtime-cpudetect \
    --enable-filter=gltransition \
    --extra-libs='-lGLEW -lglfw' \
    --enable-nonfree && \
PATH="$HOME/bin:$PATH" make

source ~/.profile


    


  • Combining two live RTMP streams into another RTMP stream, synchronization issues (with FFMPEG)

    12 June 2020, by Evk

    I'm trying to combine (side by side) two live video streams coming over RTMP, using the following ffmpeg command:

    



    ffmpeg -i "rtmp://first" -i "rtmp://first" -filter_complex "[0:v][1:v]xstack=inputs=2:layout=0_0|1920_0[stacked]" -map "[stacked]" -preset ultrafast -vcodec libx264 -tune zerolatency -an -f flv output.flv


    



    In this example I actually use the same input stream twice, because the issue is more visible that way. The issue is that, in the output, the two streams are out of sync by about 2-3 seconds. That is, since I have two identical inputs, I expect the left and right sides of the output to be exactly the same. Instead, the left side lags the right side by 2-3 seconds.

    



    What I believe is happening: ffmpeg connects to the inputs in order (I see this in the output log), and connecting to each one takes 2-3 seconds (maybe it waits for an I-frame; those streams have an I-frame interval of 3 seconds). It then probably buffers frames received from the first (already connected) input while connecting to the second one. By the time the second input is connected and frames from both are ready to go through the filter, the first input's buffer already holds 2-3 seconds of video, so the result is out of sync.

    



    Again, that's just my assumption. So, how can I achieve my goal? What I basically want is for ffmpeg to either discard all "old" frames received before BOTH inputs are connected, OR somehow insert "empty" (black?) frames for the second input while waiting for it to become available. I tried playing with various flags and with PTS (the setpts filter), but to no avail.
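
    One avenue sometimes suggested for this kind of drift (an assumption on my part, not a verified fix; rtmp://second stands in for the second stream, since the question reused the same stream twice for testing): stamp both live inputs with wall-clock timestamps so they share a single clock, letting xstack pair frames by real time regardless of when each connection was established:

```shell
# Sketch (assumption, untested): use wall-clock timestamps on both
# live inputs so frames are paired by real time, then reset the
# combined stream's timestamps to start at zero before encoding.
ffmpeg \
  -use_wallclock_as_timestamps 1 -i "rtmp://first" \
  -use_wallclock_as_timestamps 1 -i "rtmp://second" \
  -filter_complex "[0:v][1:v]xstack=inputs=2:layout=0_0|1920_0,setpts=PTS-STARTPTS[stacked]" \
  -map "[stacked]" -vcodec libx264 -preset ultrafast -tune zerolatency -an -f flv output.flv
```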