

Other articles (60)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support (Support audio et vidéo HTML5)

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFmpeg, the main encoder, which transcodes almost every type of video and audio file into formats playable on the web (see this tutorial for its installation); Oggz-tools, inspection tools for Ogg files; MediaInfo, which retrieves information from most video and audio formats;
    Complementary, optional binaries: flvtool2, (...)

On other sites (6415)

  • ffmpeg yields different results when used in the application

    5 May 2020, by xoail

    This is perplexing me. I am using ffmpeg to convert m4a to wav audio. When I use it through fluent-ffmpeg the output MIME type is audio/wave, while when I run it from the terminal I get audio/wav, for exactly the same commands. The application I am trying to play these files in is rejecting my audio/wave file.

    



    Using node (results in audio/wave):

    



    const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path; // note .path: the module exports { path, version }
    const ffmpeg = require('fluent-ffmpeg');
    ffmpeg()
      .setFfmpegPath(ffmpegPath)
      .input('source.m4a')
      .outputOptions(['-ac 1', '-ar 16000'])
      .save('out.wav');


    



    Using terminal (results in audio/wav):

    



    /same/path/to/static-ffmpeg-as-in-node-program/ffmpeg -i source.m4a -ac 1 -ar 16000 out.wav


    



    Both files are encoded as PCM S16 LE (s16l), 16 bits/sample, mono, at a 16 kHz sample rate. I have tried adding other flags and got the same results in both scenarios: -vn -acodec pcm_s16le -ar 16000 -ac 1 -f wav. It still doesn't work, and I have no idea why.

    



    I have also tried using spawn instead of fluent-ffmpeg, and got the same audio/wave file.
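    Since both files decode identically, the MIME difference presumably lives in the WAV headers or metadata chunks. A small shell sketch to compare the two outputs byte-for-byte and print the fmt-chunk format tag; it assumes the canonical WAV layout (fmt chunk right after the RIFF header), and the file names are hypothetical:

    ```shell
    # wav_fmt_tag prints the two format-tag bytes from a canonical WAV header
    # (01 00 = plain PCM, fe ff = WAVE_FORMAT_EXTENSIBLE, stored little-endian).
    # MIME sniffers often branch on this tag or on extra chunks ffmpeg writes.
    wav_fmt_tag() {
      od -An -tx1 -j 20 -N 2 "$1" | tr -d ' \n'   # bytes 20-21 of the header
    }

    # Usage (hypothetical file names):
    #   cmp -l node_out.wav terminal_out.wav | head   # list the first differing bytes
    #   wav_fmt_tag node_out.wav                      # 0100 for plain PCM
    ```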

    



    Any insight?

    


  • Playing large, 360 video files (4gb+), on Oculus Go, using Unity3D VideoPlayer

    15 April 2020, by SteenPetersen

    I am using Unity's VideoPlayer to play 360 degree video files. It's all working wonderfully and I can build to Oculus Go and Quest without issue.

    



    The problem is that if the video file exceeds a size of a little more than 4 GB then, once built, it won't play the file.

    



    Now this, to me, is odd behavior. The Oculus Go will play the video file if it is simply added to the gallery and played, which tells me the device is capable of playing it. I am a bit lost as to what to look at here and how to go about fixing it. The 4 GB limit reminded me of the FAT32 maximum file size, but I'm unsure whether that is related, since the gallery can play the video.

    



    So far I have worked around the problem by using FFmpeg with a higher constant rate factor (CRF), which lowers quality; the drop was hardly noticeable and it reduced the file size significantly, getting it under the limit. The problem is that some videos are quite long, and it seems I actually need some of them to be approximately 5 GB.
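    For reference, FAT32 caps a single file at 2^32 − 1 bytes, which matches the symptom. A small sketch, assuming GNU or BSD stat, that checks a file against that limit; the commented ffmpeg line illustrates the CRF-based shrink described above (path and CRF value are examples only):

    ```shell
    FAT32_LIMIT=4294967295                 # 2^32 - 1 bytes, the FAT32 maximum file size

    over_fat32_limit() {                   # exit status 0 when the file exceeds the limit
      local size
      size=$(stat -c%s "$1" 2>/dev/null || stat -f%z "$1")   # GNU stat, then BSD fallback
      [ "$size" -gt "$FAT32_LIMIT" ]
    }

    # Illustrative re-encode: raising -crf lowers quality and shrinks the file
    # (23 is libx264's default; 26-28 is often still visually close):
    #   ffmpeg -i input_360.mp4 -c:v libx264 -crf 26 -c:a copy output_360.mp4
    ```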

    



    Any insight is appreciated.

    


  • record Linux Wayland/DRM screen using ffmpeg's kmsgrab device with superimposed webcam

    7 February 2020, by nrdxp

    Setup: Linux, ffmpeg, using a Kaby Lake iGPU.

    I am capturing a running sway instance using the kmsgrab device, which requires a hardware backend to process the image coherently on my hardware; only VAAPI fits the bill. I want to overlay the webcam in the bottom right corner during encoding, but my attempts at manipulating the filter graph to accomplish this have been unsuccessful. This is ultimately for a Twitch/YouTube stream.

    Right now I am actually capturing the webcam into an SDL window and simply recording the screen with a separate instance of ffmpeg. This doesn't really solve my problem, since the window is easily hidden by workspace switching or by other windows.

    This is the workaround:

    #!/usr/bin/env zsh

    # record webcam and open it in sdl window
    ffmpeg -v quiet -hide_banner \
     -re -video_size 640x480 -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -i /dev/video0 \
     -vf 'format=nv12,hwupload' -c:v hevc_vaapi -f hevc - \
     | ffmpeg -v quiet -i - -f sdl2 - &

    # wait for webcam window to open
    until swaymsg -t get_tree | grep 'pipe:' &>/dev/null; do
     sleep 0.5
    done

    # position webcam in the bottom right corner of screen using sway
    swaymsg floating enable
    swaymsg resize set width 320 height 240
    swaymsg move position 1580 795
    swaymsg focus tiling

    #screencast
    ffmpeg -format bgra -framerate 60 -f kmsgrab -thread_queue_size 1024 -i - \
     -f alsa -ac 2 -thread_queue_size 1024 -i hw:0 \
     -vf 'hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' \
     -c:v h264_vaapi -g 120 -b:v 3M -maxrate 3M -pix_fmt vaapi_vld -c:a aac -b:a 96k -threads $(nproc) \
     output.mkv

    kill %1
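
    A possible single-process alternative would composite on the GPU with ffmpeg's overlay_vaapi filter (present in VAAPI-enabled builds). This is only a sketch, untested on real hardware: the device node, webcam size, and overlay coordinates are carried over from the workaround script, and the filter graph is an assumption about what the failed attempts were aiming for:

    ```shell
    # Sketch: map the kmsgrab frames to VAAPI, upload the webcam frames to the
    # same API, and composite with overlay_vaapi before a single h264_vaapi encode.
    record_with_overlay() {
      ffmpeg -format bgra -framerate 60 -f kmsgrab -thread_queue_size 1024 -i - \
        -video_size 640x480 -f v4l2 -i /dev/video0 \
        -f alsa -ac 2 -thread_queue_size 1024 -i hw:0 \
        -init_hw_device vaapi=va:/dev/dri/renderD128 -filter_hw_device va \
        -filter_complex '[0:v]hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12[screen];[1:v]format=nv12,hwupload[cam];[screen][cam]overlay_vaapi=x=1580:y=795[out]' \
        -map '[out]' -map 2:a \
        -c:v h264_vaapi -g 120 -b:v 3M -maxrate 3M -c:a aac -b:a 96k \
        output.mkv
    }
    ```

    Whether the derived device (from kmsgrab's DRM frames) and the explicit VAAPI device end up compatible for overlay_vaapi may depend on the ffmpeg build, so this needs verification on the target machine.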