
Other articles (111)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded to OGV and WebM (supported by HTML5) and to MP4 (supported by Flash).
    Audio files are encoded to Ogg (supported by HTML5) and to MP3 (supported by Flash).
    Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Automatic MediaSPIP installation script

    25 April 2011

    To work around the installation difficulties caused mainly by server-side software dependencies, an all-in-one bash installation script has been created to make this step easier on a server running a compatible Linux distribution.
    To use it, you need SSH access to your server and a "root" account, which will allow the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
      • images: png, gif, jpg, bmp and more
      • audio: MP3, Ogg, Wav and more
      • video: AVI, MP4, OGV, mpg, mov, wmv and more
      • text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (8576)

  • aaccoder: rewrite PNS implementation

    6 September 2015, by Rostislav Pehlivanov

    This commit rewrites the PNS implementation and significantly
    improves sonic quality.

    The previous implementation marked an incredibly large number
    of SFBs to predict when there was no need for this, which
    resulted in quite a large number of artifacts. The quantization
    was also incorrect (av_clip(4+log2f(...))), giving 3x the
    intensity for PNS values and leading to even more artifacts.

    This commit rewrites the PNS search function and introduces
    a major change: the PNS values are synthesized and compared
    to the current coefficients, in addition to passing through
    the revised checks to see whether PNS can be used.

    This reduces distortion and makes the current PNS implementation
    focus mainly on replacing low-power non-zero bands as well
    as on adding zeroed bands back.

    The current encoder's performance is good enough (especially with
    IS) that PNS isn't really required except to fill in the occasional
    few bands and to extend any zeroed high frequencies, so this
    combination, which is already enabled by default, gets as much
    quality as it can within the bits allowed.

    Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>

    • [DH] libavcodec/aaccoder.c
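
    To make the change described above more concrete, here is a simplified TypeScript sketch of the per-band decision the commit message outlines: synthesize the noise that PNS would substitute, compare it to the current coefficients, and only accept PNS where the substitution error is small. This is an illustration only, not the actual C code in libavcodec/aaccoder.c; the function name and the allowedDistortion parameter are hypothetical, and a real encoder would take that budget from its psychoacoustic model.

    // Sketch of the idea only: decide per scalefactor band whether PNS can be
    // used by synthesizing the noise PNS would substitute and measuring its
    // distortion against the current coefficients.
    function pnsLooksUseful(coeffs: number[], allowedDistortion: number): boolean {
      // Energy of the original spectral coefficients in this band
      const energy = coeffs.reduce((sum, c) => sum + c * c, 0);
      if (energy === 0) return true;  // a zeroed band can always be filled back in with noise

      // Synthesize the band as PNS would: random noise scaled to the band energy
      const noise = coeffs.map(() => Math.random() * 2 - 1);
      const noiseEnergy = noise.reduce((sum, n) => sum + n * n, 0);
      const scale = Math.sqrt(energy / noiseEnergy);

      // Compare the synthesized values to the current coefficients
      let distortion = 0;
      for (let i = 0; i < coeffs.length; i++) {
        const diff = coeffs[i] - noise[i] * scale;
        distortion += diff * diff;
      }

      // Accept PNS only where the substitution error stays within the band's
      // allowed distortion budget (placeholder here, not FFmpeg's criterion);
      // low-power bands pass far more easily than loud ones.
      return distortion <= allowedDistortion;
    }

    In this rough form, only low-power or zeroed bands tend to pass, which matches the commit's note that PNS ends up mainly replacing low-power non-zero bands and re-adding zeroed ones.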
  • ffmpeg itsoffset doesn't work with pcm audio and raw 264 video

    28 January 2019, by Danny

    I need to create an MP4 container with data from a hardware encoder. The encoder outputs PCM 16-bit signed audio and raw H.264 ES video frames.

    The ffmpeg command line I've got (below) works, but the audio and video are not in sync.

    From other posts I know that itsoffset only works with video and probably doesn't work with -c:v copy.

    I’ve confirmed that applying an itsoffset has no effect.

    Here's the command line. Any suggestions?

    One post suggested itsoffset works if you re-encode the video. But doing that needs CPU power and adds latency. (And what's the point of a hardware encoder then?)

    ffmpeg -f s16le -ar 44.1k -ac 2      -i Audio_20190110-165736.pcm \
          -fflags +genpts -itsoffset -5 -i Video_20190110-165736.264 \
          -c:v copy -c:a aac -b:a 128k \
          -f mp4 -movflags +faststart  output.mp4

    EDIT I

    Here is a link to the audio/video input files referenced in the above command.

  • Texture bound to texture unit 0 is not renderable

    2 December 2018, by Trying_To_Understand

    I followed this tutorial: https://github.com/phoboslab/jsmpeg. I have an open websocket that receives the data. Sometimes it takes a long time for ffmpeg (on a remote computer) to get the data from my IP camera, so I wrote a setTimeout function that waits 10 seconds to be "sure" that ffmpeg is receiving data from the IP camera. If I remove this setTimeout function, this error shows up:

    [.WebGL-0000020CFA5C04D0]RENDER WARNING: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering.

    This is my code for showing the stream to the client:

    // Ask the backend for this camera's websocket address
    this.dataService.getDataByParam(camera.CameraId.toString())
     .subscribe(
     (result: any) => {

       // Wait 10 s and hope that ffmpeg is producing data by then
       setTimeout(() => {
         this.ws = new WebSocket('ws://' + result);
         $(document).ready(() => {

           // Feed the websocket into the jsmpeg player rendering on the canvas
           this.player = new jsmpeg(this.ws, {
             canvas: document.getElementById("canvas"),
             autoplay: true,
             audio: true,
             pauseWhenHidden: false,
           });

         })
       }, 10000);
     },
     (error: Error) => {
       this.streamErrorMsg = "Problem with the server, please try again after some time.";
     });
    }

    How can I know for sure that ffmpeg has connected successfully to the camera and started converting the data, on the client side? I want to avoid writing the setTimeout function.
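
    One way to avoid the fixed timeout (a rough sketch, not tested against this setup): open a short-lived probe WebSocket first and treat its first message as evidence that ffmpeg is actually producing data, then close the probe and start the player on a fresh connection. The waitForStreamData helper below is hypothetical, and the approach assumes the relay serves the stream from a usable starting point (including any header data) to every newly connected client, which depends on the relay implementation you use.

    // Hypothetical helper: resolves once the relay at `url` has delivered at
    // least one chunk of data, i.e. ffmpeg is connected and streaming.
    function waitForStreamData(url: string, timeoutMs: number = 30000): Promise<void> {
      return new Promise<void>((resolve, reject) => {
        const probe = new WebSocket(url);
        const timer = setTimeout(() => {
          probe.close();
          reject(new Error('No stream data received within ' + timeoutMs + ' ms'));
        }, timeoutMs);

        probe.onmessage = () => {   // first chunk arrived: data is flowing
          clearTimeout(timer);
          probe.close();            // discard the probe connection
          resolve();
        };
        probe.onerror = () => {
          clearTimeout(timer);
          reject(new Error('Probe connection failed'));
        };
      });
    }

    // Used inside the subscribe callback instead of setTimeout(..., 10000):
    // waitForStreamData('ws://' + result)
    //   .then(() => {
    //     this.ws = new WebSocket('ws://' + result);  // fresh connection for the player
    //     this.player = new jsmpeg(this.ws, {
    //       canvas: document.getElementById("canvas"),
    //       autoplay: true,
    //       audio: true,
    //       pauseWhenHidden: false,
    //     });
    //   })
    //   .catch(() => this.streamErrorMsg = 'Stream not available yet, please try again.');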