Other articles (61)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player was created specifically for MediaSPIP and can easily be adapted to fit a chosen theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: its appearance can be fully customised to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name   Version name       Version number
    Debian              Squeeze            6.x.x
    Debian              Wheezy             7.x.x
    Debian              Jessie             8.x.x
    Ubuntu              Precise Pangolin   12.04 LTS
    Ubuntu              Trusty Tahr        14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

On other sites (10346)

  • avcodec/omx: Fix handling of fragmented buffers

    17 January 2019, by Dave Stevenson
    avcodec/omx: Fix handling of fragmented buffers
    

    See https://trac.ffmpeg.org/ticket/7687

    If an encoded frame is returned split across two or more
    IL buffers because of its size, there is a race between
    get_buffer failing (returning NULL, so a truncated frame
    is passed on) and IL returning the remaining part of the
    encoded frame.
    If get_buffer returns NULL, part of the frame is left behind
    in the codec and is collected on the next call, leaving a
    frame stuck in the codec. Repeat this enough times and the
    codec FIFO fills up and the pipeline stalls.

    A performance improvement in the Raspberry Pi firmware means
    that the timing has changed, and now frequently drops into the
    case where get_buffer returns NULL.

    Add code so that, should a buffer be received without
    OMX_BUFFERFLAG_ENDOFFRAME, get_buffer is called with wait
    set, so we wait for the remainder of the frame.
    This code has been made conditional on the Pi build in case
    other IL implementations don't handle ENDOFFRAME correctly.

    Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
    Signed-off-by: Aman Gupta <aman@tmm1.net>
    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DH] libavcodec/omx.c
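
    The fix described above boils down to: if an output IL buffer arrives without OMX_BUFFERFLAG_ENDOFFRAME set, keep fetching buffers in blocking mode until the buffer carrying the flag shows up. The C sketch below shows only that shape; it is an illustration, not the actual libavcodec/omx.c change, and get_output_buffer(), append_to_packet() and release_buffer() are hypothetical helpers standing in for the encoder's internals.

    /* Illustrative sketch only -- not the actual libavcodec/omx.c code. */
    #include <OMX_Core.h>
    #include <libavcodec/avcodec.h>

    /* Hypothetical helpers standing in for the encoder's internals. */
    extern OMX_BUFFERHEADERTYPE *get_output_buffer(void *ctx, int wait);
    extern void append_to_packet(AVPacket *pkt, OMX_BUFFERHEADERTYPE *buf);
    extern void release_buffer(void *ctx, OMX_BUFFERHEADERTYPE *buf);

    static int collect_encoded_frame(void *ctx, AVPacket *pkt)
    {
        /* Non-blocking poll: nothing being ready yet is not an error. */
        OMX_BUFFERHEADERTYPE *buf = get_output_buffer(ctx, 0);
        if (!buf)
            return AVERROR(EAGAIN);

        /* The frame may be split across several IL buffers.  Until the
         * buffer carrying OMX_BUFFERFLAG_ENDOFFRAME arrives, block for
         * the remainder instead of emitting a truncated packet. */
        while (!(buf->nFlags & OMX_BUFFERFLAG_ENDOFFRAME)) {
            append_to_packet(pkt, buf);
            release_buffer(ctx, buf);
            buf = get_output_buffer(ctx, 1);  /* wait for the rest of the frame */
            if (!buf)
                return AVERROR_INVALIDDATA;
        }
        append_to_packet(pkt, buf);
        release_buffer(ctx, buf);
        return 0;
    }
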
  • using pocketsphinx_continuous with a .wav file

    3 April 2013, by user2242131

    I am attempting to write an application that will allow a user to speak a small set of commands from a remote system and have them executed on my server, using pocketsphinx to parse the spoken text. When run locally with the microphone, pocketsphinx_continuous works perfectly no matter how I slur the words. But when importing the audio file and using ffmpeg to downsample it to a single-channel, 16-bit PCM file, it parses the first word without difficulty, then skips everything else and treats it as silence. I am confident that the problem is in the file format and not in the pocketsphinx configuration.

    Using command line
    ffmpeg -y -i Sound\AddSheet.wav -ac 1 -f s16le -acodec pcm_s16le -ar 16k AddTmp.wav
    in a batch file.

    The bottom of the output I get is:

    INFO: fsg_search.c(1407): Start node ADD.0:5:47
    INFO: fsg_search.c(1407): Start node <sil>.0:2:49
    INFO: fsg_search.c(1446): End node <sil>.126:128:305 (-486)
    INFO: fsg_search.c(1662): lattice start node <s>.0 end node <sil>.126
    INFO: ps_lattice.c(1352): Normalizer P(O) = alpha(<sil>:126:305) = -175371
    INFO: ps_lattice.c(1390): Joint P(O,S) = -176076 P(S|O) = -705
    000000000: ADD USER

    This is not what was said in the file. The words spoken in the file are "ADD SPREADSHEET", which is recognised perfectly when spoken into the same microphone without the intervening .wav file.

    I have tried increasing the audio volume and decreasing the background noise using sox:

    sox -v 3.0 Sound\%1 Sound\%1-loud.wav
    ffmpeg -i Sound\%1-loud.wav -vn -ss 00:00:00 -t 00:00:01 -y Sound\%1-noiseaud.wav
    sox Sound\%1-noiseaud.wav -n noiseprof Sound\%1-noise.prof
    sox Sound\%1 Sound\%1-clean.wav noisered sound\noise.prof 0.21
    ffmpeg -y -i Sound\%1-clean.wav -ac 1 -f s16le -acodec pcm_s16le -ar 16k AddTmp.wav

    with no noticeable effect on the final results.

    If you look at the output you will notice that fsg_search.c has found ADD as the start node, then silence for the remainder. Any help would be appreciated.
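
    For reference, ffmpeg's -f s16le output is headerless raw PCM regardless of the output file's extension, and such a raw 16 kHz, mono, 16-bit little-endian stream can be fed directly to the pocketsphinx C API. Below is a rough sketch of that file-based decode, not taken from the original post; it assumes the 5prealpha-era API, and the model paths are placeholders to adapt to the local installation.

    /* Rough sketch: decode a headerless 16 kHz, mono, 16-bit little-endian
     * PCM file with the pocketsphinx C API (5prealpha-style calls). */
    #include <stdio.h>
    #include <pocketsphinx.h>

    int main(void)
    {
        cmd_ln_t *config = cmd_ln_init(NULL, ps_args(), TRUE,
            "-hmm",  "/usr/local/share/pocketsphinx/model/en-us/en-us",
            "-lm",   "/usr/local/share/pocketsphinx/model/en-us/en-us.lm.bin",
            "-dict", "/usr/local/share/pocketsphinx/model/en-us/cmudict-en-us.dict",
            "-samprate", "16000",
            NULL);
        ps_decoder_t *ps = ps_init(config);
        /* With "-f s16le" ffmpeg writes raw samples with no WAV header,
         * so the file is read here as a plain sample stream. */
        FILE *fh = fopen("AddTmp.raw", "rb");
        int16 buf[512];
        size_t nsamp;
        int32 score;
        const char *hyp;

        if (!ps || !fh)
            return 1;

        ps_start_utt(ps);
        while ((nsamp = fread(buf, sizeof(int16), 512, fh)) > 0)
            ps_process_raw(ps, buf, nsamp, FALSE, FALSE);
        ps_end_utt(ps);

        hyp = ps_get_hyp(ps, &score);
        printf("hypothesis: %s\n", hyp ? hyp : "(none)");

        fclose(fh);
        ps_free(ps);
        cmd_ln_free_r(config);
        return 0;
    }
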

  • How to use the sunlubo/SwiftFFmpeg library with Swift on iOS?

    29 March 2022, by Ankur Pachauri

    I am trying to use the https://github.com/sunlubo/SwiftFFmpeg library on iOS to convert an mp4 file to mpegts, but I am getting the error below when trying to add it using SPM:


    (screenshot of the error)


    I also get a "No such module 'CFFmpeg'" error when trying to add the library manually, in the way the demo project does it (https://github.com/sunlubo/SwiftFFmpegDemo-iOS). Can anyone provide the steps or an example of how to use this library on iOS?


    I am using an M1 MacBook, Xcode 13, SwiftFFmpeg 1.0.5 and Swift 5.4.


    Thanks in advance!

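
    Package-manager issues aside, the conversion the question is after (MP4 to MPEG-TS without re-encoding) is a plain remux at the libavformat level, which is the C API that SwiftFFmpeg wraps. Below is a rough sketch of that remux loop in C, with most error handling trimmed; the function name and paths are illustrative only.

    /* Rough sketch, not production code: remux MP4 to MPEG-TS with the
     * libavformat C API (the layer SwiftFFmpeg wraps). */
    #include <libavformat/avformat.h>

    int remux_to_mpegts(const char *in_path, const char *out_path)
    {
        AVFormatContext *in = NULL, *out = NULL;
        AVPacket *pkt = av_packet_alloc();
        int ret;
        unsigned i;

        if (!pkt)
            return AVERROR(ENOMEM);
        if ((ret = avformat_open_input(&in, in_path, NULL, NULL)) < 0)
            goto end;
        if ((ret = avformat_find_stream_info(in, NULL)) < 0)
            goto end;

        /* "mpegts" selects the MPEG transport stream muxer explicitly. */
        if ((ret = avformat_alloc_output_context2(&out, NULL, "mpegts", out_path)) < 0)
            goto end;

        /* Copy every stream's codec parameters: no re-encoding involved. */
        for (i = 0; i < in->nb_streams; i++) {
            AVStream *os = avformat_new_stream(out, NULL);
            if (!os) { ret = AVERROR(ENOMEM); goto end; }
            if ((ret = avcodec_parameters_copy(os->codecpar, in->streams[i]->codecpar)) < 0)
                goto end;
            os->codecpar->codec_tag = 0;
        }

        /* The mpegts muxer writes to a file, so open the AVIOContext ourselves. */
        if ((ret = avio_open(&out->pb, out_path, AVIO_FLAG_WRITE)) < 0)
            goto end;
        if ((ret = avformat_write_header(out, NULL)) < 0)
            goto end;

        while (av_read_frame(in, pkt) >= 0) {
            AVStream *is = in->streams[pkt->stream_index];
            AVStream *os = out->streams[pkt->stream_index];

            /* Rescale timestamps from the input to the output time base. */
            av_packet_rescale_ts(pkt, is->time_base, os->time_base);
            pkt->pos = -1;

            /* av_interleaved_write_frame() takes ownership of the packet
             * reference, so it is not unreferenced here. */
            if ((ret = av_interleaved_write_frame(out, pkt)) < 0)
                break;
        }
        av_write_trailer(out);

    end:
        av_packet_free(&pkt);
        avformat_close_input(&in);
        if (out && out->pb)
            avio_closep(&out->pb);
        avformat_free_context(out);
        return ret;
    }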