Advanced search

Media (91)

Other articles (85)

  • Customize by adding your logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Creating farms of unique websites

    13 avril 2011, par

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Permissions overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

On other sites (11818)

  • How to print the video metadata output by the browser version of ffmpeg.wasm to the Google Chrome console?

    17 January 2021, by helloAl

    I would like to ask how to use the browser version of ffmpeg.wasm.

    From my investigation, I know that the following command can be used in a Windows or macOS terminal to output the video metadata to a file:

    ffmpeg -i testvideo.mp4 -f ffmetadata testoutput.txt


    and then I can read the metadata from testoutput.txt (screenshot of the output omitted).
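    For reference, the file written by -f ffmetadata is plain text in ffmpeg's FFMETADATA format; a minimal illustrative sample (keys and values vary by input file) looks like:

```ini
;FFMETADATA1
major_brand=isom
minor_version=512
compatible_brands=isomiso2avc1mp41
encoder=Lavf58.76.100
```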

    I want to parse the metadata of the video in the browser, and then print the metadata to the Chrome console (or output it to a file). I know that the browser version of ffmpeg.wasm can achieve this, but I have looked at its examples, and they do not cover this part. (https://github.com/ffmpegwasm/ffmpeg.wasm/blob/master/examples/browser/image2video.html)

    But I want to print it to the Google Chrome console through the browser build of ffmpeg.wasm (https://github.com/ffmpegwasm/ffmpeg.wasm).
So I want to ask how to achieve this, thank you.
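    One possible approach (not from the original post): the @ffmpeg/ffmpeg 0.x build exposes a setLogger callback that receives every line ffmpeg writes, so the metadata can be forwarded to console.log or collected into an array. The sketch below assumes that 0.x browser API (createFFmpeg, fetchFile, FS, run); the names dumpMetadata and collectLog are illustrative.

```javascript
// Collect ffmpeg.wasm log lines; ffmpeg prints stream/metadata info on
// stderr, which ffmpeg.wasm reports with type "fferr".
const logLines = [];
function collectLog({ type, message }) {
  if (type === 'fferr' || type === 'info') logLines.push(message);
}

// Browser-side wiring (illustrative; assumes the ffmpeg.wasm 0.x build is
// loaded, e.g. via its <script> tag exposing the FFmpeg global).
async function dumpMetadata(file) {
  const { createFFmpeg, fetchFile } = FFmpeg;
  const ffmpeg = createFFmpeg({ log: false });
  ffmpeg.setLogger(collectLog);            // route ffmpeg's output to us
  await ffmpeg.load();
  ffmpeg.FS('writeFile', 'in.mp4', await fetchFile(file));
  // Same command as on the desktop CLI, run against the in-memory FS.
  await ffmpeg.run('-i', 'in.mp4', '-f', 'ffmetadata', 'out.txt');
  const data = ffmpeg.FS('readFile', 'out.txt');
  console.log(new TextDecoder().decode(data)); // the ffmetadata file
  console.log(logLines.join('\n'));            // everything ffmpeg logged
}
```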

  • avcodec/pngenc: fix sBIT writing for indexed-color PNGs

    19 July 2024, by Leo Izen
    avcodec/pngenc: fix sBIT writing for indexed-color PNGs

    We currently write invalid sBIT entries for indexed PNGs, which by the PNG
    specification[1] must be 3 bytes long. The values are also capped at 8
    for indexed-color PNGs, not at the palette depth. This patch fixes both of
    these issues, previously fixed in the decoder but not the encoder.

    [1]: https://www.w3.org/TR/png-3/#11sBIT

    Regression since: c125860892e931d9b10f88ace73c91484815c3a8.

    Signed-off-by: Leo Izen <leo.izen@gmail.com>
    Reported-by: Ramiro Polla <ramiro.polla@gmail.com>

    • [DH] libavcodec/pngenc.c
  • Google cloud speech to text not giving output for OGG & MP3 files

    27 April 2021, by Vedant Jumle

    I am trying to perform speech-to-text on a bunch of audio files which are over 10 minutes long. I don't want to waste storage on the cloud bucket by straight-up uploading wav files to it, so I am using ffmpeg to convert the files to either ogg or mp3, like:

    ffmpeg -y -i audio.wav -ar 12000 -r 16000 audio.mp3

    ffmpeg -y -i audio.wav -ar 12000 -r 16000 audio.ogg

    For testing purposes I ran the speech-to-text service on a dummy wav file and it worked; I got the text as expected. But for some reason it isn't detecting any speech when I use the ogg or mp3 file. I could not get amr files to work either.

    My code:

    from google.cloud import speech

    def transcribe_gcs(gcs_uri):
        client = speech.SpeechClient()

        audio = speech.RecognitionAudio(uri=gcs_uri)
        config = speech.RecognitionConfig(
            encoding="OGG_OPUS",  # replace with "LINEAR16" for wav, "OGG_OPUS" for ogg, "AMR" for amr
            sample_rate_hertz=16000,
            language_code="en-US",
        )
        print("starting operation")
        operation = client.long_running_recognize(config=config, audio=audio)
        response = operation.result()
        print(response)

    I have set up the authentication properly, so that is not a problem.

    When I run the speech-to-text service on the same audio in ogg or mp3 format (for mp3 I just comment out the encoding setting in the config), it gives no response; it just prints a line break and is done.

    What can I do to fix this?
