Advanced search

Media (91)

Other articles (41)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it has been activated, MediaSPIP init automatically sets up a preconfiguration so that the new feature is immediately operational. There is therefore no need to go through a configuration step for this.

On other sites (5213)

  • Problems with Python's azure.cognitiveservices.speech when installing together with FFmpeg in a Linux web app

    15 May 2024, by Kakobo kakobo

    I need some help.
    I'm building a web app that takes audio in any format, converts it into a .wav file, and then passes it to 'azure.cognitiveservices.speech' for transcription. I'm building the web app from a container Dockerfile, as I need to install ffmpeg to be able to convert non-".wav" audio files to ".wav" (Azure speech services only process wav files). For some odd reason, the 'speechsdk' class of 'azure.cognitiveservices.speech' fails to work when I install ffmpeg in the web app. The class works perfectly fine when I install it without ffmpeg, or when I build and run the container on my machine.
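The conversion step the question describes (any input format to .wav before handing the file to Azure Speech) can be sketched with ffmpeg run as a subprocess. This is a minimal illustration, not the poster's actual code: the function names, file names, and the 16 kHz mono 16-bit PCM parameters are assumptions about what the speech service typically expects.

```python
import subprocess

def build_wav_conversion_cmd(src_path: str, dst_path: str) -> list:
    """Build an ffmpeg command converting any audio file to
    16 kHz mono 16-bit PCM WAV (a format Azure Speech accepts).
    All parameter choices here are illustrative assumptions."""
    return [
        "ffmpeg",
        "-y",                 # overwrite the output file if it exists
        "-i", src_path,       # input in any format ffmpeg understands
        "-ar", "16000",       # resample to 16 kHz
        "-ac", "1",           # downmix to mono
        "-c:a", "pcm_s16le",  # 16-bit little-endian PCM
        dst_path,
    ]

def convert_to_wav(src_path: str, dst_path: str) -> None:
    # Requires the ffmpeg binary to be on PATH, as installed by the
    # Dockerfile quoted in the question.
    subprocess.run(build_wav_conversion_cmd(src_path, dst_path), check=True)
```

Building the argument list separately from running it keeps the conversion easy to test without ffmpeg installed.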

    I have placed debug print statements in the code. I can see the class initialising, but for some reason it does not buffer in the same way as when I run it locally on my machine. The routine simply stops without any error.

    Has anybody experienced a similar issue with azure.cognitiveservices.speech conflicting with ffmpeg?

    Here's my Dockerfile:

    # Use an official Python runtime as a parent image
    FROM python:3.11-slim

    # Version Run
    RUN echo "Version Run 1..."

    # Install ffmpeg, make sure the binary is executable, and clean up the
    # apt cache (removing /var/lib/apt/lists saves space)
    RUN apt-get update && apt-get install -y ffmpeg \
        && chmod a+rx /usr/bin/ffmpeg \
        && apt-get clean \
        && rm -rf /var/lib/apt/lists/*

    # Set the working directory in the container
    WORKDIR /app

    # Copy the current directory contents into the container at /app
    COPY . /app

    # Install any needed packages specified in requirements.txt
    RUN pip install --no-cache-dir -r requirements.txt

    # Make port 8000 available to the world outside this container
    EXPOSE 8000

    # Define environment variable
    ENV NAME World

    # Run main.py when the container launches
    CMD ["streamlit", "run", "main.py", "--server.port", "8000", "--server.address", "0.0.0.0"]

    and here's my Python code:

    def transcribe_audio_continuous_old(temp_dir, audio_file, language):
    speech_key = azure_speech_key
    service_region = azure_speech_region

    time.sleep(5)
    print(f"DEBUG TIME BEFORE speechconfig")

    ran = generate_random_string(length=5)
    temp_file = f"transcript_key_{ran}.txt"
    output_text_file = os.path.join(temp_dir, temp_file)
    speech_recognition_language = set_language_to_speech_code(language)
    
    speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
    speech_config.speech_recognition_language = speech_recognition_language
    audio_input = speechsdk.AudioConfig(filename=os.path.join(temp_dir, audio_file))
        
    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_input, language=speech_recognition_language)
    done = False
    transcript_contents = ""

    time.sleep(5)
    print(f"DEBUG TIME AFTER speechconfig")
    print(f"DEBUG FIle about to be passed {audio_file}")

    try:
        with open(output_text_file, "w", encoding=encoding) as file:
            def recognized_callback(evt):
                print("Start continuous recognition callback.")
                print(f"Recognized: {evt.result.text}")
                file.write(evt.result.text + "\n")
                nonlocal transcript_contents
                transcript_contents += evt.result.text + "\n"

            def stop_cb(evt):
                print("Stopping continuous recognition callback.")
                print(f"Event type: {evt}")
                speech_recognizer.stop_continuous_recognition()
                nonlocal done
                done = True
            
            def canceled_cb(evt):
                print(f"Recognition canceled: {evt.reason}")
                if evt.reason == speechsdk.CancellationReason.Error:
                    print(f"Cancellation error: {evt.error_details}")
                nonlocal done
                done = True

            speech_recognizer.recognized.connect(recognized_callback)
            speech_recognizer.session_stopped.connect(stop_cb)
            speech_recognizer.canceled.connect(canceled_cb)

            speech_recognizer.start_continuous_recognition()
            while not done:
                time.sleep(1)
                print("DEBUG LOOPING TRANSCRIPT")

    except Exception as e:
        print(f"An error occurred: {e}")

    print("DEBUG DONE TRANSCRIPT")

    return temp_file, transcript_contents

    The transcription callback works fine locally, and also when the app is installed without ffmpeg in the Linux web app. I'm not sure why it conflicts with ffmpeg when installed via the container Dockerfile. The code section that fails can be found at the note "#NOTE DEBUG".

  • How to use ffmpeg.so file in my Android Studio project for encoding audio buffer? [on hold]

    13 November 2016, by bluesky

    I have an ffmpeg.so file that I downloaded from another site. How can I use it in my current Android Studio project? I want to know how to integrate the ffmpeg library into an Android Studio project.

  • Use FFmpeg to Convert Video to GIF in Android Studio

    21 October 2014, by Donnie Ibiyemi

    I'm currently making a simple Android app that converts a video from the SD card into a GIF.

    I learnt FFmpeg is the most efficient way to handle the conversion, but I have no idea how to add FFmpeg to my Android Studio project.

    Please kindly point me in the right direction.
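For reference, the video-to-GIF conversion itself is a single ffmpeg invocation; the part the poster is actually asking about, bundling ffmpeg into an Android project, is a separate packaging question. A minimal sketch of the conversion arguments follows; the function name, file names, and the fps/scale filter values are illustrative assumptions, not taken from the question.

```python
import subprocess

def build_gif_cmd(video_path: str, gif_path: str,
                  fps: int = 10, width: int = 320) -> list:
    """Build an ffmpeg command that converts a video into a GIF,
    rate-limited and downscaled to keep the file size reasonable."""
    return [
        "ffmpeg",
        "-y",                 # overwrite the output if it exists
        "-i", video_path,
        # fps caps the frame rate; scale keeps the aspect ratio
        # (-1 lets ffmpeg pick the height automatically)
        "-vf", f"fps={fps},scale={width}:-1",
        gif_path,
    ]

def convert_to_gif(video_path: str, gif_path: str) -> None:
    # Requires the ffmpeg binary on PATH. Inside an Android app the
    # same arguments would go through an ffmpeg wrapper library
    # rather than a subprocess; this only shows the command itself.
    subprocess.run(build_gif_cmd(video_path, gif_path), check=True)
```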