Advanced search

Media (0)

Keyword: - Tags -/xmlrpc

No media matching your criteria is available on this site.

Other articles (51)

  • Contribute to translation

    13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, allowing it to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

  • Definable image and logo sizes

    9 February 2011, by

    In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can vary from one theme to another, they can be defined directly in the theme, sparing the user from having to configure them manually after changing the site's appearance.
    These image sizes are also available in the MediaSPIP Core specific configuration. The maximum size of the site logo in pixels allows (...)

  • Mediabox: opening images in the maximum space available to the user

    8 February 2011, by

    Image viewing is constrained by the width allotted by the site's design (which depends on the theme in use), so images are displayed at a reduced size. To take advantage of all the space available on the user's screen, a feature can be added to display the image in a multimedia box that appears above the rest of the content.
    To do this, the "Mediabox" plugin must be installed.
    Configuring the multimedia box
    As soon as (...)

On other sites (3386)

  • FFMPEG Streaming, using list for multiple presentations

    3 January 2021, by JJ The Second

    I am currently using a third party library to transcode videos from mp4 to HLS: https://video.aminyazdanpanah.com/python/start?r=hls#hls Great documentation, and it works fine; however, I have an issue when passing a list to hls.representations() that I think is caused by something I am doing wrong. Here is how I run my code.

    import ffmpeg_streaming
    from ffmpeg_streaming import Formats, Bitrate, Representation, Size

    presetList = []
    rep_1 = Representation(Size(1920, 1080), Bitrate(4096 * 1024, 320 * 1024))
    presetList.append(rep_1)
    rep_2 = Representation(Size(1440, 900), Bitrate(2048 * 1024, 320 * 1024))
    presetList.append(rep_2)

    video = "file.mp4"
    video = ffmpeg_streaming.input(video)
    completed_destination = "completed.m3u8"
    hls = video.hls(Formats.h264())
    hls.representations(presetList)
    hls.output(completed_destination)

    When I run this I get the following error, which is raised inside the library, suggesting the values in my list are not being passed through properly:

      File "/var/www/transcoder/transcoder/env/lib/python3.8/site-packages/ffmpeg_streaming/_hls_helper.py", line 87, in stream_info
        f'BANDWIDTH={rep.bitrate.calc_overall}',
    AttributeError: 'list' object has no attribute 'bitrate'

    If I instead run the same code with the only change being the line below, it works like a charm:

    hls.representations(rep_1, rep_2)

    What am I doing wrong here? Thanks
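    Given the traceback and the fact that the variadic call above works, hls.representations() most likely takes each Representation as a separate positional argument (a *args-style signature), so passing a list makes the library treat the whole list as a single representation. A minimal sketch of the likely fix, under that assumption, is to unpack the list:

    # Unpack the list so each Representation is passed as its own argument
    hls.representations(*presetList)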

  • Problems with Python's azure.cognitiveservices.speech when installing together with FFmpeg in a Linux web app

    15 May 2024, by Kakobo kakobo

    I need some help.
    I'm building a web app that takes any audio format, converts it into a .wav file, and then passes it to 'azure.cognitiveservices.speech' for transcription. I'm building the web app via a container Dockerfile, as I need to install ffmpeg to be able to convert non-".wav" audio files to ".wav" (Azure Speech Services only processes wav files). For some odd reason, the 'speechsdk' class of 'azure.cognitiveservices.speech' fails to work when I install ffmpeg in the web app. The class works perfectly fine when I install it without ffmpeg, or when I build and run the container on my own machine.
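    For context, the conversion step described above can be sketched as a plain ffmpeg subprocess call; this helper is illustrative only (the function name, paths, and parameters are assumptions, not the app's actual code):

    import subprocess

    def convert_to_wav(src_path, dst_path):
        # Hypothetical helper: convert any input audio to 16 kHz mono PCM .wav,
        # a format Azure Speech accepts. -ar sets the sample rate, -ac the channels.
        subprocess.run(
            ["ffmpeg", "-y", "-i", src_path, "-ar", "16000", "-ac", "1", dst_path],
            check=True,
        )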

    I have placed debug print statements in the code. I can see the class initializing, but for some reason it does not buffer in the same way as when running it locally on my machine. The routine simply stops for no apparent reason.

    Has anybody experienced a similar issue with azure.cognitiveservices.speech conflicting with ffmpeg?

    Here's my Dockerfile:

    # Use an official Python runtime as a parent image
    FROM python:3.11-slim

    # Version run
    RUN echo "Version Run 1..."

    # Install ffmpeg, ensure it is executable, and clean up the apt cache
    # (removing /var/lib/apt/lists saves space)
    RUN apt-get update && apt-get install -y ffmpeg && \
        chmod a+rx /usr/bin/ffmpeg && \
        apt-get clean && rm -rf /var/lib/apt/lists/*

    # Set the working directory in the container
    WORKDIR /app

    # Copy the current directory contents into the container at /app
    COPY . /app

    # Install any needed packages specified in requirements.txt
    RUN pip install --no-cache-dir -r requirements.txt

    # Make port 8000 available to the world outside this container
    EXPOSE 8000

    # Define environment variable
    ENV NAME World

    # Run main.py when the container launches
    CMD ["streamlit", "run", "main.py", "--server.port", "8000", "--server.address", "0.0.0.0"]

    And here's my Python code:

    def transcribe_audio_continuous_old(temp_dir, audio_file, language):
        speech_key = azure_speech_key
        service_region = azure_speech_region

        time.sleep(5)
        print(f"DEBUG TIME BEFORE speechconfig")

        ran = generate_random_string(length=5)
        temp_file = f"transcript_key_{ran}.txt"
        output_text_file = os.path.join(temp_dir, temp_file)
        speech_recognition_language = set_language_to_speech_code(language)

        speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
        speech_config.speech_recognition_language = speech_recognition_language
        audio_input = speechsdk.AudioConfig(filename=os.path.join(temp_dir, audio_file))

        speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_input, language=speech_recognition_language)
        done = False
        transcript_contents = ""

        time.sleep(5)
        print(f"DEBUG TIME AFTER speechconfig")
        print(f"DEBUG File about to be passed {audio_file}")

        try:
            with open(output_text_file, "w", encoding=encoding) as file:
                def recognized_callback(evt):
                    print("Start continuous recognition callback.")
                    print(f"Recognized: {evt.result.text}")
                    file.write(evt.result.text + "\n")
                    nonlocal transcript_contents
                    transcript_contents += evt.result.text + "\n"

                def stop_cb(evt):
                    print("Stopping continuous recognition callback.")
                    print(f"Event type: {evt}")
                    speech_recognizer.stop_continuous_recognition()
                    nonlocal done
                    done = True

                def canceled_cb(evt):
                    print(f"Recognition canceled: {evt.reason}")
                    if evt.reason == speechsdk.CancellationReason.Error:
                        print(f"Cancellation error: {evt.error_details}")
                    nonlocal done
                    done = True

                speech_recognizer.recognized.connect(recognized_callback)
                speech_recognizer.session_stopped.connect(stop_cb)
                speech_recognizer.canceled.connect(canceled_cb)

                speech_recognizer.start_continuous_recognition()
                while not done:
                    time.sleep(1)
                    print("DEBUG LOOPING TRANSCRIPT")

        except Exception as e:
            print(f"An error occurred: {e}")

        print("DEBUG DONE TRANSCRIPT")

        return temp_file, transcript_contents

    This transcription callback works fine locally, or when installed without ffmpeg in the Linux web app. I'm not sure why it conflicts with ffmpeg when installed via the container Dockerfile. The code section that fails can be found at the note "#NOTE DEBUG".

  • How to stream to the stream name that comes in the response from the YouTube livestream API

    7 December 2018, by Anirudha Gupta

    I am calling this API https://developers.google.com/youtube/v3/live/docs/liveStreams/insert to get a stream name from the Livestream API:

    {
      "kind": "youtube#liveStream",
      "etag": "\"etag\"",
      "id": "-ABa1o",
      "snippet": {
        "publishedAt": "2018-12-07T05:41:12.000Z",
        "channelId": "UC-",
        "title": "Hello World",
        "description": "Snippet description of testing",
        "isDefaultStream": false
      },
      "cdn": {
        "format": "360p",
        "ingestionType": "rtmp",
        "ingestionInfo": {
          "streamName": "9qq0-ct85-ctub-",
          "ingestionAddress": "rtmp://a.rtmp.youtube.com/live2",
          "backupIngestionAddress": "rtmp://b.rtmp.youtube.com/live2?backup=1"
        },
        "resolution": "360p",
        "frameRate": "30fps"
      },
      "status": {
        "streamStatus": "ready",
        "healthStatus": {
          "status": "noData"
        }
      },
      "contentDetails": {
        "closedCaptionsIngestionUrl": "http://upload.youtube.com/closedcaption?cid=9qq0-ct85-ctub-",
        "isReusable": true
      }
    }

    I see a response like this. When I use OBS to stream to this RTMP URL, the stream doesn't have the title I set, which you can see in the response. I am getting the stream name, but I'm not sure whether I am using it correctly.

    If I call the path as rtmp://a.rtmp.youtube.com/live2/streamnamefromurl/mykey,
    it works but does not have the title I set via the API call. Could anyone please check the page and help me figure out what I am doing wrong? What I am looking for is to get the title and description set on the stream, or to verify that I am doing it correctly.
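    For reference, a minimal sketch of how the RTMP target is assembled from the response above, assuming response holds the parsed JSON:

    # Full RTMP target = ingestion address + "/" + stream name
    rtmp_url = (
        response["cdn"]["ingestionInfo"]["ingestionAddress"]
        + "/"
        + response["cdn"]["ingestionInfo"]["streamName"]
    )
    # e.g. rtmp://a.rtmp.youtube.com/live2/9qq0-ct85-ctub-

    Note that in the YouTube Live API, the title viewers see generally comes from the liveBroadcast resource bound to the stream (via liveBroadcasts.bind), not from the liveStream snippet shown above.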