
Media (91)

Other articles (76)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (11773)

  • Getting "invalid_request_error" when trying to pass a converted audio file to the OpenAI API

    19 April 2023, by Dummy Cron

    I am working on a project where I receive a URL from a webhook on my server whenever users share a voice note on my WhatsApp. I am using WATI as my WhatsApp API provider.

    


    The file URL received is in the .opus format, which I need to convert to WAV and pass to the OpenAI Whisper API for translation.

    


    I am trying to convert it to .wav using ffmpeg and pass it to the OpenAI API for translation, but I am getting an "invalid_request_error".

    


    import requests
    import io
    import subprocess

    file_url = "..."   # .opus file URL received from the webhook
    api_key = "..."    # WATI API key
    WHISPER_API_KEY = "..."  # OpenAI API key

    def transcribe_audio_to_text():
        # Fetch the audio file and convert to wav format
        headers = {'Authorization': f'Bearer {api_key}'}
        response = requests.get(file_url, headers=headers)
        audio_bytes = io.BytesIO(response.content)

        process = subprocess.Popen(['ffmpeg', '-i', '-', '-f', 'wav', '-acodec', 'libmp3lame', '-'],
                                   stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        wav_audio, _ = process.communicate(input=audio_bytes.read())

        # Set the Whisper API endpoint and headers
        WHISPER_API_ENDPOINT = 'https://api.openai.com/v1/audio/translations'
        whisper_api_headers = {'Authorization': 'Bearer ' + WHISPER_API_KEY,
                               'Content-Type': 'application/json'}
        print(whisper_api_headers)

        # Send the audio file for transcription
        payload = {'model': 'whisper-1'}
        files = {'file': ('audio.wav', io.BytesIO(wav_audio), 'audio/wav')}
        # files = {'file': ('audio.wav', io.BytesIO(wav_audio), 'application/octet-stream')}
        # files = {'file': ('audio.mp3', io.BytesIO(mp3_audio), 'audio/mp3')}
        response = requests.post(WHISPER_API_ENDPOINT, headers=whisper_api_headers, data=payload)
        print(response)

        # Get the transcription text
        if response.status_code == 200:
            result = response.json()
            text = result['text']
            print(response, text)
        else:
            print('Error:', response)
            err = response.json()
            print(response.status_code)
            print(err)
            print(response.headers)

    transcribe_audio_to_text()


    


    Output:

    


    Error: <Response [400]>
    400
    {'error': {'message': "We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please send an email to support@openai.com and include any relevant code you'd like help with.)", 'type': 'invalid_request_error', 'param': None, 'code': None}}
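
    A minimal sketch of one possible fix, assuming the 400 comes from forcing a JSON Content-Type while never attaching the converted audio: pass files= (and data= for the model field) so requests builds the multipart body itself, drop the manual Content-Type header, and have ffmpeg emit PCM WAV instead of libmp3lame. This is not a confirmed answer to the question; translate_opus, opus_bytes and whisper_api_key are illustrative names, while the endpoint and fields are the ones already used in the question's code.

    import io
    import subprocess

    import requests

    def translate_opus(opus_bytes, whisper_api_key):
        # Convert the downloaded .opus bytes to PCM WAV via ffmpeg on stdin/stdout
        # (pcm_s16le is the usual codec for a .wav container).
        process = subprocess.Popen(
            ['ffmpeg', '-i', '-', '-f', 'wav', '-acodec', 'pcm_s16le', '-'],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        wav_audio, _ = process.communicate(input=opus_bytes)

        # Do not set Content-Type by hand: requests generates the correct
        # multipart/form-data header itself when `files` is passed.
        response = requests.post(
            'https://api.openai.com/v1/audio/translations',
            headers={'Authorization': f'Bearer {whisper_api_key}'},
            data={'model': 'whisper-1'},
            files={'file': ('audio.wav', io.BytesIO(wav_audio), 'audio/wav')})
        response.raise_for_status()
        return response.json()['text']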

  • "Conversion failed !" when trying to write to frame to a rmtp stream

    8 May 2023, by Loc Bui Nhien

    I'm trying to write video frames to an RTMP stream using FFmpeg and a Python subprocess. The code picks up the videos in a 'ReceivedRecording' directory and streams them to an RTMP server fronted by nginx. My method seems to work, but at times the code stops running due to

    [flv @ 0x55b933694b40] Failed to update header with correct duration.
    [flv @ 0x55b933694b40] Failed to update header with correct filesize.

    and

    Conversion failed!

    then

    BrokenPipeError: [Errno 32] Broken pipe

    Here is my implementation of the task:


    import os
    import subprocess
    import time

    import cv2

    rtmp_url = "rtmp://..."

    path = 'ReceivedRecording'

    received_video_path = 'ReceivedRecording'
    p = None  # ffmpeg subprocess, started once the first video's dimensions are known

    while True:
        video_files = [filenames for filenames in sorted(
            os.listdir(received_video_path))]
        # Loop through the videos and concatenate them
        for filename in video_files[:len(video_files)-1]:
            video = cv2.VideoCapture(os.path.join(received_video_path, filename))

            if p is None:
                fps = int(video.get(cv2.CAP_PROP_FPS))
                width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
                height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
                # command and params for ffmpeg
                command = ['ffmpeg',
                           '-y',
                           '-f', 'rawvideo',
                           '-vcodec', 'rawvideo',
                           '-pix_fmt', 'bgr24',
                           '-s', "{}x{}".format(width, height),
                           '-re',
                           '-r', '5',
                           '-i', '-',
                           # '-filter:v', 'setpts=4.0*PTS',
                           '-c:v', 'libx264',
                           '-pix_fmt', 'yuv420p',
                           '-preset', 'ultrafast',
                           '-tune', 'zerolatency',
                           '-vsync', 'vfr',
                           # '-crf', '23',
                           '-f', 'flv',
                           rtmp_url]

                # using subprocess and pipe to fetch frame data
                p = subprocess.Popen(command, stdin=subprocess.PIPE)
            else:
                # Loop through the frames of each video
                while True:
                    start_time = time.time()

                    ret, frame = video.read()
                    if not ret:
                        # End of video, move to next video
                        video.release()
                        break

                    p.stdin.write(frame.tobytes())

                os.remove(os.path.join(received_video_path, filename))

    Here are my nginx RTMP settings:

    rtmp {
        server {
            listen 1935;
            chunk_size 7096;

            application live {
                live on;
                record off;
                push rtmp://...;
            }
        }
    }

    Here is the log file:

    av_interleaved_write_frame(): Connection reset by peer
    No more output streams to write to, finishing.
    [flv @ 0x5561d1ca9b40] Failed to update header with correct duration.
    [flv @ 0x5561d1ca9b40] Failed to update header with correct filesize.
    Error writing trailer of rtmp://...: Connection reset by peer
    frame=    1 fps=0.0 q=20.0 Lsize=    1024kB time=00:00:00.00 bitrate=8390776.0kbits/s speed=0.00619x
    video:1053kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    Input file #0 (pipe:):
      Input stream #0:0 (video): 1 packets read (11059200 bytes); 1 frames decoded;
      Total: 1 packets (11059200 bytes) demuxed
    Output file #0 (rtmp://...):
      Output stream #0:0 (video): 1 frames encoded; 1 packets muxed (1077799 bytes);
      Total: 1 packets (1077799 bytes) muxed
    1 frames successfully decoded, 0 decoding errors
    [AVIOContext @ 0x5561d1cad380] Statistics: 0 seeks, 35 writeouts
    [rtmp @ 0x5561d1cb7b80] Deleting stream...
    [libx264 @ 0x5561d1caae40] frame I:1     Avg QP:20.00  size:1077192
    [libx264 @ 0x5561d1caae40] mb I  I16..4: 100.0%  0.0%  0.0%
    [libx264 @ 0x5561d1caae40] coded y,uvDC,uvAC intra: 92.2% 50.5% 10.2%
    [libx264 @ 0x5561d1caae40] i16 v,h,dc,p: 34% 16% 37% 12%
    [libx264 @ 0x5561d1caae40] i8c dc,h,v,p: 35% 23% 33%  9%
    [libx264 @ 0x5561d1caae40] kb/s:43087.68
    [AVIOContext @ 0x5561d1ca6a80] Statistics: 11059200 bytes read, 0 seeks
    Conversion failed!

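    One direction worth checking, sketched below: the ffmpeg child process exits (the "Conversion failed!" and connection-reset lines above) while the Python loop keeps writing to its stdin, which is exactly what raises BrokenPipeError. A guarded write that notices the dead process lets the caller restart ffmpeg instead of crashing; write_frame is an illustrative helper, not part of the question's code.

    def write_frame(p, frame_bytes):
        """Write one raw frame to ffmpeg's stdin.

        Returns False when ffmpeg has already exited or the pipe broke,
        so the caller can recreate the subprocess instead of crashing.
        """
        if p.poll() is not None:       # ffmpeg terminated (e.g. "Conversion failed!")
            return False
        try:
            p.stdin.write(frame_bytes)
            return True
        except BrokenPipeError:        # [Errno 32], as in the traceback above
            return False

    In the inner loop of the question's code, p.stdin.write(frame.tobytes()) would become a call to this helper, and a False return would break out so a fresh ffmpeg process can be started for the next file.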

  • Is there a way to cut movement "dead air" on a screen recording? [closed]

    16 May 2023, by Raelbe

    I have got a couple of screen recordings of a painting I've done, and I've managed to concat the files together.

    Unfortunately, there is a lot of "dead air" in the video (stretches where I have left my desk, so there is no movement happening on screen). Is there a way to cut out this downtime?

    I found an example that another artist uses for his screen recordings, so I plugged it in with my file directories. This is what I used:

    .\ffmpeg -f concat -safe 0 -i "merge.txt" -vf npdecimate=hi=64*12:lo=64*5:frac=0.33,seipts=N/30/TB,"setpts=0.25*PTS" -r 30 -crf 30 -an Illu_Test.mp4

    I got this error message at the end:

    [AVFilterGraph @ 000001cadfe5b1c0] No option name near 'N/30/TB'
    [AVFilterGraph @ 000001cadfe5b1c0] Error parsing a filter description around: ,setpts=0.25*PTS
    [AVFilterGraph @ 000001cadfe5b1c0] Error parsing filterchain 'npdecimate=hi=64*12:lo=64*5:frac=0.33,seipts=N/30/TB,setpts=0.25*PTS' around: ,setpts=0.25*PTS
    Error reinitializing filters!
    Failed to inject frame into filter network: Invalid argument
    Error while processing the decoded data for stream #0:0

    So I chopped it up a bit; this is what I used to concat the files, and it worked perfectly.

    .\ffmpeg -f concat -safe 0 -i "merge.txt" -crf 30 -an Illu_Test.mp4

    Now I'm looking to cut out the seconds of no movement. I'm unsure what the -crf option does (as stated, I am brand new to this). The OG artist states that:

    "This is the tolerance level that determines whether there has been enough change between frames or not to be considered as detected motion."


    Any help would be appreciated.


    (Apologies if the format of this question is wrong)

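    For reference, a corrected form of that filter chain, under the assumption that the original artist's command used FFmpeg's mpdecimate and setpts filters ("npdecimate" and "seipts" are not FFmpeg filter names, which is presumably why the chain fails to parse), with the whole -vf value quoted so the shell does not split it. Thresholds, frame rate and file names are kept from the question; this is a sketch to adapt, not a verified command.

    .\ffmpeg -f concat -safe 0 -i "merge.txt" -vf "mpdecimate=hi=64*12:lo=64*5:frac=0.33,setpts=N/FRAME_RATE/TB,setpts=0.25*PTS" -r 30 -crf 30 -an Illu_Test.mp4

    mpdecimate drops frames that differ too little from the previous one (its hi/lo/frac thresholds are the "tolerance" the artist describes), and setpts then rebuilds the timestamps so the remaining frames play back-to-back; -crf only controls encoding quality, not motion detection.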