
Other articles (105)
-
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...) -
User profiles
12 April 2011. Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) navigation link is (...) -
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in standalone form.
For a working installation, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)
On other sites (16908)
-
Title: Getting "invalid_request_error" when trying to pass converted audio file to OpenAI API
19 April 2023, by Dummy Cron
I am working on a project where I receive a URL from a webhook on my server whenever a user shares a voice note on my WhatsApp. I am using WATI as my WhatsApp API provider.


The file URL received is in the .opus format, which I need to convert to WAV and pass to the OpenAI Whisper API translation task.


I am trying to convert it to .wav using ffmpeg and pass it to the OpenAI API for translation processing.
However, I am getting an "invalid_request_error".


import requests
import io
import subprocess

file_url = # .opus file URL
api_key = # WATI API key

def transcribe_audio_to_text():
    # Fetch the audio file and convert to wav format
    headers = {'Authorization': f'Bearer {api_key}'}
    response = requests.get(file_url, headers=headers)
    audio_bytes = io.BytesIO(response.content)

    process = subprocess.Popen(['ffmpeg', '-i', '-', '-f', 'wav', '-acodec', 'libmp3lame', '-'],
                               stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    wav_audio, _ = process.communicate(input=audio_bytes.read())

    # Set the Whisper API endpoint and headers
    WHISPER_API_ENDPOINT = 'https://api.openai.com/v1/audio/translations'
    whisper_api_headers = {'Authorization': 'Bearer ' + WHISPER_API_KEY,
                           'Content-Type': 'application/json'}
    print(whisper_api_headers)

    # Send the audio file for transcription
    payload = {'model': 'whisper-1'}
    files = {'file': ('audio.wav', io.BytesIO(wav_audio), 'audio/wav')}
    # files = {'file': ('audio.wav', io.BytesIO(wav_audio), 'application/octet-stream')}
    # files = {'file': ('audio.mp3', io.BytesIO(mp3_audio), 'audio/mp3')}
    response = requests.post(WHISPER_API_ENDPOINT, headers=whisper_api_headers, data=payload)
    print(response)

    # Get the transcription text
    if response.status_code == 200:
        result = response.json()
        text = result['text']
        print(response, text)
    else:
        print('Error:', response)
        err = response.json()
        print(response.status_code)
        print(err)
        print(response.headers)

transcribe_audio_to_text()



Output:


Error: <Response [400]>
400
{'error': {'message': "We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please send an email to support@openai.com and include any relevant code you'd like help with.)", 'type': 'invalid_request_error', 'param': None, 'code': None}}
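For reference, here is a minimal sketch (not the original poster's code) of how the upload is usually sent with requests: the translations endpoint expects multipart/form-data, so the Content-Type header is left for requests to set, and the converted audio is passed through files= (the snippet above builds files but never sends it, and forces Content-Type: application/json, which is consistent with the error message). Names such as translate_opus and opus_bytes are placeholders.

import io
import subprocess

import requests

WHISPER_API_ENDPOINT = 'https://api.openai.com/v1/audio/translations'

def translate_opus(opus_bytes, api_key):
    # Convert the .opus payload to PCM WAV, reading stdin and writing stdout.
    process = subprocess.run(
        ['ffmpeg', '-i', 'pipe:0', '-f', 'wav', '-acodec', 'pcm_s16le', 'pipe:1'],
        input=opus_bytes,
        stdout=subprocess.PIPE,
        check=True,
    )
    wav_audio = process.stdout

    # Let requests build the multipart body itself: no manual Content-Type,
    # and the file goes in files= alongside the 'model' form field.
    response = requests.post(
        WHISPER_API_ENDPOINT,
        headers={'Authorization': f'Bearer {api_key}'},
        data={'model': 'whisper-1'},
        files={'file': ('audio.wav', io.BytesIO(wav_audio), 'audio/wav')},
    )
    response.raise_for_status()
    return response.json()['text']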


-
How to keep video quality the same after merging an intro image at the beginning of a video using ffmpeg
13 June 2021, by Manoj Kag
I selected a high-resolution video, but once I run the command the resolution drops and the output video quality is very poor. I do not want to lose video quality. The full command and complete log are below:


Note: the libx264 encoder is not supported on iOS and I get a failure error, so I use h264_videotoolbox; I would like a working command with the h264_videotoolbox encoder.


Command:




ffmpeg -i test.MOV -loop 1 -t 5 -i 2.jpg -f lavfi -t 5 -i anullsrc
-filter_complex "[0:v]trim=0:5,drawbox=t=fill[base];[1][base]scale2ref=iw:ih:force_original_aspect_ratio=decrease:flags=spline[2nd][base2];[base2][2nd]overlay='(W-w)/2':'(H-h)/2'[padded];[padded][2:a][0:v][0:a]concat=n=2:v=1:a=1[v][a]"
-c:v h264_videotoolbox -c:a aac -map "[v]" -map "[a]" output.mp4




Complete log


ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers
 built with Apple clang version 12.0.0 (clang-1200.0.32.29)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/4.4_1 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox
 libavutil 56. 70.100 / 56. 70.100
 libavcodec 58.134.100 / 58.134.100
 libavformat 58. 76.100 / 58. 76.100
 libavdevice 58. 13.100 / 58. 13.100
 libavfilter 7.110.100 / 7.110.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 9.100 / 5. 9.100
 libswresample 3. 9.100 / 3. 9.100
 libpostproc 55. 9.100 / 55. 9.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test.MOV':
 Metadata:
 major_brand : qt 
 minor_version : 0
 compatible_brands: qt 
 creation_time : 2021-03-07T06:36:17.000000Z
 com.apple.quicktime.location.accuracy.horizontal: 30.000000
 com.apple.quicktime.location.ISO6709: +23.1141+072.5768+061.729/
 com.apple.quicktime.make: Apple
 com.apple.quicktime.model: iPhone 6s
 com.apple.quicktime.software: 14.3
 com.apple.quicktime.creationdate: 2021-03-07T12:06:17+0530
 Duration: 00:00:29.79, start: 0.000000, bitrate: 15778 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 15643 kb/s, 29.98 fps, 29.97 tbr, 600 tbn, 1200 tbc (default)
 Metadata:
 creation_time : 2021-03-07T06:36:17.000000Z
 handler_name : Core Media Video
 vendor_id : [0][0][0][0]
 encoder : H.264
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 89 kb/s (default)
 Metadata:
 creation_time : 2021-03-07T06:36:17.000000Z
 handler_name : Core Media Audio
 vendor_id : [0][0][0][0]
 Stream #0:2(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
 Metadata:
 creation_time : 2021-03-07T06:36:17.000000Z
 handler_name : Core Media Metadata
 Stream #0:3(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
 Metadata:
 creation_time : 2021-03-07T06:36:17.000000Z
 handler_name : Core Media Metadata
 Stream #0:4(und): Data: none (mebx / 0x7862656D), 34 kb/s (default)
 Metadata:
 creation_time : 2021-03-07T06:36:17.000000Z
 handler_name : Core Media Metadata
Input #1, image2, from '2.jpg':
 Duration: 00:00:00.04, start: 0.000000, bitrate: 17347 kb/s
 Stream #1:0: Video: mjpeg (Baseline), yuvj444p(pc, bt470bg/unknown/unknown), 360x360 [SAR 72:72 DAR 1:1], 25 fps, 25 tbr, 25 tbn, 25 tbc
Input #2, lavfi, from 'anullsrc':
 Duration: N/A, start: 0.000000, bitrate: 705 kb/s
 Stream #2:0: Audio: pcm_u8, 44100 Hz, stereo, u8, 705 kb/s
Stream mapping:
 Stream #0:0 (h264) -> trim
 Stream #0:0 (h264) -> concat:in1:v0
 Stream #0:1 (aac) -> concat:in1:a0
 Stream #1:0 (mjpeg) -> scale2ref:default
 Stream #2:0 (pcm_u8) -> concat:in0:a0
 concat:out:v0 -> Stream #0:0 (h264_videotoolbox)
 concat:out:a0 -> Stream #0:1 (aac)
Press [q] to stop, [?] for help
[swscaler @ 0x7f976242b000] deprecated pixel format used, make sure you did set range correctly
Output #0, mp4, to 'output.mp4':
 Metadata:
 major_brand : qt 
 minor_version : 0
 compatible_brands: qt 
 com.apple.quicktime.creationdate: 2021-03-07T12:06:17+0530
 com.apple.quicktime.location.accuracy.horizontal: 30.000000
 com.apple.quicktime.location.ISO6709: +23.1141+072.5768+061.729/
 com.apple.quicktime.make: Apple
 com.apple.quicktime.model: iPhone 6s
 com.apple.quicktime.software: 14.3
 encoder : Lavf58.76.100
 Stream #0:0: Video: h264 (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1920x1080, q=2-31, 200 kb/s, 29.97 fps, 30k tbn (default)
 Metadata:
 encoder : Lavc58.134.100 h264_videotoolbox
 Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
 Metadata:
 encoder : Lavc58.134.100 aac
frame= 1 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= frame= 12 fps=0.0 q=-0.0 size= 256kB time=00:00:00.32 bitrate=6452.4kbits/frame= 34 fps= 32 q=-0.0 size= 256kB time=00:00:01.02 bitrate=2053.0kbits/frame= 55 fps= 35 q=-0.0 size= 256kB time=00:00:01.74 bitrate=1204.4kbits/frame= 76 fps= 37 q=-0.0 size= 256kB time=00:00:02.46 bitrate= 852.2kbits/frame= 98 fps= 38 q=-0.0 size= 256kB time=00:00:03.18 bitrate= 659.4kbits/frame= 120 fps= 39 q=-0.0 size= 256kB time=00:00:03.90 bitrate= 537.7kbits/frame= 141 fps= 39 q=-0.0 size= 512kB time=00:00:04.62 bitrate= 907.8kbits/[out_0_0 @ 0x7f975de0ae80] 100 buffers queued in out_0_0, something may be wrong.
[out_0_1 @ 0x7f975de0a5c0] 100 buffers queued in out_0_1, something may be wrong.
frame= 301 fps= 67 q=-0.0 size= 768kB time=00:00:11.09 bitrate= 566.9kbits/frame= 406 fps= 81 q=-0.0 size= 1024kB time=00:00:14.60 bitrate= 574.4kbits/frame= 509 fps= 92 q=-0.0 size= 1280kB time=00:00:18.04 bitrate= 581.2kbits/frame= 604 fps=100 q=-0.0 size= 1792kB time=00:00:21.19 bitrate= 692.5kbits/frame= 705 fps=108 q=-0.0 size= 2048kB time=00:00:24.56 bitrate= 682.9kbits/frame= 809 fps=115 q=-0.0 size= 2304kB time=00:00:28.04 bitrate= 672.9kbits/frame= 909 fps=121 q=-0.0 size= 2816kB time=00:00:31.37 bitrate= 735.4kbits/frame= 1012 fps=126 q=-0.0 size= 3072kB time=00:00:34.71 bitrate= 725.0kbits/frame= 1043 fps=127 q=-0.0 Lsize= 3288kB time=00:00:34.78 bitrate= 774.3kbits/s speed=4.23x 
video:2995kB audio:255kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.168448%
[aac @ 0x7f9760024200] Qavg: 9569.656
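One thing the log points at: the output video stream is created at the encoder's default rate ("200 kb/s" in the Output #0 section), far below the roughly 15.8 Mb/s source, and h264_videotoolbox is normally driven by an explicit bitrate rather than a CRF-style quality setting. As an assumption to test rather than a confirmed fix, the same command can be run with an explicit video bitrate (the 16M figure below is only an example chosen to roughly match the source):

ffmpeg -i test.MOV -loop 1 -t 5 -i 2.jpg -f lavfi -t 5 -i anullsrc
-filter_complex "[0:v]trim=0:5,drawbox=t=fill[base];[1][base]scale2ref=iw:ih:force_original_aspect_ratio=decrease:flags=spline[2nd][base2];[base2][2nd]overlay='(W-w)/2':'(H-h)/2'[padded];[padded][2:a][0:v][0:a]concat=n=2:v=1:a=1[v][a]"
-c:v h264_videotoolbox -b:v 16M -c:a aac -map "[v]" -map "[a]" output.mp4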



-
No such file or directory Error with FFMPEG + CarrierWave screenshot method
10 July 2013, by dodgerogers747
I am using AWS CORS to upload videos to my site, all of which works as planned.
I have the following model method which runs as an after_create callback (for speed) to take a screenshot from the video file on AWS. I plan to move this out into a delayed job but I don't think this will solve this particular issue. Please advise if mistaken.
I use FFMPEG to take a screenshot from the AWS self.file location, then send the file to CarrierWave by saving it to self.screenshot, where it is uploaded to AWS.
Approx. 50% of the time it errors out with
Errno::ENOENT - No such file or directory
for the location of the screenshot image. How can I rectify my code to remove this error, and why does it only occur around 50% of the time? If anyone needs more code, just shout.
video.rb
after_create :take_screenshot
mount_uploader :screenshot, ImageUploader
def take_screenshot
  location = "#{Rails.root}/public/uploads/tmp/screenshots/#{unique}_#{File.basename(file)}.jpg"
  system `ffmpeg #{log_level} -i #{self.file} -ss 00:00:0#{time_frame} -vframes 1 #{location}`
  logger.debug "Trying to take screenshot from #{self.file}"
  # pass the actual file to CarrierWave to handle the image upload
  self.screenshot = File.open(location)
  self.save
  logger.debug "Deleting tmp file: #{location}: #{File.delete(location)}" if self.screenshot.present?
end

def unique
  (0..6).map { (65 + rand(26)).chr }.join
end

def log_level
  "-loglevel panic"
end

def time_frame
  rand(0..3)
end

Stack trace:
Started POST "/videos" for 127.0.0.1 at 2013-07-10 03:58:49 +0800
Processing by VideosController#create as JS
Parameters: {"utf8"=>"✓", "authenticity_token"=>"6M1Ia+Ag2E3HVKH2PO/p7jewxSpMPdWeVHGA933Bzjw=", "video"=>{"file"=>"http://bucketname.s3.amazonaws.com/uploads/video/file/671a87fb-91de-4eaf-a38a-1b25c51798e5/Good_7iron.m4v"}}
User Load (0.3ms) SELECT `users`.* FROM `users` WHERE `users`.`id` = 9 LIMIT 1
(0.1ms) BEGIN
SQL (0.2ms) INSERT INTO `videos` (`created_at`, `file`, `question_id`, `screenshot`, `updated_at`, `user_id`) VALUES ('2013-07-09 19:58:49', 'http://bucketname.s3.amazonaws.com/uploads/video/file/671a87fb-91de-4eaf-a38a-1b25c51798e5/Good_7iron.m4v', NULL, NULL, '2013-07-09 19:58:49', 9)
Trying to take screenshot from http://bucketname.s3.amazonaws.com/uploads/video/file/671a87fb-91de-4eaf-a38a-1b25c51798e5/Good_7iron.m4v
(0.8ms) ROLLBACK
Completed 500 Internal Server Error in 3550ms
Errno::ENOENT - No such file or directory - /Users/me/rails/project/public/uploads/tmp/screenshots/WCACLIC_Good_7iron.m4v.jpg:
app/models/video.rb:24:in `initialize'
app/models/video.rb:24:in `open'
app/models/video.rb:24:in `take_screenshot'