
Other articles (45)
-
Media quality after processing
21 June 2013
Properly configuring the software that processes media is important to strike a balance between the parties involved (the host's bandwidth, media quality for the editor and the visitor, accessibility for the visitor). How should you set the quality of your media?
The higher the media quality, the more bandwidth is used. A visitor on a low-bandwidth connection will have to wait longer. Conversely, the lower the media quality, the more degraded the media becomes, or even (...)
-
Updating from version 0.1 to 0.2
24 June 2013
An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What's new?
Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Customising by adding your logo, banner or background image
5 September 2013
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
On other sites (7146)
-
"Conversion failed !" when trying to write to frame to a rmtp stream
8 May 2023, by Loc Bui Nhien
I'm trying to write video frames to an RTMP stream using FFmpeg and Python's subprocess module. The code picks up videos from a 'ReceivedRecording' directory and streams them to an RTMP server run by nginx. My approach seems to work, but at times the code stops running due to


[flv @ 0x55b933694b40] Failed to update header with correct duration.
[flv @ 0x55b933694b40] Failed to update header with correct filesize.



and


Conversion failed


then


BrokenPipeError: [Errno 32] Broken pipe


Here is my implementation of the task:


import os
import time
import subprocess

import cv2

rtmp_url = "rtmp://..."

received_video_path = 'ReceivedRecording'

p = None  # ffmpeg subprocess; started once the first video's properties are known

while True:
    video_files = sorted(os.listdir(received_video_path))
    # Loop through the videos and concatenate them (the last file in sorted order is skipped)
    for filename in video_files[:len(video_files) - 1]:
        video = cv2.VideoCapture(os.path.join(received_video_path, filename))

        if p is None:
            # First video: read its properties and start ffmpeg
            fps = int(video.get(cv2.CAP_PROP_FPS))
            width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
            height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
            # command and params for ffmpeg
            command = ['ffmpeg',
                       '-y',
                       '-f', 'rawvideo',
                       '-vcodec', 'rawvideo',
                       '-pix_fmt', 'bgr24',
                       '-s', "{}x{}".format(width, height),
                       '-re',
                       '-r', '5',
                       '-i', '-',
                       # '-filter:v', 'setpts=4.0*PTS',
                       '-c:v', 'libx264',
                       '-pix_fmt', 'yuv420p',
                       '-preset', 'ultrafast',
                       '-tune', 'zerolatency',
                       '-vsync', 'vfr',
                       # '-crf', '23',
                       '-f', 'flv',
                       rtmp_url]

            # using subprocess and pipe to fetch frame data
            p = subprocess.Popen(command, stdin=subprocess.PIPE)
        else:
            # Loop through the frames of each video
            while True:
                start_time = time.time()

                ret, frame = video.read()
                if not ret:
                    # End of video, move to next video
                    video.release()
                    break

                p.stdin.write(frame.tobytes())

        os.remove(os.path.join(received_video_path, filename))
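
The pipe to ffmpeg breaks as soon as the ffmpeg process itself exits (the log further down shows the RTMP connection being reset, after which "Conversion failed!" is ffmpeg's last line), so the next p.stdin.write() raises BrokenPipeError. Below is a minimal sketch of one way to keep the loop alive, not a drop-in fix; start_ffmpeg() and write_frame() are hypothetical helpers that reuse the same command list as above:

import subprocess

def start_ffmpeg(width, height, rtmp_url):
    # Hypothetical helper: rebuild the same ffmpeg command used above and launch it.
    command = ['ffmpeg', '-y',
               '-f', 'rawvideo', '-vcodec', 'rawvideo', '-pix_fmt', 'bgr24',
               '-s', "{}x{}".format(width, height),
               '-re', '-r', '5', '-i', '-',
               '-c:v', 'libx264', '-pix_fmt', 'yuv420p',
               '-preset', 'ultrafast', '-tune', 'zerolatency',
               '-vsync', 'vfr', '-f', 'flv', rtmp_url]
    return subprocess.Popen(command, stdin=subprocess.PIPE)

def write_frame(p, frame, width, height, rtmp_url):
    # Restart ffmpeg if it has already exited, then guard the pipe write.
    if p is None or p.poll() is not None:
        p = start_ffmpeg(width, height, rtmp_url)
    try:
        p.stdin.write(frame.tobytes())
    except BrokenPipeError:
        # ffmpeg died mid-write (e.g. the RTMP connection was reset);
        # this frame is dropped and a fresh process is used for the next one.
        p = start_ffmpeg(width, height, rtmp_url)
    return p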



Here are my nginx rtmp settings:


rtmp {
    server {
        listen 1935;
        chunk_size 7096;

        application live {
            live on;
            record off;
            push rtmp://...;
        }
    }
}



Here is the log file:


av_interleaved_write_frame(): Connection reset by peer
No more output streams to write to, finishing.
[flv @ 0x5561d1ca9b40] Failed to update header with correct duration.
[flv @ 0x5561d1ca9b40] Failed to update header with correct filesize.
Error writing trailer of rtmp://...: Connection reset by peer
frame= 1 fps=0.0 q=20.0 Lsize= 1024kB time=00:00:00.00 bitrate=8390776.0kbits/s speed=0.00619x
video:1053kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Input file #0 (pipe:):
 Input stream #0:0 (video): 1 packets read (11059200 bytes); 1 frames decoded;
 Total: 1 packets (11059200 bytes) demuxed
Output file #0 (rtmp://...):
 Output stream #0:0 (video): 1 frames encoded; 1 packets muxed (1077799 bytes);
 Total: 1 packets (1077799 bytes) muxed
1 frames successfully decoded, 0 decoding errors
[AVIOContext @ 0x5561d1cad380] Statistics: 0 seeks, 35 writeouts
[rtmp @ 0x5561d1cb7b80] Deleting stream...
[libx264 @ 0x5561d1caae40] frame I:1 Avg QP:20.00 size:1077192
[libx264 @ 0x5561d1caae40] mb I I16..4: 100.0% 0.0% 0.0%
[libx264 @ 0x5561d1caae40] coded y,uvDC,uvAC intra: 92.2% 50.5% 10.2%
[libx264 @ 0x5561d1caae40] i16 v,h,dc,p: 34% 16% 37% 12%
[libx264 @ 0x5561d1caae40] i8c dc,h,v,p: 35% 23% 33% 9%
[libx264 @ 0x5561d1caae40] kb/s:43087.68
[AVIOContext @ 0x5561d1ca6a80] Statistics: 11059200 bytes read, 0 seeks
Conversion failed!



-
Getting "invalid_request_error" when trying to pass a converted audio file to the OpenAI API
19 April 2023, by Dummy Cron
I am working on a project where I receive a URL from a webhook on my server whenever users share a voice note on my WhatsApp. I am using WATI as my WhatsApp API provider.


The file URL received is in the .opus format, which I need to convert to WAV and pass to the OpenAI Whisper API translation task.


I am trying to convert it to .wav using ffmpeg and pass it to the OpenAI API for translation processing. However, I am getting an "invalid_request_error".


import requests
import io
import subprocess

file_url = "..."          # .opus file URL received from the webhook
api_key = "..."           # WATI API key
WHISPER_API_KEY = "..."   # OpenAI API key

def transcribe_audio_to_text():
    # Fetch the audio file and convert to wav format
    headers = {'Authorization': f'Bearer {api_key}'}
    response = requests.get(file_url, headers=headers)
    audio_bytes = io.BytesIO(response.content)

    process = subprocess.Popen(['ffmpeg', '-i', '-', '-f', 'wav', '-acodec', 'libmp3lame', '-'],
                               stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    wav_audio, _ = process.communicate(input=audio_bytes.read())

    # Set the Whisper API endpoint and headers
    WHISPER_API_ENDPOINT = 'https://api.openai.com/v1/audio/translations'
    whisper_api_headers = {'Authorization': 'Bearer ' + WHISPER_API_KEY,
                           'Content-Type': 'application/json'}
    print(whisper_api_headers)

    # Send the audio file for transcription
    payload = {'model': 'whisper-1'}
    files = {'file': ('audio.wav', io.BytesIO(wav_audio), 'audio/wav')}
    # files = {'file': ('audio.wav', io.BytesIO(wav_audio), 'application/octet-stream')}
    # files = {'file': ('audio.mp3', io.BytesIO(mp3_audio), 'audio/mp3')}
    response = requests.post(WHISPER_API_ENDPOINT, headers=whisper_api_headers, data=payload)
    print(response)

    # Get the transcription text
    if response.status_code == 200:
        result = response.json()
        text = result['text']
        print(response, text)
    else:
        print('Error:', response)
        err = response.json()
        print(response.status_code)
        print(err)
        print(response.headers)

transcribe_audio_to_text()



Output:


Error: <Response [400]>
400
{'error': {'message': "We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please send an email to support@openai.com and include any relevant code you'd like help with.)", 'type': 'invalid_request_error', 'param': None, 'code': None}}
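
The error message is consistent with how the request is built: the headers force Content-Type: application/json and the files dict is never passed to requests.post(), so no audio reaches the endpoint, which expects a multipart/form-data upload. Below is a minimal sketch of just the upload step; send_to_whisper() is a hypothetical helper, and it assumes wav_audio already holds valid WAV bytes (for example produced by ffmpeg with '-f wav -acodec pcm_s16le' rather than libmp3lame):

import io
import requests

def send_to_whisper(wav_audio, whisper_api_key):
    # Hypothetical helper: post WAV bytes to the audio/translations endpoint
    # as multipart/form-data.  No manual Content-Type header is set, so
    # requests builds 'multipart/form-data; boundary=...' itself when files= is used.
    endpoint = 'https://api.openai.com/v1/audio/translations'
    headers = {'Authorization': 'Bearer ' + whisper_api_key}
    payload = {'model': 'whisper-1'}
    files = {'file': ('audio.wav', io.BytesIO(wav_audio), 'audio/wav')}
    response = requests.post(endpoint, headers=headers, data=payload, files=files)
    response.raise_for_status()
    return response.json()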


-
ffmpeg "End mismatch 1" warning, jpeg2000 to avi
11 April 2023, by jklebes
Trying to convert a directory of jpeg2000 grayscale images to a video with ffmpeg, I get warnings


[0;36m[jpeg2000 @ 0x55d8fa1b68c0] [0m[0;33mEnd mismatch 1



(and lots of


Last message repeated <n> times


)


The command was


ffmpeg -y -r 10 -start_number 1 -i <path>/surface_30///img_000%01d.jp2 -vcodec msmpeg4 -vf scale=1920:-1 -q:v 8 <path>//surface_30///surface_30.avi


The output is


ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
 built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0)
 configuration: --prefix=/home/jklebes001/miniconda3 --cc=/tmp/build/80754af9/ffmpeg_1587154242452/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --enable-avresample --enable-gmp --enable-hardcoded-tables --enable-libfreetype --enable-libvpx --enable-pthreads --enable-libopus --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame --disable-nonfree --enable-gpl --enable-gnutls --disable-openssl --enable-libopenh264 --enable-libx264
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
[0;36m[jpeg2000 @ 0x55cb44144480] [0m[0;33mEnd mismatch 1

[0m Last message repeated 1 times
 Last message repeated 2 times
 Last message repeated 3 times



...


Last message repeated 73 times

Input #0, image2, from '<path>//surface_30///img_000%01d.jp2':

 Duration: 00:00:00.20, start: 0.000000, bitrate: N/A

 Stream #0:0: Video: jpeg2000, gray, 6737x4869, 25 tbr, 25 tbn, 25 tbc

Stream mapping:

 Stream #0:0 -> #0:0 (jpeg2000 (native) -> msmpeg4v3 (msmpeg4))

Press [q] to stop, [?] for help

[0;36m[jpeg2000 @ 0x55cb4418e200] [0m[0;33mEnd mismatch 1

[0m[0;36m[jpeg2000 @ 0x55cb441900c0] [0m[0;33mEnd mismatch 1


...


(about 600 lines of "end mismatch" and "last message repeated" cut)


...


[0m[0;36m[jpeg2000 @ 0x55cb4418e8c0] [0m[0;33mEnd mismatch 1

[0mOutput #0, avi, to '<path>/surface_30///surface_30.avi':

 Metadata:

 ISFT : Lavf58.29.100

 Stream #0:0: Video: msmpeg4v3 (msmpeg4) (MP43 / 0x3334504D), yuv420p, 1920x1388, q=2-31, 200 kb/s, 10 fps, 10 tbn, 10 tbc

 Metadata:

 encoder : Lavc58.54.100 msmpeg4

 Side data:

 cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1

frame= 2 fps=0.8 q=8.0 size= 6kB time=00:00:00.20 bitrate= 227.1kbits/s speed=0.0844x 
frame= 5 fps=1.7 q=8.0 size= 6kB time=00:00:00.50 bitrate= 90.8kbits/s speed=0.172x 
frame= 5 fps=1.7 q=8.0 Lsize= 213kB time=00:00:00.50 bitrate=3494.7kbits/s speed=0.172x 
video:208kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.732246%


What is the meaning of characters like [0;33m here?


I thought it might have something to do with bit depth and color format. Setting -pix_fmt gray had no effect, and indeed the format of the jp2 images is already detected as 8-bit gray.

The output .avi exists and seems fine.
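
As a side note on checking what the decoder actually detects, here is a minimal sketch of probing one image with ffprobe, run through Python's subprocess for consistency with the posts above; img_0001.jp2 is a hypothetical file name:

import subprocess

def probe_stream(path):
    # Ask ffprobe which codec, pixel format and dimensions it detects
    # for the first video stream of the given file.
    result = subprocess.run(
        ['ffprobe', '-v', 'error', '-select_streams', 'v:0',
         '-show_entries', 'stream=codec_name,pix_fmt,width,height',
         '-of', 'default=noprint_wrappers=1', path],
        capture_output=True, text=True, check=True)
    return result.stdout

print(probe_stream('img_0001.jp2'))  # hypothetical file name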


The same command line was previously used on jpeg files and works fine with jpeg. With jpeg, the output includes


Input #0, image2, from '<path>/surface_30///img_000%01d.jpeg':

 Duration: 00:00:00.16, start: 0.000000, bitrate: N/A

 Stream #0:0: Video: mjpeg (Baseline), gray(bt470bg/unknown/unknown), 6737x4869 [SAR 1:1 DAR 6737:4869], 25 tbr, 25 tbn, 25 tbc

Stream mapping:

 Stream #0:0 -> #0:0 (mjpeg (native) -> msmpeg4v3 (msmpeg4))

Press [q] to stop, [?] for help

Output #0, avi, to '<path>/surface_30///surface_30.avi':

 Metadata:

 ISFT : Lavf58.29.100

 Stream #0:0: Video: msmpeg4v3 (msmpeg4) (MP43 / 0x3334504D), yuv420p, 6737x4869 [SAR 1:1 DAR 6737:4869], q=2-31, 200 kb/s, 10 fps, 10 tbn, 10 tbc

 Metadata:

 encoder : Lavc58.54.100 msmpeg4

 Side data:

 cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1

frame= 2 fps=0.0 q=8.0 size= 6662kB time=00:00:00.20 bitrate=272859.9kbits/s speed=0.334x 
frame= 3 fps=2.2 q=10.0 size= 10502kB time=00:00:00.30 bitrate=286764.2kbits/s speed=0.22x 
frame= 4 fps=1.9 q=12.3 size= 13574kB time=00:00:00.40 bitrate=277987.7kbits/s speed=0.19x 
frame= 4 fps=1.4 q=12.3 size= 13574kB time=00:00:00.40 bitrate=277987.7kbits/s speed=0.145x 
frame= 4 fps=1.4 q=12.3 Lsize= 13657kB time=00:00:00.40 bitrate=279702.3kbits/s speed=0.145x 
video:13652kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.041926%


detecting the mjpeg format and a similar, but more detailed, pixel format: gray(bt470bg/unknown/unknown), 6737x4869 [SAR 1:1 DAR 6737:4869].


What is the difference when switching the input to jp2?