
Other articles (13)
-
Submit bugs and patches
13 April 2011. Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
-
Media quality after processing
21 June 2013 — Properly configuring the software that processes media matters for striking a balance between all parties involved (the host's bandwidth, media quality for the editor and the visitor, accessibility for the visitor). How should you set the quality of your media?
The higher the media quality, the more bandwidth is used, and a visitor with a low-bandwidth internet connection will have to wait longer. Conversely, the lower the media quality, the more degraded the media becomes, or even (...)
-
What is a form mask?
13 June 2013 — A form mask is a customization of the form used to publish media, sections, news items, editorials and links to websites.
Each object publication form can therefore be customized.
To access form field customization, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires" (form mask configuration).
Then select the form to modify by clicking on its object type. (...)
On other sites (4493)
-
Yet another ffmpeg concat audio sync issue [closed]
15 March 2024, by Demiurg — I've read through dozens of posts and tried many suggestions, but nothing seems to work for me. The funny part is that the video is fine in some players (e.g. QuickTime) but not in others (e.g. Chrome).


This is what I currently use:


ffmpeg -i segment.mp4 -q 0 -c copy segment.ts
ffmpeg -f concat -i videos.txt -c copy -y final.mp4
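
For reference, the concat demuxer expects videos.txt to be a plain list of file directives, one per segment. A minimal sketch (the segment names below are assumptions, not taken from the question):

file 'segment1.ts'
file 'segment2.ts'
file 'segment3.ts'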



This is what ffmpeg shows for the originals


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '52M35S_1710280355.mp4':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp42isom
 creation_time : 2024-03-12T21:53:35.000000Z
 Duration: 00:00:59.86, start: 0.000000, bitrate: 851 kb/s
 Stream #0:0[0x1](und): Audio: opus (Opus / 0x7375704F), 48000 Hz, mono, fltp, 10 kb/s (default)
 Metadata:
 creation_time : 2024-03-12T21:53:35.000000Z
 vendor_id : [0][0][0][0]
 Stream #0:1[0x2](und): Video: hevc (Main) (hvc1 / 0x31637668), yuvj420p(pc, bt709), 1920x1080, 836 kb/s, 10.02 fps, 10 tbr, 90k tbn (default)
 Metadata:
 creation_time : 2024-03-12T21:53:35.000000Z
 vendor_id : [0][0][0][0]



-
FFmpeg: Specify pixel format for STD_IN input
27 February 2024, by ShadowMagic896 — This is the current command:


ffmpeg -i pipe:0 -pix_fmt yuv420p -f mp4 -vf "transpose=1" -f matroska pipe:1


Essentially, it takes an MP4 file, rotates it 90 degrees, converts it to Matroska (MKV), and writes the result to STD_OUT.


This is the error:


WARNING:root:[mov,mp4,m4a,3gp,3g2,mj2 @ 000001ff9979dd00] stream 0, offset 0x50: partial file
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001ff9979dd00] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 170x144, 32 kb/s): unspecified pixel format
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'pipe:0':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp41isom
 creation_time : 2024-02-26T20:53:03.000000Z
 Duration: 00:00:03.88, start: 0.000000, bitrate: N/A
 Stream #0:0[0x1](und): Video: h264 (avc1 / 0x31637661), none, 170x144, 32 kb/s, 30 fps, 30 tbr, 30k tbn (default)
 Metadata:
 creation_time : 2024-02-26T20:53:03.000000Z
 handler_name : VideoHandler
 vendor_id : [0][0][0][0]
 encoder : AVC Coding
 Stream #0:1[0x2](und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 192 kb/s (default)
 Metadata:
 creation_time : 2024-02-26T20:53:03.000000Z
 handler_name : SoundHandler
 vendor_id : [0][0][0][0]
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
 Stream #0:1 -> #0:1 (aac (native) -> vorbis (libvorbis))
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001ff9979dd00] stream 0, offset 0x50: partial file
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 000001ff9978b080] Error during demuxing: Invalid data found when processing input
Cannot determine format of input 0:0 after EOF
[vf#0:0 @ 000001ff997d6040] Task finished with error code: -1094995529 (Invalid data found when processing input)
[vf#0:0 @ 000001ff997d6040] Terminating thread with return code -1094995529 (Invalid data found when processing input)
[aost#0:1/libvorbis @ 000001ff99cd1780] No filtered frames for output stream, trying to initialize anyway.
[vost#0:0/libx264 @ 000001ff997d4fc0] Could not open encoder before EOF
[vost#0:0/libx264 @ 000001ff997d4fc0] Task finished with error code: -22 (Invalid argument)
[vost#0:0/libx264 @ 000001ff997d4fc0] Terminating thread with return code -22 (Invalid argument)
[out#0/matroska @ 000001ff997a6180] Nothing was written into output file, because at least one of its streams received no packets.
frame= 0 fps=0.0 q=0.0 Lsize= 0KiB time=N/A bitrate=N/A speed=N/A 
Conversion failed!



I am running this via Python; this is the script:


async def send_proc_pipe(self) -> bytes:

    command = f"ffmpeg -hide_banner -loglevel error -i pipe:0 -pix_fmt yuv420p -f mp4 -vf \"transpose=1\" -f matroska pipe:1"

    proc = await asyncio.create_subprocess_shell(
        cmd=command,
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE
    )

    std_in = self.blob

    c_out, c_err = await proc.communicate(std_in)
    if c_err:
        logging.warning(c_err.decode("utf-8"))

    return c_out



I'm not really sure what else to try here. I've re-ordered the parameters and tried different pixel formats with no success.
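
The log's own hint points at the 'analyzeduration' and 'probesize' input options; a minimal variant sometimes tried is to raise both before the piped input (the values below are illustrative assumptions, and the extra -f mp4 from the original command is omitted here for clarity):

ffmpeg -analyzeduration 100M -probesize 100M -i pipe:0 -pix_fmt yuv420p -vf "transpose=1" -f matroska pipe:1

Note that if the MP4's index (the moov atom) sits at the end of the file, reading from a non-seekable pipe may still fail with the same "partial file" error regardless of these options.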


-
How to send a camera capture frame to YouTube streaming using ffmpeg
2 March 2024, by 유혜진

import subprocess
import cv2

# YouTube streaming settings
YOUTUBE_URL = "rtmp://a.rtmp.youtube.com/live2/"
KEY = "..."

# OpenCV camera setup
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

# FFmpeg command for streaming
command = [r"C:\utility\ffmpeg\ffmpeg-2024-02-22-git-76b2bb96b4-full_build\ffmpeg-2024-02-22-git-76b2bb96b4-full_build\bin\ffmpeg.exe",
 '-f', 'rawvideo',
 '-pix_fmt', 'bgr24',
 '-s', '640x480',
 '-i', '-',
 '-ar', '44100',
 '-ac', '2',
 '-acodec', 'pcm_s16le',
 '-f', 's16le',
 '-ac', '2',
 '-i', 'NUL', 
 '-acodec', 'aac',
 '-ab', '128k',
 '-strict', 'experimental',
 '-vcodec', 'h264',
 '-pix_fmt', 'yuv420p',
 '-g', '50',
 '-vb', '1000k',
 '-profile:v', 'baseline',
 '-preset', 'ultrafast',
 '-r', '30',
 '-f', 'flv', 
 f"{YOUTUBE_URL}/{KEY}",]

# Open a subprocess with FFmpeg
pipe = subprocess.Popen(command, stdin=subprocess.PIPE)

while True:
    # Read a frame from the camera
    ret, frame = cap.read()
    if not ret:
        break

    # Display the frame
    cv2.imshow('Frame', frame)
    cv2.waitKey(1)  # Wait for 1ms

    # Send the frame through the pipe for streaming
    pipe.stdin.write(frame.tobytes())

    # Check for 'q' key press to stop streaming
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release resources
cap.release()
cv2.destroyAllWindows()



I'm trying to capture the camera image with OpenCV and send the frames to a YouTube live broadcast via ffmpeg. The YouTube stream does start when I run this code; however, it shows a black screen instead of the camera image. I don't see what the problem is.


At first the stream didn't start at all, but after changing various command options and re-running the code, I managed to get it to start. There are many references for transmitting MP4 files, but not many for transmitting a real-time capture. I want to process the camera image with OpenCV and then send it to the stream. I don't know what the problem is. Please help me.
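
One way to narrow this down, as a debugging sketch rather than a fix: keep the same rawvideo input settings but write to a local file instead of the RTMP URL, then play the result locally to see whether the piped frames themselves are black (the output filename test.flv is an assumption):

ffmpeg -f rawvideo -pix_fmt bgr24 -s 640x480 -r 30 -i - -vcodec libx264 -pix_fmt yuv420p -f flv test.flv

If the local file looks correct, the problem is more likely on the RTMP/YouTube side of the command; if it is also black, the frames coming out of OpenCV are the first thing to check.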