
Media (1)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (34)
-
Emballe médias: what is it for?
4 February 2011, by
This plugin is designed to manage sites for publishing documents of all types.
It creates "media" items: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether that document is audio, video, image or text; only one document can be linked to a given "media" article;
-
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded as OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded as Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
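
The excerpt does not show the commands MediaSPIP runs internally; purely as an illustration of that kind of conversion, here is a minimal sketch using FFmpeg from Python (file names and codec settings are assumptions, not MediaSPIP's actual pipeline):

import subprocess

# Illustrative only: convert an uploaded video to WebM (HTML5) and MP4 (Flash),
# roughly the kind of transcoding the excerpt describes. The file names and
# codec choices here are assumptions.
src = "upload.avi"  # hypothetical uploaded file

subprocess.run(["ffmpeg", "-i", src, "-c:v", "libvpx", "-c:a", "libvorbis",
                "upload.webm"], check=True)
subprocess.run(["ffmpeg", "-i", src, "-c:v", "libx264", "-pix_fmt", "yuv420p",
                "-c:a", "aac", "upload.mp4"], check=True)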
On other sites (7953)
-
How to send a camera capture frame to YouTube streaming using ffmpeg
2 March 2024, by 유혜진
import subprocess
import cv2

# YouTube streaming settings
YOUTUBE_URL = "rtmp://a.rtmp.youtube.com/live2/"
KEY = "..."

# OpenCV camera setup
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

# FFmpeg command for streaming
command = [r"C:\utility\ffmpeg\ffmpeg-2024-02-22-git-76b2bb96b4-full_build\ffmpeg-2024-02-22-git-76b2bb96b4-full_build\bin\ffmpeg.exe",
 '-f', 'rawvideo',
 '-pix_fmt', 'bgr24',
 '-s', '640x480',
 '-i', '-',
 '-ar', '44100',
 '-ac', '2',
 '-acodec', 'pcm_s16le',
 '-f', 's16le',
 '-ac', '2',
 '-i', 'NUL', 
 '-acodec', 'aac',
 '-ab', '128k',
 '-strict', 'experimental',
 '-vcodec', 'h264',
 '-pix_fmt', 'yuv420p',
 '-g', '50',
 '-vb', '1000k',
 '-profile:v', 'baseline',
 '-preset', 'ultrafast',
 '-r', '30',
 '-f', 'flv', 
 f"{YOUTUBE_URL}/{KEY}",]

# Open a subprocess with FFmpeg
pipe = subprocess.Popen(command, stdin=subprocess.PIPE)

while True:
    # Read a frame from the camera
    ret, frame = cap.read()
    if not ret:
        break

    # Display the frame
    cv2.imshow('Frame', frame)
    cv2.waitKey(1)  # Wait for 1 ms

    # Send the frame through the pipe for streaming
    pipe.stdin.write(frame.tobytes())

    # Check for 'q' key press to stop streaming
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release resources
cap.release()
cv2.destroyAllWindows()



I'm trying to capture the camera feed with OpenCV and send each frame to a YouTube live broadcast via FFmpeg. The YouTube stream does start when I run this code, but it shows a black screen instead of the camera feed. I can't see what the problem is.


At first the stream would not start at all; after changing various command options it now starts. There are many references for streaming an existing MP4 file, but few for streaming frames captured in real time. My goal is to process the camera frames with OpenCV and then send them to the stream. I don't know what the problem is. Please help me.
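
For comparison, one commonly used shape for such a pipeline is sketched below; this is not the poster's command or a guaranteed fix. It declares an explicit input frame rate for the raw video and generates silent audio with anullsrc instead of reading PCM from NUL, and it assumes the same YOUTUBE_URL, KEY and 640x480 capture defined above:

# Sketch only: an alternative FFmpeg command list for the same capture loop.
command = [
    "ffmpeg",
    # video input: raw BGR frames piped from OpenCV, with an explicit rate
    "-f", "rawvideo",
    "-pix_fmt", "bgr24",
    "-s", "640x480",
    "-framerate", "30",
    "-i", "-",
    # audio input: generated silence (YouTube expects an audio track)
    "-f", "lavfi",
    "-i", "anullsrc=channel_layout=stereo:sample_rate=44100",
    # encoding
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",
    "-preset", "ultrafast",
    "-g", "60",
    "-b:v", "1000k",
    "-c:a", "aac",
    "-b:a", "128k",
    "-shortest",
    "-f", "flv",
    f"{YOUTUBE_URL}{KEY}",  # YOUTUBE_URL above already ends with "/"
]
# The rest of the capture loop (cap.read(), pipe.stdin.write(...)) stays the same.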


-
FFmpeg: Specify pixel format for STD_IN input
27 February 2024, by ShadowMagic896
This is the current command:


ffmpeg -i pipe:0 -pix_fmt yuv420p -f mp4 -vf "transpose=1" -f matroska pipe:1


Essentially, it takes an MP4 file, rotates it 90 degrees, converts it to Matroska (MKV), and writes the result to STD_OUT.


This is the error:


WARNING:root:[mov,mp4,m4a,3gp,3g2,mj2 @ 000001ff9979dd00] stream 0, offset 0x50: partial file
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001ff9979dd00] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 170x144, 32 kb/s): unspecified pixel format
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'pipe:0':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp41isom
 creation_time : 2024-02-26T20:53:03.000000Z
 Duration: 00:00:03.88, start: 0.000000, bitrate: N/A
 Stream #0:0[0x1](und): Video: h264 (avc1 / 0x31637661), none, 170x144, 32 kb/s, 30 fps, 30 tbr, 30k tbn (default)
 Metadata:
 creation_time : 2024-02-26T20:53:03.000000Z
 handler_name : VideoHandler
 vendor_id : [0][0][0][0]
 encoder : AVC Coding
 Stream #0:1[0x2](und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 192 kb/s (default)
 Metadata:
 creation_time : 2024-02-26T20:53:03.000000Z
 handler_name : SoundHandler
 vendor_id : [0][0][0][0]
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
 Stream #0:1 -> #0:1 (aac (native) -> vorbis (libvorbis))
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001ff9979dd00] stream 0, offset 0x50: partial file
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 000001ff9978b080] Error during demuxing: Invalid data found when processing input
Cannot determine format of input 0:0 after EOF
[vf#0:0 @ 000001ff997d6040] Task finished with error code: -1094995529 (Invalid data found when processing input)
[vf#0:0 @ 000001ff997d6040] Terminating thread with return code -1094995529 (Invalid data found when processing input)
[aost#0:1/libvorbis @ 000001ff99cd1780] No filtered frames for output stream, trying to initialize anyway.
[vost#0:0/libx264 @ 000001ff997d4fc0] Could not open encoder before EOF
[vost#0:0/libx264 @ 000001ff997d4fc0] Task finished with error code: -22 (Invalid argument)
[vost#0:0/libx264 @ 000001ff997d4fc0] Terminating thread with return code -22 (Invalid argument)
[out#0/matroska @ 000001ff997a6180] Nothing was written into output file, because at least one of its streams received no packets.
frame= 0 fps=0.0 q=0.0 Lsize= 0KiB time=N/A bitrate=N/A speed=N/A 
Conversion failed!



I am running this via Python; this is the script:


async def send_proc_pipe(self) -> bytes:

    command = f"ffmpeg -hide_banner -loglevel error -i pipe:0 -pix_fmt yuv420p -f mp4 -vf \"transpose=1\" -f matroska pipe:1"

    proc = await asyncio.create_subprocess_shell(
        cmd=command,
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE
    )

    std_in = self.blob

    c_out, c_err = await proc.communicate(std_in)
    if c_err:
        logging.warning(c_err.decode("utf-8"))

    return c_out



I'm not really sure what else to try here. I've re-ordered the parameters and tried different pixel formats with no success.
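
The "partial file" and "unspecified pixel format" messages typically mean FFmpeg cannot seek back in pipe:0 to read the moov atom, which many MP4 files store at the end. One common workaround, sketched below as an assumption rather than part of the post (send_proc_file and the temporary-file approach are not the poster's code), is to give FFmpeg a seekable file instead of a pipe:

import asyncio
import logging
import os
import tempfile

async def send_proc_file(blob: bytes) -> bytes:
    # Write the MP4 buffer to a seekable temporary file so FFmpeg can locate
    # the moov atom and read the codec parameters it could not find on a
    # non-seekable pipe.
    with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp:
        tmp.write(blob)
        path = tmp.name
    try:
        proc = await asyncio.create_subprocess_exec(
            "ffmpeg", "-hide_banner", "-loglevel", "error",
            "-i", path,
            "-pix_fmt", "yuv420p",
            "-vf", "transpose=1",
            "-f", "matroska", "pipe:1",
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        c_out, c_err = await proc.communicate()
        if c_err:
            logging.warning(c_err.decode("utf-8"))
        return c_out
    finally:
        os.unlink(path)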


-
Yet another ffmpeg concat audio sync issue [closed]
15 March 2024, by Demiurg
I've read through dozens of posts and tried many suggestions, but nothing seems to work for me. The funny part is that the video is fine in some players (e.g. QuickTime) but not in others (e.g. Chrome).


This is what I currently use:


ffmpeg -i segment.mp4 -q 0 -c copy segment.ts
ffmpeg -f concat -i videos.txt -c copy -y final.mp4
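
videos.txt itself is not shown in the post; the concat demuxer expects a plain-text list with one file directive per segment, so a hypothetical reconstruction of the whole step from Python might look like this (segment names are assumed):

import subprocess

# Assumed segment names; the concat demuxer requires all entries to share the
# same codecs and parameters, since -c copy performs no re-encoding.
segments = ["seg1.ts", "seg2.ts", "seg3.ts"]

with open("videos.txt", "w") as f:
    for seg in segments:
        f.write(f"file '{seg}'\n")

subprocess.run(
    ["ffmpeg", "-f", "concat", "-i", "videos.txt", "-c", "copy", "-y", "final.mp4"],
    check=True,
)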



This is what ffmpeg shows for the originals


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '52M35S_1710280355.mp4':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp42isom
 creation_time : 2024-03-12T21:53:35.000000Z
 Duration: 00:00:59.86, start: 0.000000, bitrate: 851 kb/s
 Stream #0:0[0x1](und): Audio: opus (Opus / 0x7375704F), 48000 Hz, mono, fltp, 10 kb/s (default)
 Metadata:
 creation_time : 2024-03-12T21:53:35.000000Z
 vendor_id : [0][0][0][0]
 Stream #0:1[0x2](und): Video: hevc (Main) (hvc1 / 0x31637668), yuvj420p(pc, bt709), 1920x1080, 836 kb/s, 10.02 fps, 10 tbr, 90k tbn (default)
 Metadata:
 creation_time : 2024-03-12T21:53:35.000000Z
 vendor_id : [0][0][0][0]