
Other articles (38)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.
-
Encoding and conversion into formats readable on the Internet
10 April 2011
MediaSPIP converts and re-encodes uploaded documents to make them readable on the Internet and automatically usable without any intervention from the content creator.
Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM; a minimal transcoding sketch is given after this list. The "MP4" version is also used by the fallback Flash player needed for older browsers.
Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)
-
Media-specific libraries and software
10 December 2010
For correct and optimal operation, several things need to be taken into consideration.
After installing apache2, mysql and php5, it is important to install the other required software whose installation is described in the related links: a set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio so as to support as many file types as possible (cf. this tutorial); FFMpeg built with the maximum number of decoders and (...)
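As a rough illustration of the transcoding described above, the sketch below calls ffmpeg to produce the three HTML5 video formats from a single upload. The file name is hypothetical and this is not MediaSPIP's actual code, only a minimal example of the idea, assuming ffmpeg is on the PATH.

# Minimal sketch: transcode one uploaded file into the HTML5 formats listed above.
# "upload.avi" is a hypothetical source file.
import subprocess

SOURCE = "upload.avi"
TARGETS = {
    "mp4":  ["-c:v", "libx264", "-c:a", "aac"],
    "webm": ["-c:v", "libvpx", "-c:a", "libvorbis"],
    "ogv":  ["-c:v", "libtheora", "-c:a", "libvorbis"],
}

for ext, codecs in TARGETS.items():
    # One ffmpeg run per target format; -y overwrites existing outputs.
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *codecs, f"upload.{ext}"], check=True)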
On other sites (5796)
-
Output file does not show up after executing ffmpeg command [closed]
19 February 2024, by davai
I'm using ffmpeg to combine an MP3+G file pair and produce an MP4 file. I've placed the ffmpeg source code / .exe in the project folder, and the MP3 and CDG files are also in the project folder. I also set the MP4 output to be written to the project folder. The strange thing is that initially I was producing output files, but while tweaking the constant rate factor the MP4 output simply stopped appearing. I'm not receiving any errors while running the code, and it even prints that the file has been created successfully, despite nothing showing up in the project folder.



String mp3FilePath = "C:/Users/exampleuser/pfolder/example.mp3";
String gFilePath = "C:/Users/exampleuser/pfolder/example.cdg";
String mp4OutputPath = "C:/Users/exampleuser/pfolder/example.mp4";

try
{
    String[] command = {
        "C:/Users/tonih/IdeaProjects/MP3GtoMP4Conversion/ffmpeg/ffmpeg-2024-02-19-git-0c8e64e268-full_build/bin/ffmpeg.exe",
        "-i", mp3FilePath,   // Input MP3 file
        "-r", "25",          // Frame rate
        "-loop", "1",        // Loop input video
        "-i", gFilePath,     // Input G file
        "-c:v", "libx264",   // Video codec
        "-preset", "slow",   // Encoding preset for quality (choose according to your requirement)
        "-crf", "18",        // Constant Rate Factor (lower is higher quality, typical range 18-28)
        "-c:a", "aac",       // Audio codec
        "-b:a", "320k",      // Audio bitrate
        "-shortest",         // Stop when the shortest stream ends
        mp4OutputPath        // Output MP4 file
    };

    Process process = Runtime.getRuntime().exec(command);
    process.waitFor();
    System.out.println("MP4 file created successfully: " + mp4OutputPath);
}
catch (IOException | InterruptedException e)
{
    e.printStackTrace();
}
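Because waitFor() only blocks and the snippet prints success regardless of the exit status, ffmpeg's own diagnostics never become visible. One way to surface them is to run the same command while capturing the exit code and stderr; a minimal sketch of that check, written in Python with the paths as placeholders:

# Minimal sketch: run the same ffmpeg command, but report the exit code and
# stderr instead of printing success unconditionally. Paths are placeholders.
import subprocess

command = [
    "C:/Users/exampleuser/ffmpeg/bin/ffmpeg.exe",
    "-i", "C:/Users/exampleuser/pfolder/example.mp3",
    "-r", "25",
    "-loop", "1",
    "-i", "C:/Users/exampleuser/pfolder/example.cdg",
    "-c:v", "libx264", "-preset", "slow", "-crf", "18",
    "-c:a", "aac", "-b:a", "320k",
    "-shortest",
    "C:/Users/exampleuser/pfolder/example.mp4",
]

result = subprocess.run(command, capture_output=True, text=True)
if result.returncode != 0:
    # ffmpeg reports the real reason (bad option, unreadable input, ...) on stderr
    print("ffmpeg failed with exit code", result.returncode)
    print(result.stderr)
else:
    print("MP4 file created:", command[-1])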



-
How to send a camera capture frame to YouTube streaming using ffmpeg
2 March 2024, by 유혜진

import subprocess
import cv2

# YouTube streaming settings
YOUTUBE_URL = "rtmp://a.rtmp.youtube.com/live2/"
KEY = "..."

# OpenCV camera setup
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

# FFmpeg command for streaming
command = [r"C:\utility\ffmpeg\ffmpeg-2024-02-22-git-76b2bb96b4-full_build\ffmpeg-2024-02-22-git-76b2bb96b4-full_build\bin\ffmpeg.exe",
           '-f', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', '640x480',
           '-i', '-',
           '-ar', '44100',
           '-ac', '2',
           '-acodec', 'pcm_s16le',
           '-f', 's16le',
           '-ac', '2',
           '-i', 'NUL',
           '-acodec', 'aac',
           '-ab', '128k',
           '-strict', 'experimental',
           '-vcodec', 'h264',
           '-pix_fmt', 'yuv420p',
           '-g', '50',
           '-vb', '1000k',
           '-profile:v', 'baseline',
           '-preset', 'ultrafast',
           '-r', '30',
           '-f', 'flv',
           f"{YOUTUBE_URL}/{KEY}"]

# Open a subprocess with FFmpeg
pipe = subprocess.Popen(command, stdin=subprocess.PIPE)

while True:
    # Read a frame from the camera
    ret, frame = cap.read()
    if not ret:
        break

    # Display the frame
    cv2.imshow('Frame', frame)
    cv2.waitKey(1)  # Wait for 1ms

    # Send the frame through the pipe for streaming
    pipe.stdin.write(frame.tobytes())

    # Check for 'q' key press to stop streaming
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release resources
cap.release()
cv2.destroyAllWindows()



I'm trying to capture frames from the camera with OpenCV and send them to a YouTube live broadcast via ffmpeg. The YouTube stream does start when I run this code, but it shows a black screen rather than the camera picture, and I can't see what the problem is.


At first the stream wouldn't start at all; after changing various command options it now does start when I run the code. There are plenty of references for streaming an MP4 file, but very few for streaming a real-time capture. My goal is to process the camera frames with OpenCV and then send them to the stream. I don't know what the problem is. Please help me.
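Two things worth ruling out in this setup, without claiming either is the definitive cause: the rawvideo input has no input frame rate declared, and the "silent" audio track is read from NUL, which delivers no samples at all. The hedged sketch below declares -framerate on the pipe input and generates real silence with ffmpeg's anullsrc source instead; the ffmpeg path, URL and stream key are placeholders.

# Hedged sketch: same OpenCV capture loop, but the ffmpeg command declares the
# input frame rate for the raw pipe and synthesizes silence with anullsrc
# instead of reading from NUL. "ffmpeg", YOUTUBE_URL and KEY are placeholders.
import subprocess
import cv2

YOUTUBE_URL = "rtmp://a.rtmp.youtube.com/live2"
KEY = "..."  # stream key placeholder

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

command = [
    "ffmpeg",                                    # assumed to be on the PATH
    "-f", "rawvideo", "-pix_fmt", "bgr24",
    "-s", "640x480", "-framerate", "30",         # input options for the raw pipe
    "-i", "-",
    "-f", "lavfi",                               # synthetic silent audio track
    "-i", "anullsrc=channel_layout=stereo:sample_rate=44100",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "-preset", "ultrafast", "-g", "60", "-b:v", "2500k",
    "-c:a", "aac", "-b:a", "128k",
    "-shortest",                                 # stop when the video pipe ends
    "-f", "flv",
    f"{YOUTUBE_URL}/{KEY}",
]

pipe = subprocess.Popen(command, stdin=subprocess.PIPE)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    pipe.stdin.write(frame.tobytes())            # one raw BGR frame per write
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

pipe.stdin.close()
cap.release()
cv2.destroyAllWindows()

If the picture still stays black, comparing ffmpeg's stderr output between the two commands is usually the quickest way to narrow the cause down.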


-
FFmpeg: Specify pixel format for STD_IN input
27 February 2024, by ShadowMagic896
This is the current command:


ffmpeg -i pipe:0 -pix_fmt yuv420p -f mp4 -vf "transpose=1" -f matroska pipe:1


Essentially, it takes an MP4 file, rotates it 90 degrees, converts it to Matroska (MKV), and writes the result to STDOUT.


This is the error:


WARNING:root:[mov,mp4,m4a,3gp,3g2,mj2 @ 000001ff9979dd00] stream 0, offset 0x50: partial file
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001ff9979dd00] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 170x144, 32 kb/s): unspecified pixel format
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'pipe:0':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp41isom
 creation_time : 2024-02-26T20:53:03.000000Z
 Duration: 00:00:03.88, start: 0.000000, bitrate: N/A
 Stream #0:0[0x1](und): Video: h264 (avc1 / 0x31637661), none, 170x144, 32 kb/s, 30 fps, 30 tbr, 30k tbn (default)
 Metadata:
 creation_time : 2024-02-26T20:53:03.000000Z
 handler_name : VideoHandler
 vendor_id : [0][0][0][0]
 encoder : AVC Coding
 Stream #0:1[0x2](und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 192 kb/s (default)
 Metadata:
 creation_time : 2024-02-26T20:53:03.000000Z
 handler_name : SoundHandler
 vendor_id : [0][0][0][0]
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
 Stream #0:1 -> #0:1 (aac (native) -> vorbis (libvorbis))
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001ff9979dd00] stream 0, offset 0x50: partial file
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 000001ff9978b080] Error during demuxing: Invalid data found when processing input
Cannot determine format of input 0:0 after EOF
[vf#0:0 @ 000001ff997d6040] Task finished with error code: -1094995529 (Invalid data found when processing input)
[vf#0:0 @ 000001ff997d6040] Terminating thread with return code -1094995529 (Invalid data found when processing input)
[aost#0:1/libvorbis @ 000001ff99cd1780] No filtered frames for output stream, trying to initialize anyway.
[vost#0:0/libx264 @ 000001ff997d4fc0] Could not open encoder before EOF
[vost#0:0/libx264 @ 000001ff997d4fc0] Task finished with error code: -22 (Invalid argument)
[vost#0:0/libx264 @ 000001ff997d4fc0] Terminating thread with return code -22 (Invalid argument)
[out#0/matroska @ 000001ff997a6180] Nothing was written into output file, because at least one of its streams received no packets.
frame= 0 fps=0.0 q=0.0 Lsize= 0KiB time=N/A bitrate=N/A speed=N/A 
Conversion failed!



I am running this via Python; this is the script:


async def send_proc_pipe(self) -> bytes:

    command = f"ffmpeg -hide_banner -loglevel error -i pipe:0 -pix_fmt yuv420p -f mp4 -vf \"transpose=1\" -f matroska pipe:1"

    proc = await asyncio.create_subprocess_shell(
        cmd=command,
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE
    )

    std_in = self.blob

    c_out, c_err = await proc.communicate(std_in)
    if c_err:
        logging.warning(c_err.decode("utf-8"))

    return c_out



I'm not really sure what else to try here. I've re-ordered the parameters and tried different pixel formats with no success.
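The log points at the demuxer rather than the pixel format: pipe:0 is not seekable, so an MP4 whose moov atom sits at the end of the file cannot be fully parsed ("stream 0, offset 0x50: partial file", "Could not find codec parameters"). One workaround, sketched below under the assumption that self.blob holds the complete MP4 bytes as in the original script, is to write the blob to a temporary file and let ffmpeg read it by path; the rest of the pipeline stays the same.

# Hedged sketch: give ffmpeg a seekable temporary file instead of stdin, then
# rotate and remux to Matroska on stdout as before. Assumes self.blob is the
# complete MP4 payload; this is a method of the same class as the original.
import asyncio
import logging
import os
import tempfile

async def send_proc_pipe(self) -> bytes:
    with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp:
        tmp.write(self.blob)
        tmp_path = tmp.name

    try:
        command = (
            f'ffmpeg -hide_banner -loglevel error -i "{tmp_path}" '
            f'-vf "transpose=1" -pix_fmt yuv420p -f matroska pipe:1'
        )
        proc = await asyncio.create_subprocess_shell(
            cmd=command,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        c_out, c_err = await proc.communicate()
        if c_err:
            logging.warning(c_err.decode("utf-8"))
        return c_out
    finally:
        os.remove(tmp_path)  # remove the temporary input file

Alternatively, if the producer of the MP4 can be changed, writing it with -movflags +faststart places the moov atom at the front of the file and makes the original pipe:0 approach workable.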