
Other articles (42)
-
General document management
13 May 2011 — MediaSPIP never modifies the original document that is put online.
For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original downloadable in case it cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
The tables below explain what MediaSPIP can do (...) -
Sites built with MediaSPIP
2 May 2011 — This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (6218)
-
FFmpeg-wasm unpredictable errors with Next.js
21 July, by hjtomi — I use ffmpeg-wasm in Next.js.
Whenever I execute an FFmpeg command there is a chance that it throws a runtime error saying memory access out of bounds. To reproduce it, I created a page with a file-selection HTML element and a button that converts each selected image to JPEG with the following command:


await ffmpeg.exec(['-i', originalFileName, '-q:v', '5', convertedFileName]);


I have tried the same process with different images, extensions, and sizes, restarting the application, restarting the computer, different browsers, and mobile/desktop; sometimes it works, sometimes it doesn't. It feels like something out of my control, perhaps to do with WebAssembly itself.


On each run it seems to be roughly a 50-50 chance whether FFmpeg works.


I have tried changing the target extension to different types.


I have tried changing the ffmpeg command entirely to make it do something else.


Writing the files to the virtual file system works as expected.


I am on Windows 10, using Next.js 15.3.5 and ffmpeg-wasm 0.12.10.


'use client';

import * as React from 'react';
import { FFmpeg } from '@ffmpeg/ffmpeg'; // Import FFmpeg
import { fetchFile, toBlobURL } from '@ffmpeg/util'; // Import fetchFile

export default function Page() {
 const [files, setFiles] = React.useState<File[]>([]);
 const [error, setError] = React.useState<string | null>(null);

 // FFmpeg related states and ref
 const ffmpegRef = React.useRef<FFmpeg | null>(null);
 const [ffmpegLoaded, setFFmpegLoaded] = React.useState(false);
 const [isConvertingImages, setIsConvertingImages] = React.useState(false);
 const [videoGenerationProgress, setVideoGenerationProgress] = React.useState<string>('');

 // --- Load FFmpeg on component mount ---
 React.useEffect(() => {
 const loadFFmpeg = async () => {
 const baseURL = "https://unpkg.com/@ffmpeg/core@0.12.10/dist/umd";
 try {
 const ffmpeg = new FFmpeg();
 // Set up logging for FFmpeg progress
 ffmpeg.on('log', ({ message }) => {
 if (message.includes('frame=')) {
 setVideoGenerationProgress(message);
 }
 });
 await ffmpeg.load({
 coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, "text/javascript"),
 wasmURL: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, "application/wasm"),
 });
 ffmpegRef.current = ffmpeg;
 setFFmpegLoaded(true);
 console.log('FFmpeg loaded successfully!');
 } catch (err) {
 console.error('Failed to load FFmpeg:', err);
 setError('Failed to load video processing tools.');
 }
 };

 loadFFmpeg();
 }, []);

 // --- Internal file selection logic ---
 const handleFileChange = (e: React.ChangeEvent<HTMLInputElement>) => {
 if (e.target.files && e.target.files.length > 0) {
 const newFiles = Array.from(e.target.files).filter(file => file.type.startsWith('image/')); // Only accept images
 setFiles(newFiles);
 setError(null);
 }
 };

 // --- Handle Image conversion ---
 const handleConvertImages = async () => {
 if (!ffmpegLoaded || !ffmpegRef.current) {
 setError('FFmpeg is not loaded yet. Please wait.');
 return;
 }
 if (files.length === 0) {
 setError('Please select images first to generate a video.');
 return;
 }

 setIsConvertingImages(true);
 setError(null);
 setVideoGenerationProgress('');

 try {
 const ffmpeg = ffmpegRef.current;
 const targetExtension = 'jpeg'; // <--- Define your target extension here (e.g., 'png', 'webp')
 const convertedImageNames: string[] = [];

 // Convert all uploaded images to the target format
 for (let i = 0; i < files.length; i++) {
 const file = files[i];
 // Give the original file a unique name in FFmpeg's VFS
 const originalFileName = `original_image_${String(i).padStart(3, '0')}.${file.name.split('.').pop()}`;
 // Define the output filename with the target extension
 const convertedFileName = `converted_image_${String(i).padStart(3, '0')}.${targetExtension}`;
 convertedImageNames.push(convertedFileName);

 // Write the original file data to FFmpeg's virtual file system
 await ffmpeg.writeFile(`${originalFileName}`, await fetchFile(file));
 console.log(`Wrote original ${originalFileName} to FFmpeg FS`);

 setVideoGenerationProgress(`Converting ${file.name} to ${targetExtension.toUpperCase()}...`);

 await ffmpeg.exec(['-i', originalFileName, '-q:v', '5', convertedFileName]); // <------

 console.log(`Converted ${originalFileName} to ${convertedFileName}`);

 // Delete the original file from VFS to free up memory
 await ffmpeg.deleteFile(originalFileName);
 console.log(`Deleted original ${originalFileName} from FFmpeg FS`);
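 // Editor's addition (hedged sketch, not in the original post): the
 // converted file also stays in the wasm virtual FS; reading it back and
 // deleting it would keep VFS memory from accumulating across iterations.
 // const converted = await ffmpeg.readFile(convertedFileName); // Uint8Array
 // const blob = new Blob([converted], { type: `image/${targetExtension}` });
 // await ffmpeg.deleteFile(convertedFileName);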
 }

 setFiles([]);

 setVideoGenerationProgress(`All images converted to ${targetExtension.toUpperCase()}.`);
 } catch (err) {
 console.error('Error converting images:', err);
 setError(`Failed to convert images: ${err instanceof Error ? err.message : String(err)}`);
 } finally {
 setIsConvertingImages(false);
 }
 };

 return (
 <div classname="min-h-screen bg-gray-100 flex items-center justify-center p-4 relative overflow-hidden">
 <div classname="bg-white p-8 rounded-lg shadow-xl w-full max-w-md z-10 relative"> {/* Ensure content is above overlay */}
 <h2 classname="text-2xl font-semibold text-gray-800 mb-6 text-center">Select your images</h2>

 {/* Regular File Input Area (still needed for click-to-select) */}
 {/* Note: the opening tag of this area was lost in extraction; the wrapper
 div and hidden input below are a minimal reconstruction. */}
 <div onClick={() => document.getElementById('file-input')?.click()}>
 <input
 id="file-input"
 type="file"
 multiple
 accept="image/*"
 className="hidden"
 onChange={handleFileChange}
 />
 <svg className="mx-auto h-12 w-12 text-gray-400" fill="none" viewBox="0 0 24 24" stroke="currentColor">
 <path strokeLinecap="round" strokeLinejoin="round" strokeWidth="2" d="M7 16a4 4 0 01-.88-7.903A5 5 0 1115.9 6L16 6a5 5 0 011 9.9M15 13l-3-3m0 0l-3 3m3-3v12"></path>
 </svg>
 <p className="mt-2 text-sm text-gray-600">
 <span className="font-medium text-blue-600">Click to select</span>
 </p>
 <p className="text-xs text-gray-500">Supports multiple formats</p>
 </div>

 {/* Selected Files Display */}
 {files.length > 0 && (
 <div data-testid="selected-files-display" className="selected-files-display mt-4 p-3 bg-blue-50 border border-blue-200 rounded-md text-sm text-blue-800">
 <p className="font-semibold mb-2">Selected Files ({files.length}):</p>
 <ul className="max-h-40 overflow-y-auto">
 {files.map((file) => (
 <li key={file.name} className="flex items-center justify-between py-1">
 <span>{file.name}</span>
 </li>
 ))}
 </ul>
 </div>
 )}

 {/* Error Message */}
 {error && (
 <div classname="mt-4 p-3 bg-red-50 border border-red-200 rounded-md text-sm text-red-800">
 {error}
 </div>
 )}

 {/* Image conversion Progress/Status */}
 {isConvertingImages && (
 <div classname="mt-4 p-3 bg-purple-50 border border-purple-200 rounded-md text-sm text-purple-800 text-center">
 <p classname="font-semibold">Converting images</p>
 <p classname="text-xs mt-1">{videoGenerationProgress}</p>
 </div>
 )}
 {!ffmpegLoaded && (
 <div classname="mt-4 p-3 bg-yellow-50 border border-yellow-200 rounded-md text-sm text-yellow-800 text-center">
 Loading video tools (FFmpeg)... Please wait.
 </div>
 )}

 {/* Convert images button (reconstructed: its attributes were lost in extraction) */}
 <button
 onClick={handleConvertImages}
 disabled={!ffmpegLoaded || isConvertingImages || files.length === 0}
 >
 Convert images
 </button>
 </div>
 </div>
 );
}


The Next.js page in question is shown above.
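
One mitigation worth trying (a sketch under stated assumptions: terminate(), load(), and exec() are real @ffmpeg/ffmpeg 0.12.x APIs, but the retry policy and the loadFFmpeg helper that returns the new instance are hypothetical, not a documented fix) is to treat a crashed wasm instance as disposable and rebuild it before retrying:


async function execWithReload(
 ffmpegRef: React.MutableRefObject<FFmpeg | null>,
 loadFFmpeg: () => Promise<FFmpeg>, // hypothetical: a loader that returns the new instance
 args: string[],
): Promise<void> {
 try {
 await ffmpegRef.current!.exec(args);
 } catch (err) {
 // An out-of-bounds wasm access leaves the instance unusable; tear the
 // worker down, load a fresh core, and retry the command once.
 ffmpegRef.current?.terminate();
 ffmpegRef.current = await loadFFmpeg();
 await ffmpegRef.current.exec(args);
 }
}


This does not explain the nondeterminism, but it turns a hard crash into a retried conversion. It is also worth checking that @ffmpeg/ffmpeg and @ffmpeg/core are pinned to matching releases, since version skew between the two packages is a plausible source of odd runtime failures.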


-
Merged Video Contains Inverted Clips After First Video Ends
3 February, by Nikunj Agrawal — I am working on a Flutter application that merges multiple videos using ffmpeg_kit_flutter. However, after merging, I notice that the second and any subsequent videos appear inverted or rotated in the final output.

Issue Details:


- The first video appears normal.
- The videos can be recorded using both front and back cameras.
- The second (and later) videos are flipped or rotated upside down.
- This happens after merging using ffmpeg_kit_flutter.
Question:
How can I correctly merge multiple videos in Flutter without rotation issues? Is there a way to normalize video orientation before merging with ffmpeg_kit_flutter? (One possible approach is sketched after the code below.)

Any help would be appreciated! 🚀


Code:


import 'dart:io';
import 'dart:math';

import 'package:camera/camera.dart';
import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';
import 'package:flutter/material.dart';
import 'package:path_provider/path_provider.dart';
import 'package:permission_handler/permission_handler.dart';
import 'package:record/record.dart';
import 'package:videotest/video_player.dart';

class MergeVideoRecording extends StatefulWidget {
 const MergeVideoRecording({super.key});

 @override
 State<MergeVideoRecording> createState() => _MergeVideoRecordingState();
}

class _MergeVideoRecordingState extends State<MergeVideoRecording> {
 CameraController? _cameraController;
 final AudioRecorder _audioRecorder = AudioRecorder();

 bool _isRecording = false;
 String? _videoPath;
 String? _audioPath;
 List<CameraDescription> _cameras = [];
 int _currentCameraIndex = 0;
 final List<String> _recordedVideos = [];

 @override
 Widget build(BuildContext context) {
 return Scaffold(
 body: Column(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 _cameraController != null && _cameraController!.value.isInitialized
 ? SizedBox(
 width: MediaQuery.of(context).size.width * 0.4,
 height: MediaQuery.of(context).size.height * 0.3,
 child: Stack(
 children: [
 ClipRRect(
 borderRadius: BorderRadius.circular(16),
 child: SizedBox(
 width: MediaQuery.of(context).size.width * 0.4,
 height: MediaQuery.of(context).size.height * 0.3,
 child: Transform(
 alignment: Alignment.center,
 transform:
 _cameras[_currentCameraIndex].lensDirection ==
 CameraLensDirection.front
 ? Matrix4.rotationY(pi)
 : Matrix4.identity(),
 child: CameraPreview(_cameraController!),
 ),
 ),
 ),
 Align(
 alignment: Alignment.topRight,
 child: InkWell(
 onTap: _switchCamera,
 child: const Padding(
 padding: EdgeInsets.all(8.0),
 child: CircleAvatar(
 radius: 18,
 backgroundColor: Colors.white,
 child: Icon(
 Icons.flip_camera_android,
 color: Colors.black,
 ),
 ),
 ),
 ),
 ),
 ],
 ),
 )
 : const CircularProgressIndicator(),
 const SizedBox(height: 16),
 Row(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 FloatingActionButton(
 heroTag: 'record_button',
 onPressed: _toggleRecording,
 child: Icon(
 _isRecording ? Icons.stop : Icons.video_camera_back,
 ),
 ),
 const SizedBox(
 width: 50,
 ),
 FloatingActionButton(
 heroTag: 'merge_button',
 onPressed: _mergeVideos,
 child: const Icon(
 Icons.merge,
 ),
 ),
 ],
 ),
 if (!_isRecording)
 ListView.builder(
 shrinkWrap: true,
 itemCount: _recordedVideos.length,
 itemBuilder: (context, index) => InkWell(
 onTap: () {
 Navigator.push(
 context,
 MaterialPageRoute(
 builder: (context) => VideoPlayerScreen(
 videoPath: _recordedVideos[index],
 ),
 ),
 );
 },
 child: ListTile(
 title: Text('Video ${index + 1}'),
 subtitle: Text('Path ${_recordedVideos[index]}'),
 trailing: const Icon(Icons.play_arrow),
 ),
 ),
 ),
 ],
 ),
 );
 }

 @override
 void dispose() {
 _cameraController?.dispose();
 _audioRecorder.dispose();
 super.dispose();
 }

 @override
 void initState() {
 super.initState();
 _initializeDevices();
 }

 Future<void> _initializeCameraController(CameraDescription camera) async {
 _cameraController = CameraController(
 camera,
 ResolutionPreset.high,
 enableAudio: true,
 imageFormatGroup: ImageFormatGroup.yuv420, // Add this line
 );

 await _cameraController!.initialize();
 await _cameraController!.setExposureMode(ExposureMode.auto);
 await _cameraController!.setFocusMode(FocusMode.auto);
 setState(() {});
 }

 Future<void> _initializeDevices() async {
 final cameraStatus = await Permission.camera.request();
 final micStatus = await Permission.microphone.request();

 if (!cameraStatus.isGranted || !micStatus.isGranted) {
 _showError('Camera and microphone permissions required');
 return;
 }

 _cameras = await availableCameras();
 if (_cameras.isNotEmpty) {
 final frontCameraIndex = _cameras.indexWhere(
 (camera) => camera.lensDirection == CameraLensDirection.front);
 _currentCameraIndex = frontCameraIndex != -1 ? frontCameraIndex : 0;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 }
 }

 // Merge video
 Future<void> _mergeVideos() async {
 if (_recordedVideos.isEmpty) {
 _showError('No videos to merge');
 return;
 }

 try {
 // Debug logging
 print('Starting merge process');
 print('Number of videos to merge: ${_recordedVideos.length}');
 for (var i = 0; i < _recordedVideos.length; i++) {
 final file = File(_recordedVideos[i]);
 final exists = await file.exists();
 final size = exists ? await file.length() : 0;
 print('Video $i: ${_recordedVideos[i]}');
 print('Exists: $exists, Size: $size bytes');
 }

 final Directory appDir = await getApplicationDocumentsDirectory();
 final String outputPath =
 '${appDir.path}/merged_${DateTime.now().millisecondsSinceEpoch}.mp4';
 final String listFilePath = '${appDir.path}/list.txt';

 print('Output path: $outputPath');
 print('List file path: $listFilePath');

 // Create and verify list file
 final listFile = File(listFilePath);
 final fileContent = _recordedVideos
 .map((path) => "file '${path.replaceAll("'", "'\\''")}'")
 .join('\n');
 await listFile.writeAsString(fileContent);

 print('List file content:');
 print(await listFile.readAsString());

 // Simpler FFmpeg command for testing
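 // Editor's note (assumption, not from the original post): with the concat
 // demuxer and -c copy, packets are copied without re-encoding, so each
 // file's rotation metadata is preserved as-is; clips recorded in different
 // orientations can therefore play back flipped. See the sketch after this
 // listing.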
 final command = '''
 -f concat
 -safe 0
 -i "$listFilePath"
 -c copy
 -y
 "$outputPath"
 '''
 .trim()
 .replaceAll('\n', ' ');

 print('Executing FFmpeg command: $command');

 final session = await FFmpegKit.execute(command);
 final returnCode = await session.getReturnCode();
 final logs = await session.getAllLogsAsString();
 final failStackTrace = await session.getFailStackTrace();

 print('FFmpeg return code: ${returnCode?.getValue() ?? "null"}');
 print('FFmpeg logs: $logs');
 if (failStackTrace != null) {
 print('FFmpeg fail stack trace: $failStackTrace');
 }

 if (ReturnCode.isSuccess(returnCode)) {
 final outputFile = File(outputPath);
 final outputExists = await outputFile.exists();
 final outputSize = outputExists ? await outputFile.length() : 0;

 print('Output file exists: $outputExists');
 print('Output file size: $outputSize bytes');

 if (outputExists && outputSize > 0) {
 setState(() => _recordedVideos.add(outputPath));
 _showSuccess('Videos merged successfully');
 } else {
 _showError('Merged file is empty or not created');
 }
 } else {
 _showError('Failed to merge videos. Check logs for details.');
 }

 // Clean up
 try {
 await listFile.delete();
 print('List file cleaned up successfully');
 } catch (e) {
 print('Failed to delete list file: $e');
 }
 } catch (e, s) {
 print('Error during merge: $e');
 print('Stack trace: $s');
 _showError('Error merging videos: ${e.toString()}');
 }
 }

 void _showError(String message) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(content: Text(message), backgroundColor: Colors.red),
 );
 }

 void _showSuccess(String message) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(content: Text(message), backgroundColor: Colors.green),
 );
 }

 Future<void> _startAudioRecording() async {
 try {
 final Directory tempDir = await getTemporaryDirectory();
 final audioPath = '${tempDir.path}/recording.wav';
 await _audioRecorder.start(const RecordConfig(), path: audioPath);
 setState(() => _isRecording = true);
 } catch (e) {
 _showError('Recording start error: $e');
 }
 }

 Future<void> _startVideoRecording() async {
 try {
 await _cameraController!.startVideoRecording();
 setState(() => _isRecording = true);
 } catch (e) {
 _showError('Recording start error: $e');
 }
 }

 Future<void> _stopAndSaveAudioRecording() async {
 _audioPath = await _audioRecorder.stop();
 if (_audioPath != null) {
 final Directory appDir = await getApplicationDocumentsDirectory();
 final timestamp = DateTime.now().millisecondsSinceEpoch;
 final String audioFileName = 'audio_$timestamp.wav';
 await File(_audioPath!).copy('${appDir.path}/$audioFileName');
 _showSuccess('Saved: $audioFileName');
 }
 }

 Future<void> _stopAndSaveVideoRecording() async {
 try {
 final video = await _cameraController!.stopVideoRecording();
 _videoPath = video.path;

 if (_videoPath != null) {
 final Directory appDir = await getApplicationDocumentsDirectory();
 final timestamp = DateTime.now().millisecondsSinceEpoch;
 final String videoFileName = 'video_$timestamp.mp4';
 final savedVideoPath = '${appDir.path}/$videoFileName';
 await File(_videoPath!).copy(savedVideoPath);

 setState(() {
 _recordedVideos.add(savedVideoPath);
 _isRecording = false;
 });

 _showSuccess('Saved: $videoFileName');
 }
 } catch (e) {
 _showError('Recording stop error: $e');
 }
 }

 Future<void> _switchCamera() async {
 if (_cameras.length <= 1) return;

 if (_isRecording) {
 await _stopAndSaveVideoRecording();
 _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 await _startVideoRecording();
 } else {
 _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 }
 }

 Future<void> _toggleRecording() async {
 if (_cameraController == null) return;

 if (_isRecording) {
 await _stopAndSaveVideoRecording();
 await _stopAndSaveAudioRecording();
 } else {
 _startVideoRecording();
 _startAudioRecording();
 setState(() => _recordedVideos.clear());
 }
 }
}
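A common way to avoid this (a sketch under assumptions, not a verified fix: the output names, target resolution, and encoder settings are placeholders, and libx264 is only available in the GPL flavours of ffmpeg_kit_flutter such as ffmpeg_kit_flutter_full_gpl) is to re-encode each clip before concatenation. Re-encoding lets FFmpeg apply and strip each file's rotation metadata, so the later -c copy concat no longer mixes differently rotated streams:


// Hedged sketch: normalize every clip, then feed the normalized paths into
// the existing list-file + concat merge in place of _recordedVideos.
Future<List<String>> _normalizeClips(List<String> inputs, Directory dir) async {
 final normalized = <String>[];
 for (var i = 0; i < inputs.length; i++) {
 final out = '${dir.path}/norm_$i.mp4';
 // Re-encoding bakes each clip's rotation metadata into the pixels, and
 // scale/pad/setsar force identical stream parameters across clips so a
 // later stream-copy concat is safe.
 final session = await FFmpegKit.execute(
 '-i "${inputs[i]}" '
 '-vf "scale=1280:720:force_original_aspect_ratio=decrease,'
 'pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1" '
 '-r 30 -c:v libx264 -preset veryfast -c:a aac -y "$out"');
 if (!ReturnCode.isSuccess(await session.getReturnCode())) {
 throw Exception('Failed to normalize ${inputs[i]}');
 }
 normalized.add(out);
 }
 return normalized;
}


_mergeVideos would then write the returned paths into list.txt instead of the raw recordings.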


-
FFmpeg RTSP drop rate increases when frame rate is reduced
13 April 2024, by Avishka Perera — I need to read an RTSP stream, process the images individually in Python, and then write the images back to an RTSP stream. As the RTSP server, I am using Mediamtx [1]. For streaming, I am using FFmpeg [2].


I have the following code, which works perfectly fine. For simplicity, I am streaming three generated images.


import time
import numpy as np
import subprocess

width, height = 640, 480
fps = 25
rtsp_server_address = f"rtsp://localhost:8554/mystream"

ffmpeg_cmd = [
 "ffmpeg",
 "-re",
 "-f",
 "rawvideo",
 "-pix_fmt",
 "rgb24",
 "-s",
 f"{width}x{height}",
 "-i",
 "-",
 "-r",
 str(fps),
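 # NB (editor's note): placed after "-i", this -r is an output option;
 # the rawvideo input on stdin still defaults to 25 fps.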
 "-avoid_negative_ts",
 "make_zero",
 "-vcodec",
 "libx264",
 "-threads",
 "4",
 "-f",
 "rtsp",
 rtsp_server_address,
]
colors = np.array(
 [
 [255, 0, 0],
 [0, 255, 0],
 [0, 0, 255],
 ]
).reshape(3, 1, 1, 3)
# Frames are laid out rows x cols = height x width, matching the
# "-s {width}x{height}" rawvideo geometry.
images = (np.ones((3, height, width, 3)) * colors).astype(np.uint8)

if __name__ == "__main__":

 process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)
 start = time.time()
 exported = 0
 while True:
 exported += 1
 next_time = start + exported / fps
 now = time.time()
 if next_time > now:
 sleep_dur = next_time - now
 time.sleep(sleep_dur)

 image = images[exported % 3]
 image_bytes = image.tobytes()

 process.stdin.write(image_bytes)
 process.stdin.flush()

 process.stdin.close()
 process.wait()



The issue is that I need to run this at 10 fps, because the processing step is heavy and can only manage 10 fps. As I reduce the frame rate from 25 to 10, the drop rate increases from 0% to 100%, and after a few iterations I get a BrokenPipeError: [Errno 32] Broken pipe. Refer to the appendix for the complete log.
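
A plausible cause (my reading of the log, not a confirmed diagnosis): -r appears after -i, so it only sets the output rate, while the rawvideo demuxer keeps timestamping stdin at its default 25 fps; at fps = 10 the muxer then has to drop nearly every frame, and the pipe eventually breaks. Declaring the input rate on the demuxer keeps both clocks in step. A sketch of the corrected command list:


# Sketch: declare the input frame rate on the rawvideo demuxer itself
# (options placed before "-i" apply to the input). With input and output
# rates matching, nothing needs to be dropped or duplicated.
ffmpeg_cmd = [
 "ffmpeg",
 "-re",
 "-f", "rawvideo",
 "-pix_fmt", "rgb24",
 "-s", f"{width}x{height}",
 "-framerate", str(fps), # input rate: matches the Python write loop
 "-i", "-",
 "-avoid_negative_ts", "make_zero",
 "-vcodec", "libx264",
 "-threads", "4",
 "-f", "rtsp",
 rtsp_server_address,
]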

As an alternative, I could use OpenCV compiled from source with GStreamer [3], but I prefer FFmpeg to keep shipping simple, since compiling OpenCV from source can be tedious and system-dependent.


References


[1] Mediamtx (formerly rtsp-simple-server) : https://github.com/bluenviron/mediamtx


[2] FFmpeg : https://github.com/FFmpeg/FFmpeg


[3] Compile OpenCV with GStreamer : https://github.com/bluenviron/mediamtx?tab=readme-ov-file#opencv


Appendix


Creating the source stream


To instantiate the unprocessed stream, I use the following command, which streams the content of my webcam as an RTSP stream.


ffmpeg -video_size 1280x720 -i /dev/video0 -avoid_negative_ts make_zero -vcodec libx264 -r 10 -f rtsp rtsp://localhost:8554/webcam



Error log


ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 12.3.0 (conda-forge gcc 12.3.0-5)
 configuration: --prefix=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-cc --cxx=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-c++ --nm=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-nm --ar=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-ar --disable-doc --disable-openssl --enable-demuxer=dash --enable-hardcoded-tables --enable-libfreetype --enable-libharfbuzz --enable-libfontconfig --enable-libopenh264 --enable-libdav1d --enable-gnutls --enable-libmp3lame --enable-libvpx --enable-libass --enable-pthreads --enable-vaapi --enable-libopenvino --enable-gpl --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-pic --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libopus --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/pkg-config
 libavutil 58. 29.100 / 58. 29.100
 libavcodec 60. 31.102 / 60. 31.102
 libavformat 60. 16.100 / 60. 16.100
 libavdevice 60. 3.100 / 60. 3.100
 libavfilter 9. 12.100 / 9. 12.100
 libswscale 7. 5.100 / 7. 5.100
 libswresample 4. 12.100 / 4. 12.100
 libpostproc 57. 3.100 / 57. 3.100
Input #0, rawvideo, from 'fd:':
 Duration: N/A, start: 0.000000, bitrate: 184320 kb/s
 Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 640x480, 184320 kb/s, 25 tbr, 25 tbn
Stream mapping:
 Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
[libx264 @ 0x5e2ef8b01340] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x5e2ef8b01340] profile High 4:4:4 Predictive, level 2.2, 4:4:4, 8-bit
[libx264 @ 0x5e2ef8b01340] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=10 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, rtsp, to 'rtsp://localhost:8554/mystream':
 Metadata:
 encoder : Lavf60.16.100
 Stream #0:0: Video: h264, yuv444p(tv, progressive), 640x480, q=2-31, 10 fps, 90k tbn
 Metadata:
 encoder : Lavc60.31.102 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[vost#0:0/libx264 @ 0x5e2ef8b01080] Error submitting a packet to the muxer: Broken pipe 
[out#0/rtsp @ 0x5e2ef8afd780] Error muxing a packet
[out#0/rtsp @ 0x5e2ef8afd780] video:1kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
frame= 1 fps=0.1 q=-1.0 Lsize=N/A time=00:00:04.70 bitrate=N/A dup=0 drop=70 speed=0.389x 
[libx264 @ 0x5e2ef8b01340] frame I:16 Avg QP: 6.00 size: 147
[libx264 @ 0x5e2ef8b01340] frame P:17 Avg QP: 9.94 size: 101
[libx264 @ 0x5e2ef8b01340] frame B:17 Avg QP: 9.94 size: 64
[libx264 @ 0x5e2ef8b01340] consecutive B-frames: 50.0% 0.0% 42.0% 8.0%
[libx264 @ 0x5e2ef8b01340] mb I I16..4: 81.3% 18.7% 0.0%
[libx264 @ 0x5e2ef8b01340] mb P I16..4: 52.9% 0.0% 0.0% P16..4: 0.0% 0.0% 0.0% 0.0% 0.0% skip:47.1%
[libx264 @ 0x5e2ef8b01340] mb B I16..4: 0.0% 5.9% 0.0% B16..8: 0.1% 0.0% 0.0% direct: 0.0% skip:94.0% L0:56.2% L1:43.8% BI: 0.0%
[libx264 @ 0x5e2ef8b01340] 8x8 transform intra:15.4% inter:100.0%
[libx264 @ 0x5e2ef8b01340] coded y,u,v intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
[libx264 @ 0x5e2ef8b01340] i16 v,h,dc,p: 97% 0% 3% 0%
[libx264 @ 0x5e2ef8b01340] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 0% 0% 100% 0% 0% 0% 0% 0% 0%
[libx264 @ 0x5e2ef8b01340] Weighted P-Frames: Y:52.9% UV:52.9%
[libx264 @ 0x5e2ef8b01340] ref P L0: 88.9% 0.0% 0.0% 11.1%
[libx264 @ 0x5e2ef8b01340] kb/s:8.27
Conversion failed!
Traceback (most recent call last):
 File "/home/avishka/projects/read-process-stream/minimal-ffmpeg-error.py", line 58, in <module>
 process.stdin.write(image_bytes)
BrokenPipeError: [Errno 32] Broken pipe