
Other articles (74)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page -
Managing creation and editing rights for objects
8 February 2011. By default, many features are restricted to administrators but can be configured independently to change the minimum status required to use them, notably: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;
-
Uploading media and themes via FTP
31 May 2013. MediaSPIP also handles media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
From the start, you will find the following directories in your FTP space: config/: the site's configuration directory; IMG/: media already processed and online on the site; local/: the site's cache directory; themes/: custom themes and stylesheets; tmp/: working directory (...)
On other sites (9669)
-
Merged Video Contains Inverted Clips After First Video Ends
3 February, by Nikunj Agrawal
I am working on a Flutter application that merges multiple videos using ffmpeg_kit_flutter. However, after merging, I notice that the second video (and any subsequent ones) appears inverted or rotated in the final output.

Issue details:

- The first video appears normal.
- The videos can be recorded using both the front and back cameras.
- The second (and later) videos are flipped or rotated upside down.
- This happens after merging with ffmpeg_kit_flutter.
Question:
How can I correctly merge multiple videos in Flutter without rotation issues? Is there a way to normalize video orientation before merging with ffmpeg_kit_flutter? (See the sketch after the code listing below.)

Any help would be appreciated! 🚀


Code:


import 'dart:io';
import 'dart:math';

import 'package:camera/camera.dart';
import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';
import 'package:flutter/material.dart';
import 'package:path_provider/path_provider.dart';
import 'package:permission_handler/permission_handler.dart';
import 'package:record/record.dart';
import 'package:videotest/video_player.dart';

class MergeVideoRecording extends StatefulWidget {
 const MergeVideoRecording({super.key});

 @override
 State<MergeVideoRecording> createState() => _MergeVideoRecordingState();
}

class _MergeVideoRecordingState extends State<MergeVideoRecording> {
 CameraController? _cameraController;
 final AudioRecorder _audioRecorder = AudioRecorder();

 bool _isRecording = false;
 String? _videoPath;
 String? _audioPath;
 List<CameraDescription> _cameras = [];
 int _currentCameraIndex = 0;
 final List<String> _recordedVideos = [];

 @override
 Widget build(BuildContext context) {
 return Scaffold(
 body: Column(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 _cameraController != null && _cameraController!.value.isInitialized
 ? SizedBox(
 width: MediaQuery.of(context).size.width * 0.4,
 height: MediaQuery.of(context).size.height * 0.3,
 child: Stack(
 children: [
 ClipRRect(
 borderRadius: BorderRadius.circular(16),
 child: SizedBox(
 width: MediaQuery.of(context).size.width * 0.4,
 height: MediaQuery.of(context).size.height * 0.3,
 child: Transform(
 alignment: Alignment.center,
 transform:
 _cameras[_currentCameraIndex].lensDirection ==
 CameraLensDirection.front
 ? Matrix4.rotationY(pi)
 : Matrix4.identity(),
 child: CameraPreview(_cameraController!),
 ),
 ),
 ),
 Align(
 alignment: Alignment.topRight,
 child: InkWell(
 onTap: _switchCamera,
 child: const Padding(
 padding: EdgeInsets.all(8.0),
 child: CircleAvatar(
 radius: 18,
 backgroundColor: Colors.white,
 child: Icon(
 Icons.flip_camera_android,
 color: Colors.black,
 ),
 ),
 ),
 ),
 ),
 ],
 ),
 )
 : const CircularProgressIndicator(),
 const SizedBox(height: 16),
 Row(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 FloatingActionButton(
 heroTag: 'record_button',
 onPressed: _toggleRecording,
 child: Icon(
 _isRecording ? Icons.stop : Icons.video_camera_back,
 ),
 ),
 const SizedBox(
 width: 50,
 ),
 FloatingActionButton(
 heroTag: 'merge_button',
 onPressed: _mergeVideos,
 child: const Icon(
 Icons.merge,
 ),
 ),
 ],
 ),
 if (!_isRecording)
 ListView.builder(
 shrinkWrap: true,
 itemCount: _recordedVideos.length,
 itemBuilder: (context, index) => InkWell(
 onTap: () {
 Navigator.push(
 context,
 MaterialPageRoute(
 builder: (context) => VideoPlayerScreen(
 videoPath: _recordedVideos[index],
 ),
 ),
 );
 },
 child: ListTile(
 title: Text('Video ${index + 1}'),
 subtitle: Text('Path ${_recordedVideos[index]}'),
 trailing: const Icon(Icons.play_arrow),
 ),
 ),
 ),
 ],
 ),
 );
 }

 @override
 void dispose() {
 _cameraController?.dispose();
 _audioRecorder.dispose();
 super.dispose();
 }

 @override
 void initState() {
 super.initState();
 _initializeDevices();
 }

 Future<void> _initializeCameraController(CameraDescription camera) async {
 _cameraController = CameraController(
 camera,
 ResolutionPreset.high,
 enableAudio: true,
 imageFormatGroup: ImageFormatGroup.yuv420, // request YUV420 frames from the camera plugin
 );

 await _cameraController!.initialize();
 await _cameraController!.setExposureMode(ExposureMode.auto);
 await _cameraController!.setFocusMode(FocusMode.auto);
 setState(() {});
 }

 Future<void> _initializeDevices() async {
 final cameraStatus = await Permission.camera.request();
 final micStatus = await Permission.microphone.request();

 if (!cameraStatus.isGranted || !micStatus.isGranted) {
 _showError('Camera and microphone permissions required');
 return;
 }

 _cameras = await availableCameras();
 if (_cameras.isNotEmpty) {
 final frontCameraIndex = _cameras.indexWhere(
 (camera) => camera.lensDirection == CameraLensDirection.front);
 _currentCameraIndex = frontCameraIndex != -1 ? frontCameraIndex : 0;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 }
 }

 // Merge video
 Future<void> _mergeVideos() async {
 if (_recordedVideos.isEmpty) {
 _showError('No videos to merge');
 return;
 }

 try {
 // Debug logging
 print('Starting merge process');
 print('Number of videos to merge: ${_recordedVideos.length}');
 for (var i = 0; i < _recordedVideos.length; i++) {
 final file = File(_recordedVideos[i]);
 final exists = await file.exists();
 final size = exists ? await file.length() : 0;
 print('Video $i: ${_recordedVideos[i]}');
 print('Exists: $exists, Size: $size bytes');
 }

 final Directory appDir = await getApplicationDocumentsDirectory();
 final String outputPath =
 '${appDir.path}/merged_${DateTime.now().millisecondsSinceEpoch}.mp4';
 final String listFilePath = '${appDir.path}/list.txt';

 print('Output path: $outputPath');
 print('List file path: $listFilePath');

 // Create and verify list file
 final listFile = File(listFilePath);
 final fileContent = _recordedVideos
 .map((path) => "file '${path.replaceAll("'", "'\\''")}'")
 .join('\n');
 await listFile.writeAsString(fileContent);

 print('List file content:');
 print(await listFile.readAsString());

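 // Note: with '-c copy' the concat demuxer requires all inputs to share the
 // same parameters, and each clip's rotation metadata is copied through
 // unchanged; clips recorded with different camera orientations will keep
 // their individual rotation flags in the merged file.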
 // Simpler FFmpeg command for testing
 final command = '''
 -f concat
 -safe 0
 -i "$listFilePath"
 -c copy
 -y
 "$outputPath"
 '''
 .trim()
 .replaceAll('\n', ' ');

 print('Executing FFmpeg command: $command');

 final session = await FFmpegKit.execute(command);
 final returnCode = await session.getReturnCode();
 final logs = await session.getAllLogsAsString();
 final failStackTrace = await session.getFailStackTrace();

 print('FFmpeg return code: ${returnCode?.getValue() ?? "null"}');
 print('FFmpeg logs: $logs');
 if (failStackTrace != null) {
 print('FFmpeg fail stack trace: $failStackTrace');
 }

 if (ReturnCode.isSuccess(returnCode)) {
 final outputFile = File(outputPath);
 final outputExists = await outputFile.exists();
 final outputSize = outputExists ? await outputFile.length() : 0;

 print('Output file exists: $outputExists');
 print('Output file size: $outputSize bytes');

 if (outputExists && outputSize > 0) {
 setState(() => _recordedVideos.add(outputPath));
 _showSuccess('Videos merged successfully');
 } else {
 _showError('Merged file is empty or not created');
 }
 } else {
 _showError('Failed to merge videos. Check logs for details.');
 }

 // Clean up
 try {
 await listFile.delete();
 print('List file cleaned up successfully');
 } catch (e) {
 print('Failed to delete list file: $e');
 }
 } catch (e, s) {
 print('Error during merge: $e');
 print('Stack trace: $s');
 _showError('Error merging videos: ${e.toString()}');
 }
 }

 void _showError(String message) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(content: Text(message), backgroundColor: Colors.red),
 );
 }

 void _showSuccess(String message) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(content: Text(message), backgroundColor: Colors.green),
 );
 }

 Future<void> _startAudioRecording() async {
 try {
 final Directory tempDir = await getTemporaryDirectory();
 final audioPath = '${tempDir.path}/recording.wav';
 await _audioRecorder.start(const RecordConfig(), path: audioPath);
 setState(() => _isRecording = true);
 } catch (e) {
 _showError('Recording start error: $e');
 }
 }

 Future<void> _startVideoRecording() async {
 try {
 await _cameraController!.startVideoRecording();
 setState(() => _isRecording = true);
 } catch (e) {
 _showError('Recording start error: $e');
 }
 }

 Future<void> _stopAndSaveAudioRecording() async {
 _audioPath = await _audioRecorder.stop();
 if (_audioPath != null) {
 final Directory appDir = await getApplicationDocumentsDirectory();
 final timestamp = DateTime.now().millisecondsSinceEpoch;
 final String audioFileName = 'audio_$timestamp.wav';
 await File(_audioPath!).copy('${appDir.path}/$audioFileName');
 _showSuccess('Saved: $audioFileName');
 }
 }

 Future<void> _stopAndSaveVideoRecording() async {
 try {
 final video = await _cameraController!.stopVideoRecording();
 _videoPath = video.path;

 if (_videoPath != null) {
 final Directory appDir = await getApplicationDocumentsDirectory();
 final timestamp = DateTime.now().millisecondsSinceEpoch;
 final String videoFileName = 'video_$timestamp.mp4';
 final savedVideoPath = '${appDir.path}/$videoFileName';
 await File(_videoPath!).copy(savedVideoPath);

 setState(() {
 _recordedVideos.add(savedVideoPath);
 _isRecording = false;
 });

 _showSuccess('Saved: $videoFileName');
 }
 } catch (e) {
 _showError('Recording stop error: $e');
 }
 }

 Future<void> _switchCamera() async {
 if (_cameras.length <= 1) return;

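 // Switching cameras while recording saves the current clip and starts a
 // new one, so front and back segments typically carry different rotation
 // metadata; this is what later shows up as flipped clips after the merge.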
 if (_isRecording) {
 await _stopAndSaveVideoRecording();
 _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 await _startVideoRecording();
 } else {
 _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 }
 }

 Future<void> _toggleRecording() async {
 if (_cameraController == null) return;

 if (_isRecording) {
 await _stopAndSaveVideoRecording();
 await _stopAndSaveAudioRecording();
 } else {
 await _startVideoRecording();
 await _startAudioRecording();
 setState(() => _recordedVideos.clear());
 }
 }
}
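The behaviour described above is consistent with the stream-copy concat: the concat demuxer expects every input to share identical parameters, and '-c copy' carries each clip's rotation metadata through unchanged, so clips recorded with different camera orientations keep their individual rotation flags and players render the later clips rotated. One way to normalize orientation is to re-encode each clip to a fixed size before concatenation, which makes FFmpeg apply (and drop) the rotation metadata during decoding. Below is a minimal sketch under these assumptions: _normalizeVideo is a hypothetical helper, not part of the code above, and libx264/aac require an ffmpeg_kit_flutter build that bundles those encoders (e.g. the full-gpl variant); otherwise substitute encoders available in your build.

/// Hypothetical helper: re-encode inputPath so that rotation metadata is
/// baked into the pixels and every clip comes out with identical parameters.
Future<String?> _normalizeVideo(String inputPath, String outputPath) async {
  // Re-encoding (unlike '-c copy') makes FFmpeg honor the 'rotate' side data
  // while decoding, so the output needs no rotation flag at all.
  final command = '-i "$inputPath" '
      '-vf "scale=720:1280:force_original_aspect_ratio=decrease,'
      'pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1" '
      '-r 30 -c:v libx264 -preset veryfast -c:a aac -y "$outputPath"';
  final session = await FFmpegKit.execute(command);
  return ReturnCode.isSuccess(await session.getReturnCode()) ? outputPath : null;
}

Running every entry of _recordedVideos through such a helper before writing list.txt should let the existing '-c copy' concat produce a consistently oriented result.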


-
record mediasoup RTP stream using FFmpeg for Firefox
30 July 2024, by Hadi Aghandeh
I am trying to record a WebRTC stream using mediasoup. I can record successfully on Chrome and Safari 13/14/15. However, on Firefox it does not work.


The client-side code is a Vue.js component which gets the RTP capabilities using socket.io and creates producers after the server creates the transports. This works well on Chrome and Safari.


const { connect , createLocalTracks } = require('twilio-video');
const SocketClient = require("socket.io-client");
const SocketPromise = require("socket.io-promise").default;
const MediasoupClient = require("mediasoup-client");

export default {
 data() {
 return {
 errors: [],
 isReady: false,
 isRecording: false,
 loading: false,
 sapio: {
 token: null,
 connectionId: 0
 },
 server: {
 host: 'https://rtc.test',
 ws: '/server',
 socket: null,
 },
 peer: {},
 }
 },
 mounted() {
 this.init();
 },
 methods: {
 async init() {
 await this.startCamera();

 if (this.takeId) {
 await this.recordBySapioServer();
 }
 },
 startCamera() {
 return new Promise( (resolve, reject) => {
 if (window.videoMediaStreamObject) {
 this.setVideoElementStream(window.videoMediaStreamObject);
 resolve();
 } else {
 // Get user media as required
 try {
 this.localeStream = navigator.mediaDevices.getUserMedia({
 audio: true,
 video: true,
 }).then((stream) => {
 this.setVideoElementStream(stream);
 resolve();
 })
 } catch (err) {
 console.error(err);
 reject();
 }
 }
 })
 },
 setVideoElementStream(stream) {
 this.localStream = stream;
 this.$refs.video.srcObject = stream;
 this.$refs.video.muted = true;
 this.$refs.video.play().then((video) => {
 this.isStreaming = true;
 this.height = this.$refs.video.videoHeight;
 this.width = this.$refs.video.videoWidth;
 });
 },
 // first thing we need is connecting to websocket
 connectToSocket() {
 const serverUrl = this.server.host;
 console.log("Connect with sapio rtc server:", serverUrl);

 const socket = SocketClient(serverUrl, {
 path: this.server.ws,
 transports: ["websocket"],
 });
 this.socket = socket;

 socket.on("connect", () => {
 console.log("WebSocket connected");
 // we ask for rtp-capabilities from server to send to us
 socket.emit('send-rtp-capabilities');
 });

 socket.on("error", (err) => {
 this.loading = true;
 console.error("WebSocket error:", err);
 });

 socket.on("router-rtp-capabilities", async (msg) => {
 const { routerRtpCapabilities, sessionId, externalId } = msg;
 console.log('[rtpCapabilities:%o]', routerRtpCapabilities);
 this.routerRtpCapabilities = routerRtpCapabilities;

 try {
 const device = new MediasoupClient.Device();
 // Load the mediasoup device with the router rtp capabilities gotten from the server
 await device.load({ routerRtpCapabilities });

 this.peer.sessionId = sessionId;
 this.peer.externalId = externalId;
 this.peer.device = device;

 this.createTransport();
 } catch (error) {
 console.error('failed to init device [error:%o]', error);
 socket.disconnect();
 }
 });

 socket.on("create-transport", async (msg) => {
 console.log('handleCreateTransportRequest() [data:%o]', msg);

 try {
 // Create the local mediasoup send transport
 this.peer.sendTransport = await this.peer.device.createSendTransport(msg);
 console.log('send transport created [id:%s]', this.peer.sendTransport.id);

 // Set the transport listeners and get the users media stream
 this.handleSendTransportListeners();
 this.setTracks();
 this.loading = false;
 } catch (error) {
 console.error('failed to create transport [error:%o]', error);
 socket.disconnect();
 }
 });

 socket.on("connect-transport", async (msg) => {
 console.log('handleTransportConnectRequest()');
 try {
 const action = this.connectTransport;

 if (!action) {
 throw new Error('transport-connect action was not found');
 }

 await action(msg);
 } catch (error) {
 console.error('failed [error:%o]', error);
 }
 });

 socket.on("produce", async (msg) => {
 console.log('handleProduceRequest()');
 try {
 if (!this.produce) {
 throw new Error('produce action was not found');
 }
 await this.produce(msg);
 } catch (error) {
 console.error('failed [error:%o]', error);
 }
 });

 socket.on("recording", async (msg) => {
 this.isRecording = true;
 });

 socket.on("recording-error", async (msg) => {
 this.isRecording = false;
 console.error(msg);
 });

 socket.on("recording-closed", async (msg) => {
 this.isRecording = false;
 console.warn(msg)
 });

 },
 createTransport() {
 console.log('createTransport()');

 if (!this.peer || !this.peer.device.loaded) {
 throw new Error('Peer or device is not initialized');
 }

 // First we must create the mediasoup transport on the server side
 this.socket.emit('create-transport',{
 sessionId: this.peer.sessionId
 });
 },
 handleSendTransportListeners() {
 this.peer.sendTransport.on('connect', this.handleTransportConnectEvent);
 this.peer.sendTransport.on('produce', this.handleTransportProduceEvent);
 this.peer.sendTransport.on('connectionstatechange', connectionState => {
 console.log('send transport connection state change [state:%s]', connectionState);
 });
 },
 handleTransportConnectEvent({ dtlsParameters }, callback, errback) {
 console.log('handleTransportConnectEvent()');
 try {
 this.connectTransport = (msg) => {
 console.log('connect-transport action');
 callback();
 this.connectTransport = null;
 };

 this.socket.emit('connect-transport',{
 sessionId: this.peer.sessionId,
 transportId: this.peer.sendTransport.id,
 dtlsParameters
 });

 } catch (error) {
 console.error('handleTransportConnectEvent() failed [error:%o]', error);
 errback(error);
 }
 },
 handleTransportProduceEvent({ kind, rtpParameters }, callback, errback) {
 console.log('handleTransportProduceEvent()');
 try {
 this.produce = jsonMessage => {
 console.log('handleTransportProduceEvent callback [data:%o]', jsonMessage);
 callback({ id: jsonMessage.id });
 this.produce = null;
 };

 this.socket.emit('produce', {
 sessionId: this.peer.sessionId,
 transportId: this.peer.sendTransport.id,
 kind,
 rtpParameters
 });
 } catch (error) {
 console.error('handleTransportProduceEvent() failed [error:%o]', error);
 errback(error);
 }
 },
 async recordBySapioServer() {
 this.loading = true;
 this.connectToSocket();
 },
 async setTracks() {
 // Start mediasoup-client's WebRTC producers
 const audioTrack = this.localStream.getAudioTracks()[0];
 this.peer.audioProducer = await this.peer.sendTransport.produce({
 track: audioTrack,
 codecOptions :
 {
 opusStereo : 1,
 opusDtx : 1
 }
 });


 let encodings;
 let codec;
 const codecOptions = {videoGoogleStartBitrate : 1000};

 codec = this.peer.device.rtpCapabilities.codecs.find((c) => c.kind.toLowerCase() === 'video');
 if (codec.mimeType.toLowerCase() === 'video/vp9') {
 encodings = [{ scalabilityMode: 'S3T3_KEY' }];
 } else {
 encodings = [
 { scaleResolutionDownBy: 4, maxBitrate: 500000 },
 { scaleResolutionDownBy: 2, maxBitrate: 1000000 },
 { scaleResolutionDownBy: 1, maxBitrate: 5000000 }
 ];
 }
 const videoTrack = this.localStream.getVideoTracks()[0];
 this.peer.videoProducer =await this.peer.sendTransport.produce({
 track: videoTrack,
 encodings,
 codecOptions,
 codec
 });

 },
 startRecording() {
 this.Q.answer.recordingId = this.peer.externalId;
 this.socket.emit("start-record", {
 sessionId: this.peer.sessionId
 });
 },
 stopRecording() {
 this.socket.emit("stop-record" , {
 sessionId: this.peer.sessionId
 });
 },
 },

}






console.log output of my FFmpeg process:


// sdp string
[sdpString:v=0
 o=- 0 0 IN IP4 127.0.0.1
 s=FFmpeg
 c=IN IP4 127.0.0.1
 t=0 0
 m=video 25549 RTP/AVP 101 
 a=rtpmap:101 VP8/90000
 a=sendonly
 m=audio 26934 RTP/AVP 100 
 a=rtpmap:100 opus/48000/2
 a=sendonly
 ]

// ffmpeg args
commandArgs:[
 '-loglevel',
 'debug',
 '-protocol_whitelist',
 'pipe,udp,rtp',
 '-fflags',
 '+genpts',
 '-f',
 'sdp',
 '-i',
 'pipe:0',
 '-map',
 '0:v:0',
 '-c:v',
 'copy',
 '-map',
 '0:a:0',
 '-strict',
 '-2',
 '-c:a',
 'copy',
 '-f',
 'webm',
 '-flags',
 '+global_header',
 '-y',
 'storage/recordings/26e63cb3-4f81-499e-941a-c0bb7f7f52ce.webm',
 [length]: 26
]
// ffmpeg log
ffmpeg::process::data [data:'ffmpeg version n4.4']
ffmpeg::process::data [data:' Copyright (c) 2000-2021 the FFmpeg developers']
ffmpeg::process::data [data:'\n']
ffmpeg::process::data [data:' built with gcc 11.1.0 (GCC)\n']
ffmpeg::process::data [data:' configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-amf --enable-avisynth --enable-cuda-llvm --enable-lto --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmfx --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librav1e --enable-librsvg --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-libzimg --enable-nvdec --enable-nvenc --enable-shared --enable-version3\n']
ffmpeg::process::data [data:' libavutil 56. 70.100 / 56. 70.100\n' +
 ' libavcodec 58.134.100 / 58.134.100\n' +
 ' libavformat 58. 76.100 / 58. 76.100\n' +
 ' libavdevice 58. 13.100 / 58. 13.100\n' +
 ' libavfilter 7.110.100 / 7.110.100\n' +
 ' libswscale 5. 9.100 / 5. 9.100\n' +
 ' libswresample 3. 9.100 / 3. 9.100\n' +
 ' libpostproc 55. 9.100 / 55. 9.100\n' +
 'Splitting the commandline.\n' +
 "Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.\n" +
 "Reading option '-protocol_whitelist' ..."]
ffmpeg::process::data [data:" matched as AVOption 'protocol_whitelist' with argument 'pipe,udp,rtp'.\n" +
 "Reading option '-fflags' ..."]
ffmpeg::process::data [data:" matched as AVOption 'fflags' with argument '+genpts'.\n" +
 "Reading option '-f' ... matched as option 'f' (force format) with argument 'sdp'.\n" +
 "Reading option '-i' ... matched as input url with argument 'pipe:0'.\n" +
 "Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '0:v:0'.\n" +
 "Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'copy'.\n" +
 "Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '0:a:0'.\n" +
 "Reading option '-strict' ...Routing option strict to both codec and muxer layer\n" +
 " matched as AVOption 'strict' with argument '-2'.\n" +
 "Reading option '-c:a' ... matched as option 'c' (codec name) with argument 'copy'.\n" +
 "Reading option '-f' ... matched as option 'f' (force format) with argument 'webm'.\n" +
 "Reading option '-flags' ... matched as AVOption 'flags' with argument '+global_header'.\n" +
 "Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.\n" +
 "Reading option 'storage/recordings/26e63cb3-4f81-499e-941a-c0bb7f7f52ce.webm' ... matched as output url.\n" +
 'Finished splitting the commandline.\n' +
 'Parsing a group of options: global .\n' +
 'Applying option loglevel (set logging level) with argument debug.\n' +
 'Applying option y (overwrite output files) with argument 1.\n' +
 'Successfully parsed a group of options.\n' +
 'Parsing a group of options: input url pipe:0.\n' +
 'Applying option f (force format) with argument sdp.\n' +
 'Successfully parsed a group of options.\n' +
 'Opening an input file: pipe:0.\n' +
 "[sdp @ 0x55604dc58400] Opening 'pipe:0' for reading\n" +
 '[sdp @ 0x55604dc58400] video codec set to: vp8\n' +
 '[sdp @ 0x55604dc58400] audio codec set to: opus\n' +
 '[sdp @ 0x55604dc58400] audio samplerate set to: 48000\n' +
 '[sdp @ 0x55604dc58400] audio channels set to: 2\n' +
 '[udp @ 0x55604dc6c500] end receive buffer size reported is 425984\n' +
 '[udp @ 0x55604dc6c7c0] end receive buffer size reported is 425984\n' +
 '[sdp @ 0x55604dc58400] setting jitter buffer size to 500\n' +
 '[udp @ 0x55604dc6d900] end receive buffer size reported is 425984\n' +
 '[udp @ 0x55604dc6d2c0] end receive buffer size reported is 425984\n' +
 '[sdp @ 0x55604dc58400] setting jitter buffer size to 500\n']
ffmpeg::process::data [data:'[sdp @ 0x55604dc58400] Before avformat_find_stream_info() pos: 210 bytes read:210 seeks:0 nb_streams:2\n']
 **mediasoup:Consumer resume() +1s**
 **mediasoup:Channel request() [method:consumer.resume, id:12] +1s**
 **mediasoup:Channel request succeeded [method:consumer.resume, id:12] +0ms**
 **mediasoup:Consumer resume() +1ms**
 **mediasoup:Channel request() [method:consumer.resume, id:13] +0ms**
 **mediasoup:Channel request succeeded [method:consumer.resume, id:13] +0ms**
ffmpeg::process::data [data:'[sdp @ 0x55604dc58400] Could not find codec parameters for stream 0 (Video: vp8, 1 reference frame, yuv420p): unspecified size\n' +
 "Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options\n"]
ffmpeg::process::data [data:'[sdp @ 0x55604dc58400] After avformat_find_stream_info() pos: 210 bytes read:210 seeks:0 frames:0\n' +
 "Input #0, sdp, from 'pipe:0':\n" +
 ' Metadata:\n' +
 ' title : FFmpeg\n' +
 ' Duration: N/A, bitrate: N/A\n' +
 ' Stream #0:0, 0, 1/90000: Video: vp8, 1 reference frame, yuv420p, 90k tbr, 90k tbn, 90k tbc\n' +
 ' Stream #0:1, 0, 1/48000: Audio: opus, 48000 Hz, stereo, fltp\n' +
 'Successfully opened the file.\n' +
 'Parsing a group of options: output url storage/recordings/26e63cb3-4f81-499e-941a-c0bb7f7f52ce.webm.\n' +
 'Applying option map (set input stream mapping) with argument 0:v:0.\n' +
 'Applying option c:v (codec name) with argument copy.\n' +
 'Applying option map (set input stream mapping) with argument 0:a:0.\n' +
 'Applying option c:a (codec name) with argument copy.\n' +
 'Applying option f (force format) with argument webm.\n' +
 'Successfully parsed a group of options.\n' +
 'Opening an output file: storage/recordings/26e63cb3-4f81-499e-941a-c0bb7f7f52ce.webm.\n' +
 "[file @ 0x55604dce5bc0] Setting default whitelist 'file,crypto,data'\n"]
ffmpeg::process::data [data:'Successfully opened the file.\n' +
 '[webm @ 0x55604dce0fc0] dimensions not set\n' +
 'Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument\n' +
 'Error initializing output stream 0:1 -- \n' +
 'Stream mapping:\n' +
 ' Stream #0:0 -> #0:0 (copy)\n' +
 ' Stream #0:1 -> #0:1 (copy)\n' +
 ' Last message repeated 1 times\n' +
 '[AVIOContext @ 0x55604dc6dcc0] Statistics: 0 seeks, 0 writeouts\n' +
 '[AVIOContext @ 0x55604dc69380] Statistics: 210 bytes read, 0 seeks\n']
ffmpeg::process::close




FFmpeg says "dimensions not set" and "Could not write header for output file" when I use Firefox. This might be enough to understand the problem, but if you need more information, here is how the server side works.
In summary, the server side looks like this: let's say we initialized the worker and router at run time using the following functions.

// Start the mediasoup workers
module.exports.initializeWorkers = async () => {
 const { logLevel, logTags, rtcMinPort, rtcMaxPort } = config.worker;

 console.log('initializeWorkers() creating %d mediasoup workers', config.numWorkers);

 for (let i = 0; i < config.numWorkers; ++i) {
 const worker = await mediasoup.createWorker({
 logLevel, logTags, rtcMinPort, rtcMaxPort
 });

 worker.once('died', () => {
 console.error('worker::died worker has died exiting in 2 seconds... [pid:%d]', worker.pid);
 setTimeout(() => process.exit(1), 2000);
 });

 workers.push(worker);
 }
};



module.exports.createRouter = async () => {
 const worker = getNextWorker();

 console.log('createRouter() creating new router [worker.pid:%d]', worker.pid);

 console.log(`config.router.mediaCodecs:${JSON.stringify(config.router.mediaCodecs)}`)

 return await worker.createRouter({ mediaCodecs: config.router.mediaCodecs });
};



We pass router.rtpCapabilities to the client. The client gets the RTP capabilities, creates a device, and loads it. After that, a transport must be created on the server side.

const handleCreateTransportRequest = async (jsonMessage) => {

 const transport = await createTransport('webRtc', router);

 var peer;
 try {peer = peers.get(jsonMessage.sessionId);}
 catch{console.log('peer not found')}
 
 peer.addTransport(transport);

 peer.socket.emit('create-transport',{
 id: transport.id,
 iceParameters: transport.iceParameters,
 iceCandidates: transport.iceCandidates,
 dtlsParameters: transport.dtlsParameters
 });
};



Then, after the client side has also created its transport, we listen for the connect event and, when it fires, ask the server to connect the transport.


const handleTransportConnectRequest = async (jsonMessage) => {
 var peer;
 try {peer = peers.get(jsonMessage.sessionId);}
 catch{console.log('peer not found')}

 if (!peer) {
 throw new Error(`Peer with id ${jsonMessage.sessionId} was not found`);
 }

 const transport = peer.getTransport(jsonMessage.transportId);

 if (!transport) {
 throw new Error(`Transport with id ${jsonMessage.transportId} was not found`);
 }

 await transport.connect({ dtlsParameters: jsonMessage.dtlsParameters });
 console.log('handleTransportConnectRequest() transport connected');
 peer.socket.emit('connect-transport');
};



A similar thing happens on the produce event.


const handleProduceRequest = async (jsonMessage) => {
 console.log('handleProduceRequest [data:%o]', jsonMessage);

 var peer;
 try {peer = peers.get(jsonMessage.sessionId);}
 catch{console.log('peer not found')}

 if (!peer) {
 throw new Error(`Peer with id ${jsonMessage.sessionId} was not found`);
 }

 const transport = peer.getTransport(jsonMessage.transportId);

 if (!transport) {
 throw new Error(`Transport with id ${jsonMessage.transportId} was not found`);
 }

 const producer = await transport.produce({
 kind: jsonMessage.kind,
 rtpParameters: jsonMessage.rtpParameters
 });

 peer.addProducer(producer);

 console.log('handleProducerRequest() new producer added [id:%s, kind:%s]', producer.id, producer.kind);

 peer.socket.emit('produce',{
 id: producer.id,
 kind: producer.kind
 });
};



For recording, I first create plain transports for the audio and video producers.


const rtpTransport = await router.createPlainTransport(config.plainRtpTransport);



Then the RTP transport must be connected to the remote ports:


await rtpTransport.connect({
 ip: '127.0.0.1',
 port: remoteRtpPort,
 rtcpPort: remoteRtcpPort
 });



Then the consumer must also be created.


const rtpConsumer = await rtpTransport.consume({
 producerId: producer.id,
 rtpCapabilities,
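 // starting paused: media flows only after resume(), and VP8 carries its
 // frame dimensions only in keyframes (see the sketch further below)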
 paused: true
 });



After that, we can start recording using the following code:


this._rtpParameters = args;
 this._process = undefined;
 this._observer = new EventEmitter();
 this._peer = args.peer;

 this._sdpString = createSdpText(this._rtpParameters);
 this._sdpStream = convertStringToStream(this._sdpString);
 // create dir
 const dir = process.env.RECORDING_PATH ?? 'storage/recordings';
 if (!fs.existsSync(dir)) shelljs.mkdir('-p', dir);
 
 this._extension = 'webm';
 // create file path
 this._path = `${dir}/${args.peer.sessionId}.${this._extension}`
 let loop = 0;
 while(fs.existsSync(this._path)) {
 this._path = `${dir}/${args.peer.sessionId}-${++loop}.${this._extension}`
 }

this._recordingModel = await Recording.findOne({sessionIds: { $in: [this._peer.sessionId] }});
 this._recordingModel.files.push(this._path);
 await this._recordingModel.save();

let proc = ffmpeg(this._sdpStream)
 .inputOptions([
 '-protocol_whitelist','pipe,udp,rtp',
 '-f','sdp',
 ])
 .format(this._extension)
 .output(this._path)
 .size('720x?')
 .on('start', ()=>{
 this._peer.socket.emit('recording');
 })
 .on('end', ()=>{
 let path = this._path.replace('storage/recordings/', '');
 this._peer.socket.emit('recording-closed', {
 url: `${process.env.APP_URL}/recording/file/${path}`
 });
 });

 proc.run();
 this._process = proc;
 }
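The "dimensions not set" / "Could not write header for output file" failure above is consistent with FFmpeg never seeing a VP8 keyframe while probing: VP8 carries its frame dimensions only in keyframes, and the consumers are created with paused: true. A common workaround is to resume the consumers and explicitly request a keyframe once the FFmpeg process is reading from the SDP, optionally also raising -analyzeduration/-probesize in the input options. A minimal sketch, assuming videoConsumer and audioConsumer are the plain-transport consumers created above (startRecordingConsumers is an illustrative name):

// Resume the paused consumers only once FFmpeg is listening on the
// plain-transport ports, then ask the producing endpoint for a keyframe so
// FFmpeg can learn the VP8 dimensions and write the WebM header.
const startRecordingConsumers = async (videoConsumer, audioConsumer) => {
  await audioConsumer.resume();
  await videoConsumer.resume();
  // requestKeyFrame() is part of the mediasoup Consumer API
  await videoConsumer.requestKeyFrame();
};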




-
The 11th Hour RoQ Variation
12 April 2012, by Multimedia Mike (Game Hacking, dreamroq, Reverse Engineering, roq, Vector Quantization)
I have been looking at the RoQ file format almost as long as I have been doing practical multimedia hacking. However, I have never figured out how the RoQ format works in The 11th Hour, the game for which the RoQ format was initially developed. When I procured the game years ago, I remember finding what appeared to be RoQ files and shoving them through the open source decoders, but not getting the right images out.
I decided to dust off that old copy of The 11th Hour and have another go at it.
Baseline
The game consists of 4 CD-ROMs. Each disc has a media/ directory containing a series of files bearing the extension .gjd, likely the initials of one Graeme J. Devine. These are resource files which are merely headerless concatenations of other files. Thus, at first glance, one file might appear to be a single RoQ file. That is the source of some of the difficulty: sending an apparent RoQ .gjd file through a RoQ player will often cause the program to complain when it encounters the header of another RoQ file. I have uploaded some samples to the usual place.
However, even the frames that a player can decode (before encountering a file boundary within the resource file) look wrong.
Investigating Codebooks Using dreamroq
I wrote dreamroq last year: an independent RoQ playback library targeted at embedded systems. I aimed it at a .gjd file and quickly hit a codebook error.
RoQ is a vector quantizer video codec that maintains a codebook of 256 2×2-pixel vectors. In Quake III and later RoQ files, these are transported using a YUV 4:2:0 colorspace: 4 Y samples, a U sample, and a V sample represent 4 pixels, for a total of 6 bytes per vector. A RoQ codebook chunk contains a field that indicates the number of 2×2 vectors as well as the number of 4×4 vectors. The latter vectors are each composed of 4 2×2 vectors.
Thus, the total size of a codebook chunk ought to be (# of 2×2 vectors) * 6 + (# of 4×4 vectors) * 4.
However, this is not the case with The 11th Hour RoQ files.
Longer Codebooks And Mystery Colorspace
Juggling the numbers for a few of the codebook chunks, I empirically determined that the 2×2 vectors are represented by 10 bytes instead of 6. Now I need to determine what exactly these 10 bytes represent.
I should note that I suspect everything else about these files lines up with successive generations of the format. For example, if a file has 640×320 resolution, that amounts to 40×20 macroblocks. dreamroq iterates through 40×20 8×8 blocks and precisely exhausts the VQ bitstream, so that all looks valid. I'm just puzzled by the codebook format.
Here is an example codebook dump:

ID 0x1002, len = 0x0000014C, args = 0x1C0D
 0 : 00 00 00 00 00 00 00 00 80 80
 1 : 08 07 00 00 1F 5B 00 00 7E 81
 2 : 00 00 15 0F 00 00 40 3B 7F 84
 3 : 00 00 00 00 3A 5F 18 13 7E 84
 4 : 00 00 00 00 3B 63 1B 17 7E 85
 5 : 18 13 00 00 3C 63 00 00 7E 88
 6 : 00 00 00 00 00 00 59 3B 7F 81
 7 : 00 00 56 23 00 00 61 2B 80 80
 8 : 00 00 2F 13 00 00 79 63 81 83
 9 : 00 00 00 00 5E 3F AC 9B 7E 81
10 : 1B 17 00 00 B6 EF 77 AB 7E 85
11 : 2E 43 00 00 C1 F7 75 AF 7D 88
12 : 6A AB 28 5F B6 B3 8C B3 80 8A
13 : 86 BF 0A 03 D5 FF 3A 5F 7C 8C
14 : 00 00 9E 6B AB 97 F5 EF 7F 80
15 : 86 73 C8 CB B6 B7 B7 B7 85 8B
16 : 31 17 84 6B E7 EF FF FF 7E 81
17 : 79 AF 3B 5F FC FF E2 FF 7D 87
18 : DC FF AE EF B3 B3 B8 B3 85 8B
19 : EF FF F5 FF BA B7 B6 B7 88 8B
20 : F8 FF F7 FF B3 B7 B7 B7 88 8B
21 : FB FF FB FF B8 B3 B4 B3 85 88
22 : F7 FF F7 FF B7 B7 B9 B7 87 8B
23 : FD FF FE FF B9 B7 BB B7 85 8A
24 : E4 FF B7 EF FF FF FF FF 7F 83
25 : FF FF AC EB FF FF FC FF 7F 83
26 : CC C7 F7 FF FF FF FF FF 7F 81
27 : FF FF FE FF FF FF FF FF 80 80
Note that 0x14C (the chunk size) = 332, 0x1C and 0x0D (the chunk arguments — count of 2×2 and 4×4 vectors, respectively) are 28 and 13. 28 * 10 + 13 * 4 = 332, so the numbers check out.
Do you see any patterns in the codebook? Here are some things I tried:
- Treating the last 2 bytes as U and V, and treating the first 4 as the 4 Y samples.
- Treating the last 2 bytes as U and V, and treating the first 8 as 4 16-bit little-endian Y samples.
- Disregarding the final 2 bytes and treating the first 8 bytes as 4 RGB565 pixels (both little- and big-endian, respectively, shown here).
- Based on the type of data I'm seeing in these movies (which appears to be intended as overlays), I figured that some of these bits might indicate transparency; here is 15-bit big-endian RGB, which disregards the top bit of each pixel.
These images are taken from the uploaded sample bdpuz.gjd, apparently a component of the puzzle represented in this screenshot.
Unseen Types
It has long been rumored that early RoQ files could contain JPEG images. I finally found one such specimen. One of the files bundled early in the uploaded fhpuz.gjd sample contains a JPEG frame. It's a standard JFIF file and can easily be decoded after separating the bytes from the resource using 'dd'. JPEGs serve as intraframes in the coding scheme, with successive RoQ frames moving objects on top.
However, a new chunk type showed up as well, one identified by 0x1030. I have never encountered this type. Where could I possibly find data about this? Fortunately, id Games recently posted all of their open sourced games at Github. Reading through the code for their official RoQ decoder, I see that this is called a RoQ_PACKET. The name and the code behind it are both supremely unhelpful: the code is basically a no-op. The payloads of the various RoQ_PACKETs from one sample are observed to be either 8784, 14752, or 14760 bytes in length. It's very likely that this serves the same purpose as the JPEG intraframes.
Other Tidbits
I read through the readme.txt on the first game disc and found this nugget:

g) Animations displayed normally or in SPOOKY MODE
SPOOKY MODE is blue-tinted grayscale with color cursors, puzzle
and game pieces. It is the preferred display setting of the
developers at Trilobyte. Just for fun, try out the SPOOKY
MODE.

The MobyGames screenshot page has a number of screenshots labeled as being captured in spooky mode. Color tricks?
Meanwhile, another twist arose as I kept tweaking dreamroq to deal with more RoQ weirdness: after modifying my dreamroq code to handle these 10-byte vectors, it eventually chokes on another codebook. These codebooks happen to have 6-byte vectors again! Fortunately, I was already working on a scheme to automatically detect which codebook is in play: plugging the numbers into a formula and seeing which vector size checks out, as in the sketch below.
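A minimal sketch of that detection in C (names are illustrative; the counts come from the chunk arguments as in the dump above):

/* Infer the per-vector byte size of a RoQ codebook chunk by testing which
 * candidate makes the arithmetic match the chunk payload length.
 * Returns 6 (standard 4:2:0 codebooks), 10 (The 11th Hour variant),
 * or -1 if neither fits. */
static int detect_vector_size(int chunk_size, int count_2x2, int count_4x4)
{
    static const int candidates[] = { 6, 10 };
    for (int i = 0; i < 2; i++) {
        if (count_2x2 * candidates[i] + count_4x4 * 4 == chunk_size)
            return candidates[i];
    }
    return -1;
}

For the dump above, 28 * 10 + 13 * 4 = 332 = 0x14C, so this would report the 10-byte variant.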