
Other articles (74)
-
Managing creation and editing rights for objects
8 February 2011 — By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;
-
Uploading media and themes via FTP
31 May 2013 — MédiaSPIP also processes media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MédiaSPIP site and use your favourite FTP client.
From the start you will find the following folders in your FTP space: config/ (the site's configuration folder); IMG/ (media already processed and online on the site); local/ (the site's cache directory); themes/ (custom themes and style sheets); tmp/ (working folder) (...)
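For example, a custom theme can be pushed into the themes/ folder with any FTP client; a minimal sketch with lftp (host, credentials and the my_theme directory are placeholders):

lftp -u user,password ftp.example.org -e "cd themes; mirror -R my_theme; bye"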
-
The plugin: Podcasts
14 July 2010 — The problem of podcasting is, once again, one that reveals the state of standardisation of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, heavily geared toward iTunes, whose spec is here; the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software.
File types supported in feeds
Apple's format only allows the following formats in its feeds: .mp3 (audio/mpeg), .m4a (audio/x-m4a), .mp4 (...)
On other sites (9638)
-
ffmpeg Non monotonous DTS, Previous DTS is always the same, audio microphone streaming [closed]
5 February, by adrien gonzalez — I'm using ffmpeg to stream audio from a microphone over RTP. I'm on a Raspberry Pi and use an external sound card (HifiBerry DAC + ADC Pro).
My goal is to stream audio with the lowest possible latency to other Raspberry Pis that read the stream with ffplay. I try not to compress the audio and to leave it untouched as WAV at 48000 Hz.
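On the playback side, a sketch of the ffplay invocation (assuming the SDP printed by ffmpeg is saved as stream.sdp on the receiver; the low-latency flags are optional extras):

ffplay -protocol_whitelist file,udp,rtp -fflags nobuffer -flags low_delay stream.sdp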
I often encounter non-monotonous DTS errors. When this happens, hundreds of milliseconds of latency accumulate.
I tried adding the +igndts flag, but it does not change anything. I also tried the +genpts flag.
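For reference, roughly how I added them, as demuxer flags on the input:

ffmpeg -fflags +igndts+genpts -guess_layout_max 0 -re -f alsa -i hw -acodec pcm_s16le -ac 1 -payload_type 10 -f rtp rtp://192.168.1.152:5003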


What is odd is that the previous DTS is always the same (201165 in the example below) and never seems to change.
I looked on forums for an answer but was unable to find one.


Here is my bash command:


ffmpeg -guess_layout_max 0 -re -f alsa -i hw -acodec pcm_s16le -ac 1 -payload_type 10 -f rtp rtp://192.168.1.152:5003


And the result from the terminal:


ffmpeg version 5.1.6-0+deb12u1+rpt1 Copyright (c) 2000-2024 the FFmpeg developers


built with gcc 12 (Debian 12.2.0-14)
 configuration: --prefix=/usr --extra-version=0+deb12u1+rpt1 --toolchain=hardened --incdir=/usr/include/aarch64-linux-gnu --enable-gpl --disable-stripping --disable-mmal --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librist --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sand --enable-sdl2 --disable-sndio --enable-libjxl --enable-neon --enable-v4l2-request --enable-libudev --enable-epoxy --libdir=/usr/lib/aarch64-linux-gnu --arch=arm64 --enable-pocketsphinx --enable-librsvg --enable-libdc1394 --enable-libdrm --enable-vout-drm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-libplacebo --enable-librav1e --enable-shared
 libavutil 57. 28.100 / 57. 28.100
 libavcodec 59. 37.100 / 59. 37.100
 libavformat 59. 27.100 / 59. 27.100
 libavdevice 59. 7.100 / 59. 7.100
 libavfilter 8. 44.100 / 8. 44.100
 libswscale 6. 7.100 / 6. 7.100
 libswresample 4. 7.100 / 4. 7.100
 libpostproc 56. 6.100 / 56. 6.100
Input #0, alsa, from 'hw':
 Duration: N/A, start: 1738663653.066577, bitrate: 1536 kb/s
 Stream #0:0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
Stream mapping:
 Stream #0:0 -> #0:0 (pcm_s16le (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, rtp, to 'rtp://192.168.1.152:5003':
 Metadata:
 encoder : Lavf59.27.100
 Stream #0:0: Audio: pcm_s16le, 48000 Hz, mono, s16, 768 kb/s
 Metadata:
 encoder : Lavc59.37.100 pcm_s16le
SDP:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 192.168.1.152
t=0 0
a=tool:libavformat LIBAVFORMAT_VERSION
m=audio 5003 RTP/AVP 10
b=AS:768

[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201160; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201155; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201149; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201142; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201134; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201124; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201114; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201102; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201089; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201075; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201060; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201044; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201027; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 201009; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 200990; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 200970; changing to 201165. This may result in incorrect timestamps in the output file.
[rtp @ 0x558b48ea90] Non-monotonous DTS in output stream 0:0; previous: 201165, current: 200949; changing to 201165. This may result in incorrect timestamps in the output file.



As noted above, I tried the +igndts and +genpts flags, but neither changed anything. I expected the DTS to recover on its own, but I still have the same issue.
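One idea I have not verified yet: drop -re (the ALSA input is already real-time) and rebuild the audio timestamps from the sample count with the asetpts filter, which should make them strictly increasing:

ffmpeg -guess_layout_max 0 -f alsa -i hw -af "asetpts=N/SR/TB" -acodec pcm_s16le -ac 1 -payload_type 10 -f rtp rtp://192.168.1.152:5003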


-
FFmpeg MATLAB error: At least one output file must be specified? [closed]
3 March, by as moh — I'm trying to extract the I-frames from a video in MATLAB using this command:
system(sprintf('ffmpeg -i testVid.mp4 -vf "select=eq(pict_type\,I)" -vsync vfr output_%03d.png'));
but I get this message:

ffmpeg version 7.1-full_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers 
 built with gcc 14.2.0 (Rev1, Built by MSYS2 project) 
 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libopenjpeg --enable-libquirc --enable-libuavs3d --enable-libxevd --enable-libzvbi --enable-libqrencode --enable-librav1e --enable-libsvtav1 --enable-libvvenc --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxeve --enable-libxvid --enable-libaom --enable-libjxl --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-liblc3 --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint 
 libavutil 59. 39.100 / 59. 39.100 
 libavcodec 61. 19.100 / 61. 19.100 
 libavformat 61. 7.100 / 61. 7.100 
 libavdevice 61. 3.100 / 61. 3.100 
 libavfilter 10. 4.100 / 10. 4.100 
 libswscale 8. 3.100 / 8. 3.100 
 libswresample 5. 3.100 / 5. 3.100 
 libpostproc 58. 3.100 / 58. 3.100 
Trailing option(s) found in the command: may be ignored. 
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'testVid.mp4': 
 Metadata: 
 major_brand : isom 
 minor_version : 512 
 compatible_brands: isomiso2avc1mp41 
 encoder : Lavf57.83.100 
 Duration: 00:00:02.02, start: 0.000000, bitrate: 12798 kb/s 
 Stream #0:0[0x1](eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuvj420p(pc, progressive), 1280x720 [SAR 1:1 DAR 16:9], 12662 kb/s, 29.74 fps, 30 tbr, 90k tbn (default) 
 Metadata: 
 handler_name : VideoHandler 
 vendor_id : [0][0][0][0] 
 Stream #0:1[0x2](eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 121 kb/s (default) 
 Metadata: 
 handler_name : SoundHandler 
 vendor_id : [0][0][0][0] 
At least one output file must be specified 



I searched and tried many things, but I can't figure out where the problem is. Any help, please?
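A likely lead: MATLAB's sprintf treats %03d as a conversion specifier with no matching argument and \, as an escape sequence, so the string handed to system() is most probably truncated before the output filename ever reaches ffmpeg, which would explain both "Trailing option(s) found" and "At least one output file must be specified". The command ffmpeg needs to receive is:

ffmpeg -i testVid.mp4 -vf "select=eq(pict_type\,I)" -vsync vfr output_%03d.png

In MATLAB, either pass this as a plain string without sprintf, or escape it for sprintf as %%03d and \\, so it survives formatting.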


-
Merged Video Contains Inverted Clips After First Video Ends
3 February, by Nikunj Agrawal — I am working on a Flutter application that merges multiple videos using ffmpeg_kit_flutter. However, after merging, I notice that the second video (and any subsequent ones) appears inverted or rotated in the final output.

Issue details:

- The first video appears normal.
- The videos can be recorded using both front and back cameras.
- The second (and later) videos are flipped or rotated upside down.
- This happens after merging using ffmpeg_kit_flutter.

Question:
How can I correctly merge multiple videos in Flutter without rotation issues? Is there a way to normalize video orientation before merging with ffmpeg_kit_flutter?

Any help would be appreciated! 🚀
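One direction that may help (a sketch; the norm_*.mp4 names are placeholders): re-encode each clip before concatenation so the decoder applies the per-clip rotation metadata and the pixels come out upright, then run the concat step in the code below on the normalized files:

ffmpeg -i video_1.mp4 -c:v libx264 -preset veryfast -c:a aac norm_1.mp4
ffmpeg -i video_2.mp4 -c:v libx264 -preset veryfast -c:a aac norm_2.mp4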


Code:


import 'dart:io';
import 'dart:math';

import 'package:camera/camera.dart';
import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';
import 'package:flutter/material.dart';
import 'package:path_provider/path_provider.dart';
import 'package:permission_handler/permission_handler.dart';
import 'package:record/record.dart';
import 'package:videotest/video_player.dart';

class MergeVideoRecording extends StatefulWidget {
 const MergeVideoRecording({super.key});

 @override
 State<MergeVideoRecording> createState() => _MergeVideoRecordingState();
}

class _MergeVideoRecordingState extends State<MergeVideoRecording> {
 CameraController? _cameraController;
 final AudioRecorder _audioRecorder = AudioRecorder();

 bool _isRecording = false;
 String? _videoPath;
 String? _audioPath;
 List<CameraDescription> _cameras = [];
 int _currentCameraIndex = 0;
 final List<String> _recordedVideos = [];

 @override
 Widget build(BuildContext context) {
 return Scaffold(
 body: Column(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 _cameraController != null && _cameraController!.value.isInitialized
 ? SizedBox(
 width: MediaQuery.of(context).size.width * 0.4,
 height: MediaQuery.of(context).size.height * 0.3,
 child: Stack(
 children: [
 ClipRRect(
 borderRadius: BorderRadius.circular(16),
 child: SizedBox(
 width: MediaQuery.of(context).size.width * 0.4,
 height: MediaQuery.of(context).size.height * 0.3,
 child: Transform(
 alignment: Alignment.center,
 transform:
 _cameras[_currentCameraIndex].lensDirection ==
 CameraLensDirection.front
 ? Matrix4.rotationY(pi)
 : Matrix4.identity(),
 child: CameraPreview(_cameraController!),
 ),
 ),
 ),
 Align(
 alignment: Alignment.topRight,
 child: InkWell(
 onTap: _switchCamera,
 child: const Padding(
 padding: EdgeInsets.all(8.0),
 child: CircleAvatar(
 radius: 18,
 backgroundColor: Colors.white,
 child: Icon(
 Icons.flip_camera_android,
 color: Colors.black,
 ),
 ),
 ),
 ),
 ),
 ],
 ),
 )
 : const CircularProgressIndicator(),
 const SizedBox(height: 16),
 Row(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 FloatingActionButton(
 heroTag: 'record_button',
 onPressed: _toggleRecording,
 child: Icon(
 _isRecording ? Icons.stop : Icons.video_camera_back,
 ),
 ),
 const SizedBox(
 width: 50,
 ),
 FloatingActionButton(
 heroTag: 'merge_button',
 onPressed: _mergeVideos,
 child: const Icon(
 Icons.merge,
 ),
 ),
 ],
 ),
 if (!_isRecording)
 ListView.builder(
 shrinkWrap: true,
 itemCount: _recordedVideos.length,
 itemBuilder: (context, index) => InkWell(
 onTap: () {
 Navigator.push(
 context,
 MaterialPageRoute(
 builder: (context) => VideoPlayerScreen(
 videoPath: _recordedVideos[index],
 ),
 ),
 );
 },
 child: ListTile(
 title: Text('Video ${index + 1}'),
 subtitle: Text('Path ${_recordedVideos[index]}'),
 trailing: const Icon(Icons.play_arrow),
 ),
 ),
 ),
 ],
 ),
 );
 }

 @override
 void dispose() {
 _cameraController?.dispose();
 _audioRecorder.dispose();
 super.dispose();
 }

 @override
 void initState() {
 super.initState();
 _initializeDevices();
 }

 Future<void> _initializeCameraController(CameraDescription camera) async {
 _cameraController = CameraController(
 camera,
 ResolutionPreset.high,
 enableAudio: true,
 imageFormatGroup: ImageFormatGroup.yuv420, // Add this line
 );

 await _cameraController!.initialize();
 await _cameraController!.setExposureMode(ExposureMode.auto);
 await _cameraController!.setFocusMode(FocusMode.auto);
 setState(() {});
 }

 Future<void> _initializeDevices() async {
 final cameraStatus = await Permission.camera.request();
 final micStatus = await Permission.microphone.request();

 if (!cameraStatus.isGranted || !micStatus.isGranted) {
 _showError('Camera and microphone permissions required');
 return;
 }

 _cameras = await availableCameras();
 if (_cameras.isNotEmpty) {
 final frontCameraIndex = _cameras.indexWhere(
 (camera) => camera.lensDirection == CameraLensDirection.front);
 _currentCameraIndex = frontCameraIndex != -1 ? frontCameraIndex : 0;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 }
 }

 // Merge video
 Future<void> _mergeVideos() async {
 if (_recordedVideos.isEmpty) {
 _showError('No videos to merge');
 return;
 }

 try {
 // Debug logging
 print('Starting merge process');
 print('Number of videos to merge: ${_recordedVideos.length}');
 for (var i = 0; i < _recordedVideos.length; i++) {
 final file = File(_recordedVideos[i]);
 final exists = await file.exists();
 final size = exists ? await file.length() : 0;
 print('Video $i: ${_recordedVideos[i]}');
 print('Exists: $exists, Size: $size bytes');
 }

 final Directory appDir = await getApplicationDocumentsDirectory();
 final String outputPath =
 '${appDir.path}/merged_${DateTime.now().millisecondsSinceEpoch}.mp4';
 final String listFilePath = '${appDir.path}/list.txt';

 print('Output path: $outputPath');
 print('List file path: $listFilePath');

 // Create and verify list file
 final listFile = File(listFilePath);
 final fileContent = _recordedVideos
 .map((path) => "file '${path.replaceAll("'", "'\\''")}'")
 .join('\n');
 await listFile.writeAsString(fileContent);

 print('List file content:');
 print(await listFile.readAsString());

 // Simpler FFmpeg command for testing
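 // Note (assumption): `-c copy` keeps each input's streams bit-exact, so
 // per-clip rotation metadata (front vs. back camera) is never applied to
 // the pixels and the merged file carries a single display matrix. This is
 // a likely cause of the inverted clips; re-encoding each clip first (see
 // the sketch above the code) bakes the correct orientation in.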
 final command = '''
 -f concat
 -safe 0
 -i "$listFilePath"
 -c copy
 -y
 "$outputPath"
 '''
 .trim()
 .replaceAll('\n', ' ');

 print('Executing FFmpeg command: $command');

 final session = await FFmpegKit.execute(command);
 final returnCode = await session.getReturnCode();
 final logs = await session.getAllLogsAsString();
 final failStackTrace = await session.getFailStackTrace();

 print('FFmpeg return code: ${returnCode?.getValue() ?? "null"}');
 print('FFmpeg logs: $logs');
 if (failStackTrace != null) {
 print('FFmpeg fail stack trace: $failStackTrace');
 }

 if (ReturnCode.isSuccess(returnCode)) {
 final outputFile = File(outputPath);
 final outputExists = await outputFile.exists();
 final outputSize = outputExists ? await outputFile.length() : 0;

 print('Output file exists: $outputExists');
 print('Output file size: $outputSize bytes');

 if (outputExists && outputSize > 0) {
 setState(() => _recordedVideos.add(outputPath));
 _showSuccess('Videos merged successfully');
 } else {
 _showError('Merged file is empty or not created');
 }
 } else {
 _showError('Failed to merge videos. Check logs for details.');
 }

 // Clean up
 try {
 await listFile.delete();
 print('List file cleaned up successfully');
 } catch (e) {
 print('Failed to delete list file: $e');
 }
 } catch (e, s) {
 print('Error during merge: $e');
 print('Stack trace: $s');
 _showError('Error merging videos: ${e.toString()}');
 }
 }

 void _showError(String message) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(content: Text(message), backgroundColor: Colors.red),
 );
 }

 void _showSuccess(String message) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(content: Text(message), backgroundColor: Colors.green),
 );
 }

 Future<void> _startAudioRecording() async {
 try {
 final Directory tempDir = await getTemporaryDirectory();
 final audioPath = '${tempDir.path}/recording.wav';
 await _audioRecorder.start(const RecordConfig(), path: audioPath);
 setState(() => _isRecording = true);
 } catch (e) {
 _showError('Recording start error: $e');
 }
 }

 Future<void> _startVideoRecording() async {
 try {
 await _cameraController!.startVideoRecording();
 setState(() => _isRecording = true);
 } catch (e) {
 _showError('Recording start error: $e');
 }
 }

 Future<void> _stopAndSaveAudioRecording() async {
 _audioPath = await _audioRecorder.stop();
 if (_audioPath != null) {
 final Directory appDir = await getApplicationDocumentsDirectory();
 final timestamp = DateTime.now().millisecondsSinceEpoch;
 final String audioFileName = 'audio_$timestamp.wav';
 await File(_audioPath!).copy('${appDir.path}/$audioFileName');
 _showSuccess('Saved: $audioFileName');
 }
 }

 Future<void> _stopAndSaveVideoRecording() async {
 try {
 final video = await _cameraController!.stopVideoRecording();
 _videoPath = video.path;

 if (_videoPath != null) {
 final Directory appDir = await getApplicationDocumentsDirectory();
 final timestamp = DateTime.now().millisecondsSinceEpoch;
 final String videoFileName = 'video_$timestamp.mp4';
 final savedVideoPath = '${appDir.path}/$videoFileName';
 await File(_videoPath!).copy(savedVideoPath);

 setState(() {
 _recordedVideos.add(savedVideoPath);
 _isRecording = false;
 });

 _showSuccess('Saved: $videoFileName');
 }
 } catch (e) {
 _showError('Recording stop error: $e');
 }
 }

 Future<void> _switchCamera() async {
 if (_cameras.length <= 1) return;

 if (_isRecording) {
 await _stopAndSaveVideoRecording();
 _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 await _startVideoRecording();
 } else {
 _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 }
 }

 Future<void> _toggleRecording() async {
 if (_cameraController == null) return;

 if (_isRecording) {
 await _stopAndSaveVideoRecording();
 await _stopAndSaveAudioRecording();
 } else {
 _startVideoRecording();
 _startAudioRecording();
 setState(() => _recordedVideos.clear());
 }
 }
}