
Media (1)
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
Other articles (22)
-
Submit enhancements and plugins
13 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know, and its integration into the core MediaSPIP functionality will be considered.
You can use the development discussion list to request help with creating a plugin. As MediaSPIP is based on SPIP, you can also use the SPIP discussion list SPIP-Zone. -
Other interesting software
12 April 2011, by
We do not claim to be the only ones doing what we do... and we certainly do not claim to be the best either... What we do, we simply try to do well, and better and better...
The list below covers software that more or less tries to do what MediaSPIP does, or that MediaSPIP more or less tries to do the same as; either way...
We do not know them and have not tried them, but you may want to take a look.
Videopress
Website: (...) -
MediaSPIP Init and Diogène: MediaSPIP publication types
11 November 2010, by
When a MediaSPIP site is installed, the MediaSPIP Init plugin carries out a number of operations, the main one being to create four main sections in the site and five form templates for Diogène.
These four main sections (also called sectors) are: Medias; Sites; Editos; Actualités.
For each of these sections, a specific form template of the same name is created. For the "Medias" section, a second "category" template is also created, making it possible to add (...)
On other sites (4581)
-
How Piwik uses Travis CI to deliver a reliable analytics platform to the community
26 May 2014, by Matthieu Aubry — Development, Meta
In this post, we will explain how the Piwik project uses continuous integration to deliver a quality software platform to tens of thousands of users worldwide. Read this post if you are interested in the Piwik project, Quality Assurance or automated testing.
Why do we care about tests?
Continuous Integration brings us agility and peace of mind. From the very beginning of the Piwik project, it was clear to us that writing and maintaining automated tests was a necessity, in order to create a successful open source software platform.
Over the years we have invested a lot of time into writing and maintaining our test suites. This work has paid off in so many ways! The Piwik platform has fewer bugs, fewer regressions, and we are able to release new minor and major versions frequently.
Which parts of the Piwik software are automatically tested?
- Piwik back-end in PHP5: we use PHPUnit to write and run our PHP tests: unit tests, integration tests, and plugin tests.
- piwik.js tracker: the JS tracker is included in all websites that use Piwik. For this reason, it is critical that the piwik.js JavaScript tracker always works without any issue or regression. Our JavaScript tracker tests include both unit and integration tests.
- Piwik front-end: more recently we've started to write JavaScript tests for the user interface, which is partially written in AngularJS.
- Piwik front-end screenshot tests: after each change to Piwik, more than 150 different screenshots are automatically taken. For example, we take screenshots of each step of the 8-step installation process, and we take screenshots of the password reset workflow, etc. Each of these screenshots is then compared, pixel by pixel, with the “expected” screenshot (a generic sketch of such a comparison follows this list), and we can automatically detect whether the last code change has introduced an undesired visual change. Learn more about Piwik screenshot tests.
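As a generic illustration of the idea (this is not Piwik's actual tooling), a pixel-by-pixel comparison of two screenshots can be done with a single ImageMagick command, where expected.png and actual.png are hypothetical file names:
compare -metric AE expected.png actual.png diff.png
The AE metric prints the number of differing pixels on stderr and writes a visual diff image; a count of 0 means the two screenshots are identical, anything else flags a visual change for review.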
How often do we run the tests?
The tests are executed by Travis CI after each change to the Piwik source code. On average, all our tests run 20 times per day. Whenever a Piwik developer pushes code to GitHub, or when a community member opens a pull request, Travis CI automatically runs the tests. If any of the automated tests start failing after a change, the developer who made the change is notified by email.
Should I use Travis CI?
Over the last six years, we have used various continuous integration servers such as Bamboo, Hudson and Jenkins… and have found that Travis CI is the ideal continuous integration service for open source projects hosted on GitHub. Travis CI is free for open source projects, and the Travis CI team is very friendly and responsive! If you work on commercial closed-source software, you can also use Travis by signing up for Travis CI Pro.
Summary
Tests make the Piwik analytics platform better. Writing tests makes Piwik contributors better developers. We save a lot of time and effort, and we are not afraid of change!
Here is the current status of our builds:
Main build:
Screenshot tests build:
PS: If you are a developer looking for a challenge, Piwik is hiring a software developer to join our engineering team in New Zealand or Poland.
-
Cannot concatenate videos ffmpeg [on hold]
1 May 2014, by Paul Prescod
I have a bitmap that I would like to concatenate to the front of many videos as a sort of title screen or disclaimer screen.
I try to turn it into a video with the same attributes as the rest of the video. So first I introspect the video:
ffmpeg version 2.2.1 Copyright (c) 2000-2014 the FFmpeg developers
built on Apr 11 2014 22:50:38 with Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)
configuration: --prefix=/usr/local/Cellar/ffmpeg/2.2.1 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-avresample --enable-vda --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-libxvid
libavutil 52. 66.100 / 52. 66.100
libavcodec 55. 52.102 / 55. 52.102
libavformat 55. 33.100 / 55. 33.100
libavdevice 55. 10.100 / 55. 10.100
libavfilter 4. 2.100 / 4. 2.100
libavresample 1. 2. 0 / 1. 2. 0
libswscale 2. 5.102 / 2. 5.102
libswresample 0. 18.100 / 0. 18.100
libpostproc 52. 3.100 / 52. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'EO1.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
creation_time : 1970-01-01 00:00:00
encoder : Lavf52.78.3
Duration: 00:00:17.77, start: 0.000000, bitrate: 582 kb/s
Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 584x328 [SAR 1:1 DAR 73:41], 512 kb/s, 23.98 fps, 23.98 tbr, 1199 tbn, 47.96 tbc (default)
Metadata:
creation_time : 1970-01-01 00:00:00
handler_name : VideoHandler
Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 64 kb/s (default)
Metadata:
creation_time : 1970-01-01 00:00:00
handler_name : SoundHandler

Then I try to create a similar file:
/usr/local/Cellar/ffmpeg/2.2.1/bin/ffmpeg -y -loop 1 -i Disclaimer.png -c:v libx264 -r 23.98 -t 5 -pix_fmt yuv420p -profile:v main disclaimer.mp4
It seems to work okay. The video plays as I would expect it to. The attributes turn out very similar. Here is a diff:
< Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'disclaimer.mp4':
---
> Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'EO1.mp4':
18,20c18,21
< encoder : Lavf55.33.100
< Duration: 00:00:05.01, start: 0.000000, bitrate: 21 kb/s
< Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 584x328 [SAR 1:1 DAR 73:41], 17 kb/s, 23.98 fps, 23.98 tbr, 19184 tbn, 47.96 tbc (default)
---
> creation_time : 1970-01-01 00:00:00
> encoder : Lavf52.78.3
> Duration: 00:00:17.77, start: 0.000000, bitrate: 582 kb/s
> Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 584x328 [SAR 1:1 DAR 73:41], 512 kb/s, 23.98 fps, 23.98 tbr, 1199 tbn, 47.96 tbc (default)
21a23
22a25,28
> Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 64 kb/s (default)
> Metadata:
> handler_name : SoundHandler

But when I try to concatenate, I get errors.
$ cat temporary.txt
file disclaimer.mp4
file EO1.mp4

/usr/local/Cellar/ffmpeg/2.2.1/bin/ffmpeg -y -f concat -i temporary.txt -c copy output.mp4
[concat @ 0x7fd880806600] Estimating duration from bitrate, this may be inaccurate
Input #0, concat, from 'temporary.txt':
Duration: 00:00:00.02, start: 0.000000, bitrate: 17 kb/s
Stream #0:0: Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 584x328 [SAR 1:1 DAR 73:41], 17 kb/s, 23.98 fps, 23.98 tbr, 19184 tbn, 47.96 tbc
Output #0, mp4, to 'output.mp4':
Metadata:
encoder : Lavf55.33.100
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 584x328 [SAR 1:1 DAR 73:41], q=2-31, 17 kb/s, 23.98 fps, 19184 tbn, 19184 tbc
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0x7fd880829a00] Non-monotonous DTS in output stream 0:0; previous: 93600, current: 5951; changing to 93601. This may result in incorrect timestamps in the output file.
[concat @ 0x7fd880806600] Invalid stream index 1
[mp4 @ 0x7fd880829a00] Non-monotonous DTS in output stream 0:0; previous: 93601, current: 6001; changing to 93602. This may result in incorrect timestamps in the output file.
[concat @ 0x7fd880806600] Invalid stream index 1
[mp4 @ 0x7fd880829a00] Non-monotonous DTS in output stream 0:0; previous: 93602, current: 6051; changing to 93603. This may result in incorrect timestamps in the output file....
frame= 546 fps=0.0 q=-1.0 Lsize= 1127kB time=00:00:04.90 bitrate=1882.9kbits/s
video:1123kB audio:0kB subtitle:0 data:0 global headers:0kB muxing overhead 0.349865%

The output looks like it only has my disclaimer file in it, not the rest of the video.
I’m also confused about why it feels it needs to "estimate" anything. It knows the input FPS and the input durations. I’m not sure if this is the problem or not. Maybe it’s just a bug.
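For context (a sketch, not part of the original question): the "Invalid stream index 1" messages suggest the concat demuxer found a stream in EO1.mp4 (the audio track) with no counterpart in the first listed file, since disclaimer.mp4 is video-only. One common way to make the inputs compatible for a -c copy concat is to give the still-image clip a silent audio track matching the main video (AAC, 48000 Hz, stereo), for example with the anullsrc source:
/usr/local/Cellar/ffmpeg/2.2.1/bin/ffmpeg -y -loop 1 -i Disclaimer.png -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=48000 -c:v libx264 -r 23.98 -t 5 -pix_fmt yuv420p -profile:v main -c:a aac -strict experimental -b:a 64k disclaimer.mp4
The native AAC encoder in this ffmpeg version needs -strict experimental; since the build shown includes libfaac, -c:a libfaac would be an alternative.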
-
Merged Video Contains Inverted Clips After First Video Ends
3 February, by Nikunj Agrawal
I am working on a Flutter application that merges multiple videos using ffmpeg_kit_flutter. However, after merging, I notice that the second video (and any subsequent ones) appear inverted or rotated in the final output.

Issue Details:

- The first video appears normal.
- The videos can be recorded using both front and back cameras.
- The second (and later) videos are flipped or rotated upside down.
- This happens after merging using ffmpeg_kit_flutter.

Question:
How can I correctly merge multiple videos in Flutter without rotation issues? Is there a way to normalize video orientation before merging using ffmpeg_kit_flutter?

Any help would be appreciated! 🚀
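One approach sometimes used for this kind of problem (an illustrative sketch, not taken from the post) is to re-encode each clip before concatenation so that rotation metadata is baked into the pixels and every clip shares the same resolution, frame rate and codecs; a plain -c copy concat cannot reconcile clips whose orientation metadata differs. A per-clip normalization command might look like the following, where video_1.mp4, normalized_1.mp4 and the 720x1280 portrait target are hypothetical placeholders:
ffmpeg -y -i video_1.mp4 -vf "scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2,setsar=1" -r 30 -c:v libx264 -pix_fmt yuv420p -c:a aac -ar 44100 -ac 2 normalized_1.mp4
The normalized files can then be listed in list.txt and joined with the same -f concat -safe 0 ... -c copy command that _mergeVideos below already builds.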


Code:


import 'dart:io';
import 'dart:math';

import 'package:camera/camera.dart';
import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';
import 'package:flutter/material.dart';
import 'package:path_provider/path_provider.dart';
import 'package:permission_handler/permission_handler.dart';
import 'package:record/record.dart';
import 'package:videotest/video_player.dart';

class MergeVideoRecording extends StatefulWidget {
 const MergeVideoRecording({super.key});

 @override
 State<MergeVideoRecording> createState() => _MergeVideoRecordingState();
}

class _MergeVideoRecordingState extends State<MergeVideoRecording> {
 CameraController? _cameraController;
 final AudioRecorder _audioRecorder = AudioRecorder();

 bool _isRecording = false;
 String? _videoPath;
 String? _audioPath;
 List<CameraDescription> _cameras = [];
 int _currentCameraIndex = 0;
 final List<String> _recordedVideos = [];

 @override
 Widget build(BuildContext context) {
 return Scaffold(
 body: Column(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 _cameraController != null && _cameraController!.value.isInitialized
 ? SizedBox(
 width: MediaQuery.of(context).size.width * 0.4,
 height: MediaQuery.of(context).size.height * 0.3,
 child: Stack(
 children: [
 ClipRRect(
 borderRadius: BorderRadius.circular(16),
 child: SizedBox(
 width: MediaQuery.of(context).size.width * 0.4,
 height: MediaQuery.of(context).size.height * 0.3,
 child: Transform(
 alignment: Alignment.center,
 transform:
 _cameras[_currentCameraIndex].lensDirection ==
 CameraLensDirection.front
 ? Matrix4.rotationY(pi)
 : Matrix4.identity(),
 child: CameraPreview(_cameraController!),
 ),
 ),
 ),
 Align(
 alignment: Alignment.topRight,
 child: InkWell(
 onTap: _switchCamera,
 child: const Padding(
 padding: EdgeInsets.all(8.0),
 child: CircleAvatar(
 radius: 18,
 backgroundColor: Colors.white,
 child: Icon(
 Icons.flip_camera_android,
 color: Colors.black,
 ),
 ),
 ),
 ),
 ),
 ],
 ),
 )
 : const CircularProgressIndicator(),
 const SizedBox(height: 16),
 Row(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 FloatingActionButton(
 heroTag: 'record_button',
 onPressed: _toggleRecording,
 child: Icon(
 _isRecording ? Icons.stop : Icons.video_camera_back,
 ),
 ),
 const SizedBox(
 width: 50,
 ),
 FloatingActionButton(
 heroTag: 'merge_button',
 onPressed: _mergeVideos,
 child: const Icon(
 Icons.merge,
 ),
 ),
 ],
 ),
 if (!_isRecording)
 ListView.builder(
 shrinkWrap: true,
 itemCount: _recordedVideos.length,
 itemBuilder: (context, index) => InkWell(
 onTap: () {
 Navigator.push(
 context,
 MaterialPageRoute(
 builder: (context) => VideoPlayerScreen(
 videoPath: _recordedVideos[index],
 ),
 ),
 );
 },
 child: ListTile(
 title: Text('Video ${index + 1}'),
 subtitle: Text('Path ${_recordedVideos[index]}'),
 trailing: const Icon(Icons.play_arrow),
 ),
 ),
 ),
 ],
 ),
 );
 }

 @override
 void dispose() {
 _cameraController?.dispose();
 _audioRecorder.dispose();
 super.dispose();
 }

 @override
 void initState() {
 super.initState();
 _initializeDevices();
 }

 Future<void> _initializeCameraController(CameraDescription camera) async {
 _cameraController = CameraController(
 camera,
 ResolutionPreset.high,
 enableAudio: true,
 imageFormatGroup: ImageFormatGroup.yuv420, // Add this line
 );

 await _cameraController!.initialize();
 await _cameraController!.setExposureMode(ExposureMode.auto);
 await _cameraController!.setFocusMode(FocusMode.auto);
 setState(() {});
 }

 Future<void> _initializeDevices() async {
 final cameraStatus = await Permission.camera.request();
 final micStatus = await Permission.microphone.request();

 if (!cameraStatus.isGranted || !micStatus.isGranted) {
 _showError('Camera and microphone permissions required');
 return;
 }

 _cameras = await availableCameras();
 if (_cameras.isNotEmpty) {
 final frontCameraIndex = _cameras.indexWhere(
 (camera) => camera.lensDirection == CameraLensDirection.front);
 _currentCameraIndex = frontCameraIndex != -1 ? frontCameraIndex : 0;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 }
 }

 // Merge video
 Future<void> _mergeVideos() async {
 if (_recordedVideos.isEmpty) {
 _showError('No videos to merge');
 return;
 }

 try {
 // Debug logging
 print('Starting merge process');
 print('Number of videos to merge: ${_recordedVideos.length}');
 for (var i = 0; i < _recordedVideos.length; i++) {
 final file = File(_recordedVideos[i]);
 final exists = await file.exists();
 final size = exists ? await file.length() : 0;
 print('Video $i: ${_recordedVideos[i]}');
 print('Exists: $exists, Size: $size bytes');
 }

 final Directory appDir = await getApplicationDocumentsDirectory();
 final String outputPath =
 '${appDir.path}/merged_${DateTime.now().millisecondsSinceEpoch}.mp4';
 final String listFilePath = '${appDir.path}/list.txt';

 print('Output path: $outputPath');
 print('List file path: $listFilePath');

 // Create and verify list file
 final listFile = File(listFilePath);
 final fileContent = _recordedVideos
 .map((path) => "file '${path.replaceAll("'", "'\\''")}'")
 .join('\n');
 await listFile.writeAsString(fileContent);

 print('List file content:');
 print(await listFile.readAsString());

 // Simpler FFmpeg command for testing
 final command = '''
 -f concat
 -safe 0
 -i "$listFilePath"
 -c copy
 -y
 "$outputPath"
 '''
 .trim()
 .replaceAll('\n', ' ');

 print('Executing FFmpeg command: $command');

 final session = await FFmpegKit.execute(command);
 final returnCode = await session.getReturnCode();
 final logs = await session.getAllLogsAsString();
 final failStackTrace = await session.getFailStackTrace();

 print('FFmpeg return code: ${returnCode?.getValue() ?? "null"}');
 print('FFmpeg logs: $logs');
 if (failStackTrace != null) {
 print('FFmpeg fail stack trace: $failStackTrace');
 }

 if (ReturnCode.isSuccess(returnCode)) {
 final outputFile = File(outputPath);
 final outputExists = await outputFile.exists();
 final outputSize = outputExists ? await outputFile.length() : 0;

 print('Output file exists: $outputExists');
 print('Output file size: $outputSize bytes');

 if (outputExists && outputSize > 0) {
 setState(() => _recordedVideos.add(outputPath));
 _showSuccess('Videos merged successfully');
 } else {
 _showError('Merged file is empty or not created');
 }
 } else {
 _showError('Failed to merge videos. Check logs for details.');
 }

 // Clean up
 try {
 await listFile.delete();
 print('List file cleaned up successfully');
 } catch (e) {
 print('Failed to delete list file: $e');
 }
 } catch (e, s) {
 print('Error during merge: $e');
 print('Stack trace: $s');
 _showError('Error merging videos: ${e.toString()}');
 }
 }

 void _showError(String message) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(content: Text(message), backgroundColor: Colors.red),
 );
 }

 void _showSuccess(String message) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(content: Text(message), backgroundColor: Colors.green),
 );
 }

 Future<void> _startAudioRecording() async {
 try {
 final Directory tempDir = await getTemporaryDirectory();
 final audioPath = '${tempDir.path}/recording.wav';
 await _audioRecorder.start(const RecordConfig(), path: audioPath);
 setState(() => _isRecording = true);
 } catch (e) {
 _showError('Recording start error: $e');
 }
 }

 Future<void> _startVideoRecording() async {
 try {
 await _cameraController!.startVideoRecording();
 setState(() => _isRecording = true);
 } catch (e) {
 _showError('Recording start error: $e');
 }
 }

 Future<void> _stopAndSaveAudioRecording() async {
 _audioPath = await _audioRecorder.stop();
 if (_audioPath != null) {
 final Directory appDir = await getApplicationDocumentsDirectory();
 final timestamp = DateTime.now().millisecondsSinceEpoch;
 final String audioFileName = 'audio_$timestamp.wav';
 await File(_audioPath!).copy('${appDir.path}/$audioFileName');
 _showSuccess('Saved: $audioFileName');
 }
 }

 Future<void> _stopAndSaveVideoRecording() async {
 try {
 final video = await _cameraController!.stopVideoRecording();
 _videoPath = video.path;

 if (_videoPath != null) {
 final Directory appDir = await getApplicationDocumentsDirectory();
 final timestamp = DateTime.now().millisecondsSinceEpoch;
 final String videoFileName = 'video_$timestamp.mp4';
 final savedVideoPath = '${appDir.path}/$videoFileName';
 await File(_videoPath!).copy(savedVideoPath);

 setState(() {
 _recordedVideos.add(savedVideoPath);
 _isRecording = false;
 });

 _showSuccess('Saved: $videoFileName');
 }
 } catch (e) {
 _showError('Recording stop error: $e');
 }
 }

 Future<void> _switchCamera() async {
 if (_cameras.length <= 1) return;

 if (_isRecording) {
 await _stopAndSaveVideoRecording();
 _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 await _startVideoRecording();
 } else {
 _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 }
 }

 Future<void> _toggleRecording() async {
 if (_cameraController == null) return;

 if (_isRecording) {
 await _stopAndSaveVideoRecording();
 await _stopAndSaveAudioRecording();
 } else {
 _startVideoRecording();
 _startAudioRecording();
 setState(() => _recordedVideos.clear());
 }
 }
}