
Other articles (29)
-
General document management
13 May 2011 — MédiaSPIP never modifies the original document that is uploaded.
For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original available for download in case the original document cannot be read in a web browser; and retrieving the metadata of the original document to describe the file textually.
The tables below explain what MédiaSPIP can do (...) -
The SPIPmotion queue
28 November 2010 — A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...) -
No talk of markets, cloud, etc.
10 April 2011 — The vocabulary used on this site tries to avoid any reference to the fashionable jargon that flourishes around web 2.0 and in the companies that live off it.
You are therefore invited to avoid using the terms "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing of creative works on the Internet and lets authors keep as much autonomy as possible.
No "Gold or Premium contract" is therefore planned, no (...)
On other sites (5141)
-
Merged Video Contains Inverted Clips After First Video Ends
3 February, by Nikunj Agrawal

I am working on a Flutter application that merges multiple videos using ffmpeg_kit_flutter. However, after merging, I notice that the second video (and any subsequent ones) appears inverted or rotated in the final output.

Issue Details:

- The first video appears normal.
- The videos can be recorded using both front and back cameras.
- The second (and later) videos are flipped or rotated upside down.
- This happens after merging using ffmpeg_kit_flutter.

Question:
How can I correctly merge multiple videos in Flutter without rotation issues? Is there a way to normalize video orientation before merging using ffmpeg_kit_flutter? (One possible approach is sketched after the code below.)

Any help would be appreciated! 🚀


Code:


import 'dart:io';
import 'dart:math';

import 'package:camera/camera.dart';
import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';
import 'package:flutter/material.dart';
import 'package:path_provider/path_provider.dart';
import 'package:permission_handler/permission_handler.dart';
import 'package:record/record.dart';
import 'package:videotest/video_player.dart';

class MergeVideoRecording extends StatefulWidget {
 const MergeVideoRecording({super.key});

 @override
 State<MergeVideoRecording> createState() => _MergeVideoRecordingState();
}

class _MergeVideoRecordingState extends State<MergeVideoRecording> {
 CameraController? _cameraController;
 final AudioRecorder _audioRecorder = AudioRecorder();

 bool _isRecording = false;
 String? _videoPath;
 String? _audioPath;
 List<CameraDescription> _cameras = [];
 int _currentCameraIndex = 0;
 final List<String> _recordedVideos = [];

 @override
 Widget build(BuildContext context) {
 return Scaffold(
 body: Column(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 _cameraController != null && _cameraController!.value.isInitialized
 ? SizedBox(
 width: MediaQuery.of(context).size.width * 0.4,
 height: MediaQuery.of(context).size.height * 0.3,
 child: Stack(
 children: [
 ClipRRect(
 borderRadius: BorderRadius.circular(16),
 child: SizedBox(
 width: MediaQuery.of(context).size.width * 0.4,
 height: MediaQuery.of(context).size.height * 0.3,
 child: Transform(
 alignment: Alignment.center,
 transform:
 _cameras[_currentCameraIndex].lensDirection ==
 CameraLensDirection.front
 ? Matrix4.rotationY(pi)
 : Matrix4.identity(),
 child: CameraPreview(_cameraController!),
 ),
 ),
 ),
 Align(
 alignment: Alignment.topRight,
 child: InkWell(
 onTap: _switchCamera,
 child: const Padding(
 padding: EdgeInsets.all(8.0),
 child: CircleAvatar(
 radius: 18,
 backgroundColor: Colors.white,
 child: Icon(
 Icons.flip_camera_android,
 color: Colors.black,
 ),
 ),
 ),
 ),
 ),
 ],
 ),
 )
 : const CircularProgressIndicator(),
 const SizedBox(height: 16),
 Row(
 mainAxisAlignment: MainAxisAlignment.center,
 children: [
 FloatingActionButton(
 heroTag: 'record_button',
 onPressed: _toggleRecording,
 child: Icon(
 _isRecording ? Icons.stop : Icons.video_camera_back,
 ),
 ),
 const SizedBox(
 width: 50,
 ),
 FloatingActionButton(
 heroTag: 'merge_button',
 onPressed: _mergeVideos,
 child: const Icon(
 Icons.merge,
 ),
 ),
 ],
 ),
 if (!_isRecording)
 ListView.builder(
 shrinkWrap: true,
 itemCount: _recordedVideos.length,
 itemBuilder: (context, index) => InkWell(
 onTap: () {
 Navigator.push(
 context,
 MaterialPageRoute(
 builder: (context) => VideoPlayerScreen(
 videoPath: _recordedVideos[index],
 ),
 ),
 );
 },
 child: ListTile(
 title: Text('Video ${index + 1}'),
 subtitle: Text('Path ${_recordedVideos[index]}'),
 trailing: const Icon(Icons.play_arrow),
 ),
 ),
 ),
 ],
 ),
 );
 }

 @override
 void dispose() {
 _cameraController?.dispose();
 _audioRecorder.dispose();
 super.dispose();
 }

 @override
 void initState() {
 super.initState();
 _initializeDevices();
 }

 Future<void> _initializeCameraController(CameraDescription camera) async {
 _cameraController = CameraController(
 camera,
 ResolutionPreset.high,
 enableAudio: true,
 imageFormatGroup: ImageFormatGroup.yuv420, // Add this line
 );

 await _cameraController!.initialize();
 await _cameraController!.setExposureMode(ExposureMode.auto);
 await _cameraController!.setFocusMode(FocusMode.auto);
 setState(() {});
 }

 Future<void> _initializeDevices() async {
 final cameraStatus = await Permission.camera.request();
 final micStatus = await Permission.microphone.request();

 if (!cameraStatus.isGranted || !micStatus.isGranted) {
 _showError('Camera and microphone permissions required');
 return;
 }

 _cameras = await availableCameras();
 if (_cameras.isNotEmpty) {
 final frontCameraIndex = _cameras.indexWhere(
 (camera) => camera.lensDirection == CameraLensDirection.front);
 _currentCameraIndex = frontCameraIndex != -1 ? frontCameraIndex : 0;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 }
 }

 // Merge video
 Future<void> _mergeVideos() async {
 if (_recordedVideos.isEmpty) {
 _showError('No videos to merge');
 return;
 }

 try {
 // Debug logging
 print('Starting merge process');
 print('Number of videos to merge: ${_recordedVideos.length}');
 for (var i = 0; i < _recordedVideos.length; i++) {
 final file = File(_recordedVideos[i]);
 final exists = await file.exists();
 final size = exists ? await file.length() : 0;
 print('Video $i: ${_recordedVideos[i]}');
 print('Exists: $exists, Size: $size bytes');
 }

 final Directory appDir = await getApplicationDocumentsDirectory();
 final String outputPath =
 '${appDir.path}/merged_${DateTime.now().millisecondsSinceEpoch}.mp4';
 final String listFilePath = '${appDir.path}/list.txt';

 print('Output path: $outputPath');
 print('List file path: $listFilePath');

 // Create and verify list file
 final listFile = File(listFilePath);
 final fileContent = _recordedVideos
 .map((path) => "file '${path.replaceAll("'", "'\\''")}'")
 .join('\n');
 await listFile.writeAsString(fileContent);

 print('List file content:');
 print(await listFile.readAsString());

 // Simpler FFmpeg command for testing
 final command = '''
 -f concat
 -safe 0
 -i "$listFilePath"
 -c copy
 -y
 "$outputPath"
 '''
 .trim()
 .replaceAll('\n', ' ');

 print('Executing FFmpeg command: $command');

 final session = await FFmpegKit.execute(command);
 final returnCode = await session.getReturnCode();
 final logs = await session.getAllLogsAsString();
 final failStackTrace = await session.getFailStackTrace();

 print('FFmpeg return code: ${returnCode?.getValue() ?? "null"}');
 print('FFmpeg logs: $logs');
 if (failStackTrace != null) {
 print('FFmpeg fail stack trace: $failStackTrace');
 }

 if (ReturnCode.isSuccess(returnCode)) {
 final outputFile = File(outputPath);
 final outputExists = await outputFile.exists();
 final outputSize = outputExists ? await outputFile.length() : 0;

 print('Output file exists: $outputExists');
 print('Output file size: $outputSize bytes');

 if (outputExists && outputSize > 0) {
 setState(() => _recordedVideos.add(outputPath));
 _showSuccess('Videos merged successfully');
 } else {
 _showError('Merged file is empty or not created');
 }
 } else {
 _showError('Failed to merge videos. Check logs for details.');
 }

 // Clean up
 try {
 await listFile.delete();
 print('List file cleaned up successfully');
 } catch (e) {
 print('Failed to delete list file: $e');
 }
 } catch (e, s) {
 print('Error during merge: $e');
 print('Stack trace: $s');
 _showError('Error merging videos: ${e.toString()}');
 }
 }

 void _showError(String message) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(content: Text(message), backgroundColor: Colors.red),
 );
 }

 void _showSuccess(String message) {
 ScaffoldMessenger.of(context).showSnackBar(
 SnackBar(content: Text(message), backgroundColor: Colors.green),
 );
 }

 Future<void> _startAudioRecording() async {
 try {
 final Directory tempDir = await getTemporaryDirectory();
 final audioPath = '${tempDir.path}/recording.wav';
 await _audioRecorder.start(const RecordConfig(), path: audioPath);
 setState(() => _isRecording = true);
 } catch (e) {
 _showError('Recording start error: $e');
 }
 }

 Future<void> _startVideoRecording() async {
 try {
 await _cameraController!.startVideoRecording();
 setState(() => _isRecording = true);
 } catch (e) {
 _showError('Recording start error: $e');
 }
 }

 Future<void> _stopAndSaveAudioRecording() async {
 _audioPath = await _audioRecorder.stop();
 if (_audioPath != null) {
 final Directory appDir = await getApplicationDocumentsDirectory();
 final timestamp = DateTime.now().millisecondsSinceEpoch;
 final String audioFileName = 'audio_$timestamp.wav';
 await File(_audioPath!).copy('${appDir.path}/$audioFileName');
 _showSuccess('Saved: $audioFileName');
 }
 }

 Future<void> _stopAndSaveVideoRecording() async {
 try {
 final video = await _cameraController!.stopVideoRecording();
 _videoPath = video.path;

 if (_videoPath != null) {
 final Directory appDir = await getApplicationDocumentsDirectory();
 final timestamp = DateTime.now().millisecondsSinceEpoch;
 final String videoFileName = 'video_$timestamp.mp4';
 final savedVideoPath = '${appDir.path}/$videoFileName';
 await File(_videoPath!).copy(savedVideoPath);

 setState(() {
 _recordedVideos.add(savedVideoPath);
 _isRecording = false;
 });

 _showSuccess('Saved: $videoFileName');
 }
 } catch (e) {
 _showError('Recording stop error: $e');
 }
 }

 Future<void> _switchCamera() async {
 if (_cameras.length <= 1) return;

 if (_isRecording) {
 await _stopAndSaveVideoRecording();
 _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 await _startVideoRecording();
 } else {
 _currentCameraIndex = (_currentCameraIndex + 1) % _cameras.length;
 await _initializeCameraController(_cameras[_currentCameraIndex]);
 }
 }

 Future<void> _toggleRecording() async {
 if (_cameraController == null) return;

 if (_isRecording) {
 await _stopAndSaveVideoRecording();
 await _stopAndSaveAudioRecording();
 } else {
 _startVideoRecording();
 _startAudioRecording();
 setState(() => _recordedVideos.clear());
 }
 }
}
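One possible direction rather than a confirmed fix: with the concat demuxer and -c copy, every clip keeps its own rotation metadata, but the merged file ends up with a single video track whose display matrix typically comes from the first clip, so clips recorded in a different orientation (front vs. back camera, for example) can play back flipped. Below is a minimal sketch of normalizing each clip by re-encoding it before concatenation, so the rotation is baked into the pixels and the metadata is cleared. It reuses the FFmpegKit and ReturnCode calls already shown above, but the helper name _normalizeClips, the output file names, and the encoder choice are assumptions (libx264/aac requires an ffmpeg_kit_flutter build that ships those encoders, such as a full-gpl variant), not tested code.

Future<List<String>> _normalizeClips(List<String> inputs, Directory dir) async {
  // Hypothetical helper: re-encode each recorded clip so ffmpeg applies and
  // then clears its rotation metadata, giving all clips the same orientation.
  final normalized = <String>[];
  for (var i = 0; i < inputs.length; i++) {
    final outPath = '${dir.path}/normalized_$i.mp4';
    // Re-encoding (instead of -c copy) bakes the rotation into the pixels.
    final session = await FFmpegKit.execute(
        '-y -i "${inputs[i]}" -c:v libx264 -preset veryfast -c:a aac "$outPath"');
    if (!ReturnCode.isSuccess(await session.getReturnCode())) {
      throw Exception('Failed to normalize clip $i');
    }
    normalized.add(outPath);
  }
  return normalized;
}

In _mergeVideos the list.txt entries would then point at the normalized paths instead of _recordedVideos; the concat step itself can stay as -c copy, assuming both cameras record at the same resolution (otherwise a scale/pad filter would also be needed).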


-
How do I get ffmpeg.exe to function on a cloud hosting site like Discloud?
16 September 2023, by Sc4rl3ttfir3

So to keep my Discord bot running 24/7, I use the hosting site Discloud. I just added music-playing capabilities to my bot, and they work fine when I run the bot locally, because it can access the ffmpeg .exe file through the file path I have hard-coded in the play command.


Code:


@bot.command(name='play', help='To play song')
async def play(ctx, url):
    voice_channel = ctx.author.voice.channel

    if voice_channel is None:
        return await ctx.send('You are not connected to a voice channel.')

    async with ctx.typing():
        filename = await YTDLSource.from_url(url, loop=bot.loop)

        if ctx.voice_client is not None:
            ctx.voice_client.stop()

        source = discord.FFmpegPCMAudio(
            executable="C:\\Users\\Scarlett Rivera\\Documents\\ffmpeg\\ffmpeg.exe",
            source=filename
        )

        voice_channel.play(source)

    await ctx.send('**Now playing:** {}'.format(filename))



However, whenever I try to use the play command while the bot is running through Discloud, it gives me this error in the logs:


Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/discord/ext/commands/core.py", line 235, in wrapped
ret = await coro(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user_686208406444572830/Sc4rl3ttb0t.py", line 351, in play
source = discord.FFmpegPCMAudio(
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/discord/player.py", line 290, in __init__
super().__init__(source, executable=executable, args=args, **subprocess_kwargs)
File "/usr/local/lib/python3.11/site-packages/discord/player.py", line 166, in __init__
self._process = self._spawn_process(args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/discord/player.py", line 183, in _spawn_process
raise ClientException(executable + ' was not found.') from None
discord.errors.ClientException: C:\Users\Scarlett Rivera\Documents\ffmpeg\ffmpeg.exe was not found.
 
The above exception was the direct cause of the following exception:
 
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/discord/ext/commands/bot.py", line 1350, in invoke
await ctx.command.invoke(ctx)
File "/usr/local/lib/python3.11/site-packages/discord/ext/commands/core.py", line 1029, in invoke
await injected(*ctx.args, **ctx.kwargs) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/discord/ext/commands/core.py", line 244, in wrapped
raise CommandInvokeError(exc) from exc
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: ClientException: C:\Users\Scarlett Rivera\Documents\ffmpeg\ffmpeg.exe was not found.



I've tried reaching out to another programmer who helped me yesterday with the bot's main functionality, but they weren't able to come up with anything because they don't have any experience with Discloud. If anyone at all could help me, that would be extremely appreciated.
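Not a confirmed answer, but the traceback points at the immediate problem: Discloud runs the bot on a Linux container, where the hard-coded Windows path to ffmpeg.exe cannot exist. A minimal sketch, assuming ffmpeg is installed on the host and reachable on the PATH (how it actually gets installed depends on Discloud's configuration, which I can't speak to): resolve the executable by name instead of by an absolute Windows path. The constant FFMPEG_EXECUTABLE and the helper make_audio_source are illustrative names, not part of discord.py.

import shutil

import discord

# Resolve ffmpeg by name: on Windows this still finds ffmpeg.exe if it is on
# the PATH, and on the Linux host it picks up whatever ffmpeg the image provides.
FFMPEG_EXECUTABLE = shutil.which("ffmpeg") or "ffmpeg"

def make_audio_source(filename: str) -> discord.FFmpegPCMAudio:
    # Same FFmpegPCMAudio call as in the play command above, minus the
    # absolute Windows path.
    return discord.FFmpegPCMAudio(executable=FFMPEG_EXECUTABLE, source=filename)

Inside the play command, source = make_audio_source(filename) (or simply passing executable=FFMPEG_EXECUTABLE in the existing call) would replace the hard-coded path.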


-
Xuggle - Concatenate two videos - Error - java.lang.RuntimeException: error -1094995529 decoding audio
1 April 2013, by user2232357

I am using the Xuggle API to concatenate two MPEG videos (with audio embedded in the MPEGs). I am referring to https://code.google.com/p/xuggle/source/browse/trunk/java/xuggle-xuggler/src/com/xuggle/mediatool/demos/ConcatenateAudioAndVideo.java?r=929 (both my inputs and the output are MPEGs). I am getting the error below:
14:06:50.139 [main] ERROR org.ffmpeg - [mp2 @ 0x7fd54693d000] incomplete frame
java.lang.RuntimeException: error -1094995529 decoding audio
at com.xuggle.mediatool.MediaReader.decodeAudio(MediaReader.java:549)
at com.xuggle.mediatool.MediaReader.readPacket(MediaReader.java:469)
at com.tav.factory.video.XuggleMediaCreator.concatenateAllVideos(XuggleMediaCreator.java:271)
at com.tav.factory.video.XuggleMediaCreator.main(XuggleMediaCreator.java:446)

Can anyone help me with this??? Thanks in advance.
Here is the complete code.
public String concatenateAllVideos(ArrayList<tavtexttoavrequest> list) {
    String finalPath = "";
    String sourceUrl1 = "/Users/SSID/WS/SampleTTS/page2/AV_TAVImage2.mpeg";
    String sourceUrl2 = "/Users/SSID/WS/SampleTTS/page2/AV_TAVImage3.mpeg";
    String destinationUrl = "/Users/SSID/WS/SampleTTS/page2/z_AV_TAVImage_Final23.mpeg";

    out.printf("transcode %s + %s -> %s\n", sourceUrl1, sourceUrl2, destinationUrl);

    //////////////////////////////////////////////////////////////////////
    //                                                                  //
    // NOTE: be sure that the audio and video parameters match those of //
    // your input media                                                 //
    //                                                                  //
    //////////////////////////////////////////////////////////////////////

    // video parameters
    final int videoStreamIndex = 0;
    final int videoStreamId = 0;
    final int width = 400;
    final int height = 400;

    // audio parameters
    final int audioStreamIndex = 1;
    final int audioStreamId = 0;
    final int channelCount = 1;
    final int sampleRate = 16000; // Hz 16000 44100;

    // create the first media reader
    IMediaReader reader1 = ToolFactory.makeReader(sourceUrl1);

    // create the second media reader
    IMediaReader reader2 = ToolFactory.makeReader(sourceUrl2);

    // create the media concatenator
    MediaConcatenator concatenator = new MediaConcatenator(audioStreamIndex, videoStreamIndex);

    // concatenator listens to both readers
    reader1.addListener(concatenator);
    reader2.addListener(concatenator);

    // create the media writer which listens to the concatenator
    IMediaWriter writer = ToolFactory.makeWriter(destinationUrl);
    concatenator.addListener(writer);

    // add the video stream
    writer.addVideoStream(videoStreamIndex, videoStreamId, width, height);

    // add the audio stream
    writer.addAudioStream(audioStreamIndex, audioStreamId, channelCount, sampleRate);

    // read packets from the first source file until done
    try {
        while (reader1.readPacket() == null)
            ;
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }

    // read packets from the second source file until done
    try {
        while (reader2.readPacket() == null)
            ;
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }

    // close the writer
    writer.close();

    return finalPath;
}

static class MediaConcatenator extends MediaToolAdapter
{
    // the current offset
    private long mOffset = 0;

    // the next video timestamp
    private long mNextVideo = 0;

    // the next audio timestamp
    private long mNextAudio = 0;

    // the index of the audio stream
    private final int mAudoStreamIndex;

    // the index of the video stream
    private final int mVideoStreamIndex;

    /**
     * Create a concatenator.
     *
     * @param audioStreamIndex index of audio stream
     * @param videoStreamIndex index of video stream
     */
    public MediaConcatenator(int audioStreamIndex, int videoStreamIndex)
    {
        mAudoStreamIndex = audioStreamIndex;
        mVideoStreamIndex = videoStreamIndex;
    }

    public void onAudioSamples(IAudioSamplesEvent event)
    {
        IAudioSamples samples = event.getAudioSamples();

        // set the new time stamp to the original plus the offset established
        // for this media file
        long newTimeStamp = samples.getTimeStamp() + mOffset;

        // keep track of predicted time of the next audio samples, if the end
        // of the media file is encountered, then the offset will be adjusted
        // to this time.
        mNextAudio = samples.getNextPts();

        // set the new timestamp on audio samples
        samples.setTimeStamp(newTimeStamp);

        // create a new audio samples event with the one true audio stream
        // index
        super.onAudioSamples(new AudioSamplesEvent(this, samples, mAudoStreamIndex));
    }

    public void onVideoPicture(IVideoPictureEvent event)
    {
        IVideoPicture picture = event.getMediaData();
        long originalTimeStamp = picture.getTimeStamp();

        // set the new time stamp to the original plus the offset established
        // for this media file
        long newTimeStamp = originalTimeStamp + mOffset;

        // keep track of predicted time of the next video picture, if the end
        // of the media file is encountered, then the offset will be adjusted
        // to this time.
        //
        // You'll note in the audio samples listener above we used
        // a method called getNextPts(). Video pictures don't have
        // a similar method because frame-rates can be variable, so
        // we don't know. The minimum thing we do know though (since
        // all media containers require media to have monotonically
        // increasing time stamps), is that the next video timestamp
        // should be at least one tick ahead. So, we fake it.
        mNextVideo = originalTimeStamp + 1;

        // set the new timestamp on video samples
        picture.setTimeStamp(newTimeStamp);

        // create a new video picture event with the one true video stream
        // index
        super.onVideoPicture(new VideoPictureEvent(this, picture, mVideoStreamIndex));
    }

    public void onClose(ICloseEvent event)
    {
        // update the offset by the larger of the next expected audio or video
        // frame time
        mOffset = Math.max(mNextVideo, mNextAudio);

        if (mNextAudio < mNextVideo)
        {
            // In this case we know that there is more video in the
            // last file that we read than audio. Technically you
            // should pad the audio in the output file with enough
            // samples to fill that gap, as many media players (e.g.
            // Quicktime, Microsoft Media Player, MPlayer) actually
            // ignore audio time stamps and just play audio sequentially.
            // If you don't pad, in those players it may look like
            // audio and video is getting out of sync.

            // However kiddies, this is demo code, so that code
            // is left as an exercise for the readers. As a hint,
            // see the IAudioSamples.defaultPtsToSamples(...) methods.
        }
    }

    public void onAddStream(IAddStreamEvent event)
    {
        // overridden to ensure that add stream events are not passed down
        // the tool chain to the writer, which could cause problems
    }

    public void onOpen(IOpenEvent event)
    {
        // overridden to ensure that open events are not passed down the tool
        // chain to the writer, which could cause problems
    }

    public void onOpenCoder(IOpenCoderEvent event)
    {
        // overridden to ensure that open coder events are not passed down the
        // tool chain to the writer, which could cause problems
    }

    public void onCloseCoder(ICloseCoderEvent event)
    {
        // overridden to ensure that close coder events are not passed down the
        // tool chain to the writer, which could cause problems
    }
}
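A side note rather than an answer to the decode error: the NOTE banner in the code above warns that the hard-coded audio and video parameters (16000 Hz mono, 400x400) must match the input media. Whether or not that is related to the "incomplete frame" error, a small sketch for inspecting what the input files actually contain is below, using the lower-level Xuggler IContainer API; the class and method names are illustrative, not part of the question's code.

import com.xuggle.xuggler.ICodec;
import com.xuggle.xuggler.IContainer;
import com.xuggle.xuggler.IStreamCoder;

public class StreamInfo {
    // Print the real codec parameters of each stream in a file, so the
    // writer's hard-coded stream parameters can be configured to match.
    public static void printStreamParameters(String url) {
        IContainer container = IContainer.make();
        if (container.open(url, IContainer.Type.READ, null) < 0) {
            throw new RuntimeException("could not open " + url);
        }
        for (int i = 0; i < container.getNumStreams(); i++) {
            IStreamCoder coder = container.getStream(i).getStreamCoder();
            if (coder.getCodecType() == ICodec.Type.CODEC_TYPE_AUDIO) {
                System.out.printf("stream %d: audio, %d Hz, %d channel(s)%n",
                        i, coder.getSampleRate(), coder.getChannels());
            } else if (coder.getCodecType() == ICodec.Type.CODEC_TYPE_VIDEO) {
                System.out.printf("stream %d: video, %dx%d%n",
                        i, coder.getWidth(), coder.getHeight());
            }
        }
        container.close();
    }
}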