
Media (1)
-
Bee video in portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
Other articles (96)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not activated by default when MediaSPIP is initialized.
Once activated, a preconfiguration is set up automatically by MediaSPIP init, so the new feature is immediately operational. No configuration step is therefore required.
-
The farm's regular Cron tasks
1 December 2010
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance in the shared-hosting farm on a regular basis. Combined with a system Cron on the central site of the farm, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
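
As an illustration only (the URL is a placeholder, not from the article), the system Cron on the central site could be a crontab entry that requests the site every minute, so that the super Cron above gets triggered:

# Hypothetical crontab entry on the central site of the farm
* * * * * curl -s -o /dev/null https://central-site.example.org/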
On other sites (10320)
-
avformat/matroskaenc: Only write Cues at the front if space has been reserved
2 May 2020, by Andreas Rheinhardt

If the AVIOContext for output was unseekable when writing the header,
no space for Cues would be reserved even if the reserve_index_space
option was used (because it is reasonable to expect that one can't seek
back to the beginning to write the Cues anyway). But if the AVIOContext
was seekable when writing the trailer, it was presumed that space for
the Cues had been reserved whenever the reserve_index_space option
indicated so, even when it had not. As a result, the beginning of the
file would be overwritten.

This commit fixes this: if the reserve_index_space option had been used
and no space was reserved in advance because of unseekability when
writing the header, then no attempt to write Cues will be made when
writing the trailer; after all, writing them at the front is impossible
and writing them at the end is probably undesired.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
-
How do I know the time left when converting to an audio file, so I can display the remaining time to the user?
6 November 2020, by Kerols afifi
I got this code from GitHub, and it suits me better than the FFmpeg library, because that library only works from API level 24 upwards. But with this code I don't know exactly how to tell when the conversion is finished, nor how much time remains, so that I can display the remaining time to the user.


@SuppressLint("NewApi")
public void genVideoUsingMuxer(Context context, String srcPath, String dstPath,
        int startMs, int endMs, boolean useAudio, boolean useVideo, long time)
        throws IOException {
    // Set up MediaExtractor to read from the source.
    extractor = new MediaExtractor();
    extractor.setDataSource(srcPath);
    int trackCount = extractor.getTrackCount();

    // Set up MediaMuxer for the destination.
    mediaMuxer = new MediaMuxer(dstPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

    // Select the tracks and retrieve the max buffer size among the selected ones.
    HashMap<Integer, Integer> indexMap = new HashMap<>(trackCount);
    int bufferSize = -1;
    for (int i = 0; i < trackCount; i++) {
        MediaFormat format = extractor.getTrackFormat(i);
        String mime = format.getString(MediaFormat.KEY_MIME);
        boolean selectCurrentTrack = false;
        if (mime.startsWith("audio/") && useAudio) {
            selectCurrentTrack = true;
        } else if (mime.startsWith("video/") && useVideo) {
            selectCurrentTrack = true;
        }
        if (selectCurrentTrack) {
            extractor.selectTrack(i);
            int dstIndex = mediaMuxer.addTrack(format);
            indexMap.put(i, dstIndex);
            if (format.containsKey(MediaFormat.KEY_MAX_INPUT_SIZE)) {
                int newSize = format.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
                bufferSize = newSize > bufferSize ? newSize : bufferSize;
            }
        }
    }
    if (bufferSize < 0) {
        bufferSize = DEFAULT_BUFFER_SIZE;
    }

    // Copy the rotation metadata so the output keeps the source orientation.
    MediaMetadataRetriever retrieverSrc = new MediaMetadataRetriever();
    retrieverSrc.setDataSource(srcPath);
    String degreesString = retrieverSrc.extractMetadata(
            MediaMetadataRetriever.METADATA_KEY_VIDEO_ROTATION);
    if (degreesString != null) {
        int degrees = Integer.parseInt(degreesString);
        if (degrees >= 0) {
            mediaMuxer.setOrientationHint(degrees);
        }
    }

    // Seek the extractor to the requested start position (ms -> us).
    if (startMs > 0) {
        extractor.seekTo(startMs * 1000L, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
    }

    // Copy the samples from MediaExtractor to MediaMuxer: loop over each
    // sample and stop at the end of the source file, or once past the trim
    // end time.
    int offset = 0;
    int trackIndex = -1;
    ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
    mediaMuxer.start();
    while (true) {
        bufferInfo.offset = offset;
        bufferInfo.size = extractor.readSampleData(dstBuf, offset);
        if (bufferInfo.size < 0) {
            Log.d(TAG, "Saw input EOS.");
            bufferInfo.size = 0;
            break;
        } else {
            bufferInfo.presentationTimeUs = extractor.getSampleTime();
            if (endMs > 0 && bufferInfo.presentationTimeUs > (endMs * 1000L)) {
                Log.d(TAG, "The current sample is over the trim end time.");
                break;
            } else {
                bufferInfo.flags = extractor.getSampleFlags();
                trackIndex = extractor.getSampleTrackIndex();
                mediaMuxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
                extractor.advance();
            }
        }
    }

    mediaMuxer.stop();
    mediaMuxer.release();
}
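
The copy loop above already exposes what a progress display needs: each sample carries its presentation timestamp, and the total duration of the source can be read once up front. A minimal sketch of one way to surface that, assuming a hypothetical ProgressListener callback (the interface and the progressListener field are illustrative, not from the original code):

// Hypothetical callback interface, not part of the original post.
public interface ProgressListener {
    void onProgress(int percent, long etaMillis); // etaMillis < 0 means "not known yet"
    void onFinished();
}

// Before mediaMuxer.start(): read the source duration (reported in ms).
long durationUs = Long.parseLong(retrieverSrc.extractMetadata(
        MediaMetadataRetriever.METADATA_KEY_DURATION)) * 1000L;
long startedAtMs = SystemClock.elapsedRealtime();

// Inside the while loop, after bufferInfo.presentationTimeUs is set:
int percent = (int) (100 * bufferInfo.presentationTimeUs / durationUs);
long elapsedMs = SystemClock.elapsedRealtime() - startedAtMs;
// If percent% of the samples took elapsedMs, the rest takes roughly this long.
long etaMs = percent > 0 ? elapsedMs * (100 - percent) / percent : -1;
progressListener.onProgress(percent, etaMs);

// After mediaMuxer.release(): the conversion is complete.
progressListener.onFinished();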



-
Flutter record front facing camera at exact same time as playing video
27 November 2020, by xceed
I've been playing around with Flutter, trying to record the front-facing camera (using the camera plugin, https://pub.dev/packages/camera) while playing a video to the user (using the video_player plugin, https://pub.dev/packages/video_player).


Next I use ffmpeg to horizontally stack the two videos. This all works, but when I play back the final output the audio is slightly out of sync. I'm calling
Future.wait([cameraController.startVideoRecording(filePath), videoController.play()]);
but there is a slight delay before these tasks actually start. I don't even need them to fire at the exact same time (which I'm realising is probably impossible); instead, if I knew exactly when each task began, I could use the time difference to sync the audio using ffmpeg or similar.

I've tried adding a listener on the videoController to see when isPlaying first returns true, and also watching the output directory for when the recorded video appears on the filesystem:


var listener;
listener = () {
  if (videoController.value.isPlaying) {
    isPlaying = DateTime.now().microsecondsSinceEpoch;
    log('isPlaying ' + isPlaying.toString());
    // Detach only once playback has actually been observed.
    videoController.removeListener(listener);
  }
};
videoController.addListener(listener);

var watcher = DirectoryWatcher('${extDir.path}/');
watcher.events.listen((event) {
  if (event.type == ChangeType.ADD) {
    fileAdded = DateTime.now().microsecondsSinceEpoch;
    log('added ' + fileAdded.toString());
  }
});



Then likewise for checking if the camera is recording:


var listener;
listener = () {
  if (cameraController.value.isRecordingVideo) {
    log('isRecordingVideo ' + DateTime.now().microsecondsSinceEpoch.toString());
    //cameraController.removeListener(listener);
  }
};
cameraController.addListener(listener);



This results in (for example) the following order and time in microseconds for each event:


is playing: 1606478994797247
is recording: 1606478995492889 (695642 microseconds later)
added: 1606478995839676 (346787 microseconds later)



However, when I play back the video, the sync is off by roughly 0.152 seconds, which doesn't match the time differences reported above.


Does anyone have any idea how I could achieve near-perfect syncing when combining the two videos? Thanks.
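
Not part of the original question, but one way to compensate for a constant, measured offset is to delay one input with ffmpeg's -itsoffset before stacking; a sketch (file names are placeholders; the 0.152 s value, and which input receives the offset, depend on your measurement):

ffmpeg -i recording.mp4 -itsoffset 0.152 -i playback.mp4 \
       -filter_complex "[0:v][1:v]hstack=inputs=2[v]" \
       -map "[v]" -map 0:a? -c:v libx264 output.mp4

Here -itsoffset shifts the timestamps of the input that follows it, so the stacked output starts that input 0.152 s later relative to the other.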