
Other articles (73)
-
Requesting the creation of a channel
12 March 2010 — Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at the moment of registration; the second, after registration, by filling in a request form.
Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields which, first of all, give the administrators information about (...)
-
User profiles
12 April 2011 — Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can reach the profile editor from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
-
Configuring language support
15 November 2010 — Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section where support for new languages can be enabled.
Each newly added language can still be deactivated as long as no object has been created in that language; once one has, it is greyed out in the configuration and (...)
On other sites (10403)
-
Extract individual macroblock types and their corresponding motion vectors [closed]
14 May 2023, by Prajit Kumar — I need to build, for each macroblock in a video frame, a pair containing its type and its motion vector.


I extracted the motion vectors using the Python module mv-extractor.


For the macroblock types I used the following ffmpeg command:
ffmpeg -threads 1 -debug 'mb_type' -i file.h264 -f null -


The information from the ffmpeg command doesn't match the locations of the extracted motion vectors (macroblocks that are split into smaller partitions of size 8x16 or 16x8 do not match the block sizes reported in the motion-vector info). Also, the ffmpeg command for extracting macroblock types doesn't work properly on some videos.


Could you please suggest a more streamlined way of doing this task?
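
As a rough starting point for lining the two outputs up, here is a minimal sketch that reads the per-frame motion vectors with mv-extractor and maps each one onto the 16x16 macroblock grid. It assumes the VideoCap API and the (N, 10) motion-vector layout described in the mv-extractor README (source, w, h, src_x, src_y, dst_x, dst_y, motion_x, motion_y, motion_scale); the column order should be checked against the installed version.

# A minimal sketch, assuming mv-extractor's VideoCap API and its (N, 10)
# motion-vector layout: source, w, h, src_x, src_y, dst_x, dst_y,
# motion_x, motion_y, motion_scale.
from mvextractor.videocap import VideoCap

cap = VideoCap()
if not cap.open("file.h264"):
    raise RuntimeError("could not open video")

frame_idx = 0
while True:
    ret, frame, motion_vectors, frame_type, timestamp = cap.read()
    if not ret:
        break
    for mv in motion_vectors:
        w, h = int(mv[1]), int(mv[2])          # size of the predicted block (16x16, 16x8, 8x16, ...)
        dst_x, dst_y = int(mv[5]), int(mv[6])  # position of the block in the current frame
        # Approximate mapping onto the 16x16 macroblock grid, so several
        # partition-sized vectors can be grouped under one macroblock entry.
        mb_col, mb_row = dst_x // 16, dst_y // 16
        print(frame_idx, frame_type, (mb_row, mb_col), (w, h), int(mv[7]), int(mv[8]))
    frame_idx += 1

cap.release()

The mb_type debug log, which prints one symbol per macroblock for each frame, could then be parsed on the same (frame, row, column) key, with the partition-sized vectors grouped per macroblock rather than compared one-to-one against 16x16 entries.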


-
ImageJ / Fiji shows wrong number of frames in video (FFMPEG import)
28 April 2023, by locoric_polska — I am counting the number of animals in an area using Fiji. I import the video through the FFMPEG plug-in (the videos are mp4 with the mpeg-4 codec). However, I noticed that when I import the videos, Fiji loads the wrong number of frames, and I cannot understand why or how.


An example: I have a video shot at 25 fps that is 1582 s long. Doing the maths, the video should have 39550 frames in total (1582 × 25). When I open it with a computer-vision package in R, I see that the video does contain 39550 frames. However, when it is loaded in Fiji, the reported number of frames is 49511, so Fiji is adding 9961 frames to the video. This happens consistently with all videos recorded at 25 fps, while it does not occur with videos shot at 24 fps.


Curiously, I found that the ratio between the 'real' number of frames and the number of frames read by Fiji is consistently between 0.79 and 0.80. This makes me think that Fiji expects the video to be 30 fps and is (possibly) duplicating frames to fit that assumption.


Unfortunately, I discovered all this after finishing my analysis, while trying to merge this dataset with another one obtained through CV. The frame numbers do not match between the datasets, and I am not sure how to solve this.


Any help would be greatly appreciated!


One idea is to multiply all the frame numbers by 0.8 to bring them back to the original count (see the sketch below). This solution assumes that Fiji duplicates frames throughout the video in a consistent way.
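
A minimal sketch of that adjustment, assuming the observations sit in hypothetical CSV files with a "frame" column and that the duplication really is uniform; rather than a hard-coded 0.8, the scale factor is taken as the per-video ratio real_frames / fiji_frames (here 39550 / 49511 ≈ 0.7988).

# A minimal sketch, assuming hypothetical CSV files with a "frame" column and
# uniform frame duplication by Fiji.
import pandas as pd

REAL_FRAMES = 39550   # 1582 s * 25 fps, as counted by the R package
FIJI_FRAMES = 49511   # frame count reported by Fiji for the same video
SCALE = REAL_FRAMES / FIJI_FRAMES   # ~0.7988, matching the observed 0.79-0.80

fiji = pd.read_csv("fiji_counts.csv")            # hypothetical Fiji-based dataset
fiji["frame_orig"] = (fiji["frame"] * SCALE).round().astype(int)

cv = pd.read_csv("cv_dataset.csv")               # hypothetical CV dataset (real frame numbers)
merged = fiji.merge(cv, left_on="frame_orig", right_on="frame",
                    suffixes=("_fiji", "_cv"))
merged.to_csv("merged.csv", index=False)

Using the per-video ratio rather than a fixed 0.8 keeps the mapping exact for videos whose effective ratio drifts towards 0.79.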


-
How to generate a GIF thumbnail from a video without saving individual frames to disk ?
12 March 2023, by Rabie Daddi — I have a Node.js script that uses fluent-ffmpeg to generate a GIF thumbnail from the first 4 seconds of a video. Currently, the script saves individual frames to disk as PNG images and then reads them back in to generate the GIF. However, this creates a lot of unnecessary I/O.


Is there a way to modify the script to generate the GIF directly from the video frames, without saving them to disk first? Ideally, I would like to keep using FFmpeg for the processing.


Here's the current code for generating the frames and the GIF:


const ffmpeg = require('fluent-ffmpeg');

function generateFrames(videoUrl) {
  return new Promise((resolve, reject) => {
    ffmpeg(videoUrl)
      .setStartTime(0) // start at 0 seconds
      .setDuration(4) // cut 4 seconds
      .videoFilters('scale=if(gte(iw\\,ih)\\,min(600\\,iw)\\,-2):if(lt(iw\\,ih)\\,min(600\\,ih)\\,-2)')
      .fps(4)
      .output('output/img%04d.png') // output file pattern, %04d is a four-digit sequence number
      .on('end', () => {
        console.log('Frames generated successfully!');
        resolve();
      })
      .on('error', (err) => {
        console.log('Error generating frames: ' + err.message);
        reject(err);
      })
      .run();
  });
}

function generateGif() {
  const inputPattern = 'output/img%04d.png';
  const outputFilename = 'output/output2.gif';

  ffmpeg(inputPattern)
    .inputFPS(9)
    .output(outputFilename)
    .on('error', (err) => {
      console.log('Error generating GIF: ' + err.message);
    })
    .run();
}

const execute = async () => {
  await generateFrames('video.mp4');
  generateGif();
};

execute();



Any help or suggestions would be greatly appreciated. Thank you!
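
For reference, the intermediate PNG files can be avoided entirely by doing the trimming, frame-rate reduction, scaling and palette handling in one ffmpeg invocation. The sketch below shows that single command through Python's subprocess purely to illustrate the filter graph (file paths are hypothetical); fluent-ffmpeg's complexFilter() method should accept the same graph, so the frame step and the second pass disappear.

# A minimal sketch, assuming ffmpeg is on PATH; file paths are hypothetical.
# One invocation: take the first 4 s, drop to 4 fps, scale the longer edge to
# at most 600 px, then build and apply a palette, writing the GIF directly.
import subprocess

filter_graph = (
    "fps=4,"
    "scale=if(gte(iw\\,ih)\\,min(600\\,iw)\\,-2):if(lt(iw\\,ih)\\,min(600\\,ih)\\,-2),"
    "split[a][b];[a]palettegen[p];[b][p]paletteuse"
)

subprocess.run(
    [
        "ffmpeg", "-y",
        "-ss", "0", "-t", "4",           # first 4 seconds only
        "-i", "video.mp4",
        "-filter_complex", filter_graph,
        "output/output.gif",             # no PNG frames ever touch the disk
    ],
    check=True,
)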