
Media (1)
-
1 000 000 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (52)
-
Support for all types of media
10 April 2011. Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other formats (Open Office, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
Customizing categories
21 June 2013. Category creation form
For those who know SPIP well, a category can be thought of as a "rubrique" (section).
For a document of type category, the fields offered by default are: Text
This form can be modified under:
Administration > Form mask configuration.
For a document of type media, the fields not displayed by default are: Short description
It is also in this configuration section that one can specify the (...)
On other sites (8072)
-
FFmpeg dnn_processing with tensorflow backend: difficulties applying a filter on an image
9 April 2023, by ArnoBen
I am trying to perform video segmentation for background blurring, similar to Google Meet or Zoom, using FFmpeg, and I'm not very familiar with it.


Google's MediaPipe model is available as a TensorFlow .pb file here (using download_hhxwww.sh).

I can load it in Python and it works as expected, though I do need to format the input frames: scaling to the model's input dimensions, adding a batch dimension, and dividing the pixel values by 255 to get a 0-1 range.
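The preprocessing steps just described can be sketched in Python with NumPy. This is a minimal sketch, not the asker's actual script: it assumes the frame has already been decoded to an HxWx3 RGB uint8 array and resized to the model input resolution (96x160 here, matching the scale=160:96 step in the ffmpeg command).

```python
import numpy as np

def preprocess(frame_rgb: np.ndarray) -> np.ndarray:
    """Prepare one RGB frame for the segmentation model.

    Assumes frame_rgb is already resized to the model's input
    resolution (96x160 in this sketch) with uint8 values in 0-255.
    """
    x = frame_rgb.astype(np.float32) / 255.0   # scale pixel values to 0-1
    x = np.expand_dims(x, axis=0)              # add the batch dimension
    return x

# A dummy 96x160 RGB frame stands in for a decoded video frame.
frame = np.zeros((96, 160, 3), dtype=np.uint8)
batch = preprocess(frame)
print(batch.shape)  # (1, 96, 160, 3)
```

Whether ffmpeg's TensorFlow backend performs the same normalization internally is exactly the open question above.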


FFmpeg has a filter that can use TensorFlow models thanks to dnn_processing, but I'm wondering about these preprocessing steps. I tried to read the dnn_backend_tf.c file in FFmpeg's GitHub repo, but C is not my forte. I'm guessing it adds a batch dimension somewhere, otherwise the model wouldn't run, but I'm not sure about the rest.


Here is my current command:


ffmpeg \
 -i $FILE -filter_complex \
 "[0:v]scale=160:96,format=rgb24,dnn_processing=dnn_backend=tensorflow:model=$MODEL:input=input_1:output=segment[masks];[masks]extractplanes=2[mask]" \
 -map "[mask]" output.png



- I'm already applying scaling to match the input dimensions.
- I wrote [masks]extractplanes=2[mask] because the model outputs an HxWx2 tensor (background mask and foreground mask) and I want to keep the foreground mask.





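Outside ffmpeg, that plane selection is easy to check in Python. A minimal sketch, assuming the model output is an HxWx2 float array with the foreground mask in the second channel (the plane the extractplanes step is meant to keep):

```python
import numpy as np

def foreground_mask(output_hxwx2: np.ndarray) -> np.ndarray:
    """Keep the foreground plane of an HxWx2 model output and
    convert it to an 8-bit grayscale image, mirroring what the
    [masks]extractplanes=2[mask] step is intended to do."""
    fg = output_hxwx2[:, :, 1]                       # second plane = foreground
    return np.clip(fg * 255.0, 0, 255).astype(np.uint8)

# Dummy output: background plane all ones, foreground plane all zeros.
dummy = np.stack([np.ones((96, 160)), np.zeros((96, 160))], axis=-1)
mask = foreground_mask(dummy)
print(mask.shape, mask.dtype)  # (96, 160) uint8
```

Comparing this reference output against ffmpeg's mask is one way to tell whether the discrepancy comes from preprocessing or from the plane extraction.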

The result I get with this command is the following (input-output):

(images not included)

I'm not sure how to interpret the problems in this output. In Python I can easily get a nice grayscale output:

(image not included)

I'm trying to obtain something similar with FFmpeg.


Any suggestions or insights to obtain a correct output with FFmpeg would be greatly appreciated.


PS: If I try to apply this to a video file, it hits a segmentation fault somewhere before producing any output, so I'm sticking with testing on an image for now.


-
ffmpeg - prop flicker removal works but ffmpeg insists on changing framerate [closed]
28 April 2023, by Mutley Eugenius
I have been researching how to use FFmpeg to do some fantastic wizardry on my cockpit videos to remove the dramatically distracting propeller, but after many hours I cannot find a way to stop the program from stripping more than half the frames. The source video file is 1920x1080 @ 60 fps and I believe I have my offset right (1/60 = 0.0166), but the output video always comes out at 25 fps. I can't see what element in the code is telling it to do that, or how to change it.


Here's my file:


https://drive.google.com/file/d/1VPttH4PHgUr0uzmsbl4Bcyg5_gAixblA/view?usp=sharing


Here's my code:


ffmpeg -i G:\PropFlicker.mp4 -itsoffset 0.01666 -i G:\PropFlicker.mp4 -filter_complex color=0x202020:s=1920x1080:d=24[mask];[mask][0][1]maskedmax=[out] -map [out] -crf 10 G:\PropNoFlicker.mp4



I have tried adding -r 60, which gives me a 60 fps output file, but the video is still being processed at 25 fps and duplicated frames are simply added after processing to pad it out to 60. The render shows more dropped frames than rendered frames, at about a 3 to 2 ratio, which matches the drop from 60 to 25. I lose my shot fluidity and I get flickery artifacts.

What am I missing to get the flicker-removal processing done at 60 fps and the output file rendered at 60 fps with the original smoothness?


I'm also not sure what the :d=24 is doing. I tried d=60, but it made no difference.
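One thing worth checking: in the color source, d=24 is the duration in seconds, and ffmpeg's color lavfi source defaults to 25 fps when no rate is given, which is a plausible origin of the unwanted 25 fps (an assumption to verify, not a confirmed diagnosis). A minimal Python sketch that assembles a variant of the filtergraph with an explicit rate on the color source (it also drops the stray "=" after maskedmax):

```python
def build_filtergraph(width: int, height: int, fps: int, duration: int) -> str:
    """Assemble the maskedmax filtergraph with an explicit r= rate on
    the color source, since lavfi's color source defaults to 25 fps."""
    color = f"color=0x202020:s={width}x{height}:r={fps}:d={duration}[mask]"
    merge = "[mask][0][1]maskedmax[out]"
    return f"{color};{merge}"

graph = build_filtergraph(1920, 1080, 60, 24)
print(graph)
```

The resulting string would be passed to -filter_complex in place of the one in the command above.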


I copied the original code that I found in this link:




-
Issue with streaming in realtime to HTML5 video player
24 July 2023, by ImaSquareBTW
OK, so I created a project which should take an MKV file, convert it to a suitable format in real time, and play it in the HTML5 video player as it transcodes; playback should start as soon as a small segment is ready. Unfortunately, it doesn't seem to work. Here's my code if you're curious; help would be very much appreciated.




 
 
 
 
 <video controls="controls">
  <source src="output.mp4" type="video/mp4">
 </video>
 <script src="https://cdnjs.cloudflare.com/ajax/libs/ffmpeg/0.11.6/ffmpeg.min.js" integrity="sha512-91IRkhfv1tLVYAdH5KTV+KntIZP/7VzQ9E/qbihXFSj0igeacWWB7bQrdiuaJVMXlCVREL4Z5r+3C4yagAlwEw==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
 <script src='http://stackoverflow.com/feeds/tag/player.js'></script>

 



And my JavaScript:


let mediaSource;
let sourceBuffer;
let isTranscoding = false;
let bufferedSeconds = 0;

async function setupMediaSource() {
 mediaSource = new MediaSource();
 const videoPlayer = document.getElementById('video-player');
 videoPlayer.src = URL.createObjectURL(mediaSource);
 await new Promise(resolve => {
 mediaSource.addEventListener('sourceopen', () => {
 sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001E, mp4a.40.2"'); // Match the transcoded format
 sourceBuffer.addEventListener('updateend', () => {
 if (!sourceBuffer.updating && mediaSource.readyState === 'open') {
 mediaSource.endOfStream();
 resolve();
 }
 });
 });
 });
}

async function transcodeVideo() {
 const ffmpeg = createFFmpeg({ log: true });
 const videoPlayer = document.getElementById('video-player');

 await ffmpeg.load();
 const transcodeCommand = ['-i', 'The Legend of Old Gregg S02E05.mkv', '-c:v', 'libx264', '-preset', 'ultrafast', '-c:a', 'aac', '-strict', 'experimental', '-f', 'mp4', '-movflags', 'frag_keyframe+empty_moov', 'output.mp4'];

 const videoUrl = 'The Legend of Old Gregg S02E05.mkv'; // The URL of the original video file

 let lastBufferedSeconds = 0;
 let currentTime = 0;
 while (currentTime < videoPlayer.duration) {
 if (!isTranscoding && sourceBuffer.buffered.length > 0 && sourceBuffer.buffered.end(0) - currentTime < 5) {
 isTranscoding = true;
 const start = sourceBuffer.buffered.end(0);
 const end = Math.min(videoPlayer.duration, start + 10);
 await fetchAndTranscode(videoUrl, start, end, ffmpeg, transcodeCommand);
 isTranscoding = false;
 currentTime = end;
 bufferedSeconds += (end - start);
 const transcodingSpeed = bufferedSeconds / (currentTime);
 lastBufferedSeconds = bufferedSeconds;
 console.log(`Transcoded ${end - start} seconds of video. Buffered: ${bufferedSeconds.toFixed(2)}s (${transcodingSpeed.toFixed(2)}x speed)`);
 }
 await new Promise(resolve => setTimeout(resolve, 500));
 }

 await ffmpeg.exit();
}

async function fetchAndTranscode(url, start, end, ffmpeg, transcodeCommand) {
 const response = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
 const data = new Uint8Array(await response.arrayBuffer());
 ffmpeg.FS('writeFile', 'input.mkv', data); // Use 'input.mkv' as a temporary file
 await ffmpeg.run(...transcodeCommand);
 const outputData = ffmpeg.FS('readFile', 'output.mp4');
 appendSegmentToBuffer(outputData);
}

function appendSegmentToBuffer(segment) {
 if (!sourceBuffer.updating) {
 sourceBuffer.appendBuffer(segment);
 } else {
 sourceBuffer.addEventListener('updateend', () => {
 sourceBuffer.appendBuffer(segment);
 });
 }
}

async function createFFmpeg(options) {
 const { createFFmpeg } = FFmpeg;
 const ffmpeg = createFFmpeg(options);
 await ffmpeg.load();
 return ffmpeg;
}

(async () => {
 await setupMediaSource();
 await transcodeVideo();
})();