
Other articles (93)
-
Keeping control of your media in your hands
13 April 2011. The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
-
Emballe médias: what is it for?
4 February 2011. This plugin is designed to manage sites for putting documents of all types online.
It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only a single document can be linked to a so-called "media" article;
-
Submit bugs and patches
13 April 2011. Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; a link to the site / page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
On other sites (5045)
-
Is there any way to change file FPS in the browser with JavaScript, or prepare a WAV converter for 60FPS videos?
16 November 2020, by SZtyro. I'm making a web application which stores short audio files cut from large video files. The user uploads an .mp4 file and chooses the sound length, and here's a little trick: cutting the audio can only be done in the backend (correct me if I'm wrong), and sending 700MB of data is not a good option, so I use the code below to decode the audio data from the .mp4 and then send it along with start and stop parameters. The backend (Node.js) uses FFmpeg to cut the audio and save it.


This part works, but I realised that the decoded audio from a 60FPS video doesn't sound good (not terrible, but totally useless in my app). My goal is to avoid third-party apps, especially desktop ones (like Audacity), and let the user cut the relevant part of the audio from any mp4 video. Is there any way to convert a 60FPS video to a 30FPS video (ArrayBuffer) in the browser and then decode the audio?


fileInput.onchange = event => {
  this.file = event.target["files"][0];
  // .mp4 file
  this.fileURL = URL.createObjectURL(this.file)

  let baseAudioContext = new AudioContext();
  this.file.arrayBuffer().then(buff => {

    baseAudioContext.decodeAudioData(buff,
      success => {
        console.log(success)
        this.bufferToWave(success, 0, success.length);
      },
      err => console.log(err));
  })
}

bufferToWave(abuffer, offset, len) {

  var numOfChan = abuffer.numberOfChannels,
    length = len * numOfChan * 2 + 44,
    buffer = new ArrayBuffer(length),
    view = new DataView(buffer),
    channels = [], i, sample,
    pos = 0;

  // write WAVE header
  setUint32(0x46464952); // "RIFF"
  setUint32(length - 8); // file length - 8
  setUint32(0x45564157); // "WAVE"

  setUint32(0x20746d66); // "fmt " chunk
  setUint32(16); // length = 16
  setUint16(1); // PCM (uncompressed)
  setUint16(numOfChan);
  setUint32(abuffer.sampleRate);
  setUint32(abuffer.sampleRate * 2 * numOfChan); // avg. bytes/sec
  setUint16(numOfChan * 2); // block-align
  setUint16(16); // 16-bit (hardcoded in this demo)

  setUint32(0x61746164); // "data" - chunk
  setUint32(length - pos - 4); // chunk length

  // write interleaved data
  for (i = 0; i < abuffer.numberOfChannels; i++)
    channels.push(abuffer.getChannelData(i));

  while (pos < length) {
    for (i = 0; i < numOfChan; i++) { // interleave channels
      sample = Math.max(-1, Math.min(1, channels[i][offset])); // clamp
      sample = (0.5 + sample < 0 ? sample * 32768 : sample * 32767) | 0; // scale to 16-bit signed int
      view.setInt16(pos, sample, true); // update data chunk
      pos += 2;
    }
    offset++ // next source sample
  }

  // create Blob
  //return (URL || webkitURL).createObjectURL(new Blob([buffer], { type: "audio/wav" }));
  var u = (URL || webkitURL).createObjectURL(new Blob([buffer], { type: "audio/wav" }));

  // temporary part
  // downloading file to check quality
  // in this part sound is already broken, no need to show backend code
  const a = document.createElement('a');
  a.style.display = 'none';
  a.href = u;
  a.download = name;
  document.body.appendChild(a);
  a.click();

  function setUint16(data) {
    view.setUint16(pos, data, true);
    pos += 2;
  }

  function setUint32(data) {
    view.setUint32(pos, data, true);
    pos += 4;
  }
}
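For reference, the backend step described above (Node.js calling FFmpeg to cut the uploaded audio) needs nothing more than a trim. A minimal sketch, assuming the upload arrives as the WAV produced by bufferToWave and that START and END are the start/stop parameters in seconds; the file names and variable names here are placeholders, not part of the original setup:

ffmpeg -i uploaded.wav -ss "$START" -to "$END" clip.wav

Since WAV is uncompressed PCM, this trim is effectively lossless. The same kind of command can also be run directly against the original .mp4 (with -vn -c:a copy) to extract and cut the audio track without any decoding in the browser, though that would require uploading the full file.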



-
FFMPEG - subtitles not showing for the full duration of video
26 June 2023, by Caio Maia. I could not achieve the following:


- Have images loaded from a text file to create a slideshow.
- Have background music with volume control.
- Have my voice.mp3 over the bg music.
- Have subtitles in the ass format.
- Have a text shown with drawtext for the full duration of the video.
- And all of the above in only one command, if possible.

The command I tried is:


ffmpeg.exe -f concat -i images.txt -i bg_music.m4a -i voice.mp3 -filter_complex "[0:v]drawtext=fontfile='fonte.TTF':fontsize=20:fontcolor=white:text='Imagens da internet':x=w-tw-10:y=h-th-10,[0]overlay=10:10,ass=subtitles.txt[out],[1]volume=0.3[a1];[2]volume=2[a2];[a1][a2]amix=inputs=2:duration=shortest[aud]" -map "[out]" -map "[aud]":a -pix_fmt yuv420p -c:v libx264 -c:s mov_text -r 30 -y out.mp4



It works, but the subtitles only show after the first image of the slideshow appears.


The content of images.txt is:


file 'image1.png'
duration 20
file 'image2.png'
duration 5
file 'image3.png'
duration 5



The content of subtitles.txt is:


Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,0:00:01.00,0:00:06.00,Default,,0,0,0,,Subscribe!
Dialogue: 0,0:00:07.00,0:00:16.00,Default2,,0,0,0,,Like!
Dialogue: 0,0:00:17.00,0:00:26.00,Default,,0,0,0,,Share!



The problem is that only the "Share!" text is shown.
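One possible explanation worth checking (an assumption based on how the concat demuxer behaves, not a verified diagnosis of this exact command): with duration entries, the concat demuxer emits only one video frame per image, and the output option -r 30 duplicates frames only after the filter graph has run, so the ass filter has almost no frames on which to draw the first two dialogue lines. Forcing a constant frame rate inside the filter chain, before the subtitle filter, gives it frames during every dialogue window. A sketch along those lines, reusing the file names from the question and leaving out the single-input overlay from the original command (overlay needs two inputs):

ffmpeg -f concat -i images.txt -i bg_music.m4a -i voice.mp3 -filter_complex "[0:v]fps=30,drawtext=fontfile='fonte.TTF':fontsize=20:fontcolor=white:text='Imagens da internet':x=w-tw-10:y=h-th-10,ass=subtitles.txt[out];[1]volume=0.3[a1];[2]volume=2[a2];[a1][a2]amix=inputs=2:duration=shortest[aud]" -map "[out]" -map "[aud]" -pix_fmt yuv420p -c:v libx264 -y out.mp4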


-
How to run ffmpeg with hardware encoding when using -filter_complex to hardcode subtitles? [closed]
28 April 2024, by user2006141. I'm converting MKV files and hardcoding the subtitles into MP4 format. I have over 100 files and want to speed the process up by enabling hardware encoding. I am able to hardware-encode without hardcoding the subtitles via -filter_complex, but as soon as I apply the filter it errors out.


Here is my command line that works perfectly fine:


ffmpeg -i input.mkv -filter_complex "[0:v:0]subtitles='input.mkv':si=0[v]" -map "[v]" -map 0:a:1 -c:a copy -map_chapters -1 "output.mp4"


Here is my command line that works with hardware encoding without -filter_complex:


ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mkv -c:v hevc_nvenc -map 0:a:1 -c:a copy -map_chapters -1 "output.mp4"


What I need to do is enable hardware encoding with -filter_complex, so I tried this command:


ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mkv -filter_complex "[0:v:0]subtitles='input.mkv':si=0[v]" -map "[v]" -c:v hevc_nvenc -map 0:a:1 -c:a copy -map_chapters -1 "output.mp4"


I get this error:


Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scale_0'
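
This error typically means the decoded frames sit in CUDA device memory (because of -hwaccel_output_format cuda) while subtitles is a software filter that needs frames in system memory, so ffmpeg cannot auto-insert a converter between the two. A commonly suggested workaround, sketched here rather than tested on this exact setup: drop -hwaccel_output_format cuda so decoded frames are handed to the filter graph in system memory, while still encoding on the GPU with hevc_nvenc:

ffmpeg -hwaccel cuda -i input.mkv -filter_complex "[0:v:0]subtitles='input.mkv':si=0[v]" -map "[v]" -c:v hevc_nvenc -map 0:a:1 -c:a copy -map_chapters -1 "output.mp4"

If keeping frames on the GPU for the rest of the graph matters, the alternative is to wrap the software filter explicitly, for example hwdownload,format=nv12 before subtitles and hwupload_cuda after it, although every frame still crosses the bus either way.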