
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (106)
-
Managing creation and editing rights for objects
8 February 2011, by
By default, many features are restricted to administrators, but the minimum status required to use each of them can be configured independently, notably: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;
-
Updating from version 0.1 to 0.2
24 June 2013, by
Explanation of the notable changes when moving from version 0.1 of MediaSPIP to version 0.2. What is new?
Software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Customising by adding your logo, banner or background image
5 September 2013, by
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image;
On other sites (13017)
-
Create a 44-byte header with ffmpeg
13 July 2015, by Joe Allen
I made a program using the ffmpeg libraries that converts an audio file to a WAV file. The only problem is that it doesn't create a 44-byte header. When I input the file into Kaldi Speech Recognition, it produces the error:
ERROR (online2-wav-nnet2-latgen-faster:Read4ByteTag():wave-reader.cc:74) WaveData: expected 4-byte chunk-name, got read errror
I ran the file through shntool and it reports a 78-byte header. Is there any way I can get the standard 44-byte header using the ffmpeg libraries?
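
For context (an addition, not part of the original question): the standard 44-byte header is just the RIFF descriptor, a 16-byte PCM fmt chunk, and the data chunk header, with no extra metadata chunks; a 78-byte header usually means the muxer also wrote a LIST/INFO metadata chunk. A minimal Node.js sketch of the canonical 44-byte layout, with the parameters shown as assumed example values:

// Sketch: build the canonical 44-byte PCM WAV header in Node.js.
// Parameter defaults are assumed example values; adjust for the real stream.
function wavHeader(numSamples, sampleRate = 16000, channels = 1, bitsPerSample = 16) {
  const blockAlign = channels * bitsPerSample / 8;
  const dataSize = numSamples * blockAlign;
  const h = Buffer.alloc(44);
  h.write('RIFF', 0);                           // ChunkID
  h.writeUInt32LE(36 + dataSize, 4);            // ChunkSize = file size - 8
  h.write('WAVE', 8);                           // Format
  h.write('fmt ', 12);                          // Subchunk1ID
  h.writeUInt32LE(16, 16);                      // Subchunk1Size (16 for PCM)
  h.writeUInt16LE(1, 20);                       // AudioFormat: 1 = PCM
  h.writeUInt16LE(channels, 22);                // NumChannels
  h.writeUInt32LE(sampleRate, 24);              // SampleRate
  h.writeUInt32LE(sampleRate * blockAlign, 28); // ByteRate
  h.writeUInt16LE(blockAlign, 32);              // BlockAlign
  h.writeUInt16LE(bitsPerSample, 34);           // BitsPerSample
  h.write('data', 36);                          // Subchunk2ID
  h.writeUInt32LE(dataSize, 40);                // Subchunk2Size
  return h;
}

On the ffmpeg side, one commonly suggested way to avoid the extra chunk is to strip metadata and request bit-exact output (for example -map_metadata -1 together with -bitexact on the command line, or the AVFMT_FLAG_BITEXACT flag on the output AVFormatContext when using the libraries); this is offered as a pointer rather than a verified fix.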
-
JavaScript MediaSource && ffmpeg chunks
17 May 2023, by OmriHalifa
I have written the following code for a player that receives chunks sent by ffmpeg through stdout and displays them using MediaSource:


index.js (the server handling this request)


const express = require('express');
const cp = require('child_process');
const cors = require('cors');

const app = express();
const port = 4545;

app.use(cors());

app.get('/startRecording', (req, res) => {
  // Capture the webcam with ffmpeg and encode to an MPEG-TS stream on stdout.
  const ffmpeg = cp.spawn('ffmpeg', [
    '-f', 'dshow', '-i', 'video=HP Wide Vision HD Camera',
    '-profile:v', 'high', '-pix_fmt', 'yuvj420p', '-level:v', '4.1',
    '-preset', 'ultrafast', '-tune', 'zerolatency',
    '-vcodec', 'libx264', '-r', '10', '-b:v', '512k', '-s', '640x360',
    '-acodec', 'aac', '-ac', '2', '-ab', '32k', '-ar', '44100',
    '-f', 'mpegts', '-flush_packets', '0',
    '-', // alternatively: 'udp://235.235.235.235:12345?pkt_size=1316'
  ]);

  // Forward each encoded chunk to the HTTP response as it arrives.
  ffmpeg.stdout.on('data', (data) => {
    res.write(data);
  });

  // ffmpeg writes its log to stderr; print it for debugging.
  ffmpeg.stderr.on('data', (data) => {
    console.log(data.toString('utf8'));
  });

  ffmpeg.on('close', (code) => {
    console.log(`child process exited with code ${code}`);
  });
});

app.listen(port, () => {
  console.log(`Video's Server listening on port ${port}`);
});



App.js (the React player side):


import { useEffect } from 'react';

function App() {
  async function transcode() {
    const mediaSource = new MediaSource();
    const videoElement = document.getElementById('videoElement');
    videoElement.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', async () => {
      console.log('MediaSource open');
      const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42c01e"');
      try {
        const response = await fetch('http://localhost:4545/startRecording');
        const reader = response.body.getReader();

        reader.read().then(async function processText({ done, value }) {
          if (done) {
            console.log('Stream complete');
            return;
          }

          console.log('before append', videoElement);
          sourceBuffer.appendBuffer(value);
          console.log('after append', value);
          // End the stream once the SourceBuffer finishes updating
          sourceBuffer.addEventListener('updateend', () => {
            if (!sourceBuffer.updating && mediaSource.readyState === 'open') {
              mediaSource.endOfStream();
            }
          });

          // Call the next read and repeat the process
          return reader.read().then(processText);
        });
      } catch (error) {
        console.error(error);
      }
    });

    console.log('before play');
    await videoElement.play();
    console.log('after play');
  }

  useEffect(() => {}, []);

  return (
    <div className="App">
      <div>
        <video id="videoElement"></video>
      </div>
      <button onClick={transcode}>start streaming</button>
    </div>
  );
}

export default App;




This is what I get:
[screenshot: what I get]

The chunks are being received and passed to the Uint8Array correctly, but the video is not displayed. What could be causing this, and how can it be corrected?
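
A side note (an addition, not from the original post): SourceBuffer.appendBuffer() is asynchronous and throws if it is called while a previous append is still being processed, so incoming network chunks are normally queued and drained on the 'updateend' event rather than appended directly from the read loop; MSE also generally expects fragmented MP4 or WebM for a 'video/mp4' SourceBuffer, not an MPEG-TS stream. A minimal queueing sketch, where sourceBuffer is assumed to be an already-open SourceBuffer:

// Queue incoming chunks and append them one at a time.
const queue = [];

function enqueue(chunk) {
  queue.push(chunk);
  pump();
}

function pump() {
  // Only append while the SourceBuffer is idle; 'updateend' resumes the drain.
  if (queue.length > 0 && !sourceBuffer.updating) {
    sourceBuffer.appendBuffer(queue.shift());
  }
}

sourceBuffer.addEventListener('updateend', pump);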


-
python imageIO() ffmpeg output 3D ndarray
27 July 2017, by Daimoj
I'm trying to encode and then decode a collection of images using imageio in Python with the ffmpeg plugin and the HEVC codec.
Each image I'm writing is an ndarray of shape (1024, 512). When I call writer.append_data() on each image, the shape is as above, (1024, 512). After writer.close() is called, I create another reader on the video just written. When inspecting a single frame of the video, its shape is (1024, 512, 3). This is all grayscale, so I expected an array of uint8 of shape (1024, 512). Why did imageio add an extra channel dimension to my video? I only want one channel.