
Other articles (85)
-
Managing object creation and editing rights
8 February 2011
By default, many features are restricted to administrators but can be configured independently to change the minimum status required to use them, notably: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
images: png, gif, jpg, bmp and more
audio: MP3, Ogg, Wav and more
video: AVI, MP4, OGV, mpg, mov, wmv and more
text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Improving the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two images below to compare.
To use it, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
On other sites (8224)
-
Round number of bits read to next byte
4 December 2014, by watwat2014
I have a header that can be any number of bits long, and there is a variable called ByteAlign that is calculated by subtracting the current file position from the file position at the beginning of the file. The point of this variable is to pad the header to the next complete byte: if the header takes up 57 bits, ByteAlign needs to be 7 bits long to pad the header to 64 bits in total, or 8 bytes.
Solutions that don't work:
Variable % 8 - 8: the result is the answer, but negative.
8 % Variable: this is completely inaccurate and gives answers like 29, which is blatantly wrong; the largest it should ever be is 7.
How exactly do I do this?
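For what it's worth, a minimal sketch of one way to compute that padding (the function name is mine; the outer % 8 keeps the result at 0 when the header already ends on a byte boundary):

#include <cstdint>

// Padding bits needed to reach the next byte boundary.
// For a 57-bit header this returns 7 (57 + 7 = 64 bits = 8 bytes);
// for a header that is already a multiple of 8 bits it returns 0.
uint64_t bitsToNextByte(uint64_t bitsRead) {
    return (8 - (bitsRead % 8)) % 8;
}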
-
FFMPEG API - Recording video and audio - Syncing problems
16 June 2016, by Solidus
I'm developing an app which is able to record video from a webcam and audio from a microphone. I've been using Qt, but unfortunately the camera module does not work on Windows, which led me to use FFmpeg to record the video/audio.
My camera module is now working well apart from a slight problem with syncing. The audio and video sometimes end up out of sync by a small amount (less than a second, I'd say, although it might be worse with longer recordings).
When I encode the frames I add the PTS in the following way, which I took from the muxing.c example (see the sketch after this list):
- For the video frames I increment the PTS one by one (starting at 0).
- For the audio frames I increment the PTS by the nb_samples of the audio frame (starting at 0).
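A minimal sketch of that counter-based scheme, assuming (as in muxing.c) that the video encoder time_base is 1/framerate and the audio encoder time_base is 1/sample_rate; the counter names are hypothetical:

extern "C" {
#include <libavutil/frame.h>
}
#include <cstdint>

int64_t nextVideoPts = 0;   // in units of the video time_base (one tick per frame)
int64_t nextAudioPts = 0;   // in units of the audio time_base (one tick per sample)

// One tick per video frame.
void stampVideoFrame(AVFrame *frame) {
    frame->pts = nextVideoPts++;
}

// Advance by the number of samples contained in the audio frame.
void stampAudioFrame(AVFrame *frame) {
    frame->pts = nextAudioPts;
    nextAudioPts += frame->nb_samples;
}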
I am saving the file at 25 fps and asking the camera to give me 25 fps (which it can). I am also converting the video frames to the YUV420P format. For the audio frame conversion I need to use an AVAudioFifo because the microphone sends bigger samples than the mp4 stream supports, so I have to split them into chunks. I used the transcode.c example for this.
I am out of ideas about what I should do to sync the audio and video. Do I need to use a clock or something to correctly sync up both streams?
The full code is too big to post here, but should it be necessary I can add it to GitHub, for example.
Here is the code for writing a frame:
int FFCapture::writeFrame(const AVRational *time_base, AVStream *stream, AVPacket *pkt) {
    /* rescale output packet timestamp values from codec to stream timebase */
    av_packet_rescale_ts(pkt, *time_base, stream->time_base);
    pkt->stream_index = stream->index;
    /* Write the compressed frame to the media file. */
    return av_interleaved_write_frame(oFormatContext, pkt);
}

Code for getting the elapsed time:
/* Returns the new elapsed time (in ms) when it has advanced past *previousTime, otherwise -1. */
qint64 FFCapture::getElapsedTime(qint64 *previousTime) {
    qint64 newTime = timer.elapsed();
    if(newTime > *previousTime) {
        *previousTime = newTime;
        return newTime;
    }
    return -1;
}

Code for adding the PTS (video and audio stream, respectively):
/* Video stream */
qint64 time = getElapsedTime(&previousVideoTime);
if(time >= 0) outFrame->pts = time;
//if(time >= 0) outFrame->pts = av_rescale_q(time, outStream.videoStream->codec->time_base, outStream.videoStream->time_base);

/* Audio stream */
qint64 time = getElapsedTime(&previousAudioTime);
if(time >= 0) {
    AVRational aux;
    aux.num = 1;
    aux.den = 1000;
    outFrame->pts = time;
    //outFrame->pts = av_rescale_q(time, aux, outStream.audioStream->time_base);
}
-
FFmpeg.wasm demuxing - Get encodedChunks in Javascript
16 March 2023, by Kevin Baving
I am building a video editor whose process looks like this:
Demuxing -> Decoding -> Editing -> Encoding -> Muxing.
The demuxing and muxing process is currently done with mp4box.js. I would like to replace mp4box.js with ffmpeg.wasm. Unfortunately, I can't get the hang of the process.

What should FFmpeg.wasm do in the demuxing process?
- load a .mp4 file
- extract the encodedVideoChunks and store them as EncodedVideoChunk objects in an array
- extract the encodedAudioChunks and store them as EncodedAudioChunk objects in an array
- get some metadata like: duration, timescale, fps, track_width, track_height, codec, audio_channel_count, sample_rate ....
public async loadFile(file: File) {
  let data = await fetchFile(file)
  let blob = new Blob();
  await this.ffmpeg.setProgress(({ratio }) => console.log(`Extracting frames: ${Math.round(ratio * 100)}%`));
  this.ffmpeg.FS('writeFile', 'videoTest.mp4', data);
  //Here is where I am struggling
  //Should look like this:
  //const command = '-i videoTest.mp4 -c:v copy .... '
  //await this.ffmpeg.run(command);
  //....
}

Let's get deeper into my problem:

Because FFmpeg.wasm is still a CLI tool, I have no idea what the best way is to save the encodedChunks into a file (and what kind of file type I should use). Further, I would like to know how to read that file properly so that I can save the contents of the file into separate EncodedVideo- and AudioChunks.