
Other articles (94)
-
Personalize by adding your logo, banner or background image
5 September 2013 — Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Mediabox: opening images in the maximum space available to the user
8 February 2011 — Image display is constrained by the width allowed by the site design (which depends on the theme in use), so images are shown at a reduced size. To take advantage of all the space available on the user's screen, it is possible to add a feature that displays the image in a multimedia box overlaid on the rest of the content.
To do this, the "Mediabox" plugin must be installed.
Configuring the multimedia box
As soon as (...) -
Publishing on MediaSPIP
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If necessary, contact your MediaSPIP administrator to find out.
On other sites (17615)
-
How to encode a video from several images generated in a C++ program without writing the separate frame images to disk?
29 January 2016, by ksb496 — I am writing a C++ program in which a sequence of N different frames is generated after performing some operations implemented therein. After each frame is completed, I write it to disk as IMG_%d.png, and finally I encode the images into a video with ffmpeg using the x264 codec.
The summarized pseudocode of the main part of the program is the following:
std::vector<int> B(width*height*3);
for (i=0; i<N; i++)
{
    generateframe(B, i); // void generateframe(std::vector<int> &, int)
                         // returns a different image for each value of i
    sprintf(s, "IMG_%d.png", i+1);
    WriteToDisk(B, s);   // void WriteToDisk(std::vector<int>, char[])
}
The problem with this implementation is that the number of desired frames, N, is usually high (N ~ 100000), as is the resolution of the pictures (1920x1080), resulting in an overload of the disk: write cycles of dozens of GB after each execution.
To avoid this, I have been trying to find documentation about passing each image stored in the vector B directly to an encoder such as x264, without writing the intermediate image files to disk. Although some interesting topics turned up, none of them solved exactly what I am after: many concern running the encoder on existing image files on disk, while others provide solutions for other programming languages such as Python (here you can find a fully satisfactory solution for that platform).
The pseudocode of what I would like to obtain is something similar to this:
std::vector<int> B(width*height*3);
video_file = open_video("Generated_Video.mp4", ...[encoder options]...);
for (i=0; i<N; i++)
{
    generateframe(B, i);
    add_frame(video_file, B); // add the current frame without closing the video
}
close_video(video_file);
According to what I have read on related topics, the x264 C++ API might be able to do this, but, as stated above, I did not find a satisfactory answer to my specific question. I tried learning and using the ffmpeg source code directly, but its low ease of use and compilation issues forced me, as the non-professional programmer I am, to discard that possibility (I take it just as a hobby and unluckily I cannot spend that much time learning something so demanding).
Another possible solution that came to my mind is to call the ffmpeg binary from the C++ code and somehow transfer the image data of each iteration (stored in B) to the encoder, deferring the addition of each frame (that is, not "closing" the video file) so that more frames can be appended until the N-th one, at which point the video file is "closed". In other words: call ffmpeg through the C++ program to write the first frame to a video, but make the encoder "wait" for more frames; then call ffmpeg again to add the second frame, and so on until the last frame, where the video is finished. However, I do not know how to proceed, or whether this is actually possible.
Edit 1:
As suggested in the replies, I have been reading up on named pipes and trying to use them in my code. First of all, it should be noted that I am working under Cygwin, so my named pipes are created as they would be under Linux. The modified pseudocode I used (including the corresponding system libraries) is the following:
FILE *fd;
mkfifo("myfifo", 0666);
for (i=0; i<N; i++)
{
    fd = fopen("myfifo", "wb");
    generateframe(B, i);
    WriteToPipe(B, fd); // void WriteToPipe(std::vector<int>, FILE *&fd)
    fflush(fd);
    fclose(fd);
}
unlink("myfifo");
WriteToPipe is a slight modification of the previous WriteToDisk function, in which I made sure that the write buffer used to send the image data is small enough to fit within the pipe's buffering limitations.
I then compile and run the following command in the Cygwin terminal:
./myprogram | ffmpeg -i pipe:myfifo -c:v libx264 -preset slow -crf 20 Video.mp4
However, the program remains stuck in the loop at i=0, on the first fopen call. If ffmpeg had not been launched, this would be natural, since the server (my program) would be waiting for a client to connect to the "other side" of the pipe; but that is not the case. It looks like they somehow cannot be connected through the pipe, and I have not been able to find further documentation to overcome this issue. Any suggestions?
-
Merging multiple audios into a video with ffmpeg causes the volume to be reduced. How to avoid that?
25 January 2024, by Terry Windwalker —
const ffmpeg = require('fluent-ffmpeg'); // assumed imports for this fragment
const path = require('path');
const fs = require('fs');

const command = ffmpeg();

 const mp4Path = path.join(__dirname, '..', '..', 'temp', `q-video-${new Date().getTime()}.mp4`);

 fs.writeFileSync(mp4Path, videoBuff);
 console.log('mp4 file created at: ', mp4Path);

 // Set the video stream as the input for ffmpeg
 command.input(mp4Path);

 const mp3Paths = [];

 for (let i = 0; i < audios.length; i++) {
 const audio = audios[i];
 const mp3Path = path.join(__dirname, '..', '..', 'temp', `q-audio-${new Date().getTime()}-${i}.mp3`);
 mp3Paths.push(mp3Path);

 fs.writeFileSync(mp3Path, audio.questionBuf);
 console.log('mp3 file created at: ', mp3Path);
 // Set the audio stream as the input for ffmpeg
 command.input(mp3Path);
 }

 // -------
 // ChatGPT take 1
 const audioTags = [];
 const audioFilters = audios.map((audio, index) => {
 const startTime = audio.start_at; // Replace with your logic to calculate start time
 const endTime = audio.end_at; // Replace with your logic to calculate end time
 audioTags.push(`[delayed${index}]`);
 // Working
 // return `[${index + 1}:a]atrim=start=0:end=${(endTime - startTime) / 1000},adelay=${startTime}[delayed${index}]`;
 return `[${index + 1}:a]dynaudnorm=p=0.9:m=100:s=5,atrim=start=0:end=${(endTime - startTime) / 1000},adelay=${startTime}[delayed${index}]`;
 });
 
 // Concatenate the delayed audio streams
 const concatFilter = audioFilters.join(';');
 
 // Mix the concatenated audio streams
 const mixFilter = `${concatFilter};[0:a]${audioTags.join('')}amix=inputs=${audios.length + 1}:duration=first:dropout_transition=2[out]`;

 // Set the complex filter for ffmpeg
 command.complexFilter([mixFilter]);

 // Set the output size
 if (!isScreen) {
 command.videoFilter('scale=720:-1');
 }
 else {
 command.videoFilter('scale=1920:-1');
 }

 // Set input options
 command.inputOptions([
 '-analyzeduration 20M',
 '-probesize 100M'
 ]);

 // Set output options
 command.outputOptions([
 '-c:v libx264', // Specify a video codec
 '-c:a aac',
 '-map 0:v', // Map the video stream from the first input
 '-map [out]' // Map the audio stream from the complex filter
 ]);

 // Set the output format
 command.toFormat('mp4');

 // Set the output file path
 command.output(outputFilePath);

 // Event handling
 command
 .on('start', commandLine => {
 console.log('Spawned Ffmpeg with command: ' + commandLine);
 })
 .on('codecData', data => {
 console.log('Input is ' + data.audio + ' audio ' +
 'with ' + data.video + ' video');
 })
 .on('progress', progress => {
 // console.log('progress: ', progress);
 console.log(`Processing: ${
 progress.percent ?
 progress.percent.toFixed(2)
 :
 '0.00'
 }% done`);
 })
 .on('stderr', stderrLine => {
 console.log('Stderr output: ' + stderrLine);
 })
 .on('error', (err, stdout, stderr) => {
 console.error('Error merging streams:', err);
 console.error('ffmpeg stdout:', stdout);
 console.error('ffmpeg stderr:', stderr);
 reject(err);
 })
 .on('end', () => {
 console.log('Merging finished successfully.');
 const file = fs.readFileSync(outputFilePath);
 console.log('File read successfully.');
 setTimeout(() => {
 fs.unlinkSync(outputFilePath);
 console.log('Output file deleted successfully.');
 fs.unlinkSync(mp4Path);
 console.log('MP4 file deleted successfully.');
 console.log('mp3Paths: ', mp3Paths);
 for (let mp3Path of mp3Paths) {
 fs.unlinkSync(mp3Path);
 }
 console.log('MP3 file deleted successfully.');
 if (isScreen) {
 for (let path of pathsScreen) {
 fs.unlinkSync(path);
 }
 }
 else {
 for (let path of pathsCamera) {
 fs.unlinkSync(path);
 }
 }
 console.log('All temp files deleted successfully.');
 }, 3000);
 resolve(file);
 });
 
 // Run the command
 command.run();



This is how I am merging my video files (an array of webm files) right now. This command seems to cause the volume of the video to gradually increase from the beginning to the end (the earlier part of the video has much lower volume than the later part). How should I fix this?


Things tried and investigated so far:

- I have checked the original video, and it does not have the volume issue, so the issue is without a doubt caused by this piece of code.
- I have tried dynaudnorm, though without fully understanding how it works. Adding it to each of the audio inputs does not fix the issue, and adding it as a separate filter at the end of the combined filter string breaks the session.

-
Survey of CD Image Formats
30 April 2013, by Multimedia Mike — General
In the course of exploring and analyzing the impressive library of CD images curated at the Internet Archive's Shareware CD collection, one encounters a wealth of methods for copying a complete CD image onto other media for transport. In researching the formats, I have found that many of them are native to various binary, proprietary CD programs that operate under Windows. Since I have an interest in interpreting these image formats, and I would also like to do so outside of Windows, I thought to conduct a survey to determine whether enough information exists to write processing tools of my own.
Remember from my Grand Unified Theory of Compact Disc that CDs, from a high enough level of software abstraction, are just strings of 2352-byte sectors broken up into tracks. The difference among various types of CDs comes down to the specific meaning of these 2352 bytes.
Most imaging formats rip these strings of sectors into a giant file and then record some metadata information about the tracks and sectors.
ISO
This is perhaps the most common method for storing CD images. It's generally only applicable to data CD-ROMs. File images generally end with a .iso extension; this refers to ISO-9660, which is the standard CD filesystem.
Sometimes, disc images ripped from other types of discs (like Xbox/360 or GameCube discs) bear the extension .iso, which is a bit of a misnomer since they aren't formatted using the ISO-9660 filesystem. But the extension sort of stuck.
BIN / CUE
I see the BIN & CUE file format combination quite frequently. Reportedly, a program named CDRWIN deployed this format first. This format can handle a mixed mode CD (e.g., one that starts with a data track followed by a series of audio tracks), whereas ISO can only handle the data track. The BIN file contains the raw data while the CUE file is a text file that defines how the BIN file is formatted (how many bytes in a sector, how many sectors belong to each individual track).
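As an illustration, a minimal CUE sheet for a hypothetical mixed-mode disc (the file name and track layout here are invented for the example) might look like this:

```
FILE "image.bin" BINARY
  TRACK 01 MODE1/2352
    INDEX 01 00:00:00
  TRACK 02 AUDIO
    INDEX 01 21:35:09
```

The INDEX timestamps are minutes:seconds:frames positions into the BIN file, with 75 frames per second.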
CDI
This originates from a program called DiscJuggler. This is extremely prevalent in the Sega Dreamcast hobbyist community for some reason. I studied the raw hex dumps of some sample CDI files but there was no obvious data (mostly 0s). There is an open source utility called cdi2iso which is able to extract an ISO image from a CDI file. The program's source clued me in that the metadata is actually sitting at the end of the image file. This makes sense when you consider how a ripping program needs to operate: copy tracks, sector by sector, and then do something with the metadata after the fact. Options include: 1) write metadata at the end of the file (as seen here); 2) write metadata into a separate file (seen in other formats on this list); 3) write the metadata at the beginning of the file, which would require a full rewrite of the entire (usually large) image file (I haven't seen this yet). Anyway, I believe I have enough information to write a program that can interpret a CDI file. The reason this format is favored for Dreamcast disc images is likely due to the extreme weirdness of Dreamcast discs (it's complicated, but eventually fits into my Grand Unified Theory of CDs, if you look at it from a high level).
MDF / MDS
MDF and MDS pairs come from a program called Alcohol 120%. The MDF file has the data while the MDS file contains the metadata. The metadata is in an opaque binary format, though. Thankfully, the Wikipedia page links to a description of the format. That's another image format down.
CCD / SUB / IMG
The CloneCD Control File is one I just ran across today thanks to a new image posted at the IA Shareware Archive (see Super Duke Volume 2). I haven't found any definitive documentation on this, but it also doesn't seem too complicated. The .ccd file is a text file that is pretty self-explanatory. The sample linked above, however, only has a .ccd file and a .sub file. I'm led to believe that the .sub file contains subchannel information while a .img file is supposed to contain the binary data. So this rip might be incomplete (nope, the .img file is on the page, in the sidebar; thanks to Phil in the comments for pointing this out). The .sub file is a bit short compared to the Archive's description of the disc's contents (only about 4.6 MB of data) and when I briefly scrolled through, it didn't look like it contained any real computer data. So it probably is just the disc's subchannel data (something I glossed over in my Grand Unified Theory).
CSO
I have dealt with the CISO (compressed ISO) format before. It's basically the same as a .iso file described above, except that each individual 2048-byte data sector is compressed using zlib. The format boasts up to 9 compression levels, which shouldn't be a big surprise since that correlates to zlib's own compression tiers.
Others
Wikipedia has a category for optical disc image formats. Of course, there are numerous others. However, I haven’t encountered them in the wild for the purpose of broad image distribution.