
Other articles (77)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all of the software dependencies on the server.
If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...) -
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
Improving the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the following two images for a comparison.
To use it, simply activate the Chosen plugin (general site configuration > plugin management), then configure it (templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
On other sites (14636)
-
Decode h264 video bytes into JPEG frames in memory with ffmpeg
5 February 2024, by John Karkas
I'm using Python and ffmpeg (4.4.2) to generate an h264 video stream from images produced continuously by a process. I am aiming to send this stream over a websocket connection, decode it back into individual image frames at the receiving end, and emulate a stream by continuously pushing frames to an
<img style='max-width: 300px; max-height: 300px' />
tag in my HTML.

However, I cannot read images at the receiving end, despite trying combinations of the rawvideo input format, the image2pipe format, re-encoding the incoming stream with mjpeg or png, etc. So I would be happy to know what the standard way of doing something like this would be.

At the source, I'm piping frames from a while loop into ffmpeg to assemble an h264-encoded video. My command is:


command = [
 'ffmpeg',
 '-f', 'rawvideo',
 '-pix_fmt', 'rgb24',
 '-s', f'{shape[1]}x{shape[0]}',
 '-re',
 '-i', 'pipe:',
 '-vcodec', 'h264',
 '-f', 'rawvideo',
 # '-vsync', 'vfr',
 '-hide_banner',
 '-loglevel', 'error',
 'pipe:'
 ]
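
For context, a minimal sketch of how a command like this can be driven from Python. The names stream_frames and send are hypothetical (they are not in the original post), and frames is assumed to be an iterable of numpy uint8 arrays of shape (height, width, 3):

import subprocess
import threading

def stream_frames(frames, command, send):
    # `command` is the sender command above; pipe both ends of ffmpeg.
    proc = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    # Drain the encoded h264 from stdout on a separate thread so the pipe
    # buffer never fills up and blocks the encoder.
    def pump():
        while True:
            chunk = proc.stdout.read(4096)
            if not chunk:
                break
            send(chunk)  # hypothetical callback, e.g. the websocket send

    reader = threading.Thread(target=pump, daemon=True)
    reader.start()

    for frame in frames:
        proc.stdin.write(frame.tobytes())  # raw rgb24 bytes, matching -pix_fmt
    proc.stdin.close()
    reader.join()
    proc.wait()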



At the receiving end of the websocket connection, I can save the images to disk by including:


command = [
 'ffmpeg',
 '-i', '-', # Read from stdin
 '-c:v', 'mjpeg',
 '-f', 'image2',
 '-hide_banner',
 '-loglevel', 'error',
 f'encoded/img_%d_encoded.jpg'
 ]



in my ffmpeg command.
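
As a sketch of the surrounding wiring (under the assumption that the websocket hands over an iterable of byte chunks, called recv_chunks here for illustration), this command only needs its stdin piped, since the JPEGs go straight to disk:

import subprocess

def write_jpegs_to_disk(recv_chunks, command):
    # `command` is the receiving command above; ffmpeg writes the
    # numbered JPEG files itself, so stdout is left alone.
    proc = subprocess.Popen(command, stdin=subprocess.PIPE)
    for chunk in recv_chunks:
        proc.stdin.write(chunk)
    proc.stdin.close()
    proc.wait()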


But I want instead to extract each individual frame coming down the pipe and load it in my application, without saving anything to storage. So basically, I want what the
'encoded/img_%d_encoded.jpg'
line does in ffmpeg, but with each frame accessible from the stdout pipe of an ffmpeg subprocess running in its own thread at the receiving end. One way that could work is sketched below.
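
One common approach is to keep the mjpeg re-encode but send it to stdout with the image2pipe muxer, then split the byte stream on the JPEG start/end markers. The sketch below illustrates that idea and is not a hardened parser; feeding ffmpeg's stdin from the websocket works as in the disk variant above:

import subprocess

receiver_command = [
    'ffmpeg',
    '-i', '-',               # incoming h264 stream on stdin
    '-c:v', 'mjpeg',
    '-f', 'image2pipe',      # concatenated JPEGs on stdout
    '-hide_banner',
    '-loglevel', 'error',
    'pipe:1',
]

def iter_jpeg_frames(stdout):
    buf = b''
    while True:
        chunk = stdout.read(4096)
        if not chunk:
            break
        buf += chunk
        # Each frame runs from the SOI marker (ff d8) to the EOI marker (ff d9).
        while True:
            start = buf.find(b'\xff\xd8')
            end = buf.find(b'\xff\xd9', start + 2)
            if start < 0 or end < 0:
                break
            yield buf[start:end + 2]  # one complete JPEG, ready for the <img> tag
            buf = buf[end + 2:]

Each yielded bytes object is a self-contained JPEG that can be handed to the page as-is.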

- What would be the most appropriate ffmpeg command to fulfil a use case like the above? And how could it be tuned to be faster or to give better quality?
- Would I be able to read from the stdout buffer with process.stdout.read(2560x1440x3) for each frame? (See the sketch after this list.)
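
On the second question: if the receiving ffmpeg decodes to rawvideo instead of JPEG, then every frame is exactly width x height x 3 bytes in rgb24, so fixed-size reads do work. The caveat is that read(n) may return fewer than n bytes, so a full frame has to be accumulated in a loop. A sketch, assuming a 2560x1440 stream:

import subprocess

WIDTH, HEIGHT = 2560, 1440
FRAME_SIZE = WIDTH * HEIGHT * 3   # rgb24 is 3 bytes per pixel

raw_command = [
    'ffmpeg',
    '-i', '-',                # incoming h264 stream on stdin
    '-f', 'rawvideo',
    '-pix_fmt', 'rgb24',
    '-hide_banner',
    '-loglevel', 'error',
    'pipe:1',
]

def read_frame(stdout):
    # Accumulate exactly one frame; read(n) is allowed to return short.
    buf = b''
    while len(buf) < FRAME_SIZE:
        chunk = stdout.read(FRAME_SIZE - len(buf))
        if not chunk:             # stream ended mid-frame
            return None
        buf += chunk
    return buf                    # reshape with numpy as (HEIGHT, WIDTH, 3) if needed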






If you feel strongly about referring me to a more up-to-date version of ffmpeg, please do so.


PS: I understand this may not be the optimal way to create a stream. Nevertheless, I don't think there should be much complexity in this, and the latency should be low. I could instead send JPEG images over the websocket and view them in my
<img style='max-width: 300px; max-height: 300px' />
tag, but I want to save on bandwidth and shift some of the computational effort to the receiving end.

-
Remove comma from ffmpeg output in AWS Lambda layer
8 March 2019, by Gracie
I am using the ffmpeg Lambda layer to get the duration and channels from an audio file, and I am then assigning these details to variables to use later in my code.
Can anyone spot/tidy this code so that it outputs only the actual value, and not one prepended with a comma?
const { spawnSync } = require("child_process");
var fs = require('fs');
const https = require('https');

exports.handler = async (event) => {
    const source_url = 'https://upload.wikimedia.org/wikipedia/commons/b/b2/Bell-ring.flac';
    const target_path = '/tmp/test.flac';

    async function downloadFile() {
        return new Promise((resolve, reject) => {
            const file = fs.createWriteStream(target_path);
            const request = https.get(source_url, function(response) {
                response.pipe(file);
                console.log('file_downloaded!');
                resolve();
            });
        });
    }

    await downloadFile();

    const duration = spawnSync(
        "/opt/bin/ffprobe",
        [
            target_path,
            "-show_entries",
            "stream=duration",
            "-select_streams",
            "a",
            "-of",
            "compact=p=0:nk=1",
            "-v",
            "0"
        ]
    );
    const channel = spawnSync(
        "/opt/bin/ffprobe",
        [
            target_path,
            "-show_entries",
            "stream=channels",
            "-select_streams",
            "a",
            "-of",
            "compact=p=0:nk=1",
            "-v",
            "0"
        ]
    );
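    // Note (likely culprit): spawnSync().output is the array [stdin, stdout, stderr],
    // and calling toString() on an array joins its entries with commas, so the
    // empty stdin slot produces the leading comma. Reading duration.stdout.toString('utf8')
    // (and likewise channel.stdout) would return just the value.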
    var durations = duration.output.toString('utf8');
    console.log(durations);
    var channels = channel.output.toString('utf8');
    console.log(channels);

    /*const response = {
        statusCode: 200,
        //body: JSON.stringify([channel.output.toString('utf8')])
        body: 'Complete'
    };
    return response;*/
};

Just not sure where these comma values are coming from; I need these as number values for comparison functions later in the code.
It uses this easy Lambda layer with no external modules required
-
FFMpeg: Resampling from AV_SAMPLE_FMT_S16 to AV_SAMPLE_FMT_FLTP divides bitrate by 2
26 November 2019, by Robert Jones
I am very new to FFMpeg and I am currently trying to convert audio data from the PCM AV_SAMPLE_FMT_S16 format to the mp3 AV_SAMPLE_FMT_FLTP format.
For this I am using FFMpeg's AudioResampleContext:
av_opt_set_int( audioResampleCtx, "in_sample_fmt", m_aplayer->aCodecCtx->sample_fmt, 0);
av_opt_set_int( audioResampleCtx, "in_sample_rate", m_aplayer->aCodecCtx->sample_rate, 0);
av_opt_set_int( audioResampleCtx, "in_channels", m_aplayer->aCodecCtx->channels,0);
av_opt_set_int( audioResampleCtx, "out_channel_layout", audioCodecCtx->channel_layout, 0);
av_opt_set_int( audioResampleCtx, "out_sample_fmt", audioCodecCtx->sample_fmt, 0);
av_opt_set_int( audioResampleCtx, "out_sample_rate", audioCodecCtx->sample_rate, 0);
av_opt_set_int( audioResampleCtx, "out_channels", audioCodecCtx->channels, 0);

The conversion works, since I can listen to my mp3 file, but the problem is that my original file is 60 seconds long and the output mp3 file is only 34 seconds. It sounds very accelerated, as if something sped up the audio. Looking at FFMpeg's output, I see that the bitrate went from 128 kbps to 64 kbps.
EDIT:
To give more information: I want to compress some raw audio data with the mp3 codec and get an output.mp3 file. The raw audio data sample format is AV_SAMPLE_FMT_S16, and the supported sample formats for the mp3 codec are FLTP (or S16P).
Therefore I am doing a sample format conversion from AV_SAMPLE_FMT_S16 to AV_SAMPLE_FMT_FLTP, but half of the data goes missing. Could anyone help me with this? I know I'm missing something very simple but I just can't figure out what.
EDIT 2:
Here is the code that does the resampling (coming from https://github.com/martin-steghoefer/debian-karlyriceditor/blob/master/src/ffmpegvideoencoder.cpp). The audio source isn't an AVFrame but just an array of bytes:

// Resample the input into the audioSampleBuffer until we have processed the whole decoded data
if ( (err = avresample_convert( audioResampleCtx,
                                NULL,
                                0,
                                0,
                                audioFrame->data,
                                0,
                                audioFrame->nb_samples )) < 0 )
{
    qWarning( "Error resampling decoded audio: %d", err );
    return -1;
}

if ( avresample_available( audioResampleCtx ) >= audioFrame->nb_samples )
{
    // Read a frame of audio data from the resample fifo
    if ( avresample_read( audioResampleCtx, audioFrame->data, audioFrame->nb_samples ) != audioFrame->nb_samples )
    {
        qWarning( "Error reading resampled audio: %d", err );
        return -1;
    }

    // Init packet, do the encoding and write data to file

Thank you for your help!