
Other articles (102)
-
MediaSPIP 0.1 Beta version
25 April 2011 — MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Changing the publication date
21 June 2013 — How do you change the publication date of a media item?
You first need to add a "Date de publication" field to the appropriate form mask:
Administrer > Configuration des masques de formulaires > select "Un média"
In the "Champs à ajouter" section, check "Date de publication"
Click Enregistrer at the bottom of the page -
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (11755)
-
How To Specify FFMPEG Input Volume For Multiple Input Files
30 September 2014, by gbd — I have 6 similar/separate audio input files feeding into ffmpeg - no video. I'm mixing the 6 input channels down to a stereo output using amix and that works fine. But now I need to change the volume of each individual input channel before the mix-down - or maybe as part of the mix-down. I've looked at and tried aeval (which seems very slow) in the form
'aeval=val(0)*volChg1:c=same|val(1)*volChg2:c=same|val(2)*volChg3:c=same|val(3)*volChg4:c=same|val(4)*volChg5:c=same|val(5)*volChg6:c=same'
but that only seems to change the volume of the first channel.
So the whole input part of my ffmpeg expression looks something like this right now:
-i inFilePath1 -i inFilePath2 -i inFilePath3 -i inFilePath4 -i inFilePath5 -i inFilePath6
-filter_complex 'aeval=val(0)*$volChg1:c=same|val(1)*$volChg2:c=same|val(2)*$volChg3:c=same|val(3)*$volChg4:c=same|val(4)*$volChg5:c=same|val(5)*$volChg6:c=same','amix=inputs=6:duration=longest'
$volChg1 to $volChg6 are PHP variables containing the individual volume multipliers: 0.5, for example, should halve the volume of that individual channel, while 1.0 would leave it unaffected. How do I do this?
With ffmpeg's image to movie feature, is it possible to pass in frames over a period of time versus all at once?
2 November 2013, by Zack Yoshyaro — For the purpose of making a time-lapse recording of desktop activity, is it possible to "stream" the frame list to ffmpeg over time, rather than all at once at the beginning?
Currently, it is a two step process.
-
save individual snapshots to disc
from PIL import ImageGrab  # the snippet assumes PIL/Pillow's ImageGrab
im = ImageGrab.grab()
im.save("frame_%s.jpg" % count, 'JPEG') -
compile those snapshots with ffmpeg via
ffmpeg -r 1 -pattern_type glob -i '*.jpg' -c:v libx264 out.mp4
It would be nice if there were a way to merge the two steps so that I'm not flooding my hard drive with thousands of individual snapshots. Is it possible to do this?
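One way to merge the two steps (a sketch, not from the question) is to have the capture script launch ffmpeg as a subprocess and write each JPEG straight to its stdin, using the image2pipe demuxer so frames can arrive over time; the exact flags below are assumptions and may need adjusting:
ffmpeg -f image2pipe -framerate 1 -c:v mjpeg -i - -c:v libx264 -pix_fmt yuv420p out.mp4
Instead of im.save("frame_%s.jpg" % count, 'JPEG'), the script would then call im.save(proc.stdin, 'JPEG') on the pipe opened with subprocess.Popen, so no intermediate files are written.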
-
-
Duplicate a signal with FFmpeg library [on hold]
22 September 2016, by Meugiwara — I'm using the FFmpeg library to develop a program in C/C++. At the moment, my aim is to duplicate the signal when I open an audio file. My plan is the following: read an audio file, convert the stereo signal to a mono signal, and duplicate that mono signal to get two identical mono signals.
I looked through the Doxygen documentation on the official website, but I didn't find anything useful. Does someone know a good way to do this?
Here's what I have so far:
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#define __STDC_CONSTANT_MACROS
#ifdef _WIN32
extern "C" /* Windows */
{
#include "libswresample/swresample.h"
#include "libavformat/avformat.h"
#include "libavcodec/avcodec.h"
#include "SDL2/SDL.h"
};
#else
#ifdef __cplusplus
extern "C" /* Linux */
{
#endif /* __cplusplus */
#include <libswresample/swresample.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <SDL2/SDL.h>
#ifdef __cplusplus
};
#endif /* __cplusplus */
#endif /* _WIN32 */
#define MAX_AUDIO_FRAME_SIZE 192000 /* 1 second of 48 kHz 32 bits audio*/
#define OUTPUT_PCM 1 /* Output PCM */
#define USE_SDL 1 /* Use SDL */
/* Audio buffer state shared with the SDL callback */
static Uint32 audio_len;
static Uint8 *audio_chunk;
static Uint8 *audio_pos;
void fill_audio(void *udata, Uint8 *stream, int len)
{
SDL_memset(stream, 0, len); /* SDL 2.0 */
if (audio_len == 0)
return;
len = (len > audio_len ? audio_len : len);
SDL_MixAudio(stream, audio_pos, len, SDL_MIX_MAXVOLUME);
audio_pos = audio_pos + len;
audio_len = audio_len - len;
}
int reading_sound(char *str)
{
SDL_AudioSpec wanted_spec;
AVFormatContext *pFormatCtx;
AVCodecContext *pCodecCtx;
AVSampleFormat out_sample_fmt;
AVPacket *packet;
AVFrame *pFrame;
AVCodec *pCodec;
uint64_t out_channel_layout;
uint8_t *out_buffer;
int64_t in_channel_layout;
struct SwrContext *au_convert_ctx;
int out_sample_rate;
int out_buffer_size;
int out_nb_samples;
int out_channels;
int audioStream;
int got_picture;
int index = 0;
int ret;
int i;
FILE *pFile = NULL;
/*
//char *sound = "WavinFlag.aac";
*/
av_register_all();
avformat_network_init();
pFormatCtx = avformat_alloc_context();
if (avformat_open_input(&pFormatCtx, str, NULL, NULL) != 0) /* Open the input file */
{
fprintf(stderr, "%s\n", "Couldn't open input stream");
return (-1);
}
if (avformat_find_stream_info(pFormatCtx, NULL) < 0) /* Retrieve stream information */
{
fprintf(stderr, "Couldn't find stream information\n");
return (-1);
}
av_dump_format(pFormatCtx, 0, str, false); /* Dump useful information to stderr */
audioStream = -1;
for (i = 0; i < pFormatCtx->nb_streams; i++) /* Find the audio stream */
if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
{
audioStream = i;
break;
}
if (audioStream == -1)
{
fprintf(stderr, "%s\n", "Didn't find an audio stream");
return (-1);
}
pCodecCtx = pFormatCtx->streams[audioStream]->codec;
pCodec = avcodec_find_decoder(pCodecCtx->codec_id); /* Find the decoder for the file (.aac, .mp3, ...) */
if (pCodec == NULL)
{
fprintf(stderr, "%s\n", "Codec not found");
return (-1);
}
if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) /* Open the decoder */
{
fprintf(stderr, "%s\n", "Couldn't open codec");
return (-1);
}
#if OUTPUT_PCM
pFile = fopen("output.pcm", "wb"); /* Create the file where the decoded PCM will be written */
#endif /* OUTPUT_PCM */
packet = (AVPacket *)av_malloc(sizeof(AVPacket)); /* Allocate the packet */
av_init_packet(packet);
out_channel_layout = AV_CH_LAYOUT_MONO; /* Output channel layout */
out_nb_samples = pCodecCtx->frame_size; /* Number of samples per output frame */
out_sample_fmt = AV_SAMPLE_FMT_S16; /* Output sample format */
out_sample_rate = 44100; /* Output sample rate */
out_channels = av_get_channel_layout_nb_channels(out_channel_layout); /* Number of output channels */
/*
// printf("%d\n", out_channels);
// system("PAUSE");
*/
out_buffer_size = av_samples_get_buffer_size(NULL, out_channels, out_nb_samples, out_sample_fmt, 1); /* Required output buffer size */
out_buffer = (uint8_t *)av_malloc(MAX_AUDIO_FRAME_SIZE * 2); /* Allocate the output buffer */
pFrame = av_frame_alloc(); /* Allocate the frame */
#if USE_SDL
if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) /* Initialize SDL */
{
fprintf(stderr, "%s - %s\n", "Couldn't initialize SDL", SDL_GetError());
return (-1);
}
/* Fill the SDL audio spec with the values chosen above */
wanted_spec.freq = out_sample_rate;
wanted_spec.format = AUDIO_S16SYS;
wanted_spec.channels = out_channels;
wanted_spec.silence = 0;
wanted_spec.samples = out_nb_samples;
wanted_spec.callback = fill_audio;
wanted_spec.userdata = pCodecCtx;
if (SDL_OpenAudio(&wanted_spec, NULL) < 0)
{
fprintf(stderr, "%s\n", "Can't open audio");
return (-1);
}
#endif /* USE_SDL */
in_channel_layout = av_get_default_channel_layout(pCodecCtx->channels);
au_convert_ctx = swr_alloc();
au_convert_ctx = swr_alloc_set_opts(au_convert_ctx, out_channel_layout, out_sample_fmt, out_sample_rate,
in_channel_layout, pCodecCtx->sample_fmt, pCodecCtx->sample_rate, 0, NULL);
swr_init(au_convert_ctx);
while (av_read_frame(pFormatCtx, packet) >= 0) /* Read the file packet by packet */
{
if (packet->stream_index == audioStream)
{
ret = avcodec_decode_audio4(pCodecCtx, pFrame, &got_picture, packet); /* Decode the packet */
if (ret < 0)
{
fprintf(stderr, "%s\n", "Error in decoding audio frame");
return (-1);
}
if (got_picture > 0)
{
swr_convert(au_convert_ctx, &out_buffer, MAX_AUDIO_FRAME_SIZE, (const uint8_t **)pFrame->data, pFrame->nb_samples);
#if 1
printf("Index : %5d\t Points : %lld\t Packet size : %d\n", index, packet->pts, packet->size); /* Affichage des informations sur la console */
#endif /* 1 */
#if OUTPUT_PCM
fwrite(out_buffer, 1, out_buffer_size, pFile); /* Write the converted samples to the output file */
#endif /* OUTPUT_PCM */
index = index + 1;
}
#if USE_SDL
while (audio_len > 0)
SDL_Delay(1);
audio_chunk = (Uint8 *)out_buffer;
audio_len = out_buffer_size;
audio_pos = audio_chunk;
SDL_PauseAudio(0);
#endif /* USE_SDL */
}
av_free_packet(packet); /* Free the packet */
}
swr_free(&au_convert_ctx); /* Free the resampling context */
#if USE_SDL
SDL_CloseAudio(); /* Stop SDL audio */
SDL_Quit(); /* Quit SDL */
#endif /* USE_SDL */
#if OUTPUT_PCM
fclose(pFile); /* Close the output file */
#endif /* OUTPUT_PCM */
av_free(out_buffer);
avcodec_close(pCodecCtx); /* Close the decoder */
avformat_close_input(&pFormatCtx); /* Close the input file */
return (0);
}
int main(int argc, char **argv)
{
if (argc != 2)
{
fprintf(stderr, "%s\n", "Usage : ./BASELFI [File]");
system("PAUSE");
return (-1);
}
else
{
reading_sound(argv[1]);
return (0);
}
return (0);
}
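Since the swr_convert() call above already converts to AV_CH_LAYOUT_MONO / AV_SAMPLE_FMT_S16, getting "two identical mono signals" can be as simple as copying the converted buffer. Below is a minimal sketch, assuming the sample count returned by swr_convert() is kept (the code above currently discards it); duplicate_mono() is a hypothetical helper, not an FFmpeg API:
#include <stdint.h>
#include <string.h>
#include <libavutil/mem.h>
#include <libavutil/samplefmt.h>
/* Hypothetical helper: copy a mono S16 buffer produced by swr_convert()
** into a newly allocated buffer, so the caller ends up with two
** independent, identical mono signals (release the copy with av_free()). */
static uint8_t *duplicate_mono(const uint8_t *mono, int nb_samples)
{
    int bytes = nb_samples * av_get_bytes_per_sample(AV_SAMPLE_FMT_S16); /* 1 channel */
    uint8_t *copy = (uint8_t *)av_malloc(bytes);

    if (copy != NULL)
        memcpy(copy, mono, bytes);
    return (copy);
}
At the call site this would look like nb = swr_convert(...); second_signal = duplicate_mono(out_buffer, nb);, after which out_buffer and second_signal can be written, played, or filtered independently.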