
Other articles (28)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the sources of MediaSPIP, in the standalone version.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also have to make further changes (...)
-
Making the files available
14 April 2011
By default, on initialisation, MediaSPIP does not allow visitors to download the files, whether they are the originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible, and easy, to give visitors access to these documents in various forms.
All of this happens in the template configuration page. You need to go to the channel's administration area and choose, in the navigation, (...)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP, in the standalone version.
To get a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also have to make further changes (...)
On other sites (2150)
-
MP3 audio recording from an input device using the FFmpeg API
25 November 2014, by Hascoet Julien
I've been trying to use the ffmpeg library (I'm working in C with the ffmpeg API) to decode audio from my microphone on Linux and encode it to MP3. The mp3lame lib is installed, and I manage to open all the codecs and to decode the input samples.
Here are my input settings:
Input #1, alsa, from 'default':
  Duration: N/A, start: 1416946099.454877, bitrate: 1536 kb/s
    Stream #1:0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
And I manage to decode it: it gives me 2 channels of 64 samples after calling avcodec_decode_audio4 right after av_read_frame. The output frame that avcodec_decode_audio4 just gave me also seems to be fine, with 2 channels and 64 samples per channel. The packets have a size of 256, and 16 bits * 2 channels * 64 samples = 256 bytes, so that makes sense.
The problem is that when I try to encode this decoded frame with avcodec_encode_audio2, with the codec set to AV_CODEC_ID_MP3 (I don't have any codec-opening issues), it gives me a segmentation fault (core dumped), even though everything is allocated. I wonder whether I have too many samples, or not enough, so that the encode function reads where nothing is allocated. I probably have to use some resampling method, but I have no clue.
Has anyone ever tried to encode MP3 from an input device (a microphone) using the ffmpeg C API and to mux it into an mp3 file, or even an avi file?
The ffmpeg command line works perfectly:
ffmpeg -f alsa -i default out.mp3
Here is my ffmpeg compilation setup, with the packages installed beforehand:
sudo apt-get install libasound2-dev
sudo apt-get install libmp3lame-dev
./configure --disable-static --enable-shared --enable-gpl --enable-libx264 --enable-libv4l2 --enable-gpl --enable-swscale --enable-libmp3lame
sudo make install
export LD_LIBRARY_PATH=/usr/local/lib
Thank you guys!
Here is the code I used; it is run in a pthread afterwards (see main()):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <pthread.h>
#include <syslog.h>
#include <time.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavdevice/avdevice.h>
#include <libavfilter/avfilter.h>
#include <libavutil/channel_layout.h>
#include <libavutil/timestamp.h>
#include <libswresample/swresample.h>
/* Assumed declarations: these are referenced below but not shown in the question's excerpt. */
static pthread_t t[1];
static pthread_attr_t ctrl[1];
static char videoAudioThreadExit = 0;
#define DEFAULT_AUDIO_INPUT_DRIVER_NAME "alsa"
#define DEFAULT_AUDIO_INPUT_DEVICE_NAME "default"
#define DEFAULT_USED_AUDIO_CODEC AV_CODEC_ID_MP3
#define DEFAULT_OUTPUT_AUDIO_FRAMERATE 44100
#define DEFAULT_AUDIO_OUTPUT_FILE_NAME "audioTest.mp3"
/* Input and Output audio format.*/
static AVFormatContext *ifmt_ctx = NULL;
static AVFormatContext *ofmt_ctx = NULL; //from video
/* Codec contexts used to encode input and output. */
static AVCodecContext *dec_ctx = NULL;
static AVCodecContext *enc_ctx = NULL;
AVPacket audioPacket = { .data = NULL, .size = 0 };
AVPacket audioEncodedPacket = { .data = NULL, .size = 0 };
AVFrame *decodedAudioFrame = NULL;
AVFrame *rescaledAudioFrame = NULL;
AVStream *streamAudio = NULL;
AVCodec *audioEncodeCodec = NULL;
static struct SwrContext *swr_ctx;
/* Audio configuration */
char *AUDIO_INPUT_DRIVER_NAME = {DEFAULT_AUDIO_INPUT_DRIVER_NAME};
char *AUDIO_INPUT_DEVICE_NAME = {DEFAULT_AUDIO_INPUT_DEVICE_NAME};
enum AVCodecID AUDIO_ENCODED_CODEC_ID = DEFAULT_USED_AUDIO_CODEC;
int AUDIO_FRAME_RATE = DEFAULT_OUTPUT_AUDIO_FRAMERATE;
char* AUDIO_OUTPUT_FILE_NAME = {DEFAULT_AUDIO_OUTPUT_FILE_NAME};
static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt,
uint64_t channel_layout,
int sample_rate, int nb_samples)
{
AVFrame *frame = av_frame_alloc();
int ret;
if (!frame) {
syslog(LOG_ERR, "Error allocating an audio frame\n");
exit(0);
}
frame->format = sample_fmt;
frame->channel_layout = channel_layout;
frame->sample_rate = sample_rate;
frame->nb_samples = nb_samples;
if (nb_samples) {
ret = av_frame_get_buffer(frame, 0);
if (ret < 0) {
syslog(LOG_ERR, "Error allocating an audio buffer\n");
exit(0);
}
}
return frame;
}
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
{
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
syslog(LOG_INFO, "AUDIO pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base), pkt->stream_index);
}
static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
{
/* rescale output packet timestamp values from codec to stream timebase */
//printf("Write Time Rescale\n");
av_packet_rescale_ts(pkt, *time_base, st->time_base);
pkt->stream_index = st->index;
/* Write the compressed frame to the media file. */
log_packet(fmt_ctx, pkt);
//printf("Write In File Audio packet size of %d\n", pkt->size);
//return av_interleaved_write_frame(fmt_ctx, pkt);
return av_write_frame(fmt_ctx, pkt);
}
static void openAudioInput(const char *driverName, const char *deviceName){
int i; AVInputFormat *file_iformat = NULL;
if((file_iformat = av_find_input_format(driverName)) == NULL){
syslog(LOG_ERR ,"The %s doesn't seem to be registered\n", driverName);
exit(0);
}
/* Open the device, in order to use the audio linux driver. */
if((avformat_open_input(&ifmt_ctx, deviceName, file_iformat, NULL)) < 0){
syslog(LOG_ERR ,"Error while trying to open %s.\n", deviceName);
exit(0);
}
if((avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
syslog(LOG_ERR, "Cannot find stream information\n");
exit(0);
}
/* Open decoder */
//printf("Number of input stream: %d\n", ifmt_ctx->nb_streams);
/*if(ifmt_ctx->streams[0]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
printf("AUDIO_TYPE\n");*/
for(i = 0; i < ifmt_ctx->nb_streams; i++)
if(ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO
|| ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
if(avcodec_open2(ifmt_ctx->streams[i]->codec,
avcodec_find_decoder(ifmt_ctx->streams[i]->codec->codec_id), NULL) < 0){
syslog(LOG_ERR, "Cannot find stream information\n");
exit(0);
}
av_dump_format(ifmt_ctx, 1, deviceName, 0);
}
static void openAudioOutput(const char *deviceName, const char *fileName, enum AVCodecID encodeCodec){
AVStream *out_stream = NULL, *in_stream = NULL;
AVCodec *encoder;
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, fileName);
if(!ofmt_ctx){
syslog(LOG_ERR, "Could not create output context\n");
exit(0);
}
out_stream = avformat_new_stream(ofmt_ctx, NULL);
if(!out_stream){
syslog(LOG_ERR, "Failed allocating output stream\n");
exit(0);
}
if (ifmt_ctx != NULL) in_stream = ifmt_ctx->streams[0]; else exit(0);
dec_ctx = in_stream->codec;
enc_ctx = out_stream->codec;
/* find encoder */
encoder = avcodec_find_encoder(encodeCodec);
enc_ctx->codec = encoder;
/* AUDIO PARAMETERS */
enc_ctx->sample_fmt = encoder->sample_fmts[0];
enc_ctx->bit_rate = 128000; //added
enc_ctx->sample_rate = dec_ctx->sample_rate;
enc_ctx->channel_layout = AV_CH_LAYOUT_MONO;//dec_ctx->channel_layout;
out_stream->time_base = enc_ctx->time_base = (AVRational){1, enc_ctx->sample_rate};
enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
printf("Sample Rate: %d Number Encoded channels: %d\n", dec_ctx->sample_rate, enc_ctx->channels);
/* Open encoder with the found codec */
if(avcodec_open2(enc_ctx, encoder, NULL) < 0) {
syslog(LOG_ERR, "Cannot open audio encoder for stream\n");
exit(0);
}
av_dump_format(ofmt_ctx, 0, fileName, 1);
if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
if(avio_open(&ofmt_ctx->pb, fileName, AVIO_FLAG_WRITE) < 0){
syslog(LOG_ERR, "Could not open output file '%s'", fileName);
exit(0);
}
/* init muxer, write output file header */
if(avformat_write_header(ofmt_ctx, NULL) < 0){
syslog(LOG_ERR, "Error occurred when opening output file\n");
exit(0);
}
decodedAudioFrame = av_frame_alloc();
rescaledAudioFrame = av_frame_alloc();
}
void initAudio(void){
openAudioInput(AUDIO_INPUT_DRIVER_NAME, AUDIO_INPUT_DEVICE_NAME);
openAudioOutput(AUDIO_INPUT_DEVICE_NAME, AUDIO_OUTPUT_FILE_NAME, AUDIO_ENCODED_CODEC_ID);
}
void *audioThread(void){
int16_t * samples;
int gotDecodedFrame, dst_nb_samples, samples_count=0;
int packetCounter = 0;
int i = 0, got_packet, got_input, ret;
float sizeOfFile = 0;
AVPacket packet = { .data = NULL, .size = 0 };
struct timespec t0, t1;
int flags = fcntl(0, F_GETFL);
flags = fcntl(0, F_SETFL, flags | O_NONBLOCK); //set non-blocking read on stdin
packetCounter = 0;
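/* Capture loop: read a packet from the ALSA input, decode it to PCM, re-encode it as MP3 and write it to the output file, until a 'q' is read on stdin. */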
do{
clock_gettime(CLOCK_REALTIME, &t0);
if ((av_read_frame(ifmt_ctx, &audioPacket)) < 0){
break;
}
packetCounter++;
clock_gettime(CLOCK_REALTIME, &t1);
av_init_packet(&audioEncodedPacket);
audioEncodedPacket.data = NULL;
audioEncodedPacket.size = 0;
if (avcodec_decode_audio4(dec_ctx, decodedAudioFrame, &gotDecodedFrame, &audioPacket) < 0) {
syslog(LOG_ERR ,"Can't Decode the packet received from the camera.\n");
exit(0);
}
printf("Audio Decoded, Nb Channel %d, Samples per Channel %d, Size %d, PTS %ld\n",
decodedAudioFrame->channels,
decodedAudioFrame->nb_samples,
decodedAudioFrame->pkt_size,
decodedAudioFrame->pkt_pts);
/*if((ret = swr_convert(swr_ctx, rescaledAudioFrame->data, rescaledAudioFrame->nb_samples,
(const uint8_t **)decodedAudioFrame->data, decodedAudioFrame->nb_samples)) < 0){
syslog(LOG_ERR, "Error while converting\n");
exit(0);
}*/
//decodedAudioFrame->pts = audioPacket.pts;//(int64_t)((1.0 / (float)64000) * (float)90 * (float)packetCounter);
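/* Encode the decoded PCM frame with avcodec_encode_audio2; this is the call the question reports as segfaulting. */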
ret = avcodec_encode_audio2(enc_ctx, &audioEncodedPacket, decodedAudioFrame, &got_packet);
printf("Audio encoded packet size = %d, packet nb = %d, sample rate = %d Ret Value = %d\n", audioEncodedPacket.size, packetCounter, enc_ctx->sample_rate, ret);
audioPacket.pts = (int64_t)((1.0 / (float)enc_ctx->sample_rate) * (float)90 * (float)packetCounter);
audioPacket.dts = audioPacket.pts-1;
ret = write_frame(ofmt_ctx, &enc_ctx->time_base, streamAudio, &audioEncodedPacket);
av_free_packet(&audioEncodedPacket);
ssize_t readVal = read(0, &videoAudioThreadExit, 1); // read non-blocking
}while(videoAudioThreadExit != 'q');
syslog(LOG_INFO ,"End Audio Thread\n");
return NULL;
}
int main(int argc, char** argv){
int i=0;
openlog ("TEST", LOG_CONS | LOG_PERROR | LOG_NDELAY, LOG_USER);
syslog (LOG_INFO, "Syslog correctly loaded.\n");
syslog (LOG_INFO, "Program started by user UID %d\n", getuid ());
av_register_all();
avdevice_register_all();
avcodec_register_all();
avfilter_register_all();
printf("\n\n\t START GLOBAL INIT\n");
initAudio();
pthread_create(&t[0], &ctrl[0], (void*)audioThread, NULL);
for(i = 0; i < 1; i++) /* assumed join loop; the question's listing is truncated at this point in the source */
pthread_join(t[i], NULL);
return 0;
}
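For context, here is a minimal sketch of the resampling step the question hints at, using the same FFmpeg 2.x-era API as the listing above. It is an assumption about one possible approach, not the asker's code: the MP3 encoder consumes enc_ctx->frame_size samples per call in its own, usually planar, sample format, so the decoded 64-sample s16 frames would normally be converted with libswresample and buffered in an AVAudioFifo until a full encoder frame is available. alloc_audio_frame and write_frame are the helpers defined in the listing; every other name is illustrative.

#include <libswresample/swresample.h>
#include <libavutil/audio_fifo.h>
#include <libavutil/channel_layout.h>
#include <libavutil/samplefmt.h>

static struct SwrContext *swr;
static AVAudioFifo *fifo;
static int64_t next_pts = 0;

/* Build a converter from the decoder's layout/format/rate to the encoder's. */
static int init_resampler(AVCodecContext *dec, AVCodecContext *enc)
{
    swr = swr_alloc_set_opts(NULL,
                             enc->channel_layout, enc->sample_fmt, enc->sample_rate,
                             av_get_default_channel_layout(dec->channels),
                             dec->sample_fmt, dec->sample_rate,
                             0, NULL);
    if (!swr || swr_init(swr) < 0)
        return -1;
    fifo = av_audio_fifo_alloc(enc->sample_fmt, enc->channels, 1);
    return fifo ? 0 : -1;
}

/* Convert one decoded frame and push the converted samples into the FIFO. */
static int queue_samples(AVCodecContext *enc, AVFrame *decoded)
{
    uint8_t **converted = NULL;
    int n = av_samples_alloc_array_and_samples(&converted, NULL, enc->channels,
                                               decoded->nb_samples, enc->sample_fmt, 0);
    if (n < 0)
        return n;
    n = swr_convert(swr, converted, decoded->nb_samples,
                    (const uint8_t **)decoded->extended_data, decoded->nb_samples);
    if (n > 0)
        av_audio_fifo_write(fifo, (void **)converted, n);
    av_freep(&converted[0]);
    av_freep(&converted);
    return n;
}

/* Encode only whole frames of enc->frame_size samples pulled from the FIFO. */
static void encode_queued(AVCodecContext *enc, AVFormatContext *ofmt, AVStream *st)
{
    while (av_audio_fifo_size(fifo) >= enc->frame_size) {
        AVFrame *f = alloc_audio_frame(enc->sample_fmt, enc->channel_layout,
                                       enc->sample_rate, enc->frame_size);
        AVPacket pkt = { .data = NULL, .size = 0 };
        int got = 0;
        av_init_packet(&pkt);
        av_audio_fifo_read(fifo, (void **)f->data, enc->frame_size);
        f->pts = next_pts;
        next_pts += f->nb_samples;
        if (avcodec_encode_audio2(enc, &pkt, f, &got) >= 0 && got)
            write_frame(ofmt, &enc->time_base, st, &pkt);
        av_free_packet(&pkt);
        av_frame_free(&f);
    }
}

The point of the FIFO is that the encoder is never handed fewer samples than enc_ctx->frame_size, which is one plausible cause of the crash described above.
-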
Screen capture with libavcodec/ffmpeg, and write it to mp4 file
10 February 2015, by Drakkainen
I'm trying to (programmatically) record the screen with the DirectShow screen driver. I wrote some quick and (very) dirty code to try to get that to work (http://pastebin.com/ZJuhZRCz), based on the ffmpeg examples, but I have a lot of trouble figuring out what time_base/framerate to use. If I leave the time_base/framerate parts empty I only get a single still frame. If I change them to any values, the video just turns black.
I’m guessing it has something to do with the output file settings, but I just ran out of ideas on what to try. Any pointers/help would be greatly appreciated.
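For what it's worth, here is a minimal sketch, not taken from the pastebin, of how the time bases are commonly wired up for a fixed-rate grab with the same deprecated avcodec_encode_video2 API; the CAPTURE_FPS constant, the helper names and the surrounding contexts are all assumptions.

#include <libavformat/avformat.h>

#define CAPTURE_FPS 25  /* assumed fixed grab rate */

/* Tie the encoder time base to the capture rate; the muxer may still
 * adjust the stream time base in avformat_write_header(). */
static void setup_capture_timing(AVCodecContext *enc_ctx, AVStream *out_stream)
{
    enc_ctx->time_base = (AVRational){1, CAPTURE_FPS};  /* one tick per grabbed frame */
    out_stream->time_base = enc_ctx->time_base;
}

/* Stamp each grabbed frame in encoder ticks, encode it, then rescale the
 * packet timestamps to the (possibly rewritten) stream time base. */
static int encode_grabbed_frame(AVFormatContext *ofmt_ctx, AVCodecContext *enc_ctx,
                                AVStream *out_stream, AVFrame *frame, int64_t frame_index)
{
    AVPacket pkt = { .data = NULL, .size = 0 };
    int got = 0, ret;

    av_init_packet(&pkt);
    frame->pts = frame_index;  /* 0, 1, 2, ... */
    ret = avcodec_encode_video2(enc_ctx, &pkt, frame, &got);
    if (ret < 0 || !got)
        return ret;
    av_packet_rescale_ts(&pkt, enc_ctx->time_base, out_stream->time_base);
    pkt.stream_index = out_stream->index;
    ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
    av_free_packet(&pkt);
    return ret;
}

Leaving the time base and the frame pts unset leaves the muxer without usable timestamps, which could explain the single-still-frame symptom.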
-
FFMPEG with hardware codec support
28 January 2012, by mctma
I have built a simple media player using ffmpeg on Android 2.2. The hardware is an ARM Cortex-A8 based 1 GHz processor with 512 MB of RAM. I am getting low performance, around 15 FPS for 800x600 mp4 video. I have a couple of questions on how I can improve the performance:
-
How can I use the hardware codecs available on my target device? How can I configure ffmpeg to use the available hardware decoders? Does the GPU or graphics driver have to expose some standard API, like OpenMAX IL, in order to do this?
-
What are the options that should be enabled when building ffmpeg so that it is optimized for my target hardware? Something like:
--cpu=cortex-a8 --extra-cflags="-mfpu=neon" ...
I have already looked around the net but I couldn't find the answers I am looking for. I hope someone can advise me on this.
Thanks in advance!
-