
Media (1)
-
Map of Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (71)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
-
Contribute to a better visual interface
13 April 2011. MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
-
XMP PHP
13 May 2011. According to Wikipedia, XMP stands for:
Extensible Metadata Platform, or XMP, an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it handles a set of dynamic tags for use in the context of the Semantic Web.
XMP makes it possible to store information about a file as an XML document: title, author, history (...)
On other sites (11249)
-
Green image when encoding a video frame to PNG
19 August 2013, by William Seemann. I wrote the following code to convert a decoded video frame to a PNG image. The code doesn't crash, but the image data stored in 'avpkt' results in an all-green image. What am I doing wrong? Any help would be appreciated.
// pFrame - the decoded frame
// avpkt - the packet to fill with the converted image
void convert_image(AVCodecContext *pCodecCtx, AVFrame *pFrame, AVPacket *avpkt, int *got_packet_ptr) {
    AVCodecContext *codecCtx;
    AVCodec *codec;

    *got_packet_ptr = 0;

    codec = avcodec_find_encoder(TARGET_IMAGE_CODEC);
    if (!codec) {
        printf("avcodec_find_encoder() failed to find encoder\n");
        goto fail;
    }

    codecCtx = avcodec_alloc_context3(codec);
    if (!codecCtx) {
        printf("avcodec_alloc_context3 failed\n");
        goto fail;
    }

    codecCtx->bit_rate = pCodecCtx->bit_rate;
    codecCtx->width = pCodecCtx->width;
    codecCtx->height = pCodecCtx->height;
    codecCtx->pix_fmt = TARGET_IMAGE_FORMAT;
    codecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
    codecCtx->time_base.num = pCodecCtx->time_base.num;
    codecCtx->time_base.den = pCodecCtx->time_base.den;

    if (!codec || avcodec_open2(codecCtx, codec, NULL) < 0) {
        printf("avcodec_open2() failed\n");
        goto fail;
    }

    int src_width = pCodecCtx->width;
    int src_height = pCodecCtx->height;
    enum PixelFormat src_pixfmt = pCodecCtx->pix_fmt;

    int dst_width = pCodecCtx->width;
    int dst_height = pCodecCtx->height;

    struct SwsContext *scalerCtx;
    scalerCtx = sws_getContext(src_width,
                               src_height,
                               src_pixfmt,
                               dst_width,
                               dst_height,
                               TARGET_IMAGE_FORMAT,
                               SWS_BILINEAR, //SWS_BICUBIC
                               NULL, NULL, NULL);
    if (!scalerCtx) {
        printf("sws_getContext() failed\n");
        goto fail;
    }

    AVFrame *pSrcFrame = avcodec_alloc_frame();
    if (!pSrcFrame) {
        goto fail;
    }

    AVFrame *pFrameRGB = avcodec_alloc_frame();
    if (!pFrameRGB) {
        goto fail;
    }

    if (avpicture_fill((AVPicture *) pSrcFrame,
                       pFrame->data,
                       src_pixfmt,
                       src_width,
                       src_height) < 0) {
        printf("avpicture_fill() failed\n");
        goto fail;
    }

    int numBytes = avpicture_get_size(TARGET_IMAGE_FORMAT, src_width, src_height);
    uint8_t *buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));

    if (avpicture_fill((AVPicture *) pFrameRGB,
                       buffer,
                       TARGET_IMAGE_FORMAT,
                       src_width,
                       src_height) < 0) {
        printf("avpicture_fill() failed\n");
        goto fail;
    }

    sws_scale(scalerCtx,
              (const uint8_t * const *) pSrcFrame->data,
              pSrcFrame->linesize,
              0,
              src_height,
              pFrameRGB->data,
              pFrameRGB->linesize);

    int ret = avcodec_encode_video2(codecCtx, avpkt, pFrameRGB, got_packet_ptr);
    if (ret < 0) {
        *got_packet_ptr = 0;
    }

fail:
    if (codecCtx) {
        avcodec_close(codecCtx);
    }
    if (scalerCtx) {
        sws_freeContext(scalerCtx);
    }
    if (ret < 0 || !*got_packet_ptr) {
        av_free_packet(avpkt);
    }
}
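One hedged reading of the green output, offered as a guess rather than a verified fix: avpicture_fill() expects a pointer to a single contiguous pixel buffer, but pFrame->data is an array of per-plane pointers, so filling pSrcFrame from it most likely leaves sws_scale() reading from bogus plane addresses (a uniform green frame is typical of uninitialized YUV). A minimal sketch that reuses the names above and simply feeds the decoded frame's own planes and strides to the scaler:

// Hedged sketch, not the original poster's code: drop pSrcFrame and the first
// avpicture_fill() call, and hand the decoded frame straight to sws_scale().
sws_scale(scalerCtx,
          (const uint8_t * const *) pFrame->data,  // decoded frame's plane pointers
          pFrame->linesize,                        // decoded frame's strides
          0,
          src_height,
          pFrameRGB->data,
          pFrameRGB->linesize);

If the target really is PNG, an RGB pixel format such as AV_PIX_FMT_RGB24 for TARGET_IMAGE_FORMAT would be the usual choice; that part is an assumption, since the macro's definition is not shown in the question.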
-
avutil/{color_utils, csp}: merge color_utils into csp and expose API
30 January 2023, by Leo Izen
avutil/color_utils, csp: merge color_utils into csp and expose API
libavutil/color_utils contains some avpriv_ symbols that map enum AVTransferCharacteristic values to gamma-curve approximations and to the actual transfer functions to invert them (i.e. -> linear). There are two issues with this:
(1) avpriv is evil and should be avoided whenever possible
(2) libavutil/csp.h exposes a public API for handling color that already handles primaries and matrices
I don't see any reason this API has to be private, so this commit takes the functionality from avutil/color_utils and merges it into avutil/csp with an exposed av_ API rather than the previous avpriv_ API.
Every reference to the previous API has been updated to point to the new one. color_utils.h has been deleted as well. This should not break any applications, as it only contained avpriv_ symbols in the first place, so nothing in that header could be referenced by other applications.
Signed-off-by: Leo Izen <leo.izen@gmail.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
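As a brief illustration of what the merge exposes, here is a hedged sketch of calling the now-public helpers; the names av_csp_approximate_trc_gamma(), av_csp_trc_func_from_id() and the av_csp_trc_function typedef are my reading of what this commit adds to libavutil/csp.h and should be checked against the installed header:

/* Hedged sketch, assuming the av_ helpers introduced by this commit. */
#include <stdio.h>
#include <libavutil/csp.h>     /* av_csp_* helpers */
#include <libavutil/pixfmt.h>  /* AVCOL_TRC_* values */

int main(void)
{
    /* Gamma approximation for the BT.709 transfer characteristic. */
    double gamma = av_csp_approximate_trc_gamma(AVCOL_TRC_BT709);

    /* Function pointer implementing the BT.709 transfer function. */
    av_csp_trc_function trc = av_csp_trc_func_from_id(AVCOL_TRC_BT709);

    if (trc)
        printf("approx gamma %.3f, trc(0.5) = %.6f\n", gamma, trc(0.5));
    return 0;
}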
-
MP3 audio recording from an input device using the FFmpeg API
25 November 2014, by Hascoet Julien. I've been trying to use the ffmpeg library (I'm working in C with the ffmpeg API) to decode and encode MP3 from my microphone on Linux. The mp3lame lib is installed and I manage to open all codecs and to decode the input samples.
Here are my input settings:
Input #1, alsa, from 'default':
Duration: N/A, start: 1416946099.454877, bitrate: 1536 kb/s
Stream #1:0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
And I manage to decode it; it gives me 2 channels of 64 samples after calling avcodec_decode_audio4 right after av_read_frame. The output frame that avcodec_decode_audio4 just gave me seems to be OK, with 2 channels as well and 64 samples per channel. Packets have a size of 256, and 16-bit * 2 * 64 = 256 bytes, so that makes sense.
The problem is that when I try to encode this decoded frame with avcodec_encode_audio2 and the codec set to AV_CODEC_ID_MP3 (I don't have any codec opening issues), it gives me a segmentation fault (core dumped), whereas everything is allocated. I wonder whether I have too many samples or not enough, so that the encode function reads where nothing is allocated. Probably I have to use some resampling methods, but I have no clue. Has anyone ever tried to encode MP3 from an input device (a microphone) using the ffmpeg C API and to mux it into an mp3 file or even an avi file?
The ffmpeg command line works perfectly:
ffmpeg -f alsa -i default out.mp3
Here is my ffmpeg compilation setup with the preinstalled dependencies:
sudo apt-get install libasound2-dev
sudo apt-get install libmp3lame-dev
./configure --disable-static --enable-shared --enable-gpl --enable-libx264 --enable-libv4l2 --enable-gpl --enable-swscale --enable-libmp3lame
sudo make install
export LD_LIBRARY_PATH=/usr/local/lib
Thank you guys!
Here is the code I used; it is run with pthread afterwards (see main()):
#define DEFAULT_AUDIO_INPUT_DRIVER_NAME "alsa"
#define DEFAULT_AUDIO_INPUT_DEVICE_NAME "default"
#define DEFAULT_USED_AUDIO_CODEC AV_CODEC_ID_MP3
#define DEFAULT_OUTPUT_AUDIO_FRAMERATE 44100
#define DEFAULT_AUDIO_OUTPUT_FILE_NAME "audioTest.mp3"
/* Input and Output audio format.*/
static AVFormatContext *ifmt_ctx = NULL;
static AVFormatContext *ofmt_ctx = NULL; //from video
/* Codec contexts used to encode input and output. */
static AVCodecContext *dec_ctx = NULL;
static AVCodecContext *enc_ctx = NULL;
AVPacket audioPacket = { .data = NULL, .size = 0 };
AVPacket audioEncodedPacket = { .data = NULL, .size = 0 };
AVFrame *decodedAudioFrame = NULL;
AVFrame *rescaledAudioFrame = NULL;
AVStream *streamAudio = NULL;
AVCodec *audioEncodeCodec = NULL;
static struct SwrContext *swr_ctx;
/* Audio configuration */
char *AUDIO_INPUT_DRIVER_NAME = {DEFAULT_AUDIO_INPUT_DRIVER_NAME};
char *AUDIO_INPUT_DEVICE_NAME = {DEFAULT_AUDIO_INPUT_DEVICE_NAME};
enum AVCodecID AUDIO_ENCODED_CODEC_ID = DEFAULT_USED_AUDIO_CODEC;
int AUDIO_FRAME_RATE = DEFAULT_OUTPUT_AUDIO_FRAMERATE;
char* AUDIO_OUTPUT_FILE_NAME = {DEFAULT_AUDIO_OUTPUT_FILE_NAME};
static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt,
                                  uint64_t channel_layout,
                                  int sample_rate, int nb_samples)
{
    AVFrame *frame = av_frame_alloc();
    int ret;
    if (!frame) {
        syslog(LOG_ERR, "Error allocating an audio frame\n");
        exit(0);
    }
    frame->format = sample_fmt;
    frame->channel_layout = channel_layout;
    frame->sample_rate = sample_rate;
    frame->nb_samples = nb_samples;
    if (nb_samples) {
        ret = av_frame_get_buffer(frame, 0);
        if (ret < 0) {
            syslog(LOG_ERR, "Error allocating an audio buffer\n");
            exit(0);
        }
    }
    return frame;
}
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
{
    AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
    syslog(LOG_INFO, "AUDIO pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
           av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
           av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
           av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base), pkt->stream_index);
}
static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
{
    /* rescale output packet timestamp values from codec to stream timebase */
    //printf("Write Time Rescale\n");
    av_packet_rescale_ts(pkt, *time_base, st->time_base);
    pkt->stream_index = st->index;
    /* Write the compressed frame to the media file. */
    log_packet(fmt_ctx, pkt);
    //printf("Write In File Audio packet size of %d\n", pkt->size);
    //return av_interleaved_write_frame(fmt_ctx, pkt);
    return av_write_frame(fmt_ctx, pkt);
}
static void openAudioInput(const char *driverName, const char *deviceName){
    int i; AVInputFormat *file_iformat = NULL;
    if((file_iformat = av_find_input_format(driverName)) == NULL){
        syslog(LOG_ERR, "The %s doesn't seem to be registered\n", driverName);
        exit(0);
    }
    /* Open the device, in order to use the audio linux driver. */
    if((avformat_open_input(&ifmt_ctx, deviceName, file_iformat, NULL)) < 0){
        syslog(LOG_ERR, "Error while trying to open %s.\n", deviceName);
        exit(0);
    }
    if((avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
        syslog(LOG_ERR, "Cannot find stream information\n");
        exit(0);
    }
    /* Open decoder */
    //printf("Number of input stream: %d\n", ifmt_ctx->nb_streams);
    /*if(ifmt_ctx->streams[0]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
        printf("AUDIO_TYPE\n");*/
    for(i = 0; i < ifmt_ctx->nb_streams; i++)
        if(ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO
            || ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
            if(avcodec_open2(ifmt_ctx->streams[i]->codec,
                avcodec_find_decoder(ifmt_ctx->streams[i]->codec->codec_id), NULL) < 0){
                syslog(LOG_ERR, "Cannot find stream information\n");
                exit(0);
            }
    av_dump_format(ifmt_ctx, 1, deviceName, 0);
}
static void openAudioOutput(const char *deviceName, const char *fileName, enum AVCodecID encodeCodec){
    AVStream *out_stream = NULL, *in_stream = NULL;
    AVCodec *encoder;
    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, fileName);
    if(!ofmt_ctx){
        syslog(LOG_ERR, "Could not create output context\n");
        exit(0);
    }
    out_stream = avformat_new_stream(ofmt_ctx, NULL);
    if(!out_stream){
        syslog(LOG_ERR, "Failed allocating output stream\n");
        exit(0);
    }
    (ifmt_ctx!=NULL) ? in_stream = ifmt_ctx->streams[0] : exit(0);
    dec_ctx = in_stream->codec;
    enc_ctx = out_stream->codec;
    /* find encoder */
    encoder = avcodec_find_encoder(encodeCodec);
    enc_ctx->codec = encoder;
    /* AUDIO PARAMETERS */
    enc_ctx->sample_fmt = encoder->sample_fmts[0];
    enc_ctx->bit_rate = 128000; //added
    enc_ctx->sample_rate = dec_ctx->sample_rate;
    enc_ctx->channel_layout = AV_CH_LAYOUT_MONO; //dec_ctx->channel_layout;
    out_stream->time_base = enc_ctx->time_base = (AVRational){1, enc_ctx->sample_rate};
    enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
    printf("Sample Rate: %d Number Encoded channels: %d\n", dec_ctx->sample_rate, enc_ctx->channels);
    /* Open encoder with the found codec */
    if(avcodec_open2(enc_ctx, encoder, NULL) < 0) {
        syslog(LOG_ERR, "Cannot open audio encoder for stream\n");
        exit(0);
    }
    av_dump_format(ofmt_ctx, 0, fileName, 1);
    if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
        if(avio_open(&ofmt_ctx->pb, fileName, AVIO_FLAG_WRITE) < 0){
            syslog(LOG_ERR, "Could not open output file '%s'", fileName);
            exit(0);
        }
    /* init muxer, write output file header */
    if(avformat_write_header(ofmt_ctx, NULL) < 0){
        syslog(LOG_ERR, "Error occurred when opening output file\n");
        exit(0);
    }
    decodedAudioFrame = av_frame_alloc();
    rescaledAudioFrame = av_frame_alloc();
}
void initAudio(void){
    openAudioInput(AUDIO_INPUT_DRIVER_NAME, AUDIO_INPUT_DEVICE_NAME);
    openAudioOutput(AUDIO_INPUT_DEVICE_NAME, AUDIO_OUTPUT_FILE_NAME, AUDIO_ENCODED_CODEC_ID);
}
void *audioThread(void){
    int16_t *samples;
    int gotDecodedFrame, dst_nb_samples, samples_count = 0;
    int packetCounter = 0;
    int i = 0, got_packet, got_input, ret;
    float sizeOfFile = 0;
    AVPacket packet = { .data = NULL, .size = 0 };
    struct timespec t0, t1;
    int flags = fcntl(0, F_GETFL);
    flags = fcntl(0, F_SETFL, flags | O_NONBLOCK); //set non-blocking read on stdin
    packetCounter = 0;
    do{
        clock_gettime(CLOCK_REALTIME, &t0);
        if ((av_read_frame(ifmt_ctx, &audioPacket)) < 0){
            break;
        }
        packetCounter++;
        clock_gettime(CLOCK_REALTIME, &t1);
        av_init_packet(&audioEncodedPacket);
        audioEncodedPacket.data = NULL;
        audioEncodedPacket.size = 0;
        if (avcodec_decode_audio4(dec_ctx, decodedAudioFrame, &gotDecodedFrame, &audioPacket) < 0) {
            syslog(LOG_ERR, "Can't Decode the packet received from the camera.\n");
            exit(0);
        }
        printf("Audio Decoded, Nb Channel %d, Samples per Channel %d, Size %d, PTS %ld\n",
               decodedAudioFrame->channels,
               decodedAudioFrame->nb_samples,
               decodedAudioFrame->pkt_size,
               decodedAudioFrame->pkt_pts);
        /*if((ret = swr_convert(swr_ctx, rescaledAudioFrame->data, rescaledAudioFrame->nb_samples,
               (const uint8_t **)decodedAudioFrame->data, decodedAudioFrame->nb_samples)) < 0){
            syslog(LOG_ERR, "Error while converting\n");
            exit(0);
        }*/
        //decodedAudioFrame->pts = audioPacket.pts; //(int64_t)((1.0 / (float)64000) * (float)90 * (float)packetCounter);
        ret = avcodec_encode_audio2(enc_ctx, &audioEncodedPacket, decodedAudioFrame, &got_packet);
        printf("Audio encoded packet size = %d, packet nb = %d, sample rate = %d Ret Value = %d\n", audioEncodedPacket.size, packetCounter, enc_ctx->sample_rate, ret);
        audioPacket.pts = (int64_t)((1.0 / (float)enc_ctx->sample_rate) * (float)90 * (float)packetCounter);
        audioPacket.dts = audioPacket.pts - 1;
        ret = write_frame(ofmt_ctx, &enc_ctx->time_base, streamAudio, &audioEncodedPacket);
        av_free_packet(&audioEncodedPacket);
        ssize_t readVal = read(0, &videoAudioThreadExit, 1); // read non-blocking
    }while(videoAudioThreadExit != 'q');
    syslog(LOG_INFO, "End Audio Thread\n");
    return NULL;
}
int main(int argc, char** argv){
    int i = 0;
    openlog("TEST", LOG_CONS | LOG_PERROR | LOG_NDELAY, LOG_USER);
    syslog(LOG_INFO, "Syslog correctly loaded.\n");
    syslog(LOG_INFO, "Program started by user UID %d\n", getuid());
    av_register_all();
    avdevice_register_all();
    avcodec_register_all();
    avfilter_register_all();
    printf("\n\n\t START GLOBAL INIT\n");
    initAudio();
    pthread_create(&t[0], &ctrl[0], (void*)audioThread, NULL);
    for(i=0;i
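A hedged note on the crash itself, not part of the original question: with the MP3 encoder, enc_ctx->frame_size is fixed at 1152 samples after avcodec_open2(), while the ALSA decoder delivers only 64 samples per packet, and libmp3lame expects a planar sample format rather than the interleaved s16 coming from the capture. Handing such a frame straight to avcodec_encode_audio2() can end up reading from buffers that were never allocated for that layout, which is a plausible cause of the segmentation fault. One common pattern, sketched below under the assumption that the samples have already been run through swr_convert() to the encoder's sample format and channel layout, is to stage them in an AVAudioFifo and encode in frame_size chunks; it reuses enc_ctx, ofmt_ctx, write_frame() and alloc_audio_frame() from the code above:

/* Hedged sketch, not the poster's code: buffer converted samples and only
 * feed the encoder frames of exactly enc_ctx->frame_size samples. */
#include <libavutil/audio_fifo.h>

static AVAudioFifo *fifo;   /* created once, e.g. in openAudioOutput() */
static int64_t next_pts;    /* running pts in 1/sample_rate units */

static void encodeBufferedAudio(AVFrame *convertedFrame)
{
    if (!fifo)
        fifo = av_audio_fifo_alloc(enc_ctx->sample_fmt, enc_ctx->channels,
                                   enc_ctx->frame_size);

    /* Queue whatever the converter produced (often 64 samples per ALSA packet). */
    av_audio_fifo_write(fifo, (void **)convertedFrame->data,
                        convertedFrame->nb_samples);

    /* Drain in encoder-sized chunks (1152 samples for libmp3lame). */
    while (av_audio_fifo_size(fifo) >= enc_ctx->frame_size) {
        AVFrame *frame = alloc_audio_frame(enc_ctx->sample_fmt,
                                           enc_ctx->channel_layout,
                                           enc_ctx->sample_rate,
                                           enc_ctx->frame_size);
        AVPacket pkt = { .data = NULL, .size = 0 };
        int got_packet = 0;

        av_init_packet(&pkt);
        av_audio_fifo_read(fifo, (void **)frame->data, enc_ctx->frame_size);
        frame->pts = next_pts;
        next_pts  += enc_ctx->frame_size;

        if (avcodec_encode_audio2(enc_ctx, &pkt, frame, &got_packet) >= 0 && got_packet)
            write_frame(ofmt_ctx, &enc_ctx->time_base, ofmt_ctx->streams[0], &pkt);

        av_free_packet(&pkt);
        av_frame_free(&frame);
    }
}

Any samples left over at the end would still need a final flush (one short frame, then avcodec_encode_audio2() with a NULL frame until got_packet stays 0); that part is left out of the sketch.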