
Media (91)
-
MediaSPIP Simple: future default graphic theme?
26 September 2013, by
Updated: October 2013
Language: French
Type: Video
-
with chosen
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
without chosen
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
chosen config
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
SPIP - plugins - embed code - Example
2 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
Other articles (12)
-
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administer" section of the site.
From there, the navigation menu gives you access to a "Language management" section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language; once one has, it becomes greyed out in the configuration and (...) -
Apache-specific configuration
4 February 2011, by
Specific modules
For the Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but improve performance: mod_deflate and mod_headers, so that Apache compresses pages automatically (see this tutorial); mod_expires, to handle hit expiration correctly (see this tutorial);
It is also advisable to add Apache support for the WebM MIME type, as described in this tutorial.
Creating a (...) -
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (3446)
-
How to resample audio with the ffmpeg API?
13 May 2016, by tangren
I'd like to decode an audio file (.wav, .aac, etc.) into raw data (PCM) and then resample it with the ffmpeg API.
I tried to do it with the help of the official examples. However, the result file always loses a little data at the end compared to the file generated by the ffmpeg command:
ffmpeg -i audio.wav -f s16le -ac 1 -ar 8000 -acodec pcm_s16le out.pcm
In this case, the input audio metadata is:
pcm_s16le, 16000 Hz, 1 channels, s16, 256 kb/s
And 32 bytes are lost at the end of the result file.
What's more, I'd like to process audio in memory, so how do I calculate the size of the audio's raw data (PCM)?
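The tail that goes missing here is most likely the handful of samples still buffered inside libswresample when the read loop ends (16 s16 mono samples at 8 kHz is exactly 32 bytes): once the last packet has been decoded, swr_convert() needs to be called again with a NULL input to drain it. One possible sketch of such a flush, reusing the variable names from the code below (a sketch only, not something taken from the post):

/* Assumed flush step, to run after the av_read_frame() loop: passing
 * NULL/0 as input asks swr_convert() to drain whatever it still buffers.
 * Variable names follow the code below. */
if (swr_ctx) {
    int flushed;
    while ((flushed = swr_convert(swr_ctx, dst_data, dst_nb_samples, NULL, 0)) > 0) {
        int flush_size = av_samples_get_buffer_size(&dst_linesize, nb_channels,
                                                    flushed, dst_sample_fmt, 1);
        if (flush_size < 0)
            break;
        fwrite(dst_data[0], 1, flush_size, audio_dst_file);
    }
}

As for sizing raw PCM in memory: a packed buffer occupies nb_samples * channels * bytes_per_sample bytes (av_samples_get_buffer_size() computes the same figure), and after resampling nb_samples is roughly the input duration in seconds times dst_sample_rate, give or take the resampler delay.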
My code:
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
#include <libavutil/samplefmt.h>
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
#include <libavutil/channel_layout.h>
#include <libswresample/swresample.h>

/* forward declaration for the helper defined further down */
static int open_codec_context(int *stream_idx, AVFormatContext *fmt_ctx, enum AVMediaType type);
int convert_to_PCM(const char* src_audio_filename, const char* dst_pcm_filename, int dst_sample_rate)
{
    FILE *audio_dst_file = NULL;

    // for demuxing and decoding
    int ret = 0, got_frame, decoded, tmp;
    AVFormatContext *fmt_ctx = NULL;
    AVCodecContext *audio_dec_ctx;
    AVStream *audio_stream = NULL;
    int audio_stream_idx = -1;
    AVFrame *frame = NULL;
    AVPacket pkt;

    // for resampling
    struct SwrContext *swr_ctx = NULL;
    int src_sample_rate;
    uint8_t **dst_data = NULL;
    int dst_nb_samples, max_dst_nb_samples;
    enum AVSampleFormat src_sample_fmt, dst_sample_fmt = AV_SAMPLE_FMT_S16;
    int nb_channels = 1;
    int dst_bufsize, dst_linesize;
    int64_t src_ch_layout, dst_ch_layout;

    dst_ch_layout = src_ch_layout = av_get_default_channel_layout(1);

    av_register_all();

    if (avformat_open_input(&fmt_ctx, src_audio_filename, NULL, NULL) < 0) {
        fprintf(stderr, "Could not open source file %s\n", src_audio_filename);
        return 1;
    }

    /* retrieve stream information */
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        fprintf(stderr, "Could not find stream information\n");
        return 1;
    }

    if (open_codec_context(&audio_stream_idx, fmt_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {
        audio_stream = fmt_ctx->streams[audio_stream_idx];
        audio_dec_ctx = audio_stream->codec;

        audio_dst_file = fopen(dst_pcm_filename, "wb");
        if (!audio_dst_file) {
            fprintf(stderr, "Could not open destination file %s\n", dst_pcm_filename);
            ret = 1;
            goto end;
        }
    }

    /* dump input information to stderr */
    av_dump_format(fmt_ctx, 0, src_audio_filename, 0);

    if (!audio_stream) {
        fprintf(stderr, "Could not find audio stream in the input, aborting\n");
        ret = 1;
        goto end;
    }

    src_sample_rate = audio_dec_ctx->sample_rate;
    if (src_sample_rate != dst_sample_rate) {
        /* create resampler context */
        swr_ctx = swr_alloc();
        if (!swr_ctx) {
            fprintf(stderr, "Could not allocate resampler context\n");
            ret = AVERROR(ENOMEM);
            goto end;
        }

        src_sample_fmt = audio_dec_ctx->sample_fmt;

        /* set options */
        av_opt_set_int(swr_ctx, "in_channel_layout", src_ch_layout, 0);
        av_opt_set_int(swr_ctx, "in_sample_rate", src_sample_rate, 0);
        av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", src_sample_fmt, 0);
        av_opt_set_int(swr_ctx, "out_channel_layout", dst_ch_layout, 0);
        av_opt_set_int(swr_ctx, "out_sample_rate", dst_sample_rate, 0);
        av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", dst_sample_fmt, 0);

        /* initialize the resampling context */
        if ((ret = swr_init(swr_ctx)) < 0) {
            fprintf(stderr, "Failed to initialize the resampling context\n");
            goto end;
        }
    }

    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate frame\n");
        ret = AVERROR(ENOMEM);
        goto end;
    }

    /* initialize packet, set data to NULL, let the demuxer fill it */
    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;

    printf("Convert audio from file '%s' into '%s'\n", src_audio_filename, dst_pcm_filename);

    /* read frames from the file */
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        AVPacket orig_pkt = pkt;
        do {
            decoded = pkt.size;
            got_frame = 0;
            tmp = 0;

            /* decode audio frame */
            tmp = avcodec_decode_audio4(audio_dec_ctx, frame, &got_frame, &pkt);
            if (tmp < 0) {
                fprintf(stderr, "Error decoding audio frame (%s)\n", av_err2str(tmp));
                break;
            }
            decoded = FFMIN(tmp, pkt.size);

            if (got_frame) {
                if (swr_ctx) {
                    if (dst_data == NULL) {
                        max_dst_nb_samples = dst_nb_samples = av_rescale_rnd(frame->nb_samples, dst_sample_rate, src_sample_rate, AV_ROUND_UP);
                        ret = av_samples_alloc_array_and_samples(&dst_data, &dst_linesize, nb_channels,
                                                                 dst_nb_samples, dst_sample_fmt, 0);
                        if (ret < 0) {
                            fprintf(stderr, "Could not allocate destination samples\n");
                            goto end;
                        }
                    }

                    dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, src_sample_rate) + frame->nb_samples, dst_sample_rate, src_sample_rate, AV_ROUND_UP);
                    if (dst_nb_samples > max_dst_nb_samples) {
                        av_freep(&dst_data[0]);
                        ret = av_samples_alloc(dst_data, &dst_linesize, nb_channels, dst_nb_samples, dst_sample_fmt, 1);
                        if (ret < 0)
                            break;
                        max_dst_nb_samples = dst_nb_samples;
                    }

                    /* convert to destination format */
                    ret = swr_convert(swr_ctx, dst_data, dst_nb_samples, (const uint8_t **)frame->data, frame->nb_samples);
                    if (ret < 0) {
                        fprintf(stderr, "Error while converting\n");
                        goto end;
                    }
                    dst_bufsize = av_samples_get_buffer_size(&dst_linesize, nb_channels, ret, dst_sample_fmt, 1);
                    if (dst_bufsize < 0) {
                        fprintf(stderr, "Could not get sample buffer size\n");
                        goto end;
                    }
                    printf("Decoded size: %d, dst_nb_samples: %d, converted size: %d, sample buffer size: %d\n", decoded, dst_nb_samples, ret, dst_bufsize);
                    fwrite(dst_data[0], 1, dst_bufsize, audio_dst_file);
                }
                else {
                    size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
                    fwrite(frame->extended_data[0], 1, unpadded_linesize, audio_dst_file);
                }
            }

            if (decoded < 0)
                break;
            pkt.data += tmp;
            pkt.size -= tmp;
        } while (pkt.size > 0);
        av_packet_unref(&orig_pkt);
    }

    printf("Convert succeeded.\n");

end:
    avcodec_close(audio_dec_ctx);
    avformat_close_input(&fmt_ctx);
    if (audio_dst_file)
        fclose(audio_dst_file);
    av_frame_free(&frame);
    if (dst_data)
        av_freep(&dst_data[0]);
    av_freep(&dst_data);
    swr_free(&swr_ctx);

    return ret;
}
static int open_codec_context(int *stream_idx,
                              AVFormatContext *fmt_ctx, enum AVMediaType type)
{
    int ret, stream_index;
    AVStream *st;
    AVCodecContext *dec_ctx = NULL;
    AVCodec *dec = NULL;
    AVDictionary *opts = NULL;

    ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not find %s stream in input file.\n",
                av_get_media_type_string(type));
        return ret;
    } else {
        stream_index = ret;
        st = fmt_ctx->streams[stream_index];

        /* find decoder for the stream */
        dec_ctx = st->codec;
        dec = avcodec_find_decoder(dec_ctx->codec_id);
        if (!dec) {
            fprintf(stderr, "Failed to find %s codec\n",
                    av_get_media_type_string(type));
            return AVERROR(EINVAL);
        }

        if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
            fprintf(stderr, "Failed to open %s codec\n",
                    av_get_media_type_string(type));
            return ret;
        }

        *stream_idx = stream_index;
    }

    return 0;
}
static int get_format_from_sample_fmt(const char **fmt,
                                      enum AVSampleFormat sample_fmt)
{
    int i;
    struct sample_fmt_entry {
        enum AVSampleFormat sample_fmt; const char *fmt_be, *fmt_le;
    } sample_fmt_entries[] = {
        { AV_SAMPLE_FMT_U8,  "u8",    "u8"    },
        { AV_SAMPLE_FMT_S16, "s16be", "s16le" },
        { AV_SAMPLE_FMT_S32, "s32be", "s32le" },
        { AV_SAMPLE_FMT_FLT, "f32be", "f32le" },
        { AV_SAMPLE_FMT_DBL, "f64be", "f64le" },
    };

    *fmt = NULL;
    for (i = 0; i < FF_ARRAY_ELEMS(sample_fmt_entries); i++) {
        struct sample_fmt_entry *entry = &sample_fmt_entries[i];
        if (sample_fmt == entry->sample_fmt) {
            *fmt = AV_NE(entry->fmt_be, entry->fmt_le);
            return 0;
        }
    }

    fprintf(stderr,
            "sample format %s is not supported as output format\n",
            av_get_sample_fmt_name(sample_fmt));
    return -1;
}
-
A *hot* Piwik Community Meetup 2015!
10 August 2015, by André Bräkling — Community
Last weekend I arrived in Germany to attend the Piwik Community Meetup 2015, and now I am in Poland.
The meetup was HOT in every sense! Berlin temperatures reached 35 degrees Celsius, and I finally met in person several long-time, dedicated Piwik community contributors.
Meetup preparation in Berlin, photo by M. Zawadziński, licensed under CC-BY-SA 4.0
Pictures from the meetup preparation sessions
In the first leg of my trip I was in Berlin to meet Piwik community members to prepare for the 2015 annual Piwik community meetup. These are my notes taken during the meeting at the request of one of my colleagues. I also relayed live on Framasphère, Twitter and IRC.
Community discussion at the meetup, photo by D.Czajka, licensed under CC-BY-SA 4.0
More pictures from the Piwik meetup
This was harder than I expected: I took notes with my laptop, pictures with my phone, wrote live to social media (using the Android Diaspora Native Web App), and used my laptop to relay on IRC. Going forward this requires better preparation; I was glad I had a few links and pictures ready beforehand, but it really takes intense focus to pull this off. I am glad the presenters were patient when I asked them to repeat some of the ideas they shared. I am also a bit disappointed that not much happened on IRC.
Two-day preparation sessions
The discussions and sessions we had during the two days prior to the meetup are available here.
We gathered in rented apartments in Berlin, which reminded me very much of similar community gatherings, perhaps of BarCamps and, at a much smaller scale, of UDS sessions.
Piwik Pizza!, photo by F. Rodríguez, licensed under CC-BY-SA 4.0
A list of topic ideas was submitted initially; we then proceeded to hold scheduled sessions for open discussion. Several people shared their concern that remote participation was not possible, which led to making the Trello boards used/linked here public.
Note: The Trello links below still contain action items and notes pending the filing of bug reports / feature requests, which should happen over the coming weeks. Most importantly, many action items will need leads identified for the different community teams, including Translations and Documentation, and better coordination of upcoming community engagement.
Monday sessions consisted of the following subjects:
- What are Piwik values & how to communicate them? (see below for details)
- How to encourage and recognize new external contributors?
- How could we double the Piwik userbase?
- How the community can organise help resources
On Tuesday we met again to discuss the following subjects:
- Piwik Long Term Support (LTS)
- How do Piwik.org (project) and Piwik PRO (company) sit together / how are they organized? – An important part of this session was about having better communication channels and improving the new team page (bug #8520 and bug #8519, respectively)
- Improving the usability of Piwik, e.g. for new users – this last session was not held, as we ran out of time and prepared to go to the meetup venue.
Some more details about individual preparation sessions
What are Piwik values & how to communicate them?
The main subject of this session was a set of important changes proposed to the project's mission and values. These were edited directly on the wiki page on GitHub; some of the changes can be seen by comparing revisions.
Piwik mission statement (bug #7376)
“To create the leading Free and open source analytics platform, and to support global organisations and communities to keep full control over their data.”
Our values
- Openness
- Freedom
- Transparency
- Data ownership
- Privacy
- Kaizen (改善) : continuous improvement
This was also presented by Matthieu Aubry at the meetup and is published on the Roadmap page. Giving the mission and values more visibility, perhaps with a dedicated top-level page, was also brought up.
Meetup agenda and notes
The official agenda is available here.
Many Piwik PRO employees stayed in Berlin for the meetup, and we had good participation, although less than last year in Munich, as my colleagues told me. Some attendees were consultants, others staff from public organizations, universities, etc. In retrospect, considering the very hot weather and the summer holidays, the attendance was good. I was very happy to arrive at the beautiful Kulturbrauerei and enter the air-conditioned Soda Club. T-shirts were waiting for all attendees, and free (non-alcoholic!) drinks were welcome.
-
VP8 And FFmpeg
18 June 2010, by Multimedia Mike — VP8
UPDATE, 2010-06-17: You don't need to struggle through these instructions anymore. libvpx 0.9.1 and FFmpeg 0.6 work together much better. Please see this post for simple instructions on getting up and running quickly.
Let's take the VP8 source code (in Google's new libvpx library) for a spin; get it to compile and hook it up to FFmpeg. I am hesitant to publish specific instructions for building in the somewhat hackish manner available on day 1 (download FFmpeg at a certain revision and apply a patch) since that kind of post has a tendency to rise in Google rankings. I will just need to remember to update this post after the library patches are applied to the official FFmpeg tree.
Statement of libvpx’s Relationship to FFmpeg
I don't necessarily speak officially for FFmpeg, but I've been with the project long enough to explain how certain things work. Certainly, some may wonder whether FFmpeg will incorporate Google's newly open-sourced libvpx library into FFmpeg. In the near term, FFmpeg will support encoding and decoding VP8 via an external library, as it does with a number of other libraries (most popularly, libx264). FFmpeg will not adopt the code into its own codebase, even if the license may allow it. That just isn't how the FFmpeg crew rolls.
In the longer term, expect the FFmpeg project to develop an independent, interoperable implementation of the VP8 decoder. Sometime after that, there may be an independent VP8 encoder as well.
Building libvpx
Download and build libvpx. This is a basic 'configure && make' process. The build creates a static library, a bunch of header files, and 14 utilities. A number of these utilities operate on a file format called IVF, which is apparently a simple transport method for VP8. I have recorded the file format on the wiki. We could use a decoder for this in the FFmpeg code base for testing VP8 in the future. Who's game? Just as I was proofreading this post, I saw that David Conrad had sent an IVF demuxer to the ffmpeg-devel list.
There doesn't seem to be a 'make install' step for the library. Instead, go into the overly long directory (on my system, this is generated as vpx-vp8-nopost-nodocs-generic-gnu-v0.9.0), copy the contents of include/ to /usr/local/include and the static library in lib/ to /usr/local/lib.
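For reference, the IVF container mentioned above is little more than a fixed 32-byte file header followed by size-prefixed frames. The sketch below reflects the layout as it is commonly documented, not the wiki page itself, so treat the exact field list as an assumption:

#include <stdint.h>

/* Rough sketch of the IVF layout; all integers are little-endian.
 * Shown as structs purely for illustration: a real reader should parse
 * the fields one by one rather than fread() the structs, since compiler
 * padding would break the on-disk sizes (32 and 12 bytes). */
typedef struct IVFFileHeader {
    char     signature[4];   /* "DKIF" */
    uint16_t version;        /* 0 */
    uint16_t header_size;    /* 32 */
    char     fourcc[4];      /* e.g. "VP80" */
    uint16_t width;
    uint16_t height;
    uint32_t timebase_den;   /* time base, denominator then numerator */
    uint32_t timebase_num;
    uint32_t frame_count;
    uint32_t unused;
} IVFFileHeader;

typedef struct IVFFrameHeader {
    uint32_t frame_size;     /* size of the VP8 payload that follows */
    uint64_t timestamp;      /* in time base units */
} IVFFrameHeader;

A format this simple helps explain how a demuxer for it could be written so quickly.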
Building FFmpeg with libvpx
Download the FFmpeg source code at the revision specified, or take your chances with the latest version (as I did). Download and apply the provided patches. This part hurts since there is one diff per file. Most of them applied for me.
Configure FFmpeg with 'configure --enable-libvpx_vp8 --enable-pthreads'. Ideally, this should yield no complaints, and 'libvpx_vp8' should show up in the enabled decoders and encoders sections. The library apparently relies on threading, which is why '--enable-pthreads' is necessary. After I did this, I was able to create a new WebM/VP8/Vorbis file simply with:
ffmpeg -i input_file output_file.webm
Unfortunately, I can't complete the round trip, as decoding doesn't seem to work. Passing the generated .webm file back into FFmpeg results in a bunch of errors of this format:
[libvpx_vp8 @ 0x8c4ab20]v0.9.0 [libvpx_vp8 @ 0x8c4ab20]Failed to initialize decoder : Codec does not implement requested capability
Maybe this is the FFmpeg revision mismatch biting me.
FFmpeg Presets
FFmpeg features support for preset files, which contain collections of tuning options to be loaded into the program. Google provided some presets along with their FFmpeg patches:
- 1080p50
- 1080p
- 360p
- 720p50
- 720p
To invoke one of these (assuming the program has been installed via 'make install' so that the presets are in the right place):
ffmpeg -i input_file -vcodec libvpx_vp8 -vpre 720p output_file.webm
This will use a set of parameters that are known to do well when encoding a 720p video.
Code Paths
One of my goals with this post was to visualize a call graph after I got the decoder hooked up to FFmpeg. Fortunately, this recon is greatly simplified by libvpx's simple_decoder utility. Steps:
- Build libvpx with --enable-gprof
- Run simple_decoder on an IVF file
- Get the pl_from_gprof.pl and dot_from_pl.pl scripts from Graphviz's gprof filters
- gprof simple_decoder | ./pl_from_gprof.pl | ./dot_from_pl.pl > 001.dot
- Remove the 2 [graph] and 1 [node] modifiers from the dot file (they only make the resulting graph very hard to read)
- dot -Tpng 001.dot > 001.png
Here are call graphs generated from decoding test vectors 001 and 017.
It's funny to see several functions calling an empty bubble. Probably nothing to worry about. More interesting is the fact that a lot of function_c() functions are called. The '_c' at the end is important: it generally indicates that there are (or could be) SIMD-optimized versions. I know this codebase has plenty of assembly. All of the x86 ASM files appear to be written such that they could be compiled with NASM.
Leftovers
One interesting item in the code was vpx_scale/leapster. Is this in reference to the Leapster handheld educational gaming unit? Based on this item from 2005 (archive.org copy), some Leapster titles probably used VP6. This reminds me of finding references to the PlayStation in Duck/On2's original VpVision source release. I don't know of any PlayStation games that used Duck's original codecs, but with thousands to choose from, it's possible that we may find a few some day.