
Other articles (91)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    • implementation costs to be shared between several different projects/individuals
    • rapid deployment of multiple unique sites
    • creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Making files available

    14 April 2011, by

    By default, when first set up, MediaSPIP does not allow visitors to download files, whether they are originals or the result of transformation or encoding. It only allows them to be viewed.
    However, it is possible and easy to give visitors access to these documents, in several different forms.
    All of this happens in the skeleton configuration page. Go to the channel's administration area and choose in the navigation (...)

On other sites (10074)

  • VP8 And FFmpeg

    18 June 2010, by Multimedia Mike — VP8

    UPDATE, 2010-06-17: You don’t need to struggle through these instructions anymore. libvpx 0.9.1 and FFmpeg 0.6 work together much better. Please see this post for simple instructions on getting up and running quickly.

    Let’s take the VP8 source code (in Google’s new libvpx library) for a spin; get it to compile and hook it up to FFmpeg. I am hesitant to publish specific instructions for building in the somewhat hackish manner available on day 1 (download FFmpeg at a certain revision and apply a patch), since that kind of post has a tendency to rise in Google rankings. I will just need to remember to update this post after the library patches are applied to the official FFmpeg tree.

    Statement of libvpx’s Relationship to FFmpeg
    I don’t necessarily speak officially for FFmpeg. But I’ve been with the project long enough to explain how certain things work.

    Certainly, some may wonder whether FFmpeg will incorporate Google’s newly open-sourced libvpx library into its own codebase. In the near term, FFmpeg will support encoding and decoding VP8 via an external library, as it does with a number of other libraries (most popularly, libx264). FFmpeg will not adopt the code into its own codebase, even if the license allows it. That just isn’t how the FFmpeg crew rolls.

    In the longer term, expect the FFmpeg project to develop an independent, interoperable implementation of the VP8 decoder. Sometime after that, there may be an independent VP8 encoder as well.

    Building libvpx
    Download and build libvpx. This is a basic ’configure && make’ process. The build process creates a static library, a bunch of header files, and 14 utilities. A number of these utilities operate on a file format called IVF, which is apparently a simple transport format for VP8. I have recorded the file format on the wiki.

    We could use a decoder for this format in the FFmpeg code base for testing VP8 in the future. Who’s game? Just as I was proofreading this post, I saw that David Conrad had sent an IVF demuxer to the ffmpeg-devel list.

    There doesn’t seem to be a ’make install’ step for the library. Instead, go into the overly long directory (on my system, this is generated as vpx-vp8-nopost-nodocs-generic-gnu-v0.9.0), copy the contents of include/ to /usr/local/include and the static library in lib/ to /usr/local/lib.

    Building FFmpeg with libvpx
    Download the FFmpeg source code at the revision specified, or take your chances with the latest version (as I did). Download and apply the provided patches. This part hurts, since there is one diff per file. Most of them applied for me.

    Configure FFmpeg with ’configure --enable-libvpx_vp8 --enable-pthreads’. Ideally, this should yield no complaints, and ’libvpx_vp8’ should show up in the enabled decoders and encoders sections. The library apparently relies on threading, which is why ’--enable-pthreads’ is necessary. After I did this, I was able to create a new WebM/VP8/Vorbis file simply with:

     ffmpeg -i input_file output_file.webm
    

    Unfortunately, I can’t complete the round trip, as decoding doesn’t seem to work. Passing the generated .webm file back into FFmpeg results in a bunch of errors of this format:

    [libvpx_vp8 @ 0x8c4ab20]v0.9.0
    [libvpx_vp8 @ 0x8c4ab20]Failed to initialize decoder: Codec does not implement requested capability
    

    Maybe this is the FFmpeg revision mismatch biting me.

    FFmpeg Presets
    FFmpeg features support for preset files, which contain collections of tuning options to be loaded into the program. Google provided some presets along with their FFmpeg patches:

    • 1080p50
    • 1080p
    • 360p
    • 720p50
    • 720p

    To invoke one of these (assuming the program has been installed via ’make install’ so that the presets are in the right place):

     ffmpeg -i input_file -vcodec libvpx_vp8 -vpre 720p output_file.webm
    

    This will use a set of parameters that are known to do well when encoding a 720p video.

    Code Paths
    One of my goals with this post was to visualize a call graph after I got the decoder hooked up to FFmpeg. Fortunately, this recon is greatly simplified by libvpx’s simple_decoder utility. Steps:

    • Build libvpx with --enable-gprof
    • Run simple_decoder on an IVF file
    • Get the pl_from_gprof.pl and dot_from_pl.pl scripts from Graphviz’s gprof filters
    • gprof simple_decoder | ./pl_from_gprof.pl | ./dot_from_pl.pl > 001.dot
    • Remove the 2 [graph] and 1 [node] modifiers from the dot file (they only make the resulting graph very hard to read)
    • dot -Tpng 001.dot > 001.png

    Here are call graphs generated from decoding test vectors 001 and 017.


    Like this, only much larger and scarier.


    It’s funny to see several functions calling an empty bubble. Probably nothing to worry about. More interesting is the fact that a lot of function_c() functions are called. The ’_c’ at the end is important— that generally indicates that there are (or could be) SIMD-optimized versions. I know this codebase has plenty of assembly. All of the x86 ASM files appear to be written such that they could be compiled with NASM.

    Leftovers
    One interesting item in the code was vpx_scale/leapster. Is this a reference to the Leapster handheld educational gaming unit? Based on this item from 2005 (archive.org copy), some Leapster titles probably used VP6. This reminds me of finding references to the PlayStation in Duck/On2’s original VpVision source release. I don’t know of any PlayStation games that used Duck’s original codecs, but with thousands to choose from, it’s possible that we may find a few some day.

  • How to resample audio with the ffmpeg API?

    13 May 2016, by tangren

    I’d like to decode an audio file (.wav, .aac, etc.) into raw data (PCM) and then resample it with the ffmpeg API.

    I tried to do this with the help of the official examples. However, the result file always loses a little data at the end compared to the file generated by this ffmpeg command:

    ffmpeg -i audio.wav -f s16le -ac 1 -ar 8000 -acodec pcm_s16le out.pcm

    In this case, the input audio metadata is: pcm_s16le, 16000 Hz, 1 channel, s16, 256 kb/s

    And there are 32 bytes lost at the end of the result file.

    What’s more, I’d like to process audio in memory, so how do I calculate the size of the audio’s raw data (PCM)?

    My code:

    #include <libavutil/opt.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/samplefmt.h>
    #include <libavutil/timestamp.h>
    #include <libavformat/avformat.h>
    #include <libavutil/channel_layout.h>
    #include <libswresample/swresample.h>

    int convert_to_PCM(const char* src_audio_filename, const char* dst_pcm_filename, int dst_sample_rate)
    {
       FILE *audio_dst_file = NULL;

       // for demuxing and decoding
       int ret = 0, got_frame, decoded, tmp;
       AVFormatContext *fmt_ctx = NULL;
       AVCodecContext *audio_dec_ctx;
       AVStream *audio_stream = NULL;
       int audio_stream_idx = -1;
       AVFrame *frame = NULL;
       AVPacket pkt;

       // for resampling
       struct SwrContext *swr_ctx = NULL;
       int src_sample_rate;
       uint8_t **dst_data = NULL;
       int dst_nb_samples, max_dst_nb_samples;
       enum AVSampleFormat src_sample_fmt, dst_sample_fmt = AV_SAMPLE_FMT_S16;
       int nb_channels = 1;
       int dst_bufsize, dst_linesize;
       int64_t src_ch_layout, dst_ch_layout;
       dst_ch_layout = src_ch_layout = av_get_default_channel_layout(1);

       av_register_all();

       if (avformat_open_input(&fmt_ctx, src_audio_filename, NULL, NULL) < 0) {
           fprintf(stderr, "Could not open source file %s\n", src_audio_filename);
           return 1;
       }

       /* retrieve stream information */
       if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
           fprintf(stderr, "Could not find stream information\n");
           return 1;
       }

       if (open_codec_context(&audio_stream_idx, fmt_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {
           audio_stream = fmt_ctx->streams[audio_stream_idx];
           audio_dec_ctx = audio_stream->codec;
           audio_dst_file = fopen(dst_pcm_filename, "wb");
           if (!audio_dst_file) {
               fprintf(stderr, "Could not open destination file %s\n", dst_pcm_filename);
               ret = 1;
               goto end;
           }
       }

       /* dump input information to stderr */
       av_dump_format(fmt_ctx, 0, src_audio_filename, 0);

       if (!audio_stream) {
           fprintf(stderr, "Could not find audio stream in the input, aborting\n");
           ret = 1;
           goto end;
       }

       src_sample_rate = audio_dec_ctx->sample_rate;
       if (src_sample_rate != dst_sample_rate) {
           /* create resampler context */
           swr_ctx = swr_alloc();
           if (!swr_ctx) {
               fprintf(stderr, "Could not allocate resampler context\n");
               ret = AVERROR(ENOMEM);
               goto end;
           }
           src_sample_fmt = audio_dec_ctx->sample_fmt;


           /* set options */
           av_opt_set_int(swr_ctx, "in_channel_layout",    src_ch_layout, 0);
           av_opt_set_int(swr_ctx, "in_sample_rate",       src_sample_rate, 0);
           av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", src_sample_fmt, 0);

           av_opt_set_int(swr_ctx, "out_channel_layout",    dst_ch_layout, 0);
           av_opt_set_int(swr_ctx, "out_sample_rate",       dst_sample_rate, 0);
           av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", dst_sample_fmt, 0);

           /* initialize the resampling context */
           if ((ret = swr_init(swr_ctx)) < 0) {
               fprintf(stderr, "Failed to initialize the resampling context\n");
               goto end;
           }

       }

       frame = av_frame_alloc();
       if (!frame) {
           fprintf(stderr, "Could not allocate frame\n");
           ret = AVERROR(ENOMEM);
           goto end;
       }

       /* initialize packet, set data to NULL, let the demuxer fill it */
       av_init_packet(&pkt);
       pkt.data = NULL;
       pkt.size = 0;

       printf("Convert audio from file '%s' into '%s'\n", src_audio_filename, dst_pcm_filename);

       /* read frames from the file */
       while (av_read_frame(fmt_ctx, &pkt) >= 0) {
           AVPacket orig_pkt = pkt;
           do {
               decoded = pkt.size;
               got_frame = 0;
               tmp = 0;
               /* decode audio frame */
               tmp = avcodec_decode_audio4(audio_dec_ctx, frame, &got_frame, &pkt);
               if (tmp < 0) {
                   fprintf(stderr, "Error decoding audio frame (%s)\n", av_err2str(tmp));
                   break;
               }
               decoded = FFMIN(tmp, pkt.size);
               if (got_frame) {
                   if (swr_ctx) {
                       if (dst_data == NULL) {
                           max_dst_nb_samples = dst_nb_samples = av_rescale_rnd(frame->nb_samples, dst_sample_rate, src_sample_rate, AV_ROUND_UP);
                           ret = av_samples_alloc_array_and_samples(&dst_data, &dst_linesize, nb_channels,
                                                                   dst_nb_samples, dst_sample_fmt, 0);
                           if (ret < 0) {
                               fprintf(stderr, "Could not allocate destination samples\n");
                               goto end;
                           }
                       }
                       dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, src_sample_rate) + frame->nb_samples, dst_sample_rate, src_sample_rate, AV_ROUND_UP);
                       if (dst_nb_samples > max_dst_nb_samples) {
                           av_freep(&dst_data[0]);
                           ret = av_samples_alloc(dst_data, &dst_linesize, nb_channels, dst_nb_samples, dst_sample_fmt, 1);
                           if (ret < 0)
                               break;
                           max_dst_nb_samples = dst_nb_samples;
                       }

                       /* convert to destination format */
                       ret = swr_convert(swr_ctx, dst_data, dst_nb_samples, (const uint8_t **)frame->data, frame->nb_samples);
                       if (ret < 0) {
                           fprintf(stderr, "Error while converting\n");
                           goto end;
                       }
                       dst_bufsize = av_samples_get_buffer_size(&dst_linesize, nb_channels, ret, dst_sample_fmt, 1);
                       if (dst_bufsize < 0) {
                           fprintf(stderr, "Could not get sample buffer size\n");
                           goto end;
                       }
                       printf("Decoded size: %d, dst_nb_samples: %d, converted size: %d, sample buffer size: %d\n", decoded, dst_nb_samples, ret, dst_bufsize);
                       fwrite(dst_data[0], 1, dst_bufsize, audio_dst_file);
                   }
                   else {
                       size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
                       fwrite(frame->extended_data[0], 1, unpadded_linesize, audio_dst_file);
                   }
               }

               if (decoded < 0)
                   break;
               pkt.data += tmp;
               pkt.size -= tmp;
           } while (pkt.size > 0);
           av_packet_unref(&orig_pkt);
       }

       printf("Convert succeeded.\n");

    end:
       avcodec_close(audio_dec_ctx);
       avformat_close_input(&fmt_ctx);
       if (audio_dst_file)
           fclose(audio_dst_file);
       av_frame_free(&frame);
       if (dst_data)
           av_freep(&dst_data[0]);
       av_freep(&dst_data);
       swr_free(&swr_ctx);

       return ret;
    }

    static int open_codec_context(int *stream_idx,
                               AVFormatContext *fmt_ctx, enum AVMediaType type)
    {
       int ret, stream_index;
       AVStream *st;
       AVCodecContext *dec_ctx = NULL;
       AVCodec *dec = NULL;
       AVDictionary *opts = NULL;

       ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
       if (ret < 0) {
           fprintf(stderr, "Could not find %s stream in input file.\n",
                   av_get_media_type_string(type));
           return ret;
       } else {
           stream_index = ret;
           st = fmt_ctx->streams[stream_index];

           /* find decoder for the stream */
           dec_ctx = st->codec;
           dec = avcodec_find_decoder(dec_ctx->codec_id);
           if (!dec) {
               fprintf(stderr, "Failed to find %s codec\n",
                       av_get_media_type_string(type));
               return AVERROR(EINVAL);
           }

           if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
               fprintf(stderr, "Failed to open %s codec\n",
                       av_get_media_type_string(type));
               return ret;
           }
           *stream_idx = stream_index;
       }

       return 0;
    }

    static int get_format_from_sample_fmt(const char **fmt,
                                       enum AVSampleFormat sample_fmt)
    {
       int i;
       struct sample_fmt_entry {
           enum AVSampleFormat sample_fmt; const char *fmt_be, *fmt_le;
       } sample_fmt_entries[] = {
           { AV_SAMPLE_FMT_U8,  "u8",    "u8"    },
           { AV_SAMPLE_FMT_S16, "s16be", "s16le" },
           { AV_SAMPLE_FMT_S32, "s32be", "s32le" },
           { AV_SAMPLE_FMT_FLT, "f32be", "f32le" },
           { AV_SAMPLE_FMT_DBL, "f64be", "f64le" },
       };
       *fmt = NULL;

       for (i = 0; i < FF_ARRAY_ELEMS(sample_fmt_entries); i++) {
           struct sample_fmt_entry *entry = &sample_fmt_entries[i];
           if (sample_fmt == entry->sample_fmt) {
               *fmt = AV_NE(entry->fmt_be, entry->fmt_le);
               return 0;
           }
       }

       fprintf(stderr,
               "sample format %s is not supported as output format\n",
               av_get_sample_fmt_name(sample_fmt));
       return -1;
    }
  • Join us at MatomoCamp 2024 world tour edition

    13 November 2024, by Daniel Crough — Uncategorized

    Join us at MatomoCamp 2024 world tour edition, our online conference dedicated to Matomo Analytics—the leading open-source web analytics platform that prioritises data privacy.

    • 🗓️ Date: 14 November 2024
    • 🌐 Format: 24-hour virtual conference accessible worldwide
    • 💰 Cost: Free, no registration needed

    Event highlights

    Opening ceremony

    Begin the day with a welcome from Ronan Chardonneau, co-organiser of MatomoCamp and customer success manager at Matomo.


    Keynote: “Matomo by its creator”

    Attend a special session with Matthieu Aubry, the founder of Matomo Analytics. Learn about the platform’s evolution and future developments.


    Explore MatomoCamp 2024’s diverse tracks and topics

    MatomoCamp 2024 offers a wide range of topics across several tracks, including using Matomo, integration, digital analytics, privacy, plugin development, system administration, business, other free analytics, use cases, and workshops and panel talks.

    Featured sessions

    1. Using AI to fetch raw data with Python

    Speaker: Ralph Conti
    Time: 14 November, 12:00 PM UTC

    Discover how to combine AI and Matomo’s API to create unique reporting solutions. Leverage Python for advanced data analysis and unlock new possibilities in your analytics workflow.


    2. Supercharge Matomo event tracking with custom reports

    Speaker: Thomas Steur
    Time: 14 November, 2:00 PM UTC

    Learn how to enhance event tracking and simplify data analysis using Matomo’s custom reports feature. This session will help you unlock the full potential of your event data.


    3. GDPR with AI and AI Act

    Speaker: Stefanie Bauer
    Time: 14 November, 4:00 PM UTC

    Navigate the complexities of data protection requirements for AI systems under GDPR. Explore the implications of the new AI Act and receive practical tips for compliance.


    4. A new data mesh era!

    Speaker: Jorge Powers
    Time: 14 November, 4:00 PM UTC

    Explore how Matomo supports the data mesh approach, enabling decentralised data ownership and privacy-focused analytics. Learn how to empower teams to manage and analyse data without third-party reliance.


    5. Why Matomo has to create an MTM server side: the future of data privacy and user tracking

    Panel discussion
    Time: 14 November, 6:00 PM UTC

    Join experts in a discussion on the necessity of server-side tag management for enhanced privacy and compliance. Delve into the future of data privacy and user tracking.


    6. Visualisation of Matomo data using external tools

    Speaker: Leticia Rodríguez Morado
    Time: 14 November, 8:00 PM UTC

    Learn how to create compelling dashboards using Grafana and Matomo data. Enhance your data visualisation skills and gain better insights.


    7. Keep it simple : Tracking what matters with Matomo

    Speaker: Scott Fillman
    Time: 14 November, 9:00 PM UTC

    Discover how to focus on essential metrics and simplify your analytics setup for more effective insights. Learn tactics for a powerful, streamlined Matomo configuration.


    Stay connected

    Stay updated with the latest news and announcements.

    Don’t miss out

    MatomoCamp 2024 world tour edition is more than a conference; it’s a global gathering shaping the future of ethical analytics. Whether you aim to enhance your skills, stay informed about industry trends, or network with professionals worldwide, this event offers valuable opportunities.

    For any enquiries, please contact us at info@matomocamp.org. We look forward to your participation.