
Media (91)

Other articles (76)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (8792)

  • Top 4 CRO Tools to Boost Your Conversion Rates in 2024

    31 October 2023, by Erin

    Are you tired of watching potential customers leave your website without converting? You’ve spent countless hours creating an engaging website, but those high bounce rates keep haunting you.

    The good news? The solution lies in the transformative power of Conversion Rate Optimisation (CRO) tools. In this guide, we’ll dive deep into the world of CRO tools and equip you with strategies to turn those bounces into conversions.

    Why are conversion rate optimisation tools so crucial?

    CRO tools are key assets in digital marketing, playing a pivotal role in enhancing online businesses’ performance. They empower businesses to improve website conversion rates by analysing user behaviour; you can then leverage this user data to optimise web elements.

    Improving website conversion rates is paramount because it increases revenue and customer satisfaction. A study by VentureBeat revealed an average return on investment (ROI) of 223% thanks to CRO tools.

    173 marketers out of the surveyed group reported returns exceeding 1,000%. Both of these data points highlight the impact CRO tools can have.


    Coupled with CRO tools, certain testing tools and web analytics tools play a crucial role. They offer insight into user behaviour patterns, enabling businesses to choose effective strategies. By understanding what resonates with users, these tools help inform data-driven decisions. This allows businesses to refine online strategies and enhance the customer experience.

    CRO tools enhance user experiences and ensure business sustainability. Integrating these tools is crucial for staying ahead. CRO and web analytics work together to optimise digital presence. 

    Real-world examples of CRO tools in action

    In this section, we’ll explore real case studies showcasing CRO tools in action. See how businesses enhance conversion rates, user experiences, and online performance. These studies reveal the practical impact of data-driven decisions and user-focused strategies.


    Case study: How Matomo’s Form Analytics helped Concrete CMS 3x leads

    Concrete CMS is a content management system provider that helps users build and manage websites. They used Matomo’s Form Analytics to uncover that users were getting stuck at the address input stage of the onboarding process. Using these insights to adjust their onboarding form, Concrete CMS tripled its leads in just a few days.

    Read the full Concrete CMS case study.

    Best analytics tools for enhancing conversion rate optimisation in 2024

    Jump to the comparison table to see an overview of each tool.

    1. Matomo

    [Image: Matomo main dashboard]

    Matomo stands out as an all-encompassing tool that seamlessly combines traditional web analytics features (like pageviews and bounce rates) with advanced behavioural analytics capabilities, providing a full spectrum of insights for effective CRO.

    Key features

    • Heatmaps and Session Recordings:
      These features empower businesses to see their websites through the eyes of their visitors. By visually mapping user engagement and observing individual sessions, businesses can make informed decisions, enhance user experience and ultimately increase conversions. These tools are invaluable assets for businesses aiming to create user-friendly websites.
    • Form Analytics:
      Matomo’s Form Analytics offers comprehensive tracking of user interactions within forms, including input fields, dropdowns, buttons and submissions. Businesses can create custom conversion funnels and pinpoint form abandonment reasons.
    • Users Flow:
      Matomo’s Users Flow feature tracks visitor paths, drop-offs and successful routes, helping businesses optimise their websites. This insight informs decisions, enhances user experience and boosts conversion rates.
    • Surveys plugin:
      The Matomo Surveys plugin allows businesses to gather direct feedback from users. This feature enhances understanding by capturing user opinions, adding another layer to the analytical depth Matomo offers.
    • A/B testing:
      The platform allows you to conduct A/B tests to compare different versions of web pages and determine which performs better in conversions. By conducting experiments and analysing the results within Matomo, businesses can iteratively refine their content and design elements.
    • Funnels:
      Matomo’s Funnels feature empowers businesses to visualise, analyse and optimise their conversion paths. By identifying drop-off points, tailoring user experiences and conducting A/B tests within the funnel, businesses can make data-driven decisions that significantly boost conversions and enhance the overall user journey on their websites.

    Pros

    • Starting at $19 per month, Matomo is an affordable CRO solution.
    • Matomo guarantees accurate data, eliminating the need to fill gaps with artificial intelligence (AI) or machine learning. 
    • Matomo’s open-source framework ensures enhanced security, privacy, customisation, community support and long-term reliability. 

    Cons

    • The On-Premise (self-hosted) version is free, with additional charges for advanced features.
    • Managing Matomo On-Premise requires servers and technical know-how.


    2. Google Analytics

    [Image: Traffic tracking chart and life cycle]

    Google Analytics provides businesses and website owners with valuable insights into their online audience. It tracks website traffic and user interactions, and analyses conversion data to enhance the user experience.

    While Google Analytics may not provide the extensive CRO-specific features found in other tools on this list, it can still serve as a valuable resource for basic analysis and optimisation of conversion rates.

    Key features

    • Comprehensive Data Tracking:
      Google Analytics meticulously tracks website traffic, user behaviour and conversion rates. These insights form the foundation for CRO efforts. Businesses can identify patterns, user bottlenecks and high-performing areas.
    • Real-Time Reporting:
      Access to real-time data is invaluable for CRO efforts. Monitor current website activity, user interactions and campaign performance as they unfold. This immediate feedback empowers businesses to make instant adjustments, optimising web elements and content for maximum conversions.
    • User Flow Analysis:
      Visualise and understand how visitors navigate through your website. It provides insights into the paths users take as they move from one page to another, helping you identify the most common routes and potential drop-off points in the user journey.
    • Event-Based Tracking:
      GA4’s event-based reporting offers greater flexibility and accuracy in data collection. By tracking various interactions, including video views and checkout processes, businesses can gather more precise insights into user behaviour.
    • Funnels:
      GA4 offers multistep funnels, path analysis and custom metrics that integrate with audience segments. These user behaviour insights help businesses tailor their websites, marketing campaigns and user experiences.

    Pros

    • Flexible audience management across products, regions or brands allows businesses to analyse data from multiple source properties.
    • Google Analytics integrates with other Google services and third-party platforms. This enables a comprehensive view of online activities.
    • Free to use, although enterprises may need to switch to the paid version to accommodate higher data volumes.

    Cons

    • Google Analytics raises privacy concerns, primarily due to its tracking capabilities and the extensive data it collects.
    • Limitations imposed by thresholding can significantly hinder efforts to enhance user experience and boost conversions effectively.
    • Property and sampling limits exist. This creates problems when you’re dealing with extensive datasets or high-traffic websites. 
    • The interface is difficult to navigate and configure, resulting in a steep learning curve.

    3. Contentsquare

    [Image: Pie chart with landing page journey data]

    Contentsquare is a web analytics and CRO platform. It stands out for its in-depth behavioural analytics. Contentsquare offers detailed data on how users interact with websites and mobile applications.

    Key features

    • Heatmaps and Session Replays:
      Users can visualise website interactions through heatmaps, highlighting popular areas and drop-offs. Session replay features enable the playback of user sessions. These provide in-depth insights into individual user experiences.
    • Conversion Funnel Analysis:
      Contentsquare tracks users through conversion funnels, identifying where users drop off during conversion. This helps in optimising the user journey and increasing conversion rates.
    • Segmentation and Personalisation:
      Businesses can segment their audience based on various criteria. Segments help create personalised experiences, tailoring content and offers to specific user groups.
    • Integration Capabilities:
      Contentsquare integrates with various third-party tools and platforms, enhancing its functionality and allowing businesses to leverage their existing tech stack.

    Pros

    • Comprehensive support and resources.
    • User-friendly interface.
    • Personalisation capabilities.

    Cons

    • High price point.
    • Steep learning curve.

    4. Hotjar

    [Image: Pricing page heatmap data]

    Hotjar is a robust tool designed to unravel user behaviour intricacies. With its array of features including visual heatmaps, session recordings and surveys, it goes beyond just identifying popular areas and drop-offs.

    Hotjar provides direct feedback and offers an intuitive interface, enabling seamless experience optimisation.

    Key features

    • Heatmaps:
      Hotjar provides visual heatmaps that display user interactions on your website. Heatmaps show where users click, scroll, and how far they read. This feature helps identify popular areas and points of abandonment.
    • Session Recordings:
      Hotjar allows you to record user sessions and watch real interactions on your site. This insight is invaluable for understanding user behaviour and identifying usability issues.
    • Surveys and Feedback:
      Hotjar offers on-site surveys and feedback forms that can be triggered by user behaviour. These tools help collect qualitative data from real users, providing valuable insights.
    • Recruitment Tool:
      Hotjar’s recruitment tool lets you recruit participants from your website for user testing. This feature streamlines the process of finding participants for usability studies.
    • Funnel and Form Analysis:
      Hotjar enables the tracking of user journeys through funnels, providing insights into where users drop off during the conversion process. It also offers form analysis to optimise form completion rates.
    • User Polls:
      You can create customisable polls to engage with visitors and gather specific feedback on your website, products or services.

    Pros

    • Starting at $32 per month, Hotjar is a cost-effective solution for most businesses. 
    • Hotjar provides a user-friendly interface that is easy for the majority of users to pick up quickly.

    Cons

    • Hotjar does not provide traditional web analytics, so it must be combined with another tool, which makes for a less streamlined, less cohesive experience and can complicate conversion rate optimisation efforts.
    • Hotjar’s limited integrations can hinder its ability to work seamlessly with other essential tools and platforms, potentially complicating CRO further.

    Comparison Table

    Please note: We aim to keep this table accurate and up to date. However, if you see any inaccuracies or outdated information, please email us at marketing@matomo.org

    To make comparing these tools even easier, we’ve put together a table for you to compare features and price points:

    [Image: A comparison chart comparing the CRO/web analytics features and price points of Matomo, Google Analytics, Contentsquare and Hotjar]

    Conclusion

    CRO tools and web analytics are essential for online success. Businesses thrive by investing wisely, understanding user behaviour and using targeted strategies. The key: generate traffic and convert it into leads and customers. The right tools and strategies lead to remarkable conversions and online success. Each click, each interaction, becomes an opportunity to create an engaging user journey. This careful orchestration of data and insight separates thriving businesses from the rest.

    Are you ready to embark on a journey toward improved conversions and enhanced user experiences? Matomo offers analytics solutions meticulously designed to complement your CRO strategy. Take the next step in your CRO journey and start your 21-day free trial today. No credit card required.

  • Decode mp3 using FFMpeg, Android NDK - What is wrong with my AVFormatContext?

    27 February 2020, by michpohl

    I am trying to decode an MP3 file to a raw PCM stream using FFmpeg via JNI on Android. I have compiled the latest FFmpeg version (4.2) and added it to my app. This did not cause any problems.
    The goal is to be able to use mp3 files from the device’s storage for playback with oboe.

    Since I am relatively inexperienced with both C++ and FFmpeg, my approach is based upon this:
    oboe’s RhythmGame example

    I have based my FFMpegExtractor class on the one found in the example here. With the help of StackOverflow, the AAssetManager use was removed and a MediaSource helper class now serves as a wrapper for my stream (see here).

    But unfortunately, creating the AVFormatContext doesn’t work right, and I can’t seem to understand why. Since I have a very limited understanding of correct pointer usage and C++ memory management, I suspect it’s most likely that I’m doing something wrong in that area. But honestly, I have no idea.
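
    For reference, the usual shape of this setup, using only documented libavformat calls, looks roughly like this (a minimal sketch with error handling trimmed; the helper name openWithCustomIO is made up):

    extern "C" {
    #include <libavformat/avformat.h>
    }

    // Minimal sketch: open an AVFormatContext over an existing custom AVIOContext.
    // avformat_alloc_context() can return nullptr, so check before dereferencing it.
    // Note that avformat_open_input() frees the context itself on failure, which
    // matters when the same pointer is also owned by a smart pointer elsewhere.
    static AVFormatContext *openWithCustomIO(AVIOContext *avioContext) {
        AVFormatContext *ctx = avformat_alloc_context();
        if (ctx == nullptr) return nullptr;
        ctx->pb = avioContext;
        ctx->flags |= AVFMT_FLAG_CUSTOM_IO; // we own the AVIOContext, not libavformat
        if (avformat_open_input(&ctx, "", nullptr, nullptr) != 0) {
            // avformat_open_input() has already freed ctx and set it to nullptr here
            return nullptr;
        }
        return ctx;
    }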

    This is my FFMpegExtractor.h:

    #ifndef MYAPP_FFMPEGEXTRACTOR_H
    #define MYAPP_FFMPEGEXTRACTOR_H

    extern "C" {
    #include <libavformat></libavformat>avformat.h>
    #include <libswresample></libswresample>swresample.h>
    #include <libavutil></libavutil>opt.h>
    }

    #include <cstdint>
    #include <android></android>asset_manager.h>
    #include
    #include <fstream>
    #include "MediaSource.cpp"


    class FFMpegExtractor {
    public:

       FFMpegExtractor();

       ~FFMpegExtractor();

       int64_t decode2(char *filepath, uint8_t *targetData, AudioProperties targetProperties);

    private:
       MediaSource *mSource;

       bool createAVFormatContext(AVIOContext *avioContext, AVFormatContext **avFormatContext);

       bool openAVFormatContext(AVFormatContext *avFormatContext);

       int32_t cleanup(AVIOContext *avioContext, AVFormatContext *avFormatContext);

       bool getStreamInfo(AVFormatContext *avFormatContext);

       AVStream *getBestAudioStream(AVFormatContext *avFormatContext);

       AVCodec *findCodec(AVCodecID id);

       void printCodecParameters(AVCodecParameters *params);

       bool createAVIOContext2(const std::string &filePath, uint8_t *buffer, uint32_t bufferSize,
                               AVIOContext **avioContext);
    };


    #endif //MYAPP_FFMPEGEXTRACTOR_H

    This is FFMpegExtractor.cpp:

    #include <memory>
    #include <oboe/Definitions.h>
    #include "FFMpegExtractor.h"
    #include "logging.h"
    #include <fstream>

    FFMpegExtractor::FFMpegExtractor() {
       mSource = new MediaSource;
    }

    FFMpegExtractor::~FFMpegExtractor() {
       delete mSource;
    }

    constexpr int kInternalBufferSize = 1152; // Use MP3 block size. https://wiki.hydrogenaud.io/index.php?title=MP3
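    // (Strictly speaking, 1152 is the number of samples per MPEG-1 Layer III frame;
    // the buffer size handed to avio_alloc_context() below is in bytes, so this is
    // simply a small, convenient buffer size rather than a true protocol block size.)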

    /**
    * Reads from an IStream into FFmpeg.
    *
    * @param ptr       A pointer to the user-defined IO data structure.
    * @param buf       A buffer to read into.
    * @param buf_size  The size of the buffer buff.
    *
    * @return The number of bytes read into the buffer.
    */


    // If FFmpeg needs to read the file, it will call this function.
    // We need to fill the buffer with file's data.
    int read(void *opaque, uint8_t *buffer, int buf_size) {
       MediaSource *source = (MediaSource *) opaque;
       return source->read(buffer, buf_size);
    }

    // If FFmpeg needs to seek in the file, it will call this function.
    // We need to change the read pos.
    int64_t seek(void *opaque, int64_t offset, int whence) {
       MediaSource *source = (MediaSource *) opaque;
       return source->seek(offset, whence);
    }


    // Create and save a MediaSource instance.
    bool FFMpegExtractor::createAVIOContext2(const std::string &filepath, uint8_t *buffer, uint32_t bufferSize,
                                            AVIOContext **avioContext) {

       mSource = new MediaSource;
       mSource->open(filepath);
       constexpr int isBufferWriteable = 0;

       *avioContext = avio_alloc_context(
               buffer, // internal buffer for FFmpeg to use
               bufferSize, // For optimal decoding speed this should be the protocol block size
               isBufferWriteable,
               mSource, // Will be passed to our callback functions as a (void *)
               read, // Read callback function
               nullptr, // Write callback function (not used)
               seek); // Seek callback function

       if (*avioContext == nullptr) {
           LOGE("Failed to create AVIO context");
           return false;
       } else {
           return true;
       }
    }

    bool
    FFMpegExtractor::createAVFormatContext(AVIOContext *avioContext,
                                          AVFormatContext **avFormatContext) {

       *avFormatContext = avformat_alloc_context();
       (*avFormatContext)->pb = avioContext;

       if (*avFormatContext == nullptr) {
           LOGE("Failed to create AVFormatContext");
           return false;
       } else {
           LOGD("Successfully created AVFormatContext");
           return true;
       }
    }

    bool FFMpegExtractor::openAVFormatContext(AVFormatContext *avFormatContext) {

       int result = avformat_open_input(&avFormatContext,
                                        "", /* URL is left empty because we're providing our own I/O */
                                        nullptr /* AVInputFormat *fmt */,
                                        nullptr /* AVDictionary **options */
       );

       if (result == 0) {
           return true;
       } else {
           LOGE("Failed to open file. Error code %s", av_err2str(result));
           return false;
       }
    }

    bool FFMpegExtractor::getStreamInfo(AVFormatContext *avFormatContext) {

       int result = avformat_find_stream_info(avFormatContext, nullptr);
       if (result == 0) {
           return true;
       } else {
           LOGE("Failed to find stream info. Error code %s", av_err2str(result));
           return false;
       }
    }

    AVStream *FFMpegExtractor::getBestAudioStream(AVFormatContext *avFormatContext) {

       int streamIndex = av_find_best_stream(avFormatContext, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);

       if (streamIndex < 0) {
           LOGE("Could not find stream");
           return nullptr;
       } else {
           return avFormatContext->streams[streamIndex];
       }
    }

    int64_t FFMpegExtractor::decode2(
           char* filepath,
           uint8_t *targetData,
           AudioProperties targetProperties) {

       LOGD("Decode SETUP");
       int returnValue = -1; // -1 indicates error

       // Create a buffer for FFmpeg to use for decoding (freed in the custom deleter below)
       auto buffer = reinterpret_cast<uint8_t *>(av_malloc(kInternalBufferSize));


       // Create an AVIOContext with a custom deleter
       std::unique_ptr<AVIOContext, void (*)(AVIOContext *)> ioContext{
               nullptr,
               [](AVIOContext *c) {
                   av_free(c->buffer);
                   avio_context_free(&c);
               }
       };
       {
           AVIOContext *tmp = nullptr;
           if (!createAVIOContext2(filepath, buffer, kInternalBufferSize, &tmp)) {
               LOGE("Could not create an AVIOContext");
               return returnValue;
           }
           ioContext.reset(tmp);
       }
       // Create an AVFormatContext using the avformat_free_context as the deleter function
       std::unique_ptr<AVFormatContext, decltype(&avformat_free_context)> formatContext{
               nullptr,
               &avformat_free_context
       };

       {
           AVFormatContext *tmp;
           if (!createAVFormatContext(ioContext.get(), &tmp)) return returnValue;
           formatContext.reset(tmp);
       }
       if (!openAVFormatContext(formatContext.get())) return returnValue;
       LOGD("172");

       if (!getStreamInfo(formatContext.get())) return returnValue;
       LOGD("175");

       // Obtain the best audio stream to decode
       AVStream *stream = getBestAudioStream(formatContext.get());
       if (stream == nullptr || stream->codecpar == nullptr) {
           LOGE("Could not find a suitable audio stream to decode");
           return returnValue;
       }
       LOGD("183");

       printCodecParameters(stream->codecpar);

       // Find the codec to decode this stream
       AVCodec *codec = avcodec_find_decoder(stream->codecpar->codec_id);
       if (!codec) {
           LOGE("Could not find codec with ID: %d", stream->codecpar->codec_id);
           return returnValue;
       }

       // Create the codec context, specifying the deleter function
       std::unique_ptr<AVCodecContext, void (*)(AVCodecContext *)> codecContext{
               nullptr,
               [](AVCodecContext *c) { avcodec_free_context(&c); }
       };
       {
           AVCodecContext *tmp = avcodec_alloc_context3(codec);
           if (!tmp) {
               LOGE("Failed to allocate codec context");
               return returnValue;
           }
           codecContext.reset(tmp);
       }

       // Copy the codec parameters into the context
       if (avcodec_parameters_to_context(codecContext.get(), stream->codecpar) < 0) {
           LOGE("Failed to copy codec parameters to codec context");
           return returnValue;
       }

       // Open the codec
       if (avcodec_open2(codecContext.get(), codec, nullptr) < 0) {
           LOGE("Could not open codec");
           return returnValue;
       }

       // prepare resampler
       int32_t outChannelLayout = (1 << targetProperties.channelCount) - 1;
       LOGD("Channel layout %d", outChannelLayout);

       SwrContext *swr = swr_alloc();
       av_opt_set_int(swr, "in_channel_count", stream->codecpar->channels, 0);
       av_opt_set_int(swr, "out_channel_count", targetProperties.channelCount, 0);
       av_opt_set_int(swr, "in_channel_layout", stream->codecpar->channel_layout, 0);
       av_opt_set_int(swr, "out_channel_layout", outChannelLayout, 0);
       av_opt_set_int(swr, "in_sample_rate", stream->codecpar->sample_rate, 0);
       av_opt_set_int(swr, "out_sample_rate", targetProperties.sampleRate, 0);
       av_opt_set_int(swr, "in_sample_fmt", stream->codecpar->format, 0);
       av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_FLT, 0);
       av_opt_set_int(swr, "force_resampling", 1, 0);

       // Check that resampler has been inited
       int result = swr_init(swr);
       if (result != 0) {
           LOGE("swr_init failed. Error: %s", av_err2str(result));
           return returnValue;
       };
       if (!swr_is_initialized(swr)) {
           LOGE("swr_is_initialized is false\n");
           return returnValue;
       }

       // Prepare to read data
       int bytesWritten = 0;
       AVPacket avPacket; // Stores compressed audio data
       av_init_packet(&avPacket);
       AVFrame *decodedFrame = av_frame_alloc(); // Stores raw audio data
       int bytesPerSample = av_get_bytes_per_sample((AVSampleFormat) stream->codecpar->format);

       LOGD("Bytes per sample %d", bytesPerSample);

       // While there is more data to read, read it into the avPacket
       while (av_read_frame(formatContext.get(), &avPacket) == 0) {

           if (avPacket.stream_index == stream->index) {

               while (avPacket.size > 0) {
                   // Pass our compressed data into the codec
                   result = avcodec_send_packet(codecContext.get(), &avPacket);
                   if (result != 0) {
                       LOGE("avcodec_send_packet error: %s", av_err2str(result));
                       goto cleanup;
                   }

                   // Retrieve our raw data from the codec
                   result = avcodec_receive_frame(codecContext.get(), decodedFrame);
                   if (result != 0) {
                       LOGE("avcodec_receive_frame error: %s", av_err2str(result));
                       goto cleanup;
                   }

                   // DO RESAMPLING
                   auto dst_nb_samples = (int32_t) av_rescale_rnd(
                           swr_get_delay(swr, decodedFrame->sample_rate) + decodedFrame->nb_samples,
                           targetProperties.sampleRate,
                           decodedFrame->sample_rate,
                           AV_ROUND_UP);

                   short *buffer1;
                   av_samples_alloc(
                           (uint8_t **) &buffer1,
                           nullptr,
                           targetProperties.channelCount,
                           dst_nb_samples,
                           AV_SAMPLE_FMT_FLT,
                           0);
                   int frame_count = swr_convert(
                           swr,
                           (uint8_t **) &buffer1,
                           dst_nb_samples,
                           (const uint8_t **) decodedFrame->data,
                           decodedFrame->nb_samples);

                   int64_t bytesToWrite = frame_count * sizeof(float) * targetProperties.channelCount;
                   memcpy(targetData + bytesWritten, buffer1, (size_t) bytesToWrite);
                   bytesWritten += bytesToWrite;
                   av_freep(&buffer1);

                   avPacket.size = 0;
                   avPacket.data = nullptr;
               }
           }
       }

       av_frame_free(&decodedFrame);

       returnValue = bytesWritten;

       cleanup:
       return returnValue;
    }

    void FFMpegExtractor::printCodecParameters(AVCodecParameters *params) {

       LOGD("Stream properties");
       LOGD("Channels: %d", params->channels);
       LOGD("Channel layout: %"
                    PRId64, params->channel_layout);
       LOGD("Sample rate: %d", params->sample_rate);
       LOGD("Format: %s", av_get_sample_fmt_name((AVSampleFormat) params->format));
       LOGD("Frame size: %d", params->frame_size);
    }
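
    As an aside on the decode loop above: it treats every nonzero avcodec_receive_frame() return as fatal, whereas the send/receive pattern described in FFmpeg’s documentation drains frames until AVERROR(EAGAIN), which simply means the decoder wants more input. A minimal sketch of that pattern for comparison (the helper name decodePacket is made up):

    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // One packet in, zero or more frames out; EAGAIN is not an error here.
    static int decodePacket(AVCodecContext *codecCtx, AVPacket *packet, AVFrame *frame) {
        int result = avcodec_send_packet(codecCtx, packet);
        if (result < 0) return result;       // a real error, not "try again"
        while (true) {
            result = avcodec_receive_frame(codecCtx, frame);
            if (result == AVERROR(EAGAIN) || result == AVERROR_EOF)
                return 0;                    // decoder needs more input / is drained
            if (result < 0) return result;   // decoding error
            // ... resample and write out frame->data here ...
            av_frame_unref(frame);
        }
    }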

    And this is the MediaSource.cpp:

    #ifndef MYAPP_MEDIASOURCE_CPP
    #define MYAPP_MEDIASOURCE_CPP

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libswresample/swresample.h>
    #include <libavutil/opt.h>
    }

    #include <cstdint>
    #include <android/asset_manager.h>
    #include <string>
    #include <fstream>
    #include "logging.h"

    // wrapper class for file stream
    class MediaSource {
    public:

       MediaSource() {
       }

       ~MediaSource() {
           source.close();
       }

       void open(const std::string &filePath) {
           const char *x = filePath.c_str();
           LOGD("Opened %s", x);
           source.open(filePath, std::ios::in | std::ios::binary);
       }

       int read(uint8_t *buffer, int buf_size) {
           // read data to buffer
           source.read((char *) buffer, buf_size);
           // return how many bytes were read
           return source.gcount();
       }

       int64_t seek(int64_t offset, int whence) {
           if (whence == AVSEEK_SIZE) {
               // FFmpeg needs file size.
               int oldPos = source.tellg();
               source.seekg(0, std::ios::end);
               int64_t length = source.tellg();
               // seek to old pos
               source.seekg(oldPos);
               return length;
           } else if (whence == SEEK_SET) {
               // set pos to offset
               source.seekg(offset);
           } else if (whence == SEEK_CUR) {
               // add offset to pos
               source.seekg(offset, std::ios::cur);
           } else {
               // do not support other flags, return -1
               return -1;
           }
           // return current pos
           return source.tellg();
       }

    private:
       std::ifstream source;
    };

    #endif //MYAPP_MEDIASOURCE_CPP

    When the code is executed, I can see that I submit the correct file path, so I assume the mp3 resource is there.
    The app then crashes at line 103 of FFMpegExtractor.cpp, at formatContext.reset(tmp);

    This is what Android Studio logs when the app crashes:

    --------- beginning of crash
    2020-02-27 14:31:26.341 9852-9945/com.user.myapp A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x7fffffff0 in tid 9945 (chaelpohl.loopy), pid 9852 (user.myapp)

    This is the (sadly very short) output I get with ndk-stack:

    ********** Crash dump: **********
    Build fingerprint: 'samsung/dreamltexx/dreamlte:9/PPR1.180610.011/G950FXXU6DSK9:user/release-keys'
    #00 0x0000000000016c50 /data/app/com.user.myapp-D7dBCgHF-vdQNNSald4lWA==/lib/arm64/libavformat.so (avformat_free_context+260)
                                                                                                            avformat_free_context
                                                                                                            ??:0:0
    Crash dump is completed

    I have tested around a bit, and every call on my formatContext crashes the app. So I assume there is something wrong with the input I provide when building it, but I have no clue how to debug this.

    Any help is appreciated! (Happy to provide additional resources if something crucial is missing.)

  • Open Media Developers Track at OVC 2011

    11 October 2011, by silvia

    The Open Video Conference that took place on 10-12 September was so overwhelming, I’ve still not been able to catch my breath! It was a dense three days for me, even though I only focused on the technology sessions of the conference and utterly missed out on all the policy and content discussions.

    Roughly 60 people participated in the Open Media Software (OMS) developers track. This was an amazing group of people capable and willing to shape the future of video technology on the Web:

    • HTML5 video developers from Apple, Google, Opera, and Mozilla (though we missed the NZ folks),
    • codec developers from WebM, Xiph, and MPEG,
    • Web video developers from YouTube, JWPlayer, Kaltura, VideoJS, PopcornJS, etc.,
    • content publishers from Wikipedia, Internet Archive, YouTube, Netflix, etc.,
    • open source tool developers from FFmpeg, gstreamer, flumotion, VideoLAN, PiTiVi, etc.,
    • and many more.

    To provide a summary of all the discussions would be impossible, so I just want to share the key take-aways that I had from the main sessions.

    WebRTC: Realtime Communications and HTML5

    Tim Terriberry (Mozilla), Serge Lachapelle (Google) and Ethan Hugg (CISCO) moderated this session together (slides). There are activities both at the W3C and at IETF – the ones at IETF are supposed to focus on protocols, while the W3C ones on HTML5 extensions.

    The current proposal of a PeerConnection API has been implemented in WebKit/Chrome as open source. It is expected that Firefox will have an add-on by Q1 next year. It enables video conferencing, including media capture, media encoding, signal processing (echo cancellation etc.), secure transmission, and a data stream exchange.

    Current discussions are around the signalling protocol and whether SIP needs to be required by the standard. Further, the codec question is under discussion, including whether to mandate VP8 and Opus, since transcoding gateways are not desirable. Another question is how to measure the quality of the connection and how to report errors so as to allow adaptation.

    What always amazes me about RTC is the sheer number of specialised protocols that seem to be required to implement it. WebRTC does not disappoint: in fact, the question was asked whether there could be a lighter alternative than re-using dozens of years of protocol development – is it over-engineered? Can desktop players connect to a WebRTC session?

    We are already in a second or third revision of this part of the HTML5 specification and yet it seems the requirements are still being collected. I’m quietly confident that everything is done to make the lives of the Web developer easier, but it sure looks like a huge task.

    The Missing Link: Flash to HTML5

    Zohar Babin (Kaltura) and myself moderated this session and I must admit that this session was the biggest eye-opener for me amongst all the sessions. There was a large number of Flash developers present in the room and that was great, because sometimes we just don’t listen enough to lessons learnt in the past.

    This session gave me one of those aha-moments, in the form of the Flash appendBytes() API function.

    The appendBytes() function allows a Flash developer to take a byteArray out of a connected video resource and do something with it – such as feed it to a video for display. When I heard that Web developers want that functionality for JavaScript and the video element, too, I instinctively rejected the idea, wondering why on earth a Web developer would want to touch encoded video bytes – why not leave that to the browser?

    But as it turns out, this is actually a really powerful enabler of functionality. For example, you can use it to:

    • display mid-roll video ads as part of the same video element,
    • sequence playlists of videos into the same video element,
    • implement DVR functionality (high-speed seeking),
    • do mash-ups,
    • do video editing,
    • adaptive streaming.

    This totally blew my mind and I am now completely supportive of having such a function in HTML5. Together with media fragment URIs you could even leave all the header download management for resources to the Web browser and just request time ranges from a video through an appendBytes() function. This would be easier on the Web developer than having to deal with byte ranges and making sure that appropriate decoding pipelines are set up.

    Standards for Video Accessibility

    Philip Jagenstedt (Opera) and myself moderated this session. We focused on the HTML5 track element and the WebVTT file format. Many issues were identified that will still require work.

    One particular topic was to find a standard means of rendering the UI for caption, subtitle, and description selection – for example, what icons should be used to indicate that subtitles or captions are available. While this is not part of the HTML5 specification, it’s still important to get this right across browsers, since otherwise users will get confused by diverging interfaces.

    Chaptering was discussed and a particular need to allow URLs to directly point at chapters was expressed. I suggested the use of named Media Fragment URLs.

    The use of WebVTT for descriptions for the blind was also discussed. A suggestion was made to use the voice tag <v> to allow for “styling” (i.e. selection) of the screen reader voice.

    Finally, multitrack audio or video resources were also discussed and the @mediagroup attribute was explained. A question about how to identify the language used in different alternative dubs was asked. This is an issue because @srclang is not on audio or video, only on text, so it’s a missing feature for the multitrack API.

    Beyond this session, there was also a breakout session on WebVTT and the track element. As a consequence, a number of bugs were registered in the W3C bug tracker.

    WebM: Testing, Metrics and New features

    This session was moderated by John Luther and John Koleszar, both of the WebM Project. They started off with a presentation on current work on WebM, which includes quality testing and improvements, and encoder speed improvement. Then they moved on to questions about how to involve the community more.

    The community criticised that communication about what is happening around WebM is very scarce. More sharing of information was requested, including a move to using open Google+ hangouts instead of Google-internal video conferences. More use of the public bug tracker can also help include the community better.

    Another pain point for the community was that code is introduced and removed without much feedback. It was requested that a peer review process be introduced. It was also requested that example code snippets be published when new features are announced so others can replicate the claims.

    This all indicates to me that the WebM project is becoming increasingly open, but that there is still a lot to learn.

    Standards for HTTP Adaptive Streaming

    This session was moderated by Frank Galligan and Aaron Colwell (Google), and Mark Watson (Netflix).

    Mark started off by giving us an introduction to MPEG DASH, the MPEG file format for HTTP adaptive streaming. MPEG has just finalised the format and he was able to show us some examples. DASH is XML-based and thus rather verbose. It covers all eventualities of what parameters could be switched during transmission, which makes it very broad. These include trick modes (e.g. for fast forwarding), 3D, multi-view and multitrack content.

    MPEG have defined profiles – one for live streaming which requires chunking of the files on the server, and one for on-demand which requires keyframe alignment of the files. There are clear specifications for how to do these with MPEG. Such profiles would need to be created for WebM and Ogg Theora, too, to make DASH universally applicable.

    Further, the Web case needs a more restrictive adaptation approach, since the video element’s API is already accounting for some of the features that DASH provides for desktop applications. So, a Web-specific profile of DASH would be required.

    Then Aaron introduced us to the MediaSource API and in particular the webkitSourceAppend() extension that he has been experimenting with. It is essentially an implementation of the appendBytes() function of Flash, which the Web developers had been asking for just a few sessions earlier. This was likely the biggest announcement of OVC, alas a quiet and technically-focused one.

    Aaron explained that he had been trying to find a way to implement HTTP adaptive streaming into WebKit in a way in which it could be standardised. While doing so, he also came across other requirements around such chunked video handling, in particular around dynamic ad insertion, live streaming, DVR functionality (fast forward), constraint video editing, and mashups. While trying to sort out all these requirements, it became clear that it would be very difficult to implement strategies for stream switching, buffering and delivery of video chunks into the browser when so many different and likely contradictory requirements exist. Also, once an approach is implemented and specified for the browser, it becomes very difficult to innovate on it.

    Instead, the easiest way to solve it right now and learn about what would be necessary to implement into the browser would be to actually allow Web developers to queue up a chunk of encoded video into a video element for decoding and display. Thus, the webkitSourceAppend() function was born (specification).

    The proposed extension to the HTMLMediaElement is as follows:

    partial interface HTMLMediaElement {
      // URL passed to src attribute to enable the media source logic.
      readonly attribute [URL] DOMString webkitMediaSourceURL;

      bool webkitSourceAppend(in Uint8Array data);

      // end of stream status codes.
      const unsigned short EOS_NO_ERROR = 0;
      const unsigned short EOS_NETWORK_ERR = 1;
      const unsigned short EOS_DECODE_ERR = 2;

      void webkitSourceEndOfStream(in unsigned short status);

      // states
      const unsigned short SOURCE_CLOSED = 0;
      const unsigned short SOURCE_OPEN = 1;
      const unsigned short SOURCE_ENDED = 2;

      readonly attribute unsigned short webkitSourceState;
    };

    The code is already checked into WebKit, but commented out behind a command-line compiler flag.

    Frank then stepped forward to show how webkitSourceAppend() can be used to implement HTTP adaptive streaming. His example uses WebM – there are no examples with MPEG or Ogg yet.

    The chunks that Frank’s demo used were 150 video frames (6.25 s) of video and 5 s of audio. Stream switching only switched the video, since audio data is much lower bandwidth and more important to retain at high quality. Switching was done on multiplexed files.

    Every chunk requires an XHR range request – this could be optimised if the connections were kept open per adaptation. Seeking works, too, but since decoding requires download of a whole chunk, seeking latency is determined by the time it takes to download and decode that chunk.

    Similar to DASH, when using this approach for live streaming, the server has to produce one file per chunk, since byte range requests are not possible on a continuously growing file.

    Frank did not use DASH as the manifest format for his HTTP adaptive streaming demo, but instead used a hacked-up custom XML format. It would be possible to use JSON or any other format, too.

    After this session, I was actually completely blown away by the possibilities that such a simple API extension allows. If I wasn’t sold on the idea of an appendBytes() function in the earlier session, this one completely changed my mind. While I still believe we need to standardise an HTTP adaptive streaming file format that all browsers will support for all codecs, and I still believe that a native implementation for support of such a file format is necessary, I also believe that this approach of webkitSourceAppend() is what HTML needs – and maybe it needs it faster than native HTTP adaptive streaming support.

    Standards for Browser Video Playback Metrics

    This session was moderated by Zachary Ozer and Pablo Schklowsky (JWPlayer). Their motivation for the topic was, in fact, also HTTP adaptive streaming. Once you leave the decisions about when to do stream switching to JavaScript (through a function such as webkitSourceAppend()), you have to expose stream metrics to the JS developer so they can make informed decisions. The other use case is, of course, monitoring the quality of video delivery for reporting to the provider, who may then decide to change their delivery environment.

    The discussion found that we really care about metrics on three different levels:

    • measuring the network performance (bandwidth)
    • measuring the decoding pipeline performance
    • measuring the display quality

    In the end, it seemed that work previously done by Steve Lacey on a proposal for video metrics was generally acceptable, except for the playbackJitter metric, which may be too aggregate to mean much.

    Device Inputs / A/V in the Browser

    I didn’t actually attend this session held by Anant Narayanan (Mozilla), but from what I heard, the discussion focused on how to manage permission of access to video camera, microphone and screen, e.g. when multiple applications (tabs) want access or when the same site wants access in a different session. This may apply to real-time communication with screen sharing, but also to photo sharing, video upload, or canvas access to devices e.g. for time lapse photography.

    Open Video Editors

    This was another session that I wasn’t able to attend, but I believe the creation of good open source video editing software and similar video creation software is really crucial to giving video a broader user appeal.

    Jeff Fortin (PiTiVi) moderated this session and I was fascinated to later see his analysis of the lifecycle of open source video editors. It is shocking to see how many people/projects have tried to create an open source video editor and how many have stopped their project. It is likely that the creation of a video editor is such a complex challenge that it requires a larger and more committed open source project – individual developers will just run out of steam too quickly. This may be comparable to the creation of a Web browser (see the size of the Mozilla project) or a text processing system (see the size of the OpenOffice project).

    Jeff also mentioned the need to create open video editor standards around playlist file formats etc. Possibly the Open Video Alliance could help. In any case, something has to be done in this space – maybe this would be a good topic to focus next year’s OVC on?

    Monday’s Breakout Groups

    The conference ended officially on Sunday night, but we had a third day of discussions / hackday at the wonderful New York Law School venue. We had collected issues of interest during the two previous days and organised the breakout groups in the morning (Schedule).

    In the Content Protection/DRM session, Mark Watson from Netflix explained how their API works and that they believe all we need in browsers is a secure way to exchange keys and an indicator of which protection scheme is used – the actual protection scheme would not be implemented by the browser, but provided by the underlying system (media framework/operating system). I think that until somebody actually implements something in a browser fork and shows how this can be done, we won’t have much progress. In my understanding, we may also need to disable part of the video API for encrypted content, because otherwise you can always, e.g., grab frames from the video element into a canvas and save them from there.

    In the Playlists and Gapless Playback session, there was massive brainstorming about what new cool things can be done with the video element in browsers if playback between snippets can be made seamless. Further discussions were about standard playlist file formats (such as XSPF, MRSS or M3U), media fragment URIs in playlists for mashups, and the need to expose track metadata for HTML5 media elements.

    What more can I say? It was an amazing three days, and the complexity of the problems we’re dealing with is a tribute to how far HTML5 and open video have already come – and exciting news for the kinds of applications that will be possible (both professional and community) once we’ve solved the problems of today. It will be exciting to see what progress we will have made by next year’s conference.

    Thanks go to Google for sponsoring my trip to OVC.

    UPDATE: We actually have a mailing list for open media developers who are interested in these and similar topics – do join at http://lists.annodex.net/cgi-bin/mailman/listinfo/foms.