Advanced search

Media (0)

Keyword: - Tags -/performance

No media matching your criteria are available on this site.

Other articles (37)

  • Organising by category

    17 May 2013, by

    In MediaSPIP, a section has two names: category and rubrique.
    The various documents stored in MediaSPIP can be filed under different categories. You can create a category by clicking "publish a category" in the publish menu at the top right (after logging in). A category can itself be placed inside another category, which means you can build a tree of categories.
    When a document is next published, the newly created category will be offered (...)

  • The MediaSPIP themes

    4 June 2013

    Three themes are provided with MediaSPIP out of the box. A MediaSPIP user can add further themes as needed.
    MediaSPIP themes
    Three themes were initially developed for MediaSPIP: * SPIPeo: the default MediaSPIP theme. It highlights the site's presentation and the most recent media documents (the sort order can be changed: title, popularity, date). * Arscenic: the theme used on the project's official website, featuring in particular a red banner at the top of the page. The structure (...)

  • Depositing media and themes via FTP

    31 May 2013, by

    MediaSPIP also handles media transferred via FTP. If you prefer to deposit files this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
    From the start you will find the following directories in your FTP space: config/: the site's configuration directory; IMG/: media that have already been processed and are online on the site; local/: the website's cache directory; themes/: themes and customised stylesheets; tmp/: working directory (...)
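
    The excerpt above stops short of saying where deposited files should go, so here is a purely illustrative sketch of an FTP upload from code (libcurl, the host, the credentials and the remote path are all assumptions for illustration, not anything MediaSPIP prescribes):

    // Hypothetical illustration only: upload a local media file to an FTP space
    // with libcurl. Replace the URL, credentials and paths with your own values,
    // and check your MediaSPIP documentation for the correct target directory.
    #include <cstdio>
    #include <curl/curl.h>

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        FILE *file = std::fopen("clip.mp4", "rb");  // local media file to send
        if (!file) { curl_easy_cleanup(curl); return 1; }

        curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.org/path/clip.mp4"); // placeholder
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");               // placeholder
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);      // switch libcurl to upload mode
        curl_easy_setopt(curl, CURLOPT_READDATA, file);  // default read callback reads this FILE*

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            std::fprintf(stderr, "Upload failed: %s\n", curl_easy_strerror(res));

        std::fclose(file);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return res == CURLE_OK ? 0 : 1;
    }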

On other sites (4172)

  • I am a newbie in FFmpeg and I am trying to use FFMPEG to play RTSP stream on Android, but it will play slower and slower

    8 May 2020, by Ajax

    I am a newbie in FFmpeg and I am trying to use FFmpeg to play an RTSP stream on Android, but it plays more and more slowly. The gap between my player's picture and the video source keeps growing over time, so the video is not synchronized. I'm pulling the stream over the local area network. The longer it plays, the further the picture lags behind the source, and unlike MediaCodec's hardware decoding it never catches back up to the real-time picture. The decoded picture plays in slow motion, and it freezes after a while. What causes this? How can I optimize it?
    This is my code:

    



    2020-05-08: The problem has been solved.
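
    The poster's code and the eventual fix are not included above. As a general illustration only (not the poster's solution), a minimal sketch of opening an RTSP stream with libavformat while requesting reduced buffering through standard demuxer options might look like this; the URL is a placeholder, and a real player would additionally have to drop frames that arrive later than its master clock instead of queueing them:

    // Illustrative sketch: open an RTSP URL with libavformat and request
    // lower-latency behaviour via demuxer options. Not the poster's code.
    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>
    }
    #include <cstdio>

    int main() {
        const char *url = "rtsp://192.168.1.10/stream"; // placeholder LAN source

        avformat_network_init();

        AVDictionary *opts = nullptr;
        av_dict_set(&opts, "rtsp_transport", "tcp", 0); // TCP avoids UDP packet loss
        av_dict_set(&opts, "fflags", "nobuffer", 0);    // reduce demuxer buffering
        av_dict_set(&opts, "max_delay", "500000", 0);   // cap reordering delay (microseconds)

        AVFormatContext *fmt = nullptr;
        int ret = avformat_open_input(&fmt, url, nullptr, &opts);
        av_dict_free(&opts);
        if (ret < 0) {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_strerror(ret, err, sizeof(err));
            std::fprintf(stderr, "open failed: %s\n", err);
            return 1;
        }

        if (avformat_find_stream_info(fmt, nullptr) < 0) {
            std::fprintf(stderr, "could not read stream info\n");
            avformat_close_input(&fmt);
            return 1;
        }

        // ... decode loop goes here; frames that are already behind the master
        // clock should be discarded rather than left to pile up in a queue.

        avformat_close_input(&fmt);
        avformat_network_deinit();
        return 0;
    }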

    


  • Decode mp3 using FFMpeg, Android NDK - What is wrong with my AVFormatContext?

    27 February 2020, by michpohl

    I am trying to decode an MP3 file to a raw PCM stream using FFmpeg via JNI on Android. I have compiled the latest FFmpeg version (4.2) and added it to my app. This did not cause any problems.
    The goal is to be able to use MP3 files from the device's storage for playback with oboe.

    Since I am relatively inexperienced with both C++ and FFmpeg, my approach is based on this:
    oboe’s RhythmGame example

    I have based my FFMpegExtractor class on the one found in the example here. With the help of StackOverflow, the AAssetManager use was removed and a MediaSource helper class now serves as a wrapper for my stream instead (see here).

    But unfortunately, creating the AVFormatContext doesn’t work right - and I can’t seem to understand why. Since I have very limited understanding of correct pointer usage and C++ memory management, I suspect it’s most likely I’m doing something wrong in that area. But honestly, I have no idea.

    This is my FFMpegExtractor.h:

    #ifndef MYAPP_FFMPEGEXTRACTOR_H
    #define MYAPP_FFMPEGEXTRACTOR_H

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libswresample/swresample.h>
    #include <libavutil/opt.h>
    }

    #include <cstdint>
    #include <android/asset_manager.h>
    #include <string>
    #include <fstream>
    #include "MediaSource.cpp"


    class FFMpegExtractor {
    public:

       FFMpegExtractor();

       ~FFMpegExtractor();

       int64_t decode2(char *filepath, uint8_t *targetData, AudioProperties targetProperties);

    private:
       MediaSource *mSource;

       bool createAVFormatContext(AVIOContext *avioContext, AVFormatContext **avFormatContext);

       bool openAVFormatContext(AVFormatContext *avFormatContext);

       int32_t cleanup(AVIOContext *avioContext, AVFormatContext *avFormatContext);

       bool getStreamInfo(AVFormatContext *avFormatContext);

       AVStream *getBestAudioStream(AVFormatContext *avFormatContext);

       AVCodec *findCodec(AVCodecID id);

       void printCodecParameters(AVCodecParameters *params);

       bool createAVIOContext2(const std::string &filePath, uint8_t *buffer, uint32_t bufferSize,
                               AVIOContext **avioContext);
    };


    #endif //MYAPP_FFMPEGEXTRACTOR_H

    This is FFMpegExtractor.cpp:

    #include <memory>
    #include <oboe/Definitions.h>
    #include "FFMpegExtractor.h"
    #include "logging.h"
    #include <fstream>

    FFMpegExtractor::FFMpegExtractor() {
       mSource = new MediaSource;
    }

    FFMpegExtractor::~FFMpegExtractor() {
       delete mSource;
    }

    constexpr int kInternalBufferSize = 1152; // Use MP3 block size. https://wiki.hydrogenaud.io/index.php?title=MP3

    /**
    * Reads from an IStream into FFmpeg.
    *
    * @param ptr       A pointer to the user-defined IO data structure.
    * @param buf       A buffer to read into.
    * @param buf_size  The size of the buffer buff.
    *
    * @return The number of bytes read into the buffer.
    */


    // If FFmpeg needs to read the file, it will call this function.
    // We need to fill the buffer with file's data.
    int read(void *opaque, uint8_t *buffer, int buf_size) {
       MediaSource *source = (MediaSource *) opaque;
       return source->read(buffer, buf_size);
    }

    // If FFmpeg needs to seek in the file, it will call this function.
    // We need to change the read pos.
    int64_t seek(void *opaque, int64_t offset, int whence) {
       MediaSource *source = (MediaSource *) opaque;
       return source->seek(offset, whence);
    }


    // Create and save a MediaSource instance.
    bool FFMpegExtractor::createAVIOContext2(const std::string &filepath, uint8_t *buffer, uint32_t bufferSize,
                                            AVIOContext **avioContext) {

       mSource = new MediaSource;
       mSource->open(filepath);
       constexpr int isBufferWriteable = 0;

       *avioContext = avio_alloc_context(
               buffer, // internal buffer for FFmpeg to use
               bufferSize, // For optimal decoding speed this should be the protocol block size
               isBufferWriteable,
               mSource, // Will be passed to our callback functions as a (void *)
               read, // Read callback function
               nullptr, // Write callback function (not used)
               seek); // Seek callback function

       if (*avioContext == nullptr) {
           LOGE("Failed to create AVIO context");
           return false;
       } else {
           return true;
       }
    }

    bool
    FFMpegExtractor::createAVFormatContext(AVIOContext *avioContext,
                                          AVFormatContext **avFormatContext) {

       *avFormatContext = avformat_alloc_context();
       (*avFormatContext)->pb = avioContext;

       if (*avFormatContext == nullptr) {
           LOGE("Failed to create AVFormatContext");
           return false;
       } else {
           LOGD("Successfully created AVFormatContext");
           return true;
       }
    }

    bool FFMpegExtractor::openAVFormatContext(AVFormatContext *avFormatContext) {

       int result = avformat_open_input(&avFormatContext,
                                        "", /* URL is left empty because we're providing our own I/O */
                                        nullptr /* AVInputFormat *fmt */,
                                        nullptr /* AVDictionary **options */
       );

       if (result == 0) {
           return true;
       } else {
           LOGE("Failed to open file. Error code %s", av_err2str(result));
           return false;
       }
    }

    bool FFMpegExtractor::getStreamInfo(AVFormatContext *avFormatContext) {

       int result = avformat_find_stream_info(avFormatContext, nullptr);
       if (result == 0) {
           return true;
       } else {
           LOGE("Failed to find stream info. Error code %s", av_err2str(result));
           return false;
       }
    }

    AVStream *FFMpegExtractor::getBestAudioStream(AVFormatContext *avFormatContext) {

       int streamIndex = av_find_best_stream(avFormatContext, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);

       if (streamIndex < 0) {
           LOGE("Could not find stream");
           return nullptr;
       } else {
           return avFormatContext->streams[streamIndex];
       }
    }

    int64_t FFMpegExtractor::decode2(
           char* filepath,
           uint8_t *targetData,
           AudioProperties targetProperties) {

       LOGD("Decode SETUP");
       int returnValue = -1; // -1 indicates error

       // Create a buffer for FFmpeg to use for decoding (freed in the custom deleter below)
       auto buffer = reinterpret_cast<uint8_t *>(av_malloc(kInternalBufferSize));


       // Create an AVIOContext with a custom deleter
       std::unique_ptr<AVIOContext, void (*)(AVIOContext *)> ioContext{
               nullptr,
               [](AVIOContext *c) {
                   av_free(c->buffer);
                   avio_context_free(&c);
               }
       };
       {
           AVIOContext *tmp = nullptr;
           if (!createAVIOContext2(filepath, buffer, kInternalBufferSize, &tmp)) {
               LOGE("Could not create an AVIOContext");
               return returnValue;
           }
           ioContext.reset(tmp);
       }
       // Create an AVFormatContext using the avformat_free_context as the deleter function
       std::unique_ptr<AVFormatContext, decltype(&avformat_free_context)> formatContext{
               nullptr,
               &avformat_free_context
       };

       {
           AVFormatContext *tmp;
           if (!createAVFormatContext(ioContext.get(), &tmp)) return returnValue;
           formatContext.reset(tmp);
       }
       if (!openAVFormatContext(formatContext.get())) return returnValue;
       LOGD("172");

       if (!getStreamInfo(formatContext.get())) return returnValue;
       LOGD("175");

       // Obtain the best audio stream to decode
       AVStream *stream = getBestAudioStream(formatContext.get());
       if (stream == nullptr || stream->codecpar == nullptr) {
           LOGE("Could not find a suitable audio stream to decode");
           return returnValue;
       }
       LOGD("183");

       printCodecParameters(stream->codecpar);

       // Find the codec to decode this stream
       AVCodec *codec = avcodec_find_decoder(stream->codecpar->codec_id);
       if (!codec) {
           LOGE("Could not find codec with ID: %d", stream->codecpar->codec_id);
           return returnValue;
       }

       // Create the codec context, specifying the deleter function
       std::unique_ptr<AVCodecContext, void (*)(AVCodecContext *)> codecContext{
               nullptr,
               [](AVCodecContext *c) { avcodec_free_context(&c); }
       };
       {
           AVCodecContext *tmp = avcodec_alloc_context3(codec);
           if (!tmp) {
               LOGE("Failed to allocate codec context");
               return returnValue;
           }
           codecContext.reset(tmp);
       }

       // Copy the codec parameters into the context
       if (avcodec_parameters_to_context(codecContext.get(), stream->codecpar) < 0) {
           LOGE("Failed to copy codec parameters to codec context");
           return returnValue;
       }

       // Open the codec
       if (avcodec_open2(codecContext.get(), codec, nullptr) < 0) {
           LOGE("Could not open codec");
           return returnValue;
       }

       // prepare resampler
       int32_t outChannelLayout = (1 << targetProperties.channelCount) - 1;
       LOGD("Channel layout %d", outChannelLayout);

       SwrContext *swr = swr_alloc();
       av_opt_set_int(swr, "in_channel_count", stream->codecpar->channels, 0);
       av_opt_set_int(swr, "out_channel_count", targetProperties.channelCount, 0);
       av_opt_set_int(swr, "in_channel_layout", stream->codecpar->channel_layout, 0);
       av_opt_set_int(swr, "out_channel_layout", outChannelLayout, 0);
       av_opt_set_int(swr, "in_sample_rate", stream->codecpar->sample_rate, 0);
       av_opt_set_int(swr, "out_sample_rate", targetProperties.sampleRate, 0);
       av_opt_set_int(swr, "in_sample_fmt", stream->codecpar->format, 0);
       av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_FLT, 0);
       av_opt_set_int(swr, "force_resampling", 1, 0);

       // Check that resampler has been inited
       int result = swr_init(swr);
       if (result != 0) {
           LOGE("swr_init failed. Error: %s", av_err2str(result));
           return returnValue;
       };
       if (!swr_is_initialized(swr)) {
           LOGE("swr_is_initialized is false\n");
           return returnValue;
       }

       // Prepare to read data
       int bytesWritten = 0;
       AVPacket avPacket; // Stores compressed audio data
       av_init_packet(&avPacket);
       AVFrame *decodedFrame = av_frame_alloc(); // Stores raw audio data
       int bytesPerSample = av_get_bytes_per_sample((AVSampleFormat) stream->codecpar->format);

       LOGD("Bytes per sample %d", bytesPerSample);

       // While there is more data to read, read it into the avPacket
       while (av_read_frame(formatContext.get(), &avPacket) == 0) {

           if (avPacket.stream_index == stream->index) {

               while (avPacket.size > 0) {
                   // Pass our compressed data into the codec
                   result = avcodec_send_packet(codecContext.get(), &avPacket);
                   if (result != 0) {
                       LOGE("avcodec_send_packet error: %s", av_err2str(result));
                       goto cleanup;
                   }

                   // Retrieve our raw data from the codec
                   result = avcodec_receive_frame(codecContext.get(), decodedFrame);
                   if (result != 0) {
                       LOGE("avcodec_receive_frame error: %s", av_err2str(result));
                       goto cleanup;
                   }

                   // DO RESAMPLING
                   auto dst_nb_samples = (int32_t) av_rescale_rnd(
                           swr_get_delay(swr, decodedFrame->sample_rate) + decodedFrame->nb_samples,
                           targetProperties.sampleRate,
                           decodedFrame->sample_rate,
                           AV_ROUND_UP);

                   short *buffer1;
                   av_samples_alloc(
                           (uint8_t **) &buffer1,
                           nullptr,
                           targetProperties.channelCount,
                           dst_nb_samples,
                           AV_SAMPLE_FMT_FLT,
                           0);
                   int frame_count = swr_convert(
                           swr,
                           (uint8_t **) &buffer1,
                           dst_nb_samples,
                           (const uint8_t **) decodedFrame->data,
                           decodedFrame->nb_samples);

                   int64_t bytesToWrite = frame_count * sizeof(float) * targetProperties.channelCount;
                   memcpy(targetData + bytesWritten, buffer1, (size_t) bytesToWrite);
                   bytesWritten += bytesToWrite;
                   av_freep(&buffer1);

                   avPacket.size = 0;
                   avPacket.data = nullptr;
               }
           }
       }

       av_frame_free(&decodedFrame);

       returnValue = bytesWritten;

       cleanup:
       return returnValue;
    }

    void FFMpegExtractor::printCodecParameters(AVCodecParameters *params) {

       LOGD("Stream properties");
       LOGD("Channels: %d", params->channels);
       LOGD("Channel layout: %"
                    PRId64, params->channel_layout);
       LOGD("Sample rate: %d", params->sample_rate);
       LOGD("Format: %s", av_get_sample_fmt_name((AVSampleFormat) params->format));
       LOGD("Frame size: %d", params->frame_size);
    }

    And this is MediaSource.cpp:

    #ifndef MYAPP_MEDIASOURCE_CPP
    #define MYAPP_MEDIASOURCE_CPP

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libswresample/swresample.h>
    #include <libavutil/opt.h>
    }

    #include <cstdint>
    #include <android/asset_manager.h>
    #include <string>
    #include <fstream>
    #include "logging.h"

    // wrapper class for file stream
    class MediaSource {
    public:

       MediaSource() {
       }

       ~MediaSource() {
           source.close();
       }

       void open(const std::string &filePath) {
           const char *x = filePath.c_str();
           LOGD("Opened %s", x);
           source.open(filePath, std::ios::in | std::ios::binary);
       }

       int read(uint8_t *buffer, int buf_size) {
           // read data to buffer
           source.read((char *) buffer, buf_size);
           // return how many bytes were read
           return source.gcount();
       }

       int64_t seek(int64_t offset, int whence) {
           if (whence == AVSEEK_SIZE) {
               // FFmpeg needs file size.
               int oldPos = source.tellg();
               source.seekg(0, std::ios::end);
               int64_t length = source.tellg();
               // seek to old pos
               source.seekg(oldPos);
               return length;
           } else if (whence == SEEK_SET) {
               // set pos to offset
               source.seekg(offset);
           } else if (whence == SEEK_CUR) {
               // add offset to pos
               source.seekg(offset, std::ios::cur);
           } else {
               // do not support other flags, return -1
               return -1;
           }
           // return current pos
           return source.tellg();
       }

    private:
       std::ifstream source;
    };

    #endif //MYAPP_MEDIASOURCE_CPP

    When the code is executed, I can see that I am passing the correct file path, so I assume the MP3 resource is there.
    When this code is executed, the app crashes at line 103 of FFMpegExtractor.cpp, at formatContext.reset(tmp);

    This is what Android Studio logs when the app crashes:

    --------- beginning of crash
    2020-02-27 14:31:26.341 9852-9945/com.user.myapp A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x7fffffff0 in tid 9945 (chaelpohl.loopy), pid 9852 (user.myapp)

    This is the (sadly very short) output I get with ndk-stack:

    ********** Crash dump: **********
    Build fingerprint: 'samsung/dreamltexx/dreamlte:9/PPR1.180610.011/G950FXXU6DSK9:user/release-keys'
    #00 0x0000000000016c50 /data/app/com.user.myapp-D7dBCgHF-vdQNNSald4lWA==/lib/arm64/libavformat.so (avformat_free_context+260)
                                                                                                            avformat_free_context
                                                                                                            ??:0:0
    Crash dump is completed

    I have tested around a bit, and every call to my formatContext crashes the app. So I assume there is something wrong with the input I provide when building it, but I have no clue how to debug this.
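
    One thing worth checking in this situation (an observation about FFmpeg's documented behaviour, not a confirmed diagnosis of this particular crash): avformat_open_input() frees a user-supplied AVFormatContext itself when it fails, and a context that was opened successfully is normally released with avformat_close_input(). Pairing it with a smart-pointer deleter that also calls avformat_free_context() can therefore end in a double free. A minimal sketch of the conventional lifetime pattern with custom AVIO, for comparison:

    // Hedged sketch of the usual lifetime pattern for a custom-IO AVFormatContext.
    // my_read/my_seek stand in for the read/seek callbacks shown earlier.
    extern "C" {
    #include <libavformat/avformat.h>
    }

    bool openWithCustomIO(void *opaque,
                          int (*my_read)(void *, uint8_t *, int),
                          int64_t (*my_seek)(void *, int64_t, int)) {
        constexpr int kBufSize = 1152;
        auto *buf = static_cast<uint8_t *>(av_malloc(kBufSize));
        if (!buf) return false;

        AVIOContext *avio = avio_alloc_context(buf, kBufSize, 0, opaque,
                                               my_read, nullptr, my_seek);
        if (!avio) { av_free(buf); return false; }

        AVFormatContext *fmt = avformat_alloc_context();
        if (!fmt) { av_free(avio->buffer); avio_context_free(&avio); return false; }
        fmt->pb = avio;

        // On failure, avformat_open_input() frees the context it was given and
        // sets fmt to nullptr, so it must not be freed a second time here.
        if (avformat_open_input(&fmt, "", nullptr, nullptr) < 0) {
            av_free(avio->buffer);
            avio_context_free(&avio);
            return false;
        }

        // ... find streams, read and decode packets ...

        // A successfully opened context is closed with avformat_close_input();
        // the custom AVIOContext and its buffer remain ours to free.
        avformat_close_input(&fmt);
        av_free(avio->buffer);
        avio_context_free(&avio);
        return true;
    }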

    Any help is appreciated! (Happy to provide additional resources if something crucial is missing.)

  • What is data anonymization in web analytics?

    11 February 2020, by Joselyn Khor — Analytics Tips, Privacy

    Collecting information via web analytics platforms is needed to help a website grow and improve. When doing so, it’s best to strike a balance between getting valuable insights, and keeping the trust of your users by protecting their privacy.

    This means not collecting or processing any personally identifiable information (PII). But what if your organisation requires you to collect PII?

    That’s where data anonymization comes in.

    What is data anonymization?

    Data anonymization makes identifiable information unidentifiable. This is done through data processing techniques that remove or modify PII, so the data becomes anonymous and can't be linked to any individual.

    In the context of web analytics, data anonymization is handy because you can collect useful data while protecting the privacy of website visitors.
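
    As a concrete illustration of one such technique (a generic sketch, not Matomo's actual implementation): masking the trailing bytes of an IPv4 address keeps coarse, aggregate usefulness while removing the part that points to an individual host.

    // Illustrative sketch of a single anonymization technique: zero out the last
    // bytes of a dotted-quad IPv4 address so it no longer identifies a host.
    #include <iostream>
    #include <sstream>
    #include <string>

    // Keep `keepBytes` leading bytes of the address and replace the rest with 0.
    std::string anonymizeIPv4(const std::string &ip, int keepBytes = 2) {
        std::istringstream in(ip);
        std::ostringstream out;
        std::string part;
        for (int i = 0; i < 4 && std::getline(in, part, '.'); ++i) {
            out << (i ? "." : "") << (i < keepBytes ? part : "0");
        }
        return out.str();
    }

    int main() {
        std::cout << anonymizeIPv4("203.0.113.42") << "\n";    // prints 203.0.0.0
        std::cout << anonymizeIPv4("203.0.113.42", 3) << "\n"; // prints 203.0.113.0
    }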

    Why is data anonymization important?

    Given modern threats such as identity theft and credit card fraud, data anonymization is a way to protect the identity and privacy of individuals, as well as the private and sensitive information of organisations.

    Data anonymization lets you follow the many laws around the world which protect user privacy. These laws provide safeguards around collecting personal data or personally identifiable information (PII), so data anonymization is a good solution to ensure you’re not processing such sensitive information.

    In some cases, implementing data anonymization techniques means you can avoid having to show your users a consent screen, so you may not need to ask for consent in order to track data. This is a bonus, as consent screens can annoy visitors and stop them from engaging with your site.

    GDPR and data anonymization


    The GDPR is a law in the EU that limits the collection and processing of personal data. The aim is to give people more control over their online personal information, which is why website owners need to follow certain rules to become GDPR compliant and protect user privacy. Under the GDPR, you can be fined up to 4% of your yearly revenue for data breaches or non-compliance.

    In the case of web analytics, tools can easily be made compliant by following a number of steps.

    This is why anonymizing data is a big deal.

    Anonymized data isn't personal data according to the GDPR:

    “The principles of data protection should therefore not apply to anonymous information, namely information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.”

    This means you still get the best of both worlds: by anonymizing data, you're still able to collect useful information such as visitor behavioural data.

    US privacy laws and data anonymization

    In the US, there isn't a single law that governs the protection of personal data, known as personally identifiable information (PII). There are hundreds of federal and state laws that protect the personal data of US residents, as well as industry-specific statutes related to data privacy, such as the California Consumer Privacy Act (CCPA) and the Health Insurance Portability and Accountability Act (HIPAA).

    Website owners in the US need to know exactly what laws govern their area of business in order to follow them.

    A general guideline is to protect user privacy regardless of whether you are or aren’t allowed to collect PII. This means anonymizing identifiable information so your website users aren’t put at risk.

    Data anonymization techniques in Matomo Analytics

    If you carry these out, you won’t need to ask your website visitors for tracking consent since anonymized data is no longer considered personal data under the GDPR.

    The techniques listed above are easy to apply when using a tool like Matomo, as the anonymization is carried out automatically.

    Tools like Google Analytics, on the other hand, don't provide some of these privacy options and leave the burden of implementation up to you without providing the steps.

    Data anonymization tools

    If you’re a website owner who wants to grow your business or learn more about your website visitors, privacy-friendly tools like Matomo Analytics are a great option. By following the easy steps to be GDPR compliant, you can anonymize all data that could put your visitors at risk.