Advanced search

Media (29)

Keyword: - Tags -/Musique

Other articles (63)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use within the Semantic Web.
    XMP can record, in the form of an XML document, information about a file: title, author, history (...)
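
    The XMP packet itself is easy to inspect from any language: it is RDF/XML wrapped between <?xpacket begin=...?> and <?xpacket end=...?> markers embedded in the host file. As a minimal sketch (a hypothetical C++ quick-look tool, not the article's PHP code), the packet can be extracted with a plain substring search:

    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <string>

    // Sketch: dump the raw XMP packet embedded in a file (JPEG, PDF, ...).
    int main(int argc, char **argv) {
        if (argc < 2) { std::cerr << "usage: xmpdump <file>\n"; return 1; }

        std::ifstream in(argv[1], std::ios::binary);
        std::string data((std::istreambuf_iterator<char>(in)),
                         std::istreambuf_iterator<char>());

        // XMP packets are delimited by xpacket processing instructions.
        std::size_t begin = data.find("<?xpacket begin=");
        std::size_t end = data.find("<?xpacket end=", begin);
        if (begin == std::string::npos || end == std::string::npos) {
            std::cerr << "no XMP packet found\n";
            return 1;
        }
        end = data.find("?>", end); // include the closing marker
        std::cout << data.substr(begin, end + 2 - begin) << '\n';
        return 0;
    }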

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your Médiaspip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (10642)

  • FFmpeg leaks memory after closing activity

    28 July 2015, by grunk

    I'm trying to implement an RTSP player based on the roman10 tutorial.
    I can play a stream, but each time I leave the activity a lot of memory is leaked.
    After some research, it appears that the bitmap, which is a global jobject, is the cause:

    jobject createBitmap(JNIEnv *pEnv, int pWidth, int pHeight) {
       int i;
       //get Bitmap class and createBitmap method ID
       jclass javaBitmapClass = (jclass)(*pEnv)->FindClass(pEnv, "android/graphics/Bitmap");
       jmethodID mid = (*pEnv)->GetStaticMethodID(pEnv, javaBitmapClass, "createBitmap", "(IILandroid/graphics/Bitmap$Config;)Landroid/graphics/Bitmap;");
       //create Bitmap.Config
       //reference: https://forums.oracle.com/thread/1548728
       const wchar_t* configName = L"ARGB_8888";
       int len = wcslen(configName);
       jstring jConfigName;
       if (sizeof(wchar_t) != sizeof(jchar)) {
           //wchar_t is defined as different length than jchar(2 bytes)
           jchar* str = (jchar*)malloc((len+1)*sizeof(jchar));
           for (i = 0; i < len; ++i) {
               str[i] = (jchar)configName[i];
           }
           str[len] = 0;
           jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)str, len);
       } else {
           //wchar_t is defined same length as jchar(2 bytes)
           jConfigName = (*pEnv)->NewString(pEnv, (const jchar*)configName, len);
       }
       jclass bitmapConfigClass = (*pEnv)->FindClass(pEnv, "android/graphics/Bitmap$Config");
       jobject javaBitmapConfig = (*pEnv)->CallStaticObjectMethod(pEnv, bitmapConfigClass,
               (*pEnv)->GetStaticMethodID(pEnv, bitmapConfigClass, "valueOf", "(Ljava/lang/String;)Landroid/graphics/Bitmap$Config;"), jConfigName);
       //create the bitmap
       return (*pEnv)->CallStaticObjectMethod(pEnv, javaBitmapClass, mid, pWidth, pHeight, javaBitmapConfig);
    }

    The bitmap is created like this:

    bitmap = createBitmap(...);

    When the activity is closed, this method is called:

    void finish(JNIEnv *pEnv) {
       //unlock the bitmap
       AndroidBitmap_unlockPixels(pEnv, bitmap);
       av_free(buffer);
       // Free the RGB image
       av_free(frameRGBA);
       // Free the YUV frame
       av_free(decodedFrame);
       // Close the codec
       avcodec_close(codecCtx);
       // Close the video file
       avformat_close_input(&formatCtx);
    }

    The bitmap seems to never be freed, just unlocked.

    What should I do to be sure I get back all the memory?

    Note: I'm using FFmpeg 2.5.2.
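
    If the bitmap is held as a JNI global reference, AndroidBitmap_unlockPixels only unlocks its pixel buffer; the global reference still pins the Java object, so the garbage collector can never reclaim the Bitmap or its pixel memory. A minimal sketch of the missing step, assuming bitmap was registered with NewGlobalRef (shown in C++ JNI syntax; the C equivalent is (*pEnv)->DeleteGlobalRef(pEnv, bitmap)):

    void finish(JNIEnv *env) {
       // unlock the bitmap's pixel buffer, as before
       AndroidBitmap_unlockPixels(env, bitmap);
       // drop the global reference; without this the GC must keep the
       // Bitmap (and its pixel memory) alive for the life of the process
       env->DeleteGlobalRef(bitmap);
       bitmap = nullptr;
       // free the native FFmpeg objects, as before
       av_free(buffer);
       av_free(frameRGBA);
       av_free(decodedFrame);
       avcodec_close(codecCtx);
       avformat_close_input(&formatCtx);
    }

    If the Java side also keeps a reference to the bitmap, calling Bitmap.recycle() there releases the pixel memory promptly.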

  • Android Java JNI C++ memory leaks

    6 April 2023, by Edson Magombe

    I'm facing problems with my code. It has pretty high memory consumption and over time can hit 1 GB of RAM.
    I'm using C++ and the FFmpeg libraries to read audio samples and generate waveforms, but I really can't find where the leak is.

    Here is my code:

    extern "C"
JNIEXPORT jint JNICALL
Java_modules_Waveform_decode_1to_1pcm(JNIEnv *env, jobject thiz, jstring input, jstring output) {
    const char * filename = (*env).GetStringUTFChars(input, 0);
    const char * outfilename = (*env).GetStringUTFChars(output, 0);
        if((*env).PushLocalFrame(1) != JNI_OK) {
            __android_log_print(ANDROID_LOG_ERROR, "exception: ", "%s", "Failed to open capacity");
            return -1;
        }
        /* Open File */
        AVFormatContext * format_ctx{nullptr};
        int result_open = avformat_open_input(&format_ctx, filename, nullptr, nullptr);

        result_open = avformat_find_stream_info(format_ctx, nullptr);

        int index = av_find_best_stream(format_ctx, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);

        /* Finding decoder */
        AVStream *streams = format_ctx->streams[index];
        const AVCodec *decoder = avcodec_find_decoder(streams->codecpar->codec_id);

        AVCodecContext *codec_ctx{avcodec_alloc_context3(decoder)};

        avcodec_parameters_to_context(codec_ctx, streams->codecpar);

        /* Opening decoder */
        result_open = avcodec_open2(codec_ctx, decoder, nullptr);

        /* Decoding the audio */
        AVPacket *packet = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();

        SwrContext *resampler{swr_alloc_set_opts(
                nullptr,
                streams->codecpar->channel_layout,
                AV_SAMPLE_FMT_FLT,
                streams->codecpar->sample_rate,
                streams->codecpar->channel_layout,
                (AVSampleFormat) streams->codecpar->format,
                streams->codecpar->format,
                streams->codecpar->sample_rate,
                0
        )};

        std::ofstream out(outfilename, std::ios::binary);
        while (av_read_frame(format_ctx, packet) == 0) {
            if(packet->stream_index != streams->index) {
                continue;
            }

            result_open = avcodec_send_packet(codec_ctx, packet);
            if(result_open < 0) {
                // AVERROR(EAGAIN) --> Send the packet again getting frames out!
                if(result_open != AVERROR(EAGAIN)) {
                    __android_log_print(ANDROID_LOG_ERROR, "exception: ", "%s", "Error decoding...");
                }
            }
            while (avcodec_receive_frame(codec_ctx, frame) == 0) {
                /* Resample the frame */
                AVFrame *resampler_frame = av_frame_alloc();
                resampler_frame->sample_rate = 100;
                resampler_frame->channel_layout = frame->channel_layout;
                resampler_frame->channels = frame->channels;
                resampler_frame->format = AV_SAMPLE_FMT_S16;

                result_open = swr_convert_frame(resampler, resampler_frame, frame);
                if(result_open >= 0) {
                    int16_t *samples = (int16_t *) frame->data[0];
                    for(int c = 0; c < resampler_frame->channels; c ++) {
                        float sum = 0;
                        for(int i = 0; i < resampler_frame->nb_samples; i ++) {
                            if(samples[i * resampler_frame->channels + c] < 0) {
                                sum += (float) samples[i * resampler_frame->channels + c] * (-1);
                            } else {
                                sum += (float) samples[i * resampler_frame->channels + c];
                            }
                            int average_point = (int) ((sum * 2) / (float) resampler_frame->nb_samples);
                            if(average_point > 0) {
                                out << average_point << "\n";
                            }
                        }
                    }
                }
                av_frame_unref(frame);
                av_frame_free(&resampler_frame);
            }
        }
        av_frame_free(&frame);
        av_packet_unref(packet);
        av_packet_free(&packet);
        out.close();
        (*env).PopLocalFrame(nullptr);
        (*env).ReleaseStringUTFChars(input, filename);
        (*env).ReleaseStringUTFChars(output, outfilename);
        return 1;
    }

    I tried tricks like (*env).ReleaseStringUTFChars and (*env).PopLocalFrame(nullptr), but it's not working. The memory consumption is still very high.
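
    Beyond the JNI strings, the bigger leaks are in the FFmpeg objects themselves: the SwrContext, AVCodecContext, and AVFormatContext are never freed, and packets skipped with continue are never unreferenced, so the buffer of every non-audio packet leaks on each iteration. A minimal sketch of the missing cleanup, reusing the variable names from the code above:

        // Inside the read loop: unreference every packet once consumed,
        // including the ones skipped with `continue`.
        while (av_read_frame(format_ctx, packet) == 0) {
            if (packet->stream_index != streams->index) {
                av_packet_unref(packet); // was missing before `continue`
                continue;
            }
            /* ... send/receive and resample as before ... */
            av_packet_unref(packet); // done with this packet
        }

        // After the loop: free every context, not just the frame and packet.
        av_frame_free(&frame);
        av_packet_free(&packet);
        swr_free(&resampler);              // SwrContext was never freed
        avcodec_free_context(&codec_ctx);  // AVCodecContext was never freed
        avformat_close_input(&format_ctx); // AVFormatContext was never closed

    The early return -1 path also leaks both UTF strings, since it skips the ReleaseStringUTFChars calls.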

  • Append video files of different width, height

    28 November 2013, by Jatin

    I am building an application where a user can record a screencast. An integral part of the application is that one can pause a recording and resume it later at any time (the session is maintained on the server side).

    So say the user starts recording the screen at a width and height of 1024*768. Using Xuggler (a Java wrapper for FFmpeg), I am able to generate a video. But say he is later on a different system and wishes to resume the screencast, and the resolution changes to 1080*720. At this stage, I record it separately and then try merging the two files. But because the width & height are not the same, I get the exception below:

    16:38:03.916 [main] WARN com.xuggle.xuggler - Got error : picture is
    not of the same width as this Coder
    (../../../../../../../csrc/com/xuggle/xuggler/StreamCoder.cpp:1430)
    Exception in thread "main" java.lang.RuntimeException : failed to
    encode video

    What is the best way to solve this issue? The user can be on screens with different widths and heights. How do I merge (or, as an alternative, append) video files of different widths and heights?
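
    The usual approach is to normalize every segment to one fixed geometry before encoding, so the coder only ever sees frames of a single size. Xuggler exposes IVideoResampler for this; underneath it is FFmpeg's libswscale. As a minimal sketch of the underlying idea (a hypothetical helper, with decoding and encoding assumed to be handled elsewhere):

    extern "C" {
    #include <libswscale/swscale.h>
    #include <libavutil/frame.h>
    }

    // Hypothetical helper: scale a decoded frame to one fixed output
    // geometry so appended segments all share the same width and height.
    static AVFrame *to_target_size(const AVFrame *src, int dst_w, int dst_h) {
        AVFrame *dst = av_frame_alloc();
        dst->format = src->format;
        dst->width = dst_w;
        dst->height = dst_h;
        av_frame_get_buffer(dst, 0); // allocate the destination planes

        SwsContext *sws = sws_getContext(
                src->width, src->height, (AVPixelFormat)src->format,
                dst_w, dst_h, (AVPixelFormat)dst->format,
                SWS_BILINEAR, nullptr, nullptr, nullptr);
        sws_scale(sws, src->data, src->linesize, 0, src->height,
                  dst->data, dst->linesize);
        sws_freeContext(sws);
        return dst; // caller frees with av_frame_free()
    }

    Note that scaling 1080*720 down to 1024*768 changes the aspect ratio; if distortion matters, the alternative is to scale to fit and pad (letterbox) to the target size.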