
Other articles (102)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it has been activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature works out of the box. No separate configuration step is therefore required.

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

On other sites (10062)

  • How to handle queueing of video encoding during multiple video uploads?

    6 March 2016, by Yash Desai

    I am working on developing a video streaming site where users can upload videos to the site (multiple videos at once, using the uploadify jQuery plugin).

    Now I am faced with the question of encoding the videos to FLV for streaming them online.

    When should the video encoding process take place? Should it happen immediately after the uploads have finished (i.e. redirect the user to the upload success page, then start encoding in the background using an exec call to ffmpeg)? With this approach, how do I determine whether the encoding finished successfully? What if a user uploads a corrupt video and ffmpeg fails to encode it? How do I handle this in PHP?

    How do I queue the encoding of videos, since multiple users can upload at the same time? Does FFmpeg have its own encoding queue?

    I also read about Gearman and message queueing options such as Redis and AMQP in another related SO thread. Are any of these a potential solution?

    I would really appreciate it if someone could answer these questions.
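
    Neither question is answered on this page, but the general pattern is worth sketching: FFmpeg itself has no built-in job queue, so uploads are usually queued by the application (a database table, Gearman, Redis or an AMQP broker, as mentioned above) and handed one at a time to a worker process that spawns ffmpeg and inspects its exit status. Below is a minimal sketch in C rather than PHP (the idea is identical); the file names, output codecs and log path are illustrative assumptions, not part of the original question.

     #include <stdio.h>
     #include <stdlib.h>
     #include <sys/wait.h>

     /* Run one encode job and report whether ffmpeg exited cleanly.
      * A non-zero exit status (or abnormal termination) means the encode
      * failed, e.g. because the uploaded file was corrupt. The codec flags
      * depend on how the local ffmpeg build was configured. */
     static int encode_to_flv(const char *in_path, const char *out_path) {
         char cmd[1024];
         snprintf(cmd, sizeof(cmd),
                  "ffmpeg -y -i '%s' -c:v flv -c:a libmp3lame '%s' 2> encode.log",
                  in_path, out_path);

         int status = system(cmd);
         if (status == -1)
             return -1;                              /* could not spawn a shell */
         if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
             return 0;                               /* ffmpeg finished successfully */
         return -1;                                  /* ffmpeg reported an error */
     }

     int main(void) {
         /* A trivial "queue": process uploads one after another so only one
          * ffmpeg instance runs at a time. A real site would pull the jobs
          * from a database table or a message broker instead. */
         const char *uploads[] = { "upload1.avi", "upload2.mov" };
         for (int i = 0; i < 2; i++) {
             char out[256];
             snprintf(out, sizeof(out), "%s.flv", uploads[i]);
             if (encode_to_flv(uploads[i], out) != 0)
                 fprintf(stderr, "encoding of %s failed; flag it for the user\n",
                         uploads[i]);
         }
         return 0;
     }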

  • Rendering Bitmap using ANativeWindow

    19 February 2016, by William Seemann

    I’m decoding a video frame and trying to render it using Android’s ANativeWindow API in conjunction with a SurfaceView. I know I’m decoding the frame successfully because my demo application returns the decoded (and re-encoded) frame as a bitmap and displays it in an ImageView (bottom image). However, when trying to draw the decoded frame to a SurfaceView I get garbage output (top image). Can someone explain why?

     #include <stdio.h>
     #include <string.h>
     #include <stdint.h>
     #include <libavcodec/avcodec.h>
     #include <libswscale/swscale.h>
     #include <libavutil/mem.h>
     #include <android/native_window.h>

     const int TARGET_IMAGE_FORMAT = AV_PIX_FMT_RGBA;
     const int TARGET_IMAGE_CODEC = AV_CODEC_ID_PNG;

    void convert_image(State *state, AVCodecContext *pCodecCtx, AVFrame *pFrame, AVPacket *avpkt, int *got_packet_ptr, int width, int height) {
            // initialize everything touched by the cleanup code below, so an
            // early "goto fail" does not free uninitialized pointers
            AVCodecContext *codecCtx = NULL;
            AVCodec *codec;
            AVFrame *frame = NULL;
            uint8_t *buffer = NULL;
            struct SwsContext *scalerCtx = NULL;
            int ret = -1;

            *got_packet_ptr = 0;

           if (width == -1) {
               width = pCodecCtx->width;
           }

           if (height == -1) {
               height = pCodecCtx->height;
           }

           codec = avcodec_find_encoder(TARGET_IMAGE_CODEC);
           if (!codec) {
               printf("avcodec_find_decoder() failed to find decoder\n");
               goto fail;
           }

           codecCtx = avcodec_alloc_context3(codec);
           if (!codecCtx) {
               printf("avcodec_alloc_context3 failed\n");
               goto fail;
           }

           codecCtx->bit_rate = pCodecCtx->bit_rate;
           //codecCtx->width = pCodecCtx->width;
           //codecCtx->height = pCodecCtx->height;
           codecCtx->width = width;
           codecCtx->height = height;
           codecCtx->pix_fmt = TARGET_IMAGE_FORMAT;
           codecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
           codecCtx->time_base.num = pCodecCtx->time_base.num;
           codecCtx->time_base.den = pCodecCtx->time_base.den;

           if (!codec || avcodec_open2(codecCtx, codec, NULL) < 0) {
               printf("avcodec_open2() failed\n");
               goto fail;
           }

           frame = av_frame_alloc();

           if (!frame) {
               goto fail;
           }

           // Determine required buffer size and allocate buffer
           int numBytes = avpicture_get_size(TARGET_IMAGE_FORMAT, codecCtx->width, codecCtx->height);
           void * buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));

           printf("wqwq %d\n", numBytes);

           avpicture_fill(((AVPicture *)frame),
                   buffer,
                   TARGET_IMAGE_FORMAT,
                   codecCtx->width,
                   codecCtx->height);

           avpicture_alloc(((AVPicture *)frame),
                   TARGET_IMAGE_FORMAT,
                   codecCtx->width,
                   codecCtx->height);

            scalerCtx = sws_getContext(pCodecCtx->width,
                   pCodecCtx->height,
                   pCodecCtx->pix_fmt,
                   //pCodecCtx->width,
                   //pCodecCtx->height,
                   width,
                   height,
                   TARGET_IMAGE_FORMAT,
                   SWS_FAST_BILINEAR, 0, 0, 0);

           if (!scalerCtx) {
               printf("sws_getContext() failed\n");
               goto fail;
           }

           sws_scale(scalerCtx,
                   (const uint8_t * const *) pFrame->data,
                   pFrame->linesize,
                   0,
                   pFrame->height,
                   frame->data,
                   frame->linesize);

            ret = avcodec_encode_video2(codecCtx, avpkt, frame, got_packet_ptr);

           // code to draw the re-encoded frame on the surface view
           if (state->native_window) {    
               ANativeWindow_Buffer windowBuffer;

               if (ANativeWindow_lock(state->native_window, &windowBuffer, NULL) == 0) {
                   memcpy(windowBuffer.bits, avpkt->data, windowBuffer.width * windowBuffer.height * 4);
                   ANativeWindow_unlockAndPost(state->native_window);
               }
           }

           if (ret < 0) {
               *got_packet_ptr = 0;
           }

            fail:
            av_frame_free(&frame);

            av_free(buffer);

           if (codecCtx) {
               avcodec_close(codecCtx);
               av_free(codecCtx);
           }

           if (scalerCtx) {
               sws_freeContext(scalerCtx);
           }

           if (ret < 0 || !*got_packet_ptr) {
               av_free_packet(avpkt);
           }
       }

     [Screenshots: garbled output drawn on the SurfaceView (top) vs. the correctly decoded frame shown in the ImageView (bottom)]
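
    A note on the code above: after avcodec_encode_video2() the packet holds PNG-compressed bytes rather than raw pixels, so memcpy'ing avpkt->data into the window buffer cannot yield a correct picture, and the window buffer also has its own stride; this is the most likely cause of the garbled output. A rough sketch (not from the original post; the helper name is made up and error handling is omitted) of copying the raw RGBA data of the scaled frame row by row instead:

     #include <string.h>
     #include <stdint.h>
     #include <android/native_window.h>
     #include <libavutil/frame.h>

     /* Sketch: copy a packed-RGBA AVFrame into a locked ANativeWindow buffer.
      * The window stride is given in pixels and may exceed the frame width,
      * so each row is copied individually. */
     static void blit_rgba_frame(ANativeWindow *window, AVFrame *rgba,
                                 int width, int height) {
         ANativeWindow_setBuffersGeometry(window, width, height,
                                          WINDOW_FORMAT_RGBA_8888);

         ANativeWindow_Buffer wb;
         if (ANativeWindow_lock(window, &wb, NULL) != 0)
             return;

         uint8_t *dst = (uint8_t *) wb.bits;
         int dst_stride = wb.stride * 4;          /* RGBA_8888: 4 bytes per pixel */
         int src_stride = rgba->linesize[0];
         int copy_w = width < wb.width ? width : wb.width;
         int copy_h = height < wb.height ? height : wb.height;

         for (int y = 0; y < copy_h; y++)
             memcpy(dst + y * dst_stride,
                    rgba->data[0] + y * src_stride,
                    copy_w * 4);

         ANativeWindow_unlockAndPost(window);
     }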

  • x86inc : Add debug symbols indicating sizes of compiled functions

    18 January 2016, by Geza Lore
    x86inc : Add debug symbols indicating sizes of compiled functions
    

    Some debuggers/profilers use this metadata to determine which function a
    given instruction is in; without it they can get confused by local labels
    (if you haven't stripped those). On the other hand, some tools are still
    confused even with this metadata, e.g. this fixes `gdb`, but not `perf`.

    Currently only implemented for ELF.

    Signed-off-by: Anton Khirnov <anton@khirnov.net>

    • libavcodec/x86/proresdsp.asm
    • libavutil/x86/x86inc.asm
    • tests/checkasm/x86/checkasm.asm