
Other articles (45)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution name   Version name            Version number
        Debian              Squeeze                 6.x.x
        Debian              Wheezy                  7.x.x
        Debian              Jessie                  8.x.x
        Ubuntu              The Precise Pangolin    12.04 LTS
        Ubuntu              The Trusty Tahr         14.04
    If you want to help us improve this list, you can give us access to a machine whose distribution is not listed above, or send us the fixes needed to add (...)

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome difficulties caused mainly by the installation of server-side software dependencies, an "all-in-one" installation script written in Bash was created to ease this step on a server running a compatible Linux distribution.
    To use it you must have SSH access to your server and a root account, which the script uses to install the dependencies. Contact your provider if you do not have these.
    The documentation for this installation script is available here.
    The code of this (...)

  • A selection of projects using MediaSPIP

    29 April 2011

    The examples cited below are representative of specific uses of MediaSPIP in certain projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know about it here.
    Ferme MediaSPIP @ Infini
    The Infini association develops activities around public reception, internet access points, training, and the management of innovative projects in the field of information and communication technologies, as well as website hosting. It plays a unique role in this area (...)

On other sites (9272)

  • ffmpeg decoding MP4 to OpenGL texture black screen

    5 September 2019, by Dakiaiu

    I decoded an MP4 video and want to display each AVFrame via glTexImage2D and glTexSubImage2D, but all I get is a blank GL window.

    I've tried looking at the various examples in the FFmpeg GitHub examples tree (https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples) and at various past and recent posts on this site, learning slowly from them, but I could not find anything I am doing wrong.

    while (av_read_frame(format_context, packet) >= 0) {

        if (packet->stream_index == video_stream_index) {

            av_frame = decode(codec_context, av_frame, packet);

            // Convert the decoded frame to RGB24 for OpenGL.
            sws_context = sws_getContext(codec_context->width, codec_context->height,
                                         codec_context->pix_fmt,
                                         codec_context->width, codec_context->height,
                                         AV_PIX_FMT_RGB24, SWS_BICUBIC,
                                         nullptr, nullptr, nullptr);

            sws_scale(sws_context,
                      av_frame->data,
                      av_frame->linesize,
                      0,
                      codec_context->height,
                      gl_frame->data,
                      gl_frame->linesize);

            sws_freeContext(sws_context);

            if (first_use == true) {

                // First frame: allocate the texture storage and upload it.
                glTexImage2D(GL_TEXTURE_2D,
                             0,
                             GL_RGB,
                             codec_context->width,
                             codec_context->height,
                             0,
                             GL_RGB,
                             GL_UNSIGNED_BYTE,
                             gl_frame->data[0]);

                first_use = false;

            } else {

                // Subsequent frames: update the existing texture.
                glTexSubImage2D(GL_TEXTURE_2D,
                                0,
                                0,
                                0,
                                codec_context->width,
                                codec_context->height,
                                GL_RGB,
                                GL_UNSIGNED_BYTE,
                                gl_frame->data[0]);
            }
        }
    }

    while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
           glfwWindowShouldClose(window) == 0);
    }

    The frames decode successfully, but I cannot see anything, so it has to be something I have done wrong in the GL code above. I can show the ffmpeg code as well if necessary.
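    For comparison, it is hard to be sure without the full source, but two things commonly produce exactly this symptom: the texture is left "incomplete" because the default GL_TEXTURE_MIN_FILTER expects mipmaps, and gl_frame is never given a pixel buffer, so sws_scale has nowhere to write. Below is a minimal sketch of a loop that avoids both, reusing the question's variable names; decode() is the asker's helper, and texture_id, the quad drawing, and the GLFW setup are assumed to exist elsewhere:

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libavutil/imgutils.h>
    }
    #include <GLFW/glfw3.h>

    // One-time setup: give gl_frame a real RGB24 buffer and create one
    // SwsContext, instead of allocating a context per decoded frame.
    AVFrame *gl_frame = av_frame_alloc();
    av_image_alloc(gl_frame->data, gl_frame->linesize,
                   codec_context->width, codec_context->height,
                   AV_PIX_FMT_RGB24, 1);

    SwsContext *sws_context = sws_getContext(
        codec_context->width, codec_context->height, codec_context->pix_fmt,
        codec_context->width, codec_context->height, AV_PIX_FMT_RGB24,
        SWS_BICUBIC, nullptr, nullptr, nullptr);

    glBindTexture(GL_TEXTURE_2D, texture_id);   // texture_id from glGenTextures (assumed)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);      // RGB24 rows are not 4-byte aligned
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps needed
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, codec_context->width,
                 codec_context->height, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);

    while (av_read_frame(format_context, packet) >= 0) {
        if (packet->stream_index == video_stream_index) {
            av_frame = decode(codec_context, av_frame, packet);
            sws_scale(sws_context, av_frame->data, av_frame->linesize, 0,
                      codec_context->height, gl_frame->data, gl_frame->linesize);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                            codec_context->width, codec_context->height,
                            GL_RGB, GL_UNSIGNED_BYTE, gl_frame->data[0]);
            // Draw the textured quad here, then present it:
            glfwSwapBuffers(window);
            glfwPollEvents();
        }
        av_packet_unref(packet);
    }
    sws_freeContext(sws_context);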

  • C/C++ ffmpeg output is low quality and blurry

    27 October 2022, by Turgut

    I've made a program that takes a video file as input, edits it using OpenGL/GLFW, then encodes the edited video. The program works just fine and I get the desired output, but the video quality is really low and I don't know how to adjust it. The editing seems fine, since the display in the GLFW window is high resolution. I don't think it is a scaling problem either, since the program simply reads the pixels of the GLFW window, which is high resolution, and passes them to the encoder.

    


    Here is what the GLFW window looks like when the program is running:

    [screenshot: GLFW window output]

    


    I'm encoding in YUV420P formatting, but the information I'm getting from the glfw window is in RGBA format. I'm getting the data using :

    


    glReadPixels(0, 0,
                 gl_width, gl_height,
                 GL_RGBA, GL_UNSIGNED_BYTE,
                 (GLvoid *) state.glBuffer);


    


    I simply took the muxing.c example from ffmpeg's documentation and edited it slightly, so it looks something like this:

    


    AVFrame* video_encoder::get_video_frame(OutputStream *ost)
    {
        AVCodecContext *c = ost->enc;

        /* check if we want to generate more frames */
        if (av_compare_ts(ost->next_pts, c->time_base,
                          (float) STREAM_DURATION / 1000, (AVRational){ 1, 1 }) > 0)
            return NULL;

        /* when we pass a frame to the encoder, it may keep a reference to it
         * internally; make sure we do not overwrite it here */
        if (av_frame_make_writable(ost->frame) < 0)
            exit(1);

        if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
            /* as we only generate a YUV420P picture, we must convert it
             * to the codec pixel format if needed */
            if (!ost->sws_ctx) {
                ost->sws_ctx = sws_getContext(c->width, c->height,
                                              AV_PIX_FMT_YUV420P,
                                              c->width, c->height,
                                              c->pix_fmt,
                                              SCALE_FLAGS, NULL, NULL, NULL);
                if (!ost->sws_ctx) {
                    fprintf(stderr,
                            "Could not initialize the conversion context\n");
                    exit(1);
                }
            }
    #if __AUDIO_ONLY
            image_for_audio_only(ost->tmp_frame, ost->next_pts, c->width, c->height);
    #endif

            sws_scale(ost->sws_ctx, (const uint8_t * const *) ost->tmp_frame->data,
                      ost->tmp_frame->linesize, 0, c->height, ost->frame->data,
                      ost->frame->linesize);
        } else {
            // This is where I set the information I got from the GLFW window.
            set_frame_yuv_from_rgb(ost->frame, ost->sws_ctx);
        }
        ost->frame->pts = ost->next_pts++;

        return ost->frame;
    }

    void video_encoder::set_frame_yuv_from_rgb(AVFrame *frame, struct SwsContext *sws_context)
    {
        const int in_linesize[1] = { 4 * width };
        //uint8_t* dest[4] = { rgb_data, NULL, NULL, NULL };
        sws_context = sws_getContext(width, height, AV_PIX_FMT_RGBA,
                                     width, height, AV_PIX_FMT_YUV420P,
                                     SWS_BICUBIC, 0, 0, 0);

        sws_scale(sws_context, (const uint8_t * const *) &rgb_data, in_linesize, 0,
                  height, frame->data, frame->linesize);
    }


    


    rgb_data is the buffer I read from the GLFW window. It's simply a uint8_t *.
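    One detail that stands out in set_frame_yuv_from_rgb, for what it's worth: it creates a fresh SwsContext on every call and never frees it, which leaks memory over the length of the video. A small sketch of the usual fix, caching the context in a hypothetical rgb_sws_ctx member (the leak does not explain the blur, but it is worth fixing either way):

    void video_encoder::set_frame_yuv_from_rgb(AVFrame *frame)
    {
        const int in_linesize[1] = { 4 * width };

        // Create the RGBA -> YUV420P converter once and reuse it.
        // rgb_sws_ctx would be a new member of video_encoder (an assumption).
        if (!rgb_sws_ctx)
            rgb_sws_ctx = sws_getContext(width, height, AV_PIX_FMT_RGBA,
                                         width, height, AV_PIX_FMT_YUV420P,
                                         SWS_BICUBIC, NULL, NULL, NULL);

        sws_scale(rgb_sws_ctx, (const uint8_t * const *) &rgb_data, in_linesize, 0,
                  height, frame->data, frame->linesize);
    }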

    


    And at the end of all this, here is what the encoded output looks like when run through MPlayer:

    [screenshot: encoded output in MPlayer]

    


    It's much lower quality compare to the glfw window. How can I improve the quality of the video ?

    


  • ffmpeg on Raspbian to rtmp server - no output

    5 May 2020, by TwoSeven

    I have set up an nginx-rtmp server on a Raspberry Pi and am using it to output to OBS. I can successfully stream from my GoPro 7 and pick up the output in VLC on my phone.

    I have set up a Pi camera on another RPi, and using raspivid I can see the camera video in a window on a small touch display attached to it.

    I have set up ffmpeg with h264/aac support and piped the output of raspivid into it. Apart from a warning saying cur_dts is invalid, ffmpeg appears to be running (its output reports 100k frames so far).

    The issue is that I get no output in VLC (just a spinning icon) when I try to connect to the nginx-rtmp server. I do still get the video window on the RPi screen (which is unexpected).

    



    The command I am using is:

    raspivid -o - -t 0 -w 1920 -h 1080 -fps 25 -b 4000000 -g 50 | ./ffmpeg -loglevel debug -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://<address>/<app>/<fname>

    Does anyone have any pointers as to what might be incorrect? I am not that familiar with ffmpeg beyond a cursory understanding of its parameters and what it does.
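    For what it's worth, one direction to test (an assumption, not a confirmed fix): feed silence from ffmpeg's built-in anullsrc source instead of reading raw PCM from /dev/zero, tell the h264 demuxer the frame rate so it can generate sane timestamps (which may also silence the cur_dts warning), and pass -n to raspivid to disable the local preview window:

    raspivid -o - -t 0 -n -w 1920 -h 1080 -fps 25 -b 4000000 -g 50 | \
      ./ffmpeg -loglevel debug -re \
        -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
        -f h264 -framerate 25 -i - \
        -vcodec copy -acodec aac -ab 128k \
        -f flv rtmp://<address>/<app>/<fname>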


    Regards.
