
Other articles (78)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other modifications (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the following two images for a comparison.
    All you need to do is activate the Chosen plugin (General site configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (13008)

  • Display FFMPEG decoded frame in a GLFW window

    17 June 2020, by Infecto

    I am implementing the client program of a game where the server sends encoded frames of the game to the client (via UDP), while the client decodes them (via FFMPEG) and displays them in a GLFW window. 
    My program has two threads:

    1. Thread 1: renders the content of the uint8_t* variable dataToRender
    2. Thread 2: keeps obtaining frames from the server, decodes them and updates dataToRender accordingly

    Thread 1 does the typical rendering of a GLFW window in a while loop. I have already tried to display some dummy frame data (a completely red frame) and it worked:

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        ...

        glBindTexture(GL_TEXTURE_2D, tex_handle);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, window_width, window_height, 0, GL_RGB, GL_UNSIGNED_BYTE, dataToRender);
        ...
        glfwSwapBuffers(window);
    }

    Thread 2 is where I am having trouble. I am unable to properly store the decoded frame in my dataToRender variable. On top of that, the frame data is originally in YUV format and needs to be converted to RGB. I use FFMPEG's sws_scale for that, which also gives me a "bad dst image pointers" error in the console. Here's the code snippet responsible for that part:

    size_t data_size = frameBuffer.size();  // frameBuffer is a std::vector where I accumulate the frame data chunks
    uint8_t* data = frameBuffer.data();     // convert the vector to a pointer
    picture->format = AV_PIX_FMT_RGB24;
    av_frame_get_buffer(picture, 1);
    while (data_size > 0) {
        int ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
            data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        if (ret < 0) {
            fprintf(stderr, "Error while parsing\n");
            exit(1);
        }
        data += ret;
        data_size -= ret;

        if (pkt->size) {
            swsContext = sws_getContext(
                c->width, c->height, AV_PIX_FMT_YUV420P,
                c->width, c->height, AV_PIX_FMT_RGB24,
                SWS_BILINEAR, NULL, NULL, NULL
            );
            uint8_t* rgb24[1] = { data };
            int rgb24_stride[1] = { 3 * c->width };
            sws_scale(swsContext, rgb24, rgb24_stride, 0, c->height, picture->data, picture->linesize);

            decode(c, picture, pkt, outname);
            // TODO: copy content of picture->data[0] to "dataToRender" maybe?
        }
    }

    I have already tried doing another sws_scale to copy the content to dataToRender and I cannot get rid of the bad dst image pointers error. Any advice or solution to the problem would be greatly appreciated as I have been stuck for days on this.
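
    A minimal sketch of the order of operations that usually avoids this error: decode the packet into a YUV frame first, then hand that decoded frame (not the raw bitstream buffer) to sws_scale, with the shared RGB buffer as the destination. c, pkt, swsContext and dataToRender are taken from the snippet above; yuvFrame and the buffer size are assumptions.

    // Sketch only (assumes <libavcodec/avcodec.h> and <libswscale/swscale.h>,
    // and that yuvFrame was allocated once with av_frame_alloc()).
    if (pkt->size) {
        if (avcodec_send_packet(c, pkt) < 0) { /* handle error */ }

        while (avcodec_receive_frame(c, yuvFrame) == 0) {
            // Convert from the decoder's pixel format to packed RGB24.
            swsContext = sws_getCachedContext(swsContext,
                c->width, c->height, (AVPixelFormat)yuvFrame->format,
                c->width, c->height, AV_PIX_FMT_RGB24,
                SWS_BILINEAR, NULL, NULL, NULL);

            // Destination: the RGB buffer shared with the render thread.
            // It is assumed to hold at least c->width * c->height * 3 bytes,
            // and access to it should be synchronized (mutex or double buffering).
            uint8_t* dst[1]       = { dataToRender };
            int      dstStride[1] = { 3 * c->width };

            sws_scale(swsContext, yuvFrame->data, yuvFrame->linesize,
                      0, c->height, dst, dstStride);
        }
    }

    On the rendering side, the existing glTexImage2D call should then match the packed RGB24 layout, provided the texture dimensions equal c->width x c->height (another assumption) and glPixelStorei(GL_UNPACK_ALIGNMENT, 1) is set, since RGB24 rows are not necessarily 4-byte aligned.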

    


  • Set "start" field to 0 in mp3 with ffmpeg

    11 June 2020, by TomatoCo

    I'm trying to change the bitrate and sample rate of an MP3 to match another to try and stop a small audio glitch from occurring when some game tries to play it. I've got the sample rate and bitrate right where I want them, but I can't get the "start" portion of

    



      Duration: 00:03:33.81, start: 0.025057, bitrate: 196 kb/s
    Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 196 kb/s


    



    to go to 0, like the mp3 I'm trying to replace. The target looks like:

    



      Duration: 00:06:47.59, start: 0.000000, bitrate: 196 kb/s
    Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 196 kb/s


    



    I've tried a variety of silenceremove filters and -ss flags to try and trim it, but I can't get rid of that "start" field. Google is failing me. What args am I looking for?
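
    For what it's worth, that start value on an MP3 usually comes from the encoder-delay information in the Xing/LAME header rather than from actual leading audio, and the target file likely just lacks that header. One thing that may be worth trying (a guess based on the two outputs shown, not a verified fix) is re-encoding with the sample rate and bitrate you already settled on while telling the MP3 muxer not to write the Xing header:

    ffmpeg -i input.mp3 -ar 44100 -b:a 192k -c:a libmp3lame -write_xing 0 output.mp3

    Here 192k is only a stand-in for whatever bitrate you chose. Without the Xing header, ffmpeg will typically report start: 0.000000, at the cost of losing the gapless-playback and exact-duration metadata that header carries.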

    


  • FFMPEG: How to extract a PNG sequence from a video, remove duplicate frames in the process and keep the original frame number?

    16 May 2020, by Simon

    I have a recording of an old game which has a variable framerate. Since I want to process individual frames to upscale and modernize the footage, I would like to avoid any duplicate frames. I know that I can use this command to extract all frames from a video:

    



    ffmpeg -i input.mov -r 60/1 out%04d.png


    



    And I know that I can remove duplicate frames using this command:

    



    ffmpeg -i input.mov -vf mpdecimate,setpts=N/FRAME_RATE/TB output.mov


    



    However, the above command removes duplicate frames and packs the remaining frames next to each other, whereas to keep a timecode of sorts it would be a lot more useful to extract PNGs numbered by their original frame number (the video is progressive 60 fps) but without all of the duplicates.

    



    So, the question is: what if I want to extract PNG files but maintain the original corresponding frame number within the sequence? For example, if we have a video with 10 frames and frames 2-8 are duplicates, it should spit out 1.png, 2.png, 9.png and 10.png. How do I combine both bits of code listed above?
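
    A sketch of one way these are commonly combined (hedged: -frame_pts needs a reasonably recent FFmpeg build, and the exact numbers depend on the input timestamps): let mpdecimate drop the duplicates, pass the surviving frames through without rate conversion, and have the image2 muxer name each PNG after its frame PTS instead of a sequential counter:

    ffmpeg -i input.mov -vf mpdecimate -vsync 0 -frame_pts 1 out%04d.png

    With a constant 60 fps input the PTS-based numbers normally correspond to the original frame indices (possibly starting at 0 rather than 1); if the source timebase is finer, the numbers come out as timestamps instead and the %04d pattern may need more digits.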