Advanced search

Media (0)

Word: - Tags -/gis

No media matching your criteria is available on this site.

Other articles (99)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not activated by default when MediaSPIP is initialized.
    Once it is activated, a preconfiguration is applied automatically by MediaSPIP init so that the new feature is immediately operational. No separate configuration step is therefore required.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, beyond those used by the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, which handles registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • Installation in standalone mode

    4 February 2011, by

    Installing the MediaSPIP distribution takes several steps: retrieving the necessary files, for which two methods are possible: installing the ZIP archive containing the whole distribution, or fetching the sources of each module separately via SVN; the preconfiguration; the final installation.
    [mediaspip_zip]Installing the MediaSPIP ZIP archive
    This installation mode is the simplest way to install the whole distribution (...)

On other sites (9713)

  • Set "start" field to 0 in mp3 with ffmpeg

    11 June 2020, by TomatoCo

    I'm trying to change the bitrate and sample rate of an MP3 to match another to try and stop a small audio glitch from occurring when some game tries to play it. I've got the sample rate and bitrate right where I want them, but I can't get the "start" portion of

      Duration: 00:03:33.81, start: 0.025057, bitrate: 196 kb/s
      Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 196 kb/s

    to go to 0, like the mp3 I'm trying to replace. The target looks like:

      Duration: 00:06:47.59, start: 0.000000, bitrate: 196 kb/s
      Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 196 kb/s


    I've tried a variety of silenceremove filters and -ss flags to try to trim it, but I can't get rid of that "start" field. Google is failing me. What args am I looking for?
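
    One possibility that might be worth testing (an assumption on my part, not something stated in the post): the "start" value ffprobe reports for an MP3 usually comes from the encoder delay recorded in the Xing/LAME header, so remuxing the file without that header is a cheap experiment. File names below are placeholders.

      # Sketch, assuming the reported start offset is driven by the Xing/LAME header:
      # remux without re-encoding and without writing a Xing header, then re-check.
      ffmpeg -i input.mp3 -c:a copy -write_xing 0 output.mp3
      ffprobe output.mp3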

    


  • Display FFMPEG decoded frame in a GLFW window

    17 June 2020, by Infecto

    I am implementing the client program of a game where the server sends encoded frames of the game to the client (via UDP), while the client decodes them (via FFMPEG) and displays them in a GLFW window. 
    My program has two threads:

    1. Thread 1: renders the content of the uint8_t* variable dataToRender
    2. Thread 2: keeps obtaining frames from the server, decodes them and updates dataToRender accordingly

    Thread 1 does the typical rendering of a GLFW window in a while-loop. I have already tried to display some dummy frame data (a completely red frame) and it worked:

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        ...

        glBindTexture(GL_TEXTURE_2D, tex_handle);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, window_width, window_height, 0, GL_RGB, GL_UNSIGNED_BYTE, dataToRender);
        ...
        glfwSwapBuffers(window);
    }

    Thread 2 is where I am having trouble. I am unable to properly store the decoded frame into my dataToRender variable. On top of that, the frame data is originally in YUV format and needs to be converted to RGB. I use FFMPEG's sws_scale for that, which also gives me a "bad dst image pointers" error output in the console. Here's the code snippet responsible for that part:

    size_t data_size = frameBuffer.size();  // frameBuffer is a std::vector where I accumulate the frame data chunks
    uint8_t* data = frameBuffer.data();     // convert the vector to a pointer
    picture->format = AV_PIX_FMT_RGB24;
    av_frame_get_buffer(picture, 1);
    while (data_size > 0) {
        int ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
            data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        if (ret < 0) {
            fprintf(stderr, "Error while parsing\n");
            exit(1);
        }
        data += ret;
        data_size -= ret;

        if (pkt->size) {
            swsContext = sws_getContext(
                c->width, c->height,
                AV_PIX_FMT_YUV420P, c->width, c->height,
                AV_PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL
            );
            uint8_t* rgb24[1] = { data };
            int rgb24_stride[1] = { 3 * c->width };
            sws_scale(swsContext, rgb24, rgb24_stride, 0, c->height, picture->data, picture->linesize);

            decode(c, picture, pkt, outname);
            // TODO: copy content of picture->data[0] to "dataToRender" maybe?
        }
    }

    I have already tried doing another sws_scale to copy the content to dataToRender and I cannot get rid of the bad dst image pointers error. Any advice or solution to the problem would be greatly appreciated as I have been stuck for days on this.
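
    For reference, a common ordering with the FFmpeg decoding API is to decode the packet first and only then hand the decoded YUV frame to sws_scale, writing into a separately allocated RGB buffer rather than into the raw input bytes. The sketch below is only an illustration under that assumption; c, pkt and dataToRender follow the question, while the frame, the scaler handle, the mutex and the helper function are invented for the example.

    // Hedged sketch (C++): decode first, then convert the decoded frame to packed RGB24.
    #include <mutex>
    #include <vector>
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>
    }

    std::mutex renderMutex;          // guards dataToRender between the two threads
    std::vector<uint8_t> rgbBuffer;  // packed RGB24 pixels later uploaded with glTexImage2D
    uint8_t* dataToRender = nullptr;

    void decodeAndConvert(AVCodecContext* c, AVPacket* pkt, AVFrame* frame, SwsContext*& sws) {
        if (avcodec_send_packet(c, pkt) < 0)
            return;
        while (avcodec_receive_frame(c, frame) == 0) {
            // (Re)create the scaler from the decoded frame's real size and pixel format.
            sws = sws_getCachedContext(sws,
                                       frame->width, frame->height, (AVPixelFormat)frame->format,
                                       frame->width, frame->height, AV_PIX_FMT_RGB24,
                                       SWS_BILINEAR, nullptr, nullptr, nullptr);

            rgbBuffer.resize(static_cast<size_t>(frame->width) * frame->height * 3);
            uint8_t* dst[1]  = { rgbBuffer.data() };  // destination: the packed RGB buffer
            int dstStride[1] = { 3 * frame->width };

            // Source is the decoded frame's planes, not the still-encoded bitstream buffer.
            sws_scale(sws, frame->data, frame->linesize, 0, frame->height, dst, dstStride);

            std::lock_guard<std::mutex> lock(renderMutex);
            dataToRender = rgbBuffer.data();          // thread 1 reads this pointer
        }
    }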

    


  • avformat/smacker: Improve timestamps

    24 June 2020, by Andreas Rheinhardt
    avformat/smacker: Improve timestamps
    

    A Smacker file can contain up to seven audio tracks. Up until now,
    the pts for the i-th audio packet contained in a Smacker frame was
    simply the end timestamp of the last i-th audio packet contained in
    an earlier Smacker frame.

    The problem with this is that a Smacker stream need not contain data in
    every Smacker frame, and so the current i-th audio packet may come
    from a different underlying stream than the last i-th audio packet
    contained in an earlier frame.

    The sample hypnotix.smk* exhibits this. It has three audio tracks and
    the first of the three has a longer first packet, so that the audio for
    the first track is contained in only 235 packets contained in the first
    235 Smacker frames; the end timestamp of this track is 166696 (about 7.56s
    at a timebase of 1/22050); the other two audio tracks both have 253 packets
    contained in the first 253 Smacker frames. Up until now, the 236th
    packet of the second track being the first audio packet in the 236th
    Smacker frame would get the end timestamp of the last first audio packet
    from the last Smacker frame containing a first audio packet and said
    last audio packet is the first audio packet from the 235th Smacker frame
    from the first audio track, so that the timestamp is 166696. In contrast,
    the 236th packet from the third track (whose packets contain the same number
    of samples as the packets from the second track) has a timestamp of
    156116 (because its timestamp is derived from the end timestamp of the
    235th packet of the second audio track). In the end, the second track
    ended up being 177360/22050 s = 8.044s long; in contrast, the third
    track was 166780/22050 s = 7.56s long, which also coincided with the
    video.

    This commit fixes this by not using timestamps from other tracks for
    a packet's pts.

    *: https://samples.ffmpeg.org/game-formats/smacker/wetlands/hypnotix.smk

    Reviewed-by: Timotej Lazar <timotej.lazar@araneo.si>
    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavformat/smacker.c
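
    As an illustration only (not the actual libavformat code), the fix described above amounts to keeping a separate running end timestamp per audio track and deriving each packet's pts from its own track's counter, never from another track's. The type and function names below are invented for the sketch; the 1/22050 timebase matches the sample discussed in the commit message.

    // Illustrative per-track pts bookkeeping, in the spirit of the commit above.
    #include <array>
    #include <cstdint>

    struct TrackState {
        int64_t next_pts = 0;  // end timestamp of this track's previous packet (timebase 1/22050)
    };

    std::array<TrackState, 7> audioTracks;  // a Smacker file can contain up to seven audio tracks

    // Assign the pts of the audio packet belonging to track `idx` in the current Smacker frame.
    int64_t assignPts(int idx, int64_t nbSamples) {
        int64_t pts = audioTracks[idx].next_pts;      // use only this track's previous end time
        audioTracks[idx].next_pts = pts + nbSamples;  // advance only this track's counter
        return pts;
    }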