
Media (91)

Other articles (103)

  • MediaSPIP automatic installation script

    25 April 2011, by

    To work around installation difficulties caused mainly by server-side software dependencies, an all-in-one bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a "root" account, which makes it possible to install the dependencies. Contact your hosting provider if you do not have this.
    The documentation on using the installation script (...)

  • Managing forums

    3 November 2011, by

    If forums are enabled on the site, administrators can manage them either from the administration interface or from the article itself, in the article editing block found in the page navigation.
    Accessing the message moderation interface
    Once logged in to the site, an administrator can manage the forums in two ways.
    If they want to edit (moderate, flag a message as SPAM) the forums of a particular article, they have at their (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

On other sites (9036)

  • How to prevent the video generated by ffmpeg from flickering when converting from RGB to YUV using sws_scale()?

    30 May 2019, by LeFrosch

    I am generating images and loading them as RGBA bitmaps into my C++ program. Then I want to generate a video from those images with ffmpeg. As far as I have found out, I need to convert them to YUV using sws_scale(). Everything works fine except that the generated video flickers.

    I have already tried changing the sws_getContext() flags. SWS_BILINEAR | SWS_ACCURATE_RND prevents some of the flickering, but not all of it. I also tried changing my EncodeVideo method, but that did not fix it either.

    // creating the SwsContext
    this->data->convertContext = sws_getContext(
               data->outputCodecContext->width,
               data->outputCodecContext->height,
               IN_PXEL_FORMAT, // libffmpeg::AV_PIX_FMT_RGB24
               data->outputCodecContext->width,
               data->outputCodecContext->height,
               PIXEL_FORMAT, // libffmpeg::AV_PIX_FMT_YUV420P
               SWS_BILINEAR | SWS_ACCURATE_RND,
               NULL, NULL, NULL);

    // the bitmap which contains the rgb image
    Bitmap^ frame = generateImage();

    // copying the bitmap to the libffmpeg::AVFrame
    for (int x = 0; x < width; x++)
    {
       for (int y = 0; y < height; y++)
       {
           Color c = frame->GetPixel(x, y);
           data->outputFrameRGB->data[0][y * data->outputFrameRGB->linesize[0] + 3 * x] = c.R;
           data->outputFrameRGB->data[0][y * data->outputFrameRGB->linesize[0] + 3 * x + 1] = c.G;
           data->outputFrameRGB->data[0][y * data->outputFrameRGB->linesize[0] + 3 * x + 2] = c.B;
       }
    }

    // converting the rgb frame to the yuv frame
     libffmpeg::sws_scale(data->convertContext,
                          data->outputFrameRGB->data, data->outputFrameRGB->linesize,
                          0, height,
                          data->outputFrameYUV->data, data->outputFrameYUV->linesize);

    for (int i = 0; i < time.TotalSeconds * frameRate; i++)
    {
       data->outputFrameYUV->pts = data->frameCount++;

       Helper::EncodeVideo(data->outputCodecContext, data->outputFrameYUV, data->packet, data->outputFile);
    }

    The method that encodes the video:

    void EncodeVideo(libffmpeg::AVCodecContext *context, libffmpeg::AVFrame *frame, libffmpeg::AVPacket *packet, libffmpeg::FILE *file)
    {
       int ret = 0;

       /* Send the frame to the encoder */
       ret = libffmpeg::avcodec_send_frame(context, frame);
       if (ret < 0)
       {
           ThrowError("Could not send frame to the encoder.", ret);
       }

       while (ret >= 0)
       {
           ret = libffmpeg::avcodec_receive_packet(context, packet);
           if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
           {
               return;
           }
           else if (ret < 0)
           {
               ThrowError("Error during encoding.", ret);
           }

           libffmpeg::fwrite(packet->data, 1, packet->size, file);
           libffmpeg::av_packet_unref(packet);
       }
    }

    The original image and a screenshot of the video: in the screenshot, the area around the text is blurry, which causes a flickering effect during playback. How can I prevent this from happening?
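
    A note on the send/receive encoding API shown above: EncodeVideo never drains the encoder, so packets still buffered inside it are never written to the file. Below is a minimal flush sketch that reuses the question's own naming (the libffmpeg namespace and helper style are taken from the code above); whether this is related to the flickering is not established.

     // Hypothetical sketch, mirroring the question's conventions: once the last
     // frame has been sent, a NULL frame puts the encoder into draining mode and
     // the remaining packets are received until it reports end of stream.
     void FlushEncoder(libffmpeg::AVCodecContext *context, libffmpeg::AVPacket *packet, libffmpeg::FILE *file)
     {
        libffmpeg::avcodec_send_frame(context, NULL); // NULL frame = enter draining mode

        while (true)
        {
            int ret = libffmpeg::avcodec_receive_packet(context, packet);
            if (ret < 0) // AVERROR_EOF once the encoder is fully drained
            {
                break;
            }

            libffmpeg::fwrite(packet->data, 1, packet->size, file);
            libffmpeg::av_packet_unref(packet);
        }
     }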

  • Insert Random silence multiple times between Audios in ffmpeg

    23 September 2021, by Gracie williams

    Suppose I have an MP3 of around 1 minute duration and I want to make it longer. The duration can be anything.

    


    I want to randomly add N seconds of silence to that audio.

    


    I can generate the silence like below

    


     ffmpeg -f lavfi -i anullsrc=r=44100:cl=mono -t <seconds> -q:a 9 -acodec libmp3lame out.mp3


    And I can insert silence into the audio like below


     ffmpeg -i in.wav -filter_complex "anullsrc,atrim=0:4[s];[0]atrim=0:14[a];[0]atrim=14,asetpts=N/SR/TB[b];[a][s][b]concat=3:v=0:a=1" out.wav


    Is there any way to insert silence at multiple points? With PHP I can generate random insert times for a 60-second audio file, like below.


     $arr = [];
     $arr[3] = 2;   // here 3 is the original audio insert time and 2 is the duration of silence
     $arr[6] = 3;   // here 6 is the original audio insert time and 3 is the duration of silence
     $arr[10] = 1;
     $arr[12] = 3;


    Now I want to insert silence based on the above array, like:


     2 seconds of silence at the 3rd second
     3 seconds of silence at the 6th second
     1 second of silence at the 10th second


    and so on.

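    The single-insert filter above generalises to any number of insert points by chaining more atrim/anullsrc segments into one concat. Below is a small sketch that builds such a -filter_complex string from the same time/duration pairs as the PHP array above; the helper name build_silence_filter is illustrative, and the mono, 44100 Hz anullsrc parameters are assumptions that must match the input audio.

     // Hypothetical sketch: build a single -filter_complex string that cuts the
     // original audio into pieces with atrim and splices anullsrc-generated
     // silence between them, then concatenates everything. The helper name and
     // the mono/44100 Hz silence parameters are assumptions.
     #include <iostream>
     #include <map>
     #include <sstream>
     #include <string>

     std::string build_silence_filter(const std::map<int, int> &inserts)
     {
         std::ostringstream segments, concat;
         int label = 0, prev = 0;

         for (auto [at, dur] : inserts)
         {
             // Piece of the original audio up to the insert point.
             segments << "[0]atrim=" << prev << ":" << at
                      << ",asetpts=N/SR/TB[a" << label << "];";
             // Generated silence of the requested duration.
             segments << "anullsrc=r=44100:cl=mono,atrim=0:" << dur
                      << "[s" << label << "];";
             concat << "[a" << label << "][s" << label << "]";
             prev = at;
             ++label;
         }

         // Remainder of the original audio after the last insert point.
         segments << "[0]atrim=" << prev << ",asetpts=N/SR/TB[a" << label << "]";
         concat << "[a" << label << "]";

         int n = 2 * label + 1; // audio pieces plus silence pieces
         return segments.str() + ";" + concat.str()
              + "concat=n=" + std::to_string(n) + ":v=0:a=1[out]";
     }

     int main()
     {
         // Same pairs as the PHP example: silence duration keyed by insert time.
         std::map<int, int> inserts = { {3, 2}, {6, 3}, {10, 1}, {12, 3} };

         std::cout << "ffmpeg -i in.wav -filter_complex \""
                   << build_silence_filter(inserts)
                   << "\" -map \"[out]\" out.wav\n";
         return 0;
     }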

  • Fast seeking ffmpeg multiple times for screenshots

    7 March 2017, by user3786834

    I have come across http://askubuntu.com/questions/377579/ffmpeg-output-screenshot-gallery/377630#377630, and it is perfect; it does exactly what I wanted.

    However, I am using remote URLs to generate the screenshot timeline. I know it is possible to fast-seek within remote files, as described at https://trac.ffmpeg.org/wiki/Seeking%20with%20FFmpeg (by putting -ss before the -i), but this only runs once.

    I’m looking for a way to use the

    ./ffmpeg -i input -vf "select=gt(scene\,0.4),scale=160:-1,tile,scale=600:-1" \
    -frames:v 1 -qscale:v 3 preview.jpg

    command, but with the fast-seek method, since it is currently very slow when used with a remote file. I use PHP, and I am aware that a C approach exists using av_seek_frame, but I barely know C, so I cannot implement it in the PHP script I am writing. Hopefully it is possible to do this directly with ffmpeg via PHP's system() function.

    Currently, I run separate ffmpeg commands (with the -ss method) and then combine the screenshots in PHP. However, this refetches the metadata each time; a more optimized approach would be to do it all in a single command, because I want to reduce the number of requests made to the remote URL so I can run more scripts in sequence.

    Thank you for your help.
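
    Since the question mentions av_seek_frame, here is a rough sketch of that route: open the remote URL once, then seek to each timestamp on the already-open input so the metadata is only fetched once. This illustrates the libavformat/libavcodec API rather than the asker's PHP setup; the helper name grab_frame_at and the example timestamps are made up, and the scaling/JPEG step is left out.

     // Hypothetical sketch: one avformat_open_input() for the remote URL, then
     // av_seek_frame() once per requested timestamp, decoding a single frame each
     // time. Error handling is minimal and the JPEG encoding step is omitted.
     extern "C" {
     #include <libavcodec/avcodec.h>
     #include <libavformat/avformat.h>
     }

     static AVFrame *grab_frame_at(AVFormatContext *fmt, AVCodecContext *dec,
                                   int stream_index, double seconds)
     {
         AVStream *st = fmt->streams[stream_index];
         int64_t target = (int64_t)(seconds / av_q2d(st->time_base));

         // Jump to the nearest keyframe before the target and reset the decoder.
         if (av_seek_frame(fmt, stream_index, target, AVSEEK_FLAG_BACKWARD) < 0)
             return nullptr;
         avcodec_flush_buffers(dec);

         AVPacket *pkt = av_packet_alloc();
         AVFrame *frame = av_frame_alloc();

         while (av_read_frame(fmt, pkt) >= 0)
         {
             if (pkt->stream_index == stream_index)
             {
                 avcodec_send_packet(dec, pkt);
                 if (avcodec_receive_frame(dec, frame) == 0 &&
                     frame->best_effort_timestamp >= target)
                 {
                     av_packet_unref(pkt);
                     av_packet_free(&pkt);
                     return frame; // caller releases it with av_frame_free()
                 }
             }
             av_packet_unref(pkt);
         }

         av_packet_free(&pkt);
         av_frame_free(&frame);
         return nullptr;
     }

     int main(int argc, char **argv)
     {
         if (argc < 2)
             return 1;

         AVFormatContext *fmt = nullptr;
         if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) < 0)
             return 1;
         avformat_find_stream_info(fmt, nullptr);

         const AVCodec *codec = nullptr;
         int video = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);
         AVCodecContext *dec = avcodec_alloc_context3(codec);
         avcodec_parameters_to_context(dec, fmt->streams[video]->codecpar);
         avcodec_open2(dec, codec, nullptr);

         double positions[] = { 10.0, 60.0, 120.0 }; // example timestamps in seconds
         for (double t : positions)
         {
             AVFrame *frame = grab_frame_at(fmt, dec, video, t);
             if (frame)
             {
                 // ... scale the frame and encode it to JPEG here ...
                 av_frame_free(&frame);
             }
         }

         avcodec_free_context(&dec);
         avformat_close_input(&fmt);
         return 0;
     }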