Advanced search

Media (1)

Keyword: - Tags -/remix

Other articles (75)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image;

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To do so, enable the Chosen plugin (Site general configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "media" article;

On other sites (7515)

  • FFmpeg hangs on buffer fill c++

    2 March 2021, by Derek Halden

    I'm processing a bunch of frames from an RTSP stream using ffmpeg. I end up doing a lot of processing on these frames, which means that I'm not always pulling in real time. If the buffer gets full, the process hangs. I'm wondering whether one of the following solutions is feasible and would fix the problem, and if so, how I would implement it using the ffmpeg libraries:

    1) Is there a way to clear the buffer if I ever reach a point where it's hanging? (I can determine when it's hung; I just don't know what to do about it.)

    2) Is there a way to make the buffer overwrite the old data, and just always read the most recent data? It doesn't matter to me if I lose frames.

    3) I've already discovered that I can make the buffer arbitrarily large with av_dict_set(&avd, "buffer_size", "655360", 0); (a sketch of how this option is passed in is given after this list). This could be a solution, but I don't know how large or small it needs to be, because I don't know how long the stream will keep posting video.

    4) Is this just a bug that I need to bring up with the FFmpeg people?

    5) Something else I haven't considered?
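
    Regarding option 3, here is a minimal sketch of how such an options dictionary could be handed to avformat_open_input (it assumes the stream URL is in url; the rtsp_transport line and the exact values are illustrative assumptions, not a confirmed fix):

    AVDictionary *avd = NULL;
    av_dict_set(&avd, "buffer_size", "655360", 0);   // UDP receive buffer size, in bytes
    av_dict_set(&avd, "rtsp_transport", "tcp", 0);   // optional: trade latency for reliability

    AVFormatContext *context = NULL;
    if (avformat_open_input(&context, url, NULL, &avd) < 0) {
        cerr << "Could not open " << url << endl;
    }
    av_dict_free(&avd);   // entries not consumed by the demuxer remain here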

    while(av_read_frame(context, &packet) >= 0 && fcount < fps*SECONDS) {
      clock_t start, end;   // (unused timing variables kept from the original)

      // Only feed packets from the video stream to the decoder,
      // and unreference skipped packets so they are not leaked.
      if(packet.stream_index != video_stream_index) {
        av_packet_unref(&packet);
        continue;
      }

      int ret = avcodec_send_packet(codec_context, &packet);
      if (ret == AVERROR(EAGAIN) || ret == AVERROR(EINVAL)) {
        av_packet_unref(&packet);
        continue;
      } else if (ret < 0) {
        cerr << "Error while decoding frame " << fcount << endl;
        exit(1);
      }

      ret = avcodec_receive_frame(codec_context, frame);
      if (ret == AVERROR(EAGAIN) || ret == AVERROR(EINVAL)) {
        av_packet_unref(&packet);
        continue;
      } else if (ret < 0) {
        cerr << "Error while decoding frame " << fcount << endl;
        exit(1);
      }

      // These allocation checks belong before the loop; they are kept here
      // but moved ahead of any use of the pointers they validate.
      if(!frame) {
        cerr << "Could not allocate video frame" << endl;
        exit(1);
      }
      if(codec_context == NULL) {
        cerr << "Cannot initialize the conversion context!" << endl;
        exit(1);
      }

      // Convert the decoded frame to RGB
      sws_scale(img_convert_ctx, frame->data, frame->linesize, 0,
                codec_context->height, picture_rgb->data, picture_rgb->linesize);

      // Do something with the frame here

      fcount++;
      av_packet_unref(&packet);
    }

    I have added the code that causes the program to hang.

  • Setting ffmpeg properly in ubuntu 16.04

    22 August 2017, by pro neon

    I am following this website for an ffmpeg tutorial: http://dranger.com
    I tried to compile the programs after setting up ffmpeg in Ubuntu by following some online videos, but none of them worked. Sometimes GCC gives me an undefined reference error and sometimes a header-not-found error. I looked at some answers on SO saying that we need to make some changes to the code because the new API is not backwards compatible, but GCC still gives me undefined reference errors.
    Here is the code that I am trying to compile:

       // tutorial01.c
    // Code based on a tutorial by Martin Bohme (boehme@inb.uni-luebeckREMOVETHIS.de)
    // Tested on Gentoo, CVS version 5/01/07 compiled with GCC 4.1.1
    // With updates from https://github.com/chelyaev/ffmpeg-tutorial
    // Updates tested on:
    // LAVC 54.59.100, LAVF 54.29.104, LSWS 2.1.101
    // on GCC 4.7.2 in Debian February 2015

    // A small sample program that shows how to use libavformat and libavcodec to
    // read video from a file.
    //
    // Use
    //
    // gcc -o tutorial01 tutorial01.c -lavformat -lavcodec -lswscale -lz
    //
    // to build (assuming libavformat and libavcodec are correctly installed
    // on your system).
    //
    // Run using
    //
    // tutorial01 myvideofile.mpg
    //
    // to write the first five frames from "myvideofile.mpg" to disk in PPM
    // format.

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>

    #include <stdio.h>

    // compatibility with newer API
    #if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55,28,1)
    #define av_frame_alloc avcodec_alloc_frame
    #define av_frame_free avcodec_free_frame
    #endif

    void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame) {
     FILE *pFile;
     char szFilename[32];
     int  y;

     // Open file
     sprintf(szFilename, "frame%d.ppm", iFrame);
     pFile=fopen(szFilename, "wb");
     if(pFile==NULL)
       return;

     // Write header
     fprintf(pFile, "P6\n%d %d\n255\n", width, height);

     // Write pixel data
     for(y=0; y<height; y++)
       fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

     // Close file
     fclose(pFile);
    }

    int main(int argc, char *argv[]) {
     // Initalizing these to NULL prevents segfaults!
     AVFormatContext   *pFormatCtx = NULL;
     int               i, videoStream;
     AVCodecContext    *pCodecCtxOrig = NULL;
     AVCodecContext    *pCodecCtx = NULL;
     AVCodec           *pCodec = NULL;
     AVFrame           *pFrame = NULL;
     AVFrame           *pFrameRGB = NULL;
     AVPacket          packet;
     int               frameFinished;
     int               numBytes;
     uint8_t           *buffer = NULL;
     struct SwsContext *sws_ctx = NULL;

     if(argc < 2) {
       printf("Please provide a movie file\n");
       return -1;
     }
     // Register all formats and codecs
     av_register_all();

     // Open video file
     if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0)
       return -1; // Couldn't open file

     // Retrieve stream information
     if(avformat_find_stream_info(pFormatCtx, NULL)<0)
       return -1; // Couldn't find stream information

     // Dump information about file onto standard error
     av_dump_format(pFormatCtx, 0, argv[1], 0);

     // Find the first video stream
     videoStream=-1;
     for(i=0; i<pFormatCtx->nb_streams; i++)
       if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
         videoStream=i;
         break;
       }
     if(videoStream==-1)
       return -1; // Didn't find a video stream

     // Get a pointer to the codec context for the video stream
     pCodecCtxOrig=pFormatCtx->streams[videoStream]->codec;
     // Find the decoder for the video stream
     pCodec=avcodec_find_decoder(pCodecCtxOrig->codec_id);
     if(pCodec==NULL) {
       fprintf(stderr, "Unsupported codec!\n");
       return -1; // Codec not found
     }
     // Copy context
     pCodecCtx = avcodec_alloc_context3(pCodec);
     if(avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0) {
       fprintf(stderr, "Couldn't copy codec context");
       return -1; // Error copying codec context
     }

     // Open codec
     if(avcodec_open2(pCodecCtx, pCodec, NULL)<0)
       return -1; // Could not open codec

     // Allocate video frame
     pFrame=av_frame_alloc();

     // Allocate an AVFrame structure
     pFrameRGB=av_frame_alloc();
     if(pFrameRGB==NULL)
       return -1;

     // Determine required buffer size and allocate buffer
     numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
                     pCodecCtx->height);
     buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

     // Assign appropriate parts of buffer to image planes in pFrameRGB
     // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
     // of AVPicture
     avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
            pCodecCtx->width, pCodecCtx->height);

     // initialize SWS context for software scaling
     sws_ctx = sws_getContext(pCodecCtx->width,
                  pCodecCtx->height,
                  pCodecCtx->pix_fmt,
                  pCodecCtx->width,
                  pCodecCtx->height,
                  PIX_FMT_RGB24,
                  SWS_BILINEAR,
                  NULL,
                  NULL,
                  NULL
                  );

     // Read frames and save first five frames to disk
     i=0;
     while(av_read_frame(pFormatCtx, &packet)>=0) {
       // Is this a packet from the video stream?
       if(packet.stream_index==videoStream) {
         // Decode video frame
         avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

         // Did we get a video frame?
         if(frameFinished) {
       // Convert the image from its native format to RGB
       sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
             pFrame->linesize, 0, pCodecCtx->height,
             pFrameRGB->data, pFrameRGB->linesize);

       // Save the frame to disk
       if(++i<=5)
         SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height,
               i);
         }
       }

       // Free the packet that was allocated by av_read_frame
       av_free_packet(&packet);
     }

     // Free the RGB image
     av_free(buffer);
     av_frame_free(&pFrameRGB);

     // Free the YUV frame
     av_frame_free(&pFrame);

     // Close the codecs
     avcodec_close(pCodecCtx);
     avcodec_close(pCodecCtxOrig);

     // Close the video file
     avformat_close_input(&pFormatCtx);

     return 0;
    }

    This is the command I use to compile:

    gcc -o tutorial01 tutorial01.c -lavformat -lavcodec -lswscale -lz
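
    For reference, undefined reference errors with this command are often a matter of missing development packages or of how the libraries are linked rather than the code itself. As a hedged sketch, assuming the Ubuntu packages libavformat-dev, libavcodec-dev and libswscale-dev are installed, one way to sidestep the flag and link-order guessing is to let pkg-config supply them:

    gcc -o tutorial01 tutorial01.c $(pkg-config --cflags --libs libavformat libavcodec libswscale) -lz
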
  • FFmpeg: Jpeg file to AVFrame

    1 January 2020, by darja

    I need to join several jpg files into a video using the FFmpeg library. But I have a problem with reading these files. Here is a function which reads an image file and makes an AVFrame:

    AVFrame* OpenImage(const char* imageFileName)
    {
       AVFormatContext *pFormatCtx;

       if(av_open_input_file(&pFormatCtx, imageFileName, NULL, 0, NULL)!=0)
       {
           printf("Can't open image file '%s'\n", imageFileName);
           return NULL;
       }      

       dump_format(pFormatCtx, 0, imageFileName, false);

       AVCodecContext *pCodecCtx;

       pCodecCtx = pFormatCtx->streams[0]->codec;
       pCodecCtx->width = W_VIDEO;
       pCodecCtx->height = H_VIDEO;
       pCodecCtx->pix_fmt = PIX_FMT_YUV420P;

       // Find the decoder for the video stream
       AVCodec *pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
       if (!pCodec)
       {
           printf("Codec not found\n");
           return NULL;
       }

       // Open codec
       if(avcodec_open(pCodecCtx, pCodec)<0)
       {
           printf("Could not open codec\n");
           return NULL;
       }

       //
       AVFrame *pFrame;

       pFrame = avcodec_alloc_frame();

       if (!pFrame)
       {
           printf("Can't allocate memory for AVFrame\n");
           return NULL;
       }

       int frameFinished;
       int numBytes;

       // Determine required buffer size and allocate buffer
       numBytes = avpicture_get_size(PIX_FMT_YUVJ420P, pCodecCtx->width, pCodecCtx->height);
       uint8_t *buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));

       avpicture_fill((AVPicture *) pFrame, buffer, PIX_FMT_YUVJ420P, pCodecCtx->width, pCodecCtx->height);

       // Read frame

       AVPacket packet;

       int framesNumber = 0;
       while (av_read_frame(pFormatCtx, &packet) >= 0)
       {
           if(packet.stream_index != 0)
               continue;

           int ret = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
           if (ret > 0)
           {
               printf("Frame is decoded, size %d\n", ret);
               pFrame->quality = 4;
               return pFrame;
           }
           else
               printf("Error [%d] while decoding frame: %s\n", ret, strerror(AVERROR(ret)));
       }

       // No frame could be decoded; return failure explicitly
       return NULL;
    }

    This causes no error but produces only a black frame, no image. What is wrong?
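
    One thing that might be worth checking (a sketch under assumptions, not a confirmed diagnosis): with avcodec_decode_video2 the decoder fills pFrame using its own buffers and its own pixel format (typically PIX_FMT_YUVJ420P for JPEG), so the buffer attached earlier with avpicture_fill is not necessarily where the decoded image ends up. Below is a minimal sketch, using the same old API as the question, of converting the decoded frame into a separately allocated YUV420P frame before using it; pFrameDst, dstBuffer and sws are illustrative names, not part of the original code:

    AVFrame *pFrameDst = avcodec_alloc_frame();
    int dstBytes = avpicture_get_size(PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
    uint8_t *dstBuffer = (uint8_t *) av_malloc(dstBytes);
    avpicture_fill((AVPicture *) pFrameDst, dstBuffer, PIX_FMT_YUV420P,
                   pCodecCtx->width, pCodecCtx->height);

    // Convert from the decoder's reported format to the target format
    struct SwsContext *sws = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                                            pCodecCtx->pix_fmt,
                                            pCodecCtx->width, pCodecCtx->height,
                                            PIX_FMT_YUV420P, SWS_BILINEAR,
                                            NULL, NULL, NULL);
    sws_scale(sws, (const uint8_t * const *) pFrame->data, pFrame->linesize,
              0, pCodecCtx->height, pFrameDst->data, pFrameDst->linesize);
    sws_freeContext(sws);

    If the frame is ultimately meant for an encoder expecting W_VIDEO x H_VIDEO, the destination size in sws_getContext can be set to those values instead of the codec's reported dimensions.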