
Other articles (104)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once enabled, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. A configuration step is therefore not required.

  • Submitting bugs and patches

    10 April 2011

    Unfortunately, no piece of software is ever perfect...
    If you think you have found a bug, report it in our ticket system, taking care to include the relevant details: the browser, and its exact version, in which the anomaly occurs; as precise a description of the problem as possible; if possible, the steps to reproduce it; and a link to the site/page in question.
    If you think you have fixed the bug yourself (...)

On other sites (5747)

  • ffmpeg image<->video conversion causes artefacts

    24 March 2016, by mrgloom

    I want to convert video to images, do some image processing and convert images back to video.

    Here are my commands:

    ./ffmpeg -r 30 -i $VIDEO_NAME "image%d.png"

    ./ffmpeg -r 30 -y -i "image%d.png" output.mpg

    But the output.mpg video has JPEG-like compression artefacts.

    Also, I don't know how to determine the fps; I set fps=30 (-r 30).
    When I use the first command above without -r it produces a very large number of images (> 1kk), but when I use the -r 30 option it produces the same number of images as this command, which counts the number of frames:

    FRAME_COUNT=`./ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 $VIDEO_NAME`

    So my questions are:

    1. How do I determine the frame rate?

    2. How do I convert the images back to video without reducing the original quality?

    UPDATE:

    This seems to have helped, after I removed the -r option:
    Image sequence to video quality

    so the resulting command is:

    ./ffmpeg -y -i "image%d.png" -vcodec mpeg4 -b $BITRATE output_$BITRATE.avi

    but I'm still not sure how to choose the bitrate.

    How can I see the bitrate of the original .mp4 file?
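
    Both questions can be answered with ffprobe itself. A minimal sketch, assuming input.mp4 stands in for $VIDEO_NAME:

    ```shell
    # Ask ffprobe for the first video stream's frame rate and bitrate
    ffprobe -v error -select_streams v:0 \
            -show_entries stream=avg_frame_rate,bit_rate \
            -of default=noprint_wrappers=1 input.mp4

    # avg_frame_rate comes back as a rational such as 30000/1001;
    # convert it to a decimal fps value:
    fps=$(echo "30000/1001" | awk -F/ '{ printf "%.3f", $1 / $2 }')
    echo "$fps"
    ```

    If bit_rate is reported as N/A for the stream, the container-level entry (-show_entries format=bit_rate) usually still carries a value.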

  • Torn images acquired when decoding video frames with FFmpeg

    22 mars 2016, par bot1131357

    I am trying to decode the images using the tutorial at dranger.com. Below is the code I'm working with. The code is pretty much untouched aside from the ppm_save() function and replacing the deprecated functions.

    The program compiled successfully, but when I try to process a video I get a tearing effect like this: image1 and this: image2.

    (Side question: I've tried to replace avpicture_fill(), which is deprecated, with av_image_copy_to_buffer(), but I'm getting an access violation error, so I left it as is. I wonder what the proper way is to assign the frame data to a buffer.)

    The library that I’m using is ffmpeg-20160219-git-98a0053-win32-dev. Would really appreciate it if someone could help me with this.

    // Decode video and save frames

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libavutil/imgutils.h>
    #include <stdio.h>

    char filename[] = "test%0.3d.ppm";
    static void ppm_save(unsigned char *buf, int wrap, int xsize, int ysize,
                        int framenum )
    {

       char filenamestr[sizeof(filename)];
       FILE *f;
       int i;

       sprintf_s(filenamestr, sizeof(filenamestr), filename, framenum);
       fopen_s(&f,filenamestr,"wb"); // binary mode: text mode ("w") corrupts PPM pixel data on Windows
       fprintf(f,"P6\n%d %d\n%d\n",xsize,ysize,255);
       for(i=0;i<ysize;i++)
           fwrite(buf + i*wrap, 1, xsize*3, f);
       fclose(f);
    }

    int main(int argc, char *argv[])
    {
       AVFormatContext *pFormatCtx = NULL;
       AVCodecContext  *codecCtx = NULL;
       AVCodec         *codec = NULL;
       AVFrame         *inframe = NULL;
       AVFrame         *outframe = NULL;
       struct SwsContext *sws_ctx = NULL;
       AVPacket        avpkt;
       int             i, videoStream, frameFinished;

       if(argc < 2) {
           fprintf(stderr, "Please provide a movie file\n");
           return -1;
       }

       // Register all formats and codecs
       av_register_all();

       // Open video file
       if (avformat_open_input(&pFormatCtx, argv[1], NULL, NULL) != 0)
           return -1; // Couldn't open file

       // Retrieve stream information
       if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
           return -1; // Couldn't find stream information

       // Dump information about file onto standard error (Not necessary)
       av_dump_format(pFormatCtx, 0, argv[1], 0);

       // Find the first video stream
       videoStream = -1;
       for (i = 0; i < pFormatCtx->nb_streams; i++)
           if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
               videoStream = i;
               break;
           }
       if (videoStream == -1)
           return -1; // Didn't find a video stream

       /* find the video decoder */
       codec = avcodec_find_decoder(pFormatCtx->streams[videoStream]->codec->codec_id);
       if (!codec) {
           fprintf(stderr, "codec not found\n");
           exit(1);
       }

       codecCtx= avcodec_alloc_context3(codec);
       if(avcodec_copy_context(codecCtx, pFormatCtx->streams[videoStream]->codec) != 0) {
           fprintf(stderr, "Couldn't copy codec context");
           return -1; // Error copying codec context
       }  

       /* open it */
       if (avcodec_open2(codecCtx, codec, NULL) < 0) {
           fprintf(stderr, "could not open codec\n");
           exit(1);
       }

       // Allocate video frame
       inframe= av_frame_alloc();
       if(inframe==NULL)
           return -1;

       // Allocate output frame
       outframe=av_frame_alloc();
       if(outframe==NULL)
           return -1;

       // Determine required buffer size and allocate buffer
       int numBytes=av_image_get_buffer_size(AV_PIX_FMT_RGB24, codecCtx->width,
                       codecCtx->height,1);
       uint8_t* buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

       // Assign appropriate parts of buffer to image planes in outframe
       // Note that outframe is an AVFrame, but AVFrame is a superset
       // of AVPicture


       avpicture_fill((AVPicture *)outframe, buffer, AV_PIX_FMT_RGB24,
            codecCtx->width, codecCtx->height );
       //av_image_copy_to_buffer(buffer, numBytes,
    //                           outframe->data, outframe->linesize,
    //                           AV_PIX_FMT_RGB24, codecCtx->width, codecCtx->height,1);

       // initialize SWS context for software scaling
       sws_ctx = sws_getContext(codecCtx->width,
                  codecCtx->height,
                  codecCtx->pix_fmt,
                  codecCtx->width,
                  codecCtx->height,
                  AV_PIX_FMT_RGB24,
                  SWS_BILINEAR,
                  NULL,
                  NULL,
                  NULL
                  );  


       // av_init_packet(&avpkt);


       i = 0;
       while(av_read_frame(pFormatCtx, &avpkt)>=0) {
           // Is this a packet from the video stream?
           if(avpkt.stream_index==videoStream) {
             // Decode video frame
             avcodec_decode_video2(codecCtx, inframe, &frameFinished, &avpkt);

             // Did we get a video frame?
             if(frameFinished) {
           // Convert the image from its native format to RGB
           sws_scale(sws_ctx, (uint8_t const * const *)inframe->data,
                 inframe->linesize, 0, codecCtx->height,
                 outframe->data, outframe->linesize);

           // Save the frame to disk
           if(++i%15 == 0)
               ppm_save(outframe->data[0], outframe->linesize[0],
                           codecCtx->width, codecCtx->height, i);

             }
           }

       // Free the packet that was allocated by av_read_frame
       av_packet_unref(&avpkt);
       }


       // Free the RGB image
       av_free(buffer);
       av_frame_free(&outframe);

       // Free the original frame
       av_frame_free(&inframe);

       // Close the codecs
       avcodec_close(codecCtx);
       av_free(codecCtx);

       // Close the video file
       avformat_close_input(&pFormatCtx);


       printf("\n");


       return 0;
    }
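
    On the side question about replacing avpicture_fill(): for a packed format such as RGB24, the fill step only has to point plane 0 at the buffer and set its stride to 3 bytes per pixel times the width, which is also why the align=1 argument above matters. Below is a hypothetical pure-C illustration of that layout (fill_rgb24 is an assumed helper name, not part of the FFmpeg API); in real code the equivalent call is av_image_fill_arrays().

    ```c
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical illustration of what avpicture_fill() /
     * av_image_fill_arrays() compute for packed RGB24 with align=1:
     * a single plane whose stride is 3 * width, with no row padding. */
    static size_t fill_rgb24(uint8_t *data[4], int linesize[4],
                             uint8_t *buffer, int width, int height)
    {
        data[0] = buffer;                     /* one packed plane */
        data[1] = data[2] = data[3] = NULL;
        linesize[0] = 3 * width;              /* R, G, B per pixel */
        linesize[1] = linesize[2] = linesize[3] = 0;
        return (size_t)linesize[0] * height;  /* bytes the buffer must hold */
    }
    ```

    For a 640x480 frame this returns 640*480*3 = 921600 bytes, which matches av_image_get_buffer_size(AV_PIX_FMT_RGB24, 640, 480, 1) as used in the code above.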
  • How can we open /dev/video0 (or any v4l2 node) with ffmpeg to capture raw frames and encode them in JPEG format?

    19 March 2016, by satinder

    I am new to the video domain. I am working with ffmpeg: I can use the ffmpeg command line, but using ffmpeg from my own C code is a big issue and challenge for me. I have read some tutorials such as dranger.com, but I am not able to capture from v4l2 or my laptop's /dev/video0 node. I want to capture the raw video stream, overlay it with some text, and then compress it in JPEG format. I have a little working code, shown below; it works for any .mp4 or other encoded file, but not for the /dev/video0 node. Please can anyone help? Thanks in advance!
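
    For the capture part specifically, the ffmpeg CLI maps /dev/video0 through its v4l2 input device, and the drawtext filter can do the text overlay before the JPEG encode. A minimal sketch (the device path, caption text, and output name are placeholders; drawtext needs an ffmpeg build with libfreetype):

    ```shell
    # Grab one frame from the webcam, overlay a caption, encode it as JPEG
    ffmpeg -f v4l2 -i /dev/video0 \
           -vf "drawtext=text='hello':x=10:y=10:fontcolor=white" \
           -frames:v 1 -y out.jpg
    ```

    The same pipeline is reachable from C through libavdevice: call avdevice_register_all(), then pass the format returned by av_find_input_format("v4l2") and the device path "/dev/video0" to avformat_open_input() instead of a file name.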

    Please see the following code snippet, which is tutorial1.c from dranger.com:

     #include <libavcodec/avcodec.h>
     #include <libavformat/avformat.h>
     #include <libswscale/swscale.h>

     #include <stdio.h>

    void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame) {
     FILE *pFile;
     char szFilename[32];
     int  y;

     // Open file
     sprintf(szFilename, "frame%d.ppm", iFrame);
     pFile=fopen(szFilename, "wb");
     if(pFile==NULL)
       return;

     // Write header
     fprintf(pFile, "P6\n%d %d\n255\n", width, height);

     // Write pixel data
      for(y=0; y<height; y++)
        fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

     // Close file
     fclose(pFile);
    }

    int main(int argc, char *argv[]) {
     AVFormatContext *pFormatCtx = NULL;
     int             i, videoStream;
     AVCodecContext  *pCodecCtx = NULL;
     AVCodec         *pCodec = NULL;
     AVFrame         *pFrame = NULL;
     AVFrame         *pFrameRGB = NULL;
     AVPacket        packet;
     int             frameFinished;
     int             numBytes;
     uint8_t         *buffer = NULL;

     AVDictionary    *optionsDict = NULL;
     struct SwsContext      *sws_ctx = NULL;

     if(argc < 2) {
       printf("Please provide a movie file\n");
       return -1;
     }
     // Register all formats and codecs
     av_register_all();

     // Open video file
     if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0)
       return -1; // Couldn't open file

     // Retrieve stream information
     if(avformat_find_stream_info(pFormatCtx, NULL)<0)
       return -1; // Couldn't find stream information

     // Dump information about file onto standard error
     av_dump_format(pFormatCtx, 0, argv[1], 0);

     // Find the first video stream
     videoStream=-1;
     for(i=0; i<pFormatCtx->nb_streams; i++)
       if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
         videoStream=i;
         break;
       }
     if(videoStream==-1)
       return -1; // Didn't find a video stream

     // Get a pointer to the codec context for the video stream
     pCodecCtx=pFormatCtx->streams[videoStream]->codec;

     // Find the decoder for the video stream
     pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
     if(pCodec==NULL) {
       fprintf(stderr, "Unsupported codec!\n");
       return -1; // Codec not found
     }
     // Open codec
     if(avcodec_open2(pCodecCtx, pCodec, &optionsDict)<0)
       return -1; // Could not open codec

     // Allocate video frame
     pFrame=av_frame_alloc();

     // Allocate an AVFrame structure
     pFrameRGB=av_frame_alloc();
     if(pFrameRGB==NULL)
       return -1;

     // Determine required buffer size and allocate buffer
     numBytes=avpicture_get_size(AV_PIX_FMT_RGB24, pCodecCtx->width,
                     pCodecCtx->height);
     buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

     sws_ctx =
       sws_getContext
       (
           pCodecCtx->width,
           pCodecCtx->height,
           pCodecCtx->pix_fmt,
           pCodecCtx->width,
           pCodecCtx->height,
           AV_PIX_FMT_RGB24,
           SWS_BILINEAR,
           NULL,
           NULL,
           NULL
       );

     // Assign appropriate parts of buffer to image planes in pFrameRGB
     // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
     // of AVPicture
     avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_RGB24,
            pCodecCtx->width, pCodecCtx->height);

     // Read frames and save first five frames to disk
     i=0;
     while(av_read_frame(pFormatCtx, &packet)>=0) {
       // Is this a packet from the video stream?
       if(packet.stream_index==videoStream) {
         // Decode video frame
         avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
                  &packet);

         // Did we get a video frame?
         if(frameFinished) {
       // Convert the image from its native format to RGB
           sws_scale
           (
               sws_ctx,
               (uint8_t const * const *)pFrame->data,
               pFrame->linesize,
               0,
               pCodecCtx->height,
               pFrameRGB->data,
               pFrameRGB->linesize
           );

       // Save the frame to disk
        if(++i<=5)
         SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height,
               i);
         }
       }

       // Free the packet that was allocated by av_read_frame
       av_free_packet(&packet);
     }

     // Free the RGB image
     av_free(buffer);
     av_free(pFrameRGB);

     // Free the YUV frame
     av_free(pFrame);

     // Close the codec
     avcodec_close(pCodecCtx);

     // Close the video file
     avformat_close_input(&pFormatCtx);

     return 0;
    }