
Other articles (77)

  • Customising your site by adding a logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image;

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To use it, simply enable the Chosen plugin (site general configuration > plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, meaning: a "media" item is a SPIP article created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a given "media" article;

On other sites (7547)

  • Setting ffmpeg properly in ubuntu 16.04

    22 August 2017, by pro neon

    I am following this website for an FFmpeg tutorial: http://dranger.com
    I tried to compile the programs after setting up FFmpeg on Ubuntu by following some online videos, but none of them worked. Sometimes GCC gives me an undefined reference error, and sometimes a header-not-found error. Some answers on SO said the code needs changes because the new API is not backwards compatible, but GCC still gives me an undefined reference error. (A note on the compile command follows after the code below.)
    Here is the code that I am trying to compile:

       // tutorial01.c
    // Code based on a tutorial by Martin Bohme (boehme@inb.uni-luebeckREMOVETHIS.de)
    // Tested on Gentoo, CVS version 5/01/07 compiled with GCC 4.1.1
    // With updates from https://github.com/chelyaev/ffmpeg-tutorial
    // Updates tested on:
    // LAVC 54.59.100, LAVF 54.29.104, LSWS 2.1.101
    // on GCC 4.7.2 in Debian February 2015

    // A small sample program that shows how to use libavformat and libavcodec to
    // read video from a file.
    //
    // Use
    //
    // gcc -o tutorial01 tutorial01.c -lavformat -lavcodec -lswscale -lz
    //
    // to build (assuming libavformat and libavcodec are correctly installed
    // your system).
    //
    // Run using
    //
    // tutorial01 myvideofile.mpg
    //
    // to write the first five frames from "myvideofile.mpg" to disk in PPM
    // format.

     #include <libavcodec/avcodec.h>
     #include <libavformat/avformat.h>
     #include <libswscale/swscale.h>

     #include <stdio.h>

    // compatibility with newer API
     #if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55,28,1)
    #define av_frame_alloc avcodec_alloc_frame
    #define av_frame_free avcodec_free_frame
    #endif

    void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame) {
     FILE *pFile;
     char szFilename[32];
     int  y;

     // Open file
     sprintf(szFilename, "frame%d.ppm", iFrame);
     pFile=fopen(szFilename, "wb");
     if(pFile==NULL)
       return;

     // Write header
     fprintf(pFile, "P6\n%d %d\n255\n", width, height);

     // Write pixel data
      for(y=0; y<height; y++)
        fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

     // Close file
     fclose(pFile);
    }

    int main(int argc, char *argv[]) {
      // Initializing these to NULL prevents segfaults!
     AVFormatContext   *pFormatCtx = NULL;
     int               i, videoStream;
     AVCodecContext    *pCodecCtxOrig = NULL;
     AVCodecContext    *pCodecCtx = NULL;
     AVCodec           *pCodec = NULL;
     AVFrame           *pFrame = NULL;
     AVFrame           *pFrameRGB = NULL;
     AVPacket          packet;
     int               frameFinished;
     int               numBytes;
     uint8_t           *buffer = NULL;
     struct SwsContext *sws_ctx = NULL;

      if(argc < 2) {
       printf("Please provide a movie file\n");
       return -1;
     }
     // Register all formats and codecs
     av_register_all();

     // Open video file
      if(avformat_open_input(&pFormatCtx, argv[1], NULL, NULL)!=0)
       return -1; // Couldn't open file

     // Retrieve stream information
      if(avformat_find_stream_info(pFormatCtx, NULL)<0)
       return -1; // Couldn't find stream information

     // Dump information about file onto standard error
     av_dump_format(pFormatCtx, 0, argv[1], 0);

     // Find the first video stream
     videoStream=-1;
      for(i=0; i<pFormatCtx->nb_streams; i++)
       if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
         videoStream=i;
         break;
       }
     if(videoStream==-1)
       return -1; // Didn't find a video stream

     // Get a pointer to the codec context for the video stream
     pCodecCtxOrig=pFormatCtx->streams[videoStream]->codec;
     // Find the decoder for the video stream
     pCodec=avcodec_find_decoder(pCodecCtxOrig->codec_id);
     if(pCodec==NULL) {
       fprintf(stderr, "Unsupported codec!\n");
       return -1; // Codec not found
     }
     // Copy context
     pCodecCtx = avcodec_alloc_context3(pCodec);
     if(avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0) {
       fprintf(stderr, "Couldn't copy codec context");
       return -1; // Error copying codec context
     }

     // Open codec
      if(avcodec_open2(pCodecCtx, pCodec, NULL)<0)
       return -1; // Could not open codec

     // Allocate video frame
     pFrame=av_frame_alloc();

     // Allocate an AVFrame structure
     pFrameRGB=av_frame_alloc();
     if(pFrameRGB==NULL)
       return -1;

     // Determine required buffer size and allocate buffer
     numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
                     pCodecCtx->height);
     buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

     // Assign appropriate parts of buffer to image planes in pFrameRGB
     // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
     // of AVPicture
     avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
            pCodecCtx->width, pCodecCtx->height);

     // initialize SWS context for software scaling
     sws_ctx = sws_getContext(pCodecCtx->width,
                  pCodecCtx->height,
                  pCodecCtx->pix_fmt,
                  pCodecCtx->width,
                  pCodecCtx->height,
                  PIX_FMT_RGB24,
                  SWS_BILINEAR,
                  NULL,
                  NULL,
                  NULL
                  );

     // Read frames and save first five frames to disk
     i=0;
      while(av_read_frame(pFormatCtx, &packet)>=0) {
       // Is this a packet from the video stream?
       if(packet.stream_index==videoStream) {
         // Decode video frame
          avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

         // Did we get a video frame?
         if(frameFinished) {
       // Convert the image from its native format to RGB
       sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
             pFrame->linesize, 0, pCodecCtx->height,
             pFrameRGB->data, pFrameRGB->linesize);

       // Save the frame to disk
        if(++i<=5)
         SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height,
               i);
         }
       }

       // Free the packet that was allocated by av_read_frame
        av_free_packet(&packet);
     }

     // Free the RGB image
     av_free(buffer);
      av_frame_free(&pFrameRGB);

     // Free the YUV frame
      av_frame_free(&pFrame);

     // Close the codecs
     avcodec_close(pCodecCtx);
     avcodec_close(pCodecCtxOrig);

     // Close the video file
      avformat_close_input(&pFormatCtx);

     return 0;
    }

    This is the command I use to compile:

    gcc -o tutorial01 tutorial01.c -lavformat -lavcodec -lswscale -lz
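
    One likely cause of the undefined reference errors is a missing library or wrong link order rather than the code itself. As a hedged suggestion (assuming the FFmpeg development packages are installed and ship their pkg-config files), letting pkg-config supply the compile and link flags may help:

     gcc -o tutorial01 tutorial01.c $(pkg-config --cflags --libs libavformat libavcodec libswscale libavutil) -lz -lm

    Note that with GCC the libraries must appear after the source file on the command line, so the object code that references their symbols can resolve them.
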
  • FFmpeg: Jpeg file to AVFrame

    1 January 2020, by darja

    I need to join several jpg files into a video using the FFmpeg library, but I have a problem with reading these files. Here is a function which reads an image file and builds an AVFrame:

    AVFrame* OpenImage(const char* imageFileName)
    {
       AVFormatContext *pFormatCtx;

       if(av_open_input_file(&pFormatCtx, imageFileName, NULL, 0, NULL)!=0)
       {
           printf("Can't open image file '%s'\n", imageFileName);
           return NULL;
       }      

       dump_format(pFormatCtx, 0, imageFileName, false);

       AVCodecContext *pCodecCtx;

       pCodecCtx = pFormatCtx->streams[0]->codec;
       pCodecCtx->width = W_VIDEO;
       pCodecCtx->height = H_VIDEO;
       pCodecCtx->pix_fmt = PIX_FMT_YUV420P;

       // Find the decoder for the video stream
       AVCodec *pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
       if (!pCodec)
       {
           printf("Codec not found\n");
           return NULL;
       }

       // Open codec
       if(avcodec_open(pCodecCtx, pCodec)<0)
       {
           printf("Could not open codec\n");
           return NULL;
       }

       //
       AVFrame *pFrame;

       pFrame = avcodec_alloc_frame();

       if (!pFrame)
       {
           printf("Can't allocate memory for AVFrame\n");
           return NULL;
       }

       int frameFinished;
       int numBytes;

       // Determine required buffer size and allocate buffer
       numBytes = avpicture_get_size(PIX_FMT_YUVJ420P, pCodecCtx->width, pCodecCtx->height);
       uint8_t *buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));

       avpicture_fill((AVPicture *) pFrame, buffer, PIX_FMT_YUVJ420P, pCodecCtx->width, pCodecCtx->height);

       // Read frame

       AVPacket packet;

       int framesNumber = 0;
       while (av_read_frame(pFormatCtx, &packet) >= 0)
       {
           if(packet.stream_index != 0)
               continue;

           int ret = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
           if (ret > 0)
           {
               printf("Frame is decoded, size %d\n", ret);
               pFrame->quality = 4;
               return pFrame;
           }
           else
               printf("Error [%d] while decoding frame: %s\n", ret, strerror(AVERROR(ret)));
       }
    }

    This causes no error but creates only a black frame, no image. What is wrong? (A conversion sketch follows below.)
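
    One thing the function above never does is convert the decoded frame, which the decoder returns in its own pixel format and native size, into the W_VIDEO x H_VIDEO YUV420P layout the buffer was sized for. As a minimal sketch using the same deprecated-era API as the question (pFrameOut here is an assumed second frame, allocated and avpicture_fill'ed like pFrame above), a libswscale conversion would look roughly like:

     // Convert from the decoder's native format/size to the target frame.
     struct SwsContext *sws = sws_getContext(
         pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, // source
         W_VIDEO, H_VIDEO, PIX_FMT_YUV420P,                       // destination
         SWS_BILINEAR, NULL, NULL, NULL);
     if (sws)
     {
         sws_scale(sws, (const uint8_t * const *)pFrame->data, pFrame->linesize,
                   0, pCodecCtx->height,
                   pFrameOut->data, pFrameOut->linesize);
         sws_freeContext(sws);
     }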

  • How can I reencode a video to match another's codec exactly?

    24 January 2020, by Stephen Schrauger

    When I’m on vacation, I usually use our camcorder to record videos. Since they’re all the same format, I can use ffmpeg to concat them into one large, smooth video without re-encoding.

    However, sometimes I will use a phone or other camera to record a video (if the camcorder ran out of space/battery or was left at a hotel).

    I’d like to determine the codec, framerate, etc. used by my camcorder and use those parameters to convert the phone videos into the same format. That way, I will be able to concatenate all the videos without re-encoding the camcorder videos.

    Using ffprobe, I found my camcorder has this encoding:

     Input #0, mpegts, from 'camcorderfile.MTS':
     Duration: 00:00:09.54, start: 1.936367, bitrate: 24761 kb/s
     Program 1
       Stream #0:0[0x1011]: Video: h264 (High) (HDPR / 0x52504448), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
       Stream #0:1[0x1100]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 256 kb/s
       Stream #0:2[0x1200]: Subtitle: hdmv_pgs_subtitle ([144][0][0][0] / 0x0090), 1920x1080

    The phone (iPhone 5s) encoding is:

     Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'mov.MOV':
     Metadata:
       major_brand     : qt  
       minor_version   : 0
       compatible_brands: qt  
       creation_time   : 2017-01-02T03:04:05.000000Z
       com.apple.quicktime.location.ISO6709: +12.3456-789.0123+456.789/
       com.apple.quicktime.make: Apple
       com.apple.quicktime.model: iPhone 5s
       com.apple.quicktime.software: 10.2.1
       com.apple.quicktime.creationdate: 2017-01-02T03:04:05-0700
     Duration: 00:00:14.38, start: 0.000000, bitrate: 11940 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 11865 kb/s, 29.98 fps, 29.97 tbr, 600 tbn, 1200 tbc (default)
       Metadata:
         creation_time   : 2017-01-02T03:04:05.000000Z
         handler_name    : Core Media Data Handler
         encoder         : H.264
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 63 kb/s (default)
       Metadata:
         creation_time   : 2017-01-02T03:04:05.000000Z
         handler_name    : Core Media Data Handler
       Stream #0:2(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
       Metadata:
         creation_time   : 2017-01-02T03:04:05.000000Z
         handler_name    : Core Media Data Handler
       Stream #0:3(und): Data: none (mebx / 0x7862656D), 0 kb/s (default)
       Metadata:
         creation_time   : 2017-01-02T03:04:05.000000Z
         handler_name    : Core Media Data Handler

    I’m presuming that ffmpeg will automatically take any acceptable video format, and that I only need to figure out the output settings. I think I need to use -s 1920x1080 and -pix_fmt yuv420p for the output, but what other flags do I need in order to make the phone video into the same encoding as the camcorder video?

    Can I get some pointers as to how I can translate the ffprobe output into the flags I need to give to ffmpeg? (See the sketch after the edit note below.)

    Edit: Added the entire Input #0 for both media files.
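
    As a rough mapping from the ffprobe fields to encoder flags (video codec > -c:v, frame rate > -r, pixel format > -pix_fmt, audio codec/sample rate/bitrate > -c:a/-ar/-b:a, container > -f or the output extension), a starting-point command, assuming an ffmpeg build with libx264, might look like the sketch below; the output name converted.mts is illustrative, and details such as profile, level, GOP structure and the transport-stream parameters may still need tuning before concatenation without re-encoding works:

     ffmpeg -i mov.MOV -c:v libx264 -profile:v high -pix_fmt yuv420p \
        -s 1920x1080 -r 59.94 -c:a ac3 -ar 48000 -ac 2 -b:a 256k \
        -f mpegts converted.mts

    Concatenating without re-encoding generally requires the streams to match in codec, codec parameters and timebase, so comparing ffprobe output of the converted file against the camcorder file is a reasonable way to verify the result.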