Advanced search

Media (0)

Word: - Tags -/acrobat

No media matching your criteria is available on the site.

Other articles (112)

  • Customize by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (12384)

  • How to save an image from the middle of a video?

    29 October 2017, by puppon -su

    I need to make a thumbnail for a video: seek to the 25% point of the video and save that frame as an image. Here is what I'm doing right now, but it only saves a black image.

    #include <stdio.h>

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    int main (int argc, char **argv)
    {

       av_register_all();

       AVFormatContext *pFormatCtx = avformat_alloc_context();

       int res;

       res = avformat_open_input(&pFormatCtx, "test.mp4", NULL, NULL);
       if (res) {
           return res;
       }


       avformat_find_stream_info(pFormatCtx, NULL);

       int64_t duration = pFormatCtx->duration;


       // Find the first video stream
       int videoStream=-1;
       for(int i=0; i<pFormatCtx->nb_streams; i++) {
           if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
               videoStream=i;
               break;
           }
       }
       if(videoStream==-1) {
           return -1;
       }

       AVCodecContext *pCodecCtxOrig = NULL;

       // Get a pointer to the codec context for the video stream
       pCodecCtxOrig=pFormatCtx->streams[videoStream]->codec;


       AVCodec *pCodec = NULL;
       // Find the decoder for the video stream
       pCodec=avcodec_find_decoder(pCodecCtxOrig->codec_id);
       if(pCodec==NULL) {
           fprintf(stderr, "Unsupported codec!\n");
           return -1; // Codec not found
       }


       AVCodecContext *pCodecCtx = NULL;
       // Copy context
       pCodecCtx = avcodec_alloc_context3(pCodec);
       if(avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0) {
           fprintf(stderr, "Couldn't copy codec context");
           return -1; // Error copying codec context
       }


       // Open codec
       if(avcodec_open2(pCodecCtx, pCodec, NULL)<0) {
           return -1; // Could not open codec
       }


       AVFrame *pFrame = NULL;

       pFrame=av_frame_alloc();

       AVFrame *pFrameRGB = NULL;

       pFrameRGB=av_frame_alloc();



       // Determine required buffer size and allocate buffer
       int numBytes=avpicture_get_size(AV_PIX_FMT_RGB24, pCodecCtx->width,
                                   pCodecCtx->height);

       uint8_t *buffer = NULL;
       buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));


       // Assign appropriate parts of buffer to image planes in pFrameRGB
       // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
       // of AVPicture
       res = avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_RGB24,
                       pCodecCtx->width, pCodecCtx->height);
       if (res<0) {
           return res;
       }



       // I've set this number randomly
       res = av_seek_frame(pFormatCtx, videoStream, 20.0, AVSEEK_FLAG_FRAME);
       if (res<0) {
           return res;
       }



       AVPacket packet;
       while(1) {
           av_read_frame(pFormatCtx, &packet);
           if(packet.stream_index==videoStream) {
               int frameFinished;
               avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
               if(frameFinished) {
                   SaveFrame(pFrameRGB, pCodecCtx->width,
                       pCodecCtx->height);
                   break;
               }

           }
       }


       avformat_close_input(&pFormatCtx);
       return 0;
    }



    void SaveFrame(AVFrame *pFrame, int width, int height) {
     FILE *pFile;
     char szFilename[] = "frame.ppm";
     int  y;

     // Open file
     pFile=fopen(szFilename, "wb");
     if(pFile==NULL)
       return;

     // Write header
     fprintf(pFile, "P6\n%d %d\n255\n", width, height);

     // Write pixel data
     for(y=0; y<height; y++)
       fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

     // Close file
     fclose(pFile);
    }

    I was following this tutorial: http://dranger.com/ffmpeg/tutorial01.html and http://dranger.com/ffmpeg/tutorial07.html. It says it was updated in 2015, but there are already some warnings about deprecated code, for example here: pFormatCtx->streams[i]->codec.

    I got the video duration (in microseconds), but I don't understand what I should pass to av_seek_frame. Can I somehow use a frame number for both duration and seeking, instead of time?

  • FFmpeg - What muxer do I need to save an AAC audio stream

    3 August 2022, by David Barishev

    I'm developing an Android application, and I'm using FFmpeg to convert files.
    I want my binary to be as slim as possible, since I don't have many input and output formats and my operation is quite basic. And of course I don't want to bloat the APK.


    In my program FFmpeg receives a file and copies the audio stream (-acodec copy); the audio stream will always be AAC (mp4a). What I need is to save that stream to a file.
    My command looks like this: ffmpeg -i {Input} -vn -acodec copy output.aac.


    What muxer do I need for muxing AAC to a file? I have tried flv, mp3 and mov, but I always get
    Unable to find a suitable output format for 'output.aac', so these options are wrong.
    I don't need an encoder for stream copy, by the way.


    Side note: this command works flawlessly on a full installation of FFmpeg, but I don't know which muxer it uses. If there is a way to make a regular ffmpeg run print the muxer it picks, that would work too.

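    A pointer that is not part of the original thread: the muxer a full FFmpeg build selects for a bare .aac output is adts (raw AAC files are conventionally wrapped in ADTS frames), so a slim build needs the adts muxer compiled in, or the format forced explicitly. A sketch, assuming a standard ffmpeg binary and an assumed input file name:

```shell
# Ask the build which muxers it contains; a full build lists "adts"
ffmpeg -muxers | grep adts

# Force the ADTS muxer explicitly (useful when extension guessing fails)
ffmpeg -i input.mp4 -vn -acodec copy -f adts output.aac
```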

  • Unable to save Matplotlib animations using ffmpeg

    23 December 2017, by jtpointon

    I have installed ffmpeg, added it to my PATH, and checked that it works from the command prompt, but I am still unable to save animations. To demonstrate, I have created a sine wave that animates when I don't try to save it, but throws an error when I do:

    from __future__ import division

    import numpy as numpy
    from matplotlib import pyplot as pyplot
    from matplotlib import animation

    fig = pyplot.figure()
    ax = pyplot.axes(xlim=(0, 2), ylim=(-2, 2))
    line, = ax.plot([], [], lw=2)

    def init():
       line.set_data([], [])
       return line,

    def animate(i):
       x = numpy.linspace(0, 2, 1000)
       y = numpy.sin(2 * numpy.pi * (x - 0.01 * i))
       line.set_data(x, y)
       return line,

    anim = animation.FuncAnimation(fig, animate, init_func=init, frames=200,
       interval=20, blit=True, repeat=False)

    FFwriter = animation.FFMpegWriter()
    anim.save('animation_testing.mp4', writer = FFwriter)

    pyplot.show()

    When I try to run this it throws the same errors over and over again, I assume as it iterates through each frame:

    Traceback (most recent call last):
     File "c:\users\james\appdata\local\enthought\canopy\user\lib\site-packages\matplotlib\backends\backend_wx.py", line 212, in _on_timer
       TimerBase._on_timer(self)
     File "c:\users\james\appdata\local\enthought\canopy\user\lib\site-packages\matplotlib\backend_bases.py", line 1273, in _on_timer
       ret = func(*args, **kwargs)
     File "c:\users\james\appdata\local\enthought\canopy\user\lib\site-packages\matplotlib\animation.py", line 910, in _step
       still_going = Animation._step(self, *args)
     File "c:\users\james\appdata\local\enthought\canopy\user\lib\site-packages\matplotlib\animation.py", line 769, in _step
       self._draw_next_frame(framedata, self._blit)
     File "c:\users\james\appdata\local\enthought\canopy\user\lib\site-packages\matplotlib\animation.py", line 787, in _draw_next_frame
       self._pre_draw(framedata, blit)
     File "c:\users\james\appdata\local\enthought\canopy\user\lib\site-packages\matplotlib\animation.py", line 800, in _pre_draw
       self._blit_clear(self._drawn_artists, self._blit_cache)
     File "c:\users\james\appdata\local\enthought\canopy\user\lib\site-packages\matplotlib\animation.py", line 840, in _blit_clear
       a.figure.canvas.restore_region(bg_cache[a])
    KeyError:

    Since it mentioned an error in _blit_clear, I tried changing blit to False in FuncAnimation, but then it wouldn't animate in pyplot.show() when I didn't try to save.

    I'm unsure where the error comes from, so I can't work out how to fix it.

    I'm using Windows 10, Python 2.7.6 and matplotlib version 1.4.2.

    Many thanks!
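    Two checks worth adding, neither from the original post: Matplotlib locates ffmpeg through its own animation.ffmpeg_path setting (which on Windows does not always follow the shell PATH), and saving with blit=True is a known source of _blit_clear KeyErrors in Matplotlib versions of that era. A minimal sketch of both checks; the Windows path in the comment is purely illustrative:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; saving needs no window
from matplotlib import animation

# 1) Ask Matplotlib whether it can actually find ffmpeg.
print(animation.FFMpegWriter.isAvailable())

# 2) If that prints False, point Matplotlib at the executable explicitly
#    (illustrative path; substitute your own install location):
# matplotlib.rcParams['animation.ffmpeg_path'] = r'C:\ffmpeg\bin\ffmpeg.exe'

# 3) Old Matplotlib versions mix badly with blit=True when saving;
#    pass blit=False to FuncAnimation for the save, then re-enable
#    blitting for interactive display if needed.
```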