Advanced search

Media (1)

Keyword: - Tags -/sintel

Other articles (82)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To use it, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune those menus.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: The main menu; Identifier: barrenav; This menu is usually inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

  • Farm management

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to balance the needs of the different channels.
    To begin with, it uses the "Gestion de mutualisation" plugin

On other sites (8089)

  • swscaler bad src image pointers

    7 March 2018, by user1496491

    I’m completely lost. I’m trying to capture 30 screenshots and put them into a video with FFmpeg under Windows 10, and it keeps telling me [swscaler @ 073890a0] bad src image pointers. As a result the video is entirely green. If I change the format to dshow using video=screen-capture-recorder, the video is mostly garbage. Here’s my short code for that. I’m completely stuck and don’t even know in which direction to look.

    MainWindow.h

    #ifndef MAINWINDOW_H
    #define MAINWINDOW_H

    #include <QMainWindow>
    #include <QFuture>
    #include <QFutureWatcher>
    #include <QMutex>
    #include <QMutexLocker>

    extern "C" {
    #include "libavcodec/avcodec.h"
    #include "libavcodec/avfft.h"

    #include "libavdevice/avdevice.h"

    #include "libavfilter/avfilter.h"
    #include "libavfilter/avfiltergraph.h"
    #include "libavfilter/buffersink.h"
    #include "libavfilter/buffersrc.h"

    #include "libavformat/avformat.h"
    #include "libavformat/avio.h"

    #include "libavutil/opt.h"
    #include "libavutil/common.h"
    #include "libavutil/channel_layout.h"
    #include "libavutil/imgutils.h"
    #include "libavutil/mathematics.h"
    #include "libavutil/samplefmt.h"
    #include "libavutil/time.h"
    #include "libavutil/opt.h"
    #include "libavutil/pixdesc.h"
    #include "libavutil/file.h"

    #include "libswscale/swscale.h"
    }

    class MainWindow : public QMainWindow
    {
       Q_OBJECT

    public:
       MainWindow(QWidget *parent = 0);
       ~MainWindow();

    private:
       AVFormatContext *inputFormatContext = nullptr;
       AVFormatContext *outFormatContext = nullptr;

       AVStream* videoStream = nullptr;

       AVDictionary* options = nullptr;

       AVCodec* outCodec = nullptr;
       AVCodec* inputCodec = nullptr;
       AVCodecContext* inputCodecContext = nullptr;
       AVCodecContext* outCodecContext = nullptr;
       SwsContext* swsContext = nullptr;

    private:
       void init();
       void initOutFile();
       void collectFrame();
    };

    #endif // MAINWINDOW_H

    MainWindow.cpp

    #include "MainWindow.h"

    #include <QGuiApplication>
    #include <QLabel>
    #include <QScreen>
    #include <QTimer>
    #include <QLayout>
    #include <QImage>
    #include <QtConcurrent>
    #include <QThreadPool>
    #include <QDebug>

    #include "ScreenCapture.h"

    MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent)
    {
       resize(800, 600);

       auto label = new QLabel();
       label->setAlignment(Qt::AlignHCenter | Qt::AlignVCenter);

       auto layout = new QHBoxLayout();
       layout->addWidget(label);

       auto widget = new QWidget();
       widget->setLayout(layout);
       setCentralWidget(widget);

       init();
       initOutFile();
       collectFrame();
    }

    MainWindow::~MainWindow()
    {
       avformat_close_input(&inputFormatContext);
       avformat_free_context(inputFormatContext);

       QThreadPool::globalInstance()->waitForDone();
    }

    void MainWindow::init()
    {
       av_register_all();
       avcodec_register_all();
       avdevice_register_all();
       avformat_network_init();

       auto screen = QGuiApplication::screens()[0];
       QRect geometry = screen->geometry();

       inputFormatContext = avformat_alloc_context();

       options = NULL;
       av_dict_set(&options, "framerate", "30", NULL);
       av_dict_set(&options, "offset_x", QString::number(geometry.x()).toLatin1().data(), NULL);
       av_dict_set(&options, "offset_y", QString::number(geometry.y()).toLatin1().data(), NULL);
       av_dict_set(&options, "video_size", QString(QString::number(geometry.width()) + "x" + QString::number(geometry.height())).toLatin1().data(), NULL);
       av_dict_set(&options, "show_region", "1", NULL);

       AVInputFormat* inputFormat = av_find_input_format("gdigrab");
       avformat_open_input(&inputFormatContext, "desktop", inputFormat, &options);

       int videoStreamIndex = av_find_best_stream(inputFormatContext, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);

       inputCodecContext = inputFormatContext->streams[videoStreamIndex]->codec;
       inputCodecContext->width = geometry.width();
       inputCodecContext->height = geometry.height();
       inputCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;

       inputCodec = avcodec_find_decoder(inputCodecContext->codec_id);
       avcodec_open2(inputCodecContext, inputCodec, NULL);
    }

    void MainWindow::initOutFile()
    {
       const char* filename = "C:/Temp/output.mp4";

       avformat_alloc_output_context2(&outFormatContext, NULL, NULL, filename);

       outCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);

       videoStream = avformat_new_stream(outFormatContext, outCodec);
       videoStream->time_base = {1, 30};

       outCodecContext = videoStream->codec;
       outCodecContext->codec_id = AV_CODEC_ID_MPEG4;
       outCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
       outCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
       outCodecContext->bit_rate = 400000;
       outCodecContext->width = inputCodecContext->width;
       outCodecContext->height = inputCodecContext->height;
       outCodecContext->gop_size = 3;
       outCodecContext->max_b_frames = 2;
       outCodecContext->time_base = videoStream->time_base;

       if (outFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
           outCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

       avcodec_open2(outCodecContext, outCodec, NULL);

       if (!(outFormatContext->oformat->flags & AVFMT_NOFILE))
           avio_open2(&outFormatContext->pb, filename, AVIO_FLAG_WRITE, NULL, NULL);

       swsContext = sws_getContext(inputCodecContext->width,
                                   inputCodecContext->height,
                                   inputCodecContext->pix_fmt,
                                   outCodecContext->width,
                                   outCodecContext->height,
                                   outCodecContext->pix_fmt,
                                   SWS_BICUBIC, NULL, NULL, NULL);

       avformat_write_header(outFormatContext, &options);
    }

    void MainWindow::collectFrame()
    {
       AVFrame* frame = av_frame_alloc();
       frame->data[0] = NULL;
       frame->width = inputCodecContext->width;
       frame->height = inputCodecContext->height;
       frame->format = inputCodecContext->pix_fmt;

       av_image_alloc(frame->data, frame->linesize, inputCodecContext->width, inputCodecContext->height, (AVPixelFormat)frame->format, 32);

       AVFrame* outFrame = av_frame_alloc();
       outFrame->data[0] = NULL;
       outFrame->width = outCodecContext->width;
       outFrame->height = outCodecContext->height;
       outFrame->format = outCodecContext->pix_fmt;

       av_image_alloc(outFrame->data, outFrame->linesize, outCodecContext->width, outCodecContext->height, (AVPixelFormat)outFrame->format, 32);

       int bufferSize = av_image_get_buffer_size(outCodecContext->pix_fmt,
                                                 outCodecContext->width,
                                                 outCodecContext->height,
                                                 24);

       uint8_t* outBuffer = (uint8_t*)av_malloc(bufferSize);

       avpicture_fill((AVPicture*)outFrame, outBuffer,
                      AV_PIX_FMT_YUV420P,
                      outCodecContext->width, outCodecContext->height);

       int frameCount = 30;
       int count = 0;

       AVPacket* packet = (AVPacket*)av_malloc(sizeof(AVPacket));
       av_init_packet(packet);

       while(av_read_frame(inputFormatContext, packet) >= 0)
       {
           if(packet->stream_index == videoStream->index)
           {
               int frameFinished = 0;
               avcodec_decode_video2(inputCodecContext, frame, &frameFinished, packet);

               if(frameFinished)
               {
                   if(++count > frameCount)
                   {
                       qDebug() << "FINISHED!";
                       break;
                   }

                   sws_scale(swsContext, frame->data, frame->linesize, 0, inputCodecContext->height, outFrame->data, outFrame->linesize);

                   AVPacket outPacket;
                   av_init_packet(&outPacket);
                   outPacket.data = NULL;
                   outPacket.size = 0;

                   int got_picture = 0;
                   avcodec_encode_video2(outCodecContext, &outPacket, outFrame, &got_picture);

                   if(got_picture)
                   {
                       if(outPacket.pts != AV_NOPTS_VALUE) outPacket.pts = av_rescale_q(outPacket.pts, videoStream->codec->time_base, videoStream->time_base);
                       if(outPacket.dts != AV_NOPTS_VALUE) outPacket.dts = av_rescale_q(outPacket.dts, videoStream->codec->time_base, videoStream->time_base);

                       av_write_frame(outFormatContext, &outPacket);
                   }

                   av_packet_unref(&outPacket);
               }
           }
       }

       av_write_trailer(outFormatContext);

       av_free(outBuffer);
    }
  • Pygame: Frame ghosting?

    31 December 2013, by Sam Tubb

    I am working on an animation environment in Python using pygame. The user draws each frame, and then the animation is saved as an .avi movie using ffmpeg. I would like to implement a feature but am not sure how: frame ghosting, i.e. displaying the previous frame while you draw the current one.

    I tried creating a surface called ghost that copies the current frame when the next-frame key is pressed, then drawing it with an alpha level of 10, but this didn’t work out correctly.
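
    For what it’s worth, the ghosting effect is plain alpha compositing: the previous frame, faded towards the background, shown underneath the current strokes. In pygame this is usually done with Surface.set_alpha on the ghost copy plus set_colorkey on the drawing surface. The arithmetic behind the fade can be sketched without pygame (the fade helper below is hypothetical, not part of the original program):

```python
def fade(pixel, background, alpha=0.1):
    """Blend one pixel of the previous frame towards the background.

    This is the weighted average pygame computes when a surface is
    blitted with set_alpha(int(alpha * 255)): each colour channel is
    alpha parts old pixel, (1 - alpha) parts whatever lies underneath.
    """
    return tuple(round(alpha * p + (1 - alpha) * b)
                 for p, b in zip(pixel, background))

# A black stroke from the previous frame over the grey (200, 200, 200)
# background becomes a light-grey "ghost":
print(fade((0, 0, 0), (200, 200, 200)))  # (180, 180, 180)
```

    In the event loop this would translate to keeping a copy of the last saved frame, blitting it first with a low set_alpha, and then blitting the current draw surface with set_colorkey(bcol) so only the strokes cover the ghost. Treat that as a sketch under those assumptions, not something tested against this exact program.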

    I am not sure what to do; here is the source code for anyone who thinks they have an idea:

    #Anim8

    import pygame,subprocess,shutil
    from os import makedirs
    from pygame.locals import *
    from random import randrange
    pygame.init()
    screen=pygame.display.set_mode((740,580))
    draw=pygame.Surface((740,540))
    draw.fill((200,200,200))
    bcol=(200,200,200)
    gui=pygame.Surface((740,40))
    gui.fill((50,50,50))
    size=2
    color=(0,0,0)
    screen.fill((200,200,200))
    prevcol=0
    newcol=0
    f=0
    msg=''
    framerate=60
    lineStart=(0,0)  # start of the current stroke; must exist before the first MOUSEMOTION event uses it
    try:
       makedirs('anim')
    except:
       pass
    def DrawColors(x,y):
       pygame.draw.rect(gui, (255,0,0), (x+3,y+3,15,15),0)
       pygame.draw.rect(gui, (0,0,0), (x+3,y+21,15,15),0)
       pygame.draw.rect(gui, (0,255,0), (x+21,y+3,15,15),0)
       pygame.draw.rect(gui, (200,200,200), (x+21,y+21,15,15),0)
       pygame.draw.rect(gui, (0,0,255), (x+39,y+3,15,15),0)
    while True:
       pygame.display.set_caption('Anim8 - Sam Tubb - '+'Frame: '+str(f)+' '+str(msg))
       mse=pygame.mouse.get_pos()
       screen.blit(gui, (0,0))
       DrawColors(0,0)
       screen.blit(draw,(0,40))
       key=pygame.key.get_pressed()
       if key[K_1]:
           framerate=10
           msg='Frame Rate set to 10'
       if key[K_2]:
           framerate=20
           msg='Frame Rate set to 20'
       if key[K_3]:
           framerate=30
           msg='Frame Rate set to 30'
       if key[K_4]:
           framerate=40
           msg='Frame Rate set to 40'
       if key[K_5]:
           framerate=50
           msg='Frame Rate set to 50'
       if key[K_6]:
           framerate=60
           msg='Frame Rate set to 60'
       if key[K_7]:
           framerate=70
           msg='Frame Rate set to 70'
       if key[K_8]:
           framerate=80
           msg='Frame Rate set to 80'
       if key[K_9]:
           framerate=90
           msg='Frame Rate set to 90'
       if key[K_0]:
           framerate=100
           msg='Frame Rate set to 100'

       if key[K_a]:
           pygame.image.save(draw, 'anim/frame'+str(f)+'.png')
           f+=1
       for e in pygame.event.get():
           if e.type==QUIT:
               shutil.rmtree('anim')
               exit()
           if e.type==KEYDOWN:
               if e.key==K_s:
                   msg='Added Frame!'
                   pygame.image.save(draw, 'anim/frame'+str(f)+'.png')
                   f+=1
               if e.key==K_c:
                   draw.fill(bcol)
               if e.key==K_r:
                   name='anim'+str(randrange(0,999))+str(randrange(0,999))+'.avi'
                   msg='Rendering: '+name
                   pygame.display.set_caption('Anim8 - Sam Tubb - '+'Frame: '+str(f)+' '+str(msg))
                   subprocess.call('ffmpeg -f image2 -s 640x480 -i anim/frame%01d.png -r '+str(framerate)+' '+name,shell=True)
                   msg='Done!'
               if e.key==K_p:
                   subprocess.call('ffplay '+name,shell=True)
           if e.type==MOUSEBUTTONDOWN:
               if e.button==1:
                   try:
                       prevcol=color
                       newcol=gui.get_at(mse)
                       if newcol==(50,50,50):
                           newcol=prevcol
                       color=newcol
                   except:
                       pass
               if e.button==3:
                   try:
                       prevcol=bcol
                       newcol=gui.get_at(mse)
                       if newcol==(50,50,50):
                           newcol=prevcol
                       draw.fill(newcol)
                       bcol=newcol
                   except:
                       pass
               if e.button==4:
                   size+=1
                   if size>7:
                       size=7
               if e.button==5:
                   size-=1
                   if size==0:
                       size=1
           if e.type == pygame.MOUSEMOTION:
               lineEnd = pygame.mouse.get_pos()
               lineEnd = (lineEnd[0],lineEnd[1]-40)
               if pygame.mouse.get_pressed() == (1, 0, 0):
                   pygame.draw.line(draw, color, lineStart, lineEnd, size)
               lineStart = lineEnd

       pygame.display.flip()
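
    One side note on the render step: building the ffmpeg command as a single string with shell=True works, but passing an argument list avoids shell quoting problems if a filename ever contains spaces or special characters. A small sketch of the same command the K_r handler runs (helper name ffmpeg_args is mine, not from the original code):

```python
def ffmpeg_args(framerate, name):
    # The same command the K_r handler builds as a string, expressed
    # as an argument list for subprocess.call(...) without shell=True:
    # no shell parsing, no quoting bugs.
    return ["ffmpeg", "-f", "image2", "-s", "640x480",
            "-i", "anim/frame%01d.png",
            "-r", str(framerate), name]

print(ffmpeg_args(60, "anim123456.avi"))
```

    It would then be called as subprocess.call(ffmpeg_args(framerate, name)).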

    Oh, and on another note, in case anyone was curious, here is what the output looks like. I made a little New Year’s animation:

    Animation Test

  • Using ffmpeg without hardware acceleration (C++)

    13 February 2018, by MadMarky

    I’m working on an application (C++/Linux) that uses the ffmpeg 3.4 libraries to do video encoding. Since version 3.3, hardware acceleration is enabled by default if the platform supports it. The graphics card in my dev system supports hardware acceleration, but the tool also has to run on older systems that do not.
    How can I configure ffmpeg to disable hardware acceleration for video encoding? There is a ton of info about enabling it, but I just can’t find how to disable it.

    P.S.
    There is already a similar question: How to turn off ffmpeg hardware acceleration, but it’s a year old and unfortunately still unanswered.