Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (44)

  • Possible deployments

    31 January 2010, by

    Two types of deployment are possible, depending on two factors: the installation method chosen (standalone or as a farm), and the expected number of daily encodings and amount of traffic.
    Video encoding is a heavy process that consumes a great deal of system resources (CPU and RAM), and all of this must be taken into account. This system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

On other sites (2801)

  • FFMPEG and DirectX Capture in C++

    13 December 2016, by tankyx

    I have a system that allows me to capture a window and save it as an mp4 using ffmpeg. I use gdigrab to capture the frames, but it is fairly slow (60 ms per av_read_frame call).

    I know I can capture a game using the DirectX API, but I don’t know how to convert the resulting BMP to an AVFrame.

    The following is the DirectX code I use to capture the frame:

    extern void* pBits;
    extern IDirect3DDevice9* g_pd3dDevice;
    IDirect3DSurface9* pSurface;

    // Create a system-memory surface and copy the front buffer into it
    g_pd3dDevice->CreateOffscreenPlainSurface(ScreenWidth, ScreenHeight,
                                              D3DFMT_A8R8G8B8, D3DPOOL_SCRATCH,
                                              &pSurface, NULL);
    g_pd3dDevice->GetFrontBufferData(0, pSurface);

    // Lock the surface and copy it row by row into pBits
    // (the surface pitch may be larger than the visible row size)
    D3DLOCKED_RECT lockedRect;
    pSurface->LockRect(&lockedRect, NULL,
                       D3DLOCK_NO_DIRTY_UPDATE |
                       D3DLOCK_NOSYSLOCK | D3DLOCK_READONLY);
    for (int i = 0; i < ScreenHeight; i++)
    {
        memcpy((BYTE*)pBits + i * ScreenWidth * BITSPERPIXEL / 8,
               (BYTE*)lockedRect.pBits + i * lockedRect.Pitch,
               ScreenWidth * BITSPERPIXEL / 8);
    }
    pSurface->UnlockRect();
    pSurface->Release();

    And here is my read loop:

    while (1) {
        if (av_read_frame(pFormatCtx, &packet) < 0 || exit)
            break;

        if (packet.stream_index == videoindex) {
            // Decode the video frame, rescaling packet timestamps to the codec time base
            av_packet_rescale_ts(&packet, { 1, std::stoi(pParser->GetVal("video-fps")) }, pCodecCtx->time_base);
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

            if (frameFinished) {
                pFrame->pts = i;
                i++;
                // Convert to the encoder's pixel format and hand the frame to the encoder
                sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
                pFrameRGB->pts = pFrame->pts;
                enc.encodeFrame(pFrameRGB);
            }
        }

        // Free the packet that was allocated by av_read_frame
        av_free_packet(&packet);
    }

    How can I create an AVFrame from the BMP data I have, without using av_read_frame?
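
    One possible approach is to build the AVFrame yourself from the captured buffer and skip av_read_frame entirely. The sketch below (error checking omitted) assumes the front-buffer copy is BGRA, which is how D3DFMT_A8R8G8B8 is laid out in memory, that the buffer in pBits is tightly packed (width * 4 bytes per row), and that the encoder wants YUV420P; the helper name frameFromCapture is made up for illustration.

    extern "C" {
    #include <libavutil/frame.h>
    #include <libswscale/swscale.h>
    }

    // Hypothetical helper: wraps a raw BGRA capture buffer in a freshly
    // allocated AVFrame converted to YUV420P for the encoder.
    static AVFrame* frameFromCapture(const uint8_t* bits, int width, int height)
    {
        AVFrame* frame = av_frame_alloc();
        frame->format = AV_PIX_FMT_YUV420P;
        frame->width  = width;
        frame->height = height;
        av_frame_get_buffer(frame, 32);              // allocate the frame's data planes

        // Source description: one packed BGRA plane, width * 4 bytes per row
        const uint8_t* srcData[1]   = { bits };
        int            srcStride[1] = { width * 4 };

        SwsContext* sws = sws_getContext(width, height, AV_PIX_FMT_BGRA,
                                         width, height, AV_PIX_FMT_YUV420P,
                                         SWS_BILINEAR, NULL, NULL, NULL);
        sws_scale(sws, srcData, srcStride, 0, height, frame->data, frame->linesize);
        sws_freeContext(sws);

        return frame;   // caller sets pts and frees it with av_frame_free()
    }

    Called as frameFromCapture((const uint8_t*)pBits, ScreenWidth, ScreenHeight), the returned frame can take the place of pFrameRGB in the encode call above (set frame->pts = i++ before passing it to enc.encodeFrame). For real-time use, the SwsContext should be created once and reused rather than rebuilt for every frame.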

  • How to route FFMPEG screen capture output to Python for OpenCV processing?

    16 December 2016, by Bitani

    I am trying to capture the screen of a Mac, specifically game windows, and process them in real-time using OpenCV. I currently have it working with pyscreenshot and a conversion to a numpy array, but the max framerate of this method, on any system, appears to be around 10 FPS.

    Obviously, with games it would be very beneficial to be able to process 30+ FPS, and so I found that FFMPEG can record the screen using this command:

    ffmpeg -y -f avfoundation -capture_cursor 1 -i "1" -pix_fmt yuv420p -vf scale=1024:-1 -r 30 out.mp4

    That will write the screen to a file, out.mp4, with a width of 1024 and a frame rate of 30 FPS. Works perfectly! But I have searched and searched for a way to route FFMPEG's output to Python and... no luck.

    If anybody knows of a way to do this I would greatly appreciate it, as I have been trying to solve this for quite a while!

    Thanks, and happy holidays!
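
    A common pattern, sketched below, is to have ffmpeg write raw frames to stdout instead of an mp4 file and read that pipe from a Python subprocess. The avfoundation input and scaling options are taken from the command above; the rawvideo output side is the assumption here, and the height of 640 is just a placeholder for whatever the scale filter actually produces.

    ffmpeg -f avfoundation -capture_cursor 1 -i "1" -vf scale=1024:640 -r 30 -f rawvideo -pix_fmt bgr24 pipe:1

    With -pix_fmt bgr24 every frame is exactly width * height * 3 bytes, so the Python side can launch this command with subprocess.Popen(..., stdout=subprocess.PIPE), read that many bytes per frame, and reshape the buffer into a (height, width, 3) numpy array that OpenCV accepts directly. Using an explicit scale height (rather than -1) keeps that byte count predictable.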

  • FFmpeg stream dynamic png

    13 April 2017, by Azarus

    I would like to know if it is possible to stream a PNG, or any other kind of image, using ffmpeg. I would like to generate the image continuously with nodejs, updating it every 10 seconds, and use it to display game stats in a corner, mixed with some background music or pre-recorded commentary. Additionally, I would like to mix in a video, with the image acting as an overlay.

    I am also not sure whether this is possible with a transparent PNG image.

    I couldn't get my head around doing the mixing with ffmpeg, and it looks very complicated, so I would like some help with it.

    I have video files stored in a folder that I would like to stream continuously, mixing different music and an image over them. I would like to have it all running continuously without stopping the stream.

    Is this possible with the ffmpeg CLI on Linux, or can I not avoid using a desktop Windows PC for such a thing?
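
    The basic mixing itself can be expressed with ffmpeg's -filter_complex. The sketch below (file names and the rtmp URL are placeholders) loops a PNG over a video as an overlay, takes the audio from a music file, and sends the result to an RTMP endpoint; the overlay filter respects the PNG's alpha channel, so a transparent image works as expected.

    ffmpeg -re -i input.mp4 -loop 1 -i overlay.png -i music.mp3 \
      -filter_complex "[0:v][1:v]overlay=W-w-10:10[v]" \
      -map "[v]" -map 2:a -c:v libx264 -c:a aac -shortest -f flv rtmp://example.com/live/stream

    Getting ffmpeg to pick up changes to the PNG while the stream is running, and chaining several videos and music tracks without restarting, are the harder parts and are usually handled outside a single command (for example by feeding ffmpeg from a playlist or regenerating the overlay input from another process), so treat this only as the overlay-plus-audio building block.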