Advanced search

Media (5)

Word: - Tags -/open film making

Other articles (70)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (10387)

  • Saving the openGL context as a video output

    2 June 2023, by psiyum

    I am currently trying to save an animation made in OpenGL to a video file. I have tried using OpenCV's VideoWriter, but to no avail. I have successfully been able to generate a snapshot and save it as a BMP using the SDL library. If I save all the snapshots and then generate the video with ffmpeg, that amounts to collecting 4 GB worth of images. Not practical.
    How can I write video frames directly during rendering?
    Here is the code I use to take a snapshot when I need one:

    void snapshot() {
        SDL_Surface* snap = SDL_CreateRGBSurface(SDL_SWSURFACE, WIDTH, HEIGHT, 24, 0x000000FF, 0x0000FF00, 0x00FF0000, 0);
        char* pixels = new char[3 * WIDTH * HEIGHT];
        glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);

        // OpenGL returns rows bottom-up; copy them flipped so the image is upright
        for (int i = 0; i < HEIGHT; i++)
            memcpy((char*)snap->pixels + snap->pitch * i, pixels + 3 * WIDTH * (HEIGHT - i - 1), WIDTH * 3);

        delete[] pixels;
        SDL_SaveBMP(snap, "snapshot.bmp");
        SDL_FreeSurface(snap);
    }

    I need the video output. I have discovered that ffmpeg can be used to create videos from C++ code, but I have not been able to figure out the process. Please help!
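    One common approach (a sketch, not from the original post) is to start ffmpeg as a subprocess and write each rendered frame's raw bytes to its stdin, avoiding per-frame image files entirely. The helper name `build_ffmpeg_args` below is hypothetical; the flags are standard ffmpeg options for raw RGB input:

    ```python
    import subprocess

    def build_ffmpeg_args(width, height, fps, out_path):
        # Hypothetical helper: assemble an ffmpeg command that reads raw
        # RGB24 frames from stdin and encodes them with libx264.
        return [
            "ffmpeg", "-y",
            "-f", "rawvideo", "-pix_fmt", "rgb24",
            "-s", f"{width}x{height}", "-r", str(fps),
            "-i", "-",                      # read frames from stdin
            "-c:v", "libx264", "-pix_fmt", "yuv420p",
            out_path,
        ]

    # Usage sketch (assumes ffmpeg is installed and on PATH):
    # proc = subprocess.Popen(build_ffmpeg_args(640, 480, 30, "out.mp4"),
    #                         stdin=subprocess.PIPE)
    # for each rendered frame: proc.stdin.write(frame_bytes)  # 640*480*3 bytes
    # proc.stdin.close(); proc.wait()
    ```

    Each frame written to the pipe must be exactly width * height * 3 bytes in the declared pixel format, top row first.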


    EDIT: I have tried using OpenCV's CvVideoWriter class, but the program crashes with a segmentation fault the moment it is declared. Compilation shows no errors, of course. Any suggestions?


    SOLUTION FOR PYTHON USERS (requires Python 2.7, python-imaging, python-opengl, python-opencv, and codecs for the format you want to write to; I am on Ubuntu 14.04 64-bit):


    def snap():
        pixels = []
        screenshot = glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE)
        snapshot = Image.frombuffer("RGBA", (W, H), screenshot, "raw", "RGBA", 0, 0)
        snapshot.save(os.path.dirname(videoPath) + "/temp.jpg")
        load = cv2.cv.LoadImage(os.path.dirname(videoPath) + "/temp.jpg")
        cv2.cv.WriteFrame(videoWriter, load)


    Here W and H are the window dimensions (width, height). What is happening is that I am using PIL to convert the raw pixels read by the glReadPixels command into a JPEG image, loading that JPEG into an OpenCV image, and writing it to the video writer. I was having certain issues with feeding the PIL image directly into the video writer (which would save millions of clock cycles of I/O), but right now I am not working on that. Image is a PIL module; cv2 is a python-opencv module.
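    In principle the temp-JPEG round trip can be skipped: the raw bytes from glReadPixels only need their rows reversed (OpenGL delivers them bottom-up) before being handed to a video writer. A minimal pure-Python sketch of that row flip, assuming a packed RGB buffer; the function name `flip_rows` is illustrative:

    ```python
    def flip_rows(raw, width, height, channels=3):
        """Return the buffer with its rows reversed (bottom-up -> top-down)."""
        stride = width * channels
        rows = [raw[i * stride:(i + 1) * stride] for i in range(height)]
        return b"".join(reversed(rows))

    # Example: a 2x2 RGB image stored bottom-up -- blue row first, red row second
    buf = bytes([0, 0, 255] * 2 + [255, 0, 0] * 2)
    flipped = flip_rows(buf, 2, 2)
    # after flipping, the red (top) row comes first
    ```

    This is the same transformation the C++ snapshot() loop performs with its row-by-row memcpy.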



  • How to reduce bitrate without changing video quality in FFMPEG

    28 May 2021, by Muthu GM

    I'm using the FFMPEG C library, with a modified muxing.c example, to encode video. The video quality degrades frame by frame when I control the bitrate (e.g. 1080 x 720, bitrate 680k). But when I use the FFMPEG command-line tool to encode the same images at the same 680k bitrate, the image quality does not change.

    What is the reason that, for the same images and bitrate, the encoded video quality degrades when I encode with the C API, while the quality does not change with the command-line tool?

    I use:

    Command-line arg:

      • ffmpeg -framerate 5 -i image%d.jpg -c:v libx264 -b:v 64k -pix_fmt yuv420p out.mp4

    Muxing.c (modified) codec settings:

      • fps = 5;
      • CODEC_ID = H264 (libx264);
      • Pixel_fmt = yuv420;
      • Image decoder = MJPEG;
      • bitrate = 64000;

    The video sizes are the same, but in muxing.c the quality degrades frame by frame, while at the same bitrate the command-line tool's video quality is perfect.

    Please suggest how I can reduce the bitrate without changing quality using the FFMPEG C API.
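    As background (not from the post): a quick bits-per-pixel calculation shows how aggressive 64 kb/s is for 1080 x 720 at 5 fps, which is one reason per-frame quality can visibly degrade regardless of which encoder front end is used. The numbers below are illustrative:

    ```python
    def bits_per_pixel(bitrate_bps, width, height, fps):
        # Average bits available per pixel per frame at the target bitrate.
        return bitrate_bps / (width * height * fps)

    bpp = bits_per_pixel(64_000, 1080, 720, 5)
    # roughly 0.016 bits per pixel -- well below the ~0.1 bpp sometimes
    # cited as a rule of thumb for watchable H.264 video
    ```

    At such a low budget the rate-control defaults matter a great deal, which is why a C program that only sets bit_rate can behave differently from the command-line tool and its preset defaults.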