
Media (5)
-
ED-ME-5 1-DVD
11 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
-
Valkaama DVD Cover Outside
4 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011
Updated: October 2011
Language: English
Type: Image
Other articles (70)
-
Authorizations overridden by plugins
27 April 2010, by MediaSPIP core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running on MediaSPIP.
You can of course add yours using the form at the bottom of the page.
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match the chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (10387)
-
Saving the openGL context as a video output
2 June 2023, by psiyum
I am currently trying to save an animation made in openGL to a video file. I have tried using openCV's videowriter, but to no avail. I have successfully been able to generate a snapshot and save it as a bmp using the SDL library, but if I save every snapshot and then generate the video with ffmpeg, that amounts to collecting around 4 GB worth of images. Not practical.
How can I write video frames directly during rendering?
Here is the code I use to take a snapshot when I need one:


void snapshot(){
    // Create a 24-bit RGB surface matching the OpenGL viewport size
    SDL_Surface* snap = SDL_CreateRGBSurface(SDL_SWSURFACE, WIDTH, HEIGHT, 24, 0x000000FF, 0x0000FF00, 0x00FF0000, 0);
    char * pixels = new char[3 * WIDTH * HEIGHT];
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    // OpenGL returns rows bottom-up, so copy them into the surface top-down
    for (int i = 0; i < HEIGHT; i++)
        memcpy(((char *) snap->pixels) + snap->pitch * i, pixels + 3 * WIDTH * (HEIGHT - i - 1), WIDTH * 3);

    delete [] pixels;
    SDL_SaveBMP(snap, "snapshot.bmp");
    SDL_FreeSurface(snap);
}



I need the video output. I have discovered that ffmpeg can be used to create videos from C++ code, but I have not been able to figure out the process. Please help!
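
One commonly suggested approach (not part of the original question) is to skip the intermediate image files entirely and pipe raw frames straight into an ffmpeg child process started with popen (POSIX only), letting ffmpeg encode as the frames arrive. A minimal sketch, assuming WIDTH, HEIGHT, a frame rate and a current GL context; the output file name and encoder settings here are illustrative:

#include <cstdio>
#include <vector>
#include <GL/gl.h>

// Sketch only: stream raw RGB frames into an ffmpeg child process.
// "-f rawvideo ... -i -" makes ffmpeg read frames from stdin, and
// "-vf vflip" undoes the bottom-up row order produced by glReadPixels.
FILE* open_encoder(int width, int height, int fps)
{
    char cmd[512];
    std::snprintf(cmd, sizeof(cmd),
                  "ffmpeg -y -f rawvideo -pix_fmt rgb24 -s %dx%d -r %d -i - "
                  "-vf vflip -c:v libx264 -pix_fmt yuv420p output.mp4",
                  width, height, fps);
    return popen(cmd, "w");                        // write end of the pipe
}

void write_frame(FILE* encoder, int width, int height)
{
    // Assumes width*3 is 4-byte aligned, or glPixelStorei(GL_PACK_ALIGNMENT, 1) was set
    std::vector<unsigned char> pixels(3 * width * height);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
    std::fwrite(pixels.data(), 1, pixels.size(), encoder);   // one raw frame
}

// Call write_frame() once per rendered frame, then pclose(encoder) when done.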


EDIT: I have tried using the openCV CvVideoWriter class, but the program crashes with a "segmentation fault" the moment it is declared. Compilation shows no errors, of course. Any suggestions?

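For reference (again, not from the original post), the C++ cv::VideoWriter class can be used instead of the legacy C CvVideoWriter. A hedged sketch against the OpenCV 3+ API, assuming WIDTH and HEIGHT; the file name, MJPG codec and frame rate are illustrative choices:

#include <opencv2/opencv.hpp>
#include <GL/gl.h>
#include <vector>

// Sketch only: wrap the pixels returned by glReadPixels in a cv::Mat and
// feed them to cv::VideoWriter instead of the deprecated CvVideoWriter.
cv::VideoWriter writer("capture.avi",
                       cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                       25.0, cv::Size(WIDTH, HEIGHT));

void record_frame()
{
    std::vector<unsigned char> pixels(3 * WIDTH * HEIGHT);
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    cv::Mat frame(HEIGHT, WIDTH, CV_8UC3, pixels.data());
    cv::flip(frame, frame, 0);                       // OpenGL rows are bottom-up
    cv::cvtColor(frame, frame, cv::COLOR_RGB2BGR);   // OpenCV expects BGR
    writer.write(frame);
}
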

SOLUTION FOR PYTHON USERS (requires Python 2.7, python-imaging, python-opengl, python-opencv, and codecs for the format you want to write to; I am on Ubuntu 14.04 64-bit):


import os
import cv2
from PIL import Image
from OpenGL.GL import glReadPixels, GL_RGBA, GL_UNSIGNED_BYTE

# W, H, videoPath and videoWriter are defined elsewhere in the program.
def snap():
    screenshot = glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE)
    snapshot = Image.frombuffer("RGBA", (W, H), screenshot, "raw", "RGBA", 0, 0)
    snapshot.save(os.path.dirname(videoPath) + "/temp.jpg")
    load = cv2.cv.LoadImage(os.path.dirname(videoPath) + "/temp.jpg")
    cv2.cv.WriteFrame(videoWriter, load)




Here W and H are the window dimensions (width, height). What is happening is that I am using PIL to convert the raw pixels read by the glReadPixels command into a JPEG image, then loading that JPEG into an openCV image and writing it to the videowriter. I was having certain issues passing the PIL image directly to the videowriter (which would save millions of clock cycles of I/O), but right now I am not working on that. Image is a PIL module; cv2 is a python-opencv module.

-
How to reduce bitrate without changing video quality in FFMPEG
28 May 2021, by Muthu GM
I'm using the FFMPEG C library, with a modified muxing.c example to encode video. The video quality degrades frame by frame when I control the bitrate (for example 1080 x 720 at a bitrate of 680k). But when I encode the same images at the same 680k bitrate with the FFMPEG command line tool, the image quality does not change.


Why does the quality degrade when I encode the same images at the same bitrate with the C API, and why does the quality not change with the command line tool?


I use:


Command line:


- 

ffmpeg -framerate 5 -i image%d.jpg -c:v libx264 -b:v 64k -pix_fmt yuv420p out.mp4




muxing.c (modified) codec settings:


- 

fps = 5;
CODEC_ID = H264 (libx264);
Pixel_fmt = yuv420;
Image decoder = MJPEG;
bitrate = 64000;


The video sizes are the same, but the quality degrades frame by frame with muxing.c, while the command-line video at the same bitrate is perfect.

Please explain how I can reduce the bitrate without losing quality using the FFMPEG C API.
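
What follows is not part of the original post, only a sketch of one commonly suggested direction: with libx264 through the FFmpeg C API, rate control can be configured on the AVCodecContext before avcodec_open2(), either as constant quality (CRF, which is what the command line falls back to when quality matters more than a hard bitrate) or as a capped bitrate with a rate-control buffer. The function name and all values below are illustrative assumptions, not the original code:

#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

/* Sketch only: configure an already-allocated libx264 AVCodecContext (c),
 * as created in muxing.c, before calling avcodec_open2(). */
static void configure_x264(AVCodecContext *c)
{
    c->width  = 1080;
    c->height = 720;
    c->time_base.num = 1;             /* 5 fps; frame pts must use this base */
    c->time_base.den = 5;
    c->gop_size = 12;
    c->pix_fmt  = AV_PIX_FMT_YUV420P;

    /* Constant-quality mode: quality stays stable and the bitrate varies.
     * crf 23 is the libx264 default; raise it for smaller files. */
    av_opt_set(c->priv_data, "preset", "medium", 0);
    av_opt_set(c->priv_data, "crf", "23", 0);

    /* Alternatively, a capped average bitrate like "-b:v 64k": give the
     * rate controller a buffer so it can spread bits across frames.
     *   c->bit_rate       = 64000;
     *   c->rc_max_rate    = 64000;
     *   c->rc_buffer_size = 128000;
     */
}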