
Other articles (74)
-
Customize by adding your logo, banner or background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
On other sites (8548)
-
ffmpeg video to opengl texture
23 April 2017, by Infiniti Fizz
I'm trying to render frames grabbed and converted from a video using ffmpeg to an OpenGL texture to be put on a quad. I've pretty much exhausted Google and not found an answer; well, I've found answers, but none of them seem to have worked.
Basically, I am using avcodec_decode_video2() to decode the frame, then sws_scale() to convert the frame to RGB, and then glTexSubImage2D() to create an OpenGL texture from it, but I can't seem to get anything to work.
I've made sure the "destination" AVFrame has power-of-2 dimensions in the SWS context setup. Here is my code:
SwsContext *img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                                             pCodecCtx->pix_fmt,
                                             512, 256, PIX_FMT_RGB24,
                                             SWS_BICUBIC, NULL, NULL, NULL);

//While there are still frames to read
while (av_read_frame(pFormatCtx, &packet) >= 0) {
    glClear(GL_COLOR_BUFFER_BIT);

    //If the packet is from the video stream
    if (packet.stream_index == videoStream) {
        //Decode the video
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

        //If we got a frame, convert it and put it into the RGB buffer
        if (frameFinished) {
            printf("frame finished: %i\n", number);
            sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0,
                      pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);

            glBindTexture(GL_TEXTURE_2D, texture);
            //gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pCodecCtx->width, pCodecCtx->height, GL_RGB, GL_UNSIGNED_INT, pFrameRGB->data);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 256, GL_RGB,
                            GL_UNSIGNED_BYTE, pFrameRGB->data[0]);

            SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, number);
            number++;
        }
    }

    glColor3f(1, 1, 1);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 1); glVertex3f(0, 0, 0);
        glTexCoord2f(1, 1); glVertex3f(pCodecCtx->width, 0, 0);
        glTexCoord2f(1, 0); glVertex3f(pCodecCtx->width, pCodecCtx->height, 0);
        glTexCoord2f(0, 0); glVertex3f(0, pCodecCtx->height, 0);
    glEnd();
} //end of read/render loop (buffer swap etc. not shown in the post)

As you can see in that code, I am also saving the frames to .ppm files just to make sure they are actually rendering, which they are.
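(SaveFrame() itself is not shown in the question; a minimal PPM writer consistent with that call, modeled on the classic ffmpeg tutorial code, might look like the sketch below. The frame%d.ppm filename pattern is an assumption, not taken from the question.)

#include <stdio.h>

/* Hypothetical sketch of SaveFrame(): write one decoded RGB24 frame as a
   binary PPM file. */
void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame) {
    char szFilename[32];
    sprintf(szFilename, "frame%d.ppm", iFrame);   /* assumed filename pattern */

    FILE *pFile = fopen(szFilename, "wb");
    if (pFile == NULL)
        return;

    /* PPM header: binary RGB, width x height, 255 = max channel value */
    fprintf(pFile, "P6\n%d %d\n255\n", width, height);

    /* Copy row by row, honouring the frame's stride (linesize). */
    for (int y = 0; y < height; y++)
        fwrite(pFrame->data[0] + y * pFrame->linesize[0], 1, width * 3, pFile);

    fclose(pFile);
}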
The file being used is a .wmv at 854x480. Could the problem be that I'm just telling it to scale to 512x256?
P.S. I've looked at this Stack Overflow question, but it didn't help.
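(For reference: the usual alternative to stretching 854x480 into 512x256 is to keep the frame at its native size, upload it into the corner of a larger power-of-two texture, and shrink the texture coordinates so only the valid region is sampled. A minimal sketch, assuming the texture was allocated at 1024x512 and sws_scale now targets the native resolution; frameW, texW and friends are illustrative names, not from the question:)

const int frameW = 854, frameH = 480;   /* native video size  */
const int texW   = 1024, texH  = 512;   /* next powers of two */

/* Upload only the frame-sized sub-rectangle into the texture corner. */
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameW, frameH,
                GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);

/* Sample only the used part of the texture (same flipped mapping as in
   the quad above). */
const float s = (float)frameW / texW;   /* 854/1024 */
const float t = (float)frameH / texH;   /* 480/512  */

glBegin(GL_QUADS);
    glTexCoord2f(0, t); glVertex3f(0, 0, 0);
    glTexCoord2f(s, t); glVertex3f(frameW, 0, 0);
    glTexCoord2f(s, 0); glVertex3f(frameW, frameH, 0);
    glTexCoord2f(0, 0); glVertex3f(0, frameH, 0);
glEnd();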
Also, I have glEnable(GL_TEXTURE_2D) as well, and have tested it by just loading in a normal BMP.
EDIT
I'm getting an image on the screen now, but it is a garbled mess; I'm guessing it's something to do with changing things to a power of 2 (in the decode, SwsContext and gluBuild2DMipmaps, as shown in my code). I'm using nearly exactly the same code as shown above, only I've changed glTexSubImage2D to gluBuild2DMipmaps and changed the types to GL_RGBA.
Here is what the frame looks like:
EDIT AGAIN
Just realised I haven't shown the code for how pFrameRGB is set up:
//Allocate a video frame for the 24-bit RGB that we convert to.
AVFrame *pFrameRGB = avcodec_alloc_frame();
if (pFrameRGB == NULL) {
    return -1;
}

//Allocate memory for the raw data we get when converting.
uint8_t *buffer;
int numBytes = avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));

//Associate the frame with our buffer.
avpicture_fill((AVPicture *) pFrameRGB, buffer, PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);

Now that I have changed the PixelFormat in avpicture_get_size to PIX_FMT_RGB24, I've done that in the SwsContext as well and changed the gluBuild2DMipmaps format to GL_RGB, and I get a slightly better image, but it looks like I'm still missing lines and it's still a bit stretched:
Another Edit
After following Macke's advice and passing the actual resolution to OpenGL, I get the frames nearly proper, but still a bit skewed and in black and white; also, it's only getting 6fps now rather than 110fps:
P.S.
I've got a function to save the frames to an image after sws_scale(), and they are coming out fine, in colour and everything, so something in OGL is making it B&W.
LAST EDIT
Working! Okay, I have it working now: basically, I am not padding out the texture to a power of 2, just using the resolution the video is.
I got the texture showing up properly with a lucky guess at the correct glPixelStorei():
glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
Also, if anyone else has the glTexSubImage2D() showing-blank problem like me, you have to fill the texture at least once with glTexImage2D(), so I use it once in the loop and then use glTexSubImage2D() after that.
Thanks Macke and datenwolf for all your help.
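(Pulling the final fixes together: a minimal sketch of the working pattern described above, reusing the names from the earlier snippets and assuming pFrameRGB now holds the frame at its native resolution with linesize[0] == width * 3. Non-power-of-two textures require GL 2.0 or ARB_texture_non_power_of_two.)

int videoW = pCodecCtx->width;    /* e.g. 854 */
int videoH = pCodecCtx->height;   /* e.g. 480 */

/* One-time setup. 854 * 3 = 2562 bytes per row is even but not a
   multiple of 4, so the default GL_UNPACK_ALIGNMENT of 4 breaks the
   rows; 2 works here (1 is the fully general value). */
glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
glBindTexture(GL_TEXTURE_2D, texture);
/* Non-mipmapped filtering, so the texture is complete without mipmaps. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
/* Allocate storage once with glTexImage2D (data may be NULL)... */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, videoW, videoH, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);

/* ...then, for every decoded frame, just overwrite that storage. */
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, videoW, videoH,
                GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);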
-
vaapi_encode: Check config attributes before creating config
18 May 2016, by Mark Thompson
-
Forwarding RTSP streams to client from private networked server via proxy
21 June 2016, by beNerd
I have a setup with two physical machines (remote VPSes):
-
Server One: this has good processing power in terms of hardware, and its IP can't be accessed publicly. It is privately networked to a proxy (Server Two), i.e. it can only be accessed by the proxy server. Runs nodejs/expressjs and ffmpeg/ffserver on Ubuntu.
-
Server Two: a reverse proxy. Publicly accessible. Runs nginx, which pipes requests through to Server One.
Now, I have client apps that need to play RTSP streams configured in the FFserver residing on Server One. Since I can access Server One only via the proxy, I need a mechanism where I can accept RTSP requests on my nodejs API (which receives requests from the nginx proxy via a proxy_pass config block), do some validation (session tokens here), and then, once validated, contact the underlying FFserver asking for the stream. As soon as I receive the stream, I should be able to forward it to the asking client.
Possible? How?
-