
Medias (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (86)
-
Definable image and logo sizes
9 February 2011, by — In many places on the site, logos and images are resized to fit the slots defined by the themes. All of these sizes, which can change from one theme to another, can be defined directly in the theme, sparing the user from having to configure them manually after changing the appearance of their site.
These image sizes are also available in MediaSPIP Core's specific configuration. The maximum size of the site logo in pixels, allowing (...) -
No talk of markets, the cloud, etc.
10 April 2011 — The vocabulary used on this site tries to avoid any reference to the buzzwords that flourish so freely on the web 2.0 and in the companies that make a living from it.
You are therefore invited to banish the use of the terms "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing of creations on the Internet and allows authors to retain optimal autonomy.
No "Gold or Premium contract" is therefore planned, no (...) -
Supporting all media types
13 April 2011, by — Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other websites (7892)
-
FFMPEG to OpenGL Texture
23 April 2014, by Spamdark — I wanted to ask how I can convert an AVFrame to an OpenGL texture. I created a renderer that outputs the audio (audio is working) and the video, but the video is not being displayed. Here is my code:
Texture creation:
glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_2D, _texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Code info: _texture is a GLuint that holds the texture ID.
Function that gets the AVFrame and converts it to an OpenGL texture:
int VideoGL::NextVideoFrame(){
    // Get a packet from the queue
    AVPacket *videopacket = this->DEQUEUE(VIDEO);
    int frameFinished;
    if(videopacket != 0){
        avcodec_decode_video2(_codec_context_video, _std_frame, &frameFinished, videopacket);
        if(frameFinished){
            sws_scale(sws_ctx, _std_frame->data, _std_frame->linesize, 0, _codec_context_video->height, _rgb_frame->data, _rgb_frame->linesize);
            if(_firstrendering){
                glBindTexture(GL_TEXTURE_2D, _texture);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _codec_context_video->width, _codec_context_video->height, 0, GL_RGB, GL_UNSIGNED_BYTE, _rgb_frame->data[0]);
                _firstrendering = false;
            }else{
                glActiveTexture(_texture);
                glBindTexture(GL_TEXTURE_2D, _texture);
                glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _codec_context_video->width, _codec_context_video->height, GL_RGB, GL_UNSIGNED_BYTE, _rgb_frame->data[0]);
            }
            av_free_packet(videopacket);
            return 0;
        }else{
            av_free_packet(videopacket);
            return -1;
        }
    }else{
        return -1;
    }
    return 0;
}
Code information: there is a queue where a thread stores the AVFrames; this function is called frequently to fetch them, and it stops being called once it gets a NULL.
That's actually not working. (I tried looking at some questions on Stack Overflow, and it's still not working.)
Any example, or anyone who can help me correct the errors there? Additional data: I tried changing GL_RGB to GL_RGBA and started playing with the formats, but it crashes when I try GL_RGBA (because the width and height are very big; I tried resizing them anyway). I have also tried changing the sizes to powers of 2; it still doesn't work.
Edit 1:
Thread function:
DWORD WINAPI VideoGL::VidThread(LPVOID myparam){
    VideoGL * instance = (VideoGL*) myparam;
    instance->wave_audio->Start();
    int quantity = 0;
    AVPacket packet;
    while(av_read_frame(instance->_format_context, &packet) >= 0){
        if(packet.stream_index == instance->videoStream){
            instance->ENQUEUE(VIDEO, &packet);
        }
        if(packet.stream_index == instance->audioStream){
            instance->ENQUEUE(AUDIO, &packet);
        }
    }
    instance->ENQUEUE(AUDIO, NULL);
    instance->ENQUEUE(VIDEO, NULL);
    return 0;
}
Thread creation function:
CreateThread(NULL, 0, VidThread, this, NULL, NULL);
Where this refers to the class instance that contains the NextVideoFrame function and the _texture member.
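(The ENQUEUE/DEQUEUE helpers themselves are not shown in the question. A minimal sketch of what such a thread-safe packet queue might look like is given below; the class name, members and locking strategy are illustrative assumptions, not the asker's actual code.)
#include <queue>
#include <mutex>
#include <condition_variable>
extern "C" {
#include <libavcodec/avcodec.h>
}

// Hypothetical thread-safe packet queue: the producer thread pushes copies of
// the packets read by av_read_frame(), the consumer pops them; a NULL entry
// acts as the end-of-stream sentinel described in the question.
class PacketQueue {
public:
    // Producer side: push a copy of the packet, or NULL to signal end of stream.
    void push(AVPacket *src) {
        AVPacket *copy = NULL;
        if (src) {
            copy = new AVPacket(*src);   // copy the packet struct
            av_dup_packet(copy);         // own the payload (old pre-3.x API, matching av_free_packet above)
        }
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(copy);
        cond_.notify_one();
    }

    // Consumer side: blocks until a packet (or the NULL sentinel) is available.
    // The caller releases the packet with av_free_packet() and deletes the struct.
    AVPacket *pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        AVPacket *pkt = queue_.front();
        queue_.pop();
        return pkt;
    }

private:
    std::queue<AVPacket*> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};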
Solved:
I followed some of datenwolf's tips, and now the video displays correctly along with the audio.
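(The asker's final code is not reproduced in the excerpt. As a hedged sketch of the direction datenwolf's advice usually points in for code like this, not of the asker's exact change, the per-frame upload could look like the following; it reuses the member names from the question, and the helper name UploadFrameToTexture is made up for illustration. The key point is that glActiveTexture() takes a texture unit such as GL_TEXTURE0, not a texture name.)
#include <GL/gl.h>

// Hypothetical helper using the question's member names (_texture, _rgb_frame,
// _codec_context_video); a sketch, not the asker's actual fix.
void VideoGL::UploadFrameToTexture() {
    glActiveTexture(GL_TEXTURE0);           // a texture *unit* (GL_TEXTURE0 + n), not the texture ID
    glBindTexture(GL_TEXTURE_2D, _texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // tightly packed RGB rows are not 4-byte aligned in general
    // If _rgb_frame->linesize[0] != width * 3, also set GL_UNPACK_ROW_LENGTH to linesize[0] / 3.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    _codec_context_video->width, _codec_context_video->height,
                    GL_RGB, GL_UNSIGNED_BYTE, _rgb_frame->data[0]);
}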
-
Revision 3330: A menu item for configuration
25 April 2010, by kent1 — Log: A menu item for configuration
-
Sending video stream from NodeJS to Python in real time [closed]
17 June 2021, by Tristan Delort — I'm using a NodeJS server to catch a video stream through a WebRTC PeerConnection and I need to send it to a Python script.
I use NodeJS mainly because it's easy to use WebRTC in it, and the 'wrtc' package supports RTCVideoSink while Python's aiortc doesn't.
I was thinking of using a named pipe with ffmpeg to stream the video, but three questions arose:
-
Should I use Python instead of NodeJS and avoid the stream-through-a-named-pipe part entirely? (This would mean there is a way to extract individual frames from a MediaStreamTrack in Python.)
-
If I stick with the "NodeJS - Python" approach, how do I send the stream from one script to the other? A named pipe? Unix domain sockets? And with FFmpeg?
-
Finally, for performance, I think that sending a stream rather than individual frames is better and simpler, but is this true?
Thanks all!


-