
Medias (91)
-
999,999
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Slip - Artworks
26 September 2011
Updated: September 2011
Language: English
Type: Text
-
Demon seed (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
The four of us are dying (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Corona radiata (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Lights in the sky (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (29)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First of all, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other websites (3473)
-
avcodec_decode_video2 fails to decode after frame resolution change
7 October 2016, by Krzysztof Kansy
I’m using ffmpeg in an Android project via JNI to decode a real-time H264 video stream. On the Java side I’m only sending the byte arrays into the native module. The native code runs a loop, checking the data buffers for new data to decode. Each data chunk is processed with:
int bytesLeft = data->GetSize();
int paserLength = 0;
int decodeDataLength = 0;
int gotPicture = 0;
const uint8_t* buffer = data->GetData();
while (bytesLeft > 0) {
    AVPacket packet;
    av_init_packet(&packet);
    // Split the raw buffer into complete packets the decoder can consume
    paserLength = av_parser_parse2(_codecPaser, _codecCtx, &packet.data, &packet.size,
                                   buffer, bytesLeft,
                                   AV_NOPTS_VALUE, AV_NOPTS_VALUE, AV_NOPTS_VALUE);
    bytesLeft -= paserLength;
    buffer += paserLength;
    if (packet.size > 0) {
        // Decode the parsed packet; gotPicture is set when a full frame is ready
        decodeDataLength = avcodec_decode_video2(_codecCtx, _frame, &gotPicture, &packet);
    }
    else {
        break;
    }
    av_free_packet(&packet);
}
if (gotPicture) {
    // pass the frame to rendering
}

The system works pretty well until the incoming video’s resolution changes. I need to handle the transition between 4:3 and 16:9 aspect ratios. The AVCodecContext is configured as follows:
_codecCtx->flags2 |= CODEC_FLAG2_FAST;
_codecCtx->thread_count = 2;
_codecCtx->thread_type = FF_THREAD_FRAME;
if (_codec->capabilities & CODEC_FLAG_LOW_DELAY) {
    _codecCtx->flags |= CODEC_FLAG_LOW_DELAY;
}

I wasn’t able to continue decoding new frames after the video resolution change. The got_picture_ptr flag that avcodec_decode_video2 sets when a whole frame is available was never true after that.
This ticket made me wonder whether the issue is connected with multithreading. The only useful thing I’ve noticed is that when I change thread_type to FF_THREAD_SLICE, the decoder is not always blocked after a resolution change; about half of my attempts were successful. Switching to single-threaded processing is not possible, as I need more computing power: setting the context to one thread does not solve the problem and leaves the decoder unable to keep up with the incoming data.
Everything works well after an app restart. I can only think of one workaround (and it doesn’t really solve the problem): unloading and reloading the whole library after a stream resolution change (e.g. as mentioned here). I don’t think that’s a good approach though; it will probably introduce other bugs and take a lot of time (from the user’s viewpoint).
Is it possible to fix this issue?
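A workaround that is sometimes suggested for this kind of failure (it does not come from this thread, so treat it as an assumption) is to tear down and reopen only the codec context and parser when a new resolution is detected, instead of reloading the whole library. A minimal sketch, reusing the _codec, _codecCtx and _codecPaser fields from the code above:

// Hypothetical helper: recreate the decoder after a resolution change.
// _codec, _codecCtx and _codecPaser are the same fields used in the question.
static int reinitDecoder() {
    if (_codecCtx) {
        avcodec_free_context(&_codecCtx);   // closes and frees the old context
    }
    if (_codecPaser) {
        av_parser_close(_codecPaser);
        _codecPaser = NULL;
    }
    _codecCtx = avcodec_alloc_context3(_codec);
    if (!_codecCtx) {
        return -1;
    }
    _codecCtx->thread_count = 2;
    _codecCtx->thread_type = FF_THREAD_FRAME;
    if (avcodec_open2(_codecCtx, _codec, NULL) < 0) {
        return -1;
    }
    _codecPaser = av_parser_init(_codec->id);
    return _codecPaser ? 0 : -1;
}

// In the decode loop, after avcodec_decode_video2() delivers a frame:
// if (_frame->width != lastWidth || _frame->height != lastHeight) { reinitDecoder(); }

If the stream repeats SPS/PPS in-band (as live H264 streams typically do), the reopened decoder can pick up the new parameters from the next keyframe; whether this also avoids the frame-threading stall would have to be tested.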
EDIT:
I’ve dumped the stream data that is passed to the decoding pipeline. I changed the resolution a few times while the stream was being captured. Playing it with ffplay showed that at the moment the resolution changed and the preview in my application froze, ffplay managed to continue, although its preview is glitchy for a second or so. You can see the full ffplay log here. In this case the video preview stopped when I changed the resolution to 960x720 for the second time (Reinit context to 960x720, pix_fmt: yuv420p in the log).
-
Render FFmpeg AVFrame as OpenGL texture?
5 March 2019, by ZeroDefect
I’m attempting to render a JPEG image (1024x1024 pixels), held in an FFmpeg AVFrame, as a texture in OpenGL. What I get instead is something that appears as a 1024x1024 dark green quad:
The code to render the AVFrame data in OpenGL is shown below. I have convinced myself that the raw RGB data held within the FFmpeg AVFrame data is not solely dark green.
GLuint g_texture = {};
//////////////////////////////////////////////////////////////////////////
void display()
{
// Clear color and depth buffers
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW); // Operate on model-view matrix
glEnable(GL_TEXTURE_2D);
GLuint texture = g_texture;
glBindTexture(GL_TEXTURE_2D, texture);
// Draw a quad
glBegin(GL_QUADS);
glVertex2i(0, 0); // top left
glVertex2i(1024, 0); // top right
glVertex2i(1024, 1024); // bottom right
glVertex2i(0, 1024); // bottom left
glEnd();
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
glFlush();
}
/* Initialize OpenGL Graphics */
void initGL(int w, int h)
{
glViewport(0, 0, w, h); // use a screen size of WIDTH x HEIGHT
glEnable(GL_TEXTURE_2D); // Enable 2D texturing
glMatrixMode(GL_PROJECTION); // Make a simple 2D projection on the entire window
glOrtho(0.0, w, h, 0.0, 0.0, 100.0);
glMatrixMode(GL_MODELVIEW); // Set the matrix mode to object modeling
//glTranslatef( 0, 0, -15 );
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearDepth(0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the window
}
//////////////////////////////////////////////////////////////////////////
int main(int argc, char *argv[])
{
std::shared_ptr<AVFrame> apAVFrame;
if (!load_image_to_AVFrame(apAVFrame, "marble.jpg"))
{
assert(false);
return 1;
}
// From here on out, the AVFrame is RGB interleaved
// and is sized to 1,024 x 1,024 (power of 2).
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
glutInitWindowSize(1060, 1060);
glutInitWindowPosition(0, 0);
glutCreateWindow("OpenGL - Creating a texture");
glGenTextures(1, &g_texture);
//glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, g_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, apAVFrame->width,
apAVFrame->height, 0, GL_RGB, GL_UNSIGNED_BYTE,
apAVFrame->data[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); /* We will use linear interpolation for magnification filter */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* We will use linear interpolation for minifying filter */
initGL(1060, 1060);
glutDisplayFunc(display);
glutMainLoop();
return 0;
}
Environment:
- Ubuntu 18.04
- GCC v8.2
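load_image_to_AVFrame() is not shown in the question. For context, a helper along the following lines would produce the packed-RGB frame that main() expects; this is only a sketch (not the asker's code), assuming a reasonably recent FFmpeg with the send/receive decoding API:

// Sketch of a possible load_image_to_AVFrame(): decode a JPEG with libavformat/
// libavcodec and convert it to packed RGB24 so data[0] can be handed directly to
// glTexImage2D with GL_RGB / GL_UNSIGNED_BYTE. Error handling is abbreviated.
#include <memory>
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}

bool load_image_to_AVFrame(std::shared_ptr<AVFrame>& apFrame, const char* path)
{
    AVFormatContext* fmt = nullptr;
    if (avformat_open_input(&fmt, path, nullptr, nullptr) < 0)
        return false;
    avformat_find_stream_info(fmt, nullptr);

    int stream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    if (stream < 0)
        return false;

    const AVCodec* codec = avcodec_find_decoder(fmt->streams[stream]->codecpar->codec_id);
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(ctx, fmt->streams[stream]->codecpar);
    if (avcodec_open2(ctx, codec, nullptr) < 0)
        return false;

    // A still JPEG arrives as a single packet / single frame.
    AVPacket pkt;
    if (av_read_frame(fmt, &pkt) < 0)
        return false;
    avcodec_send_packet(ctx, &pkt);

    AVFrame* decoded = av_frame_alloc();
    if (avcodec_receive_frame(ctx, decoded) < 0)
        return false;

    // Convert whatever the decoder produced (typically YUVJ420P) to packed RGB24.
    AVFrame* rgb = av_frame_alloc();
    rgb->format = AV_PIX_FMT_RGB24;
    rgb->width = decoded->width;
    rgb->height = decoded->height;
    av_frame_get_buffer(rgb, 1);               // alignment 1 -> tightly packed rows

    SwsContext* sws = sws_getContext(decoded->width, decoded->height,
                                     (AVPixelFormat)decoded->format,
                                     rgb->width, rgb->height, AV_PIX_FMT_RGB24,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    sws_scale(sws, decoded->data, decoded->linesize, 0, decoded->height,
              rgb->data, rgb->linesize);

    apFrame = std::shared_ptr<AVFrame>(rgb, [](AVFrame* f) { av_frame_free(&f); });

    sws_freeContext(sws);
    av_frame_free(&decoded);
    av_packet_unref(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return true;
}

The important detail for the texture upload is the conversion to AV_PIX_FMT_RGB24 with tightly packed rows, so that data[0] matches what glTexImage2D is told to read.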
EDIT: As per @immibis’ suggestion below, it all works when I change the rendering of the quad to:
// Draw a quad
glBegin(GL_QUADS);
glTexCoord2f(0, 0);
glVertex2i(0, 0); // top left
glTexCoord2f(1, 0);
glVertex2i(1024, 0); // top right
glTexCoord2f(1, 1);
glVertex2i(1024, 1024); // bottom right
glTexCoord2f(0, 1);
glVertex2i(0, 1024); // bottom left
glEnd();
-
FFMPEG to OpenGL Texture
23 April 2014, by Spamdark
I’m here to ask how I can convert an AVFrame to an OpenGL texture. I created a renderer that outputs the audio (the audio is working) and the video, but the video is not being displayed. Here is my code:
Texture creation:
glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_2D, _texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

Code info: the _texture variable is a GLuint that holds the texture ID.
Function that gets the AVFrame and converts it to an OpenGL texture:
int VideoGL::NextVideoFrame(){
// Get a packet from the queue
AVPacket *videopacket = this->DEQUEUE(VIDEO);
int frameFinished;
if(videopacket!=0){
avcodec_decode_video2(_codec_context_video, _std_frame,&frameFinished,videopacket);
if(frameFinished){
sws_scale(sws_ctx, _std_frame->data, _std_frame->linesize, 0, _codec_context_video->height, _rgb_frame->data, _rgb_frame->linesize);
if(_firstrendering){
glBindTexture(GL_TEXTURE_2D,_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _codec_context_video->width,_codec_context_video->height,0,GL_RGB,GL_UNSIGNED_BYTE,_rgb_frame->data[0]);
_firstrendering = false;
}else{
glActiveTexture(_texture);
glBindTexture(GL_TEXTURE_2D,_texture);
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,_codec_context_video->width,_codec_context_video->height,GL_RGB,GL_UNSIGNED_BYTE,_rgb_frame->data[0]);
}
av_free_packet(videopacket);
return 0;
}else{
av_free_packet(videopacket);
return -1;
}
}else{
return -1;
}
return 0;
}

Code information: there is a queue where a thread stores the AVPackets; this function is called frequently to get the frames, until it receives a NULL and stops being called.
That’s actually not working. (I tried looking at some related questions on Stack Overflow, but it’s still not working.)
Any example, or can someone help me correct any error there?

Additional data: I tried changing GL_RGB to GL_RGBA and started playing with the formats, but it crashes when I try GL_RGBA (because the width and height are very big, although I tried to resize them). I have also tried changing the sizes to powers of 2; it still doesn't work.
1 Edit:
Thread function:
DWORD WINAPI VideoGL::VidThread(LPVOID myparam){
VideoGL * instance = (VideoGL*) myparam;
instance->wave_audio->Start();
int quantity=0;
AVPacket packet;
while(av_read_frame(instance->_format_context,&packet) >= 0){
if(packet.stream_index==instance->videoStream){
instance->ENQUEUE(VIDEO,&packet);
}
if(packet.stream_index==instance->audioStream){
instance->ENQUEUE(AUDIO,&packet);
}
}
instance->ENQUEUE(AUDIO,NULL);
instance->ENQUEUE(VIDEO,NULL);
return 0;
}

Thread creation function:
CreateThread(NULL, 0, VidThread, this, NULL, NULL);
Where this refers to the class that contains the NextVideoFrame function and the _texture member.
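The answer the asker followed isn’t included in this excerpt, so the following is only an illustration, not the accepted fix. One thing that does stand out in NextVideoFrame() above is glActiveTexture(_texture): glActiveTexture() selects a texture unit (GL_TEXTURE0, GL_TEXTURE1, ...), not a texture object, so passing a texture ID to it is invalid. A per-frame upload written around that assumption might look like:

// Sketch of a corrected per-frame texture update. It assumes _rgb_frame was
// allocated (e.g. with avpicture_alloc) for the codec's width/height and RGB24.
glActiveTexture(GL_TEXTURE0);               // select texture *unit* 0
glBindTexture(GL_TEXTURE_2D, _texture);     // then bind the texture object
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);      // RGB24 rows are not 4-byte aligned in general
if (_firstrendering) {
    // Allocate the texture storage once with the first frame...
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 _codec_context_video->width, _codec_context_video->height,
                 0, GL_RGB, GL_UNSIGNED_BYTE, _rgb_frame->data[0]);
    _firstrendering = false;
} else {
    // ...and only update the pixel data on subsequent frames.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    _codec_context_video->width, _codec_context_video->height,
                    GL_RGB, GL_UNSIGNED_BYTE, _rgb_frame->data[0]);
}

Whether this alone is enough depends on how _rgb_frame is allocated and on where the GL context is current; the point is only that glActiveTexture and glBindTexture have different roles.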
Solved:
I followed some of the datenwolf tips, and now the video is displaying correctly with the audio/video: