Advanced search

Media (0)

Word: - Tags -/acrobat

No media matching your criteria is available on this site.

Other articles (39)

  • Customizing categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a category-type document, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a media-type document, the fields not displayed by default are: Short description
    This configuration area is also where you can specify the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (9872)

  • Black screen when playing a video with ffmpeg and SDL on iOS

    1 April 2012, by patrick

    I'm attempting to create a video player on iOS using ffmpeg and SDL. I'm decoding the video stream and attempting to convert the pixel data into an SDL_Surface, then converting that to an SDL_Texture and rendering it on screen. However, all I'm getting is a black screen. I know the video file is good and plays fine in VLC. Any idea what I'm missing here?

    Initialization code:

       // initialize SDL (Simple DirectMedia Layer) to playback the content
       if( SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER) )
       {
           DDLogError(@"Unable to initialize SDL");
           return NO;
       }

       // create window and renderer
       window = SDL_CreateWindow(NULL, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT,
                                 SDL_WINDOW_OPENGL | SDL_WINDOW_BORDERLESS |
                                 SDL_WINDOW_SHOWN);
       if ( window == 0 )
       {
           DDLogError(@"Unable to initialize SDL Window");
       }

       renderer = SDL_CreateRenderer(window, -1, 0);
       if ( !renderer )
       {
           DDLogError(@"Unable to initialize SDL Renderer");
       }

       // Initialize the FFMpeg and register codecs and their respected file formats
       av_register_all();

    Playback code:
     AVFormatContext *formatContext = NULL;

    DDLogInfo(@"Opening media file at location:%@", filePath);

    const char *filename = [filePath cStringUsingEncoding:NSUTF8StringEncoding];
    // Open media file
    if( avformat_open_input(&formatContext, filename, NULL, NULL) != 0 )
    {
       DDLogWarn(@"Unable to open media file. [File:%@]", filePath);

       NSString *failureReason = NSLocalizedString(@"Unable to open file.", @"Media playback failed, unable to open file.");

       if ( error != NULL )
       {
           *error = [NSError errorWithDomain:MediaPlayerErrorDomain
                                        code:UNABLE_TO_OPEN
                                    userInfo:[NSDictionary dictionaryWithObject:failureReason
                                                                         forKey:NSLocalizedFailureReasonErrorKey]];
       }

       return NO; // Couldn't open file
    }

    // Retrieve stream information
     if( avformat_find_stream_info(formatContext, NULL) < 0 )
    {
       DDLogWarn(@"Unable to locate stream information for file. [File:%@]", filePath);

       NSString *failureReason = NSLocalizedString(@"Unable to find audio/video stream information.", @"Media playback failed, unable to find stream information.");

       if ( error != NULL )
       {
           *error = [NSError errorWithDomain:MediaPlayerErrorDomain
                                        code:UNABLE_TO_FIND_STREAM
                                    userInfo:[NSDictionary dictionaryWithObject:failureReason
                                                                         forKey:NSLocalizedFailureReasonErrorKey]];
       }

       return NO;  // Missing stream information
    }

    // Find the first video or audio stream
    int videoStream = -1;
    int audioStream = -1;

    DDLogInfo(@"Locating stream information for media file");

    for( int index=0; index<(formatContext->nb_streams); index++)
    {
       if( formatContext->streams[index]->codec->codec_type==AVMEDIA_TYPE_VIDEO )
       {
           DDLogInfo(@"Found video stream");
           videoStream = index;
           break;
       }
       else if( mediaType == AUDIO_FILE &&
               (formatContext->streams[index]->codec->codec_type==AVMEDIA_TYPE_AUDIO) )
       {
           DDLogInfo(@"Found audio stream");
           audioStream = index;
           break;
       }
    }

    if( videoStream == -1 && (audioStream == -1) )
    {
       DDLogWarn(@"Unable to find video or audio stream for file");

       NSString *failureReason = NSLocalizedString(@"Unable to locate audio/video stream.", @"Media playback failed, unable to locate media stream.");

       if ( error != NULL )
       {
           *error = [NSError errorWithDomain:MediaPlayerErrorDomain
                                        code:UNABLE_TO_FIND_STREAM
                                    userInfo:[NSDictionary dictionaryWithObject:failureReason
                                                                         forKey:NSLocalizedFailureReasonErrorKey]];
       }

       return NO; // Didn't find a video or audio stream
    }

    // Get a pointer to the codec context for the video/audio stream
    AVCodecContext *codecContext;

    DDLogInfo(@"Attempting to locate the codec for the media file");

    if ( videoStream > -1 )
    {
       codecContext = formatContext->streams[videoStream]->codec;

    }
    else
    {
       codecContext = formatContext->streams[audioStream]->codec;
    }

    // Now that we have information about the codec that the file is using,
    // we need to actually open the codec to decode the content

    DDLogInfo(@"Attempting to open the codec to playback the media file");

    AVCodec *codec;

    // Find the decoder for the video stream
    codec = avcodec_find_decoder(codecContext->codec_id);
    if( codec == NULL )
    {
       DDLogWarn(@"Unsupported codec! Cannot playback meda file [File:%@]", filePath);

       NSString *failureReason = NSLocalizedString(@"Unsupported file format. Cannot playback media.", @"Media playback failed, unsupported codec.");
       if ( error != NULL )
       {
           *error = [NSError errorWithDomain:MediaPlayerErrorDomain
                                        code:UNSUPPORTED_CODEC
                                    userInfo:[NSDictionary dictionaryWithObject:failureReason
                                                                         forKey:NSLocalizedFailureReasonErrorKey]];
       }

       return NO; // Codec not found
    }

    // Open codec
    if( avcodec_open2(codecContext, codec, NULL) < 0 )
    {
       DDLogWarn(@"Unable to open codec! Cannot playback meda file [File:%@]", filePath);

       NSString *failureReason = NSLocalizedString(@"Unable to open media codec. Cannot playback media.", @"Media playback failed, cannot open codec.");
       if ( error != NULL )
       {
           *error = [NSError errorWithDomain:MediaPlayerErrorDomain
                                        code:UNABLE_TO_LOAD_CODEC
                                    userInfo:[NSDictionary dictionaryWithObject:failureReason
                                                                         forKey:NSLocalizedFailureReasonErrorKey]];
       }

       return NO; // Could not open codec
    }

    // Allocate player frame
    AVFrame *playerFrame=avcodec_alloc_frame();

    // Allocate an AVFrame structure
    AVFrame *RGBframe=avcodec_alloc_frame();
    if( RGBframe==NULL )
    {
       // could not create a frame to convert our video frame
       // to a 16-bit RGB565 frame.

       DDLogWarn(@"Unable to convert video frame. Cannot playback meda file [File:%@]", filePath);

       NSString *failureReason = NSLocalizedString(@"Problems interpreting video frame information.", @"Media playback failed, cannot convert frame.");
       if ( error != NULL )
       {
           *error = [NSError errorWithDomain:MediaPlayerErrorDomain
                                        code:UNABLE_TO_LOAD_FRAME
                                    userInfo:[NSDictionary dictionaryWithObject:failureReason
                                                                         forKey:NSLocalizedFailureReasonErrorKey]];
       }

       return NO; // Could not open codec
    }

    int frameFinished = 0;
    AVPacket packet;

    // Figure out the destination width/height based on the screen size
    int destHeight = codecContext->height;
    int destWidth  = codecContext->width;
    if ( destHeight > SCREEN_HEIGHT || (destWidth > SCREEN_WIDTH) )
    {
       if ( destWidth > SCREEN_WIDTH )
       {
           float percentDiff = ( destWidth - SCREEN_WIDTH ) / (float)destWidth;
           destWidth  = destWidth  - (int)(destWidth * percentDiff );
           destHeight = destHeight - (int)(destHeight * percentDiff );
       }

       if ( destHeight > SCREEN_HEIGHT )
       {
           float percentDiff = (destHeight - SCREEN_HEIGHT ) / (float)destHeight;
           destWidth  = destWidth  - (int)(destWidth * percentDiff );
           destHeight = destHeight - (int)(destHeight * percentDiff );
       }
    }

     struct SwsContext *swsContext = sws_getContext(codecContext->width, codecContext->height, codecContext->pix_fmt, destWidth, destHeight, PIX_FMT_RGB565, SWS_BICUBIC, NULL, NULL, NULL);

    while( av_read_frame(formatContext, &packet) >= 0 )
    {
       // Is this a packet from the video stream?
       if( packet.stream_index == videoStream )
       {
           // Decode video frame
           avcodec_decode_video2(codecContext, playerFrame, &frameFinished, &packet);

           // Did we get a video frame?
           if( frameFinished != 0 )
           {
               // Convert the content over to RGB565 (16-bit RGB) to playback with SDL

               uint8_t *dst[3];
               int dstStride[3];

               // Set the destination stride
               for (int plane = 0; plane < 3; plane++)
               {
                   dstStride[plane] = codecContext->width*2;
                   dst[plane]= (uint8_t*) malloc(dstStride[plane]*destHeight);
               }

               sws_scale(swsContext, playerFrame->data,
                         playerFrame->linesize, 0,
                         destHeight,
                         dst, dstStride);

               // Create the SDL surface frame that we are going to use to draw our video

               // 16-bit RGB so 2 bytes per pixel (pitch = width*(bytes per pixel))
               int pitch = destWidth*2;
               SDL_Surface *frameSurface = SDL_CreateRGBSurfaceFrom(dst[0], destWidth, destHeight, 16, pitch, 0, 0, 0, 0);

               // Clear the old frame first
               SDL_RenderClear(renderer);

               // Move the frame over to a texture and render it on screen
               SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, frameSurface);
               SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND);

               // Draw the new frame on the screen
               SDL_RenderPresent(renderer);

               SDL_DestroyTexture(texture);
               SDL_FreeSurface(frameSurface);
           }
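
    The listing breaks off mid-loop, but one step is visibly missing from the render path above: the texture created with SDL_CreateTextureFromSurface() is never copied onto the renderer before SDL_RenderPresent(), so only the cleared (black) backbuffer is ever presented. A minimal sketch of the present step with the missing call added (SDL2 API, reusing the variable names from the code above):

       // Clear, copy the frame texture onto the rendering target, then present.
       // Without SDL_RenderCopy() the cleared backbuffer is shown, i.e. black.
       SDL_RenderClear(renderer);
       SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, frameSurface);
       SDL_RenderCopy(renderer, texture, NULL, NULL);
       SDL_RenderPresent(renderer);

       SDL_DestroyTexture(texture);
       SDL_FreeSurface(frameSurface);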

  • Video Overlay not returning to black once video stopped

    24 September 2020, by Matt Nelson

    I am combining multiple videos from a webrtc call, aligning the "user-terminal" videos to the left and the "user-visitor" videos to the right. As the visitor feed can start and stop, there are multiple visitor videos, so I'm offsetting them by timestamp.

    


    This is working, however crazy it looks!

    


    The one last issue I have is that when the first visitor video stops on the right, it shows the last frame of that video until the next video starts on the left. Can I have it return to the black background?

    


    Here is the command passed to ffmpeg:

    


    ffmpeg -y 
-i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-terminal-1600953586531366-audio.mjr.opus 
-i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-visitor-1600953592694430-audio.mjr.opus 
-i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-visitor-1600953609873223-audio.mjr.opus 
-i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-visitor-1600953628668227-audio.mjr.opus 
-i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-visitor-1600953663905342-audio.mjr.opus 
-i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-MasterTerminal-52350116-1600953681107272-audio.mjr.opus 
-i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-visitor-1600953697832165-audio.mjr.opus 
-i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-MasterTerminal-52350116-1600953723320364-audio.mjr.opus 
-i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-visitor-1600953725307043-audio.mjr.opus 
-filter_complex [1:a]adelay=6163|6163[1adelay];[2:a]adelay=23341|23341[2adelay];[3:a]adelay=42136|42136[3adelay];[4:a]adelay=77373|77373[4adelay];[5:a]adelay=94575|94575[5adelay];[6:a]adelay=111300|111300[6adelay];[7:a]adelay=136788|136788[7adelay];[8:a]adelay=138775|138775[8adelay];[0:a][1adelay][2adelay][3adelay][4adelay][5adelay][6adelay][7adelay][8adelay]amix=inputs=9:duration=longest[a] 
-map [a] -ac 2 /recordings/process/5f6c9c3/5f6c9c3.mp3
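
    The command above is only the audio mixing pass, so the video pass is not shown; but a common way to stop an overlaid input from freezing on its last frame is the overlay filter's eof_action option. A hedged sketch with a hypothetical file name, canvas size, and duration, drawing one visitor video on a black background:

     ffmpeg -i visitor1.webm \
       -filter_complex "color=c=black:s=1280x720:d=140[bg];[bg][0:v]overlay=x=main_w/2:y=0:eof_action=pass[v]" \
       -map "[v]" -t 140 right-side.mp4

    With eof_action=pass the filter passes the main (black) input through unchanged once the overlaid stream ends, instead of repeating its last frame, which is the default behaviour.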


    


  • Rendering YUV420P ffmpeg decoded images on QT with OpenGL, only see black screen

    17 February 2019, by Lucas Zanella

    I've found this Qt OpenGL widget which should render a YUV420P image on screen. I'm feeding an ffmpeg-decoded buffer into its paintGL() function but I see nothing: neither noise nor a correct image, only a black screen. I'm trying to understand why.

    I want to exclude the possibility of other things being wrong, but first I need to be sure my code can produce anything at all. I printed some bytes from the ffmpeg buffer with std::cout just to see if they were arriving, and they were. So I should see at least some noise.

    Can you see anything wrong with my code that would prevent it from rendering images on screen?

    This is the widget that should output the image:

    #include "XVideoWidget.h"
     #include <QDebug>
     #include <QTimer>
    #include <iostream>
     // stringizing macro: automatically wraps the argument in double quotes
    #define GET_STR(x) #x
    #define A_VER 3
    #define T_VER 4

     // vertex shader
    const char *vString = GET_STR(
       attribute vec4 vertexIn;
       attribute vec2 textureIn;
       varying vec2 textureOut;
       void main(void)
       {
           gl_Position = vertexIn;
           textureOut = textureIn;
       }
    );


     // fragment shader
    const char *tString = GET_STR(
       varying vec2 textureOut;
       uniform sampler2D tex_y;
       uniform sampler2D tex_u;
       uniform sampler2D tex_v;
       void main(void)
       {
           vec3 yuv;
           vec3 rgb;
           yuv.x = texture2D(tex_y, textureOut).r;
           yuv.y = texture2D(tex_u, textureOut).r - 0.5;
           yuv.z = texture2D(tex_v, textureOut).r - 0.5;
           rgb = mat3(1.0, 1.0, 1.0,
               0.0, -0.39465, 2.03211,
               1.13983, -0.58060, 0.0) * yuv;
           gl_FragColor = vec4(rgb, 1.0);
       }

    );



     // prepare the YUV data
    // ffmpeg -i v1080.mp4 -t 10 -s 240x128 -pix_fmt yuv420p  out240x128.yuv
    XVideoWidget::XVideoWidget(QWidget * parent)
    {
      // setWindowFlags (Qt::WindowFullscreenButtonHint);
     //  showFullScreen();

    }

    XVideoWidget::~XVideoWidget()
    {
    }

     // initialize OpenGL
    void XVideoWidget::initializeGL()
    {
       //qDebug() << "initializeGL";
       std::cout << "initializing gl" << std::endl;
       // initialize the OpenGL functions (inherited from QOpenGLFunctions)
       initializeOpenGLFunctions();

       this->m_F  = QOpenGLContext::currentContext()->functions();

       // load the shader (vertex and fragment) sources into the program
       // fragment (pixel) shader
       std::cout << program.addShaderFromSourceCode(QOpenGLShader::Fragment, tString) << std::endl;
       // vertex shader
       std::cout << program.addShaderFromSourceCode(QOpenGLShader::Vertex, vString) << std::endl;

       // bind the vertex coordinate attribute
       program.bindAttributeLocation("vertexIn",A_VER);

       // bind the texture coordinate attribute
       program.bindAttributeLocation("textureIn",T_VER);

       // link (compile) the shader program
       std::cout << "program.link() = " << program.link() << std::endl;

       std::cout << "program.bind() = " << program.bind() << std::endl;

       // pass the vertex and texture coordinates
       // vertex coordinates
       static const GLfloat ver[] = {
           -1.0f,-1.0f,
           1.0f,-1.0f,
           -1.0f, 1.0f,
           1.0f,1.0f
       };

       // texture coordinates
       static const GLfloat tex[] = {
           0.0f, 1.0f,
           1.0f, 1.0f,
           0.0f, 0.0f,
           1.0f, 0.0f
       };

       // vertex coordinates
       glVertexAttribPointer(A_VER, 2, GL_FLOAT, 0, 0, ver);
       glEnableVertexAttribArray(A_VER);

       // texture coordinates
       glVertexAttribPointer(T_VER, 2, GL_FLOAT, 0, 0, tex);
       glEnableVertexAttribArray(T_VER);

       //glUseProgram(&program);
       // fetch the texture uniform locations from the shader
       unis[0] = program.uniformLocation("tex_y");
       unis[1] = program.uniformLocation("tex_u");
       unis[2] = program.uniformLocation("tex_v");

       // create the textures
       glGenTextures(3, texs);

       //Y
       glBindTexture(GL_TEXTURE_2D, texs[0]);
       // magnification filter: linear interpolation (GL_NEAREST is faster but blocky)
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
       // allocate texture storage on the GPU
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, 0);

       //U
       glBindTexture(GL_TEXTURE_2D, texs[1]);
       // magnification filter: linear interpolation
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
       // allocate texture storage on the GPU
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width/2, height / 2, 0, GL_RED, GL_UNSIGNED_BYTE, 0);

       //V
       glBindTexture(GL_TEXTURE_2D, texs[2]);
       // magnification filter: linear interpolation
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
       // allocate texture storage on the GPU
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width / 2, height / 2, 0, GL_RED, GL_UNSIGNED_BYTE, 0);

       // allocate client-side memory for the texture data
       datas[0] = new unsigned char[width*height];     //Y
       datas[1] = new unsigned char[width*height/4];   //U
       datas[2] = new unsigned char[width*height/4];   //V
    }

     // refresh the display
    void XVideoWidget::paintGL(unsigned char**data)
    //void QFFmpegGLWidget::updateData(unsigned char**data)
    {
       std::cout &lt;&lt; "painting!" &lt;&lt; std::endl;
       memcpy(datas[0], data[0], width*height);
       memcpy(datas[1], data[1], width*height/4);
       memcpy(datas[2], data[2], width*height/4);

       glActiveTexture(GL_TEXTURE0);
       glBindTexture(GL_TEXTURE_2D, texs[0]); // bind texture unit 0 to the Y texture
       // update the texture contents (copy from memory)
       glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, datas[0]);
       // associate with the tex_y uniform in the shader
       glUniform1i(unis[0], 0);


       glActiveTexture(GL_TEXTURE0+1);
       glBindTexture(GL_TEXTURE_2D, texs[1]); // bind texture unit 1 to the U texture
       // update the texture contents (copy from memory)
       glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width/2, height / 2, GL_RED, GL_UNSIGNED_BYTE, datas[1]);
       // associate with the tex_u uniform in the shader
       glUniform1i(unis[1],1);


       glActiveTexture(GL_TEXTURE0+2);
       glBindTexture(GL_TEXTURE_2D, texs[2]); // bind texture unit 2 to the V texture
       // update the texture contents (copy from memory)
       glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height / 2, GL_RED, GL_UNSIGNED_BYTE, datas[2]);
       // associate with the tex_v uniform in the shader
       glUniform1i(unis[2], 2);

       glDrawArrays(GL_TRIANGLE_STRIP,0,4);
       qDebug() &lt;&lt; "paintGL";
    }


     // handle window resizing
    void XVideoWidget::resizeGL(int width, int height)
    {
       m_F->glViewport(0, 0, width, height);

       qDebug() &lt;&lt; "resizeGL "&lt;code></iostream></qtimer></qdebug>

    Here's a bit of code from my MainWindow:

    MainWindow::MainWindow(QWidget *parent):
       QMainWindow(parent)
       {
           FfmpegDecoder* ffmpegDecoder = new FfmpegDecoder();
           if(!ffmpegDecoder->Init()) {
               std::cout &lt;&lt; "problem with ffmpeg decoder init"  &lt;&lt; std::endl;
           } else {
               std::cout &lt;&lt; "fmmpeg decoder initiated"  &lt;&lt; std::endl;
           }
           XVideoWidget * xVideoWidget = new XVideoWidget(parent);
           ffmpegDecoder->setOpenGLWidget(xVideoWidget);

           mediaStream = new MediaStream(uri, ffmpegDecoder, videoConsumer);
           //...
       }
       void MainWindow::run()
       {
           mediaStream->receiveFrame();
       }

    My main.cpp makes sure my window's run() method runs in the background.

       MainWindow w;
       w.setFixedSize(1280,720);
       w.show();
       boost::thread mediaThread(&MainWindow::run, &w);
       std::cout << "mediaThread running" << std::endl;

    If someone wants to view the entire code, feel free to visit the commit I just made: https://github.com/lucaszanella/orwell/tree/bbd74e42bd42df685bacc5d51cacbee3a178689f