
Media (0)

Word: - Tags -/performance

No media matching your criteria is available on the site.

Other articles (63)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news creation form.
    News creation form: for a document of the news type, the default fields are: publication date (customise the publication date) (...)

  • Authorisations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

On other sites (11056)

  • Overthinking My Search Engine Problem

     31 December 2013, by Multimedia Mike — General

    I wrote a search engine for my Game Music Appreciation website, because the site would have been significantly less valuable without it (and I would eventually realize that the search feature is probably the most valuable part of this endeavor). I came up with a search solution that was a bit sketchy, but worked… until it didn’t. I thought of a fix but still searched for more robust and modern solutions (where ‘modern’ is defined as something that doesn’t require compiling a C program into a static CGI script and hoping that it works on a server I can’t debug on).

     Finally, I realized that I was overthinking the problem: did you know that a bunch of relational database management systems (RDBMSs) support full text search (FTS)? Okay, maybe you did, but I didn’t know this.

    Problem Statement
     My goal is to enable users to search the metadata (title, composer, copyright, other tags) attached to various games. To do this, I want to index a series of contrived documents that describe the metadata. Here are 2 examples of these contrived documents; they are interesting because both of these games have very different titles depending on the region, something the search engine needs to account for:

    system : Nintendo NES
    game : Snoopy’s Silly Sports Spectacular
    author : None ; copyright : 1988 Kemco ; dumped by : None
    additional tags : Donald Duck.nsf Donald Duck
    

    system : Super Nintendo
    game : Arcana
    author : Jun Ishikawa, Hirokazu Ando ; copyright : 1992 HAL Laboratory ; dumped by : Datschge
    additional tags : card.rsn.gamemusic Card Master Cardmaster

    The index needs to map these documents to various pieces of game music and the search solution needs to efficiently search these documents and find the various game music entries that match a user’s request.

    Now that I’ve been looking at it for long enough, I’m able to express the problem surprisingly succinctly. If I had understood that much originally, this probably would have been simpler.

    First Solution & Breakage
    My original solution was based on SWISH-E. The CGI script was a C program that statically linked the SWISH-E library into a binary that miraculously ran on my web provider. At least, it ran until it decided to stop working a month ago when I added a new feature unrelated to search. It was a very bizarre problem, the details of which would probably bore you to tears. But if you care, the details are all there in the Stack Overflow question I asked on the matter.

    While no one could think of a direct answer to the problem, I eventually thought of a roundabout fix. The problem seemed to pertain to the static linking. Since I couldn’t count on the relevant SWISH-E library to be on my host’s system, I uploaded the shared library to the same directory as the CGI script and used dlopen()/dlsym() to fetch the functions I needed. It worked again, but I didn’t know for how long.
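
     For illustration, the dlopen()/dlsym() pattern looks roughly like the sketch below. This is not the author’s actual code: the library filename and the symbol name are assumptions, since the real SWISH-E entry points differ.

     #include <dlfcn.h>   // dlopen, dlsym, dlerror, dlclose
     #include <cstdio>

     // Hypothetical signature for the library routine we want to call; it only
     // serves to illustrate resolving a function from a bundled shared library.
     typedef int (*search_fn)(const char *index_path, const char *query);

     int main()
     {
         // Load the shared library shipped alongside the CGI script instead of
         // relying on a system-wide installation on the host.
         void *handle = dlopen("./libswish-e.so", RTLD_NOW);
         if (!handle) {
             std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
             return 1;
         }

         // Resolve the entry point by name at runtime ("swish_query" is hypothetical).
         search_fn do_search = reinterpret_cast<search_fn>(dlsym(handle, "swish_query"));
         if (!do_search) {
             std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
             dlclose(handle);
             return 1;
         }

         int hits = do_search("game_metadata.index", "arcana");
         std::printf("%d hits\n", hits);

         dlclose(handle);
         return 0;
     }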

    Searching For A Hosted Solution
     I know that anything is possible in this day and age; while my web host is fairly limited, there are lots of hosted solutions for things like this that let you deploy any technology you want at reasonable prices. I figured that there must be a hosted solution out there.

     I have long wanted a compelling reason to really dive into Amazon Web Services (AWS) and this sounded like a good opportunity. After all, my script works well enough; if I could just find a simple Linux box out there where I could install the SWISH-E library and compile the CGI script, I should be good to go. AWS has a free tier and I started investigating this approach. But it seems like a rabbit hole with a lot of moving pieces necessary for such a simple task.

    I had heard that AWS had something in this area. Sure enough, it’s called CloudSearch. However, I’m somewhat discouraged by the fact that it would cost me around $75 per month to run the smallest type of search instance which is at the core of the service.

    Finally, I came to another platform called Heroku. It’s supposed to be super-scalable while having a free tier for hobbyists. I started investigating FTS on Heroku and found this article which recommends using the FTS capabilities of their standard hosted PostgreSQL solution. However, the free tier of Postgres hosting only allows for 10,000 rows of data. Right now, my database has about 5400 rows. I expect it to easily overflow the 10,000 limit as soon as I incorporate the C64 SID music corpus.

    However, this Postgres approach planted a seed.

    RDBMS Revelation
     I have 2 RDBMSs available on my hosting plan: MySQL and SQLite (the former is a separate service while SQLite is built into PHP). I quickly learned that both have FTS capabilities. Since I like using SQLite so much, I elected to leverage its FTS functionality. And it’s just this simple:

     CREATE VIRTUAL TABLE gamemusic_metadata_fts USING fts3
     ( content TEXT, game_id INT, title TEXT );

     SELECT game_id, title FROM gamemusic_metadata_fts WHERE content MATCH "arcana";
     479|Arcana

    The ‘content’ column gets the metadata pseudo-documents. The SQL gets wrapped up in a little PHP so that it queries this small database and turns the result into JSON. The script is then ready as a drop-in replacement for the previous script.
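
     As a rough sketch of that flow (the author’s wrapper is PHP; this illustration uses the SQLite C API from C++ instead, and the database filename and the exact JSON shape are assumptions):

     #include <sqlite3.h>
     #include <cstdio>

     // Run an FTS MATCH against the gamemusic_metadata_fts table defined above
     // and print the hits as crude JSON. Error handling and string escaping are
     // abbreviated; "gamemusic.sqlite" is an assumed filename.
     int main()
     {
         sqlite3 *db = nullptr;
         if (sqlite3_open("gamemusic.sqlite", &db) != SQLITE_OK)
             return 1;

         const char *sql =
             "SELECT game_id, title FROM gamemusic_metadata_fts WHERE content MATCH ?";
         sqlite3_stmt *stmt = nullptr;
         if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) {
             sqlite3_close(db);
             return 1;
         }

         // Bind the user's search term as the MATCH expression.
         sqlite3_bind_text(stmt, 1, "arcana", -1, SQLITE_TRANSIENT);

         std::printf("[");
         bool first = true;
         while (sqlite3_step(stmt) == SQLITE_ROW) {
             std::printf("%s{\"game_id\": %d, \"title\": \"%s\"}",
                         first ? "" : ", ",
                         sqlite3_column_int(stmt, 0),
                         reinterpret_cast<const char *>(sqlite3_column_text(stmt, 1)));
             first = false;
         }
         std::printf("]\n");

         sqlite3_finalize(stmt);
         sqlite3_close(db);
         return 0;
     }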

  • My journey to Coviu

     27 October 2015, by silvia

    My new startup just released our MVP – this is the story of what got me here.

    I love creating new applications that let people do their work better or in a manner that wasn’t possible before.

     My first such passion came as a student intern, when I built a system for a German building and loan association’s monthly customer magazine. The group I worked with was managing their advertiser contacts through a set of paper cards, and I wrote a dBase-based system (yes, that long ago) to manage their customer relationships. They loved it – until it got replaced by an SAP system that cost 100 times what I cost them, had really poor UX, and only gave them half the functionality. It was a corporate system with ongoing support, which made all the difference to them.

     Dr Scholz und Partner GmbH

     The story repeated itself with a CRM for my uncle’s construction company, and with a resume and quotation management system for Accenture right after Uni, both of which I left behind when I decided to go into research.

     Even as a PhD student, I never lost sight of challenges that people were facing and wanted to develop technology to overcome problems. The aim of my PhD thesis was to prepare for the oncoming onslaught of audio and video on the Internet (yes, this was 1994!) by developing algorithms to automatically extract and locate information in such files, which would enable users to structure, index and search such content.

     Many of the use cases that we explored are now part of products or continue to be challenges: finding music that matches your preferences, identifying music or video pieces e.g. to count ads on the radio or to mark copyright infringement, or the automated creation of video summaries such as trailers.

    CSIRO

    This continued when I joined the CSIRO in Australia – I was working on segmenting speech into words or talk spurts since that would simplify captioning & subtitling, and on MPEG-7 which was a (slightly over-engineered) standard to structure metadata about audio and video.

     In 2001 I had the idea of replicating the Web for videos: i.e. creating hyperlinked and searchable video-only experiences. We called it “Annodex” for annotated and indexed video and it needed full-screen hyperlinked video in browsers – man were we ahead of our time! It was my first step into standards, got several IETF RFCs to my name, and started my involvement with open codecs through Xiph.

     Around the time that YouTube was founded in 2006, I founded Vquence – originally a video search company for the Web, but pivoted to a video metadata mining company. Vquence still exists and continues to sell its data to channel partners, but it lacks the user impact that has always driven my work.

    As the video element started being developed for HTML5, I had to get involved. I contributed many use cases to the W3C, became a co-editor of the HTML5 spec and focused on video captioning with WebVTT while contracting to Mozilla and later to Google. We made huge progress and today the technology exists to publish video on the Web with captions, making the Web more inclusive for everybody. I contributed code to YouTube and Google Chrome, but was keen to make a bigger impact again.

     The opportunity came when a couple of former CSIRO colleagues who now worked for NICTA approached me to get me interested in addressing new use cases for video conferencing in the context of WebRTC. We worked on a kiosk-style solution to service delivery for large service organisations, particularly targeting government. The emerging WebRTC standard posed many technical challenges that we addressed by building rtc.io, by contributing to the standards, and registering bugs on the browsers.

    Fast-forward through the development of a few further custom solutions for customers in health and education and we are starting to see patterns of need emerge. The core learning that we’ve come away with is that to get things done, you have to go beyond “talking heads” in a video call. It’s not just about seeing the other person, but much more about having a shared view of the things that need to be worked on and a shared way of interacting with them. Also, we learnt that the things that are being worked on are quite varied and may include multiple input cameras, digital documents, Web pages, applications, device data, controls, forms.

     So we set out to build a solution that would enable productive remote collaboration to take place. It would need to provide an excellent user experience, it would need to be simple to work with, provide for the standard use cases out of the box, yet be architected to be extensible for the specialised data sharing needs that we knew some of our customers had. It would need to be usable directly on Coviu.com, but also able to integrate with specialised applications that some of our customers were already using, such as the applications that they spend most of their time in (CRMs, practice management systems, learning management systems, team chat systems). It would need to require our customers to sign up, yet allow their clients to join a call without signing up.

    Collaboration is a big problem. People are continuing to get more comfortable with technology and are less and less inclined to travel distances just to get a service done. In a country as large as Australia, where 12% of the population lives in rural and remote areas, people may not even be able to travel distances, particularly to receive or provide recurring or specialised services, or to achieve work/life balance. To make the world a global village, we need to be able to work together better remotely.

    The need for collaboration is being recognised by specialised Web applications already, such as the LiveShare feature of Invision for Designers, Codassium for pair programming, or the recently announced Dropbox Paper. Few go all the way to video – WebRTC is still regarded as a complicated feature to support.

    Coviu in action

    With Coviu, we’d like to offer a collaboration feature to every Web app. We now have a Web app that provides a modern and beautifully designed collaboration interface. To enable other Web apps to integrate it, we are now developing an API. Integration may entail customisation of the data sharing part of Coviu – something Coviu has been designed for. How to replicate the data and keep it consistent when people collaborate remotely – that is where Coviu makes a difference.

     We have started our journey and have just launched free signup to the Coviu base product, which allows individuals to own their own “room” (i.e. a fixed URL) in which to collaborate with others. A huge shout out goes to everyone in the Coviu team – a pretty amazing group of people – who have turned the app from an idea to reality. You are all awesome!

     With Coviu you can share and annotate:

    • images (show your mum photos of your last holidays, or get feedback on an architecture diagram from a customer),
    • pdf files (give a presentation remotely, or walk a customer through a contract),
    • whiteboards (brainstorm with a colleague), and
     • application windows (watch a YouTube video together, or work through your task list with your colleagues).

     All of these are regarded as “shared documents” in Coviu and thus have zooming and annotation features and are listed in a document tray for ease of navigation.

    This is just the beginning of how we want to make working together online more productive. Give it a go and let us know what you think.

    http://coviu.com/

  • FFmpeg C++ api decode h264 error

     29 May 2015, by arms

    I’m trying to use the C++ API of FFMpeg (version 20150526) under Windows using the prebuilt binaries to decode an h264 video file (*.ts).

    I’ve written a very simple code that automatically detects the required codec from the file itself (and it is AV_CODEC_ID_H264, as expected).

     Then I re-open the video file in read-binary mode, read a fixed-size buffer of bytes from it, and provide the read bytes to the decoder within a while-loop until the end of file. However, when I call the function avcodec_decode_video2, a large number of errors like the following occur:

     [h264 @ 008df020] top block unavailable for requested intra mode at 34 0

    [h264 @ 008df020] error while decoding MB 34 0, bytestream 3152

    [h264 @ 008df020] decode_slice_header error

     Sometimes the function avcodec_decode_video2 sets the value of got_picture_ptr to 1 and hence I expect to find a good frame. Instead, although all the computations are successful, when I view the decoded frame (using OpenCV only for visualization purposes) I see a gray image with some artifacts.

    If I employ the same code to decode an *.avi file it works fine.

     Reading the examples of FFmpeg I did not find a solution to my problem. I’ve also implemented the solution proposed in the similar question FFmpeg c++ H264 decoding error, but it did not work.

     Does anyone know where the error is?

     Thank you in advance for any reply!

     The code is the following [EDIT: code updated including the parser management]:

    #include <iostream>
    #include <iomanip>
    #include <string>
    #include <sstream>

     #include <opencv2/opencv.hpp>

    #ifdef __cplusplus
    extern "C"
    {
    #endif // __cplusplus
     #include <libavcodec/avcodec.h>
     #include <libavdevice/avdevice.h>
     #include <libavfilter/avfilter.h>
     #include <libavformat/avformat.h>
     #include <libavformat/avio.h>
     #include <libavutil/avutil.h>
     #include <libpostproc/postprocess.h>
     #include <libswresample/swresample.h>
     #include <libswscale/swscale.h>
    #ifdef __cplusplus
    } // end extern "C".
    #endif // __cplusplus

    #define INBUF_SIZE  4096

    void main()
    {
       AVCodec*            l_pCodec;
       AVCodecContext*     l_pAVCodecContext;
       SwsContext*         l_pSWSContext;
       AVFormatContext*    l_pAVFormatContext;
       AVFrame*            l_pAVFrame;
       AVFrame*            l_pAVFrameBGR;
       AVPacket            l_AVPacket;
       AVPacket            l_AVPacket_out;
       AVStream*           l_pStream;
       AVCodecParserContext*   l_pParser;
       FILE*               l_pFile_in;
       FILE*               l_pFile_out;
       std::string         l_sFile;
       int                 l_iResult;
       int                 l_iFrameCount;
       int                 l_iGotFrame;
       int                 l_iBufLength;
       int                 l_iParsedBytes;
       int                 l_iPts;
       int                 l_iDts;
       int                 l_iPos;
       int                 l_iSize;
       int                 l_iDecodedBytes;
       uint8_t             l_auiInBuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
       uint8_t*            l_pData;
       cv::Mat             l_cvmImage;

       l_pCodec = NULL;
       l_pAVCodecContext = NULL;
       l_pSWSContext = NULL;
       l_pAVFormatContext = NULL;
       l_pAVFrame = NULL;
       l_pAVFrameBGR = NULL;
       l_pParser = NULL;
       l_pStream = NULL;
       l_pFile_in = NULL;
       l_pFile_out = NULL;
       l_iPts = 0;
       l_iDts = 0;
       l_iPos = 0;
       l_pData = NULL;

       l_sFile = "myvideo.ts";

       avdevice_register_all();
       avfilter_register_all();
       avcodec_register_all();
       av_register_all();
       avformat_network_init();

       l_pAVFormatContext = avformat_alloc_context();

        l_iResult = avformat_open_input(&l_pAVFormatContext,
                                       l_sFile.c_str(),
                                       NULL,
                                       NULL);

       if (l_iResult >= 0)
       {
           l_iResult = avformat_find_stream_info(l_pAVFormatContext, NULL);

           if (l_iResult >= 0)
           {
                for (int i = 0; i < l_pAVFormatContext->nb_streams; i++)
               {
                   if (l_pAVFormatContext->streams[i]->codec->codec_type ==
                           AVMEDIA_TYPE_VIDEO)
                   {
                       l_pCodec = avcodec_find_decoder(
                                   l_pAVFormatContext->streams[i]->codec->codec_id);

                       l_pStream = l_pAVFormatContext->streams[i];
                   }
               }
           }
       }

        av_init_packet(&l_AVPacket);
        av_init_packet(&l_AVPacket_out);

       memset(l_auiInBuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);

       if (l_pCodec)
       {
           l_pAVCodecContext = avcodec_alloc_context3(l_pCodec);

           l_pParser = av_parser_init(l_pAVCodecContext->codec_id);

           if (l_pParser)
           {
               av_register_codec_parser(l_pParser->parser);
           }

           if (l_pAVCodecContext)
           {
                if (l_pCodec->capabilities & CODEC_CAP_TRUNCATED)
               {
                   l_pAVCodecContext->flags |= CODEC_FLAG_TRUNCATED;
               }

               l_iResult = avcodec_open2(l_pAVCodecContext, l_pCodec, NULL);

               if (l_iResult >= 0)
               {
                   l_pFile_in = fopen(l_sFile.c_str(), "rb");

                   if (l_pFile_in)
                   {
                       l_pAVFrame = av_frame_alloc();
                       l_pAVFrameBGR = av_frame_alloc();

                       if (l_pAVFrame)
                       {
                           l_iFrameCount = 0;

                           avcodec_get_frame_defaults(l_pAVFrame);

                           while (1)
                           {
                               l_iBufLength = fread(l_auiInBuf,
                                                    1,
                                                    INBUF_SIZE,
                                                    l_pFile_in);

                               if (l_iBufLength == 0)
                               {
                                   break;
                               }
                               else
                               {
                                   l_pData = l_auiInBuf;
                                   l_iSize = l_iBufLength;

                                   while (l_iSize > 0)
                                   {
                                       if (l_pParser)
                                       {
                                           l_iParsedBytes = av_parser_parse2(
                                                       l_pParser,
                                                       l_pAVCodecContext,
                                                        &l_AVPacket_out.data,
                                                        &l_AVPacket_out.size,
                                                       l_pData,
                                                       l_iSize,
                                                       l_AVPacket.pts,
                                                       l_AVPacket.dts,
                                                       AV_NOPTS_VALUE);

                                            if (l_iParsedBytes <= 0)
                                           {
                                               break;
                                           }

                                           l_AVPacket.pts = l_AVPacket.dts = AV_NOPTS_VALUE;
                                           l_AVPacket.pos = -1;
                                       }
                                       else
                                       {
                                           l_AVPacket_out.data = l_pData;
                                           l_AVPacket_out.size = l_iSize;
                                       }

                                       l_iDecodedBytes =
                                               avcodec_decode_video2(
                                                   l_pAVCodecContext,
                                                   l_pAVFrame,
                                                    &l_iGotFrame,
                                                    &l_AVPacket_out);

                                       if (l_iDecodedBytes >= 0)
                                       {
                                           if (l_iGotFrame)
                                           {
                                               l_pSWSContext = sws_getContext(
                                                           l_pAVCodecContext->width,
                                                           l_pAVCodecContext->height,
                                                           l_pAVCodecContext->pix_fmt,
                                                           l_pAVCodecContext->width,
                                                           l_pAVCodecContext->height,
                                                           AV_PIX_FMT_BGR24,
                                                           SWS_BICUBIC,
                                                           NULL,
                                                           NULL,
                                                           NULL);

                                               if (l_pSWSContext)
                                               {
                                                   l_iResult = avpicture_alloc(
                                                                reinterpret_cast<AVPicture*>(l_pAVFrameBGR),
                                                               AV_PIX_FMT_BGR24,
                                                               l_pAVFrame->width,
                                                               l_pAVFrame->height);

                                                   l_iResult = sws_scale(
                                                               l_pSWSContext,
                                                               l_pAVFrame->data,
                                                               l_pAVFrame->linesize,
                                                               0,
                                                               l_pAVCodecContext->height,
                                                               l_pAVFrameBGR->data,
                                                               l_pAVFrameBGR->linesize);

                                                   if (l_iResult > 0)
                                                   {
                                                       l_cvmImage = cv::Mat(
                                                                   l_pAVFrame->height,
                                                                   l_pAVFrame->width,
                                                                   CV_8UC3,
                                                                   l_pAVFrameBGR->data[0],
                                                               l_pAVFrameBGR->linesize[0]);

                                                       if (l_cvmImage.empty() == false)
                                                       {
                                                           cv::imshow("image", l_cvmImage);
                                                           cv::waitKey(10);
                                                       }
                                                   }
                                               }

                                               l_iFrameCount++;
                                           }
                                       }
                                       else
                                       {
                                           break;
                                       }

                                       l_pData += l_iParsedBytes;
                                       l_iSize -= l_iParsedBytes;
                                   }
                               }

                           } // end while(1).
                       }

                       fclose(l_pFile_in);
                   }
               }
           }
       }
    }

     EDIT: The following is the final code that solves my problem, thanks to Ronald’s suggestions. The key change is to let libavformat demux the .ts file with av_read_frame() and feed complete packets to the decoder, instead of reading raw fixed-size chunks from the file and parsing them by hand.

    #include <iostream>
    #include <iomanip>
    #include <string>
    #include <sstream>

     #include <opencv2/opencv.hpp>

    #ifdef __cplusplus
    extern "C"
    {
    #endif // __cplusplus
     #include <libavcodec/avcodec.h>
     #include <libavdevice/avdevice.h>
     #include <libavfilter/avfilter.h>
     #include <libavformat/avformat.h>
     #include <libavformat/avio.h>
     #include <libavutil/avutil.h>
     #include <libpostproc/postprocess.h>
     #include <libswresample/swresample.h>
     #include <libswscale/swscale.h>
    #ifdef __cplusplus
    } // end extern "C".
    #endif // __cplusplus

    void main()
    {
       AVCodec*            l_pCodec;
       AVCodecContext*     l_pAVCodecContext;
       SwsContext*         l_pSWSContext;
       AVFormatContext*    l_pAVFormatContext;
       AVFrame*            l_pAVFrame;
       AVFrame*            l_pAVFrameBGR;
       AVPacket            l_AVPacket;
       std::string         l_sFile;
       uint8_t*            l_puiBuffer;
       int                 l_iResult;
       int                 l_iFrameCount;
       int                 l_iGotFrame;
       int                 l_iDecodedBytes;
       int                 l_iVideoStreamIdx;
       int                 l_iNumBytes;
       cv::Mat             l_cvmImage;

       l_pCodec = NULL;
       l_pAVCodecContext = NULL;
       l_pSWSContext = NULL;
       l_pAVFormatContext = NULL;
       l_pAVFrame = NULL;
       l_pAVFrameBGR = NULL;
       l_puiBuffer = NULL;

       l_sFile = "myvideo.ts";

       av_register_all();

        l_iResult = avformat_open_input(&l_pAVFormatContext,
                                       l_sFile.c_str(),
                                       NULL,
                                       NULL);

       if (l_iResult >= 0)
       {
           l_iResult = avformat_find_stream_info(l_pAVFormatContext, NULL);

           if (l_iResult >= 0)
           {
                for (int i = 0; i < l_pAVFormatContext->nb_streams; i++)
               {
                   if (l_pAVFormatContext->streams[i]->codec->codec_type ==
                           AVMEDIA_TYPE_VIDEO)
                   {
                       l_iVideoStreamIdx = i;

                       l_pAVCodecContext =
                               l_pAVFormatContext->streams[l_iVideoStreamIdx]->codec;

                       if (l_pAVCodecContext)
                       {
                           l_pCodec = avcodec_find_decoder(l_pAVCodecContext->codec_id);
                       }

                       break;
                   }
               }
           }
       }

        if (l_pCodec && l_pAVCodecContext)
       {
           l_iResult = avcodec_open2(l_pAVCodecContext, l_pCodec, NULL);

           if (l_iResult >= 0)
           {
               l_pAVFrame = av_frame_alloc();
               l_pAVFrameBGR = av_frame_alloc();

               l_iNumBytes = avpicture_get_size(PIX_FMT_BGR24,
                                                l_pAVCodecContext->width,
                                                l_pAVCodecContext->height);

               l_puiBuffer = (uint8_t *)av_malloc(l_iNumBytes*sizeof(uint8_t));

               avpicture_fill((AVPicture *)l_pAVFrameBGR,
                              l_puiBuffer,
                               PIX_FMT_BGR24,
                              l_pAVCodecContext->width,
                              l_pAVCodecContext->height);

               l_pSWSContext = sws_getContext(
                           l_pAVCodecContext->width,
                           l_pAVCodecContext->height,
                           l_pAVCodecContext->pix_fmt,
                           l_pAVCodecContext->width,
                           l_pAVCodecContext->height,
                           AV_PIX_FMT_BGR24,
                           SWS_BICUBIC,
                           NULL,
                           NULL,
                           NULL);

                while (av_read_frame(l_pAVFormatContext, &l_AVPacket) >= 0)
               {
                   if (l_AVPacket.stream_index == l_iVideoStreamIdx)
                   {
                       l_iDecodedBytes = avcodec_decode_video2(
                                   l_pAVCodecContext,
                                   l_pAVFrame,
                                    &l_iGotFrame,
                                    &l_AVPacket);

                       if (l_iGotFrame)
                       {
                           if (l_pSWSContext)
                           {
                               l_iResult = sws_scale(
                                           l_pSWSContext,
                                           l_pAVFrame->data,
                                           l_pAVFrame->linesize,
                                           0,
                                           l_pAVCodecContext->height,
                                           l_pAVFrameBGR->data,
                                           l_pAVFrameBGR->linesize);

                               if (l_iResult > 0)
                               {
                                   l_cvmImage = cv::Mat(
                                               l_pAVFrame->height,
                                               l_pAVFrame->width,
                                               CV_8UC3,
                                               l_pAVFrameBGR->data[0],
                                           l_pAVFrameBGR->linesize[0]);

                                   if (l_cvmImage.empty() == false)
                                   {
                                       cv::imshow("image", l_cvmImage);
                                       cv::waitKey(1);
                                   }
                               }
                           }

                           l_iFrameCount++;
                       }
                   }
               }
           }
       }
    }