
Other articles (64)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, all software dependencies must be installed manually on the server.
    If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. Compare the two following images.
    To use it, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (8191)

  • My python script using ffmpeg captures video content, but the captured content freezes in the middle and jumps frames

    11 November 2022, by Supriyo Mitra

    I am new to ffmpeg and am trying to use it through a Python script. The Python function that captures the video content is given below. The problem I am facing is that the captured content freezes at (uneven) intervals and skips a few frames each time this happens.

    def capturelivestream(self, argslist):
        streamurl, outnum, feedid, outfilename = argslist[0], argslist[1], argslist[2], argslist[3]
        try:
            info = ffmpeg.probe(streamurl, select_streams='a')
            streams = info.get('streams', [])
        except:
            streams = []
        if len(streams) == 0:
            print('There are no streams available')
            stream = {}
        else:
            stream = streams[0]
            for stream in streams:
                if stream.get('codec_type') != 'audio':
                    continue
                else:
                    break
        if 'channels' in stream.keys():
            channels = stream['channels']
            samplerate = float(stream['sample_rate'])
        else:
            channels = None
            samplerate = 44100
        process = ffmpeg.input(streamurl).output('pipe:', pix_fmt='yuv420p', format='avi', vcodec='libx264', acodec='pcm_s16le', ac=channels, ar=samplerate, vsync=0, loglevel='quiet').run_async(pipe_stdout=True)
        fpath = os.path.dirname(outfilename)
        fnamefext = os.path.basename(outfilename)
        fname = fnamefext.split(".")[0]
        read_size = 320 * 180 * 3 # This is width * height * 3
        lastcaptured = time.time()
        maxtries = 12
        ntries = 0
        while True:
            if process:
                inbytes = process.stdout.read(read_size)
                if inbytes is not None and inbytes.__len__() > 0:
                    try:
                        frame = (np.frombuffer(inbytes, np.uint8).reshape([180, 320, 3]))
                    except:
                        print("Failed to reshape frame: %s"%sys.exc_info()[1].__str__())
                        continue # This could be an issue if there is a continuous supply of frames that cannot be reshaped
                    self.processq.put([outnum, frame])
                    lastcaptured = time.time()
                    ntries = 0
                else:
                    if self.DEBUG:
                        print("Could not read frame for feed ID %s"%feedid)
                    t = time.time()
                    if t - lastcaptured > 30: # If the frames can't be read for more than 30 seconds...
                        print("Reopening feed identified by feed ID %s"%feedid)
                        process = ffmpeg.input(streamurl).output('pipe:', pix_fmt='yuv420p', format='avi', vcodec='libx264', acodec='pcm_s16le', ac=channels, ar=samplerate, vsync=0, loglevel='quiet').run_async(pipe_stdout=True)
                        ntries += 1
                    if ntries > maxtries:
                        if self.DEBUG:
                            print("Stream %s is no longer available."%streamurl)
                        # DB statements removed here
                        break # Break out of infinite loop.
                    continue
        return None
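    The fixed-size read in the loop above assumes each chunk from the pipe is exactly one raw RGB frame. Below is a minimal, self-contained sketch of that arithmetic, using a synthetic buffer in place of process.stdout; note that this byte layout corresponds to rawvideo/rgb24 pipe output rather than a muxed AVI stream.

```python
import numpy as np

# One raw RGB frame for a 320x180 stream is width * height * 3 bytes.
# A zero-filled buffer stands in here for process.stdout.read(read_size).
WIDTH, HEIGHT, BYTES_PER_PIXEL = 320, 180, 3
read_size = WIDTH * HEIGHT * BYTES_PER_PIXEL  # 172800 bytes per frame

inbytes = bytes(read_size)  # placeholder for one chunk read from the pipe
frame = np.frombuffer(inbytes, np.uint8).reshape([HEIGHT, WIDTH, 3])
print(frame.shape)  # (180, 320, 3)
```

    The reshape only succeeds when the chunk length exactly matches one frame; any container or codec framing in the byte stream breaks this alignment.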

    The function that writes the frames out is as follows:

    def framewriter(self, outlist):
        isempty = False
        endofrun = False
        while True:
            frame = None
            try:
                args = self.processq.get()
            except: # Sometimes, the program crashes at this point due to lack of memory...
                print("Error in framewriter while reading from queue: %s"%sys.exc_info()[1].__str__())
                continue
            outnum = args[0]
            frame = args[1]
            if outlist.__len__() > outnum:
                out = outlist[outnum]
            else:
                if self.DEBUG == 2:
                    print("Could not get writer %s"%outnum)
                continue
            if frame is not None and out is not None:
                out.write(frame)
                isempty = False
                endofrun = False
            else:
                if self.processq.empty() and not isempty:
                    isempty = True
                elif self.processq.empty() and isempty: # processq queue is empty now and was empty last time
                    print("processq is empty")
                    endofrun = True
                elif endofrun and isempty:
                    print("Could not find any frames to process. Quitting")
                    break
        print("Done writing feeds. Quitting.")
        return None
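    The shutdown logic above relies on polling processq.empty(), which can misfire when producers are merely slow rather than finished. A common alternative pattern (a sketch under assumed names, not the author's code) is to push an explicit sentinel through the queue when producers are done:

```python
import queue
import threading

SENTINEL = None  # producers put this on the queue when they are finished

def framewriter(frameq):
    """Drain (outnum, frame) pairs until the sentinel arrives."""
    written = []
    while True:
        item = frameq.get()
        if item is SENTINEL:
            break  # deterministic shutdown, no empty() polling needed
        outnum, frame = item
        written.append((outnum, frame))
    return written

frameq = queue.Queue()
results = []
t = threading.Thread(target=lambda: results.extend(framewriter(frameq)))
t.start()
for i in range(3):
    frameq.put((0, b"frame-%d" % i))  # dummy frames from a producer
frameq.put(SENTINEL)
t.join()
print(len(results))  # 3
```

    With a sentinel, the consumer never has to guess whether an empty queue means "no more frames" or "frames are still on the way".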

    The scenario is as follows: there are multiple video streams from a certain website at any time during the day, and the program containing these functions has to capture them as they are streamed. The memory available to this program is 6GB, and there can be up to 3 streams running at any instant. Below is the relevant main section of the script that uses the functions given above.

itftennis = VideoBot(siteurl)
outlist = []
t = Thread(target=itftennis.framewriter, args=(outlist,))
t.daemon = True
t.start()
tp = Thread(target=handleprocesstermination, args=())
tp.daemon = True
tp.start()
# Create a database connection and an associated cursor object. We will handle database operations from the main thread only.
# DB statements removed from here...
feedidlist = []
vidsdict = {}
streampattern = re.compile("\?vid=(\d+)$")
while True:
    streampageurls = itftennis.checkforlivestream()
    if itftennis.DEBUG:
        print("Checking for new urls...")
        print(streampageurls.__len__())
    if streampageurls.__len__() > 0:
        argslist = []
        newurlscount = 0
        for streampageurl in streampageurls:
            newstream = False
            sps = re.search(streampattern, streampageurl)
            if sps:
                streamnum = sps.groups()[0]
                if streamnum not in vidsdict.keys(): # Check if this stream has already been processed.
                    vidsdict[streamnum] = 1
                    newstream = True
                else:
                    continue
            else:
                continue
            print("Detected new live stream... Getting it.")
            streamurl = itftennis.getstreamurlfrompage(streampageurl)
            print("Adding %s to list..."%streamurl)
            if streamurl is not None:
                # Now, get feed metadata...
                metadata = itftennis.getfeedmetadata(streampageurl)
                if metadata is None:
                    continue
                # lines to get matchescounter omitted here...
                if matchescounter >= itftennis.__class__.MAX_CONCURRENT_MATCHES:
                    break
                if newstream is True:
                    newurlscount += 1
                outfilename = time.strftime("./videodump/" + "%Y%m%d%H%M%S",time.localtime())+".avi"
                out = open(outfilename, "wb")
                outlist.append(out) # Save it in the list and take down the number for usage in framewriter
                outnum = outlist.__len__() - 1
                # Save metadata in DB
                # lines omitted here....
                argslist.append([streamurl, outnum, feedid, outfilename])
            else:
                print("Couldn't get the stream url from page")
        if newurlscount > 0:
            for args in argslist:
                try:
                    p = Process(target=itftennis.capturelivestream, args=(args,))
                    p.start()
                    processeslist.append(p)
                    if itftennis.DEBUG:
                        print("Started process with args %s"%args)
                except:
                    print("Could not start process due to error: %s"%sys.exc_info()[1].__str__())
            print("Created processes, continuing now...")
            continue
    time.sleep(itftennis.livestreamcheckinterval)
t.join()
tp.join()
for out in outlist:
    out.close()
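    The deduplication step in the main loop, which keys already-seen streams by the numeric vid in the page URL, can be exercised in isolation. The URLs below are hypothetical stand-ins for the site's stream page URLs:

```python
import re

# Same pattern as in the main loop: capture the numeric vid at the end of the URL.
streampattern = re.compile(r"\?vid=(\d+)$")
vidsdict = {}
new_urls = []
for url in ["https://example.test/watch?vid=101",
            "https://example.test/watch?vid=102",
            "https://example.test/watch?vid=101"]:  # third entry is a duplicate
    m = re.search(streampattern, url)
    if not m:
        continue  # URL without a vid parameter: skip, as the main loop does
    streamnum = m.groups()[0]
    if streamnum not in vidsdict:
        vidsdict[streamnum] = 1   # mark this stream as seen
        new_urls.append(url)      # only genuinely new streams get captured
print(new_urls)
```

    Only the first occurrence of each vid survives, which is what keeps the main loop from spawning a second capture process for the same stream.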

    Please accept my apologies for swamping you with this amount of code. I wanted to provide maximum context for my problem. I have removed the absolutely irrelevant DB statements, but apart from that, this is what the code looks like.

    If you need to know anything else about the code, please let me know. What I would really like to know is whether I am using the ffmpeg stream-capturing statements correctly. The stream contains both video and audio components and I need to capture both. Hence I am making the following call:

    process = ffmpeg.input(streamurl).output('pipe:', pix_fmt='yuv420p', format='avi', vcodec='libx264', acodec='pcm_s16le', ac=channels, ar=samplerate, vsync=0, loglevel='quiet').run_async(pipe_stdout=True)

    Is this how it is supposed to be done? More importantly, why do I keep getting freezes in the output video? I have monitored the streams manually, and they are quite consistent. Frame losses do not happen when I view them on the website (at least not noticeably). Also, I have run the 'top' command on the host running the program. The CPU usage sometimes goes over 100% (which, I came to understand from some answers on SO, is to be expected when running ffmpeg), but the memory usage usually remains below 30%. So what is the issue here, and what do I need to do to fix this problem (other than learn more about how ffmpeg works)?

    Thanks

    I have tried various ffmpeg options (while looking for similar issues others have encountered). I also tried running ffmpeg from the command line for a limited period of time (11 minutes), using the same options as in the Python code, and the captured content came out quite well: no freezes, no jumps in frames. But I need to use it in an automated way, and there could be multiple streams at any time. Also, when I try playing the captured content using ffplay, I sometimes get the message "co located POCs unavailable" when these freezes happen. What does it mean?

  • How Media Analytics for Piwik gives you the insights you need to measure how effective your video and audio marketing is – Part 2

    2 February 2017, by InnoCraft — Community

    In Part 1 we have covered some of the Media Analytics features and explained why you cannot afford to not measure the media usage on your website. Chances are, you are wasting or losing money and time by not making the most out of your marketing strategy this very second. In this part, we continue showing you some more insights you can expect to get from Media Analytics and how nicely it is integrated into Piwik.

    Video, Audio and Media Player reports

    Media Analytics adds several new reports around videos, audios and media players.

    Metrics

    The reports all give you similar insights and features, so we will mainly focus on the “Video Titles” report. When you open such a report for the first time, you will see something like the following, with these metrics:

    • “Impressions”, the number of times a visitor has viewed a page where this media was included.
    • “Plays”, the number of times a visitor watched or listened to this media.
    • “Play rate”, the percentage of visitors that watched or listened to a media after they have visited a page where this media was included.
    • “Finishes”, the percentage of visitors who played a media and finished it.
    • “Avg. time spent”, the average amount of time a visitor spent watching or listening to this media.
    • “Avg. media length”, the average length of a video or audio media file. This number may vary, for example if the media is a stream.
    • “Avg. completion”, the average percentage of the video that visitors watched.
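    To make the rate metrics concrete, here is a small sketch of how two of them relate to the raw counts. The numbers and variable names are illustrative, not Piwik API fields:

```python
# Illustrative arithmetic for two of the metrics above; the counts are made up.
impressions = 400   # page views that contained the media
plays = 100         # times the media was actually played
finishes = 25       # plays that reached the end of the media

play_rate = 100.0 * plays / impressions    # share of impressions that became plays
finish_rate = 100.0 * finishes / plays     # share of plays that finished
print(play_rate, finish_rate)  # 25.0 25.0
```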

    If you are not sure what a certain metric means, simply hover over the metric title in the UI and you will get a detailed explanation. By changing the visualization to the “All Columns Table” at the bottom of the report, you get to see even more metrics, like “Plays by unique visitors”, “Impressions by unique visitors”, “Finish rate”, “Avg. time to play (aka hesitation time)” and “Fullscreen rate”, and we are always adding more metrics.

    These metrics are available for the following reports:

    • “Video / Audio Titles” shows you all metrics aggregated by video or audio title
    • “Video / Audio Resource URLs” shows you all metrics aggregated by the video or audio resource URL, for example “https://piwik.org/media.mp4”.
    • “Video / Audio Resource URLs grouped” removes some information from the URLs like subdomain, file extensions and other information to get aggregated metrics when you provide the same media in different formats.
    • “Videos per hour in website’s timezone” lets you find out how your media content is consumed depending on the hour of the day. You might realize that your media is consumed very differently in the morning vs at night.
    • “Video Resolutions” lets you discover how your video is consumed depending on the resolution.
    • “Media players” report is useful if you use different media players on your websites or apps and want to see how engagement with your media compares by media player.

    Row evolution

    At InnoCraft, we understand that static numbers alone are not so useful. When you see, for example, that yesterday 20 visitors played a certain media, would you know whether this is good or bad? This is why we always give you the possibility to see the data in relation to data recorded in the past. To see how a specific media performs over time, simply hover over a media title or media resource URL and click on the “Row Evolution” icon.

    Now you can see whether actually more or less visitors played your chosen video for the selected period. Simply click on any metric name and the chosen metrics will be plotted in the big evolution graph.

    This feature is similar to the Media Overall evolution graph introduced in Part 1, but shows you a detailed evolution for an individual media title or resource.

    Media details

    Now that you know some of the most important media metrics, you might want to look a bit deeper into user behaviour. For example, we mentioned the “Avg. time spent on media” metric earlier. Such an average doesn’t tell you whether most visitors spent about the same time watching the video, or whether many visitors watched it for only a few seconds and a few watched it for very long.

    One of the ways to get this insight is again by hovering over any media title or resource URL and clicking on the “Media details” icon. It will open a new popup showing you a new set of reports like these:

    The “Time spent watching” and “How far visitors reached in the media” bar charts show you on the X-axis how much time each visitor spent watching a video and how far into the video they got. On the Y-axis you see the number of visitors. This lets you discover, for example, whether your users often jump to the middle or end of the video, and which parts of your video were seen most often.

    The “How often the media was watched in a certain hour” and “In which resolutions the media was watched” reports are similar to those introduced in Part 1 of this blog post. However, this time, instead of showing aggregated video or audio content data, they display data for a specific media title or media resource URL.

    Segmented audience log

    In Part 1 we already introduced the Audience Log and explained that it is useful for better understanding user behaviour. Just a quick recap: the Audience Log shows you, chronologically, every action a specific visitor performed on your website: which pages they viewed, how they interacted with your media, when they clicked somewhere, and much more.

    By hovering over a media title or a media resource and then selecting “Segmented audience log” you get to see the same log, but this time showing only visitors that interacted with the selected media. This is useful, for example, when you notice an unusual value for a metric and want to better understand what is behind it.

    Applying segments

    Media Analytics lets you apply any Piwik segment to the media reports, allowing you to slice your visitors or personas and multiply the value you get out of Media Analytics. For example, you may want to apply a segment and analyze the media usage of visitors that visited your website or mobile app for the first time vs. recurring visitors. Sometimes it may be interesting to see how visitors that converted a specific goal, or purchased something, consume your media; the possibilities are endless. We really recommend taking advantage of segments to understand your different target groups even better.

    The plugin also adds a lot of new segments to your Piwik, letting you segment any Piwik report by visitors that have viewed or interacted with your media. For example, you could go to the “Visitors => Devices” report and apply a media segment to see which devices were used most to view your media. You can also combine segments, for example to see how often your goals were converted when a visitor viewed media for longer than 10 seconds, after waiting at least 20 seconds before playing it, and played at least 3 videos during their visit.

    Widgets, Scheduled Reports, and more.

    This is not where the fun ends. Media Analytics defines more than 15 new widgets that you can add to your dashboard or export into a third-party website. You can set up Scheduled Reports to receive the media reports automatically via email or SMS, or download a report to share with your colleagues. It also works very well with Custom Alerts, and you can view the media reports in the Piwik Mobile app for Android and iOS. Via the HTTP Reporting API you can fetch any report in various formats. The plugin is so nicely integrated into Piwik that we would need several more blog posts to fully cover all the ways Media Analytics advances your Piwik experience and how you can dig into the data to increase your conversions and sales.
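    As a rough illustration of the HTTP Reporting API mentioned above, a request is an index.php URL with module/method/format query parameters. The host, site ID and the exact MediaAnalytics method name below are assumptions for illustration; check the Media Analytics User Guide and API reference for the real method names:

```python
from urllib.parse import urlencode

# Sketch of a Piwik HTTP Reporting API request URL.
# "piwik.example" and "MediaAnalytics.getVideoTitles" are assumed names.
params = {
    "module": "API",
    "method": "MediaAnalytics.getVideoTitles",  # assumed method name
    "idSite": 1,                                # assumed site ID
    "period": "day",
    "date": "yesterday",
    "format": "JSON",
    "token_auth": "anonymous",
}
url = "https://piwik.example/index.php?" + urlencode(params)
print(url)
```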

    How to get Media Analytics and related features

    You can get Media Analytics on the Piwik Marketplace. If you want to learn more about this feature, you might be also interested in the Media Analytics User Guide and the Media Analytics FAQ.

  • FFmpeg to get USB camera video and push RTSP stream in C++

    8 October 2022, by CrazyJack123

    What I want to do is get USB camera video and push an RTSP stream via FFmpeg (not via the command line). I've tried a few things and have successfully played the RTSP stream through VLC media player using C++.

    The problem now is that the RTSP video received through the VLC media player has high latency and is relatively choppy, and it freezes after a period of time. This does not happen with the ffmpeg command (although there is a little delay, there is no stuttering or freezing).

    The ffmpeg command and the C++ code are posted below. Can you help me locate the problem? Any help is greatly appreciated! Thanks in advance!

    By the way, the development environment is as follows: Windows 10, Qt 5.9.0 msvc2013_64, ffmpeg-4.4.1-full_build-shared.

    The ffmpeg command is as follows:

    .\ffmpeg.exe -f dshow -rtbufsize 100M -i video="USB Camera" -vcodec libx264 -preset:v ultrafast -tune:v zerolatency -rtsp_transport udp -f rtsp rtsp://127.0.0.1/test

    The C++ code is as follows; here is the .h:

#ifndef CAMERATHREADA_H
#define CAMERATHREADA_H

#include <exception>
#include <QImage>
#include <QDebug>
#include <QCameraInfo>
#include <QThread>
#include <QObject>
using namespace std;

extern "C"
{
    #include "libavformat/avformat.h"
    #include "libavutil/hwcontext.h"
    #include "libavutil/opt.h"
    #include "libavutil/time.h"
    #include "libavutil/frame.h"
    #include "libavutil/pixdesc.h"
    #include "libavutil/avassert.h"
    #include "libavutil/imgutils.h"
    #include "libavutil/ffversion.h"
    #include "libavcodec/avcodec.h"
    #include "libswscale/swscale.h"
    #include "libavdevice/avdevice.h"
    #include "libavfilter/avfilter.h"
    #include "libavutil/pixfmt.h"
    #include "libavutil/mathematics.h"
}

#define FMT_PIC_SHOW AV_PIX_FMT_RGB24
#define FMT_FRM_PUSH AV_PIX_FMT_YUV420P

class CameraThreadA : public QThread
{
    Q_OBJECT
public:
    CameraThreadA();

signals:
    void receiveImage(QImage img);

private:
    // encode to h264 and push
    int pushVideoindex;
    AVCodecContext *pushCodecCtx = nullptr;
    AVStream *pushStream;
    AVFormatContext *pushFmtCtx = nullptr;
    AVPacket *pushPkt = nullptr;
    AVCodec *pushCodec = nullptr;
    uint8_t *pushBuffer;
    struct SwsContext *swCtxRGB2YUV = nullptr;
    AVFrame *yuvFrame = av_frame_alloc();

    // receive from camera
    AVFormatContext *rcvFmtCtx = nullptr;
    AVInputFormat *rcvInFmt = nullptr;
    int nVideoIndex = -1;
    AVCodecParameters *rcvCodecPara = nullptr;
    AVCodecContext *rcvCodecCtx = nullptr;
    AVCodec *rcvCodec = nullptr;
    AVFrame *cameraFrame = av_frame_alloc();
    AVFrame *rgbFrame = av_frame_alloc();
    AVPacket *rcvPkt = nullptr;
    uint8_t *showBuffer;
    struct SwsContext *rcvSwsCtx = nullptr;

    // QThread interface
protected:
    void run();
};

#endif // CAMERATHREADA_H


    here is the .cpp:


#include "camerathreada.h"

CameraThreadA::CameraThreadA()
{
    // init camera to rgb
    avdevice_register_all();
    if(nullptr == (rcvFmtCtx = avformat_alloc_context()))
    {
        qDebug() << "create AVFormatContext failed." << endl;
    }
    if(nullptr == (rcvInFmt = const_cast<AVInputFormat*>(av_find_input_format("dshow"))))
    {
        qDebug() << "find AVInputFormat failed." << endl;
    }
    QString urlString = QString("video=USB Camera");
    if(avformat_open_input(&rcvFmtCtx, urlString.toStdString().c_str(), rcvInFmt, NULL) < 0)
    {
        qDebug() << "open camera failed." << endl;
    }
    if(avformat_find_stream_info(rcvFmtCtx, NULL) < 0){
        qDebug() << "cannot find stream info." << endl;
    }
    for(size_t i = 0; i < rcvFmtCtx->nb_streams; i++){
        if(rcvFmtCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO){
            nVideoIndex = i;
        }
    }
    if(nVideoIndex == -1){
        qDebug() << "cannot find video stream." << endl;
    }
    rcvCodecPara = rcvFmtCtx->streams[nVideoIndex]->codecpar;
    if(nullptr == (rcvCodec = const_cast<AVCodec*>(avcodec_find_decoder(rcvCodecPara->codec_id))))
    {
        qDebug() << "cannot find codec." << endl;
    }
    if(nullptr == (rcvCodecCtx = avcodec_alloc_context3(rcvCodec))){
        qDebug() << "cannot alloc codecContext." << endl;
    }
    if(avcodec_parameters_to_context(rcvCodecCtx, rcvCodecPara) < 0){
        qDebug() << "cannot initialize codecContext." << endl;
    }
    if(avcodec_open2(rcvCodecCtx, rcvCodec, NULL) < 0){
        qDebug() << "cannot open codec." << endl;
        return;
    }
    rcvSwsCtx = sws_getContext(rcvCodecCtx->width, rcvCodecCtx->height, rcvCodecCtx->pix_fmt,
                               rcvCodecCtx->width, rcvCodecCtx->height, FMT_PIC_SHOW,
                               SWS_BICUBIC, NULL, NULL, NULL);
    int numBytes = av_image_get_buffer_size(FMT_PIC_SHOW, rcvCodecCtx->width, rcvCodecCtx->height, 1);
    showBuffer = (unsigned char*)av_malloc(static_cast<unsigned long>(numBytes) * sizeof(unsigned char));
    if(av_image_fill_arrays(rgbFrame->data, rgbFrame->linesize, showBuffer,
                            FMT_PIC_SHOW, rcvCodecCtx->width, rcvCodecCtx->height, 1) < 0)
    {
        qDebug() << "av_image_fill_arrays failed." << endl;
    }
    rcvPkt = av_packet_alloc();
    av_new_packet(rcvPkt, rcvCodecCtx->width * rcvCodecCtx->height);

    // init rgb to yuv
    swCtxRGB2YUV = sws_getContext(rcvCodecCtx->width, rcvCodecCtx->height, FMT_PIC_SHOW,
                                  rcvCodecCtx->width, rcvCodecCtx->height, FMT_FRM_PUSH,
                                  SWS_BICUBIC, NULL, NULL, NULL);

    yuvFrame->width = rcvCodecCtx->width;
    yuvFrame->height = rcvCodecCtx->height;
    yuvFrame->format = FMT_FRM_PUSH;
    pushBuffer = (uint8_t *)av_malloc(yuvFrame->width * yuvFrame->height * 1.5);
    if (av_image_fill_arrays(yuvFrame->data, yuvFrame->linesize, pushBuffer,
                             FMT_FRM_PUSH, yuvFrame->width, yuvFrame->height, 1) < 0){
        qDebug() << "Failed: av_image_fill_arrays\n";
    }

    // init h264 codec
    pushCodec = const_cast<AVCodec*>(avcodec_find_encoder(AV_CODEC_ID_H264));
    if (!pushCodec){
        qDebug() << ("Fail: avcodec_find_encoder\n");
    }
    pushCodecCtx = avcodec_alloc_context3(pushCodec);
    if (!pushCodecCtx){
        qDebug() << ("Fail: avcodec_alloc_context3\n");
    }
    pushCodecCtx->pix_fmt = FMT_FRM_PUSH;
    pushCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
    pushCodecCtx->width = rcvCodecCtx->width;
    pushCodecCtx->height = rcvCodecCtx->height;
    pushCodecCtx->channels = 3;
    pushCodecCtx->time_base = { 1, 25 };
    pushCodecCtx->gop_size = 5;
    pushCodecCtx->max_b_frames = 0;
    pushCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    av_opt_set(pushCodecCtx->priv_data, "preset", "ultrafast", 0);
    av_opt_set(pushCodecCtx->priv_data, "tune", "zerolatency", 0);
    if (avcodec_open2(pushCodecCtx, pushCodec, NULL) < 0){
        qDebug() << ("Fail: avcodec_open2\n");
    }
    pushPkt = av_packet_alloc();

    // init rtsp pusher
    QString des = QString("rtsp://127.0.0.1/test");
    if (avformat_alloc_output_context2(&pushFmtCtx, NULL, "rtsp", des.toStdString().c_str()) < 0){
        qDebug() << ("Fail: avformat_alloc_output_context2\n");
    }
    av_opt_set(pushFmtCtx->priv_data, "rtsp_transport", "udp", 0);
    pushFmtCtx->max_interleave_delta = 1000000;
    pushStream = avformat_new_stream(pushFmtCtx, pushCodec);
    if (!pushStream){
        qDebug() << ("Fail: avformat_new_stream\n");
    }
    pushStream->time_base = { 1, 25 };
    pushVideoindex = pushStream->id = pushFmtCtx->nb_streams - 1;
    pushCodecCtx->codec_tag = 0;
    if (pushFmtCtx->oformat->flags & AVFMT_GLOBALHEADER)
    {
        pushCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }
    int ret = 0;
    ret = avcodec_parameters_from_context(pushStream->codecpar, pushCodecCtx);
    if (ret < 0)
    {
        qDebug() << ("Failed to copy codec context to out_stream codecpar context\n");
    }
    //av_dump_format(pushFmtCtx, 0, pushFmtCtx->filename, 1);
    if (!(pushFmtCtx->oformat->flags & AVFMT_NOFILE)) {
        if (avio_open(&pushFmtCtx->pb, "rtsp://127.0.0.1/test", AVIO_FLAG_WRITE) < 0) {
            qDebug() << ("Fail: avio_open('rtsp://127.0.0.1/test')\n");
        }
    }
    avformat_write_header(pushFmtCtx, NULL);
}

void CameraThreadA::run()
{
    int testCount = 0;
    int ret;
    while(av_read_frame(rcvFmtCtx, rcvPkt) >= 0){
        if(rcvPkt->stream_index == nVideoIndex){
            if(avcodec_send_packet(rcvCodecCtx, rcvPkt) >= 0){
                while((ret = avcodec_receive_frame(rcvCodecCtx, cameraFrame)) >= 0){
                    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                        return;
                    else if (ret < 0) {
                        return;
                    }

                    // rcv: convert camera frame to RGB for display
                    sws_scale(rcvSwsCtx,
                              cameraFrame->data, cameraFrame->linesize,
                              0, rcvCodecCtx->height,
                              rgbFrame->data, rgbFrame->linesize);
                    QImage img(showBuffer, rcvCodecCtx->width, rcvCodecCtx->height, QImage::Format_RGB888);
                    emit receiveImage(img);

                    // rgb 2 YUV
                    if (sws_scale(swCtxRGB2YUV,
                                  rgbFrame->data, rgbFrame->linesize,
                                  0, rcvCodecCtx->height,
                                  yuvFrame->data, yuvFrame->linesize) < 0)
                    {
                        qDebug() << "fail : rgb 2 YUV\n";
                    }
                    yuvFrame->pts = av_gettime();

                    // encode h264
                    ret = avcodec_send_frame(pushCodecCtx, yuvFrame);
                    if (ret < 0){
                        qDebug() << "send frame fail\n" << ret;
                    }
                    while (ret >= 0){
                        ret = avcodec_receive_packet(pushCodecCtx, pushPkt);
                        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF){
                            qDebug() << ("ret == AVERROR(EAGAIN) || ret == AVERROR_EOF\n");
                            break;
                        }else if (ret < 0){
                            qDebug() << ("Error during encoding\n");
                            break;
                        }else{
                            pushPkt->stream_index = pushVideoindex;
                            if (av_interleaved_write_frame(pushFmtCtx, pushPkt) < 0) {
                                qDebug() << ("Error muxing packet\n");
                            }
                            av_packet_unref(pushPkt);
                        }
                    }
                    testCount++;
                    QThread::msleep(10);
                }
            }
            av_packet_unref(rcvPkt);
        }
    }
}
