
Other articles (62)
-
What is an editorial?
21 June 2013
Write your point of view in an article. It will be filed in a section set aside for this purpose.
An editorial is a text-only article. Its purpose is to collect points of view in a dedicated section. Only one editorial is featured on the home page; to read previous ones, browse the dedicated section.
You can customize the editorial creation form.
Editorial creation form: for a document of the editorial type, the (...) -
Improving the base version
13 September 2013
Nice multiple selection
The Chosen plugin improves the usability of multiple-select fields. See the following two images for a comparison.
To do so, just enable the Chosen plugin (site general configuration > plugin management), then configure it (templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...) -
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors can edit their own information on the authors page
On other sites (13373)
-
FFMPEG Audio/video out of sync after cutting and concatenating even after transcoding
4 May 2020, by Ham789
I am attempting to take cuts from a set of videos and concatenate them together with the concat demuxer.



However, the audio is out of sync with the video in the output, and it seems to drift further out of sync as the video progresses. Interestingly, if I use the player's progress bar to seek to another time in the video, the audio becomes synced up with the video but then gradually drifts out of sync again; seeking to a new time seems to reset the audio/video, as if they are being played back at different rates. I get this behaviour in both the QuickTime and VLC players.



For each video, I decode it, trim a clip from it, and then encode it to 4k resolution at 25 fps along with its audio:



ffmpeg -ss 0.5 -t 0.5 -i input_video1.mp4 -r 25 -vf scale=3840:2160 output_video1.mp4



I then take each of these videos and concatenate them together with the concat demuxer:



ffmpeg -f concat -safe 0 -i cut_videos.txt -c copy -y output.mp4
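
For reference, the concat demuxer expects cut_videos.txt to list the clips one per line in this form (file names here are illustrative):

file 'output_video1.mp4'
file 'output_video2.mp4'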



I am taking short cuts from each video (approximately 0.5 s each).



I am using Python's subprocess module to automate the cutting and concatenating of the videos.
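
A minimal sketch of that automation looks like this (file names as in the commands above; looping over clips and error handling omitted):

import subprocess

# Trim a 0.5 s clip and re-encode it at 25 fps, scaled to 4k.
subprocess.run([
    "ffmpeg", "-ss", "0.5", "-t", "0.5", "-i", "input_video1.mp4",
    "-r", "25", "-vf", "scale=3840:2160", "-y", "output_video1.mp4",
], check=True)

# Concatenate the cut clips listed in cut_videos.txt without re-encoding.
subprocess.run([
    "ffmpeg", "-f", "concat", "-safe", "0", "-i", "cut_videos.txt",
    "-c", "copy", "-y", "output.mp4",
], check=True)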



I am not sure if this happens because of the trimming or concatenation steps, but when I play back the intermediate cut video files (output_video1.mp4 in the above command), there seems to be some silence before the audio comes in at the start of the video.


When I concatenate the videos, I sometimes get a lot of these warnings; however, the audio still ends up out of sync even when I do not get them:



[mp4 @ 0000021a252ce080] Non-monotonous DTS in output stream 0:1; previous: 51792, current: 50009; changing to 51793. This may result in incorrect timestamps in the output file.



From this post, it seems to be a problem with the timestamps of the cut videos. The solution proposed in the post is to decode, cut, and then re-encode the video; however, I am already doing that.



How can I ensure the audio and video are in sync? Am I transcoding incorrectly? This seems to be the only solution I can find online, yet it does not seem to work.
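
One variant that is often suggested for drifting audio (an assumption to test, not something verified against this footage) is to force the audio through an async resampler while trimming, so stray audio timestamps are stretched or squeezed back into step with the video:

ffmpeg -ss 0.5 -t 0.5 -i input_video1.mp4 -r 25 -vf scale=3840:2160 -af aresample=async=1 -c:a aac output_video1.mp4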



UPDATE:



I took inspiration from this post and separated the audio and video from output_video1.mp4 using:


ffmpeg -i output_video1.mp4 -vcodec copy -an video.mp4



and



ffmpeg -i output_video1.mp4 -acodec copy -vn audio.mp4



I then compared the durations of video.mp4 and audio.mp4 and got 0.57 s and 0.52 s respectively. Since the video is longer, this explains why there is a period of silence in the videos. The post then suggests transcoding as the solution; however, as you can see from the commands above, that does not work for me.
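
The duration comparison can also be scripted with ffprobe instead of demuxing to separate files; a small sketch (the selectors v:0 and a:0 pick the first video and audio streams):

import subprocess

def stream_duration(path, selector):
    # Return the duration (seconds) of the first stream matching selector ('v:0' or 'a:0').
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", selector,
         "-show_entries", "stream=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return float(out)

print(stream_duration("output_video1.mp4", "v:0"))  # e.g. 0.57
print(stream_duration("output_video1.mp4", "a:0"))  # e.g. 0.52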


Sample Output Log for the Trim Command



built with Apple LLVM version 10.0.0 (clang-1000.11.45.5)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input_video1.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:04.06, start: 0.000000, bitrate: 14266 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 3840x2160, 14268 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
 Metadata:
 handler_name : Core Media Video
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 94 kb/s (default)
 Metadata:
 handler_name : Core Media Audio
File 'output_video1.mp4' already exists. Overwrite? [y/N] y
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
 Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x7fcae4001e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7fcae4001e00] profile High, level 5.1
[libx264 @ 0x7fcae4001e00] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output_video1.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Stream #0:0(und): Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 3840x2160, q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
 Metadata:
 handler_name : Core Media Video
 encoder : Lavc58.54.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
 Metadata:
 handler_name : Core Media Audio
 encoder : Lavc58.54.100 aac
frame= 14 fps=7.0 q=-1.0 Lsize= 928kB time=00:00:00.51 bitrate=14884.2kbits/s dup=0 drop=1 speed=0.255x 
video:922kB audio:5kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.194501%
[libx264 @ 0x7fcae4001e00] frame I:1 Avg QP:21.06 size:228519
[libx264 @ 0x7fcae4001e00] frame P:4 Avg QP:22.03 size: 85228
[libx264 @ 0x7fcae4001e00] frame B:9 Avg QP:22.88 size: 41537
[libx264 @ 0x7fcae4001e00] consecutive B-frames: 14.3% 0.0% 0.0% 85.7%
[libx264 @ 0x7fcae4001e00] mb I I16..4: 27.6% 64.3% 8.1%
[libx264 @ 0x7fcae4001e00] mb P I16..4: 9.1% 10.7% 0.2% P16..4: 48.5% 7.3% 3.9% 0.0% 0.0% skip:20.2%
[libx264 @ 0x7fcae4001e00] mb B I16..4: 1.1% 1.0% 0.0% B16..8: 44.5% 2.9% 0.2% direct: 8.3% skip:42.0% L0:45.6% L1:53.2% BI: 1.2%
[libx264 @ 0x7fcae4001e00] 8x8 transform intra:58.2% inter:93.4%
[libx264 @ 0x7fcae4001e00] coded y,uvDC,uvAC intra: 31.4% 62.2% 5.2% inter: 11.4% 30.9% 0.0%
[libx264 @ 0x7fcae4001e00] i16 v,h,dc,p: 15% 52% 12% 21%
[libx264 @ 0x7fcae4001e00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 33% 32% 2% 2% 2% 4% 2% 4%
[libx264 @ 0x7fcae4001e00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 39% 9% 3% 4% 4% 12% 3% 4%
[libx264 @ 0x7fcae4001e00] i8c dc,h,v,p: 43% 36% 18% 3%
[libx264 @ 0x7fcae4001e00] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x7fcae4001e00] ref P L0: 69.3% 8.0% 14.8% 7.9%
[libx264 @ 0x7fcae4001e00] ref B L0: 88.1% 9.2% 2.6%
[libx264 @ 0x7fcae4001e00] ref B L1: 90.2% 9.8%
[libx264 @ 0x7fcae4001e00] kb/s:13475.29
[aac @ 0x7fcae4012400] Qavg: 125.000



-
My Python script using ffmpeg captures video content, but the captured content freezes in the middle and skips frames
11 November 2022, by Supriyo Mitra
I am new to ffmpeg and I am trying to use it through a Python script. The Python function that captures the video content is given below. The problem I am facing is that the captured content freezes at (uneven) intervals and skips a few frames every time it happens.


 def capturelivestream(self, argslist):
 streamurl, outnum, feedid, outfilename = argslist[0], argslist[1], argslist[2], argslist[3]
 try:
 info = ffmpeg.probe(streamurl, select_streams='a')
 streams = info.get('streams', [])
 except:
 streams = []
 if len(streams) == 0:
 print('There are no streams available')
 stream = {}
 else:
 stream = streams[0]
 for stream in streams:
 if stream.get('codec_type') != 'audio':
 continue
 else:
 break
 if 'channels' in stream.keys():
 channels = stream['channels']
 samplerate = float(stream['sample_rate'])
 else:
 channels = None
 samplerate = 44100
 process = ffmpeg.input(streamurl).output('pipe:', pix_fmt='yuv420p', format='avi', vcodec='libx264', acodec='pcm_s16le', ac=channels, ar=samplerate, vsync=0, loglevel='quiet').run_async(pipe_stdout=True)
 fpath = os.path.dirname(outfilename)
 fnamefext = os.path.basename(outfilename)
 fname = fnamefext.split(".")[0]
 read_size = 320 * 180 * 3 # This is width * height * 3
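 # NB: fixed-size reads only align with frames if the pipe carries raw rgb24 video; the output above is an AVI mux of H.264 + PCM audio, so these chunks will not correspond to frame boundaries.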
 lastcaptured = time.time()
 maxtries = 12
 ntries = 0
 while True:
 if process:
 inbytes = process.stdout.read(read_size)
 if inbytes is not None and inbytes.__len__() > 0:
 try:
 frame = (np.frombuffer(inbytes, np.uint8).reshape([180, 320, 3]))
 except:
 print("Failed to reshape frame: %s"%sys.exc_info()[1].__str__())
 continue # This could be an issue if there is a continuous supply of frames that cannot be reshaped
 self.processq.put([outnum, frame])
 lastcaptured = time.time()
 ntries = 0
 else:
 if self.DEBUG:
 print("Could not read frame for feed ID %s"%feedid)
 t = time.time()
 if t - lastcaptured > 30: # If the frames can't be read for more than 30 seconds...
 print("Reopening feed identified by feed ID %s"%feedid)
 process = ffmpeg.input(streamurl).output('pipe:', pix_fmt='yuv420p', format='avi', vcodec='libx264', acodec='pcm_s16le', ac=channels, ar=samplerate, vsync=0, loglevel='quiet').run_async(pipe_stdout=True)
 ntries += 1
 if ntries > maxtries:
 if self.DEBUG:
 print("Stream %s is no longer available."%streamurl)
 # DB statements removed here
 
 break # Break out of infinite loop.
 continue
 
 return None




The function that writes out the captured frames is as follows:



 def framewriter(self, outlist):
 isempty = False
 endofrun = False
 while True:
 frame = None
 try:
 args = self.processq.get()
 except: # Sometimes, the program crashes at this point due to lack of memory...
 print("Error in framewriter while reading from queue: %s"%sys.exc_info()[1].__str__())
 continue
 outnum = args[0]
 frame = args[1]
 if outlist.__len__() > outnum:
 out = outlist[outnum]
 else:
 if self.DEBUG == 2:
 print("Could not get writer %s"%outnum)
 continue
 if frame is not None and out is not None:
 out.write(frame)
 isempty = False
 endofrun = False
 else:
 if self.processq.empty() and not isempty:
 isempty = True
 elif self.processq.empty() and isempty: # processq queue is empty now and was empty last time
 print("processq is empty")
 endofrun = True
 elif endofrun and isempty:
 print("Could not find any frames to process. Quitting")
 break
 print("Done writing feeds. Quitting.")
 return None



The scenario is as follows: there are multiple video streams from a certain website at any time during the day, and the program containing these functions has to capture them as they are streamed. The memory available to this program is 6 GB, and there can be up to 3 streams running at any instant. Given below is the relevant main section of the script that uses the functions given above.






itftennis = VideoBot(siteurl)
outlist = []
t = Thread(target=itftennis.framewriter, args=(outlist,))
t.daemon = True
t.start()
tp = Thread(target=handleprocesstermination, args=())
tp.daemon = True
tp.start()
# Create a database connection and as associated cursor object. We will handle database operations from main thread only.
# DB statements removed from here...
feedidlist = []
vidsdict = {}
streampattern = re.compile(r"\?vid=(\d+)$")
while True:
 streampageurls = itftennis.checkforlivestream()
 if itftennis.DEBUG:
 print("Checking for new urls...")
 print(streampageurls.__len__())
 if streampageurls.__len__() > 0:
 argslist = []
 newurlscount = 0
 for streampageurl in streampageurls:
 newstream = False
 sps = re.search(streampattern, streampageurl)
 if sps:
 streamnum = sps.groups()[0]
 if streamnum not in vidsdict.keys(): # Check if this stream has already been processed.
 vidsdict[streamnum] = 1
 newstream = True
 else:
 continue
 else:
 continue
 print("Detected new live stream... Getting it.")
 streamurl = itftennis.getstreamurlfrompage(streampageurl)
 print("Adding %s to list..."%streamurl)
 if streamurl is not None:
 # Now, get feed metadata...
 metadata = itftennis.getfeedmetadata(streampageurl)
 if metadata is None:
 continue
 # lines to get matchescounter omitted here...
 if matchescounter >= itftennis.__class__.MAX_CONCURRENT_MATCHES:
 break
 if newstream is True:
 newurlscount += 1
 outfilename = time.strftime("./videodump/" + "%Y%m%d%H%M%S",time.localtime())+".avi"
 out = open(outfilename, "wb")
 outlist.append(out) # Save it in the list and take down the number for usage in framewriter
 outnum = outlist.__len__() - 1
 # Save metadata in DB
 # lines omitted here....
 argslist.append([streamurl, outnum, feedid, outfilename]) 
 else:
 print("Couldn't get the stream url from page")
 if newurlscount > 0:
 for args in argslist:
 try:
 p = Process(target=itftennis.capturelivestream, args=(args,))
 p.start()
 processeslist.append(p)
 if itftennis.DEBUG:
 print("Started process with args %s"%args)
 except:
 print("Could not start process due to error: %s"%sys.exc_info()[1].__str__())
 print("Created processes, continuing now...")
 continue
 time.sleep(itftennis.livestreamcheckinterval)
t.join()
tp.join()
for out in outlist:
 out.close()







Please accept my apologies for swamping you with this amount of code. I wanted to provide maximum context for my problem. I have removed the absolutely irrelevant DB statements, but apart from that this is what the code looks like.


If you need to know anything else about the code, please let me know. What I would really like to know is whether I am using the ffmpeg stream-capturing statements correctly. The stream contains both video and audio components and I need to capture both. Hence I am making the following call:


process = ffmpeg.input(streamurl).output('pipe:', pix_fmt='yuv420p', format='avi', vcodec='libx264', acodec='pcm_s16le', ac=channels, ar=samplerate, vsync=0, loglevel='quiet').run_async(pipe_stdout=True)



Is this how it is supposed to be done? More importantly, why do I keep getting the freezes in the output video? I have monitored the streams manually, and they are quite consistent; frame losses do not happen when I view them on the website (at least not noticeably). Also, I have run the 'top' command on the host running the program. The CPU usage sometimes goes over 100% (which, I came to understand from some answers on SO, is to be expected when running ffmpeg), but the memory usage usually remains below 30%. So what is the issue here, and what do I need to do in order to fix this problem (other than learn more about how ffmpeg works)?
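
For comparison, the examples in the ffmpeg-python README pipe raw video rather than a muxed container, so that fixed-size reads line up with frame boundaries. A minimal sketch along those lines (the 320x180 size and streamurl are taken from the code above; audio handling omitted):

import ffmpeg
import numpy as np

# Ask ffmpeg for raw rgb24 frames on stdout; a muxed AVI would interleave
# container headers and audio, so fixed-size reads would drift off frame boundaries.
process = (
    ffmpeg
    .input(streamurl)
    .output('pipe:', format='rawvideo', pix_fmt='rgb24', s='320x180')
    .run_async(pipe_stdout=True)
)

read_size = 320 * 180 * 3  # width * height * 3 bytes per rgb24 frame
while True:
    inbytes = process.stdout.read(read_size)
    if len(inbytes) < read_size:
        break  # end of stream or short read
    frame = np.frombuffer(inbytes, np.uint8).reshape([180, 320, 3])

Audio would then have to be captured by a separate output or a second process, since raw frames and an audio track cannot share the same pipe.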


Thanks


I have tried using various ffmpeg options (while trying to find similar issues that others encountered). I also tried running ffmpeg from the command line for a limited period of time (11 minutes), using the same options as in the Python code, and the captured content came out quite well: no freezes, no jumps in frames. But I need to use it in an automated way, and there will be multiple streams at any time. Also, when I try playing the captured content using ffplay, I sometimes get the message "co located POCs unavailable" when these freezes happen. What does it mean?


-
FFmpeg to get USB camera video and push an RTSP stream in C++
8 October 2022, by CrazyJack123
What I want to do is get USB camera video and push an RTSP stream via ffmpeg (not via the command line). I've tried a few things and have successfully played the RTSP stream through the VLC media player using C++.


The problem now is that the RTSP video received through the VLC media player has high latency and stutters noticeably, and it freezes after a period of time. This phenomenon does not occur with the ffmpeg command (although there is a little delay, there is no stuttering or freezing).


The ffmpeg command and the C++ code are posted below. Can you help me locate the problem? Any help is greatly appreciated! Thanks in advance!


By the way, the build environment is as follows: Windows 10, Qt 5.9.0 msvc2013_64, ffmpeg-4.4.1-full_build-shared.


The ffmpeg command is as follows:


.\ffmpeg.exe -f dshow -rtbufsize 100M -i video="USB Camera" -vcodec libx264 -preset:v ultrafast -tune:v zerolatency -rtsp_transport udp -f rtsp rtsp://127.0.0.1/test



The C++ code is as follows. Here is the .h file:

#ifndef CAMERATHREADA_H
#define CAMERATHREADA_H

#include <exception>
#include <QImage>
#include <QDebug>
#include <QCameraInfo>
#include <QThread>
#include <QObject>
using namespace std;

extern "C"
{
 #include "libavformat/avformat.h"
 #include "libavutil/hwcontext.h"
 #include "libavutil/opt.h"
 #include "libavutil/time.h"
 #include "libavutil/frame.h"
 #include "libavutil/pixdesc.h"
 #include "libavutil/avassert.h"
 #include "libavutil/imgutils.h"
 #include "libavutil/ffversion.h"
 #include "libavcodec/avcodec.h"
 #include "libswscale/swscale.h"
 #include "libavdevice/avdevice.h"
 #include "libavformat/avformat.h"
 #include "libavfilter/avfilter.h"
 #include "libavdevice/avdevice.h"
 #include "libavcodec/avcodec.h"
 #include "libavformat/avformat.h"
 #include "libavutil/pixfmt.h"
 #include "libswscale/swscale.h"
 #include "libavutil/time.h"
 #include "libavutil/mathematics.h"
}


#define FMT_PIC_SHOW AV_PIX_FMT_RGB24
#define FMT_FRM_PUSH AV_PIX_FMT_YUV420P


class CameraThreadA : public QThread
{
 Q_OBJECT
public:
 CameraThreadA();

signals:
 void receiveImage(QImage img);

private:

 //code to h264 and push
 int pushVideoindex;
 AVCodecContext *pushCodecCtx = nullptr;
 AVStream *pushStream;
 AVFormatContext* pushFmtCtx = nullptr;
 AVPacket* pushPkt = nullptr;
 AVCodec * pushCodec = nullptr;
 uint8_t *pushBuffer;
 struct SwsContext *swCtxRGB2YUV = nullptr;
 AVFrame* yuvFrame = av_frame_alloc();

 //receive from camera
 AVFormatContext* rcvFmtCtx = nullptr;
 AVInputFormat* rcvInFmt = nullptr;
 int nVideoIndex = -1;
 AVCodecParameters* rcvCodecPara = nullptr;
 AVCodecContext * rcvCodecCtx = nullptr;
 AVCodec * rcvCodec = nullptr;
 AVFrame* cameraFrame = av_frame_alloc();
 AVFrame* rgbFrame = av_frame_alloc();
 AVPacket* rcvPkt = nullptr;
 uint8_t* showBuffer;
 struct SwsContext *rcvSwsCtx = nullptr;

 // QThread interface
protected:
 void run();
};

#endif // CAMERATHREADA_H




Here is the .cpp file:

#include "camerathreada.h"

CameraThreadA::CameraThreadA()
{
 //init camera to rgb
 avdevice_register_all();
 if(nullptr == (rcvFmtCtx = avformat_alloc_context()))
 {
 qDebug() << "create AVFormatContext failed." << endl;
 }
 if(nullptr == (rcvInFmt = const_cast<AVInputFormat*>(av_find_input_format("dshow"))))
 {
 qDebug() << "find AVInputFormat failed." << endl;
 }
 QString urlString = QString("video=USB Camera");
 if(avformat_open_input(&rcvFmtCtx
 , urlString.toStdString().c_str()
 , rcvInFmt, NULL) < 0)
 {
 qDebug() << "open camera failed." << endl;
 }
 if(avformat_find_stream_info(rcvFmtCtx, NULL) < 0){
 qDebug() << "cannot find stream info." << endl;
 }
 for(size_t i = 0;i < rcvFmtCtx->nb_streams;i++){
 if(rcvFmtCtx->streams[i]->codecpar->codec_type==AVMEDIA_TYPE_VIDEO){
 nVideoIndex = i;
 }
 }
 if(nVideoIndex == -1){
 qDebug() << "cannot find video stream." << endl;
 }
 rcvCodecPara = rcvFmtCtx->streams[nVideoIndex]->codecpar;
 if(nullptr == (rcvCodec = const_cast<AVCodec*>(avcodec_find_decoder(rcvCodecPara->codec_id))))
 {
 qDebug() << "cannot find codec." << endl;
 }
 if(nullptr == (rcvCodecCtx = avcodec_alloc_context3(rcvCodec))){
 qDebug() << "cannot alloc codecContext." << endl;
 }
 if(avcodec_parameters_to_context(rcvCodecCtx, rcvCodecPara) < 0){
 qDebug() << "cannot initialize codecContext." << endl;
 }
 if(avcodec_open2(rcvCodecCtx, rcvCodec, NULL) < 0){
 qDebug() << "cannot open codec." << endl;
 return;
 }
 rcvSwsCtx = sws_getContext(rcvCodecCtx->width, rcvCodecCtx->height, rcvCodecCtx->pix_fmt,
 rcvCodecCtx->width, rcvCodecCtx->height, FMT_PIC_SHOW,
 SWS_BICUBIC, NULL, NULL, NULL);
 int numBytes = av_image_get_buffer_size(FMT_PIC_SHOW, rcvCodecCtx->width, rcvCodecCtx->height, 1);
 showBuffer = (unsigned char*)av_malloc(static_cast<unsigned long>(numBytes) * sizeof(unsigned char));
 if(av_image_fill_arrays(rgbFrame->data, rgbFrame->linesize,
 showBuffer
 , FMT_PIC_SHOW, rcvCodecCtx->width, rcvCodecCtx->height, 1) < 0)
 {
 qDebug() << "av_image_fill_arrays failed." << endl;
 }
 rcvPkt = av_packet_alloc();
 av_new_packet(rcvPkt, rcvCodecCtx->width * rcvCodecCtx->height);


 //init rgb to yuv
 swCtxRGB2YUV = sws_getContext(rcvCodecCtx->width, rcvCodecCtx->height, FMT_PIC_SHOW,
 rcvCodecCtx->width, rcvCodecCtx->height, FMT_FRM_PUSH,
 SWS_BICUBIC,NULL, NULL, NULL);

 yuvFrame->width = rcvCodecCtx->width;
 yuvFrame->height = rcvCodecCtx->height;
 yuvFrame->format = FMT_FRM_PUSH;
 pushBuffer = (uint8_t *)av_malloc(yuvFrame->width * yuvFrame->height * 1.5);
 if (av_image_fill_arrays(yuvFrame->data, yuvFrame->linesize
 , pushBuffer
 , FMT_FRM_PUSH, yuvFrame->width, yuvFrame->height, 1) < 0){
 qDebug() << "Failed: av_image_fill_arrays\n";
 }


 //init h264 codec
 pushCodec = const_cast<AVCodec*>(avcodec_find_encoder(AV_CODEC_ID_H264));
 if (!pushCodec){
 qDebug() << ("Fail: avcodec_find_encoder\n");
 }
 pushCodecCtx = avcodec_alloc_context3(pushCodec);
 if (!pushCodecCtx){
 qDebug() << ("Fail: avcodec_alloc_context3\n");
 }
 pushCodecCtx->pix_fmt = FMT_FRM_PUSH;
 pushCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
 pushCodecCtx->width = rcvCodecCtx->width;
 pushCodecCtx->height = rcvCodecCtx->height;
 pushCodecCtx->channels = 3;
 pushCodecCtx->time_base = { 1, 25 };
 pushCodecCtx->gop_size = 5; 
 pushCodecCtx->max_b_frames = 0;
 pushCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 av_opt_set(pushCodecCtx->priv_data, "preset", "ultrafast", 0);
 av_opt_set(pushCodecCtx->priv_data, "tune", "zerolatency", 0);
 if (avcodec_open2(pushCodecCtx, pushCodec, NULL) < 0){
 qDebug() << ("Fail: avcodec_open2\n");
 }
 pushPkt = av_packet_alloc();


 //init rtsp pusher
 QString des = QString("rtsp://127.0.0.1/test");
 if (avformat_alloc_output_context2(&pushFmtCtx, NULL, "rtsp", des.toStdString().c_str()) < 0){
 qDebug() << ("Fail: avformat_alloc_output_context2\n");
 }
 av_opt_set(pushFmtCtx->priv_data, "rtsp_transport", "udp", 0);
 pushFmtCtx->max_interleave_delta = 1000000;
 pushStream = avformat_new_stream(pushFmtCtx, pushCodec);
 if (!pushStream){
 qDebug() << ("Fail: avformat_new_stream\n");
 }
 pushStream->time_base = { 1, 25 };
 pushVideoindex = pushStream->id = pushFmtCtx->nb_streams - 1;
 pushCodecCtx->codec_tag = 0;
 if (pushFmtCtx->oformat->flags & AVFMT_GLOBALHEADER)
 {
 pushCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 }
 int ret = 0;
 ret = avcodec_parameters_from_context(pushStream->codecpar, pushCodecCtx);
 if (ret < 0)
 {
 qDebug() <<("Failed to copy codec context to out_stream codecpar context\n");
 }
 //av_dump_format(pushFmtCtx, 0, pushFmtCtx->filename, 1);
 if (!(pushFmtCtx->oformat->flags & AVFMT_NOFILE)) {
 if (avio_open(&pushFmtCtx->pb, "rtsp://127.0.0.1/test", AVIO_FLAG_WRITE) < 0) {
 qDebug() <<("Fail: avio_open('%s')\n rtsp://127.0.0.1/test");
 }
 }
 avformat_write_header(pushFmtCtx, NULL);
 
}

void CameraThreadA::run()
{
 int testCount = 0;
 int ret;
 while(av_read_frame(rcvFmtCtx, rcvPkt) >= 0){
 if(rcvPkt->stream_index == nVideoIndex){
 if(avcodec_send_packet(rcvCodecCtx, rcvPkt)>=0){
 while((ret = avcodec_receive_frame(rcvCodecCtx, cameraFrame)) >= 0){
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 return;
 else if (ret < 0) {
 return;
 }

 //rcv
 sws_scale(rcvSwsCtx,
 cameraFrame->data, cameraFrame->linesize,
 0, rcvCodecCtx->height,
 rgbFrame->data, rgbFrame->linesize);
 QImage img(showBuffer, rcvCodecCtx->width, rcvCodecCtx->height, QImage::Format_RGB888);
 emit receiveImage(img);
 
 //rgb 2 YUV
 if (sws_scale(swCtxRGB2YUV,
 rgbFrame->data, rgbFrame->linesize,
 0, rcvCodecCtx->height,
 yuvFrame->data, yuvFrame->linesize) < 0)
 {
 qDebug() << "fail : rgb 2 YUV\n";
 }
 yuvFrame->pts = av_gettime();

 //code h264
 ret = avcodec_send_frame(pushCodecCtx, yuvFrame);
 if (ret < 0){
 qDebug() << "send frame fail\n" << ret;
 }
 while (ret >= 0){
 ret = avcodec_receive_packet(pushCodecCtx, pushPkt);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF){
 qDebug() <<("ret == AVERROR(EAGAIN) || ret == AVERROR_EOF\n");
 break;
 }else if (ret < 0){
 qDebug() <<("Error during encoding\n");
 break;
 }else{
 pushPkt->stream_index = pushVideoindex;
 if (av_interleaved_write_frame(pushFmtCtx, pushPkt) < 0) {
 qDebug() << ("Error muxing packet\n");
 }
 av_packet_unref(pushPkt);
 }
 }
 testCount ++;
 QThread::msleep(10);
 }
 }
 av_packet_unref(rcvPkt);
 }
 }
}
