
Media (9)
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (100)
-
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is operational straight away. No separate configuration step is therefore required.
-
User profiles
12 April 2011, by
Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in on the site.
Users can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)
-
APPENDIX: Plugins used specifically for the farm
5 March 2010, by
The central/master site of the farm needs several additional plugins, compared with the channel sites, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a mutualisation instance as soon as users register; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (9002)
-
FFmpeg streaming using H.264 (with audio) - Red5 media server (Ubuntu OS)
6 February 2013, by B.B10
I'm trying to stream my webcam with FFmpeg to my Red5 server using RTMP. I've done this successfully in FLV format with the following line:
ffmpeg -f video4linux2 -i /dev/video0 -f flv rtmp://localhost/live/livestream
I'm new to FFmpeg and live streaming, and I've tried to stream using H.264/MPEG-4, but my knowledge of the FFmpeg options is a bit limited (I found them here: http://man.cx/ffmpeg%281%29).
So, my questions are:
-
How can I use H.264/MPEG-4 to stream to my Red5 server?
-
What are the options to stream audio as well?
-
And one final issue:
I'm getting a delay of about 5 seconds when I play the content with JWPlayer in Mozilla Firefox (on Ubuntu). Can you please help me solve this problem? Any suggestions as to why this might be?
Many thanks
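As a starting point, a command along these lines may work for H.264 video plus AAC audio over RTMP (a sketch only; it assumes ALSA audio capture via -f alsa -i default and an ffmpeg build with libx264 and an AAC encoder, where older builds need -strict experimental for the native aac encoder):
ffmpeg -f video4linux2 -i /dev/video0 -f alsa -i default -c:v libx264 -preset veryfast -tune zerolatency -pix_fmt yuv420p -c:a aac -strict experimental -b:a 128k -f flv rtmp://localhost/live/livestream
The FLV container can carry H.264 video and AAC audio over RTMP. As for the delay, a few seconds is usually dominated by server/player buffering rather than the encoder, although -tune zerolatency helps on the encoding side.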
-
-
Using ffmpeg to combine small mp4 chunks?
20 May 2015, by shiny formica
I'm trying to convert batches of png images into a single mp4 x264 video using ffmpeg. The conversion, for reasons I won't go into, turns groups of frames into short mp4 chunks, and then I want to take those chunks and merge them into the final video at a specific fps (in this case 30fps).
My understanding of ffmpeg and the x264 options is limited; while I can produce the individual mp4 chunks from the source png frames without trouble, the final merge always ends up duplicating and/or dropping frames, especially with very short chunks (< 4 frames).
The conversion from png to mp4 uses this command:
ffmpeg -start_number 1001 -framerate 30 -f image2 -i 'intermediate.%d.png' -c:v libx264 -crf 1 -pix_fmt yuv420p -movflags +faststart -frames:v 4 -r 30 chunk.1.mp4 -y
which appears to work as expected: I get a playable mp4 chunk of, in this case, 4 frames of the png sequence at 30fps. The length of each chunk can be anywhere from 1 frame to around 100 frames.
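One way to sanity-check each chunk before concatenating is to count its decoded frames with ffprobe (a sketch, assuming an ffprobe from the same FFmpeg build is available):
ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 chunk.1.mp4
If every chunk reports the expected number of frames, the problem is more likely in the merge stage than in the chunk generation.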
Once all the chunks are generated, I've been trying to use the concat demuxer to combine them without re-encoding, placing all the source chunk paths in a file:
concat.txt:
file 'chunk.1.mp4'
file 'chunk.2.mp4'
file 'chunk.3.mp4'
...and then running this ffmpeg command:
ffmpeg -f concat -i concat.txt -c:v copy merged.mp4 -y
but it says this during the concatenation:
[concat @ 0x315ff80] Estimating duration from bitrate, this may be inaccurate
and the resulting mp4 has dropped/duplicated frames. So I tried adding duration info to the concat.txt file:
file 'chunk.1.mp4'
duration 0.133333
file 'chunk.2.mp4'
duration 0.133333
file 'chunk.3.mp4'
duration 0.066666
In this case, that's two 4-frame/30fps chunks and one 2-frame/30fps chunk. This gets rid of the estimation warning, but the result still has duplicated/dropped frames.
I'm not sure where I'm going wrong here... what do I need to do, either in the production of the short mp4 segments or in the combination stage, to get a single mp4 at the right framerate with no duplicated or dropped frames?
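As a point of comparison rather than a confirmed fix, it can be worth re-encoding during the merge instead of stream-copying, forcing a constant 30fps output (a sketch, reusing the same libx264 settings as the chunks):
ffmpeg -f concat -i concat.txt -c:v libx264 -crf 1 -pix_fmt yuv420p -r 30 -vsync cfr merged.mp4 -y
If that version comes out with the right frame count, the issue lies in how the stream-copy concat interprets the chunk timestamps rather than in the chunks themselves.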
As suggested, here's the console output for the conversion from png -> mp4 chunks:
ffmpeg -loglevel verbose -start_number 1001 -framerate 30 -f image2 -i 'intermediate.%d.png' -c:v libx264 -crf 1 -pix_fmt yuv420p -movflags +faststart -frames:v 4 -r 30 chunk.1.mp4 -y
ffmpeg version 2.5.4 Copyright (c) 2000-2015 the FFmpeg developers
built on Feb 26 2015 10:23:42 with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-3)
configuration: --prefix=/dept/srd/vendor/ffmpeg/bundle.rhel6/ffmpeg2.5.4 --enable-static --enable-pthreads --enable-gpl --enable-version3 --disable-ffserver --disable-ffplay --disable-ffprobe --enable-x11grab --enable-nonfree --extra-cflags=-I/dept/srd/vendor/ffmpeg/extern/rhel6/include --extra-ldflags=-L/dept/srd/vendor/ffmpeg/extern/rhel6/lib --enable-libx264 --enable-fontconfig --enable-libfreetype --enable-swscale --enable-libmp3lame --enable-libfaac --disable-yasm
libavutil 54. 15.100 / 54. 15.100
libavcodec 56. 13.100 / 56. 13.100
libavformat 56. 15.102 / 56. 15.102
libavdevice 56. 3.100 / 56. 3.100
libavfilter 5. 2.103 / 5. 2.103
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, image2, from 'intermediate.%d.png':
Duration: 00:00:00.27, start: 0.000000, bitrate: N/A
Stream #0:0: Video: png, rgba, 1024x1024 (0x0), 30 fps, 30 tbr, 30 tbn, 30 tbc
[graph 0 input from stream 0:0 @ 0x273e9c0] w:1024 h:1024 pixfmt:rgba tb:1/30 fr:30/1 sar:0/1 sws_param:flags=2
[auto-inserted scaler 0 @ 0x2737ea0] w:iw h:ih flags:'0x4' interl:0
[format @ 0x273ece0] auto-inserting filter 'auto-inserted scaler 0' between the filter 'Parsed_null_0' and the filter 'format'
[auto-inserted scaler 0 @ 0x2737ea0] w:1024 h:1024 fmt:rgba sar:0/1 -> w:1024 h:1024 fmt:yuv420p sar:0/1 flags:0x4
[libx264 @ 0x273c540] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0x273c540] profile High, level 3.2
[libx264 @ 0x273c540] 264 - core 142 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=36 lookahead_threads=6 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'chunk.1.mp4':
Metadata:
encoder : Lavf56.15.102
Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 1024x1024, q=-1--1, 30 fps, 15360 tbn, 30 tbc
Metadata:
encoder : Lavc56.13.100 libx264
Stream mapping:
Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
Press [q] to stop, [?] for help
No more output streams to write to, finishing.
[mp4 @ 0x273baa0] Starting second pass: moving the moov atom to the beginning of the file
frame= 4 fps=0.0 q=-1.0 Lsize= 197kB time=00:00:00.06 bitrate=24228.7kbits/s
video:196kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.439751%
Input file #0 (intermediate.%d.png):
Input stream #0:0 (video): 8 packets read (2341016 bytes); 5 frames decoded;
Total: 8 packets (2341016 bytes) demuxed
Output file #0 (chunk.3.mp4):
Output stream #0:0 (video): 4 frames encoded; 4 packets muxed (201023 bytes);
Total: 4 packets (201023 bytes) muxed
[libx264 @ 0x273c540] frame I:1 Avg QP: 0.47 size:116049
[libx264 @ 0x273c540] frame P:1 Avg QP: 2.29 size: 37932
[libx264 @ 0x273c540] frame B:2 Avg QP: 2.37 size: 23184
[libx264 @ 0x273c540] consecutive B-frames: 25.0% 0.0% 75.0% 0.0%
[libx264 @ 0x273c540] mb I I16..4: 80.0% 4.5% 15.5%
[libx264 @ 0x273c540] mb P I16..4: 0.2% 0.1% 0.4% P16..4: 8.1% 3.6% 3.7% 0.0% 0.0% skip:83.9%
[libx264 @ 0x273c540] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 4.8% 1.2% 1.6% direct: 4.3% skip:88.1% L0:38.6% L1:39.3% BI:22.1%
[libx264 @ 0x273c540] 8x8 transform intra:4.6% inter:14.8%
[libx264 @ 0x273c540] coded y,uvDC,uvAC intra: 20.7% 22.9% 22.8% inter: 8.7% 10.1% 10.0%
[libx264 @ 0x273c540] i16 v,h,dc,p: 95% 1% 3% 1%
[libx264 @ 0x273c540] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 21% 22% 6% 6% 6% 7% 5% 6%
[libx264 @ 0x273c540] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 17% 18% 7% 8% 7% 8% 6% 8%
[libx264 @ 0x273c540] i8c dc,h,v,p: 89% 4% 4% 3%
[libx264 @ 0x273c540] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x273c540] ref B L1: 89.5% 10.5%
[libx264 @ 0x273c540] kb/s:12020.88
As I said, this appears to produce a valid mp4 at 30fps with no duplicated or dropped frames from the input images.
Here's the output of the combine phase:
ffmpeg -loglevel verbose -f concat -i concat.txt -c:v copy merged.mp4 -y
ffmpeg version 2.5.4 Copyright (c) 2000-2015 the FFmpeg developers
built on Feb 26 2015 10:23:42 with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-3)
configuration: --prefix=/dept/srd/vendor/ffmpeg/bundle.rhel6/ffmpeg2.5.4 --enable-static --enable-pthreads --enable-gpl --enable-version3 --disable-ffserver --disable-ffplay --disable-ffprobe --enable-x11grab --enable-nonfree --extra-cflags=-I/dept/srd/vendor/ffmpeg/extern/rhel6/include --extra-ldflags=-L/dept/srd/vendor/ffmpeg/extern/rhel6/lib --enable-libx264 --enable-fontconfig --enable-libfreetype --enable-swscale --enable-libmp3lame --enable-libfaac --disable-yasm
libavutil 54. 15.100 / 54. 15.100
libavcodec 56. 13.100 / 56. 13.100
libavformat 56. 15.102 / 56. 15.102
libavdevice 56. 3.100 / 56. 3.100
libavfilter 5. 2.103 / 5. 2.103
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, concat, from 'concat.txt':
Duration: 00:00:00.67, start: 0.000000, bitrate: 2 kb/s
Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1024x1024, 7791 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc
Output #0, mp4, to 'merged.mp4':
Metadata:
encoder : Lavf56.15.102
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1024x1024 (0x0), q=2-31, 7791 kb/s, 30 fps, 15360 tbn, 15360 tbc
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
No more output streams to write to, finishing.
frame= 20 fps=0.0 q=-1.0 Lsize= 748kB time=00:00:00.56 bitrate=10805.0kbits/s
video:746kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.141687%
Input file #0 (concat.txt):
Input stream #0:0 (video): 20 packets read (764361 bytes);
Total: 20 packets (764361 bytes) demuxed
Output file #0 (merged.mp4):
Output stream #0:0 (video): 20 packets muxed (764361 bytes);
Total: 20 packets (764361 bytes) muxed
-
FFmpeg C++ api decode h264 error
29 May 2015, by arms
I'm trying to use the C++ API of FFmpeg (version 20150526) under Windows, using the prebuilt binaries, to decode an h264 video file (*.ts).
I've written a very simple piece of code that automatically detects the required codec from the file itself (and it is AV_CODEC_ID_H264, as expected).
Then I re-open the video file in read-binary mode, read a fixed-size buffer of bytes from it, and feed those bytes to the decoder in a while-loop until the end of the file. However, when I call avcodec_decode_video2, a large number of errors occur, like the following:
[h264 @ 008df020] top block unavailable for requested intro mode at 34 0
[h264 @ 008df020] error while decoding MB 34 0, bytestream 3152
[h264 @ 008df020] decode_slice_header error
Sometimes avcodec_decode_video2 sets got_picture_ptr to 1, so I expect to get a good frame. Instead, although all the computations succeed, when I view the decoded frame (using OpenCV only for visualization purposes) I see a grey image with some artifacts.
If I use the same code to decode an *.avi file, it works fine.
Reading the FFmpeg examples I did not find a solution to my problem. I've also implemented the solution proposed in the similar question FFmpeg c++ H264 decoding error, but it did not work.
Does anyone know where the error is?
Thank you in advance for any reply!
The code is the following [EDIT: code updated including the parser management]:
#include <iostream>
#include <iomanip>
#include <string>
#include <sstream>
#include <opencv2/opencv.hpp>
#ifdef __cplusplus
extern "C"
{
#endif // __cplusplus
#include <libavcodec/avcodec.h>
#include <libavdevice/avdevice.h>
#include <libavfilter/avfilter.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libavutil/avutil.h>
#include <libpostproc/postprocess.h>
#include <libswresample/swresample.h>
#include <libswscale/swscale.h>
#ifdef __cplusplus
} // end extern "C".
#endif // __cplusplus
#define INBUF_SIZE 4096
void main()
{
AVCodec* l_pCodec;
AVCodecContext* l_pAVCodecContext;
SwsContext* l_pSWSContext;
AVFormatContext* l_pAVFormatContext;
AVFrame* l_pAVFrame;
AVFrame* l_pAVFrameBGR;
AVPacket l_AVPacket;
AVPacket l_AVPacket_out;
AVStream* l_pStream;
AVCodecParserContext* l_pParser;
FILE* l_pFile_in;
FILE* l_pFile_out;
std::string l_sFile;
int l_iResult;
int l_iFrameCount;
int l_iGotFrame;
int l_iBufLength;
int l_iParsedBytes;
int l_iPts;
int l_iDts;
int l_iPos;
int l_iSize;
int l_iDecodedBytes;
uint8_t l_auiInBuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
uint8_t* l_pData;
cv::Mat l_cvmImage;
l_pCodec = NULL;
l_pAVCodecContext = NULL;
l_pSWSContext = NULL;
l_pAVFormatContext = NULL;
l_pAVFrame = NULL;
l_pAVFrameBGR = NULL;
l_pParser = NULL;
l_pStream = NULL;
l_pFile_in = NULL;
l_pFile_out = NULL;
l_iPts = 0;
l_iDts = 0;
l_iPos = 0;
l_pData = NULL;
l_sFile = "myvideo.ts";
avdevice_register_all();
avfilter_register_all();
avcodec_register_all();
av_register_all();
avformat_network_init();
l_pAVFormatContext = avformat_alloc_context();
l_iResult = avformat_open_input(&l_pAVFormatContext,
l_sFile.c_str(),
NULL,
NULL);
if (l_iResult >= 0)
{
l_iResult = avformat_find_stream_info(l_pAVFormatContext, NULL);
if (l_iResult >= 0)
{
for (int i = 0; i < l_pAVFormatContext->nb_streams; i++)
{
if (l_pAVFormatContext->streams[i]->codec->codec_type ==
AVMEDIA_TYPE_VIDEO)
{
l_pCodec = avcodec_find_decoder(
l_pAVFormatContext->streams[i]->codec->codec_id);
l_pStream = l_pAVFormatContext->streams[i];
}
}
}
}
av_init_packet(&l_AVPacket);
av_init_packet(&l_AVPacket_out);
memset(l_auiInBuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);
if (l_pCodec)
{
l_pAVCodecContext = avcodec_alloc_context3(l_pCodec);
l_pParser = av_parser_init(l_pAVCodecContext->codec_id);
if (l_pParser)
{
av_register_codec_parser(l_pParser->parser);
}
if (l_pAVCodecContext)
{
if (l_pCodec->capabilities & CODEC_CAP_TRUNCATED)
{
l_pAVCodecContext->flags |= CODEC_FLAG_TRUNCATED;
}
l_iResult = avcodec_open2(l_pAVCodecContext, l_pCodec, NULL);
if (l_iResult >= 0)
{
l_pFile_in = fopen(l_sFile.c_str(), "rb");
if (l_pFile_in)
{
l_pAVFrame = av_frame_alloc();
l_pAVFrameBGR = av_frame_alloc();
if (l_pAVFrame)
{
l_iFrameCount = 0;
avcodec_get_frame_defaults(l_pAVFrame);
while (1)
{
l_iBufLength = fread(l_auiInBuf,
1,
INBUF_SIZE,
l_pFile_in);
if (l_iBufLength == 0)
{
break;
}
else
{
l_pData = l_auiInBuf;
l_iSize = l_iBufLength;
while (l_iSize > 0)
{
if (l_pParser)
{
l_iParsedBytes = av_parser_parse2(
l_pParser,
l_pAVCodecContext,
&l_AVPacket_out.data,
&l_AVPacket_out.size,
l_pData,
l_iSize,
l_AVPacket.pts,
l_AVPacket.dts,
AV_NOPTS_VALUE);
if (l_iParsedBytes <= 0)
{
break;
}
l_AVPacket.pts = l_AVPacket.dts = AV_NOPTS_VALUE;
l_AVPacket.pos = -1;
}
else
{
l_AVPacket_out.data = l_pData;
l_AVPacket_out.size = l_iSize;
}
l_iDecodedBytes =
avcodec_decode_video2(
l_pAVCodecContext,
l_pAVFrame,
&l_iGotFrame,
&l_AVPacket_out);
if (l_iDecodedBytes >= 0)
{
if (l_iGotFrame)
{
l_pSWSContext = sws_getContext(
l_pAVCodecContext->width,
l_pAVCodecContext->height,
l_pAVCodecContext->pix_fmt,
l_pAVCodecContext->width,
l_pAVCodecContext->height,
AV_PIX_FMT_BGR24,
SWS_BICUBIC,
NULL,
NULL,
NULL);
if (l_pSWSContext)
{
l_iResult = avpicture_alloc(
reinterpret_cast<AVPicture*>(l_pAVFrameBGR),
AV_PIX_FMT_BGR24,
l_pAVFrame->width,
l_pAVFrame->height);
l_iResult = sws_scale(
l_pSWSContext,
l_pAVFrame->data,
l_pAVFrame->linesize,
0,
l_pAVCodecContext->height,
l_pAVFrameBGR->data,
l_pAVFrameBGR->linesize);
if (l_iResult > 0)
{
l_cvmImage = cv::Mat(
l_pAVFrame->height,
l_pAVFrame->width,
CV_8UC3,
l_pAVFrameBGR->data[0],
l_pAVFrameBGR->linesize[0]);
if (l_cvmImage.empty() == false)
{
cv::imshow("image", l_cvmImage);
cv::waitKey(10);
}
}
}
l_iFrameCount++;
}
}
else
{
break;
}
l_pData += l_iParsedBytes;
l_iSize -= l_iParsedBytes;
}
}
} // end while(1).
}
fclose(l_pFile_in);
}
}
}
}
}
EDIT: The following is the final code that solves my problem, thanks to the suggestions of Ronald.
#include <iostream>
#include <iomanip>
#include <string>
#include <sstream>
#include <opencv2/opencv.hpp>
#ifdef __cplusplus
extern "C"
{
#endif // __cplusplus
#include <libavcodec/avcodec.h>
#include <libavdevice/avdevice.h>
#include <libavfilter/avfilter.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libavutil/avutil.h>
#include <libpostproc/postprocess.h>
#include <libswresample/swresample.h>
#include <libswscale/swscale.h>
#ifdef __cplusplus
} // end extern "C".
#endif // __cplusplus
void main()
{
AVCodec* l_pCodec;
AVCodecContext* l_pAVCodecContext;
SwsContext* l_pSWSContext;
AVFormatContext* l_pAVFormatContext;
AVFrame* l_pAVFrame;
AVFrame* l_pAVFrameBGR;
AVPacket l_AVPacket;
std::string l_sFile;
uint8_t* l_puiBuffer;
int l_iResult;
int l_iFrameCount;
int l_iGotFrame;
int l_iDecodedBytes;
int l_iVideoStreamIdx;
int l_iNumBytes;
cv::Mat l_cvmImage;
l_pCodec = NULL;
l_pAVCodecContext = NULL;
l_pSWSContext = NULL;
l_pAVFormatContext = NULL;
l_pAVFrame = NULL;
l_pAVFrameBGR = NULL;
l_puiBuffer = NULL;
l_sFile = "myvideo.ts";
av_register_all();
l_iResult = avformat_open_input(&l_pAVFormatContext,
l_sFile.c_str(),
NULL,
NULL);
if (l_iResult >= 0)
{
l_iResult = avformat_find_stream_info(l_pAVFormatContext, NULL);
if (l_iResult >= 0)
{
for (int i = 0; i < l_pAVFormatContext->nb_streams; i++)
{
if (l_pAVFormatContext->streams[i]->codec->codec_type ==
AVMEDIA_TYPE_VIDEO)
{
l_iVideoStreamIdx = i;
l_pAVCodecContext =
l_pAVFormatContext->streams[l_iVideoStreamIdx]->codec;
if (l_pAVCodecContext)
{
l_pCodec = avcodec_find_decoder(l_pAVCodecContext->codec_id);
}
break;
}
}
}
}
if (l_pCodec && l_pAVCodecContext)
{
l_iResult = avcodec_open2(l_pAVCodecContext, l_pCodec, NULL);
if (l_iResult >= 0)
{
l_pAVFrame = av_frame_alloc();
l_pAVFrameBGR = av_frame_alloc();
l_iNumBytes = avpicture_get_size(PIX_FMT_BGR24,
l_pAVCodecContext->width,
l_pAVCodecContext->height);
l_puiBuffer = (uint8_t *)av_malloc(l_iNumBytes*sizeof(uint8_t));
avpicture_fill((AVPicture *)l_pAVFrameBGR,
l_puiBuffer,
PIX_FMT_BGR24,
l_pAVCodecContext->width,
l_pAVCodecContext->height);
l_pSWSContext = sws_getContext(
l_pAVCodecContext->width,
l_pAVCodecContext->height,
l_pAVCodecContext->pix_fmt,
l_pAVCodecContext->width,
l_pAVCodecContext->height,
AV_PIX_FMT_BGR24,
SWS_BICUBIC,
NULL,
NULL,
NULL);
while (av_read_frame(l_pAVFormatContext, &l_AVPacket) >= 0)
{
if (l_AVPacket.stream_index == l_iVideoStreamIdx)
{
l_iDecodedBytes = avcodec_decode_video2(
l_pAVCodecContext,
l_pAVFrame,
&l_iGotFrame,
&l_AVPacket);
if (l_iGotFrame)
{
if (l_pSWSContext)
{
l_iResult = sws_scale(
l_pSWSContext,
l_pAVFrame->data,
l_pAVFrame->linesize,
0,
l_pAVCodecContext->height,
l_pAVFrameBGR->data,
l_pAVFrameBGR->linesize);
if (l_iResult > 0)
{
l_cvmImage = cv::Mat(
l_pAVFrame->height,
l_pAVFrame->width,
CV_8UC3,
l_pAVFrameBGR->data[0],
l_pAVFrameBGR->linesize[0]);
if (l_cvmImage.empty() == false)
{
cv::imshow("image", l_cvmImage);
cv::waitKey(1);
}
}
}
l_iFrameCount++;
}
}
}
}
}
}