
Other articles (19)
-
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
The plugin: Podcasts.
14 July 2010. The problem of podcasting is, once again, one that highlights the lack of standardization of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, heavily geared toward the use of iTunes, whose spec is here; and the "Media RSS Module" format, which is more "open" and notably backed by Yahoo and the Miro software.
File types supported in feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...) -
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable MediaSPIP release.
Its official release date is June 21, 2013, and it is announced here.
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)
On other sites (4400)
-
Using ffmpeg and ffserver to create a 2x1 live stream fails with unconnected output error
31 January 2021, by weevilknievel. I want to combine 2 RTSP streams (CCTV cameras) into a horizontal 2x1 strip and convert them to webm for use in an HTML5 webpage.

I am able to convert the streams into an mpeg or avi file easily, but as soon as I try to post it to the ffserver ffm feed, I hit a wall.

Here is my working ffmpeg command, which writes out to a file:

ffmpeg -rtsp_transport tcp -thread_queue_size 12 -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp" -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp" -filter_complex "[0:v][1:v] hstack=inputs=2 [v]" -map "[v]" -r 30 output.mpg

It seems the ffmpeg process is outputting two streams, and one of them is MPEG? In the above command I had to add a frame rate ("-r 30"), otherwise I got an error saying that MPEG-1/2 does not support a frame rate of 5/1.

As soon as I try to stream to ffserver, I get an error:

ffmpeg -rtsp_transport tcp -thread_queue_size 12 -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp" -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp" -filter_complex "[0:v][1:v] hstack=inputs=2 [v]" -map "[v]" -r 30 http://localhost:8090/feed5.ffm

The error I get is "Filter hstack has an unconnected output":

ffmpeg version N-86111-ga441aa90e8-static http://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2017 the FFmpeg developers
 built with gcc 5.4.1 (Debian 5.4.1-8) 20170304
 configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-5 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg
 libavutil 55. 63.100 / 55. 63.100
 libavcodec 57. 96.101 / 57. 96.101
 libavformat 57. 72.101 / 57. 72.101
 libavdevice 57. 7.100 / 57. 7.100
 libavfilter 6. 89.101 / 6. 89.101
 libswscale 4. 7.101 / 4. 7.101
 libswresample 2. 8.100 / 2. 8.100
 libpostproc 54. 6.100 / 54. 6.100
Input #0, rtsp, from 'rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp':
 Duration: N/A, start: 0.166667, bitrate: N/A
 Stream #0:0: Video: h264 (High), yuv420p(progressive), 352x288, 6 fps, 6 tbr, 90k tbn, 180k tbc
Input #1, rtsp, from 'rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp':
 Duration: N/A, start: 0.166667, bitrate: N/A
 Stream #1:0: Video: h264 (High), yuv420p(progressive), 352x288, 6 fps, 6 tbr, 90k tbn, 180k tbc
Filter hstack has an unconnected output

My ffserver conf is very basic:

HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 100000
CustomLog -

<feed>
File /mnt/ramdisk/feed5.ffm
Truncate
FileMaxSize 5M
ACL allow 127.0.0.1
</feed>

<stream>
 Format webm
 Feed feed5.ffm
 VideoCodec libvpx
 VideoSize 640x360
 NoAudio
 AVOptionVideo quality realtime
 StartSendOnKey
 VideoBitRate 140
</stream>

<stream>
Format status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</stream>
<redirect>
URL http://www.ffmpeg.org/
</redirect>

Edit:

Here is the debug output from the ffmpeg command above:

ffmpeg -v debug -rtsp_transport tcp -thread_queue_size 12 -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp" -i "rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp" -filter_complex "[0:v][1:v] hstack=inputs=2 [v]" -map "[v]" -r 30 http://localhost:8090/feed5.ffm
ffmpeg version N-86111-ga441aa90e8-static http://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2017 the FFmpeg developers
 built with gcc 5.4.1 (Debian 5.4.1-8) 20170304
 configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-5 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg
 libavutil 55. 63.100 / 55. 63.100
 libavcodec 57. 96.101 / 57. 96.101
 libavformat 57. 72.101 / 57. 72.101
 libavdevice 57. 7.100 / 57. 7.100
 libavfilter 6. 89.101 / 6. 89.101
 libswscale 4. 7.101 / 4. 7.101
 libswresample 2. 8.100 / 2. 8.100
 libpostproc 54. 6.100 / 54. 6.100
Splitting the commandline.
Reading option '-v' ... matched as option 'v' (set logging level) with argument 'debug'.
Reading option '-rtsp_transport' ... matched as AVOption 'rtsp_transport' with argument 'tcp'.
Reading option '-thread_queue_size' ... matched as option 'thread_queue_size' (set the maximum number of queued packets from the demuxer) with argument '12'.
Reading option '-i' ... matched as input url with argument 'rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp'.
Reading option '-i' ... matched as input url with argument 'rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp'.
Reading option '-filter_complex' ... matched as option 'filter_complex' (create a complex filtergraph) with argument '[0:v][1:v] hstack=inputs=2 [v]'.
Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '[v]'.
Reading option '-r' ... matched as option 'r' (set frame rate (Hz value, fraction or abbreviation)) with argument '30'.
Reading option 'http://localhost:8090/feed5.ffm' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option v (set logging level) with argument debug.
Applying option filter_complex (create a complex filtergraph) with argument [0:v][1:v] hstack=inputs=2 [v].
Successfully parsed a group of options.
Parsing a group of options: input url rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp.
Applying option thread_queue_size (set the maximum number of queued packets from the demuxer) with argument 12.
Successfully parsed a group of options.
Opening an input file: rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp.
[tcp @ 0x58fb5e0] No default whitelist set
[rtsp @ 0x58f9700] SDP:
v=0
o=- 38990265062388 38990265062388 IN IP4 192.168.1.132
a=range:npt=0-
m=video 0 RTP/AVP 96
c=IN IP4 0.0.0.0
a=rtpmap:96 H264/90000
a=framerate:0S
a=fmtp:96 profile-level-id=640014; packetization-mode=1; sprop-parameter-sets=Z2QAFK2EAQwgCGEAQwgCGEAQwgCEK1CwSyA=,aO48sA==
a=control:trackID=3

Failed to parse interval end specification ''
[rtsp @ 0x58f9700] video codec set to: h264
[rtsp @ 0x58f9700] RTP Profile IDC: 64 Profile IOP: 0 Level: 14
[rtsp @ 0x58f9700] RTP Packetization Mode: 1
[rtsp @ 0x58f9700] Extradata set to 0x58fb940 (size: 38)
[rtsp @ 0x58f9700] setting jitter buffer size to 0
[rtsp @ 0x58f9700] hello state=0
[h264 @ 0x58fca00] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x58fca00] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x58fca00] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x58fca00] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x58fca00] unknown SEI type 229
[h264 @ 0x58fca00] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x58fca00] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x58fca00] nal_unit_type: 6, nal_ref_idc: 0
[h264 @ 0x58fca00] nal_unit_type: 5, nal_ref_idc: 3
[h264 @ 0x58fca00] unknown SEI type 229
[h264 @ 0x58fca00] Reinit context to 352x288, pix_fmt: yuv420p
[h264 @ 0x58fca00] nal_unit_type: 1, nal_ref_idc: 3
 Last message repeated 5 times
[h264 @ 0x58fca00] unknown SEI type 229
 Last message repeated 1 times
[rtsp @ 0x58f9700] All info found
[rtsp @ 0x58f9700] Setting avg frame rate based on r frame rate
Input #0, rtsp, from 'rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=1&stream=1.sdp':
 Duration: N/A, start: 0.166667, bitrate: N/A
 Stream #0:0, 28, 1/90000: Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 352x288, 0/1, 6 fps, 6 tbr, 90k tbn, 180k tbc
Successfully opened the file.
Parsing a group of options: input url rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp.
Successfully parsed a group of options.
Opening an input file: rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp.
[tcp @ 0x59f4280] No default whitelist set
[rtsp @ 0x59f7620] SDP:
v=0
o=- 38990265062388 38990265062388 IN IP4 192.168.1.132
a=range:npt=0-
m=video 0 RTP/AVP 96
c=IN IP4 0.0.0.0
a=rtpmap:96 H264/90000
a=framerate:0S
a=fmtp:96 profile-level-id=640014; packetization-mode=1; sprop-parameter-sets=Z2QAFK2EAQwgCGEAQwgCGEAQwgCEK1CwSyA=,aO48sA==
a=control:trackID=3

Failed to parse interval end specification ''
[rtsp @ 0x59f7620] video codec set to: h264
[rtsp @ 0x59f7620] RTP Profile IDC: 64 Profile IOP: 0 Level: 14
[rtsp @ 0x59f7620] RTP Packetization Mode: 1
[rtsp @ 0x59f7620] Extradata set to 0x59f3670 (size: 38)
[rtp @ 0x59f62a0] No default whitelist set
[udp @ 0x58fb6a0] No default whitelist set
[udp @ 0x58fb6a0] end receive buffer size reported is 131072
[udp @ 0x591b940] No default whitelist set
[udp @ 0x591b940] end receive buffer size reported is 131072
[rtsp @ 0x59f7620] setting jitter buffer size to 500
[rtsp @ 0x59f7620] hello state=0
[h264 @ 0x59e4b20] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x59e4b20] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x59e4b20] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x59e4b20] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x59e4b20] unknown SEI type 229
[h264 @ 0x59e4b20] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x59e4b20] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x59e4b20] nal_unit_type: 6, nal_ref_idc: 0
[h264 @ 0x59e4b20] nal_unit_type: 5, nal_ref_idc: 3
[h264 @ 0x59e4b20] unknown SEI type 229
[h264 @ 0x59e4b20] Reinit context to 352x288, pix_fmt: yuv420p
[h264 @ 0x59e4b20] nal_unit_type: 1, nal_ref_idc: 3
 Last message repeated 5 times
[h264 @ 0x59e4b20] unknown SEI type 229
 Last message repeated 1 times
[rtsp @ 0x59f7620] All info found
[rtsp @ 0x59f7620] Setting avg frame rate based on r frame rate
Input #1, rtsp, from 'rtsp://192.168.1.132:554/user=admin&password=mypassword&channel=2&stream=1.sdp':
 Duration: N/A, start: 0.166667, bitrate: N/A
 Stream #1:0, 28, 1/90000: Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 352x288, 0/1, 6 fps, 6 tbr, 90k tbn, 180k tbc
Successfully opened the file.
detected 2 logical cores
[Parsed_hstack_0 @ 0x59f7cc0] Setting 'inputs' to value '2'
Parsing a group of options: output url http://localhost:8090/feed5.ffm.
Applying option map (set input stream mapping) with argument [v].
Applying option r (set frame rate (Hz value, fraction or abbreviation)) with argument 30.
Successfully parsed a group of options.
Opening an output file: http://localhost:8090/feed5.ffm.
[http @ 0x59aa700] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
[http @ 0x59aa700] request: GET /feed5.ffm HTTP/1.1
User-Agent: Lavf/57.72.101
Accept: */*
Range: bytes=0-
Connection: close
Host: localhost:8090
Icy-MetaData: 1


[ffm @ 0x5917d00] Format ffm probed with size=2048 and score=101
[NULL @ 0x5bce3e0] Setting entry with key 'video_size' to value '640x360'
[NULL @ 0x5bce3e0] Setting entry with key 'b' to value '140000'
[NULL @ 0x5bce3e0] Setting entry with key 'time_base' to value '1/5'
[NULL @ 0x5bce3e0] Setting entry with key 'bt' to value '35000'
[NULL @ 0x5bce3e0] Setting entry with key 'rc_eq' to value 'tex^qComp'
[NULL @ 0x5bce3e0] Setting entry with key 'maxrate' to value '280000'
[NULL @ 0x5bce3e0] Setting entry with key 'bufsize' to value '280000'
[AVIOContext @ 0x5915b80] Statistics: 4096 bytes read, 0 seeks
[http @ 0x5915f00] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
[http @ 0x5915f00] request: POST /feed5.ffm HTTP/1.1
Transfer-Encoding: chunked
User-Agent: Lavf/57.72.101
Accept: */*
Connection: close
Host: localhost:8090
Icy-MetaData: 1


Successfully opened the file.
Filter hstack has an unconnected output
[AVIOContext @ 0x59ab080] Statistics: 0 seeks, 0 writeouts

I am stumped - any clues?

Thanks in advance.

-
Video rotated 90 degrees to the left when converted using ffmpeg
15 July 2017, by Herdesh Verma. I developed the code below:
extern "C"
{
#include <libavutil/imgutils.h>
#include <libavutil/opt.h>
#include <libavcodec/avcodec.h>
#include <libavutil/mathematics.h>
#include <libavutil/samplefmt.h>
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
#include <libavfilter/avfiltergraph.h>
#include <libswscale/swscale.h>
}
#include
static AVFormatContext *fmt_ctx = NULL;
static int frame_index = 0;
static int j = 0, nbytes=0;
uint8_t *video_outbuf = NULL;
static AVPacket *pAVPacket=NULL;
static int value=0;
static AVFrame *pAVFrame=NULL;
static AVFrame *outFrame=NULL;
static AVStream *video_st=NULL;
static AVFormatContext *outAVFormatContext=NULL;
static AVCodec *outAVCodec=NULL;
static AVOutputFormat *output_format=NULL;
static AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;
static AVCodecContext *outAVCodecContext=NULL;
static int width, height;
static enum AVPixelFormat pix_fmt;
static AVStream *video_stream = NULL, *audio_stream = NULL;
static const char *src_filename = NULL;
static const char *video_dst_filename = NULL;
static const char *audio_dst_filename = NULL;
static FILE *video_dst_file = NULL;
static FILE *audio_dst_file = NULL;
static uint8_t *video_dst_data[4] = {NULL};
static int video_dst_linesize[4];
static int video_dst_bufsize;
static int video_stream_idx = -1, audio_stream_idx = -1;
static AVPacket *pkt=NULL;
static AVPacket *pkt1=NULL;
static AVFrame *frame = NULL;
//static AVPacket pkt;
static int video_frame_count = 0;
static int audio_frame_count = 0;
static int refcount = 0;
AVCodec *codec;
static struct SwsContext *sws_ctx;
AVCodecContext *c= NULL;
int i, out_size, size, x, y, outbuf_size;
AVFrame *picture;
uint8_t *outbuf, *picture_buf;
int video_outbuf_size;
int w, h;
AVPixelFormat pixFmt;
uint8_t *data[4];
int linesize[4];
static int open_codec_context(int *stream_idx,
AVCodecContext **dec_ctx, AVFormatContext
*fmt_ctx, enum AVMediaType type)
{
int ret, stream_index;
AVStream *st;
AVCodec *dec = NULL;
AVDictionary *opts = NULL;
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
if (ret < 0) {
printf("Could not find %s stream in input file '%s'\n",
av_get_media_type_string(type), src_filename);
return ret;
} else {
stream_index = ret;
st = fmt_ctx->streams[stream_index];
/* find decoder for the stream */
dec = avcodec_find_decoder(st->codecpar->codec_id);
if (!dec) {
printf("Failed to find %s codec\n",
av_get_media_type_string(type));
return AVERROR(EINVAL);
}
/* Allocate a codec context for the decoder */
*dec_ctx = avcodec_alloc_context3(dec);
if (!*dec_ctx) {
printf("Failed to allocate the %s codec context\n",
av_get_media_type_string(type));
return AVERROR(ENOMEM);
}
/* Copy codec parameters from input stream to output codec context */
if ((ret = avcodec_parameters_to_context(*dec_ctx, st->codecpar)) < 0) {
printf("Failed to copy %s codec parameters to decoder context\n",
av_get_media_type_string(type));
return ret;
}
/* Init the decoders, with or without reference counting */
av_dict_set(&opts, "refcounted_frames", refcount ? "1" : "0", 0);
if ((ret = avcodec_open2(*dec_ctx, dec, &opts)) < 0) {
printf("Failed to open %s codec\n",
av_get_media_type_string(type));
return ret;
}
*stream_idx = stream_index;
}
return 0;
}
int main (int argc, char **argv)
{
int ret = 0, got_frame;
src_filename = argv[1];
video_dst_filename = argv[2];
audio_dst_filename = argv[3];
av_register_all();
avcodec_register_all();
printf("Registered all\n");
/* open input file, and allocate format context */
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
printf("Could not open source file %s\n", src_filename);
exit(1);
}
/* retrieve stream information */
if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
printf("Could not find stream information\n");
exit(1);
}
if (open_codec_context(&video_stream_idx, &video_dec_ctx, fmt_ctx,
AVMEDIA_TYPE_VIDEO) >= 0) {
video_stream = fmt_ctx->streams[video_stream_idx];
avformat_alloc_output_context2(&outAVFormatContext, NULL, NULL,
video_dst_filename);
if (!outAVFormatContext)
{
printf("\n\nError : avformat_alloc_output_context2()");
return -1;
}
}
if (open_codec_context(&audio_stream_idx, &audio_dec_ctx, fmt_ctx,
AVMEDIA_TYPE_AUDIO) >= 0) {
audio_stream = fmt_ctx->streams[audio_stream_idx];
audio_dst_file = fopen(audio_dst_filename, "wb");
if (!audio_dst_file) {
printf("Could not open destination file %s\n", audio_dst_filename);
ret = 1;
goto end;
}
}
/* dump input information to stderr */
av_dump_format(fmt_ctx, 0, src_filename, 0);
if (!audio_stream && !video_stream) {
printf("Could not find audio or video stream in the input, aborting\n");
ret = 1;
goto end;
}
output_format = av_guess_format(NULL, video_dst_filename, NULL);
if( !output_format )
{
printf("\n\nError : av_guess_format()");
return -1;
}
video_st = avformat_new_stream(outAVFormatContext ,NULL);
if( !video_st )
{
printf("\n\nError : avformat_new_stream()");
return -1;
}
outAVCodecContext = avcodec_alloc_context3(outAVCodec);
if( !outAVCodecContext )
{
printf("\n\nError : avcodec_alloc_context3()");
return -1;
}
outAVCodecContext = video_st->codec;
outAVCodecContext->codec_id = AV_CODEC_ID_MPEG4; // or AV_CODEC_ID_H264 // AV_CODEC_ID_MPEG1VIDEO
outAVCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
outAVCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
outAVCodecContext->bit_rate = 400000; // 2500000
outAVCodecContext->width = 1920;
//outAVCodecContext->width = 500;
outAVCodecContext->height = 1080;
//outAVCodecContext->height = 500;
outAVCodecContext->gop_size = 3;
outAVCodecContext->max_b_frames = 2;
outAVCodecContext->time_base.num = 1;
outAVCodecContext->time_base.den = 30; // 15fps
if (outAVCodecContext->codec_id == AV_CODEC_ID_H264)
{
av_opt_set(outAVCodecContext->priv_data, "preset", "slow", 0);
}
outAVCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
if( !outAVCodec )
{
printf("\n\nError : avcodec_find_encoder()");
return -1;
}
/* Some container formats (like MP4) require global headers to be present.
   Mark the encoder so that it behaves accordingly. */
if ( outAVFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
{
outAVCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
value = avcodec_open2(outAVCodecContext, outAVCodec, NULL);
if( value < 0)
{
printf("\n\nError : avcodec_open2()");
return -1;
}
/* create empty video file */
if ( !(outAVFormatContext->flags & AVFMT_NOFILE) )
{
if( avio_open2(&outAVFormatContext->pb , video_dst_filename,
AVIO_FLAG_WRITE ,NULL, NULL) < 0 )
{
printf("\n\nError : avio_open2()");
}
}
if(!outAVFormatContext->nb_streams)
{
printf("\n\nError : Output file does not contain any stream");
return -1;
}
/* imp: mp4 and some other advanced container formats require header information */
value = avformat_write_header(outAVFormatContext , NULL);
if(value < 0)
{
printf("\n\nError : avformat_write_header()");
return -1;
}
printf("\n\nOutput file information :\n\n");
av_dump_format(outAVFormatContext , 0 ,video_dst_filename ,1);
int flag;
int frameFinished;
value = 0;
pAVPacket = (AVPacket *)av_malloc(sizeof(AVPacket));
av_init_packet(pAVPacket);
pAVFrame = av_frame_alloc();
if( !pAVFrame )
{
printf("\n\nError : av_frame_alloc()");
return -1;
}
outFrame = av_frame_alloc(); // Allocate an AVFrame and set its fields to default values.
if( !outFrame )
{
printf("\n\nError : av_frame_alloc()");
return -1;
}
nbytes = av_image_get_buffer_size(outAVCodecContext->pix_fmt, outAVCodecContext->width, outAVCodecContext->height, 32);
video_outbuf = (uint8_t*)av_malloc(nbytes);
if( video_outbuf == NULL )
{
printf("\n\nError : av_malloc()");
}
value = av_image_fill_arrays(outFrame->data, outFrame->linesize, video_outbuf, AV_PIX_FMT_YUV420P, outAVCodecContext->width, outAVCodecContext->height, 1); // returns the size in bytes required for src
if(value < 0)
{
printf("\n\nError : av_image_fill_arrays()");
}
SwsContext* swsCtx_ ;
// Allocate and return swsContext.
// a pointer to an allocated context, or NULL in case of error
// Deprecated : Use sws_getCachedContext() instead.
swsCtx_ = sws_getContext(video_dec_ctx->width,
video_dec_ctx->height,
video_dec_ctx->pix_fmt,
video_dec_ctx->width,
video_dec_ctx->height,
video_dec_ctx->pix_fmt,
SWS_BICUBIC, NULL, NULL, NULL);
AVPacket outPacket;
int got_picture;
while( av_read_frame( fmt_ctx , pAVPacket ) >= 0 )
{
if(pAVPacket->stream_index == video_stream_idx)
{
value = avcodec_decode_video2(video_dec_ctx, pAVFrame, &frameFinished, pAVPacket);
if( value < 0)
{
printf("Error : avcodec_decode_video2()");
}
if(frameFinished)// Frame successfully decoded :)
{
sws_scale(swsCtx_, pAVFrame->data, pAVFrame->linesize, 0, video_dec_ctx->height, outFrame->data, outFrame->linesize);
// sws_scale(swsCtx_, pAVFrame->data, pAVFrame->linesize, 0, video_dec_ctx->height, outFrame->data, outFrame->linesize);
av_init_packet(&outPacket);
outPacket.data = NULL; // packet data will be allocated by the encoder
outPacket.size = 0;
avcodec_encode_video2(outAVCodecContext, &outPacket, outFrame, &got_picture);
if(got_picture)
{
if(outPacket.pts != AV_NOPTS_VALUE)
    outPacket.pts = av_rescale_q(outPacket.pts, video_st->codec->time_base, video_st->time_base);
if(outPacket.dts != AV_NOPTS_VALUE)
    outPacket.dts = av_rescale_q(outPacket.dts, video_st->codec->time_base, video_st->time_base);
printf("Write frame %3d (size= %2d)\n", j++, outPacket.size/1000);
if(av_write_frame(outAVFormatContext, &outPacket) != 0)
{
    printf("\n\nError : av_write_frame()");
}
av_packet_unref(&outPacket);
} // got_picture
av_packet_unref(&outPacket);
} // frameFinished
}
}// End of while-loop
value = av_write_trailer(outAVFormatContext);
if( value < 0)
{
printf("\n\nError : av_write_trailer()");
}
//THIS WAS ADDED LATER
av_free(video_outbuf);
end:
avcodec_free_context(&video_dec_ctx);
avcodec_free_context(&audio_dec_ctx);
avformat_close_input(&fmt_ctx);
if (video_dst_file)
fclose(video_dst_file);
if (audio_dst_file)
fclose(audio_dst_file);
//av_frame_free(&frame);
av_free(video_dst_data[0]);
return ret < 0;
}

The problem with the above code is that it rotates the video 90 degrees to the left.
Snapshot of the video given as input to the above program.
Snapshot of the output video: it is rotated 90 degrees to the left.
I compiled the program using the command below:
g++ -D__STDC_CONSTANT_MACROS -Wall -g ScreenRecorder.cpp -I/home/harry/Documents/compressor/ffmpeg-3.3/ -I/root/android-ndk-r14b/platforms/android-21/arch-x86_64/usr/include/ -c -o ScreenRecorder.o -w
And linked it using the command below:
g++ -Wall -g ScreenRecorder.o -I/home/harry/Documents/compressor/ffmpeg-3.3/ -I/root/android-ndk-r14b/platforms/android-21/arch-x86_64/usr/include/ -L/usr/lib64 -L/lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7/ -L/home/harry/Documents/compressor/ffmpeg-3.3/ffmpeg-build -L/root/android-ndk-r14b/platforms/android-21/arch-x86_64/usr/lib64 -o ScreenRecorder.exe -lavformat -lavcodec -lavutil -lavdevice -lavfilter -lswscale -lx264 -lswresample -lm -lpthread -ldl -lstdc++ -lc -lrt
The program is run using the command below:
./ScreenRecorder.exe vertical.MOV videoH.mp4 audioH.mp3
Note:
- The source video was taken from an iPhone and is in .mov format.
- The output video is stored in an .mp4 file.

Can anyone please tell me why it is rotating the video by 90 degrees?
One thing I noticed in the dump is shown below:
Duration: 00:00:06.04, start: 0.000000, bitrate: 17087 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 17014 kb/s, 29.98 fps, 29.97 tbr, 600 tbn, 1200 tbc (default)
Metadata:
rotate : 90
creation_time : 2017-07-09T10:56:42.000000Z
handler_name : Core Media Data Handler
encoder : H.264
Side data:
 displaymatrix: rotation of -90.00 degrees

It says:

 displaymatrix: rotation of -90.00 degrees
Is it responsible for rotating the video by 90 degrees?
-
KLV data in RTP stream
18 September 2013, by Ardoramor. I have implemented RFC 6597 to stream KLV in RTP SMPTE 336M packets. Currently, my SDP looks like this:
v=2
o=- 0 0 IN IP4 127.0.0.1
s=Unnamed
i=N/A
c=IN IP4 192.168.1.6
t=0 0
a=recvonly
m=video 8202 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1;profile-level-id=428028;sprop-parameter-sets=Z0KAKJWgKA9E,aM48gA==;
a=control:trackID=0
m=application 8206 RTP/AVP 97
a=rtpmap:97 smpte336m/1000
a=control:trackID=1

I try to remux the RTP stream with FFmpeg like so:
ffmpeg.exe -i test.sdp -map 0:0 -map 0:1 -c:v copy -c:d copy test.m2ts
I get the following output from FFmpeg:
ffmpeg version 1.2 Copyright (c) 2000-2013 the FFmpeg developers
built on Mar 28 2013 00:34:08 with gcc 4.8.0 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
libavutil 52. 18.100 / 52. 18.100
libavcodec 54. 92.100 / 54. 92.100
libavformat 54. 63.104 / 54. 63.104
libavdevice 54. 3.103 / 54. 3.103
libavfilter 3. 42.103 / 3. 42.103
libswscale 2. 2.100 / 2. 2.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 2.100 / 52. 2.100
[aac @ 0000000002137900] Sample rate index in program config element does not match the sample rate index configured by the container.
Last message repeated 1 times
[aac @ 0000000002137900] decode_pce: Input buffer exhausted before END element found
[h264 @ 00000000002ce540] Missing reference picture, default is 0
[h264 @ 00000000002ce540] decode_slice_header error
[sdp @ 00000000002cfa80] Estimating duration from bitrate, this may be inaccurate
Input #0, sdp, from 'C:\Users\dragan\Documents\Workspace\Android\uvlens\tests\test.sdp':
Metadata:
title : Unnamed
comment : N/A
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Audio: aac, 32000 Hz, 58 channels, fltp
Stream #0:1: Video: h264 (Baseline), yuv420p, 640x480, 14.83 tbr, 90k tbn, 180k tbc
Stream #0:2: Data: none
File 'C:\Users\dragan\Documents\Workspace\Android\uvlens\tests\test.m2ts' already exists. Overwrite ? [y/N] y
Output #0, mpegts, to 'C:\Users\dragan\Documents\Workspace\Android\uvlens\tests\test.m2ts':
Metadata:
title : Unnamed
comment : N/A
encoder : Lavf54.63.104
Stream #0:0: Video: h264, yuv420p, 640x480, q=2-31, 90k tbn, 90k tbc
Stream #0:1: Data: none
Stream mapping:
Stream #0:1 -> #0:0 (copy)
Stream #0:2 -> #0:1 (copy)
Press [q] to stop, [?] for help
[mpegts @ 0000000002159940] Application provided invalid, non monotonically increasing dts to muxer in stream 1: 8583659665 >= 8583656110
av_interleaved_write_frame(): Invalid argument

The problem is that KLV stream packets do not contain a DTS field. According to RFC 6597 (SMPTE 336M), the RTP packet structure is the same as the standard structure:
4.1. RTP Header Usage
This payload format uses the RTP packet header fields as described in
the table below:
+-----------+-------------------------------------------------------+
| Field | Usage |
+-----------+-------------------------------------------------------+
| Timestamp | The RTP Timestamp encodes the instant along a |
| | presentation timeline that the entire KLVunit encoded |
| | in the packet payload is to be presented. When one |
| | KLVunit is placed in multiple RTP packets, the RTP |
| | timestamp of all packets comprising that KLVunit MUST |
| | be the same. The timestamp clock frequency is |
| | defined as a parameter to the payload format |
| | (Section 6). |
| | |
| M-bit | The RTP header marker bit (M) is used to demarcate |
| | KLVunits. Senders MUST set the marker bit to '1' for |
| | any RTP packet that contains the final byte of a |
| | KLVunit. For all other packets, senders MUST set the |
| | RTP header marker bit to '0'. This allows receivers |
| | to pass a KLVunit for parsing/decoding immediately |
| | upon receipt of the last RTP packet comprising the |
| | KLVunit. Without this, a receiver would need to wait |
| | for the next RTP packet with a different timestamp to |
| | arrive, thus signaling the end of one KLVunit and the |
| | start of another. |
+-----------+-------------------------------------------------------+
 The remaining RTP header fields are used as specified in [RFC3550].

The header from RFC 3550:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|V=2|P|X| CC |M| PT | sequence number |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| timestamp |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| synchronization source (SSRC) identifier |
+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
| contributing source (CSRC) identifiers |
| .... |
The RFC's note about the placement of KLV data into an RTP packet:
KLVunits small enough to fit into a single RTP
packet (RTP packet size is up to the implementation but should
consider underlying transport/network factors such as MTU
limitations) are placed directly into the payload of the RTP packet,
with the first byte of the KLVunit (which is the first byte of a KLV
Universal Label Key) being the first byte of the RTP packet payload.My question is where does FFmpeg keep looking for the DTS ?
Does it interpret the Timestamp field of the RTP packet header as DTS ? If so, I've verified that the timestamps increase (although at different rates) but are not equal to what FFmpeg prints out :
8583659665 >= 8583656110