
Advanced search
Other articles (84)
-
Media quality after processing
21 June 2013. Correctly configuring the software that processes media matters for striking a balance between the parties involved (the host's bandwidth, media quality for the editor and the visitor, accessibility for the visitor). How should you set the quality of your media?
The higher the media quality, the more bandwidth is used, and a visitor on a slow internet connection will have to wait longer. Conversely, the lower the quality, the more degraded the media becomes (...)
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Customising by adding your logo, banner or background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
On other sites (8533)
-
MySQL stops running in combination with Laravel Queue, Supervisor, and FFmpeg
13 June 2014, by egekhter. After setting up a queue listener to process uploaded videos with FFmpeg, I've come back to the server several times to find that MySQL has stopped running. I checked drive space and it's about 77% used (43G out of 60G).
Here’s my code in case it’s useful :
public function fire($job, $data)
{
    $data = json_decode($data['transcoding_message'], true);
    $output_directory = '/home/ubuntu/transcodes/';
    $amazon_array = array();
    $s3 = AWS::get('s3');

    // execute main transcoding thread
    // ($sq_width, $sq_height and $seek_half are set elsewhere in the class)
    $cmd = 'sudo ffmpeg -i ' . $data['temp_file_url'] . ' -y -vcodec libx264 -tune zerolatency -movflags faststart -crf 20 -profile:v main -level:v 3.1 -acodec libfdk_aac -b:a 256k ' . $output_directory . $data['temp_file_key'] . '_HQ.mp4 -vcodec libx264 -s ' . $sq_width . 'x' . $sq_height . ' -tune zerolatency -movflags faststart -crf 25 -profile:v main -level:v 3.1 -acodec libfdk_aac -b:a 256k ' . $output_directory . $data['temp_file_key'] . '_SQ.mp4 -ss ' . $seek_half . ' -f image2 -vf scale=iw/2:-1 -vframes 1 ' . $output_directory . $data['temp_file_key'] . '_thumb.jpg';
    exec($cmd . " 2>&1", $out, $ret);

    if ($ret) {
        // there was a problem
        Log::error($cmd);
        echo 'Processing error' . PHP_EOL;
        return;
    } else {
        // set up file urls
        echo 'about to move files';
        $hq_url = $this->bucket_root . $data['user_id'] . '/' . $data['temp_file_key'] . '_HQ.mp4';
        $sq_url = $this->bucket_root . $data['user_id'] . '/' . $data['temp_file_key'] . '_SQ.mp4';
        $thumb_url = $this->bucket_root . $data['user_id'] . '/' . $data['temp_file_key'] . '_thumb.jpg';

        $amazon_array['video_hq_url'] = $data['temp_file_key'] . '_HQ.mp4';
        $amazon_array['video_sq_url'] = $data['temp_file_key'] . '_SQ.mp4';
        $amazon_array['video_thumb_url'] = $data['temp_file_key'] . '_thumb.jpg';

        // copy from temp to permanent
        foreach ($amazon_array as $k => $f) {
            $uploader = UploadBuilder::newInstance()
                ->setClient($s3)
                ->setSource($output_directory . $f)
                ->setBucket($this->bucket)
                ->setKey('users/' . $data['user_id'] . '/' . $f)
                ->setConcurrency(10)
                ->setOption('ACL', 'public-read')
                ->build();

            $uploader->getEventDispatcher()->addListener(
                'multipart_upload.after_part_upload',
                function ($event) use ($f) {
                    // Do whatever you want
                }
            );

            try {
                $uploader->upload();
                echo "{$k} => Upload complete.\n" . PHP_EOL;
                DB::table('items')->where('id', $data['item_id'])->update(array($k => $this->bucket_root . $data['user_id'] . '/' . $f, 'deleted_at' => NULL));
                // delete local copy
                unlink($output_directory . $f);
                unset($uploader);
            } catch (MultipartUploadException $e) {
                $uploader->abort();
                echo "{$k} => Upload failed.\n" . PHP_EOL;
                continue;
            }
        }

        // write to database
        DB::table('archives_items')->where('id', $data['archive_item_id'])->update(array('deleted_at' => NULL));
        DB::connection('mysql3')->table('video_processing')->where('id', $data['id'])->update(array('finished_processing' => 1));

        // delete the source object from the temp S3 bucket
        $s3->deleteObject(
            array(
                'Bucket' => $this->temp_bucket,
                'Key' => $data['file_name']
            )
        );
        echo $data['temp_file_url'] . '=>' . " deleted from temp bucket.\n" . PHP_EOL;
        DB::connection('mysql3')->table('video_processing')->where('id', $data['id'])->update(array('deleted_at' => \Carbon\Carbon::now()));
    }

    $job->delete();
    // end of processing uploaded video
    // (the snippet as posted had a dangling else { return; } with no matching if,
    // apparently left over from an outer conditional lost in posting; removed here)
}

Any ideas as to why MySQL would die like that?
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
Edit: I wanted to add that the php artisan queue:listen command is triggered via Supervisor and that I have 4 concurrent worker processes running.
-
How to encode images to an h264 stream using the ffmpeg C API? [on hold]
8 September 2017, by Tarhan. Edit:
Is it possible to create a series of H264 packets (complete NALs with the start code 0x00 0x00 0x00 0x01) in FFmpeg, entirely in memory, without using libx264 directly to encode the frames?
If so, how do I set up FFmpeg's format context correctly so that I can use an AVPacket's data without writing to a file? As described below (sorry for the long explanation), when I use only a codec context and encode frames with avcodec_send_frame/avcodec_receive_packet, FFmpeg produces odd output: at least the first packet(s) contain a header that looks like an FFmpeg logo message (hex dump provided below).
Is it possible to receive packets which contain only NALs? I think the FFmpeg binary produces the expected output when I encode to a file with a .h264 extension. So, bottom line, I need to set up a format context and a codec context that reproduce the FFmpeg binary's behaviour, but in memory only.

Original long explanation:
I have several devices which I cannot modify and for which I do not have the complete source code.
The devices receive a UDP stream in a custom format; some UDP frames contain H264 video inside a custom header wrapper.
After I unwrap the packets I have a list of complete H264 NALs (all video payload packets start with 0x00 0x00 0x00 0x01).
My test decoder, similar to the one in the devices, looks like this:
Init:

H264Parser::H264Parser(IMatConsumer& receiver) :
    _receiver(receiver)
{
    av_register_all();
    _codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    _codecContext = avcodec_alloc_context3(_codec);
    _codecContext->refcounted_frames = 0;
    _codecContext->bit_rate = 0;
    _codecContext->flags |= CODEC_FLAG_INPUT_PRESERVED | CODEC_FLAG_LOW_DELAY | CODEC_FLAG_LOOP_FILTER;
    if (_codec->capabilities & CODEC_CAP_TRUNCATED)
        _codecContext->flags |= CODEC_FLAG_TRUNCATED;
    _codecContext->flags2 |= CODEC_FLAG2_CHUNKS | CODEC_FLAG2_NO_OUTPUT | CODEC_FLAG2_FAST;
    _codecContext->flags2 |= CODEC_FLAG2_DROP_FRAME_TIMECODE | CODEC_FLAG2_IGNORE_CROP | CODEC_FLAG2_SHOW_ALL;
    _codecContext->pix_fmt = AV_PIX_FMT_YUV420P;
    _codecContext->field_order = AV_FIELD_UNKNOWN;
    _codecContext->request_sample_fmt = AV_SAMPLE_FMT_NONE;
    _codecContext->workaround_bugs = FF_BUG_AUTODETECT;
    _codecContext->strict_std_compliance = FF_COMPLIANCE_NORMAL;
    _codecContext->error_concealment = FF_EC_DEBLOCK;
    _codecContext->idct_algo = FF_IDCT_AUTO;
    _codecContext->thread_count = 0;
    _codecContext->thread_type = FF_THREAD_FRAME;
    _codecContext->thread_safe_callbacks = 0;
    _codecContext->skip_loop_filter = AVDISCARD_DEFAULT;
    _codecContext->skip_idct = AVDISCARD_DEFAULT;
    _codecContext->skip_frame = AVDISCARD_DEFAULT;
    _codecContext->pkt_timebase.num = 1;
    _codecContext->pkt_timebase.den = -1;
    if (avcodec_open2(_codecContext, _codec, nullptr) != 0) {
        L_ERROR("Could not open codec");
    }
    L_INFO("H264 codec opened successfully");

    _frame = av_frame_alloc();
    if (_frame == nullptr) {
        L_ERROR("Could not allocate single frame");
    }
    _rgbFrame = av_frame_alloc();
    int frameBytesCount = avpicture_get_size(AV_PIX_FMT_BGR24, INITIAL_PICTURE_WIDTH, INITIAL_PICTURE_HEIGHT);
    _buffer = (uint8_t*)av_malloc(frameBytesCount); // the original multiplied by sizeof(frameBytesCount), which only over-allocates
    avpicture_fill((AVPicture*)_rgbFrame, _buffer, AV_PIX_FMT_BGR24, INITIAL_PICTURE_WIDTH, INITIAL_PICTURE_HEIGHT);

    _packet.dts = AV_NOPTS_VALUE;
    _packet.stream_index = 0;
    _packet.flags = 0;
    _packet.side_data = nullptr;
    _packet.side_data_elems = 0;
    _packet.duration = 0;
    _packet.pos = -1;
    _packet.convergence_duration = AV_NOPTS_VALUE;

    if (avpicture_alloc(&_rgbPicture, AV_PIX_FMT_BGR24, INITIAL_PICTURE_WIDTH, INITIAL_PICTURE_HEIGHT) != 0) {
        L_ERROR("Could not allocate RGB picture");
    }
    _width = INITIAL_PICTURE_WIDTH;
    _height = INITIAL_PICTURE_HEIGHT;
    _convertContext = sws_getContext(INITIAL_PICTURE_WIDTH, INITIAL_PICTURE_HEIGHT, AV_PIX_FMT_YUV420P,
                                     INITIAL_PICTURE_WIDTH, INITIAL_PICTURE_HEIGHT, AV_PIX_FMT_BGR24,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    if (_convertContext == nullptr) {
        L_ERROR("Failed to initialize SWS convert context");
    }
    _skipBad = false;
    _initialized = true;
}

Decoding NALs received from the unwrapper:
void H264Parser::handle(const uint8_t* nalUnit, int size)
{
    static int packetIndex = 0;
    bool result = false;
    if (!_initialized)
        return;

    _packet.buf = nullptr;
    _packet.pts = packetIndex;
    _packet.data = (uint8_t*)nalUnit;
    _packet.size = size;

    int frameFinished = 0;
    int length = avcodec_decode_video2(_codecContext, _frame, &frameFinished, &_packet);
    if (_skipBad) {
        L_ERROR("We should not skip bad frames");
    }

    int width = 0;
    int height = 0;
    if (((_frame->pict_type == AV_PICTURE_TYPE_I) ||
         (_frame->pict_type == AV_PICTURE_TYPE_P) ||
         (_frame->pict_type == AV_PICTURE_TYPE_B)) &&
        (length > 0) && (frameFinished > 0)) {
        L_DEBUG("Found picture type: %d", _frame->pict_type);
        if ((_codecContext->width != _width) && (_codecContext->height != _height)) {
            if (_convertContext != nullptr) {
                sws_freeContext(_convertContext);
                _convertContext = nullptr;
            }
            _convertContext = sws_getContext(_codecContext->width, _codecContext->height, AV_PIX_FMT_YUV420P,
                                             _codecContext->width, _codecContext->height, AV_PIX_FMT_BGR24,
                                             SWS_BILINEAR, nullptr, nullptr, nullptr);
            if (_convertContext == nullptr) {
                L_ERROR("Could not create SWS convert context for new width and height");
                return;
            }
            avpicture_free(&_rgbPicture);
            if (avpicture_alloc(&_rgbPicture, AV_PIX_FMT_BGR24, _codecContext->width, _codecContext->height) != 0) {
                L_ERROR("Could not allocate picture for new width and height");
            }
            _width = _codecContext->width;
            _height = _codecContext->height;
        }
        if (sws_scale(_convertContext, _frame->data, _frame->linesize, 0, _codecContext->height, _rgbPicture.data, _rgbPicture.linesize) == _codecContext->height) {
            width = _codecContext->width;
            height = _codecContext->height;
            cv::Mat mat(height, width, CV_8UC3, _rgbPicture.data[0], _rgbPicture.linesize[0]);
            _receiver.onImage(mat);
        }
    }
}

This works and correctly decodes images from the existing encoding devices.
P.S.: There is a small issue where FFmpeg prints the warning "[h264 @ 00000000024ad860] data partitioning is not implemented." to the console, but I suppose that is a problem with the encoding devices.

Now for the question part.
I need to create another encoding device with settings compatible with the decoder described above.
Tutorials and other Stack Overflow questions mostly cover writing an H264 stream to a file or sending it directly over UDP without custom wrapping.
I need to create the NAL packets in memory. Can someone provide correct code for initialising and encoding a series of images into a series of complete NAL packets?
I've tried to create the encoder using the following code:
Init:

H264Encoder::H264Encoder(int width, int height, int fpsRationalHigh, int fpsRationalLow) :
    _frameCounter(0),
    _output("video_encoded.h264", std::ios::binary)
{
    av_register_all();
    avcodec_register_all();

    _codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!_codec) {
        L_ERROR("Could not find H264 encoder");
        throw std::runtime_error("Could not find H264 encoder");
    }
    _codecContext = avcodec_alloc_context3(_codec);
    if (!_codecContext) {
        L_ERROR("Could not open codec context for H264 encoder");
        throw std::runtime_error("Could not open codec context for H264 encoder");
    }
    _codecContext->width = width;
    _codecContext->height = height;
    _codecContext->time_base = AVRational{ fpsRationalLow, fpsRationalHigh };
    _codecContext->framerate = AVRational{ fpsRationalHigh, fpsRationalLow };
    _codecContext->bit_rate = BIT_RATE;
    _codecContext->bit_rate_tolerance = 0;
    _codecContext->rc_max_rate = 0;
    _codecContext->gop_size = GOP_SIZE;
    _codecContext->flags |= CODEC_FLAG_LOOP_FILTER;
    // _codecContext->refcounted_frames = 0;
    av_opt_set(_codecContext->priv_data, "preset", "fast", 0);
    av_opt_set(_codecContext->priv_data, "tune", "zerolatency", 0);
    av_opt_set(_codecContext->priv_data, "vprofile", "baseline", 0);
    _codecContext->max_b_frames = 1;
    _codecContext->pix_fmt = AV_PIX_FMT_YUV420P;
    //_codecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    if (avcodec_open2(_codecContext, _codec, nullptr) != 0) {
        L_ERROR("Could not open codec");
        throw std::runtime_error("Could not open codec");
    }
    L_INFO("H264 codec opened successfully");

    _frame = av_frame_alloc();
    if (_frame == nullptr) {
        L_ERROR("Could not allocate single frame");
    }
    _frame->format = _codecContext->pix_fmt;
    _frame->width = width;
    _frame->height = height;
    av_frame_get_buffer(_frame, 1);

    _rgbFrame = av_frame_alloc();
    _rgbFrame->format = AV_PIX_FMT_BGR24;
    _rgbFrame->width = width;
    _rgbFrame->height = height;
    av_frame_get_buffer(_rgbFrame, 1);

    _width = width;
    _height = height;
    _convertContext = sws_getContext(width, height, AV_PIX_FMT_BGR24,
                                     width, height, AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    if (_convertContext == nullptr) {
        L_ERROR("Failed to initialize SWS convert context");
    }
    _skipBad = false;
    _initialized = true;
}

Encoding:
void H264Encoder::processImage(const cv::Mat& mat)
{
    av_init_packet(&_packet);
    _packet.data = nullptr;
    _packet.size = 0;
    _packet.pts = _frameCounter;

    _rgbFrame->data[0] = (uint8_t*)mat.data;
    // av_image_fill_arrays(_rgbFrame->data, _rgbFrame->linesize, _buffer, (AVPixelFormat)_rgbFrame->format, _rgbFrame->width, _rgbFrame->height, 1);
    if (sws_scale(_convertContext, _rgbFrame->data, _rgbFrame->linesize, 0, _codecContext->height, _frame->data, _frame->linesize) == _codecContext->height) {
        L_DEBUG("BGR frame converted to YUV");
    } else {
        L_DEBUG("Could not convert BGR frame to YUV");
    }

    int retSendFrame = avcodec_send_frame(_codecContext, _frame);
    int retReceivePacket = avcodec_receive_packet(_codecContext, &_packet);
    if (retSendFrame == AVERROR(EAGAIN)) {
        L_DEBUG("Buffers are filled");
    }
    if (retReceivePacket == 0) {
        _packet.pts = _frameCounter;
        L_DEBUG("Got frame (Frame index: %4d)", _frameCounter);
        _output.write((char*)_packet.data, _packet.size);
        av_packet_unref(&_packet);
    } else {
        L_DEBUG("No frame at moment. (Frame index: %4d)", _frameCounter);
    }
    _frameCounter++;
}

But this code produces incorrect output. FFmpeg itself cannot read the resulting video_encoded.h264 test file.
It outputs errors like this:

[h264 @ 00000000006da940] decode_slice_header error
[h264 @ 00000000006da940] non-existing PPS 0 referenced
[h264 @ 00000000006da940] decode_slice_header error
[h264 @ 00000000006da940] non-existing PPS 0 referenced
[h264 @ 00000000006da940] decode_slice_header error
[h264 @ 00000000006da940] non-existing PPS 0 referenced
[h264 @ 00000000006da940] decode_slice_header error
[h264 @ 00000000006da940] non-existing PPS 0 referenced
[h264 @ 00000000006da940] decode_slice_header error
[h264 @ 00000000006da940] non-existing PPS 0 referenced
[h264 @ 00000000006da940] decode_slice_header error
[h264 @ 00000000006da940] non-existing PPS 0 referenced
[h264 @ 00000000006da940] decode_slice_header error
[h264 @ 00000000006da940] no frame!
[h264 @ 00000000006da940] non-existing PPS 0 referenced
[h264 @ 00000000026196a0] decoding for stream 0 failed
[h264 @ 00000000026196a0] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #0, h264, from 'video_encoded.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264, none, 25 fps, 25 tbr, 1200k tbn, 50 tbc
[mp4 @ 00000000026d00a0] dimensions not set
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Last message repeated 1 times

When I opened the file in a hex editor I found FFmpeg/x264 banner text (WHY??) at the beginning. It looks like this:
Offset 0 1 2 3 4 5 6 7 8 9 A B C D E F
00000000 00 00 00 01 67 64 00 1F AC EC 05 00 5B A1 00 00 gd ¬ì [¡
00000010 03 00 01 00 00 03 00 32 8F 18 31 38 00 00 00 01 2 18
00000020 68 EA EC B2 2C 00 00 01 06 05 FF FF BE DC 45 E9 hêì², ÿÿ¾ÜEé
00000030 BD E6 D9 48 B7 96 2C D8 20 D9 23 EE EF 78 32 36 ½æÙH·–,Ø Ù#îïx26
00000040 34 20 2D 20 63 6F 72 65 20 31 35 32 20 72 32 38 4 - core 152 r28
00000050 35 31 20 62 61 32 34 38 39 39 20 2D 20 48 2E 32 51 ba24899 - H.2
00000060 36 34 2F 4D 50 45 47 2D 34 20 41 56 43 20 63 6F 64/MPEG-4 AVC co
00000070 64 65 63 20 2D 20 43 6F 70 79 6C 65 66 74 20 32 dec - Copyleft 2
00000080 30 30 33 2D 32 30 31 37 20 2D 20 68 74 74 70 3A 003-2017 - http:
00000090 2F 2F 77 77 77 2E 76 69 64 65 6F 6C 61 6E 2E 6F //www.videolan.o
000000A0 72 67 2F 78 32 36 34 2E 68 74 6D 6C 20 2D 20 6F rg/x264.html - o
000000B0 70 74 69 6F 6E 73 3A 20 63 61 62 61 63 3D 31 20 ptions: cabac=1
000000C0 72 65 66 3D 32 20 64 65 62 6C 6F 63 6B 3D 31 3A ref=2 deblock=1:
000000D0 30 3A 30 20 61 6E 61 6C 79 73 65 3D 30 78 33 3A 0:0 analyse=0x3:
000000E0 30 78 31 31 33 20 6D 65 3D 68 65 78 20 73 75 62 0x113 me=hex sub
000000F0 6D 65 3D 36 20 70 73 79 3D 31 20 70 73 79 5F 72 me=6 psy=1 psy_r
00000100 64 3D 31 2E 30 30 3A 30 2E 30 30 20 6D 69 78 65 d=1.00:0.00 mixe
00000110 64 5F 72 65 66 3D 31 20 6D 65 5F 72 61 6E 67 65 d_ref=1 me_range
00000120 3D 31 36 20 63 68 72 6F 6D 61 5F 6D 65 3D 31 20 =16 chroma_me=1
00000130 74 72 65 6C 6C 69 73 3D 31 20 38 78 38 64 63 74 trellis=1 8x8dct
00000140 3D 31 20 63 71 6D 3D 30 20 64 65 61 64 7A 6F 6E =1 cqm=0 deadzon
00000150 65 3D 32 31 2C 31 31 20 66 61 73 74 5F 70 73 6B e=21,11 fast_psk
00000160 69 70 3D 31 20 63 68 72 6F 6D 61 5F 71 70 5F 6F ip=1 chroma_qp_o
00000170 66 66 73 65 74 3D 2D 32 20 74 68 72 65 61 64 73 ffset=-2 threads
00000180 3D 38 20 6C 6F 6F 6B 61 68 65 61 64 5F 74 68 72 =8 lookahead_thr
00000190 65 61 64 73 3D 38 20 73 6C 69 63 65 64 5F 74 68 eads=8 sliced_th
000001A0 72 65 61 64 73 3D 31 20 73 6C 69 63 65 73 3D 38 reads=1 slices=8
000001B0 20 6E 72 3D 30 20 64 65 63 69 6D 61 74 65 3D 31 nr=0 decimate=1
000001C0 20 69 6E 74 65 72 6C 61 63 65 64 3D 30 20 62 6C interlaced=0 bl
000001D0 75 72 61 79 5F 63 6F 6D 70 61 74 3D 30 20 63 6F uray_compat=0 co
000001E0 6E 73 74 72 61 69 6E 65 64 5F 69 6E 74 72 61 3D nstrained_intra=
000001F0 30 20 62 66 72 61 6D 65 73 3D 31 20 62 5F 70 79 0 bframes=1 b_py
00000200 72 61 6D 69 64 3D 30 20 62 5F 61 64 61 70 74 3D ramid=0 b_adapt=
00000210 31 20 62 5F 62 69 61 73 3D 30 20 64 69 72 65 63 1 b_bias=0 direc
00000220 74 3D 31 20 77 65 69 67 68 74 62 3D 31 20 6F 70 t=1 weightb=1 op
00000230 65 6E 5F 67 6F 70 3D 30 20 77 65 69 67 68 74 70 en_gop=0 weightp
00000240 3D 31 20 6B 65 79 69 6E 74 3D 35 20 6B 65 79 69 =1 keyint=5 keyi
00000250 6E 74 5F 6D 69 6E 3D 31 20 73 63 65 6E 65 63 75 nt_min=1 scenecu
00000260 74 3D 34 30 20 69 6E 74 72 61 5F 72 65 66 72 65 t=40 intra_refre
00000270 73 68 3D 30 20 72 63 3D 61 62 72 20 6D 62 74 72 sh=0 rc=abr mbtr
00000280 65 65 3D 30 20 62 69 74 72 61 74 65 3D 31 32 30 ee=0 bitrate=120
00000290 30 20 72 61 74 65 74 6F 6C 3D 31 2E 30 20 71 63 0 ratetol=1.0 qc
000002A0 6F 6D 70 3D 30 2E 36 30 20 71 70 6D 69 6E 3D 30 omp=0.60 qpmin=0
000002B0 20 71 70 6D 61 78 3D 36 39 20 71 70 73 74 65 70 qpmax=69 qpstep
000002C0 3D 34 20 69 70 5F 72 61 74 69 6F 3D 31 2E 34 30 =4 ip_ratio=1.40
000002D0 20 70 62 5F 72 61 74 69 6F 3D 31 2E 33 30 20 61 pb_ratio=1.30 a
000002E0 71 3D 31 3A 31 2E 30 30 00 80 00 q=1:1.00 €

I suppose additionally I need to create an AVFormatContext and a stream, but I don't know how to create one for raw H264 and, most importantly, how to write the output not to a file but to a memory buffer.
Can someone help me?
-
Stream RTP to FFMPEG using SDP
9 April 2021, by Johnathan Kanarek. I get an RTP stream from a WebRTC server (I used mediasoup) using node.js, and I receive the decrypted raw RTP packets from the stream.
I want to forward this RTP data to ffmpeg and from there save it to a file or push it as an RTMP stream to other media servers.
I guess the best way would be to create an SDP file that describes both the audio and video streams, and to send the packets through new sockets.

The ffmpeg command is:

ffmpeg -loglevel debug -protocol_whitelist file,crypto,udp,rtp -re -vcodec libvpx -acodec opus -i test.sdp -vcodec libx264 -acodec aac -y output.mp4

I tried to send the packets through UDP:

v=0
o=mediasoup 7199daf55e496b370e36cd1d25b1ef5b9dff6858 0 IN IP4 192.168.193.182
s=7199daf55e496b370e36cd1d25b1ef5b9dff6858
c=IN IP4 192.168.193.182
t=0 0
m=audio 33301 RTP/AVP 111
a=rtpmap:111 /opus/48000
a=fmtp:111 minptime=10;useinbandfec=1
a=rtcp-fb:111 transport-cc
a=sendrecv
m=video 33302 RTP/AVP 100
a=rtpmap:100 /VP8/90000
a=rtcp-fb:100 ccm fir
a=rtcp-fb:100 nack
a=rtcp-fb:100 nack pli
a=rtcp-fb:100 goog-remb
a=rtcp-fb:100 transport-cc
a=sendrecv

But I always get (boring parts removed):

Opening an input file: test.sdp.

[sdp @ 0x103dea0] Format sdp probed with size=2048 and score=50
[sdp @ 0x103dea0] audio codec set to: (null)
[sdp @ 0x103dea0] audio samplerate set to: 44100
[sdp @ 0x103dea0] audio channels set to: 1
[sdp @ 0x103dea0] video codec set to: (null)
[udp @ 0x10402e0] end receive buffer size reported is 131072
[udp @ 0x10400c0] end receive buffer size reported is 131072
[sdp @ 0x103dea0] setting jitter buffer size to 500
[udp @ 0x1040740] bind failed: Address already in use
[AVIOContext @ 0x1046980] Statistics: 473 bytes read, 0 seeks
test.sdp: Invalid data found when processing input

Note that I get this even if I don't open a socket at all or send anything to this port, as if ffmpeg itself tries to open these ports more than once.

I also tried to open two TCP servers (video and audio) and define the SDP with TCP:

v=0
o=mediasoup 7199daf55e496b370e36cd1d25b1ef5b9dff6858 0 IN IP4 192.168.193.182
s=7199daf55e496b370e36cd1d25b1ef5b9dff6858
c=IN IP4 192.168.193.182
t=0 0
m=audio 33301 TCP 111
a=rtpmap:111 /opus/48000
a=fmtp:111 minptime=10;useinbandfec=1
a=rtcp-fb:111 transport-cc
a=setup:active
a=connection:new
a=sendrecv
m=video 33302 TCP 100
a=rtpmap:100 /VP8/90000
a=rtcp-fb:100 ccm fir
a=rtcp-fb:100 nack
a=rtcp-fb:100 nack pli
a=rtcp-fb:100 goog-remb
a=rtcp-fb:100 transport-cc
a=setup:active
a=connection:new
a=sendrecv

However, I don't see any incoming connections to my TCP servers, and I get the following from ffmpeg:

Opening an input file: test.sdp.
[sdp @ 0xdddea0] Format sdp probed with size=2048 and score=50
[sdp @ 0xdddea0] audio codec set to: (null)
[sdp @ 0xdddea0] audio samplerate set to: 44100
[sdp @ 0xdddea0] audio channels set to: 1
[sdp @ 0xdddea0] video codec set to: (null)
[udp @ 0xde02e0] end receive buffer size reported is 131072
[udp @ 0xde00c0] end receive buffer size reported is 131072
[sdp @ 0xdddea0] setting jitter buffer size to 500
[udp @ 0xde0740] end receive buffer size reported is 131072
[udp @ 0xde0180] end receive buffer size reported is 131072
[sdp @ 0xdddea0] setting jitter buffer size to 500
[sdp @ 0xdddea0] Before avformat_find_stream_info() pos: 593 bytes read:593 seeks:0 nb_streams:2
[libvpx @ 0xdeea80] v1.3.0
[libvpx @ 0xdeea80] --target=x86_64-linux-gcc --enable-pic --disable-install-srcs --as=nasm --enable-shared --prefix=/usr --libdir=/usr/lib64

[sdp @ 0xdddea0] Could not find codec parameters for stream 1 (Video: vp8, 1 reference frame, none): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
[sdp @ 0xdddea0] After avformat_find_stream_info() pos: 593 bytes read:593 seeks:0 frames:0
Input #0, sdp, from 'test.sdp':
 Metadata:
 title : 7199daf55e496b370e36cd1d25b1ef5b9dff6858
 Duration: N/A, bitrate: N/A
 Stream #0:0, 0, 1/90000: Audio: opus, 48000 Hz, mono, fltp
 Stream #0:1, 0, 1/90000: Video: vp8, 1 reference frame, none, 90k tbr, 90k tbn, 90k tbc
Successfully opened the file.
Parsing a group of options: output file output.mp4.
Successfully parsed a group of options.
Opening an output file: output.mp4.
[file @ 0xde3660] Setting default whitelist 'file,crypto'
Successfully opened the file.

detected 1 logical cores
[graph 0 input from stream 0:0 @ 0xde3940] Setting 'time_base' to value '1/48000'
[graph 0 input from stream 0:0 @ 0xde3940] Setting 'sample_rate' to value '48000'
[graph 0 input from stream 0:0 @ 0xde3940] Setting 'sample_fmt' to value 'fltp'
[graph 0 input from stream 0:0 @ 0xde3940] Setting 'channel_layout' to value '0x4'
[graph 0 input from stream 0:0 @ 0xde3940] tb:1/48000 samplefmt:fltp samplerate:48000 chlayout:0x4
[audio format for output stream 0:0 @ 0xe37900] Setting 'sample_fmts' to value 'fltp'
[audio format for output stream 0:0 @ 0xe37900] Setting 'sample_rates' to value '96000|88200|64000|48000|44100|32000|24000|22050|16000|12000|11025|8000|7350'
[AVFilterGraph @ 0xde0220] query_formats: 4 queried, 9 merged, 0 already done, 0 delayed

Output #0, mp4, to 'output.mp4':
  Metadata:
    title           : 7199daf55e496b370e36cd1d25b1ef5b9dff6858
    encoder         : Lavf57.56.100
    Stream #0:0, 0, 1/48000: Audio: aac (LC) ([64][0][0][0] / 0x0040), 48000 Hz, mono, fltp, delay 1024, 69 kb/s
    Metadata:
      encoder         : Lavc57.64.100 aac
Stream mapping:
  Stream #0:0 -> #0:0 (opus (native) -> aac (native))
Press [q] to stop, [?] for help
cur_dts is invalid (this is harmless if it occurs once at the start per stream)

test.sdp: Connection timed out
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[output stream 0:0 @ 0xde3b40] EOF on sink link output stream 0:0:default.
No more output streams to write to, finishing.
[aac @ 0xde2b00] Trying to remove 1024 samples, but the queue is empty
[aac @ 0xde2b00] Trying to remove 1024 more samples than there are in the queue
[mp4 @ 0xe6a540] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mp4 @ 0xe6a540] Encoder did not produce proper pts, making some up.
[aac @ 0xde2b00] Trying to remove 1024 samples, but the queue is empty
[aac @ 0xde2b00] Trying to remove 1024 more samples than there are in the queue
size= 1kB time=00:00:00.04 bitrate= 157.9kbits/s speed=0.00426x
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3268.000000%
Input file #0 (test.sdp):
 Input stream #0:0 (audio): 0 packets read (0 bytes); 0 frames decoded (0 samples);
 Input stream #0:1 (video): 0 packets read (0 bytes);
 Total: 0 packets (0 bytes) demuxed
Output file #0 (output.mp4):
 Output stream #0:0 (audio): 0 frames encoded (0 samples); 2 packets muxed (25 bytes);
 Total: 2 packets (25 bytes) muxed
0 frames successfully decoded, 0 decoding errors
[AVIOContext @ 0xde37a0] Statistics: 30 seeks, 25 writeouts
[aac @ 0xde2b00] Qavg: 47249.418

[AVIOContext @ 0xde6980] Statistics: 593 bytes read, 0 seeks

Note the "Connection timed out" in the log above.

I guess both my SDPs are wrong; any suggestions?

Alternatives to SDP are also most welcome.
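Two details in the SDPs above look suspect (these are guesses, not a verified fix). First, the a=rtpmap syntax is "<payload> <encoding>/<clock-rate>[/<channels>]", so the leading slash in "/opus/48000" and "/VP8/90000" is invalid; Opus is conventionally declared as opus/48000/2. Second, RTP demuxers usually bind RTP port + 1 for RTCP, so with audio on 33301 the implicit RTCP socket lands on 33302, the video RTP port, which would explain the "bind failed: Address already in use". Leaving a gap between the media ports (the port numbers below are illustrative), a corrected UDP SDP might look like:

```sdp
v=0
o=mediasoup 7199daf55e496b370e36cd1d25b1ef5b9dff6858 0 IN IP4 192.168.193.182
s=7199daf55e496b370e36cd1d25b1ef5b9dff6858
c=IN IP4 192.168.193.182
t=0 0
m=audio 33300 RTP/AVP 111
a=rtpmap:111 opus/48000/2
a=fmtp:111 minptime=10;useinbandfec=1
a=sendrecv
m=video 33302 RTP/AVP 100
a=rtpmap:100 VP8/90000
a=sendrecv
```

With this layout, audio RTP/RTCP use 33300/33301 and video RTP/RTCP use 33302/33303, so nothing collides.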