
Media (3)

- GetID3 - File information block
9 April 2013
Updated: May 2013
Language: French
Type: Image

- GetID3 - Additional buttons
9 April 2013
Updated: April 2013
Language: French
Type: Image

- Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
Other articles (97)

- MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

- Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
After it is activated, a preconfiguration is set up automatically by MediaSPIP init, allowing the new feature to be operational automatically. It is therefore not necessary to go through a configuration step for this.

- APPENDIX: Plugins used specifically for the farm
5 March 2010
The central/master site of the farm needs several plugins, in addition to those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin to manage registrations and requests to create a shared-hosting instance as soon as users register; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (12900)
- ffmpeg GRAY16 stream over network
28 November 2023, by Norbert P.
I'm working on a school project where we need to use depth cameras. The camera produces color and depth (in other words, a 16-bit grayscale image). We decided to use ffmpeg, as compression could be very useful later on. For now we have a basic stream running from one PC to another. These settings include (the corresponding values are sketched just after the list):


- rtmp
- flv as container
- pixel format AV_PIX_FMT_YUV420P
- codec AV_CODEC_ID_H264

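For comparison, the baseline that does stream is essentially the client code below with three values swapped; a sketch of just that difference (not a separate program):

// Working baseline (H.264 in an FLV container over RTMP, YUV420P pixels),
// assuming the same client code as below with only these values changed:
std::string container = "flv";
AVCodecID codec_id = AV_CODEC_ID_H264;
AVPixelFormat pixFormat = AV_PIX_FMT_YUV420P;
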
The problem we are having is with the grayscale image. Not every codec can cope with this format, just as not every protocol works with a given codec. I got some settings "working", but the receiver side just gets stuck in the avformat_open_input() call.
I have also tested it from the command line, with ffmpeg listening for the connection, and the same thing happens.


I include a minimal "working" example of the client code. The server can be tested with "ffmpeg.exe -f apng -listen 1 -i rtmp://localhost:9999/stream/stream1 -c copy -f apng -listen 1 rtmp://localhost:2222/live/l" or with the code below. I get no warnings; ffmpeg is the newest version, installed with "vcpkg install --triplet x64-windows ffmpeg[ffmpeg,ffprobe,zlib]" on Windows or via the package manager on Linux.


The question: did I miss something? How do I get it to work? If you have any better ideas I would gladly consider them. In the end I need 16 bits of lossless transmission; it could be split between channels etc., which I also tried, with the same result.


Client code that would have the camera and connect to the server:


extern "C" {
#include <libavutil></libavutil>opt.h>
#include <libavcodec></libavcodec>avcodec.h>
#include <libavutil></libavutil>channel_layout.h>
#include <libavutil></libavutil>common.h>
#include <libavformat></libavformat>avformat.h>
#include <libavcodec></libavcodec>avcodec.h>
#include <libavutil></libavutil>imgutils.h>
}

int main() {

 std::string container = "apng";
 AVCodecID codec_id = AV_CODEC_ID_APNG;
 AVPixelFormat pixFormat = AV_PIX_FMT_GRAY16BE;

 AVFormatContext* format_ctx;
 AVCodec* out_codec;
 AVStream* out_stream;
 AVCodecContext* out_codec_ctx;
 AVFrame* frame;
 uint8_t* data;

 std::string server = "rtmp://localhost:9999/stream/stream1";

 int width = 1280, height = 720, fps = 30, bitrate = 1000000;

 //initialize format context for output with flv and no filename
 avformat_alloc_output_context2(&format_ctx, nullptr, container.c_str(), server.c_str());
 if (!format_ctx) {
 return 1;
 }

 //AVIOContext for accessing the resource indicated by url
 if (!(format_ctx->oformat->flags & AVFMT_NOFILE)) {
 int avopen_ret = avio_open(&format_ctx->pb, server.c_str(),
 AVIO_FLAG_WRITE);// , nullptr, nullptr);
 if (avopen_ret < 0) {
 fprintf(stderr, "failed to open stream output context, stream will not work\n");
 return 1;
 }
 }


 const AVCodec* tmp_out_codec = avcodec_find_encoder(codec_id);
 //const AVCodec* tmp_out_codec = avcodec_find_encoder_by_name("hevc");
 out_codec = const_cast<AVCodec*>(tmp_out_codec);
 if (!(out_codec)) {
 fprintf(stderr, "Could not find encoder for '%s'\n",
 avcodec_get_name(codec_id));

 return 1;
 }

 out_stream = avformat_new_stream(format_ctx, out_codec);
 if (!out_stream) {
 fprintf(stderr, "Could not allocate stream\n");
 return 1;
 }

 out_codec_ctx = avcodec_alloc_context3(out_codec);

 const AVRational timebase = { 60000, fps };
 const AVRational dst_fps = { fps, 1 };
 av_log_set_level(AV_LOG_VERBOSE);
 //codec_ctx->codec_tag = 0;
 //codec_ctx->codec_id = codec_id;
 out_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
 out_codec_ctx->width = width;
 out_codec_ctx->height = height;
 out_codec_ctx->gop_size = 1;
 out_codec_ctx->time_base = timebase;
 out_codec_ctx->pix_fmt = pixFormat;
 out_codec_ctx->framerate = dst_fps;
 out_codec_ctx->time_base = av_inv_q(dst_fps);
 out_codec_ctx->bit_rate = bitrate;
 //if (fctx->oformat->flags & AVFMT_GLOBALHEADER)
 //{
 // codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 //}

 out_stream->time_base = out_codec_ctx->time_base; //will be set afterwards by avformat_write_header to 1/1000

 int ret = avcodec_parameters_from_context(out_stream->codecpar, out_codec_ctx);
 if (ret < 0)
 {
 fprintf(stderr, "Could not initialize stream codec parameters!\n");
 return 1;
 }

 AVDictionary* codec_options = nullptr;
 av_dict_set(&codec_options, "tune", "zerolatency", 0);

 // open video encoder
 ret = avcodec_open2(out_codec_ctx, out_codec, &codec_options);
 if (ret < 0)
 {
 fprintf(stderr, "Could not open video encoder!\n");
 return 1;
 }
 av_dict_free(&codec_options);

 out_stream->codecpar->extradata_size = out_codec_ctx->extradata_size;
 out_stream->codecpar->extradata = static_cast<uint8_t*>(av_mallocz(out_codec_ctx->extradata_size));
 memcpy(out_stream->codecpar->extradata, out_codec_ctx->extradata, out_codec_ctx->extradata_size);

 av_dump_format(format_ctx, 0, server.c_str(), 1);

 frame = av_frame_alloc();

 int sz = av_image_get_buffer_size(pixFormat, width, height, 32);
#ifdef _WIN32
 data = (uint8_t*)_aligned_malloc(sz, 32);
 if (data == NULL)
 return ENOMEM;
#else
 ret = posix_memalign(reinterpret_cast<void**>(&data), 32, sz);
#endif
 av_image_fill_arrays(frame->data, frame->linesize, data, pixFormat, width, height, 32);
 frame->format = pixFormat;
 frame->width = width;
 frame->height = height;
 frame->pts = 1;
 if (avformat_write_header(format_ctx, nullptr) < 0) //Header making problems!!!
 {
 fprintf(stderr, "Could not write header!\n");
 return 1;
 }

 printf("stream time base = %d / %d \n", out_stream->time_base.num, out_stream->time_base.den);

 double inv_stream_timebase = (double)out_stream->time_base.den / (double)out_stream->time_base.num;
 printf("Init OK\n");
 /* Init phase end*/
 int dts = 0;
 int frameNo = 0;

 while (true) {
 //Fill dummy frame with something
 for (int y = 0; y < height; y++) {
 uint16_t color = ((y + frameNo) * 256) % (256 * 256);
 for (int x = 0; x < width; x++) {
 data[x+y*width] = color;
 }
 }

 memcpy(frame->data[0], data, 1280 * 720 * sizeof(uint16_t));
 AVPacket* pkt = av_packet_alloc();

 int ret = avcodec_send_frame(out_codec_ctx, frame);
 if (ret < 0)
 {
 fprintf(stderr, "Error sending frame to codec context!\n");
 return ret;
 }
 while (ret >= 0) {
 ret = avcodec_receive_packet(out_codec_ctx, pkt);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 break;
 else if (ret < 0) {
 fprintf(stderr, "Error during encoding\n");
 break;
 }
 pkt->dts = dts;
 pkt->pts = dts;
 dts += 33;
 av_write_frame(format_ctx, pkt);
 frameNo++;
 av_packet_unref(pkt);
 }
 printf("Streamed %d frames\n", frameNo);
 }
 return 0;
}



And the part of the server that should receive; this is the code where it stops and waits:


extern "C" {
#include <libavcodec></libavcodec>avcodec.h>
#include <libavformat></libavformat>avformat.h>
#include <libavformat></libavformat>avio.h>
}

int main() {
 AVFormatContext* fmt_ctx = NULL;
 av_log_set_level(AV_LOG_VERBOSE);
 AVDictionary* options = nullptr;
 av_dict_set(&options, "protocol_whitelist", "file,udp,rtp,tcp,rtmp,rtsp,hls", 0);
 av_dict_set(&options, "timeout", "500000", 0); // Timeout in microseconds 

//Next Line hangs 
 int ret = avformat_open_input(&fmt_ctx, "rtmp://localhost:9999/stream/stream1", NULL, &options);
 if (ret != 0) {
 fprintf(stderr, "Could not open RTMP stream\n");
 return -1;
 }

 // Find the first video stream
 ret = avformat_find_stream_info(fmt_ctx, nullptr);
 if (ret < 0) {
 return ret;
 }
 //...
} 
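One way to narrow down the hang shown above is FFmpeg's interrupt callback: if the AVFormatContext is allocated beforehand, a callback can make avformat_open_input() give up after a deadline instead of blocking forever. This is a sketch of mine, not the original code; the helper name and the deadline handling are assumptions:

extern "C" {
#include <libavformat/avformat.h>
}
#include <chrono>

struct OpenDeadline {
    std::chrono::steady_clock::time_point deadline;
};

// Returning non-zero tells libavformat to abort its current blocking operation.
static int interrupt_cb(void* opaque) {
    auto* d = static_cast<OpenDeadline*>(opaque);
    return std::chrono::steady_clock::now() > d->deadline ? 1 : 0;
}

// Open the input, but give up after 'seconds' instead of waiting forever.
static int open_with_deadline(const char* url, AVFormatContext** out_ctx, int seconds) {
    OpenDeadline d{ std::chrono::steady_clock::now() + std::chrono::seconds(seconds) };
    AVFormatContext* ctx = avformat_alloc_context(); // allocate first so the callback can be set
    if (!ctx)
        return AVERROR(ENOMEM);
    ctx->interrupt_callback.callback = interrupt_cb;
    ctx->interrupt_callback.opaque = &d;
    int ret = avformat_open_input(&ctx, url, nullptr, nullptr);
    if (ret < 0) {
        // avformat_open_input() frees a user-supplied context on failure.
        *out_ctx = nullptr;
        return ret;
    }
    ctx->interrupt_callback = AVIOInterruptCB{}; // 'd' is local, detach before returning
    *out_ctx = ctx;
    return 0;
}

If the open now fails after the deadline even while the sender is running, the handshake itself is not completing, which at least separates a network/handshake problem from a demuxing one.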




Edit:
I tried to just create an animated PNG and stream it from one console window to another, to rule out any programming mistakes on my side. The result was the same: I just could not get a 16-bit PNG-encoded stream to work. The receiver hung trying to receive and closed when the file ended, with zero frames received in total.


I managed to get something else working:
to avoid encoding the gray frames as YUV420, I installed ffmpeg with libx264 support (I was thinking it is the same as H264, which in the code it is, but it adds support for more pixel formats). I then used H264 again, but with GRAY8, a doubled image width, and reconstruction of the image on the other side.
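To illustrate that workaround (my own sketch, not the poster's code): each 16-bit depth sample becomes two adjacent 8-bit pixels, so the GRAY8 frame is twice as wide, and the receiver reverses the packing. This only round-trips exactly if the encoder is run losslessly.

#include <cstddef>
#include <cstdint>
#include <vector>

// width/height describe the original GRAY16 image.
std::vector<uint8_t> pack_gray16(const uint16_t* src, int width, int height) {
    std::vector<uint8_t> dst(static_cast<size_t>(width) * height * 2);
    for (size_t i = 0; i < static_cast<size_t>(width) * height; ++i) {
        dst[2 * i] = static_cast<uint8_t>(src[i] >> 8);        // high byte
        dst[2 * i + 1] = static_cast<uint8_t>(src[i] & 0xFF);  // low byte
    }
    return dst; // feed this as a GRAY8 frame of size (2*width) x height
}

std::vector<uint16_t> unpack_gray16(const uint8_t* src, int width, int height) {
    std::vector<uint16_t> dst(static_cast<size_t>(width) * height);
    for (size_t i = 0; i < dst.size(); ++i)
        dst[i] = static_cast<uint16_t>((src[2 * i] << 8) | src[2 * i + 1]);
    return dst;
}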


Maybe as a side note, I could not get any other formats to work. Is "flv" the only option here? Could I get more performance if I changed it to... what?


- Decoding a h.264 stream with MediaCodec, dequeueOutputBuffer always return -1
20 September 2016, by bitto bitta
I am trying to use the MediaCodec API to decode a live-stream screen capture sent from a PC by ffmpeg.
For the sender (PC, ffmpeg) I use this command:
ffmpeg -re -f gdigrab -s 1920x1080 -threads 4 -i desktop -vcodec libx264 -pix_fmt yuv420p -tune zerolatency -profile:v baseline -flags global_header -s 1280x720 -an -f rtp rtp://192.168.1.6:1234
and the output looks like this:
Output #0, rtp, to 'rtp://192.168.1.6:1234':
Metadata:
encoder : Lavf56.15.104
Stream #0:0: Video: h264 (libx264), yuv420p, 1280x720, q=-1--1, 29.97 fps, 90k tbn, 29.97 tbc
Metadata:
encoder : Lavc56.14.100 libx264
Stream mapping:
Stream #0:0 -> #0:0 (bmp (native) -> h264 (libx264))
SDP:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 192.168.1.6
t=0 0
a=tool:libavformat 56.15.104
m=video 1234 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1; sprop-parameter-sets=Z0LAH9kAUAW6EAAAPpAADqYI8YMkgA==,aMuDyyA=; profile-level-id=42C01F
Press [q] to stop, [?] for help
frame= 19 fps=0.0 q=17.0 size= 141kB time=00:00:00.63 bitrate=1826.0kbits/
frame= 34 fps= 32 q=17.0 size= 164kB time=00:00:01.13 bitrate=1181.5kbits/
frame= 50 fps= 32 q=18.0 size= 173kB time=00:00:01.66 bitrate= 850.9kbits/
For the receiver (Android MediaCodec):
I created an activity with a surface and implemented SurfaceHolder.Callback.
In surfaceChanged:
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
Log.i("sss", "surfaceChanged");
if( playerThread == null ) {
playerThread = new PlayerThread(holder.getSurface());
playerThread.start();
}
}
For PlayerThread:
class PlayerThread extends Thread {
MediaCodec decoder;
Surface surface;
public PlayerThread(Surface surface) {
this.surface = surface;
}
@Override
public void run() {
running = true;
try {
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
byte[] header = new byte[] {0,0,0,1};
byte[] sps = Base64.decode("Z0LAH9kAUAW6EAAAPpAADqYI8YMkgA==", Base64.DEFAULT);
byte[] pps = Base64.decode("aMuDyyA=", Base64.DEFAULT);
byte[] header_sps = new byte[sps.length + header.length];
System.arraycopy(header,0,header_sps,0,header.length);
System.arraycopy(sps,0,header_sps,header.length, sps.length);
byte[] header_pps = new byte[pps.length + header.length];
System.arraycopy(header,0, header_pps, 0, header.length);
System.arraycopy(pps, 0, header_pps, header.length, pps.length);
format.setByteBuffer("csd-0", ByteBuffer.wrap(header_sps));
format.setByteBuffer("csd-1", ByteBuffer.wrap(header_pps));
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 1280 * 720);
// format.setInteger("durationUs", 63446722);
// format.setByteBuffer("csd-2", ByteBuffer.wrap((hexStringToByteArray("42C01E"))));
// format.setInteger(MediaFormat.KEY_COLOR_FORMAT ,MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar);
Log.i("sss", "Format = " + format);
try {
decoder = MediaCodec.createDecoderByType("video/avc");
decoder.configure(format, surface, null, 0);
decoder.start();
} catch (IOException ioEx) {
ioEx.printStackTrace();
}
DatagramSocket socket = new DatagramSocket(1234);
byte[] bytes = new byte[4096];
DatagramPacket packet = new DatagramPacket(bytes, bytes.length);
byte[] data;
ByteBuffer[] inputBuffers;
ByteBuffer[] outputBuffers;
ByteBuffer inputBuffer;
ByteBuffer outputBuffer;
MediaCodec.BufferInfo bufferInfo;
bufferInfo = new MediaCodec.BufferInfo();
int inputBufferIndex;
int outputBufferIndex;
byte[] outData;
inputBuffers = decoder.getInputBuffers();
outputBuffers = decoder.getOutputBuffers();
int minusCount = 0;
byte[] prevData = new byte[65535];
 List<byte[]> playLoads = new ArrayList<>();
int playloadSize = 0;
while (true) {
try {
socket.receive(packet);
data = new byte[packet.getLength()];
System.arraycopy(packet.getData(), packet.getOffset(), data, 0, packet.getLength());
inputBufferIndex = decoder.dequeueInputBuffer(-1);
Log.i("sss", "inputBufferIndex = " + inputBufferIndex);
if (inputBufferIndex >= 0)
{
inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
inputBuffer.put(data);
decoder.queueInputBuffer(inputBufferIndex, 0, data.length, 0, 0);
// decoder.flush();
}
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 10000);
Log.i("sss", "outputBufferIndex = " + outputBufferIndex);
while (outputBufferIndex >= 0)
{
outputBuffer = outputBuffers[outputBufferIndex];
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
outData = new byte[bufferInfo.size];
outputBuffer.get(outData);
decoder.releaseOutputBuffer(outputBufferIndex, false);
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
}
} catch (SocketTimeoutException e) {
Log.d("thread", "timeout");
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
I think the stream from ffmpeg is not the problem, because I can open it from MX Player via the sdp file.
And if I pass this stream to a local RTSP server (via VLC) and then use MediaPlayer to get the RTSP stream, it works, but quite slowly.
After I looked into the packets I realized the following (sketched just below):
- the first four bytes are the header and sequence number
- the next four bytes are the TimeStamp
- the next four bytes are the source identifier
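For reference, the 12-byte fixed RTP header those observations describe is specified in RFC 3550; the struct and helper below are my own C++ illustration, not part of the Android code:

#include <cstdint>

struct RtpHeader {
    uint8_t version;      // bits 7-6 of byte 0
    bool marker;          // bit 7 of byte 1; typically set on the last packet of a frame
    uint8_t payloadType;  // bits 6-0 of byte 1 (96 in the SDP above)
    uint16_t sequence;    // bytes 2-3, big-endian
    uint32_t timestamp;   // bytes 4-7, big-endian (90 kHz clock for H264/90000)
    uint32_t ssrc;        // bytes 8-11, big-endian source identifier
};

static RtpHeader parse_rtp_header(const uint8_t* p) {
    RtpHeader h;
    h.version = p[0] >> 6;
    h.marker = (p[1] & 0x80) != 0;
    h.payloadType = p[1] & 0x7F;
    h.sequence = static_cast<uint16_t>((p[2] << 8) | p[3]);
    h.timestamp = (static_cast<uint32_t>(p[4]) << 24) | (p[5] << 16) | (p[6] << 8) | p[7];
    h.ssrc = (static_cast<uint32_t>(p[8]) << 24) | (p[9] << 16) | (p[10] << 8) | p[11];
    return h;
}

The payload starts at byte 12 only when there are no CSRC entries and no header extension, which is normally the case for ffmpeg's RTP output.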
So I cut the first 12 bytes out and combine packets with the same TimeStamp, then put the result in the buffer like this.
In while(true), after a packet is received:
Log.i("sss", "Received = " + data.length + " bytes");
Log.i("sss","prev " + prevData.length + " bytes = " + getBytesStr(prevData));
Log.i("sss","data " + data.length + " bytes = " + getBytesStr(data));
if(data[4] == prevData[4] && data[5] == prevData[5] && data[6] == prevData[6] && data[7] == prevData[7]){
byte[] playload = new byte[prevData.length -12];
System.arraycopy(prevData,12,playload, 0, prevData.length-12);
playLoads.add(playload);
playloadSize += playload.length;
Log.i("sss", "Same timeStamp playload " + playload.length + " bytes = " + getBytesStr(playload));
} else {
if(playLoads.size() > 0){
byte[] playload = new byte[prevData.length -12];
System.arraycopy(prevData,12,playload, 0, prevData.length-12);
playLoads.add(playload);
playloadSize += playload.length;
Log.i("sss", "last playload " + playload.length + " bytes = " + getBytesStr(playload));
inputBufferIndex = decoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0){
inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
byte[] allPlayload = new byte[playloadSize];
int curLength = 0;
for(byte[] playLoad:playLoads){
System.arraycopy(playLoad,0,allPlayload, curLength, playLoad.length);
curLength += playLoad.length;
}
Log.i("sss", "diff timeStamp AlllayLoad " + allPlayload.length + "bytes = " + getBytesStr(allPlayload));
inputBuffer.put(allPlayload);
decoder.queueInputBuffer(inputBufferIndex, 0, data.length, 0, 0);
decoder.flush();
}
bufferInfo = new MediaCodec.BufferInfo();
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 10000);
if(outputBufferIndex!= -1)
Log.i("sss", "outputBufferIndex = " + outputBufferIndex);
playLoads = new ArrayList<>();
prevData = new byte[65535];
playloadSize = 0;
}
}
 prevData = data.clone();
The outputBufferIndex still returns -1.
If I change timeoutUs from 10000 to -1, it never goes to the next line.
I've searched for a week but still no luck T_T
Why does dequeueOutputBuffer always return -1?
What is the problem with my code?
Could you point out how to fix my code so it works correctly?
Thanks for your help.
Edit #1
Thanks to @mstorsjo for pointing me to packetization; I found useful information.
Then I edited my code as below:
if((data[12] & 0x1f) == 28){
if((data[13] & 0x80) == 0x80){ //found start bit
inputBufferIndex = decoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0){
inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
byte result = (byte)((bytes[12] & 0xe0) + (bytes[13] & 0x1f));
inputBuffer.put(new byte[] {0,0,1});
inputBuffer.put(result);
inputBuffer.put(data,14, data.length-14);
}
} else if((data[13] &0x40) == 0x40){ //found stop bit
inputBuffer.put(data, 14, data.length -14);
decoder.queueInputBuffer(inputBufferIndex, 0, data.length, 0, 0);
bufferInfo = new MediaCodec.BufferInfo();
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 10000);
switch(outputBufferIndex)
{
case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
outputBuffers = decoder.getOutputBuffers();
Log.w("sss", "Output Buffers Changed");
break;
case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
Log.w("sss", "Output Format Changed");
MediaFormat newFormat = decoder.getOutputFormat();
Log.i("sss","New format : " + newFormat);
break;
case MediaCodec.INFO_TRY_AGAIN_LATER:
Log.w("sss", "Try Again Later");
break;
default:
outputBuffer = outputBuffers[outputBufferIndex];
outputBuffer.position(bufferInfo.offset);
outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
decoder.releaseOutputBuffer(outputBufferIndex, true);
}
} else {
inputBuffer.put(data, 14, data.length -14);
}
}
Now I can see some picture, but most of the screen is gray.
What should I do next?
Thank you.
- Gstreamer AAC encoding no more supported?
22 July 2016, by Gianks
I'd like to include AAC as one of the compatible formats in my app, but I'm having trouble with its encoding.
FAAC seems to be missing from the GStreamer-1.0 Debian-derived packages (see Ubuntu), and the main reason for that (if I got it correctly) is the presence of avenc_aac (Launchpad bug report) as a replacement. I've tried the following:
gst-launch-1.0 filesrc location="src.avi" ! tee name=t t.! queue ! decodebin ! progressreport ! x264enc ! mux. t.! queue ! decodebin ! audioconvert ! audioresample ! avenc_aac compliance=-2 ! mux. avmux_mpegts name=mux ! filesink location=/tmp/test.avi
It hangs while prerolling, with:
ERROR libav :0:: AAC bitstream not in ADTS format and extradata missing
Using mpegtsmux instead of avmux_mpegts seems to work, since the file is created, but it results in no working audio (with some players it's completely unplayable).
This is the trace from mplayer:
Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
[aac @ 0x7f2860d6c3c0]channel element 3.15 is not allocated
[aac @ 0x7f2860d6c3c0]Sample rate index in program config element does not match the sample rate index configured by the container.
[aac @ 0x7f2860d6c3c0]Inconsistent channel configuration.
[aac @ 0x7f2860d6c3c0]get_buffer() failed
[aac @ 0x7f2860d6c3c0]Assuming an incorrectly encoded 7.1 channel layout instead of a spec-compliant 7.1(wide) layout, use -strict 1 to decode according to the specification instead.
[aac @ 0x7f2860d6c3c0]Reserved bit set.
[aac @ 0x7f2860d6c3c0]Number of bands (20) exceeds limit (14).
[aac @ 0x7f2860d6c3c0]invalid band type
[aac @ 0x7f2860d6c3c0]More than one AAC RDB per ADTS frame is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[aac @ 0x7f2860d6c3c0]Reserved bit set.
[aac @ 0x7f2860d6c3c0]Number of bands (45) exceeds limit (28).
Unknown/missing audio format -> no sound
ADecoder init failed :(
Opening audio decoder: [faad] AAC (MPEG2/4 Advanced Audio Coding)
FAAD: compressed input bitrate missing, assuming 128kbit/s!
AUDIO: 44100 Hz, 2 ch, floatle, 128.0 kbit/9.07% (ratio: 16000->176400)
Selected audio codec: [faad] afm: faad (FAAD AAC (MPEG-2/MPEG-4 Audio))
==========================================================================
AO: [pulse] 44100Hz 2ch floatle (4 bytes per sample)
Starting playback...
FAAD: error: Bitstream value not allowed by specification, trying to resync!
FAAD: error: Invalid number of channels, trying to resync!
FAAD: error: Invalid number of channels, trying to resync!
FAAD: error: Bitstream value not allowed by specification, trying to resync!
FAAD: error: Invalid number of channels, trying to resync!
FAAD: error: Bitstream value not allowed by specification, trying to resync!
FAAD: error: Channel coupling not yet implemented, trying to resync!
FAAD: error: Invalid number of channels, trying to resync!
FAAD: error: Invalid number of channels, trying to resync!
FAAD: error: Bitstream value not allowed by specification, trying to resync!
FAAD: Failed to decode frame: Bitstream value not allowed by specification
Movie-Aspect is 1.33:1 - prescaling to correct movie aspect.
VO: [vdpau] 640x480 => 640x480 Planar YV12
A:3602.2 V:3600.0 A-V: 2.143 ct: 0.000 3/ 3 ??% ??% ??,?% 0 0
FAAD: error: Array index out of range, trying to resync!
FAAD: error: Bitstream value not allowed by specification, trying to resync!
FAAD: error: Bitstream value not allowed by specification, trying to resync!
FAAD: error: Unexpected fill element with SBR data, trying to resync!
FAAD: error: Bitstream value not allowed by specification, trying to resync!
FAAD: error: Bitstream value not allowed by specification, trying to resync!
FAAD: error: Channel coupling not yet implemented, trying to resync!
FAAD: error: Invalid number of channels, trying to resync!
FAAD: error: PCE shall be the first element in a frame, trying to resync!
FAAD: error: Invalid number of channels, trying to resync!
FAAD: Failed to decode frame: Invalid number of channels
A:3602.2 V:3600.1 A-V: 2.063 ct: 0.000 4/ 4 ??% ??% ??,?% 0 0
These are the messages produced by VLC (10 seconds of playback):
ts info: MPEG-4 descriptor not found for pid 0x42 type 0xf
core error: option sub-original-fps does not exist
subtitle warning: failed to recognize subtitle type
core error: no suitable demux module for `file/subtitle:///tmp//test.avi.idx'
avcodec info: Using NVIDIA VDPAU Driver Shared Library 361.42 Tue Mar 22 17:29:16 PDT 2016 for hardware decoding.
core warning: VoutDisplayEvent 'pictures invalid'
core warning: VoutDisplayEvent 'pictures invalid'
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio warning: ADTS CRC not supported
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio warning: ADTS CRC not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio warning: Invalid ADTS header
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
packetizer_mpeg4audio error: Multiple blocks per frame in ADTS not supported
Using the error from the hanging pipeline, I finally discovered that avenc_aac should be told to output the data NOT as raw AAC but as ADTS AAC; the point is that I have no idea how to do that with GStreamer. See here, bottom of the page: FFMPEG Ticket
At this point, since I've found no documentation, it seems fair to say that there is no support for AAC encoding in GStreamer... which isn't true, I guess! (IMHO it still seems strange that FAAC is missing if avenc_aac always has to be put into experimental mode.)
Can someone propose a working pipeline for this?
UPDATE
After some more research I've found (via gst-inspect on avenc_aac) what I'm probably looking for, but I don't know how to set it up as needed.
Have a look at stream-format:

Pad Templates:
SRC template: 'src'
Availability: Always
Capabilities:
audio/mpeg
channels: [ 1, 6 ]
rate: [ 4000, 96000 ]
mpegversion: 4
stream-format: raw
base-profile: lc
Thanks