
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (7)
-
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed to retrieve the data needed for search-engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
Improving the basic version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images for a comparison.
To use it, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen), enabling the use of Chosen on the public site and specifying which form elements to improve, for example select[multiple] for multiple-selection lists (...) -
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
On other sites (2466)
-
How to use timeline editing with a single image input in ffmpeg?
29 September 2017, by Serge
A small image should be animated over a background video in a simple way:
- change position: move along a straight line, no easing, starting at frame A and ending at frame B (i.e. frames 11 to 31);
- zoom in: between frames C and D (i.e. frames 45 and 55).
Filters I intend to use:
- the overlay filter has x and y parameters for the image position;
- the zoompan filter allows zooming (preceded by a static scale-up to avoid jitter).
My filtergraph :
video.avi >----------------------------------->|-------|
                                               |overlay|-> out.mp4
image.png >-> scale >-> zoompan >-> zoompan >->|-------|
The problem is with timeline editing. Both filters support the enable option. I thought I could add instructions like enable='between(n,11,31)' to "place" the animations at the right times.
It appears that the image input has only two values of n: zero and 1. I checked that by wrapping n with print(n) in the zoompan filter, so the values are printed during rendering.
Inside the overlay filter, by contrast, n runs through a sequence of numbers as expected.
Question: how can I make the single image input "look" like a normal video stream to the ffmpeg filters, so that every generated frame gets its own unique number?
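One workaround worth sketching (an assumption on my part, not a verified fix): drive enable by the timestamp t rather than the frame number n, since timestamps keep advancing for a looped image even when frame counters do not. At 25 fps, frames 11 to 31 correspond roughly to t between 0.44 and 1.24:
ffmpeg -i $FOOTAGE -loop 1 -i $IMAGE -filter_complex \
"
[1:v]
scale=10*iw:-2
,zoompan=
z='min(pzoom+1.3/18,2.3)'
:x='iw/2-(iw/zoom/2)'
:y='ih/2-(ih/zoom/2)'
:d=1
:s=500x100
[name];
[0:v][name]
overlay=
x=1005-250
:y=406-50
:enable='between(t,0.44,1.24)'
" -t 7 -y -hide_banner out.mp4
Here pzoom (the zoom computed for the previous input frame) together with d=1 lets the zoom accumulate across the looped frames, and between(t,...) replaces the frame-number test.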
One of my latest tests follows. The video is hd720; the image is a 1000x200 transparent PNG with the logo occupying an area of about 150x50 in the center, so that it is not cropped out when zoomed in.
ffmpeg -i $FOOTAGE -loop 1 -i $IMAGE -filter_complex \
"
[1:v]
scale=10*iw:-2
,zoompan=
z='1'
:x='iw/2-(iw/zoom/2)+80'
:y='ih/2-(ih/zoom/2)'
:d=26
:s=500x100
:enable='lt(print(n),24)'
,zoompan=
z='min(zoom+1.3/18,2.3)'
:x='iw/2-(iw/zoom/2)'
:y='ih/2-(ih/zoom/2)'
:d=20
:s=500x100
:enable='between(n,24,42)'
[name];
[0:v][name]
overlay=
x=1005-250
:y=406-50
:enable='lte(n,173)'
" -t 7 -y -hide_banner out.mp4 -
How to Read DJI H264 FPV Feed as OpenCV Mat Object?
29 May 2019, by Walter Morawa
TL;DR: All DJI developers would benefit from decoding raw H264 video stream byte arrays to a format compatible with OpenCV.
I’ve spent a lot of time looking for a solution to reading DJI’s FPV feed as an OpenCV Mat object. I am probably overlooking something fundamental, since I am not too familiar with Image Encoding/Decoding.
Future developers who come across this will likely run into a bunch of the same issues I had. It would be great if DJI developers could use OpenCV directly without needing a third-party library.
I’m willing to use ffmpeg or JavaCV if necessary, but that’s quite the hurdle for most Android developers as we’re going to have to use cpp, ndk, terminal for testing, etc. That seems like overkill. Both options seem quite time consuming. This JavaCV H264 conversion seems unnecessarily complex. I found it from this relevant question.
I believe the issue lies in the fact that we need to decode both the byte array of length 6 (info array) and the byte array with current frame info simultaneously.
Basically, DJI’s FPV feed comes in a number of formats.
- Raw H264 (MPEG4) in VideoFeeder.VideoDataListener
// The callback for receiving the raw H264 video data for camera live view
mReceivedVideoDataListener = new VideoFeeder.VideoDataListener() {
@Override
public void onReceive(byte[] videoBuffer, int size) {
//Log.d("BytesReceived", Integer.toString(videoStreamFrameNumber));
if (videoStreamFrameNumber++%30 == 0){
//convert video buffer to opencv array
OpenCvAndModelAsync openCvAndModelAsync = new OpenCvAndModelAsync();
openCvAndModelAsync.execute(videoBuffer);
}
if (mCodecManager != null) {
mCodecManager.sendDataToDecoder(videoBuffer, size);
}
}
};
- DJI also has its own Android decoder sample with FFMPEG to convert to YUV format.
@Override
public void onYuvDataReceived(final ByteBuffer yuvFrame, int dataSize, final int width, final int height) {
//In this demo, we test the YUV data by saving it into JPG files.
//DJILog.d(TAG, "onYuvDataReceived " + dataSize);
if (count++ % 30 == 0 && yuvFrame != null) {
final byte[] bytes = new byte[dataSize];
yuvFrame.get(bytes);
AsyncTask.execute(new Runnable() {
@Override
public void run() {
if (bytes.length >= width * height) {
Log.d("MatWidth", "Made it");
YuvImage yuvImage = saveYuvDataToJPEG(bytes, width, height);
Bitmap rgbYuvConvert = convertYuvImageToRgb(yuvImage, width, height);
Mat yuvMat = new Mat(height, width, CvType.CV_8UC1);
yuvMat.put(0, 0, bytes);
//OpenCv Stuff
}
}
});
}
}
Edit: For those who want to see DJI's YUV-to-JPEG function, here it is from the sample application:
private YuvImage saveYuvDataToJPEG(byte[] yuvFrame, int width, int height){
byte[] y = new byte[width * height];
byte[] u = new byte[width * height / 4];
byte[] v = new byte[width * height / 4];
byte[] nu = new byte[width * height / 4]; //
byte[] nv = new byte[width * height / 4];
System.arraycopy(yuvFrame, 0, y, 0, y.length);
Log.d("MatY", y.toString());
for (int i = 0; i < u.length; i++) {
v[i] = yuvFrame[y.length + 2 * i];
u[i] = yuvFrame[y.length + 2 * i + 1];
}
int uvWidth = width / 2;
int uvHeight = height / 2;
for (int j = 0; j < uvWidth / 2; j++) {
for (int i = 0; i < uvHeight / 2; i++) {
byte uSample1 = u[i * uvWidth + j];
byte uSample2 = u[i * uvWidth + j + uvWidth / 2];
byte vSample1 = v[(i + uvHeight / 2) * uvWidth + j];
byte vSample2 = v[(i + uvHeight / 2) * uvWidth + j + uvWidth / 2];
nu[2 * (i * uvWidth + j)] = uSample1;
nu[2 * (i * uvWidth + j) + 1] = uSample1;
nu[2 * (i * uvWidth + j) + uvWidth] = uSample2;
nu[2 * (i * uvWidth + j) + 1 + uvWidth] = uSample2;
nv[2 * (i * uvWidth + j)] = vSample1;
nv[2 * (i * uvWidth + j) + 1] = vSample1;
nv[2 * (i * uvWidth + j) + uvWidth] = vSample2;
nv[2 * (i * uvWidth + j) + 1 + uvWidth] = vSample2;
}
}
//nv21test
byte[] bytes = new byte[yuvFrame.length];
System.arraycopy(y, 0, bytes, 0, y.length);
for (int i = 0; i < u.length; i++) {
bytes[y.length + (i * 2)] = nv[i];
bytes[y.length + (i * 2) + 1] = nu[i];
}
Log.d(TAG,
"onYuvDataReceived: frame index: "
+ DJIVideoStreamDecoder.getInstance().frameIndex
+ ",array length: "
+ bytes.length);
YuvImage yuver = screenShot(bytes,Environment.getExternalStorageDirectory() + "/DJI_ScreenShot", width, height);
return yuver;
}
/**
* Save the buffered data into a JPG image file
*/
private YuvImage screenShot(byte[] buf, String shotDir, int width, int height) {
File dir = new File(shotDir);
if (!dir.exists() || !dir.isDirectory()) {
dir.mkdirs();
}
YuvImage yuvImage = new YuvImage(buf,
ImageFormat.NV21,
width,
height,
null);
OutputStream outputFile = null;
final String path = dir + "/ScreenShot_" + System.currentTimeMillis() + ".jpg";
try {
outputFile = new FileOutputStream(new File(path));
} catch (FileNotFoundException e) {
Log.e(TAG, "test screenShot: new bitmap output file error: " + e);
//return;
}
if (outputFile != null) {
yuvImage.compressToJpeg(new Rect(0,
0,
width,
height), 100, outputFile);
}
try {
outputFile.close();
} catch (IOException e) {
Log.e(TAG, "test screenShot: compress yuv image error: " + e);
e.printStackTrace();
}
runOnUiThread(new Runnable() {
@Override
public void run() {
displayPath(path);
}
});
return yuvImage;
}
- DJI also appears to have a "getRgbaData" function, but there is literally not a single example online or from DJI. Go ahead and Google "DJI getRgbaData"... there is only the reference in the API documentation, which explains the self-explanatory parameters and return values but nothing else. I couldn't figure out where to call this, and there doesn't appear to be a callback function as there is with YUV. You can't call it from the H264 byte array directly, but perhaps you can get it from the YUV data.
Option 1 is much preferable to option 2, since the YUV format has quality issues. Option 3 would also likely involve a decoder.
Here’s a screenshot that DJI’s own YUV conversion produces.
I’ve looked at a bunch of things about how to improve the YUV, remove green and yellow colors and whatnot, but at this point if DJI can’t do it right, I don’t want to invest resources there.
Regarding Option 1, I know there’s FFMPEG and JavaCV that seem like good options if I have to go the video decoding route.
Moreover, from what I understand, OpenCV can't handle reading and writing video files without FFMPEG; but I'm not trying to read a video file, I am trying to read an H264/MPEG4 byte[] array. The following code seems to get positive results.
/* Async OpenCV Code */
private class OpenCvAndModelAsync extends AsyncTask<byte[], Void, double[]> {
@Override
protected double[] doInBackground(byte[]... params) {//Background Code Executing. Don't touch any UI components
//get fpv feed and convert bytes to mat array
Mat videoBufMat = new Mat(4, params[0].length, CvType.CV_8UC4);
videoBufMat.put(0,0, params[0]);
//if I add this in it says the bytes are empty.
//Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_ANYCOLOR);
//encodeVideoBuf.release();
Log.d("MatRgba", videoBufMat.toString());
for (int i = 0; i< videoBufMat.rows(); i++){
for (int j=0; j< videoBufMat.cols(); j++){
double[] rgb = videoBufMat.get(i, j);
Log.i("Matrix", "red: "+rgb[0]+" green: "+rgb[1]+" blue: "+rgb[2]+" alpha: "
+ rgb[3] + " Length: " + rgb.length + " Rows: "
+ videoBufMat.rows() + " Columns: " + videoBufMat.cols());
}
}
double[] center = openCVThingy(videoBufMat);
return center;
}
protected void onPostExecute(double[] center) {
//handle ui or another async task if necessary
}
}
Rows = 4, Columns > 30k. I get lots of RGB values that seem valid, such as red=113, green=75, blue=90, alpha=220 (a made-up example); however, I also get a ton of 0,0,0,0 values. That should be somewhat okay, since black is 0,0,0 (although I would have thought the alpha would be higher) and I have a black object in my image. I also don't seem to get any white values (255, 255, 255), even though there is plenty of white area. I'm not logging the entire byte array, so they could be there, but I have yet to see one.
However, when I try to compute the contours from this image, I almost always get that the moments (center x, y) are exactly in the center of the image. This error has nothing to do with my color filter or contours algorithm: I wrote a script in Python and verified that I implemented it correctly in Android by reading a still image and getting the exact same number of contours, positions, etc. in both Python and Android.
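For what it's worth, here is a minimal sketch of an alternative I would expect to behave better (an assumption on my part, not DJI's documented path): hand the compressed buffers to Android's own MediaCodec H264 decoder, and only wrap the decoded YUV plane in a Mat afterwards. The stream size (1280x720) and the NV21 color conversion below are assumptions that would need checking against the actual stream:
// Sketch: decode the raw H264 buffers before touching OpenCV. The compressed
// bitstream is not pixel data, so new Mat(4, len, CV_8UC4) only reinterprets
// compressed bytes, which would explain the noise-like values above.
// Uses android.media.MediaCodec / MediaFormat and java.nio.ByteBuffer.
MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720); // assumed size
decoder.configure(format, null, null, 0); // no Surface, so YUV comes back in ByteBuffers
decoder.start();

// Feed one DJI buffer (videoBuffer, size) from onReceive():
int inIndex = decoder.dequeueInputBuffer(10000);
if (inIndex >= 0) {
    ByteBuffer in = decoder.getInputBuffer(inIndex);
    in.clear();
    in.put(videoBuffer, 0, size);
    decoder.queueInputBuffer(inIndex, 0, size, System.nanoTime() / 1000, 0);
}

// Drain one decoded frame, if available:
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int outIndex = decoder.dequeueOutputBuffer(info, 10000);
if (outIndex >= 0) {
    ByteBuffer out = decoder.getOutputBuffer(outIndex);
    byte[] yuv = new byte[info.size];
    out.get(yuv);
    // A YUV420 frame is width x height luma plus half-resolution chroma,
    // i.e. a single-channel Mat with height * 3 / 2 rows:
    Mat yuvMat = new Mat(720 * 3 / 2, 1280, CvType.CV_8UC1);
    yuvMat.put(0, 0, yuv);
    Mat rgba = new Mat();
    Imgproc.cvtColor(yuvMat, rgba, Imgproc.COLOR_YUV2RGBA_NV21); // color code is an assumption
    decoder.releaseOutputBuffer(outIndex, false);
}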
I noticed it has something to do with the videoBuffer byte size (bonus points if you can explain why every other length is 6):
2019-05-23 21:14:29.601 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2425
2019-05-23 21:14:29.802 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2659
2019-05-23 21:14:30.004 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:30.263 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6015
2019-05-23 21:14:30.507 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:30.766 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4682
2019-05-23 21:14:31.005 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:31.234 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2840
2019-05-23 21:14:31.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4482
2019-05-23 21:14:31.664 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:31.927 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4768
2019-05-23 21:14:32.174 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:32.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4700
2019-05-23 21:14:32.668 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:32.864 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4740
2019-05-23 21:14:33.102 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:33.365 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4640
My questions:
I. Is this the correct format to read an H264 byte array as a Mat?
Assuming the format is RGBA, that means rows = 4, columns = byte[].length, and CvType.CV_8UC4. Do I have the height and width correct? Something tells me the YUV height and width are off. I was getting some meaningful results, but the contours were exactly in the center, just like with the H264.
II. Does OpenCV handle MP4 in Android like this? If not, do we need to use FFMPEG or JavaCV?
III. Does the int size have something to do with it? Why is the int size occasionally 6, and other times 2400 to 6000? I've heard about the difference between this frame's information and information about the next frame, but I'm simply not knowledgeable enough to know how to apply that here.
I'm starting to think this is where the issue lies. Since I need to get the 6-byte array with the info about the next frame, perhaps my modulo 30 is incorrect. Should I pass the 29th or 31st frame as a format byte for each frame? How is that done in OpenCV, or are we doomed to use the complicated FFMPEG? How would I go about joining the neighboring frames/byte arrays? (A quick way to check what those 6-byte buffers actually are is sketched after question VI.)
IV. Can I fix this using Imgcodecs? I was hoping OpenCV would natively handle whether a buffer was color data from this frame or info about the next frame. I added the code below, but I am getting an empty array:
Mat videoBufMat = Imgcodecs.imdecode(new MatOfByte(params[0]), Imgcodecs.IMREAD_UNCHANGED);
This is also empty:
Mat encodeVideoBuf = new Mat(4, params[0].length, CvType.CV_8UC4);
encodeVideoBuf.put(0,0, params[0]);
Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_UNCHANGED);
V. Should I try converting the bytes into an Android JPEG and then importing it? Why does DJI's YUV decoder look so complicated? It makes me wary of trying FFMPEG or JavaCV, and inclined to just stick with the Android decoder or the OpenCV decoder.
VI. At what stage should I resize the frames to speed up calculations?
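Regarding question III, here is a quick check (a sketch based on general H264 bitstream knowledge, not on anything DJI documents): the recurring 6-byte buffers are plausibly parameter-set NAL units (SPS/PPS) rather than picture data, and the NAL type can be read straight off the bytes:
// Hypothetical helper: report the H264 NAL unit type of a DJI buffer.
// NAL units begin with a start code 00 00 00 01 (or 00 00 01); the low
// five bits of the next byte give the type (7 = SPS, 8 = PPS, 5 = IDR slice).
static int nalType(byte[] buf) {
    int i = -1;
    if (buf.length >= 5 && buf[0] == 0 && buf[1] == 0 && buf[2] == 0 && buf[3] == 1) {
        i = 4;
    } else if (buf.length >= 4 && buf[0] == 0 && buf[1] == 0 && buf[2] == 1) {
        i = 3;
    }
    return i < 0 ? -1 : (buf[i] & 0x1F);
}
// e.g. inside onReceive(): Log.d("NalType", Integer.toString(nalType(videoBuffer)));
If they do turn out to be SPS/PPS, they need to reach the decoder along with the frames rather than being dropped by the modulo-30 sampling.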
Edit: DJI support got back to me and confirmed that they don't have any samples for doing what I've described. This is a time for us, the community, to make this available for everyone!
Upon further research, I don't think OpenCV will be able to handle this, as OpenCV's Android SDK has no functionality for video files/URLs (apart from a homegrown MJPEG codec).
So is there a way in Android to convert to MJPEG or similar in order to read the stream? In my application I only need 1 or 2 frames per second, so perhaps I can save each image as a JPEG.
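At 1 or 2 frames per second, one path that should work is to compress each NV21 frame to a JPEG in memory and let OpenCV decode that (a sketch assuming the YUV callback shown earlier). Imgcodecs.imdecode understands still-image formats such as JPEG, not a raw H264 elementary stream, which would also explain the empty Mat in question IV:
// Sketch: inside onYuvDataReceived(), after copying the frame into bytes[].
// Compress the NV21 frame to an in-memory JPEG (java.io.ByteArrayOutputStream),
// then hand the JPEG bytes to OpenCV.
YuvImage yuvImage = new YuvImage(bytes, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream jpegStream = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 90, jpegStream);
Mat frame = Imgcodecs.imdecode(new MatOfByte(jpegStream.toByteArray()),
        Imgcodecs.IMREAD_COLOR); // a BGR Mat, ready for contour detection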
But for real-time applications we will likely need to write our own decoder. Please help so that we can make this available to everyone! This question seems promising:
-
Troubleshooting ffmpeg/ffplay client RTSP RTP UDP * multicast * issue
6 November 2020, by MAXdB
I'm having a problem using the udp_multicast transport method with ffmpeg or ffplay as a client to a webcam.


TCP transport works:


ffplay -rtsp_transport tcp rtsp://192.168.1.100/videoinput_1/mjpeg_3/media.stm



UDP transport works:


ffplay -rtsp_transport udp rtsp://192.168.1.100/videoinput_1/mjpeg_3/media.stm



Multicast transport does not work:


ffplay -rtsp_transport udp_multicast rtsp://192.168.1.100/videoinput_1/mjpeg_3/media.stm



The error message when udp_multicast is chosen reads:


[rtsp @ 0x7fd6a8000b80] Could not find codec parameters for stream 0 (Video: mjpeg, none(bt470bg/unknown/unknown)): unspecified size



Run with -v debug: observe that the UDP multicast information appears in the SDP even though the chosen transport is unicast for this run. The SDP content is unchanged for unicast or multicast.


[tcp @ 0x7f648c002f40] Starting connection attempt to 192.168.1.100 port 554
[tcp @ 0x7f648c002f40] Successfully connected to 192.168.1.100 port 554
[rtsp @ 0x7f648c000b80] SDP:
v=0
o=- 621355968671884050 621355968671884050 IN IP4 192.168.1.100
s=/videoinput_1:0/mjpeg_3/media.stm
c=IN IP4 0.0.0.0
m=video 40004 RTP/AVP 26
c=IN IP4 237.0.0.3/1
a=control:trackID=1
a=range:npt=0-
a=framerate:25.0

Failed to parse interval end specification ''
[rtp @ 0x7f648c008e00] No default whitelist set
[udp @ 0x7f648c009900] No default whitelist set
[udp @ 0x7f648c009900] end receive buffer size reported is 425984
[udp @ 0x7f648c019c80] No default whitelist set
[udp @ 0x7f648c019c80] end receive buffer size reported is 425984
[rtsp @ 0x7f648c000b80] setting jitter buffer size to 500
[rtsp @ 0x7f648c000b80] hello state=0
Failed to parse interval end specification ''
[mjpeg @ 0x7f648c0046c0] marker=d8 avail_size_in_buf=145103 
[mjpeg @ 0x7f648c0046c0] marker parser used 0 bytes (0 bits)
[mjpeg @ 0x7f648c0046c0] marker=e0 avail_size_in_buf=145101
[mjpeg @ 0x7f648c0046c0] marker parser used 16 bytes (128 bits)
[mjpeg @ 0x7f648c0046c0] marker=db avail_size_in_buf=145083
[mjpeg @ 0x7f648c0046c0] index=0
[mjpeg @ 0x7f648c0046c0] qscale[0]: 5
[mjpeg @ 0x7f648c0046c0] index=1
[mjpeg @ 0x7f648c0046c0] qscale[1]: 10
[mjpeg @ 0x7f648c0046c0] marker parser used 132 bytes (1056 bits)
[mjpeg @ 0x7f648c0046c0] marker=c4 avail_size_in_buf=144949
[mjpeg @ 0x7f648c0046c0] marker parser used 0 bytes (0 bits)
[mjpeg @ 0x7f648c0046c0] marker=c0 avail_size_in_buf=144529
[mjpeg @ 0x7f648c0046c0] Changing bps from 0 to 8
[mjpeg @ 0x7f648c0046c0] sof0: picture: 1920x1080
[mjpeg @ 0x7f648c0046c0] component 0 2:2 id: 0 quant:0
[mjpeg @ 0x7f648c0046c0] component 1 1:1 id: 1 quant:1
[mjpeg @ 0x7f648c0046c0] component 2 1:1 id: 2 quant:1
[mjpeg @ 0x7f648c0046c0] pix fmt id 22111100
[mjpeg @ 0x7f648c0046c0] Format yuvj420p chosen by get_format().
[mjpeg @ 0x7f648c0046c0] marker parser used 17 bytes (136 bits)
[mjpeg @ 0x7f648c0046c0] escaping removed 676 bytes
[mjpeg @ 0x7f648c0046c0] marker=da avail_size_in_buf=144510
[mjpeg @ 0x7f648c0046c0] marker parser used 143834 bytes (1150672 bits)
[mjpeg @ 0x7f648c0046c0] marker=d9 avail_size_in_buf=2
[mjpeg @ 0x7f648c0046c0] decode frame unused 2 bytes
[rtsp @ 0x7f648c000b80] All info found vq= 0KB sq= 0B f=0/0
[rtsp @ 0x7f648c000b80] rfps: 24.416667 0.018101
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 24.500000 0.013298
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 24.583333 0.009235
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 24.666667 0.005910
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 24.750000 0.003324
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 24.833333 0.001477
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 24.916667 0.000369
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.000000 0.000000
[rtsp @ 0x7f648c000b80] rfps: 25.083333 0.000370
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.166667 0.001478
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.250000 0.003326
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.333333 0.005912
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.416667 0.009238
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.500000 0.013302
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.583333 0.018105
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 50.000000 0.000000
[rtsp @ 0x7f648c000b80] Setting avg frame rate based on r frame rate
Input #0, rtsp, from 'rtsp://192.168.1.100/videoinput_1/mjpeg_3/media.stm':
 Metadata:
 title : /videoinput_1:0/mjpeg_3/media.stm
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0, 21, 1/90000: Video: mjpeg (Baseline), 1 reference frame, yuvj420p(pc, bt470bg/unknown/unknown, center), 1920x1080 [SAR 1:1 DAR 16:9], 0/1, 25 fps, 25 tbr, 90k tbn, 90k tbc
[mjpeg @ 0x7f648c02ad80] marker=d8 avail_size_in_buf=145103



Here is the same debug section when using udp_multicast. The SDP is identical, as mentioned, and the block after the SDP containing the [mjpeg] codec info (beginning with marker=d8) is entirely missing: the stream is never identified. To the eye this happens instantaneously; there is no indication of a timeout spent waiting unsuccessfully for an RTP packet, though that, too, could just be insufficient debug output from the driver. Also note that ffmpeg knows the frames are MJPEG and that the color primaries are PAL; it just doesn't know the size. Also curious, though not relevant to the problem: the unicast UDP destination port used for the stream does not appear in the ffmpeg debug dump shown above, meaning part of the RTSP/RTP driver is hiding important information under the kimono, namely that port number and how it knows the frames will be MJPEG.


[tcp @ 0x7effe0002f40] Starting connection attempt to 192.168.1.100 port 554
[tcp @ 0x7effe0002f40] Successfully connected to 192.168.1.100 port 554
[rtsp @ 0x7effe0000b80] SDP:aq= 0KB vq= 0KB sq= 0B f=0/0
v=0
o=- 621355968671884050 621355968671884050 IN IP4 192.168.1.100
s=/videoinput_1:0/mjpeg_3/media.stm
c=IN IP4 0.0.0.0
m=video 40004 RTP/AVP 26
c=IN IP4 237.0.0.3/1
a=control:trackID=1
a=range:npt=0-
a=framerate:25.0

Failed to parse interval end specification ''
[rtp @ 0x7effe0008e00] No default whitelist set
[udp @ 0x7effe0009900] No default whitelist set
[udp @ 0x7effe0009900] end receive buffer size reported is 425984
[udp @ 0x7effe0019c40] No default whitelist set
[udp @ 0x7effe0019c40] end receive buffer size reported is 425984
[rtsp @ 0x7effe0000b80] setting jitter buffer size to 500
[rtsp @ 0x7effe0000b80] hello state=0
Failed to parse interval end specification '' 
[rtsp @ 0x7effe0000b80] Could not find codec parameters for stream 0 (Video: mjpeg, 1 reference frame, none(bt470bg/unknown/unknown, center)): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, rtsp, from 'rtsp://192.168.1.100/videoinput_1/mjpeg_3/media.stm':
 Metadata:
 title : /videoinput_1:0/mjpeg_3/media.stm
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0, 0, 1/90000: Video: mjpeg, 1 reference frame, none(bt470bg/unknown/unknown, center), 90k tbr, 90k tbn, 90k tbc
 nan M-V: nan fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0



This is the TCPDUMP of the traffic. The information in both streams appears identical.


19:21:30.703599 IP 192.168.1.100.64271 > 192.168.1.98.5239: UDP, length 60
19:21:30.703734 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.703852 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704326 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704326 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704327 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704327 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704504 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704813 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704814 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704872 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 732
19:21:30.704873 IP 192.168.1.100.59869 > 237.0.0.3.40005: UDP, length 60
19:21:30.705513 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.705513 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.705513 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.705513 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.705594 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.705774 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.706236 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.706236 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.706236 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.706236 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 732



I hope this is a configuration problem that I can fix on my ffplay/ffmpeg command line, and not a bug in ffmpeg. Thanks for any tips.
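Two things might be worth trying, offered as sketches rather than verified fixes. First, the debug output itself suggests raising the probing limits, which would rule out the prober giving up before a complete JPEG frame arrives:

ffplay -rtsp_transport udp_multicast -analyzeduration 10M -probesize 10M rtsp://192.168.1.100/videoinput_1/mjpeg_3/media.stm

Second, since the SDP already carries the multicast address and port, RTSP can be bypassed entirely to isolate the problem: save the SDP block from the debug dump into a file (for example stream.sdp) and open that directly, so that ffplay joins 237.0.0.3:40004 itself:

ffplay -protocol_whitelist file,udp,rtp stream.sdp

If the SDP file plays, the RTSP layer's multicast setup is the suspect; if it does not, the multicast routing/IGMP path is.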