
Other articles (24)
-
Automatic backup of SPIP channels
1 April 2010, by
When setting up an open platform, it is important for hosts to have fairly regular backups available in order to guard against any potential problem.
This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database in the form of a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)
-
The SPIPmotion queue
28 November 2010, by
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)
-
Videos
21 April 2011, by
Like "audio"-type documents, Mediaspip displays videos wherever possible using the HTML5 <video> tag.
One drawback of this tag is that it is not correctly recognized by some browsers (Internet Explorer, to name no names) and that each browser natively supports only certain video formats.
Its main advantage, for its part, is that it benefits from native video support in browsers and therefore does away with the use of Flash and (...)
On other sites (2124)
-
Using ImageMagick to efficiently stitch together a line scan image
2 October 2018, by rkantos
I'm looking for alternatives to line scan cameras for use in sports timing, or rather for the part where placing needs to be figured out. I found that common industrial cameras can readily match the speed of commercial camera solutions at >1000 frames per second. For my needs, the timing accuracy is usually not important; what matters is the relative placing of athletes. I figured I could use one of the cheapest Basler, IDS or any other area scan industrial cameras for this purpose. Of course there are line scan cameras that can do a lot more than a few thousand fps (or Hz), but it is possible to get area scan cameras that can do the required 1000-3000fps for less than 500€.
My holy grail would of course be the near-real-time image composition capabilities of FinishLynx (or any other line scan system), basically this part: https://youtu.be/7CWZvFcwSEk?t=23s
The whole process I was thinking of for my alternative is:
- Use Basler Pylon Viewer (or other software) to record 2px wide images at the camera's fastest read speed. For the camera I am currently using, this means it has to be turned on its side and the height needs to be reduced, since that is the only way it will read 1920x2px frames @ >250fps.
- Make a program or batch script that stitches these 1920x2px frames together, so that for example one second of recording (1000 frames of 1920x2px) yields a single image with a resolution of 1920x2000px (horizontal x vertical).
- Finally, using the same program or another tool, rotate the image so it reflects how the camera is positioned, thus achieving an image with a resolution of 2000x1920px (again horizontal x vertical).
- Open the image in an analysis program (currently ImageJ) to quickly analyze the results.
I am no programmer, but this is what I was able to put together just using batch scripts, with the help of stackoverflow of course.
- Currently, recording a whole 10 seconds, for example, to disk as a raw/MJPEG (avi/mkv) stream can be done in real time.
- Recording individual frames as TIFF or BMP, or using FFmpeg to save them as PNG or JPG, takes 20-60 seconds. The appending and rotation then take a further 45-60 seconds.
This all needs to be achieved in less than 60 seconds for 10 seconds of footage (1000-3000fps @ 10s = 10000-30000 frames), which is why I need something faster.
I was able to figure out how to be pretty efficient with ImageMagick:
magick convert -limit file 16384 -limit memory 8GiB -interlace Plane -quality 85 -append +rotate 270 "%folder%\Basler*.Tiff" "%out%"
# %out% has a .jpg filename that is dynamically built from the folder name and the number of frames.
This command works and gets me 10000 frames encoded in about 30 seconds on an i5-2520M (most of the processing seems to use only one thread, though, since it sits at about 25% CPU usage). This is the resulting image: https://i.imgur.com/OD4RqL7.jpg (19686x1928px)
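One idea for the single-thread bottleneck, untested here and assuming a bash shell with ImageMagick 7's magick binary rather than the batch setup above, would be to append the frames in parallel chunks and only merge, rotate and encode once at the end:

#!/bin/bash
# Sketch only: append the TIFF frames in parallel chunks, then merge the strips.
# Assumes the frames sort correctly by name; chunk size and output name are arbitrary.
frames=(Basler*.Tiff)
chunk=1000
n=0
for ((i=0; i<${#frames[@]}; i+=chunk)); do
    strip=$(printf 'strip_%03d.miff' "$n")
    magick "${frames[@]:i:chunk}" -append "$strip" &   # one chunk per background process
    n=$((n+1))
done
wait
# Single final pass: stack the intermediate strips, rotate and write the JPEG.
magick strip_*.miff -append +rotate 270 -interlace Plane -quality 85 out.jpg
rm -f strip_*.miff

Whether this actually helps would depend on disk throughput as much as on CPU.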
However, since recording to TIFF frames using Basler's Pylon Viewer takes so much longer than recording an MJPEG video stream, I would like to use the MJPEG (avi/mkv) file as a source for the appending. I noticed FFmpeg has an "image2pipe" format, which should be able to feed images directly to ImageMagick. I was not able to get this working, though:
$ ffmpeg.exe -threads 4 -y -i "Basler acA1920-155uc (21644989)_20180930_043754312.avi" -f image2pipe - | convert - -interlace Plane -quality 85 -append +rotate 270 "%out%" >> log.txt
ffmpeg version 3.4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 7.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth --enable-libmfx
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
Invalid Parameter - -interlace
[mjpeg @ 000000000046b0a0] EOI missing, emulating
Input #0, avi, from 'Basler acA1920-155uc (21644989)_20180930_043754312.avi’:
Duration: 00:00:50.02, start: 0.000000, bitrate: 1356 kb/s
Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc, bt470bg/unknown/unknown), 1920x2, 1318 kb/s, 200 fps, 200 tbr, 200 tbn, 200 tbc
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
Press [q] to stop, [?] for help
Output #0, image2pipe, to ‘pipe:’:
Metadata:
encoder : Lavf57.83.100
Stream #0:0: Video: mjpeg, yuvj422p(pc), 1920x2, q=2-31, 200 kb/s, 200 fps, 200 tbn, 200 tbc
Metadata:
encoder : Lavc57.107.100 mjpeg
Side data:
cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
av_interleaved_write_frame(): Invalid argument
Error writing trailer of pipe:: Invalid argument
frame= 1 fps=0.0 q=1.6 Lsize= 0kB time=00:00:00.01 bitrate= 358.4kbits/s speed=0.625x
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
Conversion failed!
If I go a bit higher for the height, I no longer get the "[mjpeg @ 000000000046b0a0] EOI missing, emulating" error. However, the whole thing will only work with <2px high/wide footage.
Edit: Oh yes, I can also use
ffmpeg -i file.mpg -r 1/1 $filename%03d.bmp
or
ffmpeg -i file.mpg $filename%03d.bmp
to extract all the frames from the MJPEG/RAW stream. However, this is an extra step I do not want to take (just deleting a folder of 30000 JPGs takes 2 minutes by itself…). Can someone think of a working solution for the piping method, or a totally different way of handling this?
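For what it's worth, the "Invalid Parameter - -interlace" line in the log above looks like the output of Windows' own convert.exe (the filesystem utility), which would mean ImageMagick's convert may not even be the program receiving the pipe. The commonly cited form of this pipe uses ImageMagick 7's magick entry point and an image format it can read as a multi-image stream, such as PPM; a minimal, untested sketch:

ffmpeg -i "Basler acA1920-155uc (21644989)_20180930_043754312.avi" -f image2pipe -vcodec ppm - | magick ppm:- -append +rotate 270 -interlace Plane -quality 85 "%out%"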
-
Small Time DevOps
1 January 2021, by Multimedia Mike — General
When you are a certain type of nerd who has been on the internet for long enough, you might run the risk of accumulating a lot of projects and websites. Website-wise, I have this multimedia.cx domain on which I host a bunch of ancient static multimedia documents as well as this PHP/MySQL-based blog. Further, there are 3 other PHP/MySQL-based blogs hosted on subdomains. Also, there is the wiki, another PHP/MySQL web app. A few other custom PHP- and Python-based apps are running around on the server as well.
While things largely run on auto-pilot, I need to concern myself every now and then with their ongoing upkeep.
If you ask N different people about the meaning of the term ‘DevOps’, you will surely get N different definitions. However, whenever I have to perform VM maintenance, I like to think I am at least dipping my toes into the DevOps domain. At the very least, the job seems to be concerned with making infrastructure setup and upgrades reliable and repeatable.
Even if it’s not fully automated, at the very least, I have generated a lot of lists for how to make things work (I’m a big fan of Trello’s Kanban boards for this), so it gets easier every time (ideally, anyway).
Infrastructure History
For a solid decade, from 2004 to 2014, everything was hosted on shared, cPanel-based web hosting. In mid-2014, I moved from the shared hosting over to my own VPSs, hosted on DigitalOcean. I must have used Ubuntu 14.04 at the time, as I look down the list of Ubuntu LTS releases. It was with much trepidation that I undertook this task (knowing that anything that might go wrong with the stack, from the OS up to the apps, would all be firmly my fault), but it turned out not to be that bad. The earliest lesson you learn for such a small-time setup is to have a frontend VPS (web server) and a backend VPS (database server). That way, a surge in HTTP requests has no chance of crashing the database server due to depleted memory.
At the end of 2016, I decided to refresh the VMs. I brought them up to Ubuntu 16.04 at the time.
Earlier this year, I decided it would be a good idea to refresh the VMs again since it had been more than 3 years. The VMs were getting long in the tooth. Plus, I had seen an article speculating that Azure, another notable cloud hosting environment, might be getting full. It made me feel like I should grab some resources while I still could (resource-hoarding was in this year).
I decided to use 18.04 for these refreshed VMs, even though 20.04 was available. I think I was a little nervous about 20.04 because I heard weird things about something called snap packages being the new standard for distributing software for the platform and I wasn’t ready to take that plunge.
Which brings me to this month’s VM refresh in which I opted to take the 20.04 plunge.
Oh MediaWiki
I’ve been the maintainer and caretaker of the MultimediaWiki for 15 years now (wow! Where does the time go?). It doesn’t see a lot of updating these days, but I know it still serves as a resource for lots of obscure technical multimedia information. I still get requests for new accounts because someone has uncovered some niche technical data and wants to make sure it gets properly documented.
MediaWiki is quite an amazing bit of software and it undergoes constant development and improvement. According to the version history, I probably started the MultimediaWiki with the 1.5 series. As of this writing, 1.35 is the latest and therefore greatest lineage.
This pace of development can make it a bit of a chore to keep up to date. This was particularly true in the old days of the shared hosting when you didn’t have direct shell access and so it’s something you put off for a long time.
Honestly, to be fair, the upgrade process is pretty straightforward:
- Unpack a set of new files on top of the existing tree
- Run a PHP script to perform any database table upgrades
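For a tarball-based install, those two steps boil down to something like this (the version number and path here are placeholders, and a real run should of course start from a backup):

cd /var/www/wiki
wget https://releases.wikimedia.org/mediawiki/1.35/mediawiki-1.35.1.tar.gz
tar -xzf mediawiki-1.35.1.tar.gz --strip-components=1   # unpack the new files over the existing tree
php maintenance/update.php                              # perform any database table upgrades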
Pretty straightforward, assuming that there are no hiccups along the way, right? And the vast majority of the time, that’s the case. Until it’s not. I had an upgrade go south about a year and a half ago (mine wasn’t the only MW installation to have the problem at the time, I learned). While I do have proper backups, it still threw me for a loop and I worked for about an hour to restore the previous version of the site. That experience understandably left me a bit gun-shy about upgrading the wiki.
But upgrades must happen, especially when security notices come out. Eventually, I created a Trello template with a solid, 18-step checklist for upgrading MW as soon as a new version shows up. It’s still a chore, just not so nerve-wracking when the steps are all enumerated like that.
As I compose the post, I think I recall my impetus for wanting to refresh from the 16.04 VM. 16.04 used PHP 7.0. I wanted to upgrade to the latest MW, but if I tried to do so, it warned me that it needed PHP 7.4. So I initialized the new 18.04 VM as described above… only to realize that PHP 7.2 is the default on 18.04. You need to go all the way to 20.04 for 7.4 standard. I’m sure it’s possible to install later versions of PHP on 16.04 or 18.04, but I appreciate going with the defaults provided by the distro.
I figured I would just stay with MediaWiki 1.34 series and eschew 1.35 series (requiring PHP 7.4) for the time being… until I started getting emails that 1.34 would go end-of-life soon. Oh, and there are some critical security updates, but those are only for 1.35 (and also 1.31 series which is still stubbornly being maintained for some reason).
So here I am with a fresh Ubuntu 20.04 VM running PHP 7.4 and MediaWiki 1.35 series.
How Much Process?
Anyone who decides to host on VPSs vs., say, shared hosting is (or ought to be) versed on the matter that all your data is your own problem and that glitches sometimes happen and that your VM might just suddenly disappear. (Indeed, I’ve read rants about VMs disappearing and taking entire un-backed-up websites with them, and also watched as the ranters get no sympathy– "yeah, it’s a VM; the data is your responsibility") So I like to make sure I have enough notes so that I could bring up a new VM quickly if I ever needed to.
But the process involves a lot of manual steps. Sometimes I wonder if I need to use some automation software like Ansible in order to bring a new VM to life. Why do that if I only update the VM once every 1-3 years? Well, perhaps I should update more frequently in order to ensure the process is solid?
Seems like a lot of effort for a few websites which really don’t see much traffic in the grand scheme of things. But it still might be an interesting exercise and might be good preparation for some other websites I have in mind.
Besides, if I really wanted to go off the deep end, I would wrap everything up in containers and deploy using D-O’s managed Kubernetes solution.
The post Small Time DevOps first appeared on Breaking Eggs And Making Omelettes.
-
Troubleshooting ffmpeg/ffplay client RTSP RTP UDP * multicast * issue
6 November 2020, by MAXdB
I'm having a problem using the udp_multicast transport method with ffmpeg or ffplay as a client to a webcam.


TCP transport works:


ffplay -rtsp_transport tcp rtsp://192.168.1.100/videoinput_1/mjpeg_3/media.stm



UDP transport works:


ffplay -rtsp_transport udp rtsp://192.168.1.100/videoinput_1/mjpeg_3/media.stm



Multicast transport does not work:


ffplay -rtsp_transport udp_multicast rtsp://192.168.1.100/videoinput_1/mjpeg_3/media.stm



The error message when udp_multicast is chosen reads:


[rtsp @ 0x7fd6a8000b80] Could not find codec parameters for stream 0 (Video: mjpeg, none(bt470bg/unknown/unknown)): unspecified size



Run with -v debug: observe that the UDP multicast information appears in the SDP even though the chosen transport was unicast for this run. The SDP content is identical for unicast and multicast.


[tcp @ 0x7f648c002f40] Starting connection attempt to 192.168.1.100 port 554
[tcp @ 0x7f648c002f40] Successfully connected to 192.168.1.100 port 554
[rtsp @ 0x7f648c000b80] SDP:
v=0
o=- 621355968671884050 621355968671884050 IN IP4 192.168.1.100
s=/videoinput_1:0/mjpeg_3/media.stm
c=IN IP4 0.0.0.0
m=video 40004 RTP/AVP 26
c=IN IP4 237.0.0.3/1
a=control:trackID=1
a=range:npt=0-
a=framerate:25.0

Failed to parse interval end specification ''
[rtp @ 0x7f648c008e00] No default whitelist set
[udp @ 0x7f648c009900] No default whitelist set
[udp @ 0x7f648c009900] end receive buffer size reported is 425984
[udp @ 0x7f648c019c80] No default whitelist set
[udp @ 0x7f648c019c80] end receive buffer size reported is 425984
[rtsp @ 0x7f648c000b80] setting jitter buffer size to 500
[rtsp @ 0x7f648c000b80] hello state=0
Failed to parse interval end specification ''
[mjpeg @ 0x7f648c0046c0] marker=d8 avail_size_in_buf=145103 
[mjpeg @ 0x7f648c0046c0] marker parser used 0 bytes (0 bits)
[mjpeg @ 0x7f648c0046c0] marker=e0 avail_size_in_buf=145101
[mjpeg @ 0x7f648c0046c0] marker parser used 16 bytes (128 bits)
[mjpeg @ 0x7f648c0046c0] marker=db avail_size_in_buf=145083
[mjpeg @ 0x7f648c0046c0] index=0
[mjpeg @ 0x7f648c0046c0] qscale[0]: 5
[mjpeg @ 0x7f648c0046c0] index=1
[mjpeg @ 0x7f648c0046c0] qscale[1]: 10
[mjpeg @ 0x7f648c0046c0] marker parser used 132 bytes (1056 bits)
[mjpeg @ 0x7f648c0046c0] marker=c4 avail_size_in_buf=144949
[mjpeg @ 0x7f648c0046c0] marker parser used 0 bytes (0 bits)
[mjpeg @ 0x7f648c0046c0] marker=c0 avail_size_in_buf=144529
[mjpeg @ 0x7f648c0046c0] Changing bps from 0 to 8
[mjpeg @ 0x7f648c0046c0] sof0: picture: 1920x1080
[mjpeg @ 0x7f648c0046c0] component 0 2:2 id: 0 quant:0
[mjpeg @ 0x7f648c0046c0] component 1 1:1 id: 1 quant:1
[mjpeg @ 0x7f648c0046c0] component 2 1:1 id: 2 quant:1
[mjpeg @ 0x7f648c0046c0] pix fmt id 22111100
[mjpeg @ 0x7f648c0046c0] Format yuvj420p chosen by get_format().
[mjpeg @ 0x7f648c0046c0] marker parser used 17 bytes (136 bits)
[mjpeg @ 0x7f648c0046c0] escaping removed 676 bytes
[mjpeg @ 0x7f648c0046c0] marker=da avail_size_in_buf=144510
[mjpeg @ 0x7f648c0046c0] marker parser used 143834 bytes (1150672 bits)
[mjpeg @ 0x7f648c0046c0] marker=d9 avail_size_in_buf=2
[mjpeg @ 0x7f648c0046c0] decode frame unused 2 bytes
[rtsp @ 0x7f648c000b80] All info found vq= 0KB sq= 0B f=0/0
[rtsp @ 0x7f648c000b80] rfps: 24.416667 0.018101
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 24.500000 0.013298
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 24.583333 0.009235
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 24.666667 0.005910
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 24.750000 0.003324
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 24.833333 0.001477
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 24.916667 0.000369
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.000000 0.000000
[rtsp @ 0x7f648c000b80] rfps: 25.083333 0.000370
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.166667 0.001478
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.250000 0.003326
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.333333 0.005912
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.416667 0.009238
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.500000 0.013302
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 25.583333 0.018105
 Last message repeated 1 times
[rtsp @ 0x7f648c000b80] rfps: 50.000000 0.000000
[rtsp @ 0x7f648c000b80] Setting avg frame rate based on r frame rate
Input #0, rtsp, from 'rtsp://192.168.1.100/videoinput_1/mjpeg_3/media.stm':
 Metadata:
 title : /videoinput_1:0/mjpeg_3/media.stm
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0, 21, 1/90000: Video: mjpeg (Baseline), 1 reference frame, yuvj420p(pc, bt470bg/unknown/unknown, center), 1920x1080 [SAR 1:1 DAR 16:9], 0/1, 25 fps, 25 tbr, 90k tbn, 90k tbc
[mjpeg @ 0x7f648c02ad80] marker=d8 avail_size_in_buf=145103



Here is the same debug section when using udp_multicast. The SDP is identical, as mentioned, and the block after the SDP containing the [mjpeg] codec info (beginning with marker=d8) is entirely missing: the stream is never identified. To the eye this happens instantaneously; there is no indication of a timeout while waiting unsuccessfully for an RTP packet, though that, too, could just be insufficient debug output from the driver. Also note that ffmpeg knows the frames are MJPEG and that the color primaries are PAL; it just doesn't know the size. Also curious, but not relevant to the problem: the unicast UDP destination port used for the stream does not appear in the ffmpeg debug dump shown above, meaning part of the RTSP/RTP driver is keeping important information under the kimono, namely that port number and how it knows the frames will be MJPEG.


[tcp @ 0x7effe0002f40] Starting connection attempt to 192.168.1.100 port 554
[tcp @ 0x7effe0002f40] Successfully connected to 192.168.1.100 port 554
[rtsp @ 0x7effe0000b80] SDP:aq= 0KB vq= 0KB sq= 0B f=0/0
v=0
o=- 621355968671884050 621355968671884050 IN IP4 192.168.1.100
s=/videoinput_1:0/mjpeg_3/media.stm
c=IN IP4 0.0.0.0
m=video 40004 RTP/AVP 26
c=IN IP4 237.0.0.3/1
a=control:trackID=1
a=range:npt=0-
a=framerate:25.0

Failed to parse interval end specification ''
[rtp @ 0x7effe0008e00] No default whitelist set
[udp @ 0x7effe0009900] No default whitelist set
[udp @ 0x7effe0009900] end receive buffer size reported is 425984
[udp @ 0x7effe0019c40] No default whitelist set
[udp @ 0x7effe0019c40] end receive buffer size reported is 425984
[rtsp @ 0x7effe0000b80] setting jitter buffer size to 500
[rtsp @ 0x7effe0000b80] hello state=0
Failed to parse interval end specification '' 
[rtsp @ 0x7effe0000b80] Could not find codec parameters for stream 0 (Video: mjpeg, 1 reference frame, none(bt470bg/unknown/unknown, center)): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, rtsp, from 'rtsp://192.168.1.100/videoinput_1/mjpeg_3/media.stm':
 Metadata:
 title : /videoinput_1:0/mjpeg_3/media.stm
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0, 0, 1/90000: Video: mjpeg, 1 reference frame, none(bt470bg/unknown/unknown, center), 90k tbr, 90k tbn, 90k tbc
 nan M-V: nan fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0



This is the TCPDUMP of the traffic. The information in both streams appears identical.


19:21:30.703599 IP 192.168.1.100.64271 > 192.168.1.98.5239: UDP, length 60
19:21:30.703734 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.703852 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704326 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704326 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704327 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704327 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704504 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704813 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704814 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 1400
19:21:30.704872 IP 192.168.1.100.64270 > 192.168.1.98.5238: UDP, length 732
19:21:30.704873 IP 192.168.1.100.59869 > 237.0.0.3.40005: UDP, length 60
19:21:30.705513 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.705513 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.705513 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.705513 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.705594 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.705774 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.706236 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.706236 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.706236 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 1400
19:21:30.706236 IP 192.168.1.100.59868 > 237.0.0.3.40004: UDP, length 732



I hope this is a configuration problem that I can fix in my ffplay/ffmpeg command line, and not a bug in ffmpeg. Thanks for any tips.
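One thing that may be worth ruling out, going by the hint ffmpeg itself prints in the multicast run, is whether probing simply gives up before any RTP data arrives on the multicast socket; the analysis window can be widened like this (the values are arbitrary):

ffplay -v debug -rtsp_transport udp_multicast -analyzeduration 10M -probesize 50M rtsp://192.168.1.100/videoinput_1/mjpeg_3/media.stm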