https://www.bhphotovideo.com/c/product/1926851-REG/mikrotik_...
I did some digging to find the switching chip: Marvell 98DX7335
Seems confirmed here: https://cdn.mikrotik.com/web-assets/product_files/CRS812-8DS...
And here: https://cdn.mikrotik.com/web-assets/product_files/CRS812-8DS...
> Switch chip model 98DX7335
From Marvell's specs: https://www.marvell.com/content/dam/marvell/en/public-collat...
> Description: 32x50G / 16x100G-R2 / 8x100G-R4 / 8x200G-R4 / 4x400G-R8
> Bandwidth: 1600Gbps
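For what it's worth, those port modes are all consistent with a 32-lane switch core if you read the -Rn suffix as the number of lanes per port. A quick sanity check below; the per-lane rates are my assumption (50G lanes, except 100G-R4 at 4x25G), not something from the datasheet:

```python
# Rough sanity check of the quoted 98DX7335 port configs. Reading "-Rn" as
# "n lanes per port"; the per-lane rates are my guesses, not datasheet values.
configs = {
    # name: (ports, lanes per port, Gbps per lane)
    "32x50G":     (32, 1, 50),
    "16x100G-R2": (16, 2, 50),
    "8x100G-R4":  (8,  4, 25),
    "8x200G-R4":  (8,  4, 50),
    "4x400G-R8":  (4,  8, 50),
}

for name, (ports, lanes, rate) in configs.items():
    total_lanes = ports * lanes
    total_gbps = total_lanes * rate
    print(f"{name:11s} -> {total_lanes:2d} lanes, {total_gbps:4d} Gbps aggregate")
# Every mode fits in 32 lanes, and the 50G-lane modes land exactly on the
# quoted 1600 Gbps.
```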
Again, those are some wild numbers if I have the correct model. Normally, Mikrotik includes switching bandwidth in their own specs, but not in this case.

Besides stuff like this switch, they've also produced pretty cool little micro-switches you can power over PoE and run as WLAN hotspots, e.g. to put some distance between your mobile device and a network you don't really trust, or, more or less maliciously, to bridge a cabled network through a wall when your access to the building is limited.
e.g. QSFP28 (100GbE) splits into 4x SFP28s (25GbE each), because QSFP28 is just 4 lanes of SFP28.
Same goes for QSFP112 (400GbE): it splits into 4x SFP112s (100GbE each).
It’s OSFP that can be split in half, i.e. into QSFPs.
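To spell out the lane math behind those breakouts (lane counts are the standard ones for each form factor; per-lane rates are nominal):

```python
# Lane arithmetic behind the breakout options above. Lane counts are the
# standard ones for each pluggable form factor; rates are nominal.
form_factors = {
    # name: (lanes, Gbps per lane, typical breakout)
    "SFP28":   (1, 25,  None),
    "QSFP28":  (4, 25,  "4x SFP28"),    # 100GbE
    "SFP112":  (1, 100, None),
    "QSFP112": (4, 100, "4x SFP112"),   # 400GbE
    "OSFP":    (8, 100, "2x QSFP112"),  # 800GbE, the one that splits in half
}

for name, (lanes, rate, breakout) in form_factors.items():
    note = f", breaks out to {breakout}" if breakout else ""
    print(f"{name:7s}: {lanes} x {rate}G = {lanes * rate:4d}GbE{note}")
```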
There's also splitting at the module level. For example, I have a PCIe card that is actually a fully self-hosted 6-port 100GbE switch with its own onboard Atom management processor. The card only has 2 MPO fiber connectors, but each has 12 fibers, and each fiber can carry 25Gbps. You need a special fiber breakout cable, but you can mix anywhere between 6x 100GbE ports and 24x 25GbE ports (rough lane arithmetic below the links).
https://www.silicom-usa.com/pr/server-adapters/switch-on-nic...
https://www.fs.com/products/101806.html
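Taking those numbers at face value (counting each fiber as one 25Gbps lane), the mix-and-match arithmetic for that card looks roughly like this; the 4-lanes-per-100GbE-port assumption is mine:

```python
# Port-mix arithmetic for the card above, taking the comment's numbers at face
# value: 2 MPO connectors x 12 fibers, each fiber treated as one 25Gbps lane.
# Assumption (mine): a 100GbE port consumes 4 lanes, a 25GbE port consumes 1.
TOTAL_LANES = 2 * 12

for ports_100g in range(TOTAL_LANES // 4 + 1):   # 0..6 ports of 100GbE
    ports_25g = TOTAL_LANES - 4 * ports_100g     # leftover lanes as 25GbE
    total = ports_100g * 100 + ports_25g * 25
    print(f"{ports_100g} x 100GbE + {ports_25g:2d} x 25GbE = {total} Gbps")
# Spans the quoted range: 24x 25GbE at one end, 6x 100GbE at the other,
# 600 Gbps of front-panel capacity either way.
```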
But all of this is pretty much irrelevant to my original point.
Put another way, see the graphs in the OP where he points out that the old way of clustering performs worse the more machines you add? I’d expect that to happen with 200GbE also.
And with a switch, it would likely be even worse, since the hop to the switch adds additional latency that isn’t a factor in the TB5 setup.
This isn’t any different with QSFP unless you’re suggesting that one adds a 200GbE switch to the mix, which:
* Adds thousands of dollars of cost,
* Adds 150W or more of power draw, along with the loud fan noise that comes with it,
* And perhaps most importantly adds measurable latency to a networking stack that is already higher latency than the RDMA approach used by the TB5 setup in the OP (rough toy model below).
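To put a toy model behind that last point: every number below is a made-up placeholder (message size, latencies, bandwidths), not a measurement from the OP or anywhere else. The shape of the math is what matters: for the small, frequent sync messages that clustering generates, fixed per-message latency dominates, so a 200GbE link behind a slower stack plus a switch hop can still lose to a lower-bandwidth, lower-latency TB5 link.

```python
# Toy model only: all numbers are illustrative placeholders, not measurements.
def transfer_us(msg_bytes, latency_us, bandwidth_gbps):
    """One-way time for a single message: fixed latency plus serialization."""
    return latency_us + (msg_bytes * 8) / (bandwidth_gbps * 1000)  # Gbps -> bits/us

MSG = 64 * 1024  # a 64 KiB sync message (placeholder size)

scenarios = {
    # name: (one-way latency in us, link bandwidth in Gbps) - all guesses
    "TB5 point-to-point (RDMA-ish)": (3.0, 80),
    "200GbE direct, kernel stack":   (15.0, 200),
    "200GbE through a switch":       (16.0, 200),  # ~1 us extra for the hop
}

for name, (lat, bw) in scenarios.items():
    print(f"{name:31s}: {transfer_us(MSG, lat, bw):5.1f} us per message")
# With small messages, the extra stack and switch latency outweighs the
# bandwidth advantage; only at much larger message sizes does 200GbE pull ahead.
```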