This SSH window size limit is per SSH "stream", so it could be overcome by many parallel streams, but most programs (scp, rsync, piping data through the ssh command) do not make use of that, so they are much slower than plain TCP as measured e.g. by iperf3.
I think it's silly that this exists. They should just let TCP handle this.
No, unfortunately it's necessary so that the SSH protocol can multiplex streams independently over a single established connection.
If one of the multiplexed streams stalls because its receiver is blocked or slow, and the receive buffer (for that stream) fills up, then without window-based flow control, that causes head-of-line blocking of all the other streams.
That's fine if you don't mind streams blocking each other, but it's a problem if they should flow independently. It's pretty much a requirement for opportunistic connection sharing by independent processes, as SSH does.
In some situations, this type of multiplexed stream blocking can even result in a deadlock, depending on what's sent over the streams.
Solutions to the problem are to either use window-based flow control, separate from TCP's, or to require all stream receive buffers to expand without limit, which is normally unacceptable.
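As a rough sketch of what that per-stream window accounting looks like (illustrative names and structure, not OpenSSH's actual code): the sender only pushes what the receiver has granted for that channel, and the grant grows again only as the receiving application drains its buffer (SSH_MSG_CHANNEL_WINDOW_ADJUST in RFC 4254).

```python
class Channel:
    """Toy model of one multiplexed channel with its own flow-control
    window, in the spirit of SSH's per-channel windows (illustrative,
    not OpenSSH's actual data structures)."""

    def __init__(self, channel_id: int, initial_window: int = 2 * 1024 * 1024):
        self.channel_id = channel_id
        self.peer_window = initial_window   # bytes the peer will still accept

    def can_send(self, n: int) -> bool:
        # A stalled receiver simply stops granting window for *this*
        # channel; other channels on the same TCP connection keep flowing.
        return n <= self.peer_window

    def on_data_sent(self, n: int) -> None:
        self.peer_window -= n

    def on_window_adjust(self, n: int) -> None:
        # The receiver drained its buffer and granted n more bytes
        # (SSH_MSG_CHANNEL_WINDOW_ADJUST in the SSH protocol).
        self.peer_window += n
```

A channel whose receiver stops granting window simply stops sending; the shared TCP connection stays available for the other channels.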
HTTP/2 does something like this.
I once designed a protocol without this, thinking multiplexing was enough by itself, and found out the hard way when processes got stuck for no apparent reason.
* Give users a config option so I can adjust it to my use case, like I can for TCP. Don't just hardcode some 2 MB value (it was already raised to that in the past, which shows how futile hardcoding is: it clearly needs adjusting to people's networks and to ever-increasing speeds; see the quick bandwidth-delay arithmetic after this list). It is extremely silly that within my own networks, controlling both endpoints, I cannot achieve TCP speeds over SSH, but I can with nc and symmetric encryption piped in. It is silly that any TCP/HTTP transfer is reliably faster than SSH.
* Implement data dropping and retransmissions to handle blocking -- like TCP does. It seems obviously asking for trouble to want to implement multiplexing, but then only implement half of the features needed to make it work well.
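A quick back-of-the-envelope (numbers are illustrative) shows why any fixed window is at the mercy of the link: a single channel can never exceed window / RTT.

```python
# Throughput ceiling imposed by a fixed per-channel window: window / RTT.
WINDOW = 2 * 1024 * 1024                # OpenSSH's ~2 MB channel window

for rtt_ms in (1, 10, 20, 50, 100):     # illustrative round-trip times
    ceiling_mbit = WINDOW * 8 / (rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms:>3} ms -> at most ~{ceiling_mbit:,.0f} Mbit/s per channel")
```

At 20 ms of RTT the 2 MB window already caps a single channel well below a gigabit, no matter how fast the underlying link is.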
When one designs a network protocol, shouldn't one of the first sanity checks be "if my connection becomes 1000x faster, does it scale"?
Or, better but more difficult, it should track the dynamic TCP window size, from the OS when possible, combined with end-to-end measurements, and ensure the SSH mux channel windows grow to accommodate the TCP window, without growing so much that they starve other channels.
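For what it's worth, Linux will report its current view of a connection via TCP_INFO; here's a minimal sketch (the field offsets are an assumption based on the mainline struct tcp_info layout, so treat it as illustrative rather than portable):

```python
import socket
import struct

def tcp_send_window_bytes(sock: socket.socket) -> int:
    """Approximate the kernel's current TCP send window for this socket
    by reading tcpi_snd_cwnd * tcpi_snd_mss from TCP_INFO (Linux only;
    field offsets assume the mainline struct tcp_info layout)."""
    info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    # struct tcp_info starts with 8 one-byte fields, then 32-bit fields:
    # tcpi_snd_mss is the 3rd u32, tcpi_snd_cwnd (in segments) the 19th.
    snd_mss = struct.unpack_from("I", info, 8 + 2 * 4)[0]
    snd_cwnd = struct.unpack_from("I", info, 8 + 18 * 4)[0]
    return snd_cwnd * snd_mss
```

An SSH implementation could, in principle, grow a channel window toward a figure like this (plus headroom for the measured RTT) rather than toward a hardcoded constant.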
To your second point, you can't do data dropping and retransmission for mux'd channels over a single TCP connection. After data is sent from the application to the kernel socket, it can't be removed from the TCP transmission queue, will be retransmitted by the kernel socket as often as needed, and will reach the destination eventually, provided the TCP connection as a whole survives.
You can do mux'd data dropping and retransmission over a single UDP connection, but that's basically what QUIC is.
If you use the former without the latter, you'll inevitably have head-of-line blocking issues if your connection is bandwidth or receiver limited.
Of course not every SSH user uses protocol multiplexing, but many do, as it avoids repeated and relatively expensive (in terms of CPU, performance, and logging volume) handshakes.
Also, HTTP/3 must obviously be using some kind of acknowledgements, since for fairness reasons alone it must implement some congestion control mechanism, and I can't think of one that gets by entirely without positive acknowledgements.
It could well be more efficient than TCP's default "ack every other segment", though. (This helps in the type of connection mentioned above; as far as I know, some DOCSIS modems do this via a mechanism called "ack compression", since TCP is generally tolerant of losing some ACKs.)
In a sense, the win of QUIC/HTTP/3 in this sense isn’t that it’s not TCP (it actually provides all the components of TCP per stream!); it’s rather that the application layer can “provide its own TCP”, which might well be more modern than the operating system’s.
Of course you need to wait for ACKs at some point though, otherwise they would be useless. That's how we detect, and potentially recover from, broken links. They are a feature. And HTTP3 has that feature.
Is it better implemented than the various TCP algorithms we use underneath regular SSH? Perhaps. That remains to be seen. The use case of SSH (long-lived connections with shorter-lived channels) is vastly different from the short-lived bursts of many connections that QUIC was intended for. My best guess is that it could go both ways, depending on the actual implementation. The devil is in the details, and there are many details here.
Should you find yourself limited by the default buffering of SSH (10+ Gbit intercontinental links), that's called a "long fat network" in networking lingo, and is not what TCP was originally built for. Look at pages like this Linux tuning guide for high-latency networks: https://fasterdata.es.net/host-tuning/linux/
There is also the HPN-SSH project, which increases the SSH buffers even further beyond the standard sizes. It is seldom needed anymore since both Linux and OpenSSH have improved, but it can still be useful.
SSH multiplexes multiple channels on the same TCP connection which results in head of line blocking issues.
> Should you find yourself limited by the default buffering of SSH (10+Gbit intercontinental links), that's called "long fat links" in network lingo, and is not what TCP was built for.
Not really, no. OpenSSH has a 2 MB window size (in the 2000s it was 64K); even at just ~gigabit speeds it only takes around 10-20 ms of latency to start being limited by the BDP.
But it's still irrelevant here; it's specifically called out in the README:
> The keystroke latency in a running session is unchanged.
The YouTube and social media eras made everyone so damn dramatic. :/
Mosh solves a problem. tmux provides a "solution" for some, working around a design decision that can impact some user workflows.
I guess what I'm saying here is: if you NEED mosh, then running tmux is not even a hard ask.
1. High latency, maybe even packet-dropping connections;
2. You’re roaming and don’t want to get disconnected all the time.
For 2, sure, tmux is mostly okay; it’s not as versatile as the native buffer if you use a good terminal emulator, but whatever. For 1, using tmux in mosh gives you an awful, high-latency scrollback buffer compared to the local one you get with regular ssh. And you were specifically talking about 1.
For read-heavy, reconnectable workloads over high latency connections I definitely choose ssh over mosh or mosh+tmux and live with the keystroke latency. So saying it’s a huge downside is not an exaggeration at all.
From my stance, and where I've used mosh has been in performing quick actions on routers and servers that may have bad connections to them, or may be under DDoS, etc. "Read" is extremely limited.
So from that perspective and use case, the "huge downside" has never been a problem.
Filtering inbound UDP on one side is usually enough to break mosh, in my experience. Maybe they use better NAT traversal strategies since I last checked, but there's usually no workaround if at least one network admin involved actively blocks it.