I think "without further implementation details" is the key point here. Client developers usually have these. Sure, Nostr is still small, but there's several clever ways of dealing with scalability issues. Not least of which is the outbox model, linked in my first post.
Your criticisms of the article are valid tho. And I don't think it is unique in its failing. Perhaps Nostr's fatal flaw is in the way it is being sold by its fans, myself included.
But that's OK. It will take off as Bitchat, or Primal, or whatever the next iteration is that figures out how to sell Nostr's benefits without confusing people with its implementation.
From the information given in the article, it states categorically that relays never connect to other relays (which makes you wonder why that name was chosen, if they're not actually relaying anything).
It then continues saying that clients need to connect to multiple relays (but not more than a dozen) to be able to receive all content from anywhere. The only inference I can make from that is that a client is responsible for receiving a message from one "relay" and transmitting it to another.
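To make that inference concrete, here is roughly what client-side relaying would look like on the wire. NIP-01 (which the article doesn't get into) really does have clients publish with `["EVENT", <event>]` frames over WebSockets; the relay URLs, and the idea that a client loops over its relay set like this, are just my assumptions for the sketch:

```python
# Sketch: a client re-publishing an event it received from one relay
# to all of its other relays. Relay URLs are placeholders.
import json

import websockets  # third-party: pip install websockets

RELAYS = ["wss://relay-a.example", "wss://relay-b.example", "wss://relay-c.example"]

async def rebroadcast(event: dict, source_relay: str) -> None:
    """Forward an event we received from `source_relay` everywhere else."""
    frame = json.dumps(["EVENT", event])  # NIP-01 client-to-relay frame
    for url in RELAYS:
        if url == source_relay:
            continue
        async with websockets.connect(url) as ws:
            await ws.send(frame)  # relay answers with an ["OK", ...] frame
```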
The obvious question then is: how does the client know whether the other relays already have the message? There are two obvious options (both sketched in the code after this list):
* The client informs the relay about every new message it receives from every other relay. That means each relay will be told about each new message by the vast majority of the clients that connect to it, which is obviously going to be expensive. It would also put the burden on clients to remember which relays they've already informed, and if they add a new relay, they'd presumably have to replay every message they know of, just in case the relay is missing it.
* The other option is that the client queries the relay for a list of every single message the relay holds, and forwards a message only if the relay says it doesn't have it. This could potentially be even more expensive, and even if the client and relay maintain some kind of shared state, a client trying a new relay would have to re-download that relay's entire list of messages. Even if we're only talking about message IDs, that's a huge amount of data to download.
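Here's what those two options look like when you write them down; everything below is my own guess at the bookkeeping each one forces, not anything the article describes:

```python
# Option 1: push every message to every relay, remembering what each
# relay has already been told. `send` is a stand-in for a real publish call.
informed: dict[str, set[str]] = {}  # relay URL -> event IDs already pushed

def push_event(event_id: str, relays: list[str], send) -> None:
    for url in relays:
        already_sent = informed.setdefault(url, set())
        if event_id not in already_sent:  # every client repeats this work
            send(url, event_id)
            already_sent.add(event_id)    # ...and must persist it forever

def add_relay(url: str, all_known_ids: set[str], send) -> None:
    # A new relay might be missing anything, so replay the entire history.
    for event_id in all_known_ids:
        send(url, event_id)
    informed[url] = set(all_known_ids)

# Option 2: download the relay's full inventory and send only the diff.
# `fetch_all_ids` is a stand-in; note it transfers every ID, every time.
def sync_by_inventory(url: str, local_ids: set[str], fetch_all_ids, send) -> None:
    missing = local_ids - fetch_all_ids(url)
    for event_id in missing:
        send(url, event_id)
```

Either way, the client carries state that grows with the whole network, which is the core of the objection.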
In any case, if relays will just accept any old message and rely on the clients to check it was signed correctly, then it stands to reason that any relay can be trivially DDoSed by bombarding it with junk. The impression the article gives is that relays would never verify the authenticity of a message themselves, because that would break their distributed model.
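For what it's worth, checking a message is cheap and needs no coordination: under NIP-01 the event ID is the SHA-256 of a canonical serialization, and the signature is BIP-340 Schnorr over that ID. So a relay that chose to verify could do it locally, roughly like this (`verify_schnorr` is a stand-in for whatever secp256k1 binding is available, not a named API):

```python
import hashlib
import json

def event_id(event: dict) -> str:
    """NIP-01 event ID: sha256 of the canonical JSON serialization."""
    payload = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def is_valid(event: dict, verify_schnorr) -> bool:
    """A relay could run this check itself before storing anything."""
    if event["id"] != event_id(event):
        return False
    # BIP-340 Schnorr verification over the 32-byte ID; `verify_schnorr`
    # stands in for a real secp256k1 binding (an assumption, not a named API).
    return verify_schnorr(
        bytes.fromhex(event["sig"]),
        bytes.fromhex(event["id"]),
        bytes.fromhex(event["pubkey"]),
    )
```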
The article doesn't provide any detail about how its new "relay" solution works. It just stops abruptly after asserting that relays fix everything, with no explanation. This is exactly the reason why I said the article feels like it's cut short.
So, without any hints at its possible implementation, one can only speculate, and I personally can't see any way in which this solution would be better than a peer-based one where "relays" actually relay messages between themselves. It's possible that whatever the author has created is truly innovative and groundbreaking, but if so, they haven't chosen to explain why in the article.
My suggestion would be to skip it and learn about Nostr from other sources. I've been on Nostr since almost the beginning and it's been very exciting to watch. For reference, my Android client (Amethyst) is currently directly connected to 390 relays (using the new "outbox model") and it works well: no slowdown, no battery drain.
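For anyone wondering what the "outbox model" actually changes: you read each author's notes from the relays they declare they write to (their NIP-65 / kind-10002 relay list), instead of expecting any one relay to have everything. A minimal sketch of the relay selection, with the data shapes assumed for illustration:

```python
# Sketch of outbox-model relay selection based on NIP-65: each author
# publishes a kind-10002 event whose "r" tags list the relays they use.

def write_relays(relay_list_event: dict) -> list[str]:
    """Extract write relays from a kind-10002 relay list event."""
    urls = []
    for tag in relay_list_event["tags"]:
        # ["r", <url>] means read+write; ["r", <url>, "write"] is write-only.
        if tag[0] == "r" and (len(tag) == 2 or tag[2] == "write"):
            urls.append(tag[1])
    return urls

def plan_subscriptions(follows: dict[str, dict]) -> dict[str, set[str]]:
    """Map each relay URL to the followed pubkeys we should request there."""
    plan: dict[str, set[str]] = {}
    for pubkey, relay_list_event in follows.items():
        for url in write_relays(relay_list_event):
            plan.setdefault(url, set()).add(pubkey)
    return plan
```

Each relay then gets a single REQ filtered to its authors, which is how a client can fan out across hundreds of small relays instead of leaning on a few big ones.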
Because that is the obvious thing that would happen without further implementation details: a few large relays taking the brunt of the vast majority of the network's traffic. It isn't an inherently scalable architecture.
Of course you can do other stuff in addition and thereby achieve scalability, at least arguably. But then a proper explanation needs to carefully walk through those additional, non-obvious details.