Just look at the clusterfuck that HTML5 has become. You need to have extremely deep pockets to enter that market.
Ouch. I feel like this is kind of unfair. XML, HTML1-4, and HTML5 all differ in how they treat Postel's law. XML rejects it at the spec level; if you send garbage to a parser, it bails immediately, which is nice. HTML5 embraces Postel's law at the spec level: if you send garbage to an HTML5 parser, there's an agreed-on way to deal with it gracefully. Also nice. The problem was rather with HTML1-4, which embraced Postel's law promiscuously, at the implementation level. There were specs, but mainstream implementations largely ignored them and all handled garbage input slightly differently. This is what created the aforementioned clusterfuck.
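To make that concrete, here's a rough Python sketch of the two attitudes (my choice of language and modules, not anything from the specs; html.parser is lenient in spirit but doesn't implement the full HTML5 error-recovery algorithm):

    import xml.etree.ElementTree as ET
    from html.parser import HTMLParser

    garbage = "<p>unclosed paragraph <b>bold"

    # XML: reject at the spec level -- the parser bails on malformed input.
    try:
        ET.fromstring(garbage)
    except ET.ParseError as e:
        print("XML parser rejected it:", e)

    # HTML: accept at the spec level -- the parser recovers and keeps going.
    class TagCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.tags = []

        def handle_starttag(self, tag, attrs):
            self.tags.append(tag)

    collector = TagCollector()
    collector.feed(garbage)
    print("HTML parser recovered tags:", collector.tags)  # ['p', 'b']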
I'm a bit worried about the authors taking this overboard and trying to redefine the URL standard with similar complexity.
Ideally, you'd both accept and correct. But that is the idea, just reworded.
Wrestling with Postel’s Law https://techblog.workiva.com/tech-blog/wrestling-postel’s-la...
More, I think it splits on how you read it. If you view it as an absolute maxim to excuse poor implementations, it is panned. If you view it as a good-faith behavior of not choking on the first mistake, you probably like it.
This is akin to grammar policing. In real-life encounters, there is no real place for it. However, you should still try to be grammatically correct.
That's because most humans have feelings. But most machines don't. So that's not comparable.
If some of those become dominant, producers might start depending on that behavior, and it becomes a de facto standard. This is literally what happened to HTML, but it holds true for many other Internet protocols.
If you're looking for some external reading, I found at least this:
* https://tools.ietf.org/html/draft-thomson-postel-was-wrong
I think you'll find few protocol designers arguing _for_ the robustness principle these days.
I mean, don't go out of your way to underspecify input. But practically nobody is going back to the heavy schemas of XML over simple JSON. Even if they probably should.
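For what it's worth, you can get schema-level strictness over JSON too. A hypothetical sketch assuming the third-party jsonschema package (the schema and payloads are invented examples):

    import jsonschema

    schema = {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer", "minimum": 0},
        },
        "required": ["name"],
        "additionalProperties": False,
    }

    # Conforming input passes silently.
    jsonschema.validate({"name": "Ada", "age": 36}, schema)

    # Nonconforming input is rejected instead of silently accepted.
    try:
        jsonschema.validate({"name": "Ada", "age": "thirty-six"}, schema)
    except jsonschema.ValidationError as e:
        print("rejected:", e.message)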
I feel this is an antifragile position. Try not to encourage poor input. But more importantly, be resilient to it, not dismissive of it.
I've just gotten weary of so many replacement protocols that get dreamed up and go nowhere. Often because they didn't actually learn all of the lessons from predecessors.
"Accept and correct" in the absence of ECC is just delusion if not hubris. The sender could be in a corrupted state and could have sent data it wasn't supposed to send. Or the data could have been corrupted during transfer, accidentally or deliberately. You can't know unless you have a second communication channel (usually an email to the author of the offending piece of software), and what you actually do is literally "guess" the data. How can it go wrong?
For system-to-system communication, things are obviously a bit different. Don't just guess at what was intended. But, ideally, if you take a date in, be like the GNU date utility and try to accept many formats. But be clear about what you will return.
And, typically, have a defined behavior. That could be to crash. Doesn't have to be, though. Context of the system will be the guide.
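Something like this, as a hypothetical Python sketch (the format list and function name are mine, and it's not how GNU date actually parses, just the same spirit):

    from datetime import datetime

    ACCEPTED_FORMATS = [
        "%Y-%m-%d",           # 2024-01-31
        "%d/%m/%Y",           # 31/01/2024
        "%B %d, %Y",          # January 31, 2024
        "%Y-%m-%dT%H:%M:%S",  # 2024-01-31T09:30:00
    ]

    def parse_date(text):
        # Be liberal in the formats accepted, but always return ISO 8601.
        # The defined behavior on garbage here is to raise, not to guess.
        for fmt in ACCEPTED_FORMATS:
            try:
                return datetime.strptime(text.strip(), fmt).date().isoformat()
            except ValueError:
                continue
        raise ValueError("unrecognized date: %r" % text)

    print(parse_date("January 31, 2024"))  # -> 2024-01-31
    print(parse_date("31/01/2024"))        # -> 2024-01-31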
And, of course, most people don't actually understand why they succeeded at something. It is easy to understand failure from a specific cause. It is much more difficult to understand success from a combination of many causes.
What do you mean by "enter that market"?
https://tools.ietf.org/html/draft-thomson-postel-was-wrong-0...
This is commonly known as Postel's Law, and comes from one of the TCP RFCs [1].
[1] https://en.wikipedia.org/wiki/Robustness_principle