I personally hate the usual interpretation as float and see it as a common but entirely implementation-induced failure. It's far better interpreted as an arbitrary-precision numeric type, not float or int. The spec even says as much, and merely warns that implementations mostly suck, so watch out. IMO precision myopia is why we end up with e.g. Python's refusal-by-default to (de)serialize from/to Decimal.
edit: I didn't mention integer keys, because object members canonically start with a letter.
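To make the Python point concrete, here is a minimal sketch using only the stdlib json and decimal modules (parse_float is a documented hook of json.loads; nothing outside the standard library is assumed):

    import json
    from decimal import Decimal

    # Serializing: the stdlib encoder refuses Decimal out of the box.
    try:
        json.dumps({"price": Decimal("19.99")})
    except TypeError as e:
        print(e)  # Object of type Decimal is not JSON serializable

    # Parsing: the default hook rounds the literal to a binary double...
    print(json.loads('{"price": 19.999999999999999999}')["price"])
    # 20.0

    # ...but the same literal survives intact if you ask for Decimal.
    print(json.loads('{"price": 19.999999999999999999}', parse_float=Decimal)["price"])
    # 19.999999999999999999

So the parser can keep full precision on request; it's the encoder that refuses Decimal unless you bolt on a custom encoder yourself.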
This is not true: JSON numbers are simply signed decimal numbers. They might be parsed into floating point (as is the case in JavaScript) or into any other numeric type, which makes them unreliable without additional constraints beyond what JSON itself specifies.
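To illustrate what "depends on the parser" means in practice, here's a small sketch; Python's json.loads exposes parse_int/parse_float hooks (both documented), which makes it easy to mimic a double-only parser like JavaScript's:

    import json

    # Python maps JSON integer literals to int, so this round-trips exactly:
    print(json.loads('9007199254740993'))
    # 9007199254740993

    # A parser that maps every number to an IEEE 754 double (as JavaScript does)
    # silently loses precision past 2**53; simulated here with the parse_int hook:
    print(json.loads('9007199254740993', parse_int=float))
    # 9007199254740992.0

Same bytes on the wire, two different values back, and nothing in JSON itself says which one is right.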
I never understood these two choices in the spec as they are totally against the “human-readable” goal…
- Numbers are floating point, but cannot be Infinity or NaN. There is no integer type, so long integers might not work properly. (There are other problems with numbers too, as mentioned in that article; see the sketch after this list.)
- Strings are Unicode. Non-Unicode data (including binary data) cannot be represented properly, and even Unicode can have problems (some of which are mentioned in that article, but there are others too).
- Keys are only strings, not numbers.
- Syntax conveniences are lacking, e.g. there are no comments, no optional trailing commas, etc.
- The format is difficult for reasons explained in that article, too.
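A few of these limits are easy to see from Python's stdlib json module (just one implementation, used here for illustration; other libraries hit the same walls):

    import json
    import base64

    # Infinity/NaN: the default output is not even valid JSON; strict mode refuses them.
    print(json.dumps(float("nan")))          # NaN  (not legal per the JSON grammar)
    try:
        json.dumps(float("inf"), allow_nan=False)
    except ValueError as e:
        print(e)                             # out-of-range float values are rejected

    # Keys: only strings are allowed, so an integer key is silently turned into a string.
    print(json.dumps({1: "one"}))            # {"1": "one"}

    # Binary data: bytes are rejected outright; the usual workaround is a base64 detour.
    try:
        json.dumps(b"\x00\xff")
    except TypeError:
        print(json.dumps(base64.b64encode(b"\x00\xff").decode("ascii")))  # "AP8="

None of this is a Python bug; it follows directly from what the JSON grammar allows.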
One possible alternative would be a format based on a subset of PostScript (instead of JavaScript), e.g. (part of an example from Wikipedia):
PostScript also has a binary format, comments (with a percent sign), hex string literals, etc. (And commas are not used, so the problem with trailing commas does not apply either.) (Nevertheless, I did write a JSON parser (and also a JSON writer) in PostScript.)
It is also possible to use binary formats, CSV, etc., depending on what exactly is needed by the program; for many reasons, one format cannot solve everything.