Simplest example: "a\u0000b" is a perfectly valid, in-bounds JSON string that real JSON data sets may contain. Doesn't refusing to serialize that string fall short of "Anything JSON can do, it can do"?
The spec on GitHub says embedded NUL is banned on security grounds: someone might call strlen() on the parsed string in C and accidentally truncate it to a shorter string.
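To make that hazard concrete, here's a minimal C sketch (the byte values just stand in for whatever a parser would actually hand back):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* The three bytes a JSON parser would produce for the text "a\u0000b". */
        const char decoded[] = {'a', '\0', 'b'};
        size_t real_len = sizeof decoded;   /* 3: the actual content length      */
        size_t c_len    = strlen(decoded);  /* 1: stops at the embedded NUL byte */

        printf("parser says %zu bytes, strlen() says %zu\n", real_len, c_len);
        return 0;
    }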
Which I think has some merit, but it's valid string content in JSON (and in UTF-8), so this deliberately breaks 1:1 parity with JSON in the name of a security hypothetical.
Users can of course enable NUL in the rare cases where they need it, but I want safe defaults.
Actually, I'll make that section clearer.
Just focusing narrowly on the \0 part to explain why I say so: the spec as proposed requires implementations to either hard-ban embedded \0 or disallow it by default with an opt-in. So if someone shows up with a dataset that contains it, they only get support by configuring both the serializer and the parser to allow it. But if you're willing to exert that level of special-case control, I think all of the preexisting binary-JSON implementations meet the top-line definition you're setting as well. For a binary-JSON implementation with additional types, someone in full end-to-end control could simply choose not to use those types; the mere existence of extra types in the binary format is no more of a "problem" for 1:1 than this choice is.
IMO the deliverable a 1:1 mapping would give us is "there is no BONJSON data that won't losslessly round-trip to JSON, and vice versa". The benefit is that it holds over all future data you haven't seen yet; the downside of using something that isn't bijective is that the system runs fine for a long time and then suddenly hits data-dependent failures because you can't 1:1 map legal data.
And especially with this guarantee, what will inevitably happen is that some downstream handling will take it as a given that it can strlen(), since it "knew" the BONJSON spec banned embedded NUL. Then when such data shows up as in-bounds, you can't trivially flip the switch either; you're stuck with legal JSON that you can't ingest without an expensive audit, because the reduction from 1:1 has become entrenched as an invariant in the handling code.
Note that my vantage point might be a bit skewed here: I work on Protobuf, and this kind of ecosystem-interoperability concern is top of mind for me in ways it doesn't necessarily need to be for small projects. I also recognize that "what even is legal JSON" isn't itself completely clear, so take it all with a grain of salt (and again, I do think it looks like a very nice encoding in general).
Unrelated JSON experience:
I worked on a serializer that saves/loads JSON files as well as binary files (using a common interface).
For my own use case I found JSON restrictive for no benefit (because I don't use it in a JavaScript ecosystem).
So I changed the JSON format into something much more lax (optional commas, optional colons, optional quotes, multi-line strings, comments).
I wish we would stop pretending JSON is a good human-readable format outside of where it makes sense, and that we had a standard alternative for those non-JSON-centric cases.
I know a lot of formats already exist, but none have really taken off so far.
It sucks, but we're stuck with JSON. So the idea here is to make it suck a little less by stopping all this insane text processing for data that never ever meets a human directly.
The progression I envisage is:
1. Dev reaches for JSON because it's easy and ubiquitous.
2. Dev switches to BONJSON because it's more efficient and requires no changes to their code other than changing the codec library.
3. Dev switches to a sane format once their app's complexity reaches a level where a substantial code change is warranted.
As for FracturedJson, it looks great. The basic problem statement of "either minified and unreadable or prettified and verbose" isn't one I had put my finger on before, but now that it's been said I can't unsee it.
If you need custom data types, you can use tagged elements, but that requires registering functions to convert the data type to/from representable values (often strings); there's a sketch of the idea below.
It natively supports quite a bit more than JSON does, without writing custom data readers/writers.
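A generic illustration of what that registration amounts to (this is not the actual Concise Encoding API, just a hypothetical C sketch of to/from-string converters keyed by a tag):

    #include <stdio.h>

    /* Hypothetical custom type; the codec itself only moves representable
     * values around (here, strings), so the type supplies converters. */
    typedef struct { double x, y; } point;

    typedef struct {
        const char *tag;   /* e.g. "point" */
        int (*to_string)(const void *value, char *out, size_t cap);
        int (*from_string)(const char *repr, void *value_out);
    } tag_converter;

    static int point_to_string(const void *value, char *out, size_t cap)
    {
        const point *p = value;
        return (size_t)snprintf(out, cap, "%g,%g", p->x, p->y) < cap ? 0 : -1;
    }

    static int point_from_string(const char *repr, void *value_out)
    {
        point *p = value_out;
        return sscanf(repr, "%lf,%lf", &p->x, &p->y) == 2 ? 0 : -1;
    }

    /* Registered once; the codec would look it up by tag when it meets a tagged element. */
    static const tag_converter point_converter = { "point", point_to_string, point_from_string };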
I've found more comprehensive documentation here. [1]
At first glance, I would say it's a bit more complex than it should be for a "human readable" format.
Thanks for sharing your work!
I'm actually having second thoughts about Concise Encoding. It's gotten very big with all its features, which makes it less likely to be adopted (people don't like new things).
I've been toying around with a less ambitious format called ORB: https://github.com/kstenerud/orb
It's essentially an extension of BONJSON (so it can read BONJSON documents natively) that adds extra types and features.
I'm still trying to decide what types will actually be of use in the real world... CE's graph type is cool, but if nobody uses it...
Your extensions of JSON with comments, hexadecimal notation, optional commas, etc. are useful, though (my own program to convert JSON to DER does treat commas as spaces, although that is an implementation detail).
I am writing this because I work on a related topic: https://replicated.wiki/blog/args.html
Nevertheless, I believe your claims are mostly accurate, except for a few issues with which things are allowed or not allowed, due to JavaScript and other things (although in some of these cases the BONJSON specification allows options to control this). Sometimes rejecting certain things is helpful, but not always; for example, sometimes you do want to allow mismatched surrogates, and sometimes you might want to allow null characters. (The defaults are probably reasonable, but they are often the result of a bad design anyway, as I mentioned above.) Also, the top of the specification says it is safe against many attacks, but these are a feature of the implementation, which would also be the case if you implement JSON or other formats (although the BONJSON specification does require implementations to check for these things in order to be safe).
(The issue of overlong UTF-8 encodings in IIS web servers is another security problem, one of using a different format for validation than for usage. In that case there were actually two usages: one was the handling of relative URLs (in ASCII form), and the other was the handling of file names on the server (which might use UTF-16; on top of that there is the internal splitting of file paths into pieces and the internal handling of relative file paths). There are reasons to avoid and to check for overlong UTF-8 encodings, although that is a different, more general issue than the character encoding itself.)
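For reference, a minimal C sketch of catching the overlong pattern (only the lead-byte check; full UTF-8 validation of continuation bytes, surrogates, and range is left out). The classic IIS bypass was 0xC0 0xAF, an overlong encoding of '/':

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Returns false if an overlong lead-byte pattern (or stray byte) is found. */
    static bool utf8_has_no_overlong(const uint8_t *s, size_t n)
    {
        for (size_t i = 0; i < n; ) {
            uint8_t b = s[i];
            if (b < 0x80) { i += 1; }                                      /* ASCII */
            else if ((b & 0xE0) == 0xC0) {
                if (b < 0xC2) return false;        /* 0xC0/0xC1: always overlong */
                i += 2;
            } else if ((b & 0xF0) == 0xE0) {
                if (i + 1 < n && b == 0xE0 && s[i + 1] < 0xA0) return false;
                i += 3;
            } else if ((b & 0xF8) == 0xF0) {
                if (i + 1 < n && b == 0xF0 && s[i + 1] < 0x90) return false;
                i += 4;
            } else {
                return false;      /* stray continuation byte or invalid lead byte */
            }
        }
        return true;
    }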
Another issue is canonical forms; the canonical form of JSON can be messy, especially for numbers (I don't know what the canonical form for numbers in JSON is, but I read that apparently it is complicated).
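A quick C illustration of why numbers in particular are messy: several spellings parse to the same value, and naive printing doesn't give an obvious canonical one either (a shortest-round-trip form needs a dedicated algorithm, not just a printf precision):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* "0.1", "1e-1" and "0.10" all parse to the same double. */
        double d = strtod("0.1", NULL);
        printf("%.15g\n", d);  /* 0.1, but 15 digits can't round-trip every double */
        printf("%.17g\n", d);  /* 0.10000000000000001: round-trips, but noisy      */
        return 0;
    }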
I think DER is better. BONJSON is more compact, but that also makes the framing more complicated to handle than DER (which uses consistent framing for all types). I also wrote a program to convert JSON to DER (I made up some nonstandard types as well, although the conversion from JSON to DER only uses one of them, a key/value list; the other types it needs are standard ASN.1 types). Furthermore, DER is already a canonical form (and I made up SDER and SDSER for when you do not want canonical form but also do not want the messiness of BER; SDSER has chunking and does not require the length to be known ahead of time, so it is more like BONJSON in those ways). Because of the consistent framing, you can easily ignore any types that you do not use; even though there are many types, you do not necessarily need all of them.
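To show what the consistent framing buys you, here is a minimal sketch of skipping a DER element without knowing its type (assuming single-byte tags and well-formed input; real ASN.1 parsing needs more care):

    #include <stddef.h>
    #include <stdint.h>

    /* Skip one DER TLV element starting at p; returns a pointer just past it,
     * or NULL on short/malformed input. Works for any type, known or unknown,
     * because every element carries an explicit length. */
    static const uint8_t *der_skip(const uint8_t *p, size_t avail)
    {
        if (avail < 2) return NULL;
        size_t pos = 1;                      /* p[0] is the tag; we ignore it */
        size_t len;
        if (p[pos] < 0x80) {                 /* short form: length fits in 7 bits */
            len = p[pos++];
        } else {                             /* long form: next n bytes hold the length */
            size_t n = p[pos++] & 0x7F;
            if (n == 0 || n > sizeof(size_t) || avail - pos < n) return NULL;
            len = 0;
            for (size_t i = 0; i < n; i++) len = (len << 8) | p[pos++];
        }
        if (avail - pos < len) return NULL;
        return p + pos + len;                /* contents skipped, never parsed */
    }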
Safe, sane defaults, and some configurability for people who (hopefully) know what they're doing. Falling into success rather than falling into failure.
BONJSON is a small spec, and easy to implement ( https://github.com/kstenerud/ksbonjson/blob/main/library/src... and https://github.com/kstenerud/ksbonjson/blob/main/library/src... ).
It's not the end-all-be-all of data formats; it's just here to make the JSON pipeline suck less.
JSON implementations can be made just as safe, but the issue is that unsafe JSON implementations are still considered valid implementations (and so almost all JSON implementations are unsafe because nobody is an authority on which design is correct). Mandating safety and consistency within the spec is a MAJOR help towards raising the safety of all implementations and avoiding these security vulnerabilities in your infrastructure.
Yes, I agree (if you want to use it at all; as I have mentioned, you should first consider whether you should avoid JSON or anything related to it entirely), although some of the things that you specify as not having options will make it more restrictive than JSON, even if those restrictions are reasonable defaults. One of these is mismatched surrogates (although matched surrogates should always be disallowed, an option to allow mismatched surrogates should be permitted, though not required). Also, I think checking for duplicate names probably should not use normalized Unicode. Furthermore, the part that says names MUST NOT be null seems redundant to me, since it already says that names MUST be strings (for compatibility with JSON), and null is not a string.
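To illustrate the duplicate-name point: these two spellings of "é" are different code point sequences (and different object keys to a plain byte or code-point comparison, which is how JavaScript objects behave), but NFC normalization collapses them into one, so a normalized duplicate check would reject documents that ordinary JSON tooling accepts:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char nfc[] = "\xC3\xA9";   /* U+00E9, precomposed e-acute            */
        const char nfd[] = "e\xCC\x81";  /* U+0065 + U+0301, "e" + combining acute */

        /* Distinct as byte/code-point sequences, identical after NFC normalization. */
        printf("byte-equal: %s\n", strcmp(nfc, nfd) == 0 ? "yes" : "no");  /* prints "no" */
        return 0;
    }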
> Mandating safety and consistency within the spec is a MAJOR help towards raising the safety of all implementations and avoiding these security vulnerabilities in your infrastructure.
OK, this is a valid point, although there is still the possibility of incorrect implementations (adding test cases would help with that problem, though).
I've also been working in the other direction, making JSON more machine-readable:
https://github.com/kstenerud/bonjson/
It has EXACTLY the same capabilities and limitations as JSON, so it works as a drop-in replacement that's 35x faster for a machine to read and write.
No extra types. No extra features. Anything JSON can do, it can do. Anything JSON can't do, it can't do.