
CRDTs are also crazy slow due to their architecture; even the best algorithms out there are costly by design, so adding homomorphic encryption is even more of a challenge. Though it really is impressive, and I'm curious if this can be usable at all.

Edit: so I bring some "proof" of my claim, from this very page: `To calculate the new map, the server must go through and merge every single key. After that, it needs to transfer the full map to each peer — because remember, as far as it knows, the entire map is different.`
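
Here's a rough sketch of what that passage describes (my own illustration with a hypothetical `heMerge` stand-in, not the article's code): the server has to touch every key, and since ciphertexts give it nothing to diff against, it ships the whole map back out.

```typescript
// Illustration only: a hypothetical encrypted-map merge, not the article's implementation.
type Ciphertext = Uint8Array;
type EncryptedMap = Map<string, Ciphertext>;

// Placeholder for a real homomorphic merge, which would combine ciphertexts
// without decrypting them. Here it just keeps the incoming value so the sketch runs.
function heMerge(_current: Ciphertext, incoming: Ciphertext): Ciphertext {
  return incoming;
}

// The server can't tell which entries actually changed, so it merges every single key...
function serverMerge(current: EncryptedMap, incoming: EncryptedMap): EncryptedMap {
  const merged: EncryptedMap = new Map(current);
  for (const [key, value] of incoming) {
    const existing = merged.get(key);
    merged.set(key, existing === undefined ? value : heMerge(existing, value));
  }
  return merged;
}

// ...and, unable to compute a delta over ciphertexts, it sends the full map to each peer.
function broadcast(sendToPeers: Array<(m: EncryptedMap) => void>, merged: EncryptedMap): void {
  for (const send of sendToPeers) send(merged);
}
```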


jason_oster
CRDTs are not inherently “crazy slow”. Researchers just don’t succumb to the appeal of premature optimization.

See: https://josephg.com/blog/crdts-go-brrr/

(And even these optimizations are nascent. It can still get so much better.)

The section you quoted describes an effect of homomorphic encryption alone.

There is the problem that both CRDTs and encryption add some overhead, and the overhead is additive when used together. But I can’t tell if that is the point you are trying to make.

josephg
Yep. Author here - that article is out of date now. I should really do a followup. Performance of CRDTs has improved again through a new grab bag of tricks. I’ve also been told the beta of automerge 3 uses a lot of the optimisations in that post, and it’s now much faster as a result.

A crdt library should be able to handle millions of changes per second. If it’s the bottleneck, something somewhere has gone wrong.

Please do! There are too few current articles about CRDTs around!
PartiallyTyped
The article was a great read, many thanks! Looking forward to the next!
hansvm
> additive

The overhead is usually multiplicative per-item. Let's say you're doing N things. CRDTs make that O(Nk) for some scaling factor k, and adding encryption makes it O(Nkj) for some scaling factor j.

Give or take some multiplicative log (or worse) factors depending on the implementation.
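
To make that concrete, a toy cost model (made-up numbers, just to show that the factors multiply rather than add):

```typescript
// Toy cost model for the point above: k is a per-item CRDT bookkeeping factor,
// j is a per-item encryption factor. The values are illustrative, not measured.
function encryptedCrdtCost(n: number, k: number, j: number): number {
  return n * k * j; // O(N * k * j)
}

function additiveCost(n: number, k: number, j: number): number {
  return n * (k + j); // what purely "additive" overhead would look like instead
}

// Example: 1e6 ops with k = 5 and j = 20.
console.log(encryptedCrdtCost(1e6, 5, 20)); // 1e8 units of work
console.log(additiveCost(1e6, 5, 20));      // 2.5e7 units of work
```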

motorest
> CRDTs are also crazy slow due to their architecture ;

You must back up your extraordinary claim with some extraordinary evidence. There is nothing inherently slow in CRDTs.

Also, applying changes is hardly on anyone's hot path.

The only instances where I've seen anyone complain about CRDT performance turned out to be very naive, overly chatty implementations that spammed changes. If you come up with any code that requires a full HTTPS connection to send a single character down the wire, the problem is not the algorithm.
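
For contrast, a hypothetical sketch of the kind of batching I'd expect instead: buffer ops and flush them over one persistent connection rather than opening a request per keystroke.

```typescript
// Hypothetical sketch: buffer CRDT ops and flush them in batches over one
// persistent connection, instead of an HTTPS request per character.
type Op = { key: string; value: string };

class BatchedSender {
  private buffer: Op[] = [];

  constructor(
    private send: (batch: Op[]) => void, // e.g. ws.send(JSON.stringify(batch))
    flushIntervalMs = 50,
  ) {
    setInterval(() => this.flush(), flushIntervalMs);
  }

  push(op: Op): void {
    this.buffer.push(op);
  }

  flush(): void {
    if (this.buffer.length === 0) return;
    this.send(this.buffer);
    this.buffer = [];
  }
}
```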

__MatrixMan__
Is it the CRDT that's slow there, or is the problem that they've made it one party's job to update everybody?

By having a server in the mix it feels like we're forcing a hub/spoke model on something that wants to be a partial mesh. Not surprising that the hub is stressed out.

svieira
The whole point of Conflict-free Replicated Data Types is that you don't need an authoritative server. You're thinking of Operational Transform, which does require an authority.

While it is true that CRDTs don't require an authoritative server, the hub-and-spoke model (which could also be thought of as having a well-known, always-online super peer) is more efficient and provides a better user experience. In practice, most products that are built with CRDTs today use this model.
__MatrixMan__
I may have worded that poorly but my intent was to suggest that it would work better if there wasn't a server.

I didn't know the name for the serverful alternative though, so thanks for that.

Asraelite
> CRDTs are also crazy slow due to their architecture

What kinds of CRDTs specifically are you referring to? On its own this statement sounds far too broad to be meaningful. It's like saying "nested for loops are crazy slow".

MangoToupe
> CRDTs are also crazy slow

compared to what? c'mon
