One reason homomorphic encryption has a reputation for being absurdly slow is that people typically talk about “fully” homomorphic encryption, which essentially means you can compute an arbitrarily large function on the encrypted data. This requires a very expensive process called bootstrapping, which incurs a cost of roughly 15 ms per binary operation evaluated. As you can imagine, that adds up.
This work uses “leveled” homomorphic encryption, where we only evaluate a function of bounded size (here, the homomorphic dot product). So we never have to perform bootstrapping, and thus avoid the most extreme costs.
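To make the “PIR as a homomorphic dot product” idea concrete, here is a toy sketch. Caveat: it uses textbook Paillier (an additively homomorphic scheme) with tiny demo primes, not the lattice-based leveled scheme the actual work uses, and all names and parameters are illustrative; but the shape of the protocol (encrypt a one-hot query vector, server computes an encrypted dot product against the database) is the same.

```python
# Toy PIR-as-dot-product sketch using textbook Paillier.
# NOTE: insecure demo primes and a scheme chosen for brevity, not the
# lattice-based leveled HE used by the work under discussion.
import math
import random

# -- toy Paillier keypair (real keys would be ~2048-bit) --
p, q = 10007, 10009
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # precomputed inverse used in decryption

def encrypt(m: int) -> int:
    """Enc(m) = (1 + m*n) * r^n mod n^2, using g = n + 1."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

# -- PIR as a dot product --
db = [42, 7, 99, 1234]   # server's plaintext database
index = 2                # the record the client wants

# Client: encrypt a one-hot selection vector; the server sees only ciphertexts.
query = [encrypt(1 if i == index else 0) for i in range(len(db))]

# Server: homomorphic dot product. Multiplying ciphertexts adds their
# plaintexts, and c^d scales the plaintext by d, so this yields
# Enc(sum_i q_i * d_i) without the server learning the index.
acc = 1
for c, d in zip(query, db):
    acc = acc * pow(c, d, n2) % n2

# Client decrypts the single returned ciphertext.
retrieved = decrypt(acc)
```

The server only ever does a fixed-size computation on the ciphertexts, which is why no bootstrapping is needed: the “function” being evaluated is bounded in advance.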
The other reason this work is practical is that we tailored our construction to the particular problem of private information retrieval. When people try to apply homomorphic encryption generically to their problem, they typically end up with disappointingly slow and expensive results. Cool work on better ‘FHE compilers’ is in progress, so hopefully that will also help!
What is the meaning of the word "large" here? How are we defining the size of a function?
I'm also working closely with a team at $WORK that's using a protocol very similar to Apple's, but not for CSAM detection. We're seeing severe pushback on this technology. I wouldn't be surprised if there are multiple homomorphic-encryption-based products at Big Tech that have yet to see the light of day.
(And this is not a criticism; this is a compliment. You start so far behind the eight-ball with homomorphic encryption, given the resources it consumes, that I wasn't convinced it was ever going to be even remotely useful for much of anything. Precisely because I was so skeptical, I'm all the more impressed to see something work this well. It's not the fastest Wikipedia mirror, but... honestly... I've been on slower websites! Websites with far less excuse.)