
I sat through a briefing last week about quantum encryption and the threat that quantum computing poses to the encryption in use today. It was stressed that nation states are hoovering up encrypted data now in order to decrypt it later with quantum computers, much the same way America decrypted old Soviet encrypted traffic. I wonder if it will take as long, and if anyone will still be alive to make use of that data.

If quantum computing keeps progressing the way it has over the last 30 years, it may take 300 years before it's useful.

https://eprint.iacr.org/2025/1237.pdf

> As has been previously pointed out, the 2001 and 2012 quantum factorisation records may be easily matched with a dog trained to bark three times [33]. We verified this by taking a recently-calibrated reference dog, Scribble, depicted in Figure 6, and having him bark three times, thus simultaneously factorising both 15 and 21. This process wasn’t as simple as it first appeared because Scribble is very well behaved and almost never barks. Having him perform the quantum factorisation required having his owner play with him with a ball in order to encourage him to bark. It was a special performance just for this publication, because he understands the importance of evidence-based science.
I look forward to more dog-based science.
> we also estimate that factorising at least two-digit numbers should be within most dogs’ capabilities, assuming the neighbours don’t start complaining first
> Similarly, we refer to an abacus as “an abacus” rather than a digital computer, despite the fact that it relies on digital manipulation to effect its computations.

I loved this quote as well.

This deserves an Ig Nobel Prize lol.
Ig Nobels go to actual research, not to satire.
If you know a better way to factor 35, I’d like to hear it.
If we know anything, it's that development is never linear.
If anyone had made meaningful progress on QC in that time, there is no way knowledge of it would have been allowed to become public.

It is one of those domains where success would land you in a gilded prison.

Like LLMs, this isn't the sort of thing where a small group would make a sudden advance that could be kept secret, and I doubt that the NSA can build theirs significantly faster than any industry team today. I think more likely you would need to get worried if someone got one scalable to hundreds or thousands of logical qubits and then stopped publishing.
> I think more likely you would need to get worried if someone got one scalable to hundreds or thousands of logical qubits and then stopped publishing.

Consider the likelihood of managing that without alerting the authorities to what is going on.

Thanks for sharing this, great read.
This shouldn't be a major issue because of the forward secrecy (https://en.wikipedia.org/wiki/Forward_secrecy) principles built into modern TLS. Even if the public/private key scheme is vulnerable to (for example) quantum attacks, the attack has to happen now, as a MITM during the handshake; otherwise the full traffic capture is useless for future decryption without obtaining secrets from one of the endpoints.

That being said, it's not yet used 100% everywhere (Wikipedia mentions 92.6% of websites), and various means of tricking devices into downgrading to an older protocol would result in traffic that might be decrypted later.

No, this absolutely is not how forward secrecy works in TLS. Forward secrecy protects against a break in the signature algorithm, but not in the key agreement algorithms.

Both the FFDH and ECDH key agreement algorithms are vulnerable to quantum cryptanalysis; someone capturing traffic today could later break the agreement and then decrypt the data. An attacker would have to capture the entire session up to the "point of interest", though.

This is why FFDH/ECDH are being augmented with Post-Quantum secure KEMs.
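To make the exposure concrete, here is a minimal sketch of ephemeral X25519 key agreement, the kind of exchange a recorded TLS 1.3 handshake rests on, written with the pyca/cryptography package. The HKDF parameters are illustrative stand-ins, not TLS's actual key schedule:

```python
# Minimal sketch of ephemeral X25519 key agreement (pyca/cryptography).
# Ephemeral keys give forward secrecy against a *signature key* compromise,
# but the agreement itself remains breakable by a quantum attacker.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates a fresh key pair per session.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# The public keys cross the wire in the clear; a passive recorder sees both.
client_pub = client_priv.public_key()
server_pub = server_priv.public_key()

# Both sides compute the same shared secret from their private key and the
# peer's public key.
secret_c = client_priv.exchange(server_pub)
secret_s = server_priv.exchange(client_pub)
assert secret_c == secret_s

# Session keys are derived from that shared secret (illustrative KDF use;
# real TLS 1.3 uses a more involved key schedule).
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"illustrative handshake context",
).derive(secret_c)

# The catch: a quantum attacker who recorded both public keys could later
# solve the discrete log, recompute secret_c, and re-derive session_key.
# That is exactly the store-now, decrypt-later risk described above.
```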

What I want to know is how they guess which 0.001% of signals or internet traffic is actually worthwhile to keep? The biggest nation states could conceivably store about 1 year's worth of internet traffic right now, but then they also need to store whatever other signals intelligence they're gathering for analysis, so it will be less than a single year's worth.

But almost all that data is going to turn out to be useless if or when they gain quantum ability to decrypt it, and even the stuff that could be useful now gets less useful with every month it stays encrypted. Stuff that is very useful intelligence now could be absolutely useless in five years…
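For scale, here's a back-of-envelope sketch; the figures are assumptions of mine (roughly 400 EB/month of global IP traffic, commodity 18 TB drives), not numbers from the briefing:

```python
# Back-of-envelope: can a nation state store a year of internet traffic?
# All figures below are assumed ballpark numbers, not sourced from the thread.
monthly_traffic_eb = 400          # assumed: ~400 EB/month of global IP traffic
yearly_traffic_eb = monthly_traffic_eb * 12
drive_tb = 18                     # assumed: one commodity 18 TB drive
tb_per_eb = 1_000_000

drives_needed = yearly_traffic_eb * tb_per_eb / drive_tb
print(f"~{yearly_traffic_eb} EB/year -> ~{drives_needed / 1e6:.0f} million drives")
# ~4800 EB/year -> ~267 million drives: conceivable for a superpower, but
# only barely, which is why selecting the right 0.001% matters more than
# raw capacity.
```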

> What I want to know is how they guess which 0.001% of signals or internet traffic is actually worthwhile to keep?

By observing DNS lookups in centralized taps like Room 641A at AT&T.

If you discard all major video streaming sites (including adult entertainment), you can probably get most of the way there; you're mostly interested in text communication and actual user data, not the video content, which is vastly larger than everything else.
2000 years in the future people will know which porn you slobbed it to.
All of Tor that can be hoovered up seems like a worthwhile investment.
An exabyte isn't as much as it sounds like.
It was a lot more in 2014. Presumably they have upgraded it significantly since.
I wonder if there is any way of figuring out a "data space inflation" metric or similar, like money but for drive space?

So we who grew up with 500 MB computers could properly communicate how big the drives "felt" at the time compared to what we have today; a toy sketch follows after this comment.

I was a few years into computer use before I got to experience a hard drive, a whopping 40 MB.
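One toy way to build such a metric (my own sketch, with rough assumed "typical drive of the year" figures): express every drive as a multiple of a typical consumer drive of its year, so sizes read in "contemporary drives" rather than bytes.

```python
# Toy "storage inflation" index: a drive's size in units of the typical
# consumer drive of its year. All per-year figures are rough assumptions.
typical_drive_bytes = {
    1990: 40e6,    # ~40 MB
    2000: 20e9,    # ~20 GB
    2010: 1e12,    # ~1 TB
    2024: 4e12,    # ~4 TB
}

def felt_size(size_bytes: float, year: int) -> float:
    """How big a drive 'felt': multiples of that year's typical drive."""
    return size_bytes / typical_drive_bytes[year]

# A 40 MB drive in 1990 felt like a 4 TB drive does today (index 1.0 each):
print(felt_size(40e6, 1990), felt_size(4e12, 2024))

# A 500 MB machine in 1990 scored 12.5x the typical drive; the same "feel"
# today would be a 50 TB array:
tb_equiv = felt_size(500e6, 1990) * typical_drive_bytes[2024] / 1e12
print(f"{tb_equiv:.0f} TB-equivalent")
```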
I don't think this is true at anything resembling a concerning scale.

Even trying to do something like saving 'just' the average yearly traffic Tor handles would account for 2-3% of all the storage currently available.

We're talking about the same government that quickly abandoned its quest of 'archiving every tweet in the Library of Congress'.

I sat in a similar briefing in 2018; sounds like the same talking points still.
It's an interesting little nugget of evidence in favor of the simulation hypothesis. We're currently living through the first era in humanity's history where there will be enough raw information to create a workable global-level simulation once that data is decrypted. Pair that with the fact that we're living through such a huge inflection point in history (birth of the internet, humanity becoming a multiplanetary species, and more) and you have a time that future people both (1) can and (2) will want to simulate and experience. It's quite interesting.

I'm still convinced that the simulation hypothesis is just religion for atheists and agnostics, because if it turns out to be correct and one day you 'wake up' only to find that it was all a simulation, how do you know that isn't now also just another simulation? It's a non-theory. But I find this quite compelling circumstantial evidence in favor of that non-theory. An arbitrary number of individuals may be able to 'experience' this era throughout our species' future, yet only one group gets to actually live it, and that group will ostensibly be orders of magnitude smaller than the sum total of all who will later experience it. Statistically, you're rather more likely to belong to the simulated group than the real one, if these assumptions are correct.

