- From analytical arguments considering a rather generic error type, we already know that for the Shor algorithm to produce a useful result, the error rate per logical qubit needs to decrease with problem size as ~n^(-1/3), where `n` is the number of bits in the number being factored [1].
This estimate, however, assumes that an interaction can be turned on between any two qubits. In practice, we can only do nearest-neighbour interactions on a square lattice, and we need to simulate an interaction between two arbitrary qubits by repeated application of SWAP gates, shuttling the qubits towards each other as in the 15-puzzle. This simulation adds about `n` SWAP gates per two-qubit gate, which multiplies the accumulated noise by roughly the same factor, so on a square lattice the required logical error rate becomes around ~n^(-4/3).
Now comes the error correction. The estimates are somewhat hard to make here, as they depend on the sensitivity of the readout mechanism, but let's say, for example, that a 10-bit number can be factored with a logical qubit error rate of 10^{-5}. Then we apply a surface code whose protection scales exponentially, reducing the error rate by a factor of 10 for every 10 additional physical qubits, which we could express as ~10^{-m/10}, where m is the number of physical qubits per logical qubit (which is rather optimistic). Putting in the numbers, it would follow that we need about 40 physical qubits per logical qubit, hence around 400k physical qubits in total.
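For concreteness, a back-of-the-envelope check of that arithmetic in Julia. The 10^{-1} baseline physical error rate is my own assumption (chosen so that the figures above come out); the 10-qubits-per-order-of-magnitude surface-code scaling, the 10^{-5} target, and the 400k total are taken from the paragraph above.

```julia
# Sanity check of the surface-code overhead estimate above.
exp_phys   = -1        # assumed baseline: physical error rate ~10^-1 (my assumption)
exp_target = -5        # target logical error rate 10^-5 (from the comment)
qubits_per_order = 10  # 10 extra physical qubits per 10x error reduction (from the comment)

m = qubits_per_order * (exp_phys - exp_target)            # physical qubits per logical qubit
p_logical = 10.0^exp_phys * 10.0^(-m / qubits_per_order)  # resulting logical error rate

n_logical = 10_000     # the logical-qubit count implied by the 400k total above
println("physical qubits per logical qubit: ", m)         # 40
println("total physical qubits: ", m * n_logical)         # 400000
println("resulting logical error rate: ", p_logical)      # ~1.0e-5
```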
That may sound reasonable, but it assumes that individual physical qubits do not decohere while waiting for their turn to be manipulated. This, in fact, scales poorly with the number of qubits on the chip: physical constraints limit the number of coaxial cables that can be attached, so control signals must be multiplexed, and so qubits inevitably end up waiting. The waiting is even more pronounced in the quantum computer cluster proposals that surface from time to time.
[1]: https://link.springer.com/article/10.1007/s11432-023-3961-3
- I have also fallen into this trap by thinking that a good product that addresses the needs of users would make it wanted. But having come this far with no traction to show, I seriously doubt my prospects of bridging the gap from needs to wants.
- I developed quite a few gripes about the app bundle structure while working on https://github.com/PeaceFounder/AppBundler.jl. The requirement (recommendation) to distribute shared libraries within the Frameworks folder, where each directory follows a strict structure, looks nice, but it's a hassle to bundle an application that way. I am now using a Libraries folder to bypass this requirement, which only surfaces during code signing.
My biggest issue, though, is Apple code signing. It's bad enough that a signature has to be attached to every binary (roughly the kind of loop sketched below), which seems wasteful. Why would anyone consider that better than keeping hashes of each file in one place and attaching a single signature to them? Then there are entitlements, which are attached to the launcher binary when it is signed. Why couldn't these just be stored in `Info.plist` or a separate file, instead of requiring this process?
And then there is notarisation, where at any point in the future, you might discover that your application bundle no longer passes, as requirements have become more stringent.
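For illustration, a rough Julia sketch of the per-binary signing loop that this forces on a bundler; the bundle path, signing identity, and entitlements file are placeholders, and this is not AppBundler.jl's actual code.

```julia
# Hedged sketch: sign every nested dylib, then the launcher with entitlements.
app      = "MyApp.app"                                        # placeholder bundle
identity = "Developer ID Application: Example Corp (TEAMID)"  # placeholder identity

# Every Mach-O binary in the bundle must carry its own signature,
# so walk the bundle and sign the nested libraries first.
for (root, _, files) in walkdir(joinpath(app, "Contents"))
    for f in files
        endswith(f, ".dylib") || continue
        run(`codesign --force --timestamp --options runtime --sign $identity $(joinpath(root, f))`)
    end
end

# The launcher binary gets the entitlements attached at signing time,
# rather than them living in Info.plist or a separate declarative file.
launcher = joinpath(app, "Contents", "MacOS", "MyApp")
run(`codesign --force --timestamp --options runtime --entitlements Entitlements.plist --sign $identity $launcher`)
```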
- I map the arrow keys to switch windows, which works quite well for me.
- That is one possibility. Probably the only one, though.
- Indeed, a wonderful proof. It does, though, make one implicit assumption: that when the fabric is stretched, all holes in it stretch by the same amount. In particular, it assumes that how a triangle stretches is independent of its size. Perhaps there are fabrics where that is not true...
- > you can build most of the control and readout circuits to work at cryogenic temps (2-4K) using slvtfets
Given the magic that happens inside the high-precision control and readout boxes connected to qubits with coaxial cables, I would not equate the mere possibility of building such a cryogenic control circuit with it ever reaching the same level of precision. I find it strange that I haven't seen this on the agenda for QC; instead, I see multiplexing being used.
> The theoretical limit for this quantum computing platform is, I believe, on the order of a million qubits in a single cryostat.
What are the constraints here?
- > I don't think there is any fundamental reason why this could not be replaced by a different tech in the future.
The QC is designed with coaxial cables running from the physical qubits to the outside of the cryostat because the pulse measurement apparatus is most precise as large, bulky boxes. When you miniaturise it for placement next to the qubits, you lose precision, which increases the error rate.
I am not even sure whether conventional logic components work at such low temperatures, since everything becomes superconducting.
> Even with current "coaxial cable tech", it "only" needs to scale up to the point of reaching one logical qubit.
Having a logical qubit sitting in a big box is insufficient. One needs multiple logical qubits that can interact and be put into superposition, for example. A gate between two logical qubits is represented by a chain of gates between pairs of physical qubits, which cannot all be applied directly at once; hence one effectively needs to solve the 15-puzzle in the fewest possible steps so that the qubits don't decohere in the meantime.
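To make the 15-puzzle analogy concrete, here is a toy Julia sketch of the routing cost on a square lattice with nearest-neighbour gates only; the grid size and qubit positions are made up for illustration.

```julia
# Physical qubits on an LxL grid, indexed 1..L^2; only adjacent qubits can
# interact, so a gate between two distant qubits first needs SWAPs to bring
# them next to each other.

coord(i, L) = divrem(i - 1, L) .+ 1   # linear index -> (row, col)

# SWAPs needed to make qubits a and b adjacent: Manhattan distance minus one.
function swaps_needed(a, b, L)
    (ra, ca) = coord(a, L)
    (rb, cb) = coord(b, L)
    abs(ra - rb) + abs(ca - cb) - 1
end

L = 20                              # 20x20 = 400 physical qubits
println(swaps_needed(1, L^2, L))    # opposite corners: 37 SWAPs before the actual gate
```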
- I agree that measuring factorisation performance is not a good metric for assessing progress in QC at the moment. However, the idea that once logical qubits become available we reach a cliff is simply wishful thinking.
Have you ever wondered what will happen to those coaxial cables seen in every quantum computer setup, which scale approximately linearly with the number of physical qubits? Multiplexing is not really an option when the qubit waiting for its control signal decoheres in the meantime.
- I consistently use `m` to mark the relevant files/directories in dired mode and then `A` to search for a regexp across all marked files. It does not seem that I am missing anything by not relying on such a project approach.
- It is quite negligent that they are not using a threshold decryption ceremony, but at the same time I don't think we should dismiss the framing of human mistake here. Even if a threshold decryption ceremony were in place, such a failure mode could still happen; the threshold simply makes it more visible. The question of how one would select the threshold seems pertinent.
A small threshold reduces privacy, whereas a large threshold makes human error or deliberate sabotage attempts more likely. What is the optimum here? How do we evaluate the risks?
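To illustrate the trade-off, a toy Julia calculation for a t-of-n ceremony; the independence assumption and the per-trustee probabilities are entirely made up.

```julia
# Toy model for picking the threshold t in a t-of-n decryption ceremony.
# Assumed (not from the discussion above): trustees misbehave or fail
# independently, with made-up probabilities.

binom_tail(n, p, k) = sum(binomial(n, i) * p^i * (1 - p)^(n - i) for i in k:n)

n = 7      # trustees
q = 0.05   # prob. a trustee fails to show up / sabotages the ceremony
c = 0.02   # prob. a trustee is willing to collude against privacy

for t in 1:n
    p_stuck   = 1 - binom_tail(n, 1 - q, t)   # fewer than t shares available
    p_exposed = binom_tail(n, c, t)           # at least t trustees collude
    println("t = $t   P(ceremony fails) = $(round(p_stuck, sigdigits=2))   P(privacy breach) = $(round(p_exposed, sigdigits=2))")
end
```

A small t keeps the ceremony robust but makes a privacy breach more likely; a large t does the opposite, which is the tension described above.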
- It is very easy to switch. Today I got things done with ChatGPT. Only if I didn't have any LLM available at all would it be a disaster.
- Overriding internal functions is discouraged; hence one often only needs to monitor the public API changes of packages, as in every other programming language. In my experience, package updates have rarely broken things like that. Updates to Base can sometimes cause such issues, but those are thoroughly tested against the most popular registered packages before a new Julia version is released.
Interfaces could be good as intermediaries, and it is always great to hear the JuliaCon talks every year on the best ways to implement them.
> Imagine you try to move a definition from one file to another. Sounds like a trivial piece of organization, right?
In my experience it is entirely trivial. I guess your pain points may have come from making each file its own module, adding methods for your own types in a different module, and then finding that moving things around is error prone. One remedy is sometimes simply not to create internal modules. The best solution, however, is to write integration tests (a minimal sketch follows below), which is good software development practice anyway.
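A minimal sketch of what I mean by an integration test; `MyPackage`, `Config`, and `run_pipeline` are hypothetical stand-ins, not a real API.

```julia
# test/runtests.jl -- exercise the public API end to end, so that moving a
# definition between files or internal modules breaks loudly here.
using Test
using MyPackage   # hypothetical package

@testset "end-to-end pipeline" begin
    config = MyPackage.Config(tolerance = 1e-6)               # hypothetical constructor
    result = MyPackage.run_pipeline([1.0, 2.0, 3.0], config)  # hypothetical entry point
    @test length(result) == 3
    @test all(isfinite, result)
end
```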
- All is true, and I agree with you, yet it is deeply unsatisfying for one with original ideas.
- > I admit that I was initially surprised by how often I ran into the attitude from students in these programs that they don't actually need to be well-versed in anything besides the exact information they need to know to conduct research in their field.
PhD students tend to get this attitude from the competitive publish-or-perish environment they are in. Sometimes supervisors contribute to it by dismissing, as unproductive, students' attempts to gain context and the big picture of why their research is important, especially when the research topic is preassigned. When productivity is measured in papers rather than in curious PhD graduates who contribute to society, that is indeed true.
- Because of the variety of roof designs and associated mounting systems, the know-how of how to mount the panels is a large part of what you are paying for. Installing panels on a high-pitched roof is also no easy feat.
For ground mounting, site preparation is required; one probably wouldn't want to see their panel system develop a slope year after year.
- Equating freedom with choice is strange. Often a choice comes hand in hand with commitments that remove the initial options from the table while other options appear. Whether the lost options produce regret is a matter of one's evolving values as they perceive the available options at a given time. Hence the absence of regret seems to be the real freedom, but we don't have a time machine or a crystal ball. The next best option is to approach regret with humility about the past and with the agency to align current values with future options. If the values don't need to be forced on oneself to align with the available options, then we get freedom.
- I personally use Julia, which does not have such boxing issues. Rust, C, C++, and Fortran also avoid boxing like this. Perhaps Go is also free from such boxing? Python does it, that's true.
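As a quick illustration of what "no boxing" means in Julia (a small sketch, not tied to any particular codebase): a struct of plain fields is a flat bits type, and a `Vector` of them stores the values inline rather than as pointers to heap objects.

```julia
# A plain struct of Ints has no object header and no per-element pointer.
struct Point
    x::Int
    y::Int
end

@show isbitstype(Point)             # true: a flat 16-byte value type

v = [Point(i, 2i) for i in 1:1_000]
@show sizeof(v)                     # 16_000 bytes: elements stored inline, unboxed
```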
- That doesn't sound very pleasant.
The hard problem that remains is how to connect those qubits at scale. Using a coaxial cable for each qubit is impractical; some form of multiplexing is needed, which in turn causes qubits to decohere while waiting for their control signal.