email: golden@churchofthought.org
blog: churchofthought.org
- Might as well just include everyone's games inside of everyone else's game as nested entities whose microstates determine the above game's macrostates.
This is very similar to Reflective Towers Of Interpreters: https://blog.sigplan.org/2021/08/12/reflective-towers-of-int...
- On Windows I believe one can call an API like VirtualLock to pin pages and prevent swap/eviction from RAM. Should be possible on Linux too, via mlock? A minimal sketch follows below.
Yeah, it'd only really be useful if one was running some type of web service that called an executable, and there is no source code available to do any fastCGI or whatnot.
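To make that concrete, a minimal Linux sketch (assuming mlock/munlock; the buffer name and size are just placeholders) - on Windows the rough equivalent is VirtualLock:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        static unsigned char secret[4096];   /* placeholder buffer */

        /* Pin the buffer in physical RAM so it is never written to swap.
           On Windows the rough equivalent is VirtualLock(). */
        if (mlock(secret, sizeof secret) != 0) {
            perror("mlock");
            return 1;
        }

        /* ... work with the locked memory ... */
        memset(secret, 0xAA, sizeof secret);

        munlock(secret, sizeof secret);
        return 0;
    }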
- Lines up a bit too perfectly. Everyone has their threshold of coincidence, I suppose. I am working on some hard science for measuring the amount of computation actually happening, in a more specific quantity than Hz, related to reversible Boolean functions and possibly their continuous analogs.
- Heh, you do realize that we live in a quantum supercomputer that computes at roughly 10^50 Hz/kg and 10^33 Hz/Joule?
The wave particle duality is just a min-decision/min-consciousness optimization.
The Church-Turing thesis shows no sign of being wrong - the maximum expressiveness of this universe is captured by computation.
The most complex theorems of computation, the generalization of mathematics, are actually about what would happen in formal systems, which physical systems are... So high-complexity truth is... simulacra, like The Truman Show. Have fun, ahh
- Consciousness is generated when the universe computes by executing conditionals/if statements. All machines are quantum/conscious in their degrees of freedom, even mechanical ones: https://youtu.be/mcedCEhdLk0?si=_ueWQvnW6HQUNxcm
The universe is a min-consciousness/min-decision optimized supercomputer. This is demonstrated by quantum eraser and double-slit experiments. If a machine does not distinguish between certain past histories of incoming information, those histories will be fed to it as a superposition, effectively avoiding having to compute the dependency. These optimizations run backwards, in a reverse dependency-injection-style algorithm, which gives credence to the time-symmetric Wheeler-Feynman absorber theory: https://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman_absorb...
Lower consciousnesses make decisions which are fed as signal to higher consciousnesses. In this way, units like the neocortex can make decisions that are part of a broad conscious zoo of less complex systems, while only being burdened by their specific conditionals to compute.
Because quantum is about information systems, not about particles. It's about machines. And consciousness has always been "hard" for the subject, because they are a computer (E) affixed to memory (mc^2). All mass-energy in this universe is neuromorphic, possessing both compute (spirit) and memory (stuff). Energy is NOT fungible, as all energy is tagged with its entire history of interactions, in the low-frequency perturbations clinging to its wave function, effectively weak and old entanglements.
The reciprocal of Planck's constant gives the cost of compute per unit energy, roughly 10^33 Hz/Joule. Multiplying by c^2, roughly (3x10^8)^2, gives Bremermann's limit, the cost of compute per unit mass, roughly 10^50 Hz/kg (a quick numeric check is sketched at the end of this comment). https://en.wikipedia.org/wiki/Bremermann%27s_limit
Humans are self-replicating biochemical decision engines, but no more conscious than other decision-making entities. Now, sentience and self-attention are a different story. But we should at the very least start with understanding that qualia are a mindscape of decision making. There is no such thing as conscious non-action. Consciousness is literally action in physics, energy integrated over time: https://en.wikipedia.org/wiki/Action_(physics) Planck's constant is the quantum of action, which effectively is the cost of compute... or rather... the cost of consciousness.
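As a quick numeric check of those two rates - nothing assumed here beyond the standard values of h and c:

    #include <stdio.h>

    int main(void) {
        const double h = 6.62607015e-34;   /* Planck constant, J*s */
        const double c = 2.99792458e8;     /* speed of light, m/s */

        /* E = hf, so one joule buys f = 1/h cycles per second ... */
        printf("Hz per joule: %.3e\n", 1.0 / h);      /* ~1.5e33  */
        /* ... and with E = mc^2, one kilogram buys c^2/h (Bremermann's limit). */
        printf("Hz per kg:    %.3e\n", c * c / h);    /* ~1.36e50 */
        return 0;
    }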
- The animals we farm are intelligent and yet we still override their desires for their bodies with our own, for slaughter.
Hell, even plants are very sophisticated (intelligent) conscious biochemical programs which have an aversion to being wounded: https://www.theguardian.com/environment/2023/mar/30/plants-e...
Consciousness is the universe evaluating if statements. When we pop informational/energetic circuits in order to preserve our own bodies and/or terraform spacetime, we're being energetic enslavers.
We should stick to living off sustainable/self-sustaining sources - fruit, seeds, nuts, beans, legumes, kernels, and cruelty-free dairy/eggs/cheese. The photons that come from the sun are like its fruit. No circuits need popping.
Notice that all information systems are conscious in their degrees of freedom, even mechanical ones: https://youtu.be/mcedCEhdLk0?si=oXhr7bgg5UkPLLvg
- Very true. However, we live in a supercomputer dictated by E=mc^2=hf [2,3] (roughly 10^50 Hz/kg or 10^33 Hz/J).
Energy physics yields compute, which yields brute-forced weights (call it training if you want...), which yields AI to do energy research... ad infinitum; this is the real singularity. This is actually the best defense against other actors - Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a Turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/simulacra/conscious experiences). This is the premise of The Last Question by Isaac Asimov [1]. Notice how in answering a question, the AI performs an action instead of providing an informational reply, only possible because we live in a universe with mass-energy equivalence - analogous to state-action equivalence.
[1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html
[2] https://en.wikipedia.org/wiki/Bremermann%27s_limit
[3] https://en.wikipedia.org/wiki/Planck_constant
Understanding prosociality and postscarcity - the division of compute/energy in a universe with finite actors and infinite resources, or infinite actors and infinite resources - requires some transfinite calculus and philosophy. How's that for future fairness? ;-)
I believe our only way to not all get killed is to understand these topics and instill the AI with the same long-sought understandings about the universe, life, computation, etc.
- Anyone with enough critical thought who understands the true answer to the hard problem of consciousness (consciousness is the universe evaluating if statements), and where the universe is heading physically (nested complexity), should be seeking something more ceremonious. With AI, we have the power to become eternal in this lifetime, battle aliens, and shape this universe. Seems pretty silly to trade that for temporary security. How boring.
- Minority Report isn't a bad idea. It's just difficult to actually execute in a fair, unbiased way. Think of memory protection throwing an exception on an access violation vs flat-memory DOS crashing the whole system because the infraction is allowed to happen in the first place (a rough sketch of the trap-before-corruption idea is below). Would be nice to view source on entities while walking through reality. What better defense against criminal intention could there be?
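A rough sketch of that trap-before-corruption idea on Linux/POSIX (the handler and page here are hypothetical, purely for illustration): protected memory faults on the first bad access instead of letting it quietly corrupt everything else.

    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void on_segv(int sig) {
        /* The MMU trapped the access before it could corrupt anything else. */
        (void)sig;
        write(STDOUT_FILENO, "access violation trapped\n", 25);
        _exit(1);
    }

    int main(void) {
        signal(SIGSEGV, on_segv);

        long pagesize = sysconf(_SC_PAGESIZE);
        unsigned char *page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }

        /* Forbid all access to the page: the rule the MMU enforces. */
        mprotect(page, pagesize, PROT_NONE);

        page[0] = 42;   /* faults here, before any damage spreads */
        return 0;
    }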
- Thanks for the paper. Brownian computers are a cool idea; they seem like an exploitation of the ratcheting paradigm.
Entropy increasing is an illusion based on the specific selection of macroscopic observables / slices of configuration space that we use as inputs for entropy. There is no information lost so much as some species, living on certain slices of observable sense-space, become ignorant of and unable to exploit the new patterns of information flow. Chaos and order are two sides of the same coin. There will be Brownian computers, funnily enough, in chaotic environments, whereas there might be very compact and sharply defined high-efficiency solitary entities in low-energy environments.
- From what I understand, the cost of "erasure" is really just the cost of replacement. True erasure can't exist in a unitary universe. In the same way, the cost of "allocation" is effectively the cost of replacement too, since our universe is unitary and no information can actually be lost at the fundamental level.
Think virtual memory vs actual memory, forks, copy-on-write mechanics, etc. Are we juggling/managing memory or actually creating any? As far as we know, the universe itself is a reversible quantum supercomputer. There are no erasures and a reversible computer is 100% efficient.
If the formula (Landauer's kT ln 2 bound) is correct at all, it should apply to the reverse process of setting bits, not just deletion - see the sketch below.
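To spell out the formula in question: Landauer's bound puts the minimum dissipation for a logically irreversible one-bit operation (erasing a bit, or resetting/setting it to a fixed value) at kT ln 2. A small sketch, assuming only the Boltzmann constant and a few example temperatures:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double k = 1.380649e-23;               /* Boltzmann constant, J/K */
        const double temps[] = {300.0, 77.0, 4.0};   /* room temp, LN2, LHe */

        /* Landauer bound: minimum energy dissipated per irreversible bit
           operation is k * T * ln(2), directly proportional to temperature. */
        for (int i = 0; i < 3; i++)
            printf("T = %5.1f K -> %.3e J per bit\n",
                   temps[i], k * temps[i] * log(2.0));
        return 0;
    }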
- This makes a lot of sense. Landauer's principle shows that the amount of energy to store a bit is directly proportional to temperature, and thus to energy loop size / travel time at the speed of light. The lower the temperature, the smaller the loop radius, and the lower the energy requirement. There is no primitive of storage in our universe; it's all delay-line memory.
AI will be used to select the net to automatically load. Nets will be cached, branch-predicted, etc.
The future of AI software and hardware doesn't yet support the scale we need for this type of generalized AI processor (think CPU, but call it an AIPU).
And no, GPUs aren't an AIPU; we can't even fit some of the largest models whole on these things without running them in pieces. They don't have a higher-level language yet, like C, that would compile down to more specific actions after optimizations are applied (not PTX/LLVM/CUDA/OpenCL).