- Why doesn’t Signal have the same mindshare that these (imo) marginal apps have? It’s actually private. I wonder if people find it hard to use or something…
- This. Tables of numbers are explicitly not subject to copyright; that’s a copyright 101 fact.
Any of the code that wraps the model or makes it useful is subject to copyright. But the weights themselves are as unrestricted as it gets.
- Rob Pike is definitely not the only person who’s going to be pissed off by this ill-considered “agentic village” and its random acts of kindness. While Claude Opus decided to send thank-you notes to influential computer scientists, including this one to Rob Pike (fairly innocuous but clearly missing the mark), Gemini is making PRs against random GitHub issues (“fixed a Java concurrency bug” on some random project). Now THAT would piss me off, but fortunately it seems to be hallucinating its PR submissions.
Meanwhile, GPT5.1 is trying to contact people at K-5 after-school programs in Colorado for some reason I can’t discern. Welp, 2026 is going to be a weird year.
- Embedding state in a real number and calling it a “length” is a common trick to show that a physical system is TC. Unfortunately, the abstraction (length<->real number) suffers from numerous real-world issues that typically render any implementation impossible.
I’m not even talking impractical; real numbers are simply too powerful to be resolved in the physical world. Unless you spend a ton of effort talking about quantizing and noise, you are very, very far from a realizable computer.
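To make that concrete, here’s a toy sketch (my own illustration, with a made-up encoding scheme): pack a tape of symbols into a single “length” as base-4 digits. With exact rationals the state round-trips perfectly; squeeze the same value through a 64-bit float, i.e. any finite-precision reading of that length, and it doesn’t.

```python
from fractions import Fraction

# Toy encoding: a tape of symbols from {0, 1, 2} packed into one "length"
# as base-4 digits (hypothetical scheme, just for illustration).
def encode(tape):
    x = Fraction(0)
    for i, s in enumerate(tape, start=1):
        x += Fraction(s + 1, 4 ** i)      # +1 so trailing zeros aren't ambiguous
    return x

def decode(x, n):
    out = []
    for _ in range(n):
        x *= 4
        digit = int(x)                    # exact for Fraction, only approximate for float
        out.append(digit - 1)
        x -= digit
    return out

tape = [0, 1, 2, 2, 1, 0] * 10            # 60 symbols, about 120 bits of state
exact = encode(tape)
print(decode(exact, len(tape)) == tape)           # True: exact arithmetic keeps all the state
print(decode(float(exact), len(tape)) == tape)    # False: a 53-bit mantissa can't hold it
```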
- I’ve made some interesting things in the past few years, in particular singing Tesla coils and digitally-controlled plasma tube lights. Was thinking about making bespoke musical instruments based on some of these learnings.
Of particular interest were some of the feedback effects that came from the Tesla coils. Basically, we modulated the frequency at which we drove the coils to produce sound, but the coils would interfere with one another, because that’s how electromagnetism works. We had to tune them to different resonant frequencies to play sound. But the interference itself could sound unique and eerie, sometimes like an old-timey radio. It’s similar in principle to a theremin, but a very different sound.
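A toy sketch of why the interference sounds that way (numbers invented; real coil coupling is much messier than adding sine waves): two drives a few hertz apart add up to a single tone whose loudness beats at the difference frequency, which is roughly that eerie warble.

```python
import numpy as np

# Two coil drive tones a few Hz apart (hypothetical frequencies, 44.1 kHz audio rate).
sr = 44_100
t = np.arange(0, 2.0, 1 / sr)
coil_a = np.sin(2 * np.pi * 440.0 * t)
coil_b = np.sin(2 * np.pi * 443.0 * t)    # detuned by 3 Hz
mix = 0.5 * (coil_a + coil_b)             # crude stand-in for the combined field

# The sum behaves like a ~441.5 Hz tone whose amplitude wobbles 3 times a second.
print(f"beat period ~ {1 / 3.0:.2f} s, peak amplitude {np.abs(mix).max():.2f}")
```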
Or I could just get a soul sucking job and do this in early retirement. Shrug.
- I think you are asking whether consciousness might be a fundamentally different “thing” from physics and thus hard or impossible to simulate.
I think there is abundant evidence that the answer is ‘no’. The main reason is that consciousness doesn’t give you new physics, it follows the same rules and restrictions. It seems to be “part of” the standard natural universe, not something distinct.
- Human brains and experiences seem to be constrained by the laws of quantum physics, which can be simulated to arbitrary fidelity on a computer. Not sure where Gödel’s incompleteness theorem would even come in here…
- Panpsychism is actually quite reasonable in part because it changes the questions you ask. Instead of “does it think” you need to ask “in what ways can it think, and in what ways is it constrained? What types of ‘experience/qualia’ can this system have, and what can’t it have?”
When you think in these terms, it becomes clear that LLMs can’t have certain types of experiences (eg see in color) but could have others.
A “weak” panpsychism approach would just stop at ruling out experience or qualia based on physical limitations. Yet I prefer the “strong” panpsychist theory that whatever is not forbidden is required, which begins to get really interesting (it would imply, for example, that an LLM actually experiences the interaction you have with it, in some way).
- Computing the Kolmogorov constant?
- This comment will probably get buried because I’m late to the party, but I’d like to point out that while they identify a real problem, the author’s approach—using code or ASTs to validate LLM output—does not solve it.
Yes, the approach can certainly detect (some) LLM errors, but it does not provide a feasible method to generate responses that don’t have the errors. You can see at the end that the proposed solution is to automatically update the prompt with a new rule, which is precisely the kind of “vibe check” that LLMs frequently ignore. If they didn’t, you could just write a prompt that says “don’t make any mistakes” and be done with it.
You can certainly use this approach to do some RL on LLM code output, but it’s not going to guarantee correctness. The core problem is that LLMs do next-token prediction and it’s extremely challenging to enforce complex rules like “generate valid code” a priori.
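As a toy illustration of the gap (my own sketch, not the author’s tooling; the extra rule is hypothetical): an AST check like this can reject bad output after the fact, but nothing about it makes the model emit valid code in the first place; you’re still left retrying or re-prompting.

```python
import ast

def validate_llm_code(source: str) -> list[str]:
    """Return a list of problems found in LLM-generated Python (empty list = passed)."""
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]
    problems = []
    # Hypothetical project rule: no bare 'except:' handlers.
    for node in ast.walk(tree):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            problems.append(f"bare 'except:' at line {node.lineno}")
    return problems

print(validate_llm_code("def f(:\n    pass"))                  # caught: syntax error
print(validate_llm_code("try:\n    x()\nexcept:\n    pass"))   # caught: bare except
```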
As a closing comment, it seems like I’m seeing a lot of half-baked technical stuff related to LLMs these days, because LLMs are good at supporting people’s half-baked ideas and are reluctant to openly point out the obvious flaws.
- A tidbit that I’m very interested in is Apple’s reports that it has not removed any apps at the request of the US government. It seems that in this case, they did; why don’t they report it transparently?
- Let me second this: a baseline analysis should include papers that were published or reviewed at least 3-4 years ago.
When I was in grad school, I kept a fairly large .bib file that almost certainly had a mistake or two in it. I don’t think any of them ever made it to print, but it’s hard to be 100% sure.
Most journals actually check your citations, at least partially, as part of the final editing. The citation record is important to journals, and linking references with DOIs is fairly common.
- Sorry, I just realized the typo but I can no longer edit: I’m not a doctor, just a nerd. I blame my phone and my own crappy copy-editing. I hope this didn’t confuse too many of you.
- The emerging evidence, taken together, shows a ~20% reduction in dementia over 7 years. So it’s actually pretty dramatic. https://med.stanford.edu/news/all-news/2025/03/shingles-vacc...
- I don’t think there are many things known to have as strong an effect as HZ vaccines. The current evidence is that the vaccine eliminates something like 20% of all cases, suggesting that HZ (shingles, a reactivation of the chickenpox virus) is directly responsible for at least 20% of dementia cases, possibly much more.
- The highlights are a good start. (I’m a doctor, just a nerd who likes to read papers.)
My comments in brackets.
- Herpes zoster vaccination reduced dementia diagnosis in our prior natural experiments. [Previous work. I’m familiar with the Wales experiment, where there was a sharp age cutoff for getting the vaccine in the national health system. Comparing those just below and just above the cutoff allows for an analysis similar to a randomized controlled trial (a ‘natural experiment’); a toy sketch of that comparison follows this list. The analysis showed a ~20% decrease in dementia attributable to the vaccine, so the results were already pretty strong.]
- Here, we find a lower occurrence of MCI and dementia deaths among dementia patients [MCI = ‘mild cognitive impairment’. This is a more refined result than prior work, harder to see in the data than a clear dementia diagnosis.]
- Herpes zoster vaccination appears to act along the entire clinical course of dementia. [This is not surprising given the earlier results, but the demonstration is harder, and it may lead to recommendations for earlier HZ vaccination, IIRC currently at 50 or 55 in the US.]
- This study’s approach avoids the common confounding concerns of observational data [Basically they are improving their methods and getting stronger results, classic good science.]
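Here’s the toy sketch of the cutoff comparison mentioned above (all numbers invented for illustration; the real Wales analysis is far more careful about trends and bandwidths): people born just after the eligibility date get the vaccine, people born just before don’t, and comparing narrow bands on either side of the cutoff isolates the vaccine effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: birth week relative to the vaccine eligibility cutoff.
n = 200_000
weeks_from_cutoff = rng.uniform(-52, 52, n)
eligible = weeks_from_cutoff >= 0                      # sharp cutoff: born later -> vaccine offered
base_risk = 0.10 + 0.0002 * weeks_from_cutoff          # risk drifts smoothly with age either way
risk = np.where(eligible, base_risk * 0.8, base_risk)  # assume the vaccine cuts risk ~20%
dementia = rng.random(n) < risk

# Compare narrow bands on either side of the cutoff, mimicking the natural experiment.
band = np.abs(weeks_from_cutoff) < 8
below = dementia[band & ~eligible].mean()
above = dementia[band & eligible].mean()
print(f"just below cutoff: {below:.3f}, just above: {above:.3f}, relative drop: {1 - above / below:.1%}")
```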
- Wait, they didn’t give them real money. They simulated the results.
- I would give the same review, without seeing any of this as a positive. NKS was bloviating, grandiose, repetitive, and shallow. The fact that Wolfram himself didn’t show that CA were Turing complete, when most theoretical computer scientists would say “it’s obvious, and not that interesting,” kinda disproves his whole point about being an underappreciated genius. Shrug.
- Glad you updated on this front-page post. Your Twitter post is buried on p3 for me right now. Good luck on the recovery and hopefully this helps someone.