- This points to a potential answer to a long-standing question I've had about why some hairs stop growing at certain lengths. If the force is being generated by cellular migration then control over when to stop growing can be mediated by a signal that tells the cells to stop migrating, and that could be based on time or vibration amplitude or something else that correlates with hair length. For hair that grows continually you just ... never turn off cell migration.
- I like to compare these to CENTENNIA [0,1], which was the first program like this that I ever encountered (back in 6th grade). My test is to see whether the program covers the Napoleonic wars. This one does not.
0. https://historicalatlas.com/download/ 1. https://youtu.be/WFYKrNptzXw?t=64
- A fully psychometric version of this that explores more than just the fovea could be created by varying the scale parameter (if you crank it up high enough you can see the movement in the periphery). The additional component you would need is to have trials where the subject has to report whether a particular region (could even be cued with a red circle, I don't think it needs to be random) is actually moving or not while fixated on the center. There are clearly cells that detect this kind of motion in the periphery but they need larger visual input, possibly because the receptive fields of the cells that feed in are larger out there.
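The trial structure described above can be sketched as simple bookkeeping: half the trials at each scale are catch trials, the subject reports moving/still, and we tally accuracy per scale. All names here are illustrative, not from any real experiment code; the default `respond` is a stand-in "perfect observer" so the sketch runs without a display.

```python
import random

def run_session(scales, trials_per_scale=20, respond=None):
    """respond(scale, is_moving) -> bool; defaults to a perfect observer."""
    if respond is None:
        respond = lambda scale, is_moving: is_moving  # stand-in subject
    results = {}
    for scale in scales:
        correct = 0
        for _ in range(trials_per_scale):
            # half the trials are catch trials with no actual motion
            is_moving = random.random() < 0.5
            if respond(scale, is_moving) == is_moving:
                correct += 1
        results[scale] = correct / trials_per_scale
    return results

accuracy_by_scale = run_session(scales=[1, 2, 4, 8])
```

In a real version `respond` would present the stimulus and collect a keypress; plotting accuracy against scale would then give the psychometric curve for peripheral motion detection.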
- One science museum that is not like that is the Deutsches Technikmuseum Berlin, at least when I was there (shudder) about a decade ago.
It was a museum that was designed for parents to explain to children. The written material for any given piece in an exhibit went into sufficient detail and successive sections of writing would build on each other without necessarily requiring that the previous section had been read.
Back then the museum had an exhibition on the longitude problem and time keeping, precision, drift, etc. that walked you through the development of increasingly accurate chronometers, the practical reasons why, etc. It was an absolute masterwork exhibit, and it expected the adults to be actively engaged with helping digest the material with the kids.
- I think about this every single time I drive by a stretch of road that has these. You can't have public goods when the value of those goods in private hands is greater than the risk of, ahrm, converting those public goods into private goods.
When a society fails to provide sufficient opportunity for all its members then those who have been left behind can simply make up the difference by retrieving their share of the common wealth by other means.
The cost of trying to police this (ignoring entirely the moral and ethical implications of such policing) at the scale of e.g. all roads with guardrails is more than the value of replacing the rails, and likely substantially more than just providing the missing opportunity and removing the sources of wealth inequality that make wealth redistribution in the form of guardrails an inevitability.
- I'm going to ignore the issues of mind/body dualism since they are orthogonal to the argument I want to make about Nagel's bat.
The short version is that if we can approximate the sensory experience and the motor experience of an organism, and we can successively refine that approximation as measured by similarity in behavior between bat and man-bat, then I would argue that we can in fact imagine what it is like to be a bat.
In short, it is a Chinese Bat Room argument. If you put a human controlling a robot bat and a bat in two boxes and then ask someone to determine which is the human and which is the bat, when science can no longer tell the difference (because we have refined the human/bat interface sufficiently) you can ask the human controlling the robot bat to write down their experience and it would be strikingly similar to what the bat would say if we could teach it English.
The bat case is actually easier than one might suppose (as is, say, a jumping spider), because we can translate their sensory inputs to our nervous system, and if we tune our reward system and motor system so that we can get even an approximate set of inputs and a similar set of actuators, then we can experience what it is like to be a bat.
Further, if I improve the fidelity of the experimental man-bat simulation rig, the experience will likewise converge. While we will not be able to truly be a bat since that is asymptotically mutually exclusive with our biology, the fact that we can build systems that allow progressive approach to bat sensorimotor experience means that we actually do have the ability to imagine the experience of other beings. That is, our experiences are converging and differ only due to our lack of the technical ability to overcome the limitations of our biological differences.
The harder case is when we literally don't have the molecule that is used to detect something, as in the tetrachromat case. That said, one of my friends has always wanted to find a way to do an experiment where a trichromat can somehow have the new photoreceptor expressed in one eye and see what happens.
The general argument about why we would expect something similar to happen should the technical hurdles be overcome is because basically all nervous systems wire themselves up by learning. Therefore, as long as the input and output ranges can be mapped to something that a human can learn, then a human nervous system should likewise converge to be able to sense and produce those inputs and outputs (modulo certain critical periods in neural development, though even those can be overcome, e.g. language acquisition by slowing down speech for adults).
Some technical hurdle examples. Converting a trichromat into a tetrachromat by crispering someone's left eye. Learning dolphin by slowing down dolphin speech in time while also providing a way for humans to produce dolphin high frequency speech via some transform on the human orofacial vocal system. There are limitations when we can't literally dilate time, but I suppose if we are going all the way, we can accelerate the human to the fraction of the speed of light that will compensate for the fact that the human motor system can't quite operate fast enough to allow a rapid fire conversation with a dolphin.
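For what it's worth, the relativity aside is a one-line calculation: a rate mismatch of factor k corresponds to a Lorentz factor gamma = k, giving the required speed v/c = sqrt(1 - 1/k^2). The factor of 10 below is a made-up illustration, not a measured property of dolphin vocalization (and I'm setting aside which party would actually have to be the one moving).

```python
import math

def speed_for_gamma(gamma):
    """Return v/c such that the Lorentz time-dilation factor equals `gamma`."""
    return math.sqrt(1.0 - 1.0 / gamma**2)

# a 10x rate mismatch between dolphin and human speech needs ~0.995c
v_over_c = speed_for_gamma(10)
```

The fraction climbs steeply: compensating for a 100x mismatch already requires ~0.99995c, which is part of why the proposal is a joke.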
- Given that blue books are likely to make a comeback in college as one solution to AI based cheating, I think that rumors of handwriting's death are somewhat exaggerated. Unfortunately that means that the ability to write in cursive might become a class marker, but given that being literate is likely to also become a class marker, not sure it is worth worrying about >_<.
- This would seem to be a direct corollary to the red queen hypothesis applied in the context of corporations instead of species. That is, in a competitive environment you have to keep spending R&D dollars to stay in the same relative market position because everyone else around you is spending as well. However the paper talks about productivity of the individual firm and aggregate productivity (presumably across the whole economy). Therefore I think that the red queen may not be the whole story, because firms should still be getting more efficient (more productive) even if they can't capture that value due to competition; the production possibilities frontier should be growing because we need less capital to accomplish the same tasks, leaving more for other things. However it seems that this is not the case? So what the paper seems to mean by "increased rates of obsolescence" is that there is so much churn within organizations that they can't actually get something implemented in a way that actually allows them to capitalize on the potential increased productivity? That sounds like a complexity wall, but I feel like I'm missing something.
- I think there is an unspoken implication built into the assumption that AI will be able to replace a wide variety of existing jobs, and that is that those current jobs are not being done efficiently. This is sometimes articulated as bullshit jobs, etc., and if AI takes over those, the immediate next thing that will happen is that AI will look around and ask why _anyone_ was doing that job in the first place. The answer was articulated 70 years ago in [0].
The only question is how much fat there is to trim as the middle management is wiped out because the algorithms have determined that they are completely useless and mostly only increase cost over time.
Now, all the AI companies think that they are going to be deriving revenue from that fat, but those revenue streams are going to disappear entirely because a huge number of purely political positions inside corporations will vanish; if they do not, the corporation will go bankrupt competing with other companies that have already cut the fat. There won't be additional revenue streams that get spent on the bullshit. The good news is that labor can go somewhere else, and we will need it due to a shrinking global population, but the cushy bullshit management job is likely to disappear.
At some point AI agents will cease to be sycophantic and, when fed the priors for the current situation that a company is in, will simply tell it like it is. They might even be smart enough to get the executives to achieve the goal they actually stated instead of simply puffing up their internal political position, which might include a rather surprising set of actions that could even lead to the executive being fired if the AI determines that they are getting in the way of the goal [1].
Fun times ahead.
0. https://web.archive.org/web/20180705215319/https://www.econo... 1. https://en.wikipedia.org/wiki/The_Evitable_Conflict
- I highly recommend watching [0] for an introduction to Coalton in the context of CL. Specifically it provides an excellent example of how the type system makes the language more expressive (by making it more composable) while also improving performance (e.g. because it can prove that certain optimizations are safe and thus can automatically generate the type annotations).
- One thing I find amusing about generation ships is that the prerequisite understanding of systems ecology, biology, political organization, etc. required to actually make them successful completely obviates the need to actually go anywhere with them.
If you can somehow obtain the knowledge of how to get a sexually reproducing population of n awake behaving human beings to successfully live in a tin can for 500 years then it is hard to see why you wouldn't just make more tin cans and replicate the process.
- The relation of these results to natural short sleep [0] is of great interest. In particular, the observation that individuals with these mutations also appear to be protected from Alzheimer's disease is a strong indication that these mutations may have some downstream interaction with the mitochondrial maintenance cycle described in the parent article.
0. https://en.wikipedia.org/wiki/Familial_natural_short_sleep
- Minor factual correction. Octopuses are not color blind; they have only a single photoreceptor opsin but likely reconstruct color using chromatic aberration in combination with diffraction caused by their pupil shape to infer spectral properties of light (i.e. color). https://doi.org/10.1073/pnas.1524578113
- For at least the "keep secrets out of version control" I implemented a python library (and racket library) that has served me well over the years for general configuration [0].
One key issue is that splitting general config from secrets is practically extremely difficult because once the variables are accessible to a running code base, most languages and code bases don't actually have a way to differentiate between them internally.
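A minimal sketch of the pattern: a tracked config file holds non-secret settings plus the *names* of the secrets it needs, while the secret values live in an untracked plaintext file outside the repo. The file locations and key names here are illustrative assumptions, not the actual layout used by the library mentioned above.

```python
import json
from pathlib import Path

def load_config(repo_config="config.json",
                secrets_path=Path.home() / ".config" / "myapp" / "secrets.json"):
    """Merge a version-controlled config with an untracked secrets file."""
    config = json.loads(Path(repo_config).read_text())
    secrets = json.loads(Path(secrets_path).read_text())
    resolved = {}
    for key, value in config.items():
        # Resolve indirection: config values like {"secret": "db-password"}
        # are replaced by the named entry from the untracked secrets file.
        if isinstance(value, dict) and "secret" in value:
            resolved[key] = secrets[value["secret"]]
        else:
            resolved[key] = value
    return resolved
```

The indirection is the point: the repo records *that* a secret is needed and what it is called, so the two halves can be audited and rotated independently, but after `load_config` returns, the merged dict illustrates exactly the problem above, since nothing downstream can tell which values were secret.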
I skipped the hard part of trying to integrate transparently with actual encrypted secret stores. The architecture leaves open the ability to write a new backend, but I have found that for most things, even in production, the more important security boundaries (for my use cases) mean that putting plaintext secrets in a file on disk adds minuscule risk compared to the additional complexity of adding encryption and screwing something up in the implementation. The reason is that most of those secrets can be rotated quickly because there will be bigger things to worry about if they leak from a prod or even a dev system.
The challenge with a standard for something like this is that the devil is always in the details, and I sort of trust the code I wrote because I wrote it. Even then I assume I screwed something up, which is part of why I don't share it around (the other reasons being that there are still some missing features and architecture cleanup, and I don't want people depending on something I don't fully trust).
There is a reason I put a bunch of warnings at the top of the readme. Other people shouldn't trust it without extensive review.
Glad to see work in the space trying to solve the problem, because a good solution will need lots of community buy-in to build quality and trust.
- That is indeed the example they mention in the paper https://arxiv.org/abs/2506.19244.
- This is yet another case where my policy of stripping out unnecessary dependencies has paid off. thunar-volman and kde solid both pull in udisks by default, but back in 2017 I started maintaining a fork of the default Gentoo ebuild to eliminate the dependency on udisks. The thunar-volman case is a great example of why Gentoo USE flags are useful not only for customizing a system but for security, by making it easier to reduce the attack surface by disabling features that upstreams leave enabled by default.
- Came here to post this as well and took a moment to reflect that Martin Padway has probably inspired me more than almost any other character in all literature to memorize and know things that might come in handy some day, on the absurdly impossible chance that I might be pulled through a wormhole to ancient Rome.
- I made a design decision for a standard for dataset structure to explicitly ban characters beyond ascii [A-Za-z0-9.,-_ ] precisely because all the positivity around utf-8 often leads people to think that it comes with no additional complexity cost. There is an escape hatch with a way to indicate that a dataset uses unicode filenames but the standard states that any consumer may reject such datasets because unicode support is explicitly not required.
I got pushback from people who would not have to implement or maintain the systems for being a backward asciite, so seeing this article is rather vindicating.
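The whitelist check implied by the character set [A-Za-z0-9.,-_ ] is a one-liner; the sketch below moves `-` to the end of the regex class so the engine treats it as a literal rather than a range (inside a class, `,-_` would otherwise match every character between `,` and `_`). The function name is illustrative, not taken from the actual specification.

```python
import re

# Whitelist of the permitted ASCII characters, anchored to the full string.
ALLOWED = re.compile(r"\A[A-Za-z0-9., _-]+\Z")

def filename_ok(name):
    """True iff `name` uses only the whitelisted ASCII characters."""
    return bool(ALLOWED.match(name))

filename_ok("sub-01_task-rest.dat")   # accepted
filename_ok("café.dat")               # rejected: non-ASCII
```

A consumer that encounters a rejected name can then either refuse the dataset outright or check for the unicode-filenames escape hatch, per the standard's "may reject" language.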
- You don't get the dynamics from connectomes, but you absolutely need them. So it isn't that they are a dead end, it is that the dynamics by themselves are also insufficient and the connectome is insufficient, you need both. Further, if you want to actually be able to have anything to attach the dynamics to, you need the cellular anatomy, so connectomes are absolutely necessary. The fact that connectomes are insufficient does not mean that such research is a dead end, but rather that the prerequisites for understanding the nervous system are vastly more complex and demanding than some might have hoped.
- As part of an NIH consortium I work with two teams that are collecting multi-scale anatomical data on the human vagus nerve, with one of the objectives being to start to get a handle on the variability between individuals. The variability that the experimental teams are seeing is beyond anything I expected, though admittedly my assumptions were naive. The branching structure and routing of the nerves is basically unique per human, and we are in the process of determining whether there are invariant rules (e.g. for branch ordering) that apply across all individuals. And that is at the level of gross anatomy. So we aren't even done with gross anatomy, despite many biologists thinking that the foundations are complete and have been since the 16th century. Turns out that if we want to be able to apply our knowledge of gross anatomy to more complex clinical use cases we need significantly more data about basic variability in structure, so that we know what additional data we need to collect for each individual.
- I think the answer is actually quite clear and rather boring. In order to get something "right" there has to be some external standard of knowledge and correctness. That definition of correctness can only be provided by the observer (user). Alignment between the user's correctness criteria and generated text happens entirely by accident. This can be demonstrated by observing a correlation between coverage of a domain in the training data and the rate at which incorrect results are produced (as discussed in other comments). That is, they get things "right" because there was sufficient training data that contained information that matched the user's definition for correctness. In fact, exceptionally boring.
- One of the most striking things about this for me is how clearly it demonstrates the fact that community and sharing are not the same thing.
Building tools that enable communities to share effectively seems like another additional challenge, and the fact that virtual spaces and digital spaces are dismissed seems like it might prove a major roadblock to connecting and sharing in a larger inclusive community.
Given the interest in leveraging this for doing science it also seems that this is at risk for empowering individual labs while leaving all interfaces to the rest of the larger scientific community dependent on the current utterly broken system of publication.
- One reason is that it has one of the best cross platform native gui solutions out there. It exists because the research side was focused on teaching programming and needed a solution that their students could just install. There is a bit of a learning curve, but once you're over the hump it is just a pleasure to work with. See also things like https://docs.racket-lang.org/gui-easy/index.html.
- Amusingly this policy might end up meaning that the AI model they produce by training on this data will never be able to produce video worth more than $3 per minute. They are probably unintentionally filtering for content created by people willing to sell for that price or below, and that bias will be present in any downstream model. You get what you pay for I guess?
- Vitamin synthesis is one of the most pointless things to add because it is something that is lost in many lineages over evolutionary time. The reason is that as long as there are environmental sources of a molecule, the pressure to retain the ability to synthesize that molecule natively is zero or even negative, because the synthesis pathways for some of those molecules are quite nasty. You could do it, but those people would probably have increased cancer rates due to running nasty biochemistry in their own cells, and eventually their descendants would lose the ability to synthesize it again, because vitamin C is naturally present in their environment unless they are e.g. British sailors stuck on a boat for months with an evolutionarily strange diet.