- Classical logic is the presumed default for mathematics; if someone is working in a different system, they will say so explicitly.
- > but my understanding of copyright is that if a person went into a clean room and wrote a new article from scratch, without having read any NYT, that just so happened to be exactly the same as an existing NYT article, it would still be a copyright violation.
It would not be. Independent creation is a complete defense against copyright infringement.
Patents, however, do work this way.
- Here's a less galactic version. Suppose you're implementing a binary tree where every leaf has to have the same height - a toy model of a self-balancing search tree.
Here's an implementation using GADTs and type-level addition:

```haskell
data Node (level :: Nat) (a :: Type) where
  Leaf     :: a -> Node Zero a
  Interior :: a -> (Node l a, Node l a) -> Node (l + 1) a
```

It's impossible to construct an unbalanced node, since `Interior` only takes two nodes of the same level, and every `Leaf` is of level 0.

- > Yet Turing machines are about as far from abstract mathematics as one can get, because you can actually build these things in our physical universe and observe their behavior over time (except for the whole "infinite tape" part)
The infinite tape part isn't some minor detail, it's the source of all the difficulty. A "finite-tape Turing machine" is just a DFA.
- To an extent. A truly completely formal proof, as in symbol manipulation according to the rules of some formal system, no. It's valid or it isn't.
But no one actually works like this. There are varying degrees of "semiformality" and what is and isn't acceptable is ultimately a convention, and varies between subfields - but even the laxest mathematicians are still about as careful as the most rigorous physicists.
- You're conflating a few things here.
Constructivists are only interested in constructive proofs: if you want to claim "there exists x in X such that P(x)" then you need to exhibit a particular element of X for which P holds. As a philosophical stance this isn't super rare, but I don't know if I would say it's ever been common. As a field of study it's quite valuable.
Finitists go further and refuse to admit any infinite objects at all. This has always been pretty rare, and it's effectively dead now after the failure of Hilbert's program. It turns out you lose a ton of math this way - even statements that superficially appear to deal only with finite objects - including things as elementary as parts of arithmetic. Nonetheless there are still a few serious finitists.
Ultrafinitists refuse to admit any sufficiently large finite objects. So for instance they deny that exponentiation is always well-defined. This is completely unworkable. It's ultrafringe and always has been.
Wildberger is an ultrafinitist.
- > That's just a good practice in the general case: an intermediate type that fully described the data wouldn't have saved you from overwriting it unless you actually looked closely at the type signature.
The issue isn't that it got overridden, it's that it got overridden with a value of the wrong type. An intermediate type signature with `updatedAt` as a key will produce a type error regardless of the type of the corresponding value.
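A minimal sketch of that point, using hypothetical names mirroring the anonymized snippet elsewhere in the thread: once the intermediate signature actually lists `updatedAt: number`, a spread that lands last conflicts with the declared output type, so the overwrite has to be made explicit.

```typescript
const incoming: { name: string; updatedAt: number } = { name: "foo", updatedAt: 0 };

// Fully-described intermediate type: `updatedAt` now appears in the signature.
const intermediate: { name: string; updatedAt: number } = incoming;

// Spreading last would now be rejected, since the spread's `updatedAt: number`
// conflicts with the declared `updatedAt: string`:
//   const bad: { name: string; updatedAt: string } =
//     { updatedAt: new Date().toISOString(), ...intermediate };
//   // error: Type 'number' is not assignable to type 'string'

// The overwrite must instead be made explicit by spreading first:
const outgoing: { name: string; updatedAt: string } = {
  ...intermediate,
  updatedAt: new Date().toISOString(),
};
```

The type error fires on the key `updatedAt` alone; it doesn't matter what type the incoming value happened to carry.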
> I'd be very interested to know what you'd do to change the type system here to catch this.
Like the other commenter said, extensible records. Ideally extensible row types, with records, unions, heterogeneous lists, and so on as interpretations, but that seems very unlikely.
- > Do you have an anecdote (just one!) of a case where TypeScript's lack of type system soundness bit you on a real application?
Sure. The usual Java-style variance nonsense is probably the most common source, but I see you're not bothered by that, so the next worst thing is likely object spreading. Here's an anonymized version of something that cropped up in code review earlier this week:
```typescript
const incomingValue: { name: string, updatedAt: number } = { name: "foo", updatedAt: 0 }

const intermediateValueWithPoorlyChosenSignature: { name: string } = incomingValue

const outgoingValue: { name: string, updatedAt: string } = {
  updatedAt: new Date().toISOString(),
  ...intermediateValueWithPoorlyChosenSignature
}
```

- Informally, a sound type system is one that never lies to you.
Formally, the usual notion of soundness is defined with respect to an evaluation strategy: a term-rewriting rule, and a distinguished set of values. For pure functional programs this is literally just program execution, whereas effects require a more sophisticated notion of equivalence. Either way, we'll refer to it as evaluating the program.
There are two parts:
- Preservation: if a term `a` has type `T` and evaluates to `b`, then `b` has type `T`.
- Progress: A well-typed term can be further evaluated if and only if it is not a value.
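A minimal sketch of a preservation failure in TypeScript (an instance of the "Java-style variance nonsense" mentioned above): arrays are treated covariantly, so a term with static type `string` can evaluate to a number.

```typescript
const strings: string[] = ["a"];

// Accepted by the checker: TypeScript treats arrays covariantly, even though
// a mutable (string | number)[] is not a safe view of a string[].
const widened: (string | number)[] = strings;
widened.push(42); // writes a number into `strings`

// Static type `string`, runtime value 42: preservation is violated.
const leaked: string = strings[1];
```

The type system said `string`; evaluation produced a `number`. That's the lie.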
- > In Physics, entropy is a property of a bit of matter, it is not related to the observer or their knowledge. We can measure the enthalpy change of a material sample and work out its entropy without knowing a thing about its structure.
Enthalpy is also dependent on your choice of state variables, which is in turn dictated by which observables you want to make predictions about: whether two microstates are distinguishable, and thus whether they're part of the same macrostate, depends on the tools you have for distinguishing them.
- > Dark matter is strictly defined by its effects
All physical entities are defined by their effects! Suppose we found axions and they had the right mass to be dark matter. Would that mean we now "really knew" what dark matter was, in your sense? No, it would just push the defining effects further back - because all an axion is is a quantum of the strong CP-violating term promoted to a field.
Just like the electromagnetic field is the one that acts on charged particles in such and such a way, and a particle is charged if the electromagnetic field acts on it in that way. There's no deeper essence, no intuitive "substance" with some sort of intrinsic nature. All physical properties are relational.
- > I dunno if this is the correct way of thinking about it, but I just imagine it as a particle that has mass but does not interact with other particles (except at big-bang like energy levels?).
Not even anything that extreme. What's ruled out is interaction via electromagnetism (or if you want to get really nit-picky, electromagnetic interaction with a strength above some extremely low threshold).
- All major DM candidates also have multiple interactions: that's the WI in WIMP, for instance. In fact I don't know that anyone is seriously proposing that dark matter is just bare mass with no other properties - aside from the practical problems, that would be a pretty radical departure from the last century of particle physics.
- > If an algorithmic process is an experience and a collection of experiences is intelligence
Neither: what I'm saying is that the observable correlates of experience are the observable correlates of intelligence - saying that "humans are X therefore humans are Y, software is X but software is not Y" is special pleading. The most defensible positions here are illusionism about consciousness altogether (humans aren't Y) or a sort of soft panpsychism (X really does imply Y). Personally I favor the latter. Some sort of threshold model where the lights turn on at a certain point seems pretty sketchy to me, but I guess isn't ruled out. But GP, as I understand them, is claiming that biology doesn't even supervene on physics, which is a wild claim.
> Or, we must need a more refined definition of intelligence which more closely reflects what people actually are trying to convey when they use this word.
Well, that's the thing: I don't think people are trying to convey any particular thing. I think they're trying to find some line - any line - which allows them to write off non-animal complex systems as philosophically uninteresting. Same deal as people a hundred years ago trying to find a way to strictly separate humans from nonhuman animals.
- > In the case that you get a zero parameter, you could inflate it by some epsilon and the solution would basically be the same.
Not everything is continuous. Add an epsilon worth of torsion to GR and you don't get almost-GR, you get a qualitatively different theory in which potentially arbitrarily large violations of the equivalence principle are possible.
- > By spelling out what computers are doing it becomes very obvious that there is nothing that can be aware of any experiences in computers as it is all simply a sequence of arithmetic operations.
By spelling out what brains are doing it becomes very obvious that it's all simply a sequence of chemical reactions - and yet here we are, having experiences. Software will never have a human experience - but neither will a chimp, or an octopus, or a Zeta-Reticulan.
Mammalian neurons are not the only possible substrate for intelligence; if they're the only possible substrate for consciousness, then the fact that we're conscious is an inexplicable miracle.
- > Sure we can't see dark matter (it doesn't interact electromagnetically), but we can see its effects
Even this is granting too much: "seeing it" and "seeing its effects" are the same thing. No one has ever "directly seen", in the sense that internet DM skepticism demands, anything other than a photon.