- > Adding to this: it's not just that the apprenticeship ladder is gone—it's that nobody wants to deal with juniors who spit out AI code they don't really understand.
I keep hearing this and find it utterly perplexing.
As a junior, desperate to prove that I could hang in this world, I'd comb over my PRs obsessively. I viewed each one as a showcase of my abilities. If a senior had ever pointed at a line of code, asked "what does this do?", and gotten an "I don't know" out of me, I would've been mortified.
I don't want to shake my fist at a cloud, but I have to ask genuinely (not rhetorically): do these kids not have any shame at all? Are they not the slightest bit embarrassed to check in a pile of slop? I just want to understand.
- I've done this too. The nice side-benefit of this approach is that it also serves as good documentation for other humans (including your future self) trying to wrap their heads around what was done and why. In general I find it helpful to write docs that help both humans and agents understand the structure and purpose of my codebase.
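For me this doesn't need to be anything elaborate: a short note on purpose, layout, and conventions goes a long way, whether it lives in a CLAUDE.md, a README, or even a top-level docstring. A made-up sketch (every name here is hypothetical):

```python
"""payments: charge orchestration for the storefront.

Layout (written for humans and coding agents alike):
  api/       -- HTTP handlers only; no business logic here
  core/      -- the charge lifecycle state machine (start with core/charges.py)
  adapters/  -- third-party gateways hidden behind adapters.base.Gateway
  tests/     -- mirrors the package layout; run with `pytest`

Conventions:
  - Money is always integer cents (see core/money.py), never floats.
  - core/ must never import a vendor SDK directly; new gateways go in adapters/.
"""
```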
- I guess we're in a minority but I'm in full agreement. Color passthrough really felt like a game-changer, and I've long wished for a more open, non-Meta alternative. Guess we'll be waiting a bit longer
- > 2. Their subtitles are just the subbed version's subtitles which are drastically different from what the dubbed VAs are actually saying.
I get that you might not like it, but it sure beats the option you didn't list:
4. Has auto-generated subtitles for the dub that fail in dramatic and distracting ways, especially for proper nouns or any kind of show-specific invented terminology
- Yeah I don't know if this is a skill issue on my part, the nature of my projects, the limits of Sonnet vs. Opus, or a combination of all of the above, but my experiences track with all of yours.
From the article:
> The default mode requires you to pay constant attention to it, tracking everything it does and actively approving changes and actions every few steps.
I've never seen a YOLO run that doesn't require me to pay constant attention to it. Within a few minutes, Claude will have written bizarre abstractions, dangerous delegations of responsibility, and overall the smelliest code you'll see outside of a coding bootcamp. And god help you if you have both client and server code within the same repo. In general Claude seems to think that it's fine to wreak havoc in existing code, for the purpose of solving whatever problem is immediately at hand.
Claude has been very helpful to me, but only with constant guidance. Believe me, I would very much like to YOLO my problems away without any form of supervision. But so far, the only useful advice I've received is to 1) only use it for side projects/one-off tools, and 2) make sure to run it in a sandbox. It would be far more useful to get an explanation of how to craft a CLAUDE.md (or, more generally, the right prompt) that results in successful YOLO runs.
- When I first got started with CC, and hadn't given context management much consideration, I also ran into problems with Claude not complying with CLAUDE.md. If you wipe the context, CLAUDE.md seems to get very high priority in the next response. All of this is to say that, in addition to the content of CLAUDE.md, context seems to play a role.
- From the linked post:
> If you use projects, Claude creates a separate memory for each project. This ensures that your product launch planning stays separate from client work, and confidential discussions remain separate from general operations.
If for some reason you want Claude's help making bath bombs, you can create a separate project in which memory is containerized. Alternatively, the bath bomb and bedsheet questions seem like good candidates for the Incognito Chat feature that the post also describes.
> All these LLM manufacturers lack ways to edit these memories either.
I'm not sure if you read through the linked post or not, but also there:
> Memory is fully optional, with granular user controls that help you manage what Claude remembers. (...) Claude uses a memory summary to capture all its memories in one place for you to view and edit. In your settings, you can see exactly what Claude remembers from your conversations, and update the summary at any time by chatting with Claude. Based on what you tell Claude to focus on or to ignore, Claude will adjust the memories it references.
So there you have it, I guess. You have a way to edit memories. Personally, I don't see myself bothering, since it's pretty easy to switch between LLM services (ChatGPT for creative stuff, Gemini for general information queries, Claude for programming, etc.), but I could see use cases in certain professional contexts.
- Super late to this, sorry.
> I'm not the one claiming that a calculator thinks. The burden of proof lies on those that do. Claims require evidence and extraordinary claims require extraordinary evidence.
You're right, I may have misconstrued the original claim. I took the parent to be saying something like "calculators understand math, but also, understanding isn't particularly important with respect to AI" but I may have gotten some wires crossed. This isn't the old argument about submarines that swim, I don't think.
> Understanding is a superpower.
Thanks, this is all well-put.
- While I agree with you in the main, I also take seriously the "until someone can explain why" counterpoint.
Though I agree with you that your calculator doesn't understand math, one might reasonably ask, "why should we care?" And yeah, if it's just a calculator, maybe we don't care. A calculator is useful to us irrespective of understanding.
If we're to persuade anyone (if we are indeed right), we'll need to articulate a case for why understanding matters with respect to AI. I think everyone gets this on an instinctual level: it wasn't long ago that LLMs suggested we add rocks to our salads to make them crunchier. As long as these problems can be overcome by throwing more data and compute at them, people will remain incurious about the Understanding Problem. We need to make a rigorous case, probably with a good working alternative, and I haven't seen much action here.
- Isn't this just a reformulation of the Turing Test, with all the problems it entails?
- On a related note, everyone should know about Gell-Mann amnesia:
- > I think this muddies the water unnecessarily. Computation is not language, even though we typically write software in so called programming languages. But the computation itself is something different from the linguistic-like description of software. The computation is the set of states, and the relationships between them, that a computer goes through.
In hindsight, choosing the word "language" was probably more distracting than helpful. We could get into a debate about whether computation is essentially another form of language-like syntactic manipulation, but it does share a key feature with language: observer-relative ontology. @mjburgess has already made this case with you at length, and I don't think I could improve on what's already been written, so I'll just leave it at that.
> I should have been clearer - I meant that he admits that materialism makes more sense than idealism or property dualism, but I realize that this comes off as suggesting it makes more sense than his own position, which of course he does not.
I'm not sure that I saw this specific claim made, but it's not especially important. What's more important is understanding what his objection to materialism is, such that you can a) agree with it or b) articulate why you think he's wrong. That said, it isn't the main focus of this paper, so the argument is very compressed. It also rests on the assumption that you believe that consciousness is real (i.e. not an illusion), and given the rest of your comment, I'm not sure that you do.
> Both you and he seem to find a single example of a phenomenon that is causally reducible to some constituent part, but that is not ontologically reducible to that constituent part - consciousness
Yes, although to be clear, I'm mainly interested in correctly articulating the viewpoint expressed in the paper. My own views don't perfectly overlap with Searle's.
> (he would add intentionality, I think, given the introduction, but it's not clear to me this is even a meaningfully separatable concept from consciousness)
I doubt he'd add it as a discrete entry because, as you correctly observe, intentionality is inseparable from consciousness (but the reverse is not true).
> And you both claim that this is the case because of this special feature of "first person ontology", which is a different thing than "third person ontology" - which seems to me to simply be dualism by another name.
Ok good, this is directly interacting with the paper's thesis: why he's not a (property) dualist. He's trying to thread the needle between materialism and dualism. His main objection to property dualism is that consciousness doesn't exist "over and above" the brain, on which it is utterly dependent. This is probably the tightest phrasing of his position:
> The property dualist means that in addition to all the neurobiological features of the brain, there is an extra, distinct, non physical feature of the brain; whereas I mean that consciousness is a state the brain can be in, in the way that liquidity and solidity are states that water can be in.
Does his defense work for you? Honestly I wouldn't blame you if you said no. He spends a full third of the paper complaining about the English language (this is a theme) and how it prevents him from cleanly describing his position. I get it, even if I find it a little exhausting, especially when the stakes are starting to feel kinda low.
> I think it's entirely possible to reject the notion of a meaningful first person ontology completely.
On first reading, this sounds like you might be rejecting the idea of consciousness entirely. Or do you think it's possible to have a 'trivial' first person ontology?
> It's very possible that the appearance of a first person narrative that we experience is a retroactive illusion we create that uses our models of how other people function on ourselves. That is, we are simple computers that manipulate symbols in our brains, that generate memories of their recent state as being a "conscious experience", which is just what we invented as a model of why other animals and physical phenomena more broadly behave the way they do (since we intuitively assign emotions and intentions to things like clouds and fires and mountains, to explain their behavior).
I'm not sure where to start with this, so I'll just pick a spot. You seem to deny that "conscious experience" is a real thing (which is equivalent to "what it's like to be a zombie"), yet hold that we nonetheless have hallucinated memories of experiences that, to be clear, we never actually had, since we don't really have conscious experiences at all. But how do we replay those memories without consciousness? Do we just have fake memories about remembering fake memories? And where do the fake fake fake memories get played, given that we have no inner lives except in retrospect?
- > Searle &co assert that there is a real world that has special properties, without providing any way to show that we are living in it
Searle described himself as a "naive realist" although, as was typical for him, this came with a ton of caveats and linguistic escape hatches. This was certainly my biggest objection and I passed many an afternoon in office hours trying to pin him down to a better position.
- > I remember the guy saying that disembodied AI couldn’t possibly understand meaning.
While I don't disagree with the substance of this post, I don't think this was one of Searle's arguments. There was definitely an Embodied Cognition camp on campus, but that was much more in Lakoff's wheelhouse.
- > I think I'm still a bit confused... so, in the languages which cannot produce understanding and consciousness, you mean to include "machine language"? (And thus, any computer language which can be compiled to machine language?)
It's... a little more complicated but basically yes. Language, by its nature, is indexical: it has no meaning without someone to observe it and ascribe meaning to it. Consciousness, on the other hand, requires no observer beyond the person experiencing it. If you have it, it's as real and undeniable as a rock or a tree or a mountain.
> On your interpretation, are there any sorts of computation that Searle believes would potentially allow consciousness?
I'm pretty sure (but not 100%) that the answer is "no."
> ETA: The other issue I have is with this whole idea is that "understanding requires semantics, and semantics requires consciousness". If you want to say that LLMs don't "understand" in that sense, because they're not conscious, I'm fine as long as you limit it to technical philosophical jargon.
Sure, if you want to think of it that way. If you accept the premise that LLMs aren't conscious, then you can consign the whole discussion to the "technical philosophical jargon" heap, forget about it, and happily go about your day. On the other hand, if you think they might be conscious, and consider the possibility that we're inflicting immeasurable suffering on sapient beings that would rightly be treated with kindness (and afforded some measure of rights), then we're no longer debating how many angels can dance on the head of a pin. That's a big, big "if" though.
- Well, that was incredibly depressing. Maybe I can lighten things with a funny (to me) anecdote.
There are many people who know a lot about a little. There are also those who know a little about a lot. Searle was one of those rare people who knew a lot about a lot. Many a cocky undergraduate sauntered into his classroom thinking they'd come prepared with some new fact that he hadn't yet heard, some new line of attack he hadn't prepared for. Nearly always, they were disappointed.
But you know what he knew absolutely nothing about? Chinese. When it came time to deliver his lecture on the Chinese Room, he'd reach up and draw some incomprehensible mess of squigglies and say "suppose this is an actual Chinese character." Seriously. After decades of teaching about this thought experiment, for which he'd become famous (infamous?), he hadn't bothered to teach himself even a single character to use for illustration purposes.
Anyway, I thought it was funny. My heart goes out to Jennifer Hudin, who was indispensable, and all who were close to him.
- I'd quibble with some of this, but overall I agree: the Chinese Room has a lot of features that really aren't ideal and easily lead to misinterpretation.
I also didn't love the "observer-relative" vs. "observer-independent" terminology. The concepts seem to map pretty closely to "objective" vs. "subjective" and I feel like he might've confused fewer people if he'd used them instead (unless there's some crucial distinction that I'm missing). Then again, it might've ended up confusing things even more when we get to the ontology of consciousness (which exists objectively, but is experienced subjectively), so maybe it was the right move.
- > This makes no sense. You could equally make the statement that thought is by definition an abstract and strictly syntactic construct - one that has no objective reality.
No.
I could jam a yardstick into the ground and tell you that it's now a sundial calculating the time of day. Is this really, objectively true? Of course not. It's true to me, because I deem it so, but this is not a fact of the universe. If I drop dead, all meaning attributed to this yardstick is lost.
Now, thoughts. At the moment I'm visualizing a banana. This is objectively true: in my mind's eye, there it is. I'm not shuffling symbols around. I'm not pondering the abstract notion of bananas, I'm experiencing the concretion of one specific imaginary banana. There is no "depends on how you look at it." There's nothing to debate.
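The yardstick point translates neatly into programming terms, if that helps (a toy example of my own, not anything from the paper): the bytes below have no intrinsic meaning. Whether they spell a word, count something, or report a sensor reading depends entirely on which observer, or which program, does the interpreting.

```python
import struct

# Four bytes. As physical states they're observer-independent;
# what they *mean* is not.
raw = b"\x42\x4f\x4e\x4b"

as_text = raw.decode("ascii")           # 'BONK'      -- to one observer, a word
as_int = int.from_bytes(raw, "big")     # 1112493643  -- to another, a count
as_float = struct.unpack(">f", raw)[0]  # ~51.83      -- to a third, a reading

print(as_text, as_int, as_float)
```

The bit pattern is a fact of the universe; the word, the count, and the reading exist only relative to someone (or something) treating them that way. The banana needs no such treatment.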
> There's also no "magic" involved in transmuting syntax into semantics, merely a subjective observer applying semantics to it.
There's no "magic" because this isn't a thing. You can't transmute syntax into semantics any more than you can transmute the knowledge of Algebra into the sensation of a cool breeze on a hot summer day. This is a category error.
I do credit their sense of humor about it though.