danielbln
Joined 3,457 karma
- I'm so sick of that particular LLMism: it's not (just) X -- it's Y!
- I don't remember the last time Claude Code hallucinated some library, as it will check the packages, verify with the linter, run a test import and so on.
Are you talking about punching something into some LLM web chat that's disconnected from your actual codebase and has tooling like web search disabled? If so, that's not really the state of the art of AI assisted coding, just so you know.
- All you had to do was scroll down further and read the next couple of posts, where the author is more specific about how they used LLMs.
I swear, the so-called critics need everything spoon-fed.
- Must feel nice to let yourself be coddled by in-group/out-group thinking like that. "I've decided that AI is bad and useless, therefore anyone disagreeing must be an AI bro".
- You say that with such conviction: "the whole point is to think less". Why do you think that? I think no less now that I use AI agents all day long; I just think about different things. I don't think about where I place certain bits of code or what certain structures look like. Instead I think about data models, systems, what the ideal deliverable looks like, and how we can plan its implementation and let a willing agent execute it. I think about how I can best automate flows so that I can parallelize work, within a harness that reduces the possibilities for mistakes. I think a whole lot more about different technologies and frameworks, as the cost of exploring and experimenting with them has come down tremendously.
Will what I do now be automated eventually or before long? Probably, we keep automating things, so one has to swim up the abstraction layers. Doesn't mean one has to think less.
- Do we think less because we use C++ vs assembly? Less because we use assembly over punch cards? Less because we use computers over pen and paper? And so on. You can put a strong local coding model on your local hardware today and no investor will be involved (unless you mean the investors in the company you work for, but the truth is, those were never in any way interested in how you build things, only that you do).
- How would one even "misuse" a historical LLM, ask it how to cook up sarin gas in a trench?
- Accurate-ish, let's not forget their tendency to hallucinate.
- It's part of managing context. It's a bit of prepared context that can be lazy-loaded in as the need arises.
Conversely, you can persist/summarize a larger bit of context into a skill, so a new agent session can easily pull it in.
So yes, it's just turtles, sorry, prompts all the way down.
- After a session with Claude Code I just tell it "turn this into a skill, incorporate what we've learned in this session".
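To make the lazy-loading idea concrete, here's a minimal sketch of what such a distilled skill file could look like. This assumes Anthropic's published SKILL.md convention (a markdown file with YAML frontmatter); the skill name, description, and steps below are purely illustrative, not from any actual session:

```markdown
---
name: add-db-migration
description: How to add and verify a schema migration in this repo. Use when changing data models.
---

# Adding a database migration

1. Edit the model first, then generate the migration from it.
2. Review the generated file by hand; autogeneration misses defaults.
3. Apply the migration locally and run the test suite before committing.
```

The point of the format is that only the frontmatter (name and description) sits in context up front; the body is pulled in when the agent decides the skill is relevant, which is exactly the lazy-loaded prepared context described above.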
- That's a common (but no less frustrating) model; Apple does it that way. Certain things are included in the sub, but you can also purchase things that aren't. At least Apple makes it reasonably easy to filter and find stuff, whereas Amazon's UX is a fever dream.
- That seems unlikely, as we didn't see anything like that with Dall-E, unless the autoregressive nature of gpt-image somehow was more influenced by it.
- Also: a detailed planning phase, cross-LLM reviews via subagents, tests, functional QA, etc. There are more (and complementary) ways to ensure the code does what it should than combing through every line.
- The Titanic wasn't crushed, it was sliced, wasn't it?
- They are no less beautiful nor are they gone after some gradient descent has slithered over them, so worry not.
- I use LLMs for that all the time. Most frontier models have the books trained in, so I just ask for a spoiler-free recap or ask about certain characters. Works well in my experience, and it made jumping back into Wheel of Time a lot easier.
- And as always, plenty of oil runs down that slope to make it slippery. First it's terrorists, then heavy crime, then petty crime, then small things, then it's whoever the powers that be don't deem deserving of freedom. We've been down that road in Germany, but history rhymes, as the saying goes.
- So don't. But if mega corps want to juice our brains with their slop, to the degree that it's difficult to escape it, then we should also be able to sneer at the ads that miss the mark even more than usual.
- You cannot read the full thread unless you have an account and are logged in. That's reason enough to appreciate a mirror link like that.
- Unless they do an ad where they literally crush creativity into a thin slab, an ad they had to later apologize for. Pepperidge Farm remembers.