- Anecdotally, homeschooled children often speak and behave more like adults.
Whether this is a positive or a negative depends on the situation. Adults might view precociousness favorably (though not in all situations), but it's not something other kids usually admire.
- You blame the calculator for people's innumeracy?
I'd argue it's a failure of education or a general lack of intelligence. The existence of a tool to speed the process up doesn't preclude people from understanding the process.
I don't think this relates as closely to AI as you seem to think. I'm simply better at building things, and doing things, with AI than without. Not just faster, better. If that's not true for you, you're either using it wrong or maybe you already knew how to do everything - if so, good for you!
- If we define intelligence as problem solving ability, then AI makes _me_ more intelligent, and I'm willing to pay for that.
- I signed up. From the GR area too. Good luck with this.
- I can visualize things in a lucid dream, and it's identical to seeing for me. But I can only control it for a short time before I wake up.
When awake, I have a "mind's eye," but it's more like what you're describing. As I fall asleep, I can actually begin to see things. I wonder if some people can do that when awake.
- I doubt this. I've done AI annotation work on the big models. Part of my job was comparing two model outputs, rating which was better, and using detailed criteria to explain why. The HF part of RLHF.
That's a lot of expensive work for them to do, and then ignore, if they're just going to poison the models later!
- Good comment.
This is the part people don't like to talk about. We just brand people as "mentally ill" and suddenly we no longer need to consider whether they're acting rationally or not.
Life can be immensely difficult. I'm very skeptical that giving people AI would meaningfully change existing dynamics.
- 100% agree
I've used Linux as my daily driver for well over a decade now, but there were quite a few times where I almost gave up.
I knew I could always fix any problem if I was willing to devote the time, but that isn't a trivial investment!
Now these AI tools can diagnose, explain, and fix issues in minutes. My system is more customized than ever before, and I'm not afraid to try out new tools.
True for more than just Linux too. It's a godsend for homelab stuff.
- I do exactly the same thing, and it works beautifully. We can't really be doing it wrong if it's working!
- Do backups in the same room count as backups?
- It absolutely won't go away.
Even if OpenAI folds, the open source stuff is good enough that someone will build a compelling tutor platform in its place. It probably already exists.
- There is definitely stickiness if you're a frequent user. It has a history of hundreds of conversations. It knows a lot about me.
Switching would be like coding with a brand new dev environment. Can I do it? Sure, but I don't want to.
- One point to add about integration between LLMs and Obsidian: plugins.
Obsidian has a plugin system that can be easily customized. You can run your own JS scripts from a local folder. Claude Code is excellent at creating and modifying them on the fly.
For example, I built a custom program that syncs Obsidian files with a publish flag to my GitHub repo, which triggers a Netlify build. My website updates when I update my vault and run a sync.
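A rough sketch of what that kind of sync script can look like, assuming a `publish: true` frontmatter flag and placeholder paths for the vault and the site repo (the flag name, paths, and commit message are my own examples, not part of any Obsidian API):

```js
// sync-publish.js - copy notes flagged `publish: true` into a local site repo,
// then push so Netlify rebuilds. Paths and the flag name are example values.
const fs = require("fs");
const path = require("path");
const { execSync } = require("child_process");

const VAULT = "/home/sam/vault";        // placeholder vault location
const REPO = "/home/sam/site/content";  // placeholder checkout of the site repo

function* markdownFiles(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) yield* markdownFiles(full);
    else if (entry.name.endsWith(".md")) yield full;
  }
}

let copied = 0;
for (const file of markdownFiles(VAULT)) {
  const text = fs.readFileSync(file, "utf8");
  // Crude frontmatter check: look for `publish: true` in the leading YAML block.
  const frontmatter = text.startsWith("---") ? text.split("---")[1] || "" : "";
  if (/^\s*publish:\s*true\s*$/m.test(frontmatter)) {
    fs.copyFileSync(file, path.join(REPO, path.basename(file)));
    copied++;
  }
}

if (copied > 0) {
  // Committing and pushing is what triggers the Netlify build; the script
  // never talks to Netlify directly.
  execSync(`git add . && git commit -m "sync ${copied} notes" && git push`, {
    cwd: path.dirname(REPO),
    stdio: "inherit",
  });
}
console.log(`Synced ${copied} notes.`);
```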
- LLMs cannot themselves calculate, but they are given tools which can.
They're getting quite good at that now.
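A minimal sketch of what that looks like in practice, using a made-up tool schema and dispatch function rather than any particular vendor's SDK: the model emits a `calculate` call, and the host code does the actual arithmetic.

```js
// calculator-tool.js - sketch of giving a model a calculator instead of
// letting it do arithmetic in-token. The schema and dispatch are illustrative.
const tools = [
  {
    name: "calculate",
    description: "Evaluate a basic arithmetic expression exactly.",
    parameters: { expression: "string, e.g. '1234 * 5678'" },
  },
];

// The only code that actually does math; the model just decides when to call it.
function calculate(expression) {
  if (!/^[\d\s+\-*/().]+$/.test(expression)) throw new Error("unsupported expression");
  return Function(`"use strict"; return (${expression});`)();
}

// When the model replies with a tool call instead of text, run the tool and
// hand the exact result back for the final answer.
function handleToolCall(call) {
  if (call.name === "calculate") return String(calculate(call.arguments.expression));
  throw new Error(`unknown tool: ${call.name}`);
}

console.log(handleToolCall({ name: "calculate", arguments: { expression: "1234 * 5678" } }));
// -> "7006652"
```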
- Unfortunately, being serious about privacy is socially damaging. I've experienced it.
I eventually accepted that being outside my home means giving up my privacy. I still take it seriously in my home and online, but not in public.
I'd love to see the culture shift on this, but I won't hold my breath.
- It could be built to use local models completely.
Open source transcription models are already good enough to do this, and with good context engineering, the base models might be good enough, too.
It wouldn't be trivial to implement, but I think it's possible already.
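As a rough sketch of a fully local pipeline, assuming whisper.cpp for transcription and Ollama for the base model (the binary path, model names, and prompt are placeholders):

```js
// local-pipeline.js - sketch of a fully local flow: whisper.cpp transcribes,
// a local model via Ollama does the rest. Paths and model names are placeholders.
const { execFileSync } = require("child_process");

// 1. Transcribe locally with whisper.cpp (-m model, -f audio file).
const transcript = execFileSync(
  "./whisper.cpp/main",
  ["-m", "models/ggml-base.en.bin", "-f", "recording.wav", "--no-timestamps"],
  { encoding: "utf8" }
);

// 2. Feed the transcript to a local model through Ollama, with whatever
//    context engineering the app needs baked into the prompt.
const prompt = `Summarize the key points from this transcript:\n\n${transcript}`;
const summary = execFileSync("ollama", ["run", "llama3.2", prompt], { encoding: "utf8" });

console.log(summary);
```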
- The average user will never need to answer ICPC questions though.
- The difference between using Cursor when it launched and using Cursor today is dramatic.
It was basically a novelty before. "Wow, AI can sort of write code!"
Now I find it very capable.
It's not hard to spend a few hours testing out models / platforms and learning how to use them. I would argue this has been true for a long time, but it's so obviously true now that I think most of those people are not acting in good faith.