- The site doesn't seem to be loading. Hug of death?
- How unimpressive the first iPhone was??
Yeah, totally... a full touchscreen computer in your pocket with no physical keyboard, pinch-to-zoom magic people thought was CGI, a browser that wasn't a joke, visual voicemail, and an OS so smooth it made every other phone look like it ran on car batteries. Truly underwhelming stuff.
It literally redefined an entire industry, vaporized half the product lines at Nokia/BlackBerry/Palm/Microsoft, and set the blueprint for every smartphone that exists today.
But sure..."unimpressive."
This is the weirdest revisionist history I've ever heard.
If you mean that the iPhone has come a long way and that it was unimpressive relative to the phones we have 18 years later, sure. But unimpressive it was not.
- I think what PG meant by "do things that don't scale" is earnest effort in service of building a real product: talking to users, manually onboarding, hand-holding early customers so you can learn fast and iterate toward something that eventually does scale.
What this startup did isn't that, AFAICT. It wasn't manual work in service of learning...it was just fraud as a business model, no? Like, they were pretending the technology existed before it actually did. There's a bright line between unscalable hustle and misleading customers about what your product actually is.
Doing unscalable things is about being scrappy and close to the problem. Pretending humans are AI is just straight up deceiving people.
- How so? Read the paper. The methodology was entirely observational. They did not intervene in the prosper.com loan market or interact with the borrowers. If anything, the paper identified a form of bias that exists in the real world, namely that people commonly "perceived" as less trustworthy are penalized despite their actual creditworthiness.
- My monthly reminder that I really should resume my Learning Nim series :( https://www.youtube.com/@Nimward
- Yeah, exactly. The context here was about the profitability part of OP's comment. The parent said "plenty of businesses fail to find a way to make a profit," and my point was that OP's statement doesn't contradict that. OP was saying they'll need AGI to be profitable, not that they're guaranteed to become profitable.
Sure, they phrased it as "they will reach AGI," but that's clearly tongue-in-cheek...the underlying idea is "they better reach AGI, because that's the only way they could make money." So my comment ("necessary, not sufficient") was just pointing out that even if AGI is required for profitability, it doesn't mean they'll actually get there or succeed once they do, and that the original comment was perfectly compatible with the idea that not every business reaches profitability.
- Well, of course you're correct that SQL is text, but that's not what the article is arguing about. The point isn't whether SQL is text... it's about the kind of text it is.
SQL is a formal language, not a natural one. It's precise, rigid, and requires a specialized understanding of schemas, joins, and logic. Text-to-SQL systems don't exist because people are too lazy to type; they exist because most people can't fluently express analytical intent in SQL syntax. They can describe what they want in natural language ("show me all active users who registered this year"), but translating that into correct, optimized SQL requires at least familiarity, and sometimes expertise.
So the governance challenges discussed in the article aren't about "oh, SQL is too hard to type"...they're about trust, validation, and control when you introduce an AI intermediary that converts natural language into a query that might touch sensitive data.
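To make that gap concrete, here's a toy sketch (my own illustration, not anything from the article) using Python's built-in sqlite3 and a made-up `users` table. The one-sentence request is easy to say; the query it maps to already assumes knowledge of column names, status values, and date handling:

```python
import sqlite3

# Made-up schema purely for illustration: a `users` table with
# `status` and `created_at` columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, status TEXT, created_at TEXT)")
conn.executemany(
    "INSERT INTO users (status, created_at) VALUES (?, ?)",
    [("active", "2025-03-14"), ("inactive", "2025-06-01"), ("active", "2023-11-20")],
)

# What the user can say:
request = "show me all active users who registered this year"

# What a text-to-SQL layer (or an analyst) has to turn that into.
# Note how much schema and date-handling knowledge is baked into one query.
query = """
    SELECT id, status, created_at
    FROM users
    WHERE status = 'active'
      AND strftime('%Y', created_at) = strftime('%Y', 'now')
"""

print(request)
for row in conn.execute(query):
    print(row)
```

Getting from the sentence to the WHERE clause is exactly the translation step these systems automate, and exactly where the trust and validation questions show up.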
- Supplementing with vitamin D is honestly one of the easiest things you can do... it's cheap, available everywhere, and makes a real difference. Just make sure you're also taking magnesium citrate (or another good form of magnesium) with it, since your body needs magnesium to properly use vitamin D.
- This take feels like classic Cal Newport pattern-matching: something looks vaguely "consumerish," so it must signal decline. It's a huge overreach.
Whether OpenAI becomes a truly massive, world-defining company is an open question, but it's not going to be decided by Sora. Treating a research-facing video generator as if it's OpenAI's attempt at the next TikTok is just missing the forest for the trees. Sora isn't a product bet, it's a technology demo or a testbed for video and image modeling. They threw a basic interface on top so people could actually use it. If they shut that interface down tomorrow, it wouldn't change a thing about the underlying progress in generative modeling.
You can argue that OpenAI lacks focus, or that they waste energy on these experiments. That's a reasonable discussion. But calling it "the beginning of the end" because of one side project is just unserious. Tech companies at the frontier run hundreds of little prototypes like this... most get abandoned, and that's fine.
The real question about OpenAI's future has nothing to do with Sora. It's whether large language and multimodal models eventually become a zero-margin commodity. If that happens, OpenAI's valuation problem isn't about branding or app strategy, it's about economics. Can they build a moat beyond "we have the biggest model"? Because that won't hold once open-source and fine-tuned domain models catch up.
So sure, Sora might be a distraction. But pretending that a minor interface launch is some great unraveling of OpenAI's trajectory is just lazy narrative-hunting.
- That's true. Wanting openness in everyday tech isn't "absolutist" in itself. But the article's tone (and a lot of the FOSS movement's rhetoric) frames it as failure rather than frontier.
Of course we'd all prefer open printers and cars, but those domains aren't mainly limited by software ideology; they're limited by regulation, liability, and economics. The fact that programmers can build entire operating systems, compilers, and global infra as open projects is already astonishing.
So yes, the world is still full of closed systems... but that doesn't mean FOSS lost. It means it's reached the layer where the obstacles are social, legal, and physical, not technical. IMO that's a harder, slower battle, not evidence that the earlier ones were meaningless.
- > The infrastructure it powers is mostly cloud hosted SaaS which is far and away the most closed model of software. Cloud SaaS is far more closed than closed source software on a personal device. Often it’s not even possible to export your own data.
That's fair, but I think it misses the distinction between who owns the infra and what the infra is built on. Yes, SaaS is often closed to end users, but the reason those companies could even exist at scale is because the underlying layers (OS, databases, frameworks, orchestration, etc.) are open.
You're right that control shifted from users to cloud vendors, but that's a business model problem, not a failure of open software. If anything, FOSS won so decisively on the supply side that it enabled an entire generation of companies to build closed services faster and cheaper than ever before.
- I think this post overstates the "loss" of free software. Yes, closed firmware and locked hardware are real gaps...but that doesn't erase the fact that open software has completely reshaped the modern stack. From Linux and K8s to Postgres and Python, it is the infra of the internet. "Winning" doesn't have to mean owning every transistor; it means setting the norms and powering most of what's built.
I tend to see this kind of absolutist, binary tone a lot from people deeply involved in FOSS... and sometimes I think maybe that mindset is necessary to push the movement forward, but it also feels detached from how much open software has already changed reality.
- Thank you very much. I'll check out these resources. ROB101 looks really great.
I love the 3B1B videos, but I've noticed my attention tends to drift when watching videos. I've learned that I absorb information best through text. For me, videos work well as a supplement, but not as the main way to learn.
Thanks again.
- Thanks. I have a copy of Strang and have been going through it intermittently. I'm primarily focused on ML itself, and that's where I've been spending most of my time. I'm hoping to simultaneously improve my mathematical maturity.
I didn't know about Learning from Data. Thank you for the link!
- You're totally right. I left out some important context. I'm learning linear algebra mainly for applied use in ML/AI. I don't want to skip the theory entirely, but I've found that approaching it from the perspective of how it's actually used in models (embeddings, transformations, optimization, etc.) helps me with motivation and retention.
So I'm looking for resources that bridge the gap, not purely computational "cookbook" type resources but also not proof-heavy textbooks. Ideally something that builds intuition for the structures and operations that show up all over ML.
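To give a sense of the level I mean, here's a toy NumPy sketch I put together (made-up shapes and names, nothing canonical): the embedding lookup, the linear layer, and the similarity scores are all just indexing, matrix multiplication, and dot products.

```python
import numpy as np

rng = np.random.default_rng(0)

# An "embedding table" is just a matrix: one row per token, one column per feature.
vocab_size, dim = 10, 4
E = rng.normal(size=(vocab_size, dim))

# Looking up token ids is row selection -- plain matrix indexing.
token_ids = np.array([2, 7, 7])
x = E[token_ids]          # shape (3, dim)

# A "layer" is a linear transformation plus a bias: y = xW + b.
W = rng.normal(size=(dim, dim))
b = np.zeros(dim)
y = x @ W + b             # shape (3, dim)

# Dot products as similarity: which vocabulary rows look most like each output?
scores = y @ E.T          # shape (3, vocab_size)
print(scores.argmax(axis=1))
```

That's the kind of connection I'd love a book or course to build up deliberately, rather than leaving it for the reader to discover on their own.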
- This is great. I really appreciate visual explanations and the way you build up the motivation. I'm using a few resources to learn linear algebra right now, including "The No Bullshit Guide to Linear Algebra", which has been pretty decent so far. Does anyone have other recommendations? I've found a lot of books to be too dense or academic for what I need. My goal is to develop a practical, working understanding I can apply directly.