
mjr00
6,191 karma

  1. > we still see claims that LLMs are "just next token predictors" and "just regurgitate code they read online". These are just uninformed and wrong views. It's fair to say that these people were (are!) wrong.

    I don't think it's fair to say that at all. How are LLMs not statistical models that predict tokens? It's a big oversimplification but it doesn't seem wrong, the same way that "computers are electricity running through circuits" isn't a wrong statement. And in both cases, those statements are orthogonal to how useful they are.
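
    For what it's worth, the "predicts tokens" part really is the core loop. A toy sketch (the vocabulary and logits below are made up; a real model computes the logits from billions of parameters, but the sampling step at the end is the same in spirit):

    ```python
    import math
    import random

    def softmax(logits):
        """Convert raw scores into a probability distribution."""
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Toy vocabulary and made-up logits for some context like "computers are".
    vocab = ["electricity", "fast", "useful", "banana"]
    logits = [3.1, 2.0, 1.4, -4.0]

    probs = softmax(logits)
    next_token = random.choices(vocab, weights=probs, k=1)[0]
    print(next_token)
    ```

    Whether that mechanism is "just" prediction or something more is the interesting debate; the mechanism itself isn't in dispute.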

  2. > No, that doesn’t make you a skeptic in this context.

    That's good to hear, but I have been called an AI skeptic a lot on hn, so not everyone agrees with you!

    I agree though, there's a certain class of "AI denialism" which pretends that LLMs don't do anything useful, which in almost-2026 is pretty hard to argue.

  3. "Skeptics" is also a loaded term; what does it actually mean? I find LLMs incredibly useful for various programming tasks (generating code, searching documentation, and yes with enough setup agents can accomplish some tasks), but I also don't believe they have actual intelligence, nor do I think they will eviscerate programming jobs, the same way that Python and JavaScript didn't eviscerate programming jobs despite lowering the barrier to entry compared to Java or C. Does that make me a skeptic?

    It's easy to declare "victory" when you're only talking about the maximalist position on one side ("LLMs are totally useless!") vs the minimalist position on the other side ("LLMs can generate useful code"). The AI maximalist position of "AI is going to become superintelligent and make all human work and intelligence obsolete" has certainly not been proven.

  4. I get what you're saying but let's be real: 99.99999% of modern software development is done with constant internet connectivity and is effectively impossible without it. Whether that's pulling external packages or just looking up the name of an API in the standard library. Yeah, you could grep docs, or have a shelf full of "The C++ Programming Language Reference" books like we did in the 90s, but c'mon.

    I have some friends in the defense industry who have to develop on machines without public internet access. You know what they all do? Have a second machine set up next to them which does have internet access.

  5. I mean at a minimum I understand how they work, even if you don't. So the claim that "nobody and I mean nobody understands how LLMs work" is verifiably false.
  6. That's a CEO of an AI company saying his product is really superintelligent and dangerous and nobody knows how it works and if you don't invest you're going to be left behind. That's a marketing piece, if you weren't aware.

    Just because the restaurant says "World's Best Burgers" on its logo doesn't make it true.

  7. > It also has already taken junior jobs.

    Correction: it has been a convenient excuse for large tech companies to cut junior jobs after ridiculous hiring sprees during COVID/ZIRP.

  8. The only thing we don't fully understand is how the ELIZA effect[0] has been known for 60 years yet people keep falling for it.

    [0] https://en.wikipedia.org/wiki/ELIZA_effect

  9. > The ground truth reality is that nobody and I mean nobody understands how LLMs work.

    This is a really insane and untrue quote. I would, ironically, ask an LLM to explain how LLMs work. It's really not as complicated as it seems.

  10. > Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers. Are they right? Are you right? We don't really know, do we...

    The money is never wrong! That's why the $100 billion invested in blockchain companies from 2020 to 2023 worked out so well. Or why Mark Zuckerberg's $50 billion investment in the Metaverse resulted in a world-changing paradigm shift.

  11. But AI is still a stochastic parrot with no actual intellectual capability... who actually believes otherwise? I figured most people had played with local models enough by now to understand that it's just math underneath. It's extremely useful, but laughably far from intelligence, as anyone who has attempted to use Claude et al for anything nontrivial knows.
  12. It's the gaming version of Galaxy Quest: a parody that is not only great when it stands by itself, but is satirical in a way that shows they are genuinely big fans of the source material.

    (Though perhaps unsurprisingly, Blow has mentioned The Looker only once, saying he hates how it devalues his art, and now refuses to talk about it at all.)

  13. > So I think that a lot of juniors WILL get replaced by AI not because they are junior necessarily but because a lot of them won't be able to add great value compared to a default AI and companies care about getting the best value from their workers. A junior who understands this and does more than the bare minimum will stand out while the rest will get replaced.

    Again this is what people said about outsourced developers. 2008 logic was, why would anyone hire a junior for $50k/year when you could hire a senior with 20 years experience for $10k/year from India?

    Reality: for 5+ years you could change careers by taking a 3-6 month JavaScript bootcamp and coming out the other end with a $150k job lined up. That's just how in demand software development was.

  14. I went to university 2005-2008 and I was advised by many people at the time to not go into computer science. The reasoning was that outsourced software developers in low-cost regions like India and SEA would destroy salaries, and software developers should not expect to make more than $50k/year due to the competition.

    Even more recently we had this with radiologists, a profession that was supposed to be crushed by deep learning and neural networks. A quick Google search says an average radiologist in the US currently makes between $340,000 and $500,000 per year.

    This might be the professional/career version of "buy when there's blood in the streets."

  15. > So the simpler problem is that your product now becomes merely a tool call for AI agents. That's what users want.

    This is a big assumption, and not one I've seen in product testing. Open-ended human language is not a good interface for highly detailed technical work, at least not with the current state of LLMs.

    > It's the same reason why API access to SaaS is usually restricted or not available for the users except biggest customers.

    I don't... think this is true? Off the top of my head, aside from cloud providers like AWS/GCP/Azure which obviously provide APIs: Salesforce, HubSpot, and Jira all provide APIs either alongside basic plans or as a small upsell. Certainly not just for the biggest customers. You're probably thinking of social media, where Twitter/Reddit/FB/etc don't really give API access, but those aren't really B2B SaaS products.

  16. > This shouldnt be the goal. The goal should be to build an AI that can tell you what is done and what needs to be done i.e. replace jira with natural interactions. An AI that can "see" and "understand" your project. An AI that can see it, understand it, build it and modify it.

    The difference is that an AI-coded internal Jira clone is something that could realistically happen today. Vague notions of AI "understanding" anything are not currently realistic and won't be for an indeterminate amount of time, which could mean next year, 30 years from now, or never. I don't consider that worth discussing.

  17. > AI-generated code still requires software engineers to build, test, debug, deploy, ensure security, monitor, be on-call, handle incidents, and so on. That's very expensive. It is much cheaper to pay a small monthly fee to a SaaS company.

    Yeah, it's a fundamental misunderstanding of economies of scale. If you build an in-house app that does X, you incur 100% of the maintenance costs. If you're subscribed to a SaaS product, you're paying roughly 1/N of the maintenance costs, where N is the number of customers.
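
    With made-up numbers (the $500k maintenance cost, N = 2,000 customers, and the 3x vendor margin are all hypothetical), the arithmetic looks like:

    ```python
    # Hypothetical illustration of the 1/N argument: owning the app
    # yourself vs. paying an amortized share through a SaaS vendor.
    maintenance_cost = 500_000   # annual cost of maintaining the app in-house
    n_customers = 2_000          # the vendor spreads the same cost over N customers
    vendor_margin = 3.0          # vendor marks up its per-customer cost

    in_house = maintenance_cost
    saas_fee = (maintenance_cost / n_customers) * vendor_margin

    print(f"in-house: ${in_house:,.0f}/yr, SaaS: ${saas_fee:,.0f}/yr")
    ```

    Even with a hefty margin, the vendor's per-customer price comes out orders of magnitude below the cost of owning the thing yourself.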

    I only see AI-generated code replacing things that never made sense as a SaaS anyway. It's telling the author's only concrete example of a replaced SaaS product is Retool, which is much less about SaaS and much more about a product that's been fundamentally deprecated.

    Wake me up when we see swaths of companies AI-coding internal Jira ("just an issue tracker") and Github Enterprise ("just a browser-based wrapper over git") clones.

  18. I agree an HTTP call for every rounding operation would be awful. I would question service boundaries in this case though. This is very domain-specific, but there's likely only a small subset of your system that cares about payments, calculating taxes, rounding, etc. which would ever call a rounding operation; in that case that entire subdomain should be packaged up as a single service IMO. Again, this gets very domain-specific quickly; I'm making the assumption this is a standard-ish SaaS product and not, say, a complex financial system.
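
    Rounding policy is a good example of why that subdomain deserves one owner: it's a genuine business decision, not a utility function. A small sketch using Python's `decimal` module with banker's rounding (half-even), one common policy in payment systems:

    ```python
    from decimal import Decimal, ROUND_HALF_EVEN

    def round_money(amount: str) -> Decimal:
        """Round a monetary amount to cents using banker's rounding (half-even)."""
        return Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

    # 2.665 rounds DOWN to 2.66 under half-even (naive half-up gives 2.67),
    # and using binary floats would have corrupted the input before you
    # even got to the rounding step:
    print(round_money("2.665"))   # 2.66
    print(0.1 + 0.2)              # 0.30000000000000004
    ```

    If half-even vs. half-up is a decision your domain actually cares about, it should live in exactly one place, not be re-implemented in every service that touches a price.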
  19. > It’s easy to say things like this but also incredibly difficult to know if you’ll introduce subtle bugs or incompatibilities between services.

    You are right: it is difficult. It is harder than building a monolith. No argument there. I just don't think proper microservices are as difficult as people think. It's more of a mindset shift.

    Plenty of projects and companies continue to release backwards-compatible APIs: operating systems, Stripe/PayPal, cloud providers. Bugs come up, but in general people don't worry about ec2:DescribeInstances randomly breaking. These projects are still evolving internally while maintaining a stable external API. It's a skill, but something that can be learned.

    > So let’s say you have a shared money library that you have fixed a bug in… what would you do in the real world - redeploy all your services that use said library or something else?

    In the real world I would not have a shared "money library" to begin with. If there were money-related operations that needed to be used by multiple services, I would have a "money service" which exposed an API and could be deployed independently. A bug fix would then be a deploy to this service, and no other services would have to update or be aware of the fix.

    This isn't a theoretical, either, as a "payments service" that encapsulates access to payment processors is something I've commonly seen.
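
    To make the library-vs-service distinction concrete, here's a hypothetical sketch (the class and method names are mine, and a plain function call stands in for an HTTP endpoint): consumers depend on the money service's contract, not on a shared library compiled into every deployable.

    ```python
    from decimal import Decimal, ROUND_HALF_EVEN

    class MoneyService:
        """Owns all rounding/currency policy behind one deployable boundary."""

        def round_to_cents(self, amount: str) -> str:
            # Fixing a rounding bug here is one deploy of this service;
            # no consumer needs to rebuild or redeploy.
            q = Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
            return str(q)

    class CheckoutService:
        """A consumer that only knows the money API's contract."""

        def __init__(self, money: MoneyService):
            self.money = money

        def total(self, line_items: list[str]) -> str:
            subtotal = sum(Decimal(x) for x in line_items)
            return self.money.round_to_cents(str(subtotal))

    checkout = CheckoutService(MoneyService())
    print(checkout.total(["19.99", "0.005"]))
    ```

    With a shared library, the same bug fix means rebuilding and redeploying every consumer; behind a service boundary, it's one deploy.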

  20. The opposite situation of needing to upgrade your entire company's codebase all at once is much more painful. With services you can upgrade runtimes on an as-needed basis. In monoliths, runtime upgrades are massive projects that require a ton of coordination between teams and months or years of work.
