johnwheeler
5,222 karma
Solopreneur and Creator of www.demo.fun - A browser extension for recording interactive product demos.

john@demo.fun


  1. Exactly. You can keep pushing it up the chain to the investor. Then Sam Altman, I guess.
  2. I've been doing this for 25 years though, so maybe that's why. But the bigger point is that, again, you're not giving me anything more than "it makes mistakes." Sure it does, but it makes fewer of them now, and it will make fewer in the future. Also, the Anthropic folks are in the same boat: they don't write _much_ code anymore. They just use Claude Code. So I'm not the only one.
  3. This is how I feel as well, pretty much.

    It's interesting you mention the loss of knowledge. I've heard that China has adopted AI in its classrooms to teach students at a much faster pace than Western countries. Right now I'm using it to teach me how to write a reverb plug-in, because I don't know anything about DSP, and it's doing a pretty good job at that.

    So maybe there has to be some form of understanding. I need to understand how reverb works and how DSP works in order to make decisions about it, not necessarily the implementation. And some things are hard enough to just understand, and maybe that's where the differentiation comes in.

  4. It's that whole "this time it's different" argument, I guess. My worry is that this time it really does feel different.
  5. Nor is it an impossibility. If the AI stays like it is right now, I think we're fine. In fact I would probably opt for that. I don't know what I would do.
  6. I'm not trying to be snarky at all, but maybe you're less experienced at prompting than me, or you just have to work on some really gnarly code. Is that a possibility? Because yours is an argument I can't verify for myself.

    The codebases I work on, I can delegate more and more of to AI as time goes on. There's no doubt about that. They're not necessarily big, unwieldy codebases with lots of technical debt, but maybe those get replaced quickly over time.

    I just don't see the argument that AI will always make mistakes holding up over time. I would love to be proven wrong, though, with a counterargument that jibes.

  7. When I started, there used to be database analysts and server administrators. There still are, but they're in far shorter supply, because developers have mostly taken on those roles.

    And I think you're right. Being cross-functional is super important. That's why I think the next consolidation is going to roll up to product development. Basically, the product developers who can use AI and manage the full stack are going to be successful, but I don't know how long that will last.

    What's even more unsettling to me is that it's probably going to end up being completely different in a way nobody can predict. Your predictions and my predictions might be completely wrong, which is par for the course.

  8. This is a really weak argument.
  9. Right, the only problem I have with this argument is that past performance is no indicator of future results.
  10. Right, I think in the near term the worry isn't about replacing people wholesale but about replacing most or more people and causing serious economic disruption. In the limit, you would have a CEO who commands the AI to do everything, but that seems less plausible.
  11. it's not about people disagreeing with my assessment. It's that people keep saying, "I'm not afraid of AI because it makes mistakes." That's the main argument I've heard. I don't know if those people are ignorant, arrogant, or in denial. Or maybe they're right. I don't know. But I don't think they're right. Human nature leads me to believe they're in denial. Or they're ignorant. I don't think there's necessarily any shame in being in denial or ignorant. They don't know or see what I see.

    I don't have to write code anymore, and the code that's coming out needs less and less of my intervention. Maybe I'm just much better at prompting than other people. But I doubt that.

    The two things I hear are:

    1. You'll always need a human in the loop

    2. AI isn't any good at writing code

    The first one sounds more plausible, but it means fewer programmers over time.

  12. I guess the main thing people aren't taking into account, from what I see, is that the models are substantially improving. Claude Opus 4.5 is markedly better than Claude Sonnet 3.7. If the jump to version 5 represents a similar leap, I see it as pretty much game over. You'll just need one person to manage all your systems, or the subsystems if the entire system is extremely large. And then I can't think past that. I don't know how long it will be before AI replaces that central orchestrator and takes the human out of the loop, or if it ever does, but that's what they seem to want it to do.

    Anyway, I appreciate the response. I don't know how old you are, but I'm kind of old. And I've noticed that I've become much more cynical and pessimistic, not necessarily for any good reasons. So maybe it's just that.

  13. Thank you for your response. This is exactly the type of commentary I'm talking about. The key phrase is "at the moment." It's not that developers will be replaced, but that there will be far less need for them, is what I think.

    I think the flaws are going to be solved for, and if that happens, what do you think? I do believe there needs to be a human in the loop, but I don't think there need to be humans, plural. Eventually.

    I believe this is denial. The claim that the best AI can't be reliable enough to do a modest refactoring is not correct. Yes, it can. What it currently cannot do is write a full app from start to finish, but they're working on longer task execution. And this is before any of the big data centers have even been built. What happens then? You get the naysayers who say, "Well, the scaling laws don't apply," but there are a lot of people who think they do.

  14. Employed. My contention is that AI is getting so good at doing tech-related things that you'll need far fewer employees. I think Claude Code 4.5 is already there. Honestly, it just needs to permeate the market.
  15. To each their own. I can definitely understand that sentiment. Although I'm 46 years old and have turned more conservative over the last few years, I still really like Obama. I also like Trump, though. It's kind of weird. I don't want to start a flame war, though; I respect your opinions.
  16. Yes, I agree with this sentiment. A little better or a little worse, hopefully not too bad. But I think the same feeling of dread will persist, especially going into 2027, particularly around artificial intelligence for tech workers. My concern is that the switch won't be gradual. One day someone will come out with a model that can do everything valuable this software engineer can do. Claude Code 4.5 almost hits that mark for me. In a year, I can't imagine…

    It might not be AGI, and it might not be able to do everything everyone could do, but it'll be enough that you can fire most of the people on your team and just use a few to guide the AI. And I think that's gonna tighten things up in the job market even more. Hopefully it leads to more job creation and empowers people to compete with one another based on taste, and individuals to compete with organizations. That's the best-case scenario I see.

  17. Tech folk. Anyone really.
  18. It's like Penn and Teller say: "Everyone thinks the world is getting worse, but it's always getting better." I hope you're right. Maybe it's just my age creeping up on me and turning me into a cynic. Anyway, it's good to know there are still people who think positively.
  19. I can't imagine a public execution; that seems insane. I know the world is crazy right now, but I don't know about this.