
weatherlite · 1,405 karma

  1. This is a reach. Can you share a few examples of Western countries where that is the case?
  2. Doctors get sued all the time. It doesn't mean doctors are no good. I also don't think ChatGPT will pretend they are replacing doctors / committing to diagnosis with this tool. They will cover their ass legally.
  3. What's the difference between Googling diseases/symptoms and asking ChatGPT?
  4. What? In most countries, including the U.S., it's a very highly paid profession (I'm not talking about the internship phase).
  5. I don't think it's Utopia either (I was being a bit sarcastic), but it's the best-case scenario; the worst case is governments do nothing and let "the market" run its course; this could be borderline Great Depression levels of deprivation, I think.

    As for those professions: I think they are objectively hard for certain kinds of people, but much of the problem is the working conditions; fewer shifts, less stress, more manpower, and you'll see more satisfaction. There's really no reason why teachers in the U.S. should be this burned out! In Scandinavia being a teacher is an honorable, high-status profession. Much of this has to do with framing and societal prestige rather than the actual work itself. If you pay elder carers more, they'll be happier. We pretty much treat our elders like a burden in most modern societies; in more traditional societies, I'm assuming that if you said your job is caring for elders it would not be a low-status gig.

  6. Totally agree; these are all in need of bodies, plus they are always understaffed (why the hell does a nurse need to oversee 15 patients and people have to rot in the ICU for hours? We accept this because it's cost effective, not because it's a decent or even safe practice). Governments could and should make conditions in those professions more tolerable, and use money from A.I. to retrain people into them. If a teacher oversaw 10 kids instead of 35, maybe we'd have less burnout and maybe children would get a better education. If we had more police, there would be less crime and less burnout. Etc., etc. The thing is what happens until (and if) we get to this utopia.
  7. > If we get effective humanoid robots

    That's still an if, and also a when; it could be two decades from now or more until this reliably replaces a nurse.

    > Retraining to what exactly?

    I wish I had a good solution for all of us, and you raise good points. Even if you retrain to become, say, a therapist or a personal trainer, the economy could become too broken and fragmented for you to be able to make a living. Governments that can will have to step in.

  8. You're right. But you know what they'll do: they'll offshore those "jobs", e.g. token usage, to countries that are A.I. friendly or that can be bribed easily, and do whatever they have to do to fight it out in courts for a decade or as long as it takes. Or am I being pessimistic here?
  9. UBI (from taxing big tech) and retraining. In the U.S. they'll have enough money to do this, and it will still suck, and many people won't recover from the extreme loss of status and income (after we've been told our income and status are the most important things in life, it's gonna be very hard for people to adapt to losing them). For countries like India, the Philippines, and Ukraine, which are basically knowledge-support hubs without much original knowledge of their own, yeah, this is gonna be something for sure. Quite depressing.
  10. To me it's more of a mixed bag. On the one hand, it's disheartening to see how the knowledge base and skills I've worked more than a decade to develop became of little value (not worthless, but not as valuable as before). Also, the speed of delivery that is going to be expected of devs will mean we won't be able to hold all the pieces in our heads and will have to rely on A.I. (when things break it will suck; hopefully A.I. will be able to get us out of the jam). This is also not enjoyable to me.

    On the other hand: way less time spent being stuck on yarn/pip dependency issues, Docker, obscure bugs, annoying CSS bugs, etc. You can really focus on the task at hand and not spend hours or days trying to figure out something silly.

  11. > Time will tell what happens, but if programming becomes "prompt engineering", I'm planning on quitting my job and pivoting to something else. It's nice to get stuff working fast, but AI just sucks the joy out of building for me.

    I hear you, but I think many companies will change the role; you'll get the technical ownership plus big chunks of the data/product/devops responsibility. I'm speculating, but I think one person can take that on with the new tools and deliver tremendous value. I don't know what they'll call this new role, though; we'll see.

  12. I'm seeing this as well. Not huge codebases, but not tiny: a 4-year-old startup. I'm new there, and it would have been impossible for me to deliver any value this soon otherwise. 12 years of experience; this thing is definitely amazing. Combined with a human it can be phenomenal. It has also helped me tons with lots of external tools, with understanding what the data/marketing teams are doing, and even with providing pretty crucial insights to our leadership that Gemini noticed. I wouldn't try to completely automate the humans out of the loop just yet, but this tech is for sure gonna downsize team numbers (and at the same time allow many new startups to come to life with little capital, which might eventually grow and hire people). So it's unclear how this is gonna affect jobs.
  13. The main issue in this discussion is the word "replace". People will come up with a bunch of examples where humans are still needed in SWE and can't be fully replaced, and that is true. I think claiming that 100% of engineers will be replaced in 2026 is ridiculous. But how about downsizing? Yeah, that's quite probable.
  14. I like the stock
  15. > What do LLMs train off of now? I wonder if, 10 years from now, LLMs will still be answering questions that were answered in the halcyon 2014-2020 days of SO better than anything that came after? Or will we find new, better ways to find answers to technical questions?

    That's a great question. I have no idea how things will play out now. Do models become generalized enough to handle "out of distribution" problems or not? If they don't, then I suppose a few years from now we'll get an uptick in Stack Overflow questions; the website will still exist, it's not going anywhere.

  16. I can relate. I used to have a decent SO profile (10k+ reputation; I know this isn't crazy, but it was mostly on non-low-hanging-fruit answers... it was a grind getting there). I used to be proud of my profile and even put it in my resume the way people put their GitHub. Now, who cares? It would make me look like a dinosaur to share that profile, and I never go to SO anymore.
  17. This is incredible. Anyone who claims LLMs aren't useful will need to explain how almost every programmer can solve 95% of their problems with an LLM without needing anything else. This is real usefulness right here.

    EDIT: I'm not saying I love what happened and what is becoming of our roles and careers; I'm just saying things have changed forever. There's still a (shrinking) minority of people who seem not to be convinced.

  18. No idea about training TensorFlow models. Is it super complex, or is it just calling a couple of APIs? LangChain is literally calling an API. Maybe you need to get good at prompting or whatever, but I don't see where the complexity lies. Please let me know.
  19. > So it is absurdly incorrect to say "they can only reproduce the past."

    Also, a shitton of what we do economically is reproducing the past with slight tweaks and improvements. We all do very repetitive things, and these tools cut the time/personnel needed by a significant factor.

  20. > Skill and Langchain experts with production-grade 0>1 experience.

    Also, it's just normal backend work - calling a bunch of APIs (see the short sketch at the end of this list for what I mean). What am I missing here?

  21. > Are you sure it’s _people_ driving this increase?

    Most likely, yes. If Google had been dead for years, people wouldn't pour hundreds of billions of dollars into ads there. Search revenue keeps increasing, even since ChatGPT showed up. It might stagnate soon or even decrease a bit, but "death"? The numbers don't back that up. One blogger saying he stopped paying for Google ads conflicts with the reality of around $200 billion in yearly revenue from Search.

  22. > It still needs to do learning (RL or otherwise) in order to do new tasks.

    Why? As in, why isn't reading the Brainfuck documentation enough for Gemini to learn Brainfuck? I'd allow for a 3-7 day learning curve, like perhaps a human would need, but why do you need to redo the whole model (or big parts of it) just so it can learn Brainfuck or some other tool? Either the learning (RL or otherwise) needs to become way more efficient than it is today (it currently takes weeks? months? billions of dollars?) or it isn't AGI, I would say. Not in the practical/economic sense, and I believe not in the philosophical sense of how we all envisioned true generality.

  23. > None of the stated problems are actually issues with LLMs after on policy training is performed

    But still, isn't it a major weakness that they have to do RL on everything that doesn't have much data? That really weakens the attempt to make it true AGI.

  24. > Almost anyone can prompt an LLM to generate a thousand-line patch and submit it for code review. That’s no longer valuable. What’s valuable is contributing code that is proven to work.

    That's really not a great development for us. If our main selling point is now reduced to accountability for the result, with barely any involvement in the implementation, that's very little moat and doesn't command a high salary. Either we provide real value or we don't... and from that essay I don't think it's totally clear what the value is; it seems like every QA, junior SWE, or even product manager can now do the job of prompting and checking the output.

  25. > Israelians

    Israelis

  26. > AI agents break rules under everyday pressure

    Jeez, they really ARE becoming human-like.

  27. > I feel like most devs will become moreso architects and testers of the output

    Which means, it stands to reason, that you'll need fewer of them. I'm really hoping this somehow leads to an explosion of new companies being built and hiring workers; otherwise, not good for us.

  28. > I feel the effects of this are going to take a while to be felt (5 years?);

    Who knows if we'll even need senior devs in 5 years. We'll see what happens. I think the role of software development will change so much that those years of technical experience as a senior won't be so relevant, but that's just my 5 cents.

  29. I'm impressed by this. You know, in the beginning I was like, hey, why doesn't this look like Counter-Strike? Yeah, I had the expectation that these things could one-shot an industry-leading computer game. Of course that's not yet possible. But still, this is pretty damn impressive to me.
  30. Well, the same can be said about non-contrarians...
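
For what it's worth, here is a minimal sketch of the kind of LangChain usage item 20 is talking about, to show that the day-to-day work really does boil down to a prompt plus an API call. This is an illustration only: it assumes the langchain-openai Python package and an OPENAI_API_KEY in the environment, and the model name and example ticket text are placeholder choices, not anything from the comments above.

    # Minimal sketch: a basic "LangChain app" is mostly one chat-completion API call.
    # Assumes langchain-openai is installed and OPENAI_API_KEY is set;
    # the model name below is an illustrative choice, not prescribed anywhere above.
    from langchain_openai import ChatOpenAI
    from langchain_core.prompts import ChatPromptTemplate

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a concise assistant."),
        ("human", "Summarize this ticket in one sentence: {ticket}"),
    ])

    # Chain = prompt -> model; invoking it sends one request to the provider.
    chain = prompt | llm
    result = chain.invoke({"ticket": "Login page returns 500 after password reset."})
    print(result.content)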

This user hasn’t submitted anything.
