> The engineers refusing to try aren’t protecting themselves; quite the opposite, they’re falling behind. The gap is widening between engineers who’ve integrated these tools and engineers who haven’t.

For me, however, there is one issue: how can I utilize AI without degenerating my own abilities? I use AI sparingly because, to be honest, every time I use AI, I feel like I'm getting a little dumber. I fear that excessive use of AI will lead to the loss of important skills on the one hand and create dependencies on the other. Who benefits if we end up with a generation of software developers who can no longer program without AI? Programming is not just writing code, but a process of organizing, understanding, and analyzing. What I want above all is AI that helps me become better at my job and continue to build skills and knowledge, rather than making me dependent on it.


> I use AI sparingly because, to be honest, every time I use AI, I feel like I'm getting a little dumber. I fear that excessive use of AI will lead to the loss of important skills on the one hand and create dependencies on the other. Who benefits if we end up with a generation of software developers who can no longer program without AI?

The shareholders benefit this quarter. Look man, I know you probably have a high opinion of yourself and all, but your job now is to degrade your abilities in order to deliver faster results. The investors kindly demand that you get with the program, enthusiastically accept your new role as a depreciating asset (not human capital to be invested in), and stop thinking so much.

Do we think less because we use C++ vs assembly? Less because we use assembly over punching cards? Less because we use computers over pen and paper? And so on. You can put a strong local coding model on your local hardware today and no investor will be involved (unless you mean investors in the company you work for, but the truth is, those were never in any way interested in how you build things, only that you do).

> Do we think less because we use C++ vs assembly? Less because we use assembly over punching cards? ...

Apologists love to make such analogies. "From 30,000 feet, doesn't the new thing kinda look like some old thing? Then they're the same and you should accept the new thing!" But the analogies are never apt, and the "argument" really just amounts to glossing over the differences.

The whole point of AI is for people to think less. It's basically the goddamned name. If people aren't thinking less, AI isn't doing its job. All of those things you listed are instances of mechanical translation, and aren't thinking.

> You can put a strong local coding model on your local hardware today and no investor will be involved (unless you mean investors in the company you work for, but the truth is, those were never in any way interested in how you build things, only that you do).

Don't pretend you can cosplay a capitalist with AI. You need money, and if you can build something with a local model, the people with money can do it too, so they don't have to pay you. We work for a living.

Also it's a fantasy that your local model will be anything but a dim candle to the ones the rich have. Real life is not a sci-fi novel.

Your employers are hoping to use you up making this current imperfect iteration of the technology work, because the hope is the next version won't need you. Don't be cheerful about it. It's a bad end for you.

You say with such conviction that "the whole point is to think less". Why do you think that? I think no less now that I use AI agents all day long; I just think about different things. I don't think about where I place certain bits of code or what certain structures look like. Instead I think about data models, systems, what the ideal deliverable looks like, how we can plan its implementation, and how to let the willing agent execute it. I think about how I best automate flows so that I can parallelize work, within a harness that reduces the possibilities for mistakes. I think a whole lot more about different technologies and frameworks, as the cost of exploring and experimenting with them has come down tremendously.

Will what I do now be automated eventually, or before long? Probably; we keep automating things, so one has to swim up the abstraction layers. That doesn't mean one has to think less.

Current "AI" are statistical models, not AGI - that's why you're wrong there.

To use them well you still need to know everything - whenever you prompt lazily you're opening yourself up to a fuckton of technical debt.

That might be acceptable to some, but is generally a bad idea.

If it were AGI, you'd be right though...

> To use them well you still need to know everything - whenever you prompt lazily you're opening yourself up to a fuckton of technical debt.

> That might be acceptable to some, but is generally a bad idea.

That's my point: to use them effectively you need to know everything, but using them heavily puts you in a situation where that knowledge atrophies (e.g. the OP's statement "every time I use AI, I feel like I'm getting a little dumber"). The bosses want the results now, and don't mind if in a few years you're much less capable (and maybe not capable of getting effective results anymore).

If AGI comes soon enough, their bet will have paid off.

You can’t, and that’s the new normal. We’re probably the only generation which was given an opportunity to get properly good at coding. No such luxury will be available in a few years, optimistically; pessimistically, it’s already been taken away with GPT 5.2 and Opus 4.5.

If that's the case (and I'm not convinced it is), shouldn't retaining that skill be the priority for anyone who has already acquired it? I've yet to see any evidence AI can turn someone who can't code into a substitute for someone who can. If the supply of that skill is going to dry up, surely it will only become more valuable. If using AI erodes it, the logical thing would be not to use AI.

That's the correct diagnosis IMHO, but getting good at software engineering is ~3 years of serious studying and ~5-10 years of serious work, and that's after you've learned to code, which is easier for some and more difficult for others.

Compare the ROI of that to being able to get kinda the software you need in a few hours of prompting; it's a new paradigm, progress is (still) exponential, and we don't know where exactly things will settle.

Experts will get scarce and very sought after, but once they start to retire in 10-20-30 years... either dark ages or AI overlords await us.

> If that's the case [...], shouldn't retaining that skill be the priority for anyone who has already acquired it?

Indeed I believe that, but in my experience these skills get more and more useless in the job market. In other words: retaining such skills (e.g. low-level coding) is an intensively practised hobby that is (currently) of "no use" in the job market.

I think CS students should force themselves to learn the real thing and write the code themselves, at least for their assignments. I have seen that a lot of recent CS grads who had GPT for most of their CS education basically cannot write proper code, with or without AI.

They can't. Universities will eventually catch up to the demand of companies, just like how the one I attended switched from C/C++ to only managed languages.

With that, the students were more directly a match for the in-demand roles, but the reality is that other roles will see a reduction in supply.

The question here is: Will there be a need in the future for people who can actually code?

I think so. I also believe the field is evolving and that the pendulum always swings to extremes. Right now we are just beginning to see the consequences of the impact of AI on stability & maintainability of software. And we have not seen the impact of when it catastrophically goes wrong.

If you, together with your AI buddy, cannot solve the problem on this giant AI codebase, pulling in a colleague probably isn't going to help anymore.

The amount of code that is now being generated with AI (and accepted because it looks good enough) is causing long-term stability to suffer. What we are seeing is that AI is very eager to make the fixes without any regard towards past behavior or future behavior.

Of course, this is partially prevented by having better prompts and human reviews. But this is not the direction companies want us to go in. They want us to prompt and move on.

AI will very eagerly create 10,000 pipes from a lake to 10,000 houses in need of water. And branch off of them. And again.

Until one day you realize the pipes have lead in them and you need to replace them.

Today this is already hard. With AI it's even harder, because there is no unified implementation anywhere. It's all copy-pasted for the sake of speed and shipping.

I have yet to see a software engineer who stands behind every line of code produced be faster on net-new development using AI. In fact, most of the time they're slower, because the AI doesn't know. And even when they use AI, the outcome is worse because there is less learning - the kind of learning that eventually pushes the boundaries of "how can we make things better".

In the same way, we make kids learn addition and multiplication even though they have access to calculators.

> For me, however, there is one issue: how can I utilize AI without degenerating my own abilities?

My cynical view is you can't, and that's the point. How many times before have we seen the pattern of "company operates at staggering losses while eliminating competition or becoming entrenched in enough people's lives, and then clamps down to make massive profits"?

> how can I utilize AI without degenerating my own abilities?

Couldn't the same statement, to some extent, be applied to using a sorting lib instead of writing your own sorting algorithm? Or how about using a language like Python instead of manually handling memory allocation and deallocation in C?

> What I want above all is AI that helps me become better at my job and continue to build skills and knowledge

So far, in my experience, the quality of what AI outputs is directly related to the quality of the input. I've seen some AI projects made by junior devs that have an incredibly messy and confusing architecture, despite them using the same language and LLM model that I use. The main difference? My AI work was based on the patterns and architecture that I designed thanks to my knowledge, which also happens to ensure that the AI will produce less buggy software.

I think there is a huge difference between using a library and using Python instead of C/Rust etc. You use those because they are fundamentally more efficient to develop in, at the expense of efficient memory use. Robust programming is a trade-off, and the speed of development might be worth it, but it could also be so problematic that the project just never works. A sort library is an abstraction over sorting; it's an extension to your language pool - you now have the fundamental operator sort(A). Languages kind of transcend the operator difference.

I think the problem the OP is trying to get at is that if we only program at the level of libs, we lose the ability to build fundamentally cooler/better things. Not everyone does that of course, but AI is not generating fundamentally new code; it's copy-pasting. Copy-pasting has its limits, especially for people in the long term. Copy-paste coders don't build game engines. They don't write operating systems. These are esoteric to some people - how many people actually write those things! But there is a craftsmanship lost in converting more people to copy-paste, albeit with intelligence.

I personally lean towards the view that this type of abstraction over thinking is problematic long term. There is a lot of damage being done to people, not necessarily in coding but in reading/writing, especially in grades 9-12 and college. When we ask people to write essays and read things, AI totally short-circuits the process, but the truth is no one gets any value from the finished product of an essay about "why Columbus coming to the New World caused X, Y, or Z". The value is in the process of thinking that used to be required to generate that essay. This is similar to the OP's worry.

You can say we can do both and think about it as we review AI outputs. But humans are lazy. We don't mull over the calculator thinking about how some value is computed; we take it and run. There is a lot more value/thinking in the application of the calculated results, so the calculator didn't destroy mathematical thinking, but the same is not necessarily true of how AI is being applied. Your observation of the junior devs' output lends support to my view. We are short-circuiting the thinking. If those juniors can learn the patterns then there is no issue, but it's not guaranteed. I think that uncertainty is the OP's worry, maybe restated in a better way.

Love to hear your thoughts!

No, it's more like asking a junior dev to write the sorting algorithm instead of writing it yourself. Using a library would be like using an already verified and proven algorithm. That's not what AI code provides.

I worry about that too.

But at this point, it's like refusing to use vehicles to travel long distances for fear of becoming physically unfit. We go to the gym.

Incidentally, I have been knocking people for years who drive to the gym and take the escalator up, only to pay money to make physical exertions equivalent to cycling + taking the stairs, which would be... free!

I guess your username is fitting, at least.

I like that analogy a lot. FWIW I also find myself learning a lot more, at a higher rate, with LLM usage.

This is part of the learning curve. When you vibe code you produce something that is as if someone else wrote it. It’s important to learn when that’s appropriate versus using it in a more limited way or not at all.

Do you save time by using a calculator / spreadsheet, or do you try to do all calculations in your head because your ability to do quick calculations degrades the more you rely on tools?

I'm not too worried about degrading abilities since my fundamentals are sound and if I get rusty due to lack of practice, I'm only a prompt away from asking my expert assistant to throw down some knowledge to bring me back up to speed.

Whilst my hands-on programming has reduced, the variety of software I create has increased. I used to avoid writing complex automation scripts in bash because I kept getting blocked trying to remember its archaic syntax, so I'd typically use bun/node for complex scripts, but with AI I've switched back to writing most of my scripts in bash (it's surprising what's possible in bash), and have automated a lot more of my manual workflows since it's so easy to do.

I also avoided Python because the lack of typing and API discovery slowed me down a lot, but with AI autocomplete, whenever I need to know how to do something I'll just write a method stub with comments and AI will complete it for me. I'm now spending lots of time writing Python, to create AI Tools and Agents, ComfyUI Custom Nodes, Image and Audio Classifiers, PIL/ffmpeg transformations, etc. Things I'd never consider before AI.
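
For illustration, a minimal sketch of that stub-and-complete workflow, using one of the PIL transformations mentioned above (the function, its name, and its body are hypothetical - the kind of completion an autocomplete might plausibly produce from the signature and docstring):

```python
# You write the stub: import, signature, docstring - nothing else.
from PIL import Image  # Pillow

def make_thumbnail(src_path: str, dst_path: str,
                   max_size: tuple[int, int] = (256, 256)) -> None:
    """Shrink the image at src_path to fit within max_size, save to dst_path."""
    # ...and the AI plausibly fills in the body:
    img = Image.open(src_path)
    img.thumbnail(max_size)  # resizes in place, preserving aspect ratio
    img.save(dst_path)
```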

I also don't worry about its effects, as I view it as inevitable. With the pendulum having swung towards code now being dispensable/cheap to create, what's more important is velocity and being able to execute your ideas quickly; for me that means using AI where I can.

My answer is: use AI exactly for the tasks that you, as a tech lead on a project, would be ok delegating to someone else. I.e. you still own the project and probably want to devote your attention to all of the aspects that you HAVE to be on top of, but there are probably a lot of tasks where you have a clear definition of the task and its boundaries, and you should be ok to delegate and then review.

This gets particularly tricky when the task requires a competency that you yourself lack. But here too the question is - would you be willing to delegate it to another human whom you don't fully trust (e.g. a contractor)? The answer for me is in many cases "yes, but I need to learn about this enough so that I can evaluate their work" - so that's what I do, I learn what I need to know at the level of the tech lead managing them, but not at the level of the expert implementing it.

If we see ourselves less as programmers and more as software builders, then it doesn’t really matter if our programming skills atrophy in the process of adopting this tool, because it affords us the ability to build at a higher abstraction level, kind of like how a PM does it. This up-leveling in abstraction has happened over and over in software engineering as our tooling improves over time. I’m sure some excellent software engineers here couldn’t write assembly code to save their lives, but are wildly productive and respected for what they do - building excellent software.

That said, as long as there’s the potential for AI to hallucinate, we’ll always need to be vigilant - for that reason I would want to keep my programming skills sharp.

AI assisted software building by day, artisanal coder by night perhaps.

Isn't this the exact reason why modern software is so bloated?

I think this question can be answered in so many ways. First of all, piling on abstractions doesn’t automatically imply bloat - with proper compile-time optimizations you can achieve zero-cost abstractions, e.g. C++ compilers.

Secondly, bloat comes in so many forms, and they all have different causes. Did you mean bloat as in huge dependency installs like those node modules? Or did you mean an Electron app where a browser is bundled? Or perhaps you mean the insane number of FactoryFactoryFactoryBuilder classes that Java programmers have to bear because of misguided overarchitecting? The 7 layers of network protocols - is that bloat?

These are human decisions - trade-offs between delivering value fast and performance. Foundational layers are usually built with care, and the right abstractions help with correctness and performance. At the app layer, requirements change more quickly and people are more accepting of performance hits, so they pick tech stacks that you would describe as bloated, for faster iteration and delivery of value.

So even if I used abstraction as an analogy, I don’t think that automatically implies AI-assisted coding will lead to more bloat. If anything, it can help guide people toward proper engineering principles and fitting the code to the task at hand instead of overarchitecting. It’s still early days, and we need to learn to work well with it so it can give us what we want.

You'd have to define bloat first. Is internationalization bloat? How about screen reader support for the blind? I mean, okay, Excel didn't need a whole flight simulator in it, but just because you don't use a particular feature doesn't mean it's necessarily bloat. So first: define bloat.

Some termite mounds in Botswana already reach over two meters high, but these traditional engineering termites will be left behind in their careers if they don't start using AI and redefine themselves as mound builders.

I know this is reductionist, and I believe that you are likely correct in your concerns, but this type of thing has been happening for thousands of years. Writing itself was controversial!

> They go on to discuss what is good or bad in writing. Socrates tells a brief legend, critically commenting on the gift of writing from the Egyptian god Theuth to King Thamus, who was to disperse Theuth's gifts to the people of Egypt. After Theuth remarks on his discovery of writing as a remedy for the memory, Thamus responds that its true effects are likely to be the opposite; it is a remedy for reminding, not remembering, he says, with the appearance but not the reality of wisdom. Future generations will hear much without being properly taught, and will appear wise but not be so, making them difficult to get along with.

https://en.wikipedia.org/wiki/Phaedrus_(dialogue)

You can always ask it to nudge you in the right direction instead of giving the solution right away. I suspect this way of using it is not very popular though.

This is not a new problem I think. How do you use Google, translator, (even dictionaries!), etc without "degenerating" your own abilities?

If you're not careful and always rely on them as a crutch, they'll remain just that, without actually "incrementing" you.

I think this is a very good question. How should we actually be using our tools such that we're not degenerating, but growing instead?

> How do you use Google, translator, (even dictionaries!), etc without "degenerating" your own abilities?

By writing down every foreign word/phrase that I don't know, and adding a card for it to my cramming card box.

I haven't driven a car regularly for ages, but a few months ago when I got behind the wheel for a bit, pretty much everything came rushing back, and my primary issue was adjusting to driving on the left side of the road in a right-hand-drive car. This is how it is with skills; they never really go away, though it may feel like it. And so it is with programming skills (knowledge is a different thing, since the field is constantly changing).

> how can I utilize AI without degenerating my own abilities?

"Coding in the Red-Queen Era" https://corecursive.com/red-queen-coding/ (2025)

Regular coding gyms and problem solving drills

As humans we have developed tools to ease our physical needs (we don’t need to run, walk, or lift things), and now we have a tool that thinks and solves problems for us.

There is the other issue that AI generated anything has a value close to zero.

So what's my cut of something basically worthless? Doesn't seem lucrative in the long run.

What is value, even? A dollar bill is worth a dollar, but even that’s made up too. A crappy crayon drawing of stick people and a house is utterly priceless if your kid made it, worthless if it's some other kid's. AI is forcing us to confront how squishy valuation is in the first place.

Prices are not fundamental truths. They’re numbers that happen to work. Ideally price > cost, but that’s not even reliably true once you factor in fixed costs, subsidies, taxes, rebates, etc. Boeing famously came out and said they couldn't figure out how much it actually cost to make a 747, back when they were still flying.

Here's a concrete example: You have a factory with $50k/month in fixed costs. Running it costs $5 per widget in materials and labor. You make 5,000 widgets.

Originally you sell them for $20. Revenue $100k, costs $75k, pocket a cool $25k every month. Awesome.

Then, a competitor shows up and drives the price down to $10. Now revenue is $50k. On paper you “lose money” vs your original model.

But if you shut the factory down, you still eat the full $50k fixed cost and make $0. If you keep running, each widget covers its $5 marginal cost and contributes $5 toward fixed costs: $25k in total, so you lose $25k instead of $50k.
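
For concreteness, a quick sanity check of those numbers (all figures are from the example above; the helper function is just for illustration):

```python
# Contribution-margin arithmetic from the factory example.
FIXED_COST = 50_000  # $ per month
UNIT_COST = 5        # $ per widget in materials and labor
UNITS = 5_000        # widgets made per month

def monthly_profit(price: float) -> float:
    revenue = price * UNITS
    total_cost = FIXED_COST + UNIT_COST * UNITS
    return revenue - total_cost

print(monthly_profit(20))  #  25000: the original scenario
print(monthly_profit(10))  # -25000: after the competitor shows up...
print(-FIXED_COST)         # -50000: ...still beats shutting down entirely
```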

That’s the key mistake in "AI output is worth zero." Zero marginal value does not imply zero economic value. The question is whether it covers marginal cost and contributes to something else you care about: fixed costs, distribution, lock-in, differentiation, complements, optionality.

We've faced this many times before so AI isn't special in this regard. It just makes the gap between marginal cost and perceived value impossible to ignore.

> how can I utilize AI without degenerating my own abilities?

Personally I think my skill lies in solving the problem by designing and implementing the solution, but not how I code day-to-day. After you write the 100th getter/setter you're not really adding value, you're just performing a chore because of language/programming patterns.

Using AI and being productive with it is an ability and I can use my time more efficiently than if I were not to use it. I'm a systems engineer and have done some coding in various languages, can read pretty much anything, but am nowhere near mastery in any of the languages I like.

Setting up a project, setting up all the tools and boilerplate, writing the main() function, etc. are all tasks that, if you're not 100% into the language, take some searching and time to fiddle with. With AI it's a 2-line prompt.

Introducing plumbing for yet another feature is another chore: search for the right libraries/packages, add dependencies, learn to use the deps, create a bunch of files, sketch the structs/classes, sketch the methods - but not everything is perfectly clear yet, so the first iteration is "add a bunch of stuff, get a ton of compiler warnings, and then refine the resulting mess". With AI it's a small paragraph of text describing what I want and how I'd like it done, asking for a plan, and then simply saying "yes" if it makes sense. Then wait 5-15m. Meanwhile I'm free to look at what it's doing and see if it's doing something stupid or wrong, or to think about the next logical step.

Normally the result for me has been 90% good; I may need to fix a couple things I don't like, but the syntax and warnings have already been worked out, so I can focus on actually reading, understanding, and modifying the logic and catching actual logic issues. I don't need to spend 5+ days learning how to use an entire library, only to find out that the specific one I selected is missing feature X that I couldn't foresee needing last week. That part now takes 10m, and I don't have to do it myself; I just bring the finishing touches where AI cannot get to (yet?).

I've found that giving the tool (I personally love Copilot/Claude) all the context you have (e.g. .github/copilot-instructions.md) makes a ton of difference with the quality of the results.

Yeah, what about the degeneration of the skill of writing assembly? Or of punching cards, even? When compilers came onto the scene, there were similar objections. Instead of focusing on one skill atrophying, look at the new skills being developed. They may not be the ones you necessarily want to be good at, but it turns out developing social engineering skills to get an LLM to do something it's been prompted not to do might actually be a transferable skill in real life.
