It's typically been productive to care about the how, because it leads to better maintainability and a better ability to adapt or pivot to new problems. I suppose that's getting less true by the minute, though.
Sometimes, you strike gold, so there's that.
And I do think there's more to it than preference. There are actual bugs in the code; it's confusing, and because it's confusing there are more bugs. It's solving a simple problem in an unnecessarily convoluted way, and I can solve the same problem much more simply. But because everything is like this, I can't just fix it: there are layers and layers of convolution that can't just be fixed, and of course there's no proper decoupling, so a refactor is kind of all or nothing. If you start, it's like pulling on a thread and everything just unravels.
This is going to sound pompous and terrible, but honestly sometimes I feel like I'm too much better than other developers. I have a hard time collaborating because the only thing I really want to do with other people's code is delete it and rewrite it. I can't fix it because it isn't fixable; it's just trash. I wish they had talked to me before writing it, I could have helped then.
Obviously, in order to function in a professional environment I have to suppress this stuff and just let the code be ass, but it really irks me. Especially if I need to build on something someone else made: it's almost always ass, and I don't want to build on a crooked foundation. I want to fix the foundation so the rest of the building can be good too. But there's no time, and it's exhausting fixing everyone else's messes all the time.
Which is part of what I find so motivating with AI. It is much better at making sense of that muck, and with some guidance it can churn out code very quickly with a high degree of readability.
I usually attribute it to people being lazy, not caring, or not using their brain.
It's quite frustrating when something is *so obviously* wrong, to the point that anyone with a modicum of experience should be able to realize that what was implemented is totally whack. Please, spend at least a few minutes reviewing your work so that I don't have to waste my time on nonsense.
1. If you don't care about code and only care about the "thing that it does when it's done", how do you solve problems in a way that is satisfying? Because you are not really solving any problem but just using the AI to do it. Is prompting more satisfying than actually solving?
2. You claim you're done "learning about coding and stretching my personal knowledge in the area", but don't you think that's super dangerous? How can you just be done with learning when tech is constantly changing and new things come up every day? In that sense, don't you think AI use is actually making you learn less, and you're just justifying it with the whole "I love solving problems, not code" thing?
3. If you don't care about the code, do the people who hire you for it do? And if they do, then how can you claim you don't care about the code when you'll have to go through a review process and at least check the code meaning you have to care about the code itself, right?
1. The problem solving is in figuring out what to prompt, which includes correctly defining the problem, identifying a potential solution, designing an architecture, decomposing it into smaller tasks, and so on.
Giving it a generic prompt like "build a fitness tracker" will result in a fully working product but it will be bland as it would be the average of everything in its training data, and won't provide any new value. Instead, you probably want to build something that nobody else has, because that's where the value is. This will require you to get pretty deep into the problem domain, even if the code itself is abstracted away from you.
Personally, once the shape of the solution and the code is crystallized in my head typing it out is a chore. I'd rather get it out ASAP, get the dopamine hit from seeing it work, and move on to the next task. These days I spend most of my time exploring the problem domain rather than writing code.
2. Learning still exists, but at a different level; in fact it will eventually be the only thing we do. E.g. I'm doing stuff today that I had negligible prior background in when I began. Without AI, I would probably have needed an advanced course just to get up to speed. But now I'm learning by doing while solving new problems, which is a brand new way of learning! Only I'm learning the problem domain rather than the intricacies of code.
3. Statistically speaking, the people who hire us don't really care about the code, they just want business results. (See: the difficulty of funding tech debt cleanup projects!)
Personally, I still care about the code and review everything, whether written by me or the AI. But I can see how even that is rapidly becoming optional.
I will say this: AI is rapidly revolutionizing our field and we need to adapt just as quickly.
Coding is just a formal specification, one suited to automatic execution by a dumb machine. The nice trick is that the basic semantic units of a programming language are versatile enough to give you very powerful abstractions that fit nicely with the solution you are designing.
> Personally, once the shape of the solution and the code is crystallized in my head typing it out is a chore
I truly believe that everyone who says typing is a chore once they've got the shape of a solution gets frustrated by the number of bad assumptions they've made. That ranges from not having a good design in place to not learning the tools they're using and fighting them during the implementation (like using React in an imperative manner). You may have something as extensive as a network protocol RFC and still get hit by conflicts between the spec and what works.
To a lot of people (clearly not yourself included), the most interesting part of software development is the problem solving part; the puzzle. Once you know _how_ to solve the puzzle, it's not all that interesting actually doing it.
That being said, you may be using the word "shape" in a much vaguer sense than I am. When I know the shape of the solution, I know pretty much everything it takes to actually implement it. That also means I'm very bad at generating LOEs, because I need to dig into the code and try things out to know what works... before I can be sure I have a viable solution plan.
That being said, we can say
- Given the implementation options we've found, this solution/direction is what we think is the best
- We have enough information now that it is unlikely anything we find out is going to change the solution
- We know enough about the solution that it is extremely unlikely that there are any more real "problems/puzzles" to be solved
At that point, we can consider the solution "found" and actually implementing it is no more a part of solving it. Could the implemented solution wind up having to deal with an off-by-one error that we need to fix? Sure... but that's not "puzzle solving". And, for a lot of people, it's just not the interesting part.
Look at the length of my prompt and the length of the code. And that's not even including the tests I had it generate. It made all the right assumptions, including specifying tunable optional parameters set to reasonable defaults and (redacted) integrating with some proprietary functions at the right places. It's like it read my mind!
Would you really think writing all that code by hand would have been comparable to writing the prompt?
But the point is, there were no assumptions or tooling or bad designs that had to be fought. Just an informal, high-level prompt that generated the exact code I wanted in a fraction of the time. At least to me that was pretty surprising -- even if it'd become routine for a while by then -- because I'd expect that level of wavelength-match between colleagues who had been working on the same team for a while.
If you really believe this, I'd never want to hire you. I mean, it's not wrong, it's just ... well, it's not even wrong.
Your response and depth of reasoning about why you wouldn't hire them is a red flag though. Not for a manager role and certainly not as an IC.
My comment was based on you saying you don't care about the code and only what it does. But now you're saying you care about the code and review everything so I'm not sure what to make out of it. And again, I fundamentally disagree that reviewing code will become optional or rather should become optional. But that's my personal take.
This just sounds like "no true scotsman" to me. You have a problem and a toolkit. If you successfully solve the problem, and the solution is good enough, then you are a problem solver by any definition worth a damn.
The magic and the satisfaction of good prompting is getting to that "good enough", especially architecturally. But when you get good at it - boy, you can code rings around other people or even entire teams. Tell me how that wouldn't be satisfying!
I'm not the person you originally replied to, so my take is different, which explains your confusion :-)
However I do increasingly get the niggling sense I'm reviewing code out of habit rather than any specific benefit because I so rarely find something to change...
> And if you're really going too deep into the problem domain, what is the point of having the code abstracted?
Let's take my current work as an example: I'm doing stuff with computer vision (good old-fashioned OpenCV, because ML would be overkill for my case.) So the problem domain is now images and perception and retrieval, which is what I am learning and exploring. The actual code itself does not matter as much as the high-level approach and the component algorithms and data structures -- none of which are individually novel, BTW, but I believe I'm the only one combining them this way.
As an example, I give a high-level prompt like "Write a method that accepts a list of bounding boxes, finds all overlapping ones, chooses the ones with substantial overlap, consolidates them into a single box, and returns all consolidated boxes. Write tests for this method." The AI runs off and generates dozens of lines of code -- including a tunable parameter to control "substantial overlap", set to a reasonable default -- the tests pass, and when I plug in the method, 99.9% of the time the code works as expected. And because this is vision-based, I can immediately verify by sight whether the approach works!
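For a concrete sense of what a prompt like that might turn into, here is a minimal sketch of IoU-based box consolidation. This is my own illustration, not the actual generated code; the `min_overlap` threshold, its 0.5 default, and the greedy merge strategy are all assumptions:

```python
# Illustrative sketch only: consolidate overlapping axis-aligned bounding
# boxes given as (x1, y1, x2, y2) tuples. The "substantial overlap"
# threshold is intersection-over-union (IoU); the 0.5 default is assumed.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def consolidate_boxes(boxes, min_overlap=0.5):
    """Greedily merge boxes whose IoU >= min_overlap; return merged boxes."""
    boxes = list(boxes)
    merged = []
    while boxes:
        cur = boxes.pop(0)
        changed = True
        while changed:  # keep absorbing until nothing else overlaps cur
            changed = False
            rest = []
            for b in boxes:
                if iou(cur, b) >= min_overlap:
                    # Union: the smallest box covering both
                    cur = (min(cur[0], b[0]), min(cur[1], b[1]),
                           max(cur[2], b[2]), max(cur[3], b[3]))
                    changed = True
                else:
                    rest.append(b)
            boxes = rest
        merged.append(cur)
    return merged
```

A production version might use vectorized NumPy or OpenCV's grouping helpers instead of this O(n^2) loop, but the shape of the solution is the same: an overlap metric, a threshold, and a merge rule.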
To me, the valuable part was coming up with that whole approach based on bounding boxes, which led to that prompt. The actual code in itself is not interesting because it is not a difficult problem, just a cumbersome one to handcode.
To solve the overall problem I have to combine a large number of such sub-problems, so the leverage that AI gives me is enormous.
But as I said, it's getting rare that I need to change anything the AI generates. That's partly because I decompose the problem into small, self-contained tasks that are largely orthogonal and easily tested -- mostly a functional programming style. There's very little that can go wrong because there is little ambiguity in the requirements, which is why a 3 line prompt can reliably turn into dozens of lines of working, tested code.
The main code I deal with manually is the glue that composes these units to solve the larger computer vision problem. Ironically, THAT is where the tech debt is, primarily because I'm experimenting with combinations of dozens of different techniques and tweaks to see what works best. If I knew what was going to work, I'd just prompt the AI to write it for me! ;-)
> You've really hit the crux of the problem and why so many people have differing opinions about AI coding.
Part of it perhaps, but there's also a huge variation in model output. I've been getting some surprisingly bad generations from ChatGPT recently, though I'm not sure if that's ChatGPT getting worse or me getting used to a much higher quality of code from Claude Code, which seems to test itself before saying "done". I have no idea if my opinion will flip again now that 5.2 is out.
And some people are bad communicators; communication is an important skill when working with LLMs, though few will recognise that, because everyone knows what they themselves meant by whatever words they use.
And some people are bad planners, likewise an important skill for breaking apart big tasks that LLMs can't do into small ones they can do.
I use writing the code as a way to investigate the options and find new ones. By the time I'm sure of the correct way to implement something, half the code is written [1]. At that point, now that I know what I'm going to do and how, it starts to get boring. I think what would work best for me would be to say "ok, now finish this" to the AI and have it do that boring part.
[1] This also makes my LOEs horrible, because I don't know what I'm going to build until I've completed half of it. And figuring out how long it will take to do something that isn't defined is... inaccurate.
Many engineers walk a path where they start out very focussed on programming details, language choice, and elegant or clever solutions. But if you're in the game long enough, and especially if you're working in medium-to-large engineering orgs on big customer-facing projects, you usually kind of move on from it. Early in my career I learned half a dozen programming languages and prided myself on various arcane arts like metaprogramming tricks. But after a while you learn that one person's clever solution is another person's maintainability nightmare, and maybe being as boring and predictable and direct as possible in the code (if slightly more verbose) would have been better. I've maintained some systems written by very brilliant programmers who were just being too clever by half.
You also come to realize that coding skills and language choice don't matter as much as you thought, and the big issues in engineering are 1) are you solving the right problem to begin with 2) people/communication/team dynamics 3) systems architecture, in that order of importance.
And also, programming just gets a little repetitive after a while. Like you say, after a decade or so, it feels a bit like "more of the same." That goes especially for most of the programming most of us are doing most of the time in our day jobs. We don't write a lot of fancy algorithms, maybe once in a blue moon and even then you're usually better off with a library. We do CRUD apps and cookie-cutter React pages and so on and so on.
If AI coding agents fall into your lap once you've reached that particular variation of a mature stage in your engineering career, you probably welcome them as a huge time saver and a means to solve problems you care about faster. After a decade, I still love engineering, but there aren't many coding tasks I particularly relish diving into. I can usually vaguely picture the shape of the solution in my head out of the gate, and actually sitting down and doing it feels rather a bore, just a lot of typing and details. Which is why it's so nice when I can kick off a Claude session to do it instead, and review the results to see if they match what I had in mind.
Don't get me wrong. I still love programming if there's just the right kind of compelling puzzle to solve (rarer and rarer these days), and I still pride myself on being able to do it well. Come the holidays I will be working through Advent of Code with no AI assistance whatsoever, just me and vim. But when January rolls around and the day job returns I'll be having Claude do all the heavy lifting once again.
1) When I already have a rough picture of the solution to some programming task in my head up front, I do not particularly look forward to actually going and doing it. I've done enough programming that many things feel like a variation on something I've done before. Sometimes the task is its own reward because there is a sufficiently hard and novel puzzle to solve. Mostly it is not and it's just a matter of putting in the time. Having Claude do most of the work is perfect in those cases. I don't think this is particularly anything to do with working on a ball of mud: it applies to most kinds of work on clean well-architected projects as well.
2) I have a restless mind and I just don't find doing something that interesting anymore once I have more or less mastered it. I'd prefer to be learning some new field (currently, LLMs) rather than spending a lot of time doing something I already know how to do. This is a matter of temperament: there is nothing wrong with being content in doing a job you've mastered. It's just not me.
Every time I think I have a rough picture of some solution, there's always something in the implementation that proves me wrong. Then it's reading docs and figuring out whatever gotchas I've stepped into, or where I erred in understanding the specifications. If something is that repetitive, I refactor and try to make it simpler.
> I have a restless mind and I just don't find doing something that interesting anymore once I have more or less mastered it.
If I've mastered something (And I don't believe I've done so for pretty much anything), the next step is always about eliminating the tedium of interacting with that thing. Like a code generator for some framework or adding special commands to your editor for faster interaction with a project.
Getting things solved entirely feels very numbing to me.
Even when Gemini or ChatGPT solves it well, even beyond what I'd imagined... I feel a sense of loss.
But I still love getting my hands dirty and writing code as a mental puzzle. And the best puzzles tend to happen outside of a work environment anyways. So I continue to work through advent of code problems (for example) as a way of exercising that muscle.
Most are not paid for results, they're paid for time at desk and regular responsibilities such as making commits, delivering status updates, code reviews, etc. - the daily activities of work are monitored more closely than the output. Most ESOP grant such little equity that working harder could never observably drive an increase in its value. Getting a project done faster just means another project to begin sooner.
Naturally workers will begin to prefer the motions of the work they find satisfying more than the result it has for the business's bottom line, from which they're alienated.
This gets us to the rule number one of being successful at a job: Make sure your manager likes you. Get 8 layers of people whose priority is just to be sure their manager likes them, and what is getting done is very unlikely to have much to do with shareholder value, customer happiness, or anything like that.
Wow. I've read a lot of hacker news this past decade, but I've never seen this articulated so well before. You really lifted the veil for me here. I see this everywhere, people thinking the work is the point, but I haven't been able to crystallize my thoughts about it like you did just now.
I'm on the side of only enjoying coding to solve problems, and I skipped software engineering and coding for work explicitly because I did not want to participate in that dynamic of being removed from the problems. Instead I went into business analytics, and now that AI is gaining traction I am able to do more of what I love, improving processes and automation, without ever really needing to "pay dues" doing grunt work I never cared to be skilled at in the first place unless it was necessary.
Sometimes you can, sometimes you have to break the problem apart and get the LLM to do each bit separately, sometimes the LLM goes funny and you need to solve it yourself.
Customers don't want you wasting money doing by hand what can be automated, nor do they want you ripping them off by blindly handing over unchecked LLM output when it can't be automated.
If enough people can make the product faster, then competition will drive the price down. But the ability to charge less is not at all an obligation to charge less.
Ultimately I wonder how long people will need devs at all, if everyone can just prompt their wishes.
Some will be kept to fix the occasional hallucination, and that's it.
Having said that, I used to be deep into coding, and I'm quite sure that back then I would have hated AI coding for me. I think for me it comes down to this: when I was learning about coding and stretching my personal knowledge in the area, the coding part was the fun part because I was learning. Now that I am past that, I really just want to solve problems, and coding is the means to that end. AI is now freeing, because where I would once have been reluctant to start a project, I am now more likely to give it a go.
I think it is similar to when I used to play games a lot. When I would play a game where you would discover new items regularly, I would go at it hard and heavy up until the point where I determined there was either no new items to be found or it was just "more of the same". When I got to that point it was like a switch would flip and I would lose interest in the game almost immediately.