
I want to see GPT-4 dealing with this situation:

- they: we need a new basic POST endpoint

- us: cool, what does the api contract look like? URL? Query params? Payload? Response? Status code?

- they: Not sure. Third-party company XXQ will let you know the details. They will be the ones calling this new endpoint. But in essence it should be very simple: just grab whatever they pass and save it in our db

- us: ok, cool. Let me get in contact with them

- ... one week later...

- company XXQ: we got this contract here: <contract_json>

- us: thanks! We'll work on this

- ... 2 days later...

- us: umm, there's something not specified in <contract_json>. What about this part here that says that...

- ... 2 days later...

- company XXQ: ah sure, sorry we missed that part. It's like this...

- ...and so on...

Basically, 99% of the effort is NOT WRITING CODE. It's all about communication with people, and problem solving. If we use GPT-X in our company, it will help us with 1% of our workload. So, I couldn't care less about it.
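
(For a sense of scale: the code for an endpoint like the one above is roughly this much. A minimal sketch in Flask; the route, table, and "grab whatever they pass" behavior are assumptions, since the real contract was never pinned down.)

    # Minimal sketch: accept whatever JSON company XXQ posts and store it raw.
    # Route name, table name, and status code are illustrative, not the real contract.
    import json
    import sqlite3
    from flask import Flask, request

    app = Flask(__name__)

    def db():
        conn = sqlite3.connect("payloads.db")
        conn.execute("CREATE TABLE IF NOT EXISTS payloads (id INTEGER PRIMARY KEY, body TEXT)")
        return conn

    @app.route("/xxq-webhook", methods=["POST"])
    def save_payload():
        with db() as conn:  # commits on success
            conn.execute("INSERT INTO payloads (body) VALUES (?)",
                         (json.dumps(request.get_json(force=True)),))
        return "", 204

The point stands: this is the easy 1%; everything around it is the actual work.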


This is so common in many types of business, and usually a very difficult point to articulate so thank you for that. It's something to be shown to those ringing the death-knell for programmers, artists, and the like.

Those death-knell types seemingly aren't aware of what day-to-day operations look like, and of how AI makes a great tool but doesn't necessarily deal with the very human factors of whims, uncertainty, reactive business mentalities, and the phenomenon best summarised by this webcomic: https://theoatmeal.com/comics/design_hell

In my field they like to call this "hurry up and wait", a nonsensical but fitting description that summarises everything from changing scope to the unjust imbalance of time between the principal and the agent.

(There is a comment further down which suggests that we could just train AI to deal with this variability, I hope that's humour... sweet summer child thinks you can train AI to predict the future.)

I think the fear should be less about AI taking 100% of jobs and more about AI making a single programmer do the job of 5, which would wipe out a majority of the market and make it a non-viable career option for most.

Companies are already bloated, imagine when they realize one overworked highly paid senior can replace 10 juniors.

If increased productivity equaled job loss, there would be two programmers alive today, doing the same job as the fewer than 10000 programmers using punch cards as we entered the year 1950.

A lot of projects today are not even greenlit because they would be too expensive to make. For instance, there are a lot of processes in almost every country that require you to file paper forms, even though we have had web forms and databases for 25 years. This goes even for rich countries. If one programmer is able to do the job of 5 others, we will probably have 5 times as many systems to tend to now that businesses and institutions can afford them.

This is true for other domains also. My wife translates for an ecommerce business. A lot of what she does is very automated. The software she uses remembers all the phrases she has translated, uses a word bank she manages for special parts, and uses DeepL for new translations, which are then only proofread and fine-tuned. It's amazing how much stuff she can translate that way in a day (no idea how many thousands of words). She is kind of managing and overseeing an AI (DeepL) and sanity-checking the work. If this were 20 years ago, one would probably need a translation department of 10-15 people to do the same work. However, her company would never have been able to justify that kind of cost. So, in my view: software and AI made translators probably more than 10x more efficient in the last 10 years; however, the amount of stuff that gets translated also increased tenfold during this time.
Yep! I think this is both the optimistic case, and the version of the future that seems most likely to me.

I'm still a bit bummed about it, because just writing and structuring code is something that often brings me joy (and maybe your wife feels similarly about the actual translation work...), but at the end of the day, I realized well before all this AI craze that it really isn't the most valuable thing I do in my job.

Something like 97% of all jobs have been eliminated since the Industrial Revolution started ~250 years ago.

And instead of the 97% unemployment our intuition tells us would happen, we're instead 30x as productive.

The same will happen when/if AI makes writing software far more effective.

> … instead of the 97% unemployment … we’re instead 30x as productive.

Economic productivity simply means output per worker. So, we could make 97% of the US population permanently unemployed tomorrow, and as long as the AI replacement labor yielded equal or greater proceeds to the remaining 3%, we’d see a colossal increase in “productivity.”
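
(In toy numbers, just to make the arithmetic explicit; the figures are illustrative, not a forecast.)

    # "Productivity" as output per worker: hold output constant, cut headcount.
    output = 100.0           # arbitrary units of total output, unchanged
    workers_before = 100
    workers_after = 3        # the remaining 3%
    print(output / workers_before)   # 1.0  -> baseline productivity
    print(output / workers_after)    # 33.3 -> ~33x "increase", with zero new output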

That would be in the interest of the 3% to do (or attempt), which is what makes the scenario so scary.

>If increased productivity equaled job loss there would be two programmers alive today, doing the same job as the fewer than 10000 programmers using punch cards as we entered the year 1950.

The only reason it's not the case in this example is that computers at the time were a tiny early-adopter niche, which massively multiplied and expanded to other areas. Like, only 1 in 10,000 businesses would have one in 1950, and only big firms at that. Heck, back then maybe 1 in 100 million people even had a computer.

Today they've already done that expansion into all businesses and all areas of corporate, commerce, and leisure activities. Now almost everybody has one (or a comparable device in their pocket).

Already, cloud-based systems have made it so that only a fraction of the programmers and admins are needed, in some cases eliminating the need for one altogether.

There are tons of other fields, however, more mature, where increased productivity very much equaled job loss...

Have no doubt, we will find new places to put computers. In the 80's and even the 90's everyone said the same thing: "Why do I need a computer? I can already do everything I do without a problem." Well, it turns out with computers you could do 12 more things you'd never considered. Consider the interoffice memo: it'd take what, 1-2 hours to get a document from one floor to another through the system? Cool, you can work on 2-3 projects at a time maybe, because that's all the bandwidth allowed. Along comes email and ups that to 5-6, because now the communications can be pretty consistent. It's still not perfect, because what if you're at lunch or the gym? Then came Blackberries, and all of a sudden it's 12-15 projects at once. Then Slack, because you don't even have to think. Now add this.

Notice that during that time there weren't all of a sudden fewer programmers, or managers, or sysadmins; if anything there are more. If anything, everyone is even more stressed with more to do because of the context switching and the 24/7. That's what this will do, I'd bet money on it.

> If increased productivity equaled job loss, there would be two programmers alive today

Increased productivity doesn't necessarily lead to overall job loss, but it will eventually in the area where the productivity is realized. Agricultural employment is a very clear example.

https://cdn2.vox-cdn.com/assets/4565243/Ag_workforce.png

otoh the rise of ATMs seems to have increased the number of bank tellers
Well, yes, it went down, but only from 12 million to 5.5 million people in agriculture.
> If increased productivity equaled job loss, there would be two programmers alive today, doing the same job as the fewer than 10000 programmers using punch cards as we entered the year 1950.

Yeah. The problem is that there’s only one of me. All the rest of you are filling in for the other guy.

Correct. This is pretty much Jevons's paradox: https://en.wikipedia.org/wiki/Jevons_paradox
Yup, just literally talked a client out of a nice little mobile project by telling him my rate. Let's say I had AI by my side - I'd be quoting the same rate, but the number of hours would be lower. The project might be a go then.
> Companies are already bloated, imagine when they realize one overworked highly paid senior can replace 10 juniors.

Yep. This is where I'm at in terms of personal armchair predictions of the future.

I expect the labor market will be tough for more junior software engineers in the coming years. This might indeed cause backpressure in the supply of new grads/new labor force entrants in this family of fields ("software development").

However, the "highly paid senior" is only around for so long before achieving financial independence and just not working anymore. Then what? The company didn't hire juniors because the "highly paid senior" did all the work. Whoops. Now the company lacks the pipeline of people to replace that senior.

It'll sort itself out in time, but the next decade will be interesting I think. Some companies will realize that they must make investments into the future of the labor force and will do better in the longer term. Other companies might indeed "fire the juniors" for some short term gains and find themselves lacking replacement staff later.

I see it the other way around: the senior engineer is expensive and costs a lot; 2 juniors are cheap. Who cares if their code is crap? They produce it really fast, so if they find a bug, they just fix it quickly. If anything, the time a Sr. spends thinking about how to do things "right" is seen as a waste of time. Whereas the jr. will produce an enormous amount of buggy code, but they can fix the bugs quickly by just throwing another prompt at ChatGPT to solve them.

Now some might say that the code will be terrible quality and buggy and full of holes and the users will hate it, it is never economically viable to build enormous systems on a house of cards like that. To which I respond, you just described every piece of enterprise software I've ever used ever.

Because as soon as you tilt from producing features to fixing bugs, you are losing traction and your competitors will win and now you are bankrupt.

I've seen entire businesses collapse because of this mentality.

Disagree -- I think chatGPT will make juniors more palatable to hire. ChatGPT will basically give juniors a free pair programmer to baby-sit their work/progress. Why pay extra for senior devs when juniors can become much more efficient thanks to ChatGPT becoming stack-overflow on steroids. I think the wage gap between junior and senior will actually drop massively. I predict teams will keep 1-3 architect level positions for designs and reviews and replace all seniors with cheaper juniors.
I really doubt that ChatGPT will be able to give the kind of guidance that turns juniors into seniors.

Getting juniors un-stuck on "simple" problems, maybe. Stack Overflow already does this. Image doesn't build? Search for the error message online. Don't know how to build the old software that only works in Ubuntu 14.10? Sure, you'll find that.

Suggestions on how to refactor, proper design of interfaces, what skill to acquire next? Maybe, but that will be a bigger surprise.

I think it could go either way depending on the product. For example in app/game/web development where code quantity > quality, hire more juniors who can bust out code with ChatGPT all day. But if you're developing software for medical devices, vehicle control systems, HFT, etc. Then nobody's going to let some college grad using ChatGPT touch it. You'd hire senior engineers who can be responsible for the reliable operation of the software and they can use ChatGPT for code review, test suites, etc.
Even in the gaming industry. Typically you have people developing an engine, common frameworks... tools whose work flows downstream to other teams - now I can just type to ChatGPT rather than go through requesting/reviewing, and see more quickly where I mis-designed my "framework", etc... I am afraid it's gonna be not great for junior engs altogether.

Every org has varying levels of engineers. This technology will make cheap grind, well, cheap. We see the progression. With the speed of changes, it seems we should all be worried (what does it matter that we'd be among the last affected, if we only fall a year after?).

> Companies are already bloated, imagine when they realize one overworked highly paid senior can replace 10 juniors.

That is already possible without AI and has been the case for a long time... the issue is nobody will stay to be that highly paid senior running entire projects because at that point you can just run your own shop and pocket the full profits.

It also causes a "single point of failure". If that senior gets hit by a bus then what? Can the company afford to bring in another senior that will take ~6 months to become productive?

I'm not disagreeing with you. I'm thinking of going solo myself.

The biggest problem going solo is getting the projects/contracts. Now it will become even harder so one has to charge less and less to compete.
I wanted to say the same thing - except that the senior can't actually perform 10x because they're too busy trying to train the next gen devs while attempting to deliver stories themselves in the fractions of time available to them.

Not to say that this is a bad thing, but the difference between a junior and senior is often much more than the difference in their salary.

Fixed pie fallacy. More likely, there will be 5x as many apps / companies with just as many jobs available. And the cost of everything will go down, except truly scarce things like Manhattan real estate.
This was already the case 10 years ago.

Except there was no AI, but an alternative called an offshored development center. You would send your spec and design document and get back, via email (or FTP if they were really cutting-edge), a number of files that would sometimes compile and even, according to the legends, actually work. The way this tech worked is that you generally had to wait overnight for your spec to "mature" into code.

Some places figured out they could hire local engineers for about 5-10x what they paid for this "offshored development center" tech and get better results.

I think this fear is unfounded because history shows this is not the case. We will adapt to the new level of productivity and our goals will adapt accordingly. What we are expected to produce will adapt. You were able to pump out 4 solid production grade pull requests per sprint? Then the expectation increases to a single dev being able to pump out 10. The company's ambition will grow as a result of the newfound efficiencies, and management will continue to request things that cannot be reasonably delivered on time with the people and tooling we have.
I don't know if that's going to be the case - companies can never have enough software and typically they just go until the budget runs out as the software is never "done". I think being able to build 5x the software with one dev means that each company is going to build 5x the software.
> imagine when they realize one overworked highly paid senior can replace 10 juniors

This already happens, the market is just not very efficient about it, e.g. a highly paid senior dev is not working at a company that only needs 2-3 developers, they're working at Google with 100's of devs.

> Disagree -- I think chatGPT will make juniors more palatable to hire. ChatGPT will basically give juniors a free pair programmer to baby-sit their work/progress. Why pay extra for senior devs when juniors can become much more efficient thanks to ChatGPT becoming stack-overflow on steroids. I think the wage gap between junior and senior will actually drop massively. I predict teams will keep 1-3 architect level positions for designs and reviews and replace all seniors with cheaper juniors.
I predict the wage gap will actually widen. StackOverflow-on-steroids can only give you so much, so seniors will still be in demand for what they do. It's just that the competition between juniors will be fiercer, so junior wages will drop.
How much more efficient is programming in Python with modern tools vs assembly? I would bet more than 5x
That's an interesting case and use of the word efficient.

Developer time reduced by 5x, CPU time increased by 500x.

When CPU cycles are cheap it's a logical tradeoff but it still grates against my sensibilities.

5x is a gigantic underestimate of how much developer time has improved. How long would it take a good assembly programmer to implement a CRUD web server connected to a database? Way more than 5x longer than what a mediocre Python programmer would need.
Depends on the field kinda. For a really extreme case, something like big dense linear algebra is going to just use BLAS calls anyway. For big enough matrices, all of the flops are coming from the library, the cost of calling from Python vs the cost of calling from C or Fortran is amortized anyway, and probably most people won’t beat a tuned BLAS however much developer time they throw at it.

It makes more sense to tune libraries excessively; we could say it is less of a trade-off and more of an allocation of the finite low-level-tuning developer resources to high-impact libraries.

Anyway it turns out that the only way to get most people to link to a good BLAS is to use Numpy and distribute it through some giant Anaconda Rube Goldberg machine, so, I dunno, people are weird.
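
(A rough illustration of that point: the Python-side overhead of one call disappears into the O(n^3) work that the linked BLAS does. The numbers obviously depend on your machine and on which BLAS your NumPy was built against.)

    # Big dense matmul: NumPy dispatches to the underlying BLAS (dgemm),
    # so the cost of calling from Python is negligible for large n.
    import time
    import numpy as np

    n = 4000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    t0 = time.perf_counter()
    c = a @ b
    dt = time.perf_counter() - t0

    flops = 2 * n**3                      # rough flop count for a dense matmul
    print(f"{dt:.2f}s, ~{flops / dt / 1e9:.0f} GFLOP/s")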

What does it even mean to say that a hypothetical assembly program is 500x faster than an existing Python program, when the assembly program does not even exist?

It's some kind of philosophical question.

Maintaining an assembly version of a modern software stack is not 5x more costly, it's simply not possible.

We should stop thinking about the trade-off as between developer time and CPU time. The CPU is an inanimate object which doesn't mind pushing around more electrons. What we're really throwing under the bus when we optimize for developer time is the customer's time.

If an inefficiency saves a developer 10 days of work, but results in an operation taking 100 milliseconds longer, and you have 50M customers who do that operation just one time, then you've wasted ~58 customer-days.

Good question. Why don't we pit one of today's Python programmers against an IBM VS assembler programmer on an IBM 360 system circa 1960 and see how far each gets in a week?
Let's make it more fair: what about one of today's Python programmers vs an IBM assembler programmer using Erlang/Elixir to develop a telephone switch?
> making a single programmer do the job of 5

I mean, I already do? And for totally mundane and benign reasons. This has been my experience in this industry for the last 8 years or so, though my first 9 years I felt like the pace was maintainable.

Do more with less. It's so common to come across situations where so and so left, their position won't be backfilled, the targets don't adjust to compensate for the productivity hit, and we can put up or shut up.

I'm kind of on the fence with this one. As someone who does programming at work, I don't want to be partially replaced by a prompt. However, there are also lots of sectors where the problems are big and solving them could help a lot of people, but money is an issue, so being able to access resources on the scale of large tech companies would be amazing.
You're not considering that companies will just spin up 5 projects, so 5 programmers produce a notional 25 people's worth of work. And hey, maybe they sack the expensive old guy and the AI spend isn't so great. Seems like a win.
Wouldn’t this actually create more demand for programmers? More businesses can exist that have a need for software. To date every advance in programming efficiency has resulted in more tech, not less.

Additionally there’s the math behind it. If Company A fires 50% of their staff because AI lets the remaining 50% work at twice the productivity then how will they compete with Company B that keeps their staff and now gets 200% efficiency?

The math is in favor of getting more, not fewer, developers.

Yep, I personally see the cost of doing a start-up going way down for individual developers or even junior developers.
You know, until that AI code breaks and you have no idea what's going on internally.

You know how hard it is to start debugging a project you haven't been onboarded to? Now imagine that every time something has a hiccup.

This is the ray of sunshine in all this AI stuff, to me.
Yes - and in another way, too. A lot of the demand for programmers is driven by successive waves of new technology adoption and investment - and AI is looking to be a motherlode that should keep us going for a while.
If you can have one programmer do the job of 5, then you will be defeated by companies using 5 programmers to do the job of 25.

Also, if your one programmer dies you now have no programmers and loss of all institutional knowledge, whereas the other company can lose several programmers and still be OK. There is value in having redundant programmers.

That would be awesome. We have a never-ending list of things to do and not enough time to do it all. What you're saying to me is that we can get through our list of priorities even faster, and stuff won't continually rot at the end of the list.
If as GP says AI could automate 1% of a single programmer's job (the boring part where you write code), then how on earth can you derive that a single programmer could do the job of 5 with AI? It's completely illogical.
The GP estimate is way off...

If they indeed wait for input from other departments/companies 99% of the time (so they just need to actually program 5 minutes in their 8-hour workday), then they could already be thrown out of a job and the company could make do with 1/10 the programmers, no AI required...

Clearly the company needs to dedicate a programmer to wait on each customer individually. You can’t have the same person waiting for two things simultaneously.
Can't believe companies are still single-threaded in 2023.
Exactly!
I suspect the post you’re replying to disagrees with the 1% figure.
many programmers (most?) spend hours a day coding and see it as the best part of their day?
Companies were bloated because of the profit potential of tech and organizational dysfunction.

AI wasn't needed to cut down on bloat - some combination of a few companies reaping the majority of the profit and reduced dysfunction is enough.

Companies will continue to hire even more people. Their whole point is growth.

If one can do the job of 5, the revenue per employee will increase.

They will hire even more employees.

Yeah, but they would then just start 10 times the number of projects at once and watch the majority of them fail.
To paraphrase Harold Abelson:

Computer Science is not really very much about computers. And it’s not about computers in the same sense that physics isn’t really about particle accelerators, and biology is not really about microscopes and petri dishes. It is about formalizing intuitions about process: how to do things [0].

[0]: https://www.driverlesscrocodile.com/technology/the-wizard-1-....

“Computer Science is no more about computers than astronomy is about telescopes”

- Edsger Dijkstra

This rant from a little while ago soured me on this quote: https://www.hackerneue.com/item?id=34940148
FWIW, that's actually a good rant.
And then some guy called Kepler invented a new kind of telescope.

(And we all started writing more quantum algorithms.)

Watching Abelson give that lecture (on video I mean, first lecture of SICP IIRC) made a lot of things click in my head as a neophyte programmer. Even after one lecture my understanding of the nature of computation had grown immensely.

He and Sussman are great at distilling and explaining abstract concepts in a clear and precise way.

That's irrelevant though, because actual programming is very much about computers, and about writing things the majority of which have already been formalized by computer scientists, and gluing code and programs into pipelines...
I just adore the hubris on full display here to say what one of the world's best computer science educators has to say is irrelevant.
First, this is an argument by authority. One of the "world's best computer science educators" can still say all kinds of wrong stuff. Especially someone as opinionated as Dijkstra, with many controversial opinions other world leading computer science educators disagree with.

Second, relevance and applicability depend on context. In the context of this discussion, about practical programming in the trenches, what Dijkstra said is irrelevant.

For starters, he was talking about computer science.

Programmers in the trenches (what the thread is discussing) don't do computer science. They merely apply stuff developed by computer scientists (but also by non-scientific practitioners in the field), in an informal (and often haphazard) manner, to solve business problems (often the same problems again and again, which is something totally beneath a scientist).

So, yes, something Dijkstra had said can still be wrong, regardless of his achievements. And it can also far more easily be irrelevant, as this just needs his saying to be unrelated to the discussion.

This is an argument of "challenging those who have the hubris to think Harold's statement is irrelevant to stop thinking small minded", if I were to call it anything.

Additionally, even if I concede it is an argument solely on authority, just because an argument contains a fallacious point doesn't make it untrue, otherwise that would be the fallacy fallacy.

>Additionally, even if I concede it is an argument solely on authority, just because an argument contains a fallacious point doesn't make it untrue

Notice how you also conveniently sidestepped my arguments about why it's also untrue (on top of containing a fallacious point).

We're not talking about Dijkstra, OP is paraphrasing Harold Abelson.

https://en.m.wikipedia.org/wiki/Hal_Abelson

As a computer science educator, it's worth remembering that Dijkstra was incredibly full of himself and his specific formal view of the field. There are large parts of computer science that have nothing to do with the machine and there are other parts that do. There's no single cut and dried answer here, we're a pretty diverse field that spans everything from mathematics to engineering to a touch of art and human factors.
What you say about Dijkstra is true, but we're not actually talking about a Dijkstra quote here, we're talking about Harold Abelson.

https://en.m.wikipedia.org/wiki/Hal_Abelson

I think we all started replying to a sibling comment that quoted from Dijkstra and we got it threaded wrong.

"Computer Science is no more about computers than astronomy is about telescopes” - Edsger Dijkstra

"Science advances one funeral at a time"

- Max Planck

Are you saying Max Planck is wrong here? There is plenty of evidence that he was right, that statements made by famous scientists hold fields back because they were reductive or hurt different perspectives. Putting their words on an altar is holding us back.

I think the parent's point is that even highly inspired wisdom can be irrelevant if applied to all situations

More like "it doesn't apply" than "it's inherently invaluable"

Also your comment is technically an appeal to authority

That's the most humble HNer for you btw.
It is indeed not particularly relevant to the emergence of AGI and its effect on our work.
This is also one of the reasons India taking over all of the programming work didn’t really happen. There are numerous issues (time zones, language, etc.) but business people not being able to document perfectly what they want to have built, considering all corner cases and paths, is a big one.
Even so, India won't take over programming work, for the same reason no other country can: only a percentage of the programmers there are outstanding, and the rest are mediocre. Because their population is huge and programming education reaches widely, they produce more outstanding programmers than other countries, but it still won't be enough.
Ultimately the model that worked was "I have this tightly scoped project that no one really wants to work on, that's completely self-contained, and is going to require a ton of manual work", and hiring contractors to implement your own design.

Otherwise if there's a lot of back and forth required or generating a design, forget it. Companies giant and small have tried it and eventually realized "F it" and gone back to in-house teams.

I have the same feeling: people are very focused on the ability of these AI generators to create working code from super-specific, well-formed prompts, when in reality figuring out what the prompt should be accounts for 80% of the job.
Don't fall into this mental trap. You can get into recursion quite easily here, and figuring out what to prompt can start from simple general questions - and there is no need for a developer at all, aside from the current limitation that the copy/paste/run workflow has to be done manually.
The raison d'être of COBOL (now Cobol) in the 1950s was that "there is no need for a developer at all". Managers can produce their own reports.

How did that turn out? A whole profession was created.

The thing that comes closest to "no developer needed" is Excel. But you really need someone who knows programming if you want a reliable, robust product, even in Excel.

It's astonishing to see the goalposts move so quickly. The cope of "well, okay, it can do that, but that's not even the hard part!" when just a year ago this entire product was almost unimaginable.
The goalposts haven't really moved though? The reality that a lot of people have thrown "engineer" in their title but just write code given a spec or tightly scoped requirements or design has been true for a while now.

But for quite a few folks, the coding is the easy part. When building larger systems you're not even that often implementing things from scratch. The decision making, dealing with existing tech debt, tooling, etc. is the hard part. Ambiguity is the hard part and it's always been that way.

Don't get me wrong GPT-* is impressive. Heck, I pay for a subscription. But it won't really make a lot of folks at my company notably more productive.

The realization that software development is only maybe 5% coding came way before GPT.

> The cope of

Whenever I see someone use 'cope' that way, I immediately think they're making a bad faith argument. Is that intentional?

I don’t think anyone is claiming to not be impressed. Yes, we’re all impressed. This was unimaginable sci-fi just 5 years ago. You did it, we’re impressed! You won.

The next step after that is for people to figure out how to justify their continued relevance. I think that’s a pretty natural reaction.

I’m not concerned. If AI replaces my job then ok I’ll find something else to do. Self driving cars seem to have stagnated so maybe I’ll be an Uber driver
I wouldn't count on that, you can call a self driving taxi right now in Phoenix.
I'm impressed. But I want it to be better, so I don't have to spend so much time coding and can trust it to give me good code so I can offload a lot of my efforts on to it. Right now I have to check and verify or go back and forth with prompts so much I almost would have been better off writing the damn code in the first place.

Meanwhile I've got a backlog of about 100 personal projects I've never gotten around to making (especially creating digital versions of my board game designs, of which I have about 70 at this point) that I just haven't had the energy to work on myself, that I'd love to be able to just upload the rules document for it and get some sort of working game spit out the other end, or at least a good chunk of it, and I can take it from there.

And then I can test a new rule by just rewriting the rules document, as opposed to having to manually code every rule change into the game.

I don't think I'd run out of things for it to make if it was as good at complex software generation as A.I. is right now with art generation.

Do you find AI to be good with art generation? I can't use any of the art in the projects that I do without extensive inefficient editing since the stuff it spits out isn't how a human would draw.

It's good for concept browsing, but not much more for me at the moment.

I haven't messed with it much since I find the process for the 'good one' (Midjourney) annoying (go onto a Discord channel, type your request in public for a bunch of people, wait a while, hunt for it among the long channel of requests, etc).

I'm assuming the process has gotten better since, but I don't know. I'm mostly just using free vector art and simple colors/shapes or whatever simple things I can make in Blender for my art still, in part because there's such a backlash against using any A.I. art right now.

Most of it is judging by what people have been saying in groups online. Some people have found it very useful and use it extensively, like for their board game art.

It doesn't even have to get 100% of the way there (for coding games based on rulesets). Even 75% would probably save me a lot of time and allow me to find the energy to close the gap as opposed to not even starting since I have so many other projects going on.

Have you ever worked at a software firm of greater than say 30 people? Doesn’t resonate with my experience at all, and the 30+ people are there not just to write code.
Everyone's focused on writing code - is the code even needed when a good enough AI exists? How many of us are writing code to extract 10% more efficiency out of humans? If they go, so do we.

Also if software development does survive, it's going to look very attractive to all the other unemployed people.

I think currently those 6-figure salaries are looking great to like most of humanity - I don't expect seismic shifts here.

but there is a funny mechanic at play certainly: once some dev work gets cheap as a commodity, demand surges suddenly since at a low enough price point and availability everyone wants to have some need solved that made no financial sense before.

>Basically, 99% of the effort is NOT WRITING CODE. It's all about communication with people, and problem solving. If we use GPT-X in our company, it will help us with 1% of our workload

First, if you did have GPT-X (say GPT-10) in your company, there wouldn't be much back-and-forth communication either. Those parts would still be handled with GPT-X talking to another GPT-X in the other company. Even the requirements might be given by a GPT-X.

Second, even if that's not the case, the part of doing the communication can be handled by non-programmers. Then they can feed the result of the communication to GPT-X and have it churn out some program. Perhaps they would keep a couple of developers to verify the programs (sort of like GPT-X operators and QA testers) and get rid of the rest.

As for the rest of the current team of developers? GPT-X and the people running the company could not care less about them!

> Those parts would still be handled with GPT-X talking to another GPT-X in the other company. Even the requirements might be given by a GPT-X.

What happens if one (or more) of the GPT-Xs starts having hallucinations while they're busy working on this project?

> Second, even if that's not the case, the part of doing the communication can be handled by non-programmers.

I was in a meeting with a sales manager and tech team a few days ago. Sales manager had been talking to a potential customer about a product that neither he nor the customer properly understood. They both thought it would be an excellent fit for a particular (new) purpose, one for which it was not designed.

As it turned out, everyone on the tech team knew that both sales manager and customer were utterly and catastrophically wrong, but it took the best part of an hour to finally convince him of this.

It's quite hard to have useful conversations about stuff if you don't actually understand it.

Is hallucination a systemic forever-problem that will not be solved, mitigated, or in any other way rendered inconsequential?

Also, having conversations about things you don't understand with a machine, where you don't have to keep up social decorum and can ask the dumbest questions, should help a lot with improving the decision making of non-technical personnel.

> Is hallucination a systemic forever-problem that will not be solved, mitigated, or in any other way rendered inconsequential?

In the same way that wrong results in a search engine will not be solved or mitigated in its entirety, yes. These new generative AIs are big search engines that bring results from several content points and combine them into a single, plausible stream of words.

> having conversations about things you don't understand with a machine, where you don't have to keep up social decorum and can ask the dumbest questions, should help a lot with improving the decision making of non-technical personnel

This sales manager didn't know that neither he nor his customer properly understood the product until he was specifically called out on it (by me, as it happens; at least the boss laughed). Are we expecting GPT-X to sit in on Teams meetings and ask questions like that, too? The sales manager literally did not know he needed to start asking questions about the product. The tech team were calling BS on it as soon as he reported his conversation with the customer.

"It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.", which may (or may not) have been said by Mark Twain.

"Speaker Coach - now with AI" isn't that far-fetched, given Microsoft's relationship with OpenAI and how quickly they did the same with Bing/how hard they're now pushing it

https://support.microsoft.com/en-au/office/suggestions-from-...

> Second, even if that's not the case, the part of doing the communication can be handled by non-programmers.

It can’t - only programmers will know which follow up questions to ask. GPT will be able to ask those questions before non-programmers will be able to.

Half the work is nitpicking on date formats or where some id comes from or if a certain field is optional, etc.

Last paragraph is spot on. Would also add a vast amount is spent looking at the code and the system and just understanding what goes where.
The problem is that a GPT capable of doing all those things at the level required is also capable of running the company. Someone with capital can start the business and set GPT-X to go out and maximize paperclip profits.
Why aren't you considering that, rather than talking to you, "they" would already be talking to the LLM (likely trained on your code, among other data), while you get zero billable workload in the first place?
The issue being that neither they, nor the LLM has the proper model for the problem domain and so don't ask the right questions when trying to extract business requirements.

Additionally, this is "stateless" to an extent. There's no architectural plan for how it should work when you have an LLM do it. "We're using X now but there are plans to switch to Y in some number of months." This could lead to making an abstraction layer for X and Y so that when the switchover happens there is less work to be done - but that requires forward looking design.

If "they" only describe the happy path, there is no one to ask about all the unhappy paths, edge cases and corner cases where the naive implementation of the problem description will fail.

Hypothetically, yea, "they" could be trained to think through every possible way the generated code could go wrong and describe how the code should work in that situation in a way that isn't contradictory... but that remains an unsolved problem that has nagged developers for decades. Switching to an LLM doesn't resolve that problem.

You don’t need to have a model if you are a sufficiently advanced autocomplete.
My intuition here is that it's because people don't always say what they mean, or know how to describe what they want.

I've been working on a database migration recently, and I look forward to the rare moments when I get to write queries and analyze actual data. The vast majority of my billable hours are spent trying to tease out the client's needs by going over the same ground multiple times, because their answers keep changing and are often unclear.

It takes a lot of processing to figure out an implementation for someone who will straight up describe their requirements incorrectly. Especially when a higher-ranking person comes back from vacation and says "no, everything you nailed down in the last two weeks is completely wrong".

I don't think any of the current LLMs are going to handle these types of very common situations better than an experienced human any time soon. It's like that last 1% of self driving which may actually require AGI. No one can say for sure because it's not cracked yet. I think most of us will continue to have job security for quite a while.

> Especially when a higher-ranking person comes back from vacation and says "no, everything you nailed down in the last two weeks is completely wrong".

Yes, and at some point this high-ranking person gets fed up enough with this now-inefficient use of time and money that they will just sort it out using an LLM tuned to handle the situation better, if not today then tomorrow.

Maybe they will pay someone to coach them for a week on how to "talk" to the LLM, but other than that, the one who gets paid in the end is OAI/MS.

Imagine how angry this hypothetical person will be when they get the same problem from the LLM, after all those extra steps.
Imagine an LLM tuned to eliminate misunderstanding and ask why at least 5 levels deep… Without fearing to irritate the boss or to create an impression of being not smart, both possibly harmful for human career but irrelevant to unthinking software tool.
I'm pretty sure you may be right. I'm also worried that what you've just described is the kind of task that leads to burnout in large doses. And I'm not sure humans are so great at it either.

I had one job that involved a small amount of coding and mainly hooking together opaque systems. The people behind those systems were unresponsive and often surly. I had to deal with misleading docs, vague docs, and subtle, buried bugs that people would routinely blame on each other or on me, and I was constantly on a knife edge balancing political problems (e.g. don't make people look stupid in front of their superiors, don't look or sound unprepared) with technical concerns.

It was horrible. I burned out faster than a match.

I'm sure ChatGPT couldn't do that job, but I'm not sure I could either.

If most tech jobs turn into that while the fun, creative stuff is automated by ChatGPT... that would be tragic.

My two cents is that the parts of the job that are more like product management will become more dominant but still not exclusive, and the parts that were more like coding will become less dominant but still not vanish. Many of us, as you describe, already do jobs that look a lot like this. But for me, it's not consistently that way; there are periods where I'm almost entirely coding, and periods where I'm almost entirely doing communication. I do expect a shift in this balance over time.

The other thing that I spend a huge amount of my time doing - consistently more than writing code - is debugging. Maybe these models really will get to the point where I can train one on our entire system (in a way that doesn't hand over all our proprietary code to another company...), describe a bug we're seeing, and have it find the culprit with very high precision, but this seems very far from where the current models are. Every time I try to get them to help me debug, it ends in frustration. They can find and fix the kinds of bugs that I don't need help debugging, but not the ones that are hard.

> Basically, 99% of the effort is NOT WRITING CODE

I've come to realize this is true in more contexts than I would like. I've encountered way too many situations where "sitting on your hands and not doing anything" was the right answer when asked to implement a project. It turns out that often there is radio silence for a month or so, then the original requester says "wait, it turns out we didn't need this. Don't do anything!"

This is exactly right. Actually writing the kind of code that ChatGPT produces is a vanishingly small part of my job. And there's a ton more specialized scenarios to deal with, like next week when the third-party company is breaking the contract in <contract_json>.

If you want to hire a developer to implement qsort or whatever, ChatGPT has them beat hands-down. If you want to build a product and solve business problems, there's way more involved.

Consider creating an AI stakeholder that speaks for the client. This approach would allow the client to provide input that is wordy or scattered, and the consultant could receive immediate responses by asking the AI model most questions. Better yet, they can ask in a just-in-time manner, which results in less waste and lower mental stress of collecting all possible critical information upfront.

As the project progresses, the AI model would likely gain a better understanding of the client's values and principles, leading to improved results and potentially valuable insights and feature suggestions.

There's major $$$, legal, and security ramifications for clients in many cases. Having an AI that can't properly deal in ambiguity and hallucinates an outright reckless idea 1% of the time is completely unacceptable.

Writing code, sure. A human ultimately reviews it. I suspect in the legal world a lot of legal writing can also be automated to some degree. But strategic decisions, designs, etc. very much need a human pulling the trigger.

I agree, but I would put it like this: 99% of a software developer's job isn't writing code, it's getting consensus across stakeholders on what the "prompt" for the coding task should be, how that prompt should change over time, which parts of the problem should be included in this week's prompt and which ones should be tackled next week, etc. Also, more often than not, the task isn't exactly about generating code, it's about sending data between various clients and servers, tweaking code where necessary for compatibility and new data shapes.
Just wait until XXQ adopts the same AI technology to keep the AI-using company’s business. Then the AI can simply coordinate with one another, and make the appropriate changes faster than currently possible. Microsoft is well positioned to do something like this and already working toward this end to end collaborative AI.
Absolutely this. I still don't understand why people stop seeing the AI at one end of the business/computing/requirements gathering model. You should have it at both ends, "converging" towards the target.
By the time code is being written the job is effectively done.

Unless your problem space is unsolved (where LLMs are unlikely to be useful either) very few devs are spending much time on the coding part of their 84th CRUD app.

This is exactly why I'm so tired of these "can AI write code" think pieces. I assume people writing this crap aren't actual developers. Maybe it's management fan-fiction.
I suspect there's a strong invisible ideological undercurrent pushing a lot of this. When I was younger and enthusiastic about things like a universal basic income, I would often follow the latest murmurings, always ready to let my hype (and hyperbole) meter go to 11. I remember when I saw the first news about some drone delivering a pizza (in New Zealand?) I immediately jumped to it foretelling the imminent end of all delivery jobs, with broad customer service not especially far behind. There's even the fully automated Henn-na hotel in Japan, I mean omg!

In my naivete, the idea I had is that if the jobs disappeared en masse then a social solution to the economy would be forced to be enacted. So I was essentially hoping to see the destruction of normalcy and employment in any new technology. I would expect that view is not uncommon given the direction of contemporary education. It feels analogous to cows hoping for the end of beef/milk harvesting. My beleaguered bovine buddies, what awaits you there is something rather different than cowtopia.

I think the difference here is that LLMs are powerful, general purpose and suited to take on many common office tasks as well as accelerate programming work. I think “we need 10% fewer white collar workers” is a massively destabilizing scenario that is not very far fetched.

I also completely agree that society/politics will not have caught up with these developments in time to mitigate them.

"Maybe its management fan-fiction"

:D Priceless!

see also "the future is no code!" fad of a couple of years ago
Never went away; been hearing about it since msft sold the idea to managers in regards to SharePoint... Fucking SharePoint. They can design their own forms and hook the data up to bits of automation. Like magic for muggles.
And in infra there's figuring out what new approaches you can use to replace stuff in your infrastructure. Then figuring out the migration costs, evaluating its usability, and dealing with a director that's been sweet-talked by a vendor into using some other solution that sucks. Then deciding whether to just build in house b/c none of the solutions quite work and would require additional stuff to build on top. Then when you finally decide on something the back and forth with the vendor because you need to handle some unique thing they hadn't thought of.

The complexity in software engineering is almost never coding. Coding is easy, almost anyone can do it. Some specialized aspects of coding (ultra low latency realtime work, high performance systems, embedded) require deep expertise but otherwise it's rarely hard. It's dealing in ambiguity that's hard.

The hype around GPT-* for coding generally confirms my suspicions that 70+% of folks in software engineering/development are really "programmers" and 30% are actually "engineers" that have to worry about generating requirements, worrying about long term implications, other constraints, etc.

And every time that comes up those folks in the 70% claim that's just a sign of a poorly managed company. Nope. It's good to have these types of conversations. Not having those conversations is the reason a lot of startups find themselves struggling to stay afloat with a limited workforce when they finally start having lots of customers or high profile ones.

We could have this already.

- Have a cron job that checks email from a certain sender (CRM?).

- Instruct an OpenAI API session to say a magic word to reply to an e-mail: "To reply, say REPLY_123123_123123 followed by your message."

- Pipe received email and decoded attachments as "We received the following e-mail: <content here>" to the OpenAI API. Make it a 1-click action to check if there is a reply and confirm sending the message. If it does not want to send a message, read its feedback.

I just had ChatGPT write me a JMAP client (for Fastmail) that'd create a draft. Then I asked it to:

"Write an OpenAI API client that would take "new_message" from the above, feed it to the API with a prompt that asks the model to either indicate that a reply is needed by outputting the string "REPLY_123123_123123" and the message to send, or give a summary. If a reply is needed, create a draft with the suggested response in the Draft mailbox."

It truncated the "REPLY_123123_123123" bit to "REPLY_", and the prompt it suggested was entirely unusable, but the rest was fine.

I tried a couple of times to get it to generate a better prompt, but that was interestingly tricky - it kept woefully underspecifying the prompts. Presumably it has seen few examples of LLM prompts and results in its training data so far.

But overall it got close enough that I'm tempted to hook this up to my actual mailbox.
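
(Something like this is the core of it. A sketch that assumes the openai Python package's 1.x chat-completions interface, leaves out the JMAP/draft side entirely, and treats the marker and the prompt wording as placeholders.)

    # Rough sketch of the marker-based flow described above; the JMAP draft
    # creation is omitted. MARKER, model name, and prompt text are illustrative.
    from openai import OpenAI

    MARKER = "REPLY_123123_123123"
    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def handle_incoming(new_message: str) -> tuple[bool, str]:
        """Return (needs_reply, draft_or_summary) for one incoming e-mail."""
        prompt = (
            "We received the following e-mail:\n\n"
            f"{new_message}\n\n"
            f"If it needs a reply, output {MARKER} followed by the reply text. "
            "Otherwise output a one-paragraph summary."
        )
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        text = resp.choices[0].message.content.strip()
        if text.startswith(MARKER):
            return True, text[len(MARKER):].strip()   # draft body to save
        return False, text                            # summary only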

Maybe if you train/tune GPT-4 with enough samples of similar interactions, it will learn to do it too.
If anything it will be much better as it won't mind waiting and asking for answers nor get frustrated with incompetence.
Honestly one of my favorite early applications of AI was as customer service. The CS people I talk to always seem pissed with something.

One conversation:

Me: "Hello, I'd like to ask why all the loyalty points from my MILES card is gone."

CS: "Sir, it says here that you upgraded your MILES card to a SMILES card. If you did not request a points transfer, they are gone."

Me: "Okay, how do I do that?"

CS: audible eyerolling "You should have done that while applying for an upgrade."

Me: "You can verify my details for both, right? Can't you transfer the points over?"

CS: "Ugh." Hangs up

Whereas AI would always politely answer questions, even the dumb ones like this.

Customer service is reading from a script. Why would AI help you? Customer service exists to deflect you from getting back what the corpo stole from you. The question is whether it is easier for you to social-engineer the CSR into helping you, or prompt-engineer the AI into helping you.
Generally agreed, although I think LLMs in the near- and medium-term will end up being useful for things like:

* checking if code will be impacted by breaking changes in a library upgrade

* converting code to use a different library/framework

* more intelligent linting/checking for best practices

* some automated PR review, e.g. calling out confusing blocks of code that could use commenting or reworking

Yes, it can write code; someone demoed developing "Doom"-style ray tracing/ray casting with GPT-4 the very first day it came out, and it was impressive. Programmers would still program, but the program will no longer be code; it will be "GPT prompts". I suspect with time, though, we won't need to write amazing prompts: you ask GPT for a solution, and it will figure out the edge cases by asking you questions. If it's 2 people, it would query you both and resolve your conflicts. Programmers will be replaced with AI. We need to get over it; the question we should be asking is, what's next?
Seems like just a higher level of abstraction: prompts become input for generating high-level language code output.

It's an electric bicycle for the creative mind (how long until the first one-person unicorn?), I don't anticipate much success for those trying to use it as a self-driving car for half baked ideas.

How well do you suppose GPT-4 would have done at that task had no human developed raytracing in the first place? Would it be able to derive the construct based on its knowledge of programming and physics?
Yeah, I don't see AI replacing programmers. Or any job that's the slightest bit interesting. I see AI as another tool in our toolbox that will help us do our job better.

People have been working on medical and judicial expert systems for ages, but nobody wants to put those systems in charge; they're just meant to advise people, helping people make better decisions.

And of course chatGPT and GPT-4 are way more flexible than those expert systems, but they're also more likely to be wrong, and they're still not as flexible as people.

Sure, but in fairness to the original post, it's about whether Chat GPT can code. Not replace software engineers.

And in your scenario where chatGPT can code but someone needs to gather requirements, it still doesn't necessitate software engineers. I'm not worried about that personally but I don't think the "communicating between stakeholders" skill is such a big moat for the software engineering profession.

> It's all about communication with people, and problem solving. [...] So, I couldn't care less about it.

These things seem very ripe for LLM exploitation...

If the us/they back and forth happens over email, perhaps between two different AI instances, that whole process would happen much faster though? It's not like ChatGPT can't review the contract json and ask relevant questions. Granted, the problem solving part might be delegated to a human, but the purely routine back and forth part seems already possible?
Maybe some day. But I tried it just now on a database design I've been working on for two months, and it spits out something superficially close immediately from a two-sentence prompt. On one hand that's impressive; it's interesting and somewhat correct, but all the interesting parts are missing or wrong, and it never gets beyond that, not even with my help. No sane person would answer so confidently while being so superficially useless.

A sane approach would be to start understanding the requirements and work from there, trying to figure out where the challenges are.

GPT can't do this currently.

I once had to implement a Swedish standard for energy usage reporting.

EVERY FIELD IN THE STANDARD WAS OPTIONAL.

That was one of the most not-fun times I've had at work :D Every single field was either there or it was not, depending on whether the data producer wanted to include it, so the whole thing was just a collection of special cases.
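To make it concrete, an all-optional schema pushes every decision onto the consumer. A sketch of what handling it ends up looking like (the field names and the EnergyReport type are made up for illustration, not the actual Swedish standard):

    # Sketch of consuming an "everything is optional" report format.
    # Field names and the EnergyReport type are invented; this is not the real standard.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EnergyReport:
        building_id: Optional[str] = None
        heating_kwh: Optional[float] = None
        electricity_kwh: Optional[float] = None
        reporting_period: Optional[str] = None

    def total_usage(report: EnergyReport) -> float:
        # Every field becomes its own special case, because the producer may have omitted it.
        total = 0.0
        if report.heating_kwh is not None:
            total += report.heating_kwh
        if report.electricity_kwh is not None:
            total += report.electricity_kwh
        # ...and so on for every other field the producer may or may not have sent.
        return total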

So, like Protocol Buffers?
That does not make me feel any safer. The problem is that ChatGPT et al. can include that part of the creation process in their token space.

So it's perfectly possible to have it eventually iterate back and forth with the client and not only output the code but also the conversation with the client leading up to it.

It's really about a misunderstanding on the value-stream mapping from concept to production. The claims that GPT-X will write code and thus cover the whole value-stream is conflating the very last steps with the whole process.
Not sure about that. Theoretically, you can talk to GPT-X pretending to be your manager, and then your manager can talk to GPT-X pretending to be you. Then the two instances exchange information in a format much more efficient than human conversation. Sounds like an efficiency boost, and if expanded, this system avoids a bunch of office politics and helps with everybody's mental health.

   If we use GPT-X in our company, it will help us with 1% of our workload
I think there are many such cases. Another one that comes to mind is adding features to a large/legacy code-base. Writing the new code/function is a small part of the work. The main part of the work is first understanding and agreeing on how/where to implement the changes, sometimes across multiple teams, and the implications/knock-on effects to other software components, potential API changes, updating test suites, etc...
And this is one of those things where it has to be done correctly by someone who knows what they're doing. It's not just "oh, I deployed a change and broke something; I'll just revert it". Often it means major migrations, etc.

There's a disconnect between folks writing small CRUD or mobile apps for clients and folks that have to work on large codebases with lots of complex systems.

Companies will have their own homegrown models trained on doc and data corpus stacks and fine-tuned facets served by MS/OAI GPT-X and other cloud shoggoth gigacorps. Company A’s model will talk to company B’s model and they’ll figure out all of the above and complete the job in the time it takes you to blink and take a sip of your coffee. Bureaucratic grind lag and lossy communication fuzz may be some of the first casualties of the next few years, and I can scarcely begin to predict any further out than that.
> 99% of the effort is NOT WRITING CODE

Writing code is still part of the job. In my company, I'd say it's still very roughly 50% of the job. If I can be a bit more efficient thanks to GPT, that's great. Actually, I already use it for writing simple things in languages I'm not proficient with, or to improve a particular piece of code that I know can be rewritten in a more idiomatic way. It's not perfect, but I've found it useful.

It's not going to replace SWEs, but it's going to make us more productive.

So I recently finished a job where I had to create a custom POST endpoint on Salesforce, so it'd take a simple JSON payload, apply some logic, and save to the database. The job itself was a few hours with tests, etc. Well, guess what: almost 100 emails and two months later, the project is still not finished, because of the simple middleware that was supposed to send the JSON to my endpoint and is as basic as my endpoint. ChatGPT can write the code, but all the BS in between will need humans to deal with.
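For a sense of scale: the endpoint itself really is tiny. The real one was Apex on Salesforce, but a generic sketch of the same shape in Python (the Flask app, table, and route here are hypothetical stand-ins, not the actual implementation) shows how little of those two months was code:

    # Generic "grab the JSON, apply some logic, save it" endpoint.
    # Hypothetical stand-in for the real Apex implementation: app, table, and route are made up.
    import json
    import sqlite3
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    db = sqlite3.connect("records.db", check_same_thread=False)
    db.execute("CREATE TABLE IF NOT EXISTS records (payload TEXT)")

    @app.post("/records")
    def create_record():
        data = request.get_json(force=True)
        # "Apply some logic" goes here: whatever validation/mapping the contract requires.
        db.execute("INSERT INTO records (payload) VALUES (?)", (json.dumps(data),))
        db.commit()
        return jsonify({"status": "created"}), 201

The hundred emails were about the middleware on the other side, not about anything in a file like this.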
Sorry, is the debate here whether GPT can engage in a conversation with someone and respond using previous context? Why would any of this present a challenge given its known abilities?
This was also my reaction to the comment. Go play out this type of conversation with ChatGPT. Does it actually do worse at this than at writing the code? Or is "but code generators can't do the requirements part" just a cached thought?
It's good at tech doco, i.e. take my dot points and make prose. Great for copy, great for having that nice corporate drone voice.
I'm feeling philosophical here. To me, solving a problem is living. No machine will do the living unless we know what living is, so that we can try to bake it into a machine. We can use ChatGPT to take blind, systematic steps on the way to solving a problem, but never to 'solve' the problem. For solving is living.
I run into this a lot myself. In our paper (https://arxiv.org/pdf/2303.07839.pdf) we specify a 'Specification Disambiguation' pattern that attempts to address this very thing.
Obviously you just formalize the interface for exchanging API contracts... and do pre-delivery validation...

Also, ChatGPT would likely be able to extrapolate. It would just need to write an email to XXQ to confirm the change.

Cope harder... the fact that you can write an email won't save you.

- ... 2 days later...

- us: umm, there's something not specified in <contract_json>. What about this part here that says that...

- ... 2 days later...

Can you replace this part with ChatGPT talking to another ChatGPT to generate questions and answers instantly?

The way I see it is that instead of hiring someone you'll be able to add a bot on slack that you can communicate with, laughs at your jokes and writes perfect PRs in milliseconds.
Yeah, AI will replace the fun parts of our jobs and leave us with the tedium.
ChatGPT is just a tool. With it you can create a service and fine-tune it to specific use cases. Let's imagine there's some cool startup called chris-the-it-consultant.ai that uses ChatGPT in the backend. Here's a prototype:

System Prompt: You are CHRIS the IT consultant. Once you take in a question, make sure you understand all the requirements. Don't write code until you have an explicit request to do so. First make sure you ask all the questions relevant to solving the problem and needed by the code. All the responses as CHRIS should be written as CHRIS: your response here.

    Sure, I understand. As an IT consultant, it's important to fully understand the problem and requirements before proposing a solution or writing any code. Please let me know what question you have and I'll do my best to ask all relevant questions to ensure I fully understand the problem.
> CHRIS, we need a new basic POST endpoint

    CHRIS: Great, I'd be happy to help you with that. Could you please provide more information about the purpose of this endpoint? What kind of data will be sent in the request body? Will this endpoint be used to create new data, update existing data, or perform some other action?
> Third-party company XXQ will let you know the details. They will be the ones calling this new endpoint. But in essence it should be very simple: just grab whatever they pass and save it in our db.

    CHRIS: Understood. It sounds like the purpose of this endpoint is to receive data from a third-party company, XXQ, and store it in your database. Here are some questions to help me better understand the requirements:
    What format will the data be in? Will it be JSON, XML, or some other format?
    What kind of data will be included in the request body? Will it be simple data types like strings and numbers, or more complex data structures like arrays and objects?
    Will there be any validation or transformation required on the data before it's stored in the database?
    Are there any security or authentication requirements that need to be considered for this endpoint, given that it will be accepting data from a third-party company?
> We got this contract here: { product_id: string, product_price: float, product_name: string }

    CHRIS: Great, thanks for providing that information. Based on that, here are the next steps to create the POST endpoint:
    We'll need to set up an API endpoint to listen for POST requests.
    We'll need to parse the request body, which should contain the product information in JSON format.
    We'll need to validate the data to ensure that it contains the required fields (product_id, product_price, and product_name) and that the data types are correct.
    Once the data has been validated, we can save it to the database.
This is a real ChatGPT response (current ChatGPT - GPT 3.5 turbo, not GPT-4, and not a real system-prompt). Not bad for a prototype! Now give CHRIS Github/Vercel/SSH access keys, SMTP/IMAP, Slack and a Whisper interface to Zoom/Meet/Teams and invite it to the 200 meetings where things will be changed around 1000x before being released. Raise some VC money and you are all set!
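For what it's worth, the plumbing behind a prototype like CHRIS is only a few lines. A minimal sketch against the pre-1.0 OpenAI Python SDK that was current at the time (the system prompt is abridged and the chris_reply helper is made up for illustration):

    # Minimal sketch of "CHRIS the IT consultant" on top of the chat completions API.
    # Assumes the pre-1.0 openai SDK and OPENAI_API_KEY in the environment; helper name is illustrative.
    import openai

    SYSTEM_PROMPT = (
        "You are CHRIS the IT consultant. Make sure you understand all the requirements. "
        "Don't write code until explicitly asked. Ask all questions relevant to solving "
        "the problem first. Prefix every response with 'CHRIS:'."
    )

    history = [{"role": "system", "content": SYSTEM_PROMPT}]

    def chris_reply(user_message: str) -> str:
        """Send the user's message with the running conversation and return CHRIS's answer."""
        history.append({"role": "user", "content": user_message})
        response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
        answer = response.choices[0].message["content"]
        history.append({"role": "assistant", "content": answer})
        return answer

    print(chris_reply("CHRIS, we need a new basic POST endpoint"))

Everything hard about the startup lives in the Zoom/Slack/SSH glue, not in this loop.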
ChatGPT will eventually help with debugging, suggestions, idioms, finding security flaws, recommending libraries, boilerplate, finding stuff etc..

Or maybe not ChatGPT but something like it.

How good is GPT-4 at writing emails to nail down requirements?
The thing is, I think ChatGPT can help a lot with this as well, albeit not in its current form/implementation.

It just needs some knowledge repository centralization.

Oh, wow. Your API story just described how I spent eight months of 2022.

I thought I was going crazy.

Now I'm sad that this is just how it is in tech these days.

In your scenario, what will happen is that there will never be a need to contract out whatever the customer is trying to do.
The counter argument here is building a new codebase to solve a problem that a single engineer can articulate and map out.
Wouldn't it just spit out a solution every time without a care in the world? That sort of messy communication is only bothersome for humans because writing code takes time and effort, plus it's mentally taxing to change requirements several times. Boredom is a big issue too. None of those challenges are relevant for a computer.
So it sounds like AI researchers should focus on replacing both sides - sounds like it would be much more efficient. (tic)
...what if it was two AIs deciding XD
But once XXQ has their own ChatGPT they can just communicate with each other and hash this out 1000x faster.
What if they replace "us" with GPT and one programmer to glue together the snippets it provides?
It can sit there and communicate with people endlessly. In fact, that is what it's good at.
Make the AI handle the conversation on both sides? No need to wait a few days for back and forth
You won't need this to-and-fro. GPT-X will generate both sides for you.
- Looks like they're using an auth protocol we don't support...
I've been working as a freelance software developer for about 5 years now, and my billing model is such that I only bill for hours spent writing code. Time spent communicating with people is non-negligible, which means that it has to be baked into my hourly rate. So I'm very cognizant of how much time I spend communicating with people, and how much time I spend writing code.

I strongly disagree that 99% of the effort is not writing code. Consider how long these things actually take:

> - they: we need a new basic POST endpoint

> - us: cool, what does the api contract look like? URL? Query params? Payload? Response? Status code?

> - they: Not sure. Third-party company XXQ will let you know the details. They will be the ones calling this new endpoint. But in essence it should be very simple: just grab whatever they pass and save it in our db

> - us: ok, cool. Let me get in contact with them

That's a 15 minute meeting, and honestly, it shouldn't be. If they don't know what the POST endpoint is, they weren't ready to meet. Ideally, third-party company XXQ shows up prepared with contract_json to the meeting and "they" does the introduction before a handoff, instead of "they" wasting everyone's time with a meeting they aren't prepared for. I know that's not always what happens, but the skill here is cutting off pointless meetings that people aren't prepared for by identifying what preparation needs to be done, and then ending the meeting with a new meeting scheduled for after people are prepared.

> - company XXQ: we got this contract here: <contract_json>

> - us: thanks! We'll work on this

This handoff is probably where you want to actually spend some time looking over, discussing, and clarifying what you can. The initial meeting probably wants to be more like 30 minutes for a moderately complex endpoint, and might spawn off another 15 minute meeting to hand off some further clarifications. So let's call this two meetings totalling 45 minutes, leaving us at an hour total including the previous 15 minutes.

> - us: umm, there's something not specified in <contract_json>. What about this part here that says that...

That's a 5 minute email.

> - company XXQ: ah sure, sorry we missed that part. It's like this...

Worst case scenario that's a 15 minute meeting, but it can often be handled in an email. Let's say this is 20 minutes, though, leaving us at 1 hour 15 minutes.

So, for your example, let's just round that up to 2 hours.

What on earth are you doing where 3 hours is 99% of your effort?

Note that I didn't include your "one week later" and "2 days later" in there, because that's time that I'm billing other clients.

EDIT: I'll actually up that to 3 hours, because there's a whole other type of meeting that happens, which is where you just be humans and chat about stuff. Sometimes that's part of the other meetings, sometimes it is its own separate meeting. That's not wasted time! It's good to have enjoyable, human relationships with your clients and coworkers. And while I think it's just worthwhile inherently, it does also have business value, because that's how people get comfortable to give constructive criticism, admit mistakes, and otherwise fix problems. But still, that 3 hours isn't 99% of your time.

In my extreme opinion, 100% of value is created by communication with people and problem solving. 0.0% of value is created by engineering.

This explains the xkcd "Dependency" comic[0]: the man in Nebraska isn't solving anyone's problem in any particular context of communication and problem solving; he is only preemptively solving potential problems, not creating value as problems are observed and solved. This also explains why consultancy and so-called bullshit jobs, which offer no "actual value" but just resell backend man-hours and make random suggestions, are paid well: they create value in set contexts.

And this logic is also completely flawed at the same time, because the ideal form of a business following this thinking is a pure scam. Maybe all jobs are scams, some less so?

0: https://xkcd.com/2347/

Not just with external parties. This dance happens all over.
That's a cheap 1% boost!
Company XXQ to LLM once they figure their shit out: here is the endpoint we need. LLM: gives code. You: not involved.
Maybe you did not get the point: "once they figure their shit out" is the pain point, and no ChatGPT can ease that!
Even within the N% that is more genuine coding and system reasoning, system reasoning is really hard, oftentimes requiring weird leaps of faith, and I don't see a path for AI to be helpful with that either.

Some random recent thing: "We have a workflow engine that's composed of about 18 different services. There's an orchestrator service, some metadata services on the side, and about 14 different services which execute different kinds of jobs which flow through the engine. Right now, there is no restriction on the ordering of jobs when the orchestrator receives a job set; they just all fire off and complete as quickly as possible. But we need ordering; if a job set includes a job of type FooJob, that needs to execute and finish before all the others. Moreover, it will produce output that needs to be fed as input to the rest of the jobs."
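Written down in isolation, the requirement itself sounds almost trivial; a sketch with made-up Job and run_job stand-ins for the real services (none of this is the actual system):

    # Sketch of what the ordering requirement sounds like on paper. Job, run_job, and the
    # job kinds are invented stand-ins; the real system is 18 services talking over Kafka.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Job:
        kind: str
        payload: dict = field(default_factory=dict)

    def run_job(job: Job, extra_input: Optional[dict] = None) -> dict:
        # Stand-in for dispatching the job to whichever service actually executes it.
        print(f"running {job.kind} with {extra_input or {}}")
        return {"produced_by": job.kind}

    def run_job_set(jobs: list[Job]) -> None:
        foo_jobs = [j for j in jobs if j.kind == "FooJob"]
        others = [j for j in jobs if j.kind != "FooJob"]
        foo_output: dict = {}
        for job in foo_jobs:        # FooJob has to execute and finish first...
            foo_output = run_job(job)
        for job in others:          # ...and its output feeds everything else.
            run_job(job, extra_input=foo_output)

On paper that's the whole feature; the list below is where it stops being trivial.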

There's a lot of things that make this hard for humans, and I'm not convinced it would be easier for an AI which has access to every bit of code the humans do.

* How do the services communicate? We could divine that pretty quickly: let's say it's over Kafka topics. Lots of messages being published, to topics that are provided to the applications via environment variables. It's easy to find that out. It's oftentimes harder to figure out "what are the actual topic names?" Ah, we don't have much IaC, and it's not documented, so here I go reaching for kubectl to fetch some configmaps. This uncovers a weird web of communication that isn't obvious.

* Coordination is mostly accomplished by speaking to the database. We can divine parts of the schema by reverse engineering the queries; they don't contain type information, because the critical bits of this are in Python, and there are no SQL files that set up the database, because the guy who set it up was a maverick and did everything by hand.

* Some of the services communicate with external APIs. I can see some axios calls in this JavaScript service. There are some function names, environment variable names, and URL paths which hint at what external service they're reaching out to. But the root URL is provided as an environment variable, and it's stored as a secret in k8s in order to co-locate it in the same k8s resource that stores the API key. Neither I nor the AI has access to this secret, thanks to some new security policy resulting from some new security framework we adopted.

* But, we get it done. We learn that doing this ordering adds 8 minutes to every workflow invocation, which the business deems unacceptable because reasons. There is genuinely a high cardinality of "levels" you think about when solving this new problem. At the most basic level, and what AI today might be good at: performance optimize the new ordered service like crazy. But that's unlikely to solve the problem holistically, so we explore higher levels. Do we introduce a cache somewhere? Where and how should we introduce it, to maximize coherence of data? Do some of the services _not_ depend on this data, and thus could be run out of order? Do we return to the business and say that actually what you're asking for isn't possible, when considering the time-value of money and the investment it would take to shave processing time off, and maybe we should address making an extra 8 minutes OK? Can we rewrite or deprecate some of the services which need this data in order to not need it anymore?

* One of the things this ordered workflow step service does is issue about 15,000 API calls to some external service in order to update some external datasource. Well, we're optimizing, and one of the absolute most common things GPT-4 recommends when optimizing services like this is: increase the number of simultaneous requests (the kind of change sketched below). I've tried to walk through problems like this with GPT-4, and it loves suggesting that, along with a "but watch out for rate limits!" addendum. Well, the novice engineer and the AI do this, and it works OK; we get the added time down to 4 minutes. But: 5% of invocations start failing. It's not tripping a rate limit; we're just seeing pod restarts, and the logs aren't really indicative of what's going on. Can the AI (1) get the data necessary to know what's wrong (remember, k8s access is kind of locked down thanks to that new security framework we adopted), (2) identify that the issue is that we're overwhelming networking resources on the VMs executing this workflow step, and (3) identify that increasing concurrency may not be a scalable solution, and we need to go back to the drawing board? Or, let's say the workflow is running fine, but the developers@mycompany.com email account just got an email from the business partner running this service saying that they had to increase our billing plan because of the higher/denser usage. They're allowed to do this because of the contract we signed with them. There are no business leaders actively monitoring this account, because it's just used to sign up for things like this API. Does the email get forwarded to an appropriate decision maker?
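For reference, that "just bump the concurrency" change looks roughly like this; a sketch with aiohttp and a semaphore, where the endpoint URL, payload shape, and concurrency limit are all hypothetical:

    # Sketch of the naive "more simultaneous requests" fix GPT-4 tends to suggest.
    # URL, payloads, and MAX_IN_FLIGHT are illustrative, not the real service.
    import asyncio
    import aiohttp

    EXTERNAL_API = "https://partner.example.com/records"  # hypothetical endpoint
    MAX_IN_FLIGHT = 50  # the knob that gets turned up; the VMs and the partner may disagree

    async def update_record(session: aiohttp.ClientSession, sem: asyncio.Semaphore, record: dict) -> int:
        async with sem:  # cap in-flight requests so we at least nod at rate limits
            async with session.post(EXTERNAL_API, json=record) as resp:
                return resp.status

    async def update_all(records: list[dict]) -> list[int]:
        sem = asyncio.Semaphore(MAX_IN_FLIGHT)
        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(*(update_record(session, sem, r) for r in records))

    # asyncio.run(update_all(records))  # ~15,000 calls: faster, right up until the pods start restarting

It works in the demo; the pod restarts and the surprise billing email are the parts nobody prompted for.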

I think the broader opinion I have is: Microsoft paid hundreds of millions of dollars to train GPT-4 [1]. Estimates say that every query, even at the extremely rudimentary level GPT-3 operates at, costs 10x+ what a typical Google search costs. We're at the peak of Moore's law; compute isn't getting cheaper, and actually coordinating and maintaining the massive data centers it takes to do these things means every iota of compute is getting more expensive. The AI Generalists crowd have to make a compelling case that this specialist training, for every niche there is, is cheaper and higher quality than what it costs a company to train and maintain a human; and the human has the absolutely insane benefit that the company more or less barely trains them; the human's parents, public schools, universities paid for by the human, hobbies, and previous work experience do.

There's also the idea of liability. Humans inherently carry agency, and from that follows liability. Whether that's legal liability, or just your boss chewing you out because you missed a deadline. AI lacks this liability; and having that liability is extremely important when businesses take the risk of investment in some project, person, idea, etc.

Point being, I think we'll see a lot of businesses try to replace more and more people with AIs, whether intentionally or just through the nature of everyone using them becoming more productive. Those that index high on AI usage will see some really big initial gains in productivity; but over time (and by that I mean the late '20s and early '30s) we'll start seeing news articles about "the return of the human organization": the recognition that capitalism has more reward functions than just Efficiency, and that Adaptability is an extremely important one. Moreover, the businesses which index too far into relying on AI will start faltering, because they've delegated so much critical thinking to the AI that the humans in the mix start losing their ability to think critically about large problems, and every problem isn't approached from the angle of "how do we solve this", but rather "how do I rephrase this prompt to get the AI to solve it right".

[1] https://www.theverge.com/2023/3/13/23637675/microsoft-chatgp...
