
A simple rule applies: "No matter what tool created the code, you are still responsible for what you merge into main".

As such, the task of verification still falls to engineers.

Given that, and proper processes, modern tooling works nicely with codebases ranging from 10k LOC (mixed embedded device code with Go backends and Python DS/ML) to 700k LOC (legacy enterprise applications from the mainframe era).


> A simple rule applies: "No matter what tool created the code, you are still responsible for what you merge into main".

Beware of claims of simple rules.

Take one subset of the problem: code reviews in an organizational environment. How well does the simple rule above work?

The idea of “Person P will take responsibility” is far from clear and often not a good solution. (1) P is fallible. (2) Some consequences are too great to allow one person to trigger them, which is why we have systems and checks. (3) P cannot necessarily right the wrong. (4) No-fault analyses are often better for long-term solutions, and they require a fear-free culture to reduce cover-ups.

But this is bigger than one organization. The effects of software quickly escape organizational boundaries. So when we think about giving more power to AI tooling, we have to be really smart. This means understanding human nature, decision theory, political economy [1], societal norms, and law. And it means building smart systems (technical and organizational).

Recommending good strategies for making AI-generated code safe is a hard problem. I’d bet it is much harder than even “elite” software developers have contemplated, much less implemented. Training in software helps but is insufficient. I personally have some optimism for formal methods, defense in depth, and carefully implemented human-in-the-loop systems.

[1] Political economy uses many of the tools of economics to study the incentives of human decision making

ponector
> As such, the task of verification still falls to engineers.

Even before LLMs, it was common to merge changes that completely break the test environment. Some people really do skip the verification phase of their work.

bandoti
Agreed. I think that when engineers follow simple Test-Driven Development procedures (writing the code, unit tests, and integration tests, and debugging, for a small enough unit), tight feedback loops are the default. AI may assist in the particulars, but it should not run the show.
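As a minimal sketch of that tight feedback loop (all function and test names here are hypothetical examples, not from any real codebase): the test is written first for a small unit, then just enough code to make it pass, and the tests are re-run after every small change.

```python
# Hypothetical TDD-style sketch: a unit small enough that writing the
# test first and re-running it after each change stays cheap.

def normalize_email(raw: str) -> str:
    """Strip surrounding whitespace and lower-case an email address."""
    return raw.strip().lower()

def test_normalize_email():
    # These assertions existed before (or alongside) the implementation;
    # running them constantly is what keeps the feedback loop tight.
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
    assert normalize_email("bob@example.com") == "bob@example.com"

if __name__ == "__main__":
    test_normalize_email()
    print("all tests passed")
```

Whether the code is hand-written or AI-assisted, the same loop applies: the human-authored tests are the verification gate.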

I’m willing to bet that, short of droid-speak or some AI output we can’t even understand, when considering “the system as a whole”, even with short-term gains in speed, the longevity of any product will be better with real people following current best practices, plus perhaps a modest sprinkle of AI.

Why? Because AI is trained on the results of human endeavors and can only work within that framework.

abdullin OP
Agreed. AI is just a tool. Letting it run the show is essentially what vibe-coding is. It is a fun activity for prototyping, but it tends to accumulate problems and tech debt at an astonishing pace.

Code manually crafted by professionals will almost always beat AI-driven code in quality. Yet one still has to find such professionals and wait for them to get the job done.

I think the right balance is somewhere in between: let tools handle the mundane parts (e.g. mechanically rewriting legacy Progress ABL/4GL code to Kotlin), while human engineers have fun with high-level tasks and shape the direction of the project.
