
also training data quality. they are horrifyingly bad at concurrent code in general in my experience, and looking at most concurrent code in existence.... yeah I can see why.

disgruntledphd2
The really depressing part about LLMs (and the limitations of ML more generally) is that humans are really bad at formal logic (which is what programming basically is), and instead of continuing the path of making machines that made it harder for us to get it wrong, we instead decided to toss every open piece of code/text in existence into a big machine that then reproduces those patterns non-deterministically and use that to build more programs.

One can see the results in a place where most code is terrible (data science is the place I see this most, as it's what I do mostly) but most people don't realise this. I assume this also happens for stuff like frontend, where I don't see the badness because I'm not an expert.

CaptainOfCoit
> is that humans are really bad at formal logic (which is what programming basically is),

The tricky part is that I don't think all programming is formal logic at all, just a small part of it. And the fact that different code exists for different purposes really screws up an LLM's reasoning process unless you make it very clear which code is for what.

> The tricky part is that I don't think all programming is formal logic at all, just a small part.

Why do you say this? The foundation of all of computer science is formal logic and symbolic logic.

CaptainOfCoit
Lots of parts are more creative, or more "for humans" I might say, like building the right abstractions considering the current context and potential future contexts. There are no "right" or "wrong" abstractions, just abstractions with different tradeoffs, and lots of things in programming are like this: not a binary "this is correct, this is wrong", but somewhere along a spectrum of "this is what I subjectively prefer given these tradeoffs".

There is a reason a lot of programmers see programming as having a lot in common with painting and other creative activities.

bpt3
Any programmer who doesn't understand the basis of their craft and the environment they're working in isn't a very good one imo.
CaptainOfCoit
Problem is that everyone probably agrees with that, but where the line of "the basis" is drawn isn't so widely agreed on. Is "the basis" the physical composition of the hardware components? Understanding assembly? Knowing how a CPU works? How electrons move around inside the whole thing? How all the pieces fit together, including the OS?

The space is just so large that everyone has their own "basis", and it sometimes even moves with time. They can still be good programmers imo.

wredcoll
> Why do you say this? The foundation of all of computer science is formal logic and symbolic logic.

Yes, but it also has to deal with "the real world", which is only logical if you can encode a near-infinite number of variables; instead we create leaky abstractions in order to actually get work done.

bpt3
And those abstractions need to be encoded using symbolic and formal logic.
ahazred8ta
We basically throw rigor out the window and hope it doesn't hit anybody on the way down.
Grimblewald
Or when code is fully vectorizable they default to using loops, even if explicitly told not to use loops. Code I got an LLM to write for a fairly straightforward problem took 18 minutes to run.

my own solution? 1.56 seconds. I consider myself to be at an intermediate skill level, and while LLMs are useful, they likely won't replace any but the least talented programmers. Even then, I'd value a human with critical thinking paired with an LLM over an even more competent LLM.
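
The original problem isn't stated above, but as a rough sketch of the kind of loop-vs-vectorized gap being described, here is a made-up NumPy example (the pairwise-distance task and function names are hypothetical, not the actual code in question):

    import numpy as np

    def pairwise_sq_dists_loop(points):
        # naive double loop - the pattern LLMs tend to emit
        n = len(points)
        out = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                diff = points[i] - points[j]
                out[i, j] = np.dot(diff, diff)
        return out

    def pairwise_sq_dists_vectorized(points):
        # same result via broadcasting, typically orders of magnitude faster
        diffs = points[:, None, :] - points[None, :, :]
        return np.einsum('ijk,ijk->ij', diffs, diffs)

Both return the same matrix; the loop version just pays Python-interpreter overhead on every single element.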

CaptainOfCoit
Codex (GPT-5) + Rust (with or without Tokio) seems to work out well for me, asking it to run the program and validate everything as it iterates on a solution. I've used the same workflow with Python programs too and it seems to work OK, but not as well as with Rust.

Just for curiosity's sake, what language have you been trying to use?

Groxx OP
mostly Go, because that's at work. for a variety of reasons, I have helped troubleshoot at least 100+ teams' projects, many of which have had concurrency issues either obvious in nearby code, or causing issues (which is why I was helping troubleshoot). same with several dozen "help us find a way to speed up [this activity]" teams' work.

this is not at all a sample of high-quality, well-educated-about-concurrency code, but it does roughly match a lot of Business™ code and also most less-mature open source code I encounter (which is most open source code). it's just not something most people are fluent with.

these same people using LLMs have generally produced much worse concurrent code, regardless of the model or their prompting sophistication or thinking time, unless it's extremely trivial (then it's slightly better, because it's at least tutorial-level correct) (and yes, they should have just used one of many pre-existing libraries in these cases). doing anything beyond "5 workers on this queue plz" consistently ends up with major correctness flaws - often it works well enough while everything is running smoothly, but under pressure or in error cases it falls apart extremely badly... which is true for most "how to write x concurrently" blog posts I run across too - they're over-simplified to the point of being unusable in practice (e.g. by ignoring error handling) and far too inflexible to safely change for slightly different needs.
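
To make the "5 workers on this queue plz" point concrete, here is a sketch of that pattern with the error handling that tutorial-level versions usually skip. It's in Python rather than Go, since the failure mode is language-agnostic, and the names and structure are hypothetical, not code from any of the projects described above:

    import queue
    import threading

    def run_pool(jobs, handle, workers=5):
        # process jobs with a fixed worker pool, collecting errors instead of
        # letting one exception silently kill a worker and stall the rest
        q = queue.Queue()
        for job in jobs:
            q.put(job)
        errors = []
        errors_lock = threading.Lock()
        stop = threading.Event()

        def worker():
            while not stop.is_set():
                try:
                    job = q.get_nowait()
                except queue.Empty:
                    return
                try:
                    handle(job)
                except Exception as exc:  # the part happy-path examples omit
                    with errors_lock:
                        errors.append(exc)
                    stop.set()  # fail fast instead of limping along
                finally:
                    q.task_done()

        threads = [threading.Thread(target=worker) for _ in range(workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        if errors:
            raise errors[0]

Even this glosses over retries, partial results, and cancelling in-flight work, which is roughly the point: the happy path is easy, and the failure paths are where both the tutorials and the LLM output fall apart.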

honestly I think it's mostly due to two things: a lack of quality training material (some obviously exists, but it's overwhelmed by flawed stuff), and an extreme sensitivity to subtle flaws (much more so than normal code). so it's both bad at generalizing (not enough transitional examples between targets), and its general inability to actually think introduces flaws that look like normal code and that humans are less likely to notice (due to their own lack of experience).

this is not to claim it's impossible to use them to write good concurrent code; there are plenty of counter-examples showing it can be done. but it's a particularly error-prone area in practice, especially in languages without much in the way of safer patterns or built-in verification.

erichocean
In my experience, because the Clojure concurrency model is just incredibly sane and easy to get right, LLMs have no difficulty with it.
