Much as I dislike Go, it is indeed probably closer to the ideal language for an LLM. But I suspect we need to dial it down even further, e.g. no type inference whatsoever (so no := etc.). In fact, I wonder whether forcing the model to spell out the type of every subexpression as an explicit annotation might be beneficial given the way LLMs work, for the same reason that prompting for explicit chain-of-thought improves outputs even from models not specifically trained to produce CoT. In a similar vein, it could require fully qualified names for all library functions and so on. But it also needs to have fewer footguns, which Go has aplenty: error returns can be silently ignored, shared-memory concurrency is unsafe, etc. I suspect message passing a la Erlang might be the best bet there, but this is just a gut feel.
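To make this concrete, here's a rough sketch, in today's Go syntax, of what the dialed-down dialect might look like next to idiomatic Go; the file name and the exact rules are invented for illustration:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Today's Go: the type of parts is inferred via :=, and the
        // error from os.ReadFile can be silently discarded - a footgun.
        parts := strings.Split("a,b,c", ",")
        data, _ := os.ReadFile("config.txt")

        // The hypothetical dialect: no :=, every type spelled out, so
        // the model "shows its work" in the program text itself. (Go
        // already forces package-qualified names like strings.Split,
        // which may be part of why it suits LLMs.)
        var parts2 []string = strings.Split("a,b,c", ",")
        var data2 []byte
        var err error
        data2, err = os.ReadFile("config.txt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err) // the dialect would make handling err mandatory
        }

        fmt.Println(parts, len(data), parts2, len(data2))
    }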
Of course, the problem with any hypothetical new PL optimized for LLMs is that there's no training data for it. To some extent this can be mitigated by mechanically converting existing code - e.g. mandatory fully qualified names and explicit type annotations for subexpressions could easily be bolted onto any existing statically typed language.
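The conversion itself is easy to prototype with Go's own tooling, since go/types can recover the inferred type of every := target. A minimal sketch, which only prints the explicit declaration a converter would emit rather than rewriting the file in place (the toy input and names are my own):

    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
        "go/types"
    )

    // A toy input with inferred types; a real converter would run over
    // a whole corpus of existing Go code.
    const src = `package p

    func f() int {
        xs := []string{"a", "b", "c"}
        n := len(xs)
        return n
    }
    `

    func main() {
        fset := token.NewFileSet()
        file, err := parser.ParseFile(fset, "p.go", src, 0)
        if err != nil {
            panic(err)
        }

        // Type-check so the inferred type of each := target is known.
        info := &types.Info{Defs: make(map[*ast.Ident]types.Object)}
        conf := types.Config{}
        if _, err := conf.Check("p", fset, []*ast.File{file}, info); err != nil {
            panic(err)
        }

        // For every short variable declaration, print the explicit
        // var form a converter would emit in its place.
        ast.Inspect(file, func(n ast.Node) bool {
            if assign, ok := n.(*ast.AssignStmt); ok && assign.Tok == token.DEFINE {
                for _, lhs := range assign.Lhs {
                    if id, ok := lhs.(*ast.Ident); ok && info.Defs[id] != nil {
                        fmt.Printf("var %s %s = ...\n", id.Name, info.Defs[id].Type())
                    }
                }
            }
            return true
        })
    }

A production converter would rewrite the AST and pretty-print it back out with go/printer; recovering the type information, as above, is the essential step.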
I noticed that CC's generated Go code is very solid nowadays. No hallucinations recently that I can remember or that struck me. Yet I still see YouTube videos of people working with JS/TS struggling with this, which is odd, since there is far more training material for the latter. Perhaps the simplicity of Go shines here.
That said, CC might generate Go code that reimplements functionality for which library functions already exist. So thorough code reviews are a necessity.
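For example, a generated helper like this one (hypothetical, but representative) should be flagged in review, since slices.Contains in the standard library (Go 1.21+) already does the job:

    package main

    import (
        "fmt"
        "slices"
    )

    // contains is the kind of generated helper a review should flag: it
    // reimplements slices.Contains from the standard library.
    func contains(xs []string, target string) bool {
        for _, x := range xs {
            if x == target {
                return true
            }
        }
        return false
    }

    func main() {
        xs := []string{"a", "b", "c"}
        fmt.Println(contains(xs, "b"))        // hand-rolled duplicate
        fmt.Println(slices.Contains(xs, "b")) // stdlib equivalent
    }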