
osigurdson
I'm not sure that AI code has to be sloppy. I've had some success with hand-coding some examples and then asking Codex to rigorously adhere to prior conventions. This can end up with very self-consistent code.
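
For concreteness, a rough sketch of that workflow in Python; the file names and the task are invented for illustration, and any code model works in place of Codex:

```python
# Sketch of "hand-code examples, then ask the model to follow them".
# The reference file names and the task are made up; the point is only
# that the prompt embeds your own code as the convention to imitate.
from pathlib import Path

# Hand-written reference implementations that establish the conventions
# (structure, naming, error handling, tests).
reference_files = ["handlers/create_user.py", "handlers/delete_user.py"]

conventions = "\n\n".join(
    f"### {name}\n{Path(name).read_text()}" for name in reference_files
)

task = "Add a handler that updates a user's email address."

prompt = (
    "Here are existing handlers. Rigorously adhere to their conventions: "
    "same structure, naming, error handling, and tests.\n\n"
    f"{conventions}\n\n"
    f"New task: {task}\n"
    "Do not introduce patterns that don't appear in the examples."
)

print(prompt)  # send to the code model of your choice
```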

Agree though on the "pick the best PR" workflow. This is pure model training work and you should be compensated for it.


Yep, this is what Andrej talks about around 20 minutes into this talk.

You have to be extremely verbose in describing all of your requirements. There is seemingly no such thing as too much detail. The second you start being vague, even if it WOULD be clear to a person with common sense, the LLM views that vagueness as a potential aspect of its own creative liberty.

> You have to be extremely verbose in describing all of your requirements. There is seemingly no such thing as too much detail.

Sounds like ... programming.

Program specification is programming, ultimately. For any given problem if you’re lucky the specification is concise & uniquely defines the required program. If you’re unlucky the spec ends up longer than the code you’d write to implement it, because the language you’re writing it in is less suited to the problem domain than the actual code.

longhaul
Agree. I used to say that documenting a program precisely and comprehensively ends up being code. We either need a DSL that can specify at a higher level or use domain-specific LLMs.
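
To make that concrete, here is a toy illustration (the business rules are invented): once the "documentation" removes every ambiguity, it reads like, and runs as, code.

```python
# Toy example of a "precise and comprehensive" description of behaviour:
# stated without ambiguity, the spec is already executable.
from dataclasses import dataclass


@dataclass
class Discount:
    percent: float  # 0 to 100


def discount_for(order_total: float, is_member: bool) -> Discount:
    # The "spec": members get 10% on orders over $100, everyone gets 5%
    # on orders over $500, discounts don't stack, the larger one wins.
    candidates = [Discount(0.0)]
    if is_member and order_total > 100:
        candidates.append(Discount(10.0))
    if order_total > 500:
        candidates.append(Discount(5.0))
    return max(candidates, key=lambda d: d.percent)


assert discount_for(600, True).percent == 10.0
assert discount_for(600, False).percent == 5.0
assert discount_for(50, True).percent == 0.0
```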
jebarker
> the LLM views that vagueness as a potential aspect of its own creative liberty.

I think that anthropomorphism actually clouds what's going on here. There's no creative choice inside an LLM. More description in the prompt just means more constraints on the latent space. You still have no certainty, though, that the LLM models the particular part of the world you're constraining it to in the way you hope it does.

joshuahedlund
> You have to be extremely verbose in describing all of your requirements. There is seemingly no such thing as too much detail

I understand YMMV, but I have yet to find a use case where this takes me less time than writing the code myself.

throw234234234
I've personally found that English is OK when I'm happy with a "lossy expansion" and don't need every single detail defined (e.g. tedious boilerplate or templating kinds of code). After all, an LLM can be seen as a lossy compression of detailed examples of working code, so why not "uncompress" it and let it fill in the gaps? As an example, I might want a UI to render some data but not be fussed about the details; I don't want to specify the exact coordinates of each button, etc.

However, when I want detailed changes I find it more troublesome at present than just typing the code myself: I know exactly what I want, and I can express it just as easily (sometimes more easily) in code.

Personally, I find AI to be a kind of generic DSL. The more I have to define and the more specific I have to be, the more I start to evaluate code or DSLs as potentially more appropriate tools, especially when the details DO matter for quality/acceptance.

> You have to be extremely verbose in describing all of your requirements. There is seemingly no such thing as too much detail.

If only there were a language one could use to describe all of your requirements in an unambiguous manner, ensuring that you have provided all the necessary detail.

Oh wait.

SirMaster
I'm really waiting for AI to get on par with the common sense of most humans in their respective fields.
diggan
I think you'll be waiting for a very long time. Right now we have programmable LLMs, so if you're not getting the results you want, you need to reprogram them.
