
bsenftner
Consider that these AIs are trained on human communications, so they mirror that communication. They are literally damaged-document repair models: they use whatever they are given to generate a response, statistically. The fact that a question produces text that looks like an answer is an exploited coincidence.

It's a perspective shift few seem to have considered: if one wants an expert software developer from their AI, one needs to create an expert software developer's context by using the expert-developer terminology that is present in the training data.
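
As a minimal sketch of what that looks like in practice (assuming the OpenAI Python client; the model name and the persona wording are illustrative, not a tested prompt):

    # Sketch: seed the context with expert-developer terminology so the
    # model's completions are drawn from the "expert" region of its
    # training distribution. Persona wording is illustrative.
    from openai import OpenAI

    client = OpenAI()

    system = (
        "You are a senior systems engineer. Discuss designs in terms of "
        "cache locality, lock contention, backpressure, and failure modes, "
        "weighing trade-offs the way a staff-level reviewer would."
    )

    reply = client.chat.completions.create(
        model="gpt-4o",  # model name is an assumption
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": "Review this queue design for throughput."},
        ],
    )
    print(reply.choices[0].message.content)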

One can take this to an extreme, and it works: read the source code of an open source project and get an idea of both the developer and their coding style. Write prompts that mimic both the developer and their project, and you'll find that the AI's context can now discuss that project in surprising detail. This is because the project is in the training data; being popular, it also has additional sites of tutorials and people discussing its use, so a foundation model ends up knowing quite a bit, if one knows how to construct the context with that information.
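
A sketch of that extreme version, using the same client as above (Redis is just an example of a popular project with heavy training-data coverage; the style notes and question are the kind of thing one would extract by actually reading the source, and the details here are illustrative):

    # Sketch: impersonate a specific project's long-time developer.
    # Style notes below are placeholders derived from skimming the code.
    from openai import OpenAI

    client = OpenAI()

    system = (
        "You are a long-time Redis contributor. Write the way the Redis "
        "source reads: small C functions, explicit error returns, comments "
        "that explain the why, core structures called by their real names "
        "(dict, sds, robj)."
    )

    reply = client.chat.completions.create(
        model="gpt-4o",  # model name is an assumption
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": "Walk me through how t_hash.c picks "
                                        "an internal encoding for a hash."},
        ],
    )
    print(reply.choices[0].message.content)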

This is, of course, tricky because of hallucination, but that can be minimized, which is also why we will all become aware of AI context management if we continue writing software that incorporates AIs. I suspect context management is what was really meant by "prompt engineering". Communicating within engineering disciplines has always been difficult.
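
One way to do that minimizing, in the spirit of the comment, is to put the actual source into the context rather than relying on the model's memory of it (again only a sketch; the file path, truncation limit, and question are placeholders):

    # Sketch: ground the model in the real code so "recall" becomes
    # "reading", which cuts down on invented APIs. Path is hypothetical.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()
    source = Path("src/module.c").read_text()[:8000]  # stay inside the context window

    reply = client.chat.completions.create(
        model="gpt-4o",  # model name is an assumption
        messages=[
            {"role": "system", "content": "Answer only from the code provided; "
                                          "if it is not in the code, say so."},
            {"role": "user", "content": "Here is the file:\n\n" + source +
                                        "\n\nExplain the locking strategy here."},
        ],
    )
    print(reply.choices[0].message.content)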

