
enraged_camel
I've actually thought about this extensively and experimented with various approaches. What I found is that the quality of the results I get, and whether the AI gets stuck in the kind of loop you describe, depends on two things: how detailed and thorough my instructions are, and how robust the guardrails I put around it are.

To get the best results, I make sure to give detailed specs of both the current situation (background context, what I've tried so far, etc.) and the criteria the solution needs to satisfy. As long as I do that, there's a high chance the answer is at least satisfactory, if not a perfect solution. If I don't, the AI takes a lot of liberties (switching to completely different approaches, rewriting entire modules, and so on) to try to reach what it thinks is the solution.
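
To give a rough idea of what such a spec looks like (this is a made-up example, not from a real project), I cover context, prior attempts, constraints, and acceptance criteria:

    Context: web app with a background worker that occasionally sends the same email twice.
    What I've tried: adding a uniqueness check on the job arguments; it didn't help.
    Constraints: don't change the queueing setup, don't add new dependencies.
    Done when: each email goes out exactly once, existing tests still pass, and there's a regression test for the duplicate case.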


But don't they keep forgetting the instructions after enough time has passed? How do you get around that? Do you add an instruction that after every action it should go back and re-read the instructions?
enraged_camel OP
They do start "drifting" after a while, at which point I export the chat (using Cursor), start a new chat, attach the exported file, and say "here's the previous conversation, let's continue where we left off". I find that it handles the transition pretty well.

It's not often that I have to do this. As I mentioned in my post above, if I start the interaction with thorough instructions/specs, then the conversation concludes before the drift starts to happen.
