An actor can emulate the style of judicial decision language, sure.
But the cost of a wrong answer (wrongful conviction) exceeds a threshold of ethical use.
> We try prompt engineering techniques to spur the LLM to act more like human judges, but with no success. “Judge AI” is a formalist judge, not a human judge.
From "Asking 60 LLMs a set of 20 questions" https://www.hackerneue.com/item?id=37451642 :
> From https://www.hackerneue.com/item?id=36038440 :
>> Awesome-legal-nlp links to benchmarks like LexGLUE and FairLex but not yet LegalBench; in re: AI alignment and ethics / regional law
>> A "who hath done it" exercise
>> "For each of these things, tell me whether God, Others, or You did it"
AI should never be judge, jury, and executioner.