portaouflop
You can modify the output, but the underlying model is always susceptible to jailbreaks.
A method I tried a couple of months ago to reliably get it to explain, step by step, how to cook meth still works. I'm not gonna share it; you'll just have to take my word on this.
I believe you, but AFAIK you only need to establish a safety standard where the end-user has to actively jailbreak the model to get that output; that's enough to show you're protecting your property in good faith.
Why is this so problematic? You can read all of this in old papers and patents that are available on the web.
And if you aren't capable of doing that, you will likely not succeed with the ChatGPT instructions either.
I'm not saying it's impossible to get this information elsewhere. The point is that it's impossible to prevent ChatGPT from telling you how to do illegal stuff, something the model explicitly should not be able to do according to its makers.