
You can modify the output, but the underlying model is always susceptible to jailbreaks. A method I tried a couple of months ago to reliably get it to explain, step by step, how to cook meth still works. I'm not going to share it; you just have to take my word on this.

I believe you, but AFAIK you only need to establish a safety standard where the end-user is required to jailbreak the model in order to show you are protecting the property in good faith.
Why is this so problematic? You can read all this stuff in old papers and patents that are available on the web.

And if you are not capable of doing that, you will likely not succeed with the ChatGPT instructions either.

I'm not saying it's impossible to get this information elsewhere, but it appears impossible to prevent ChatGPT from telling you how to do illegal things; something the model explicitly should not be able to do, according to its makers.

