I only found a short but good article about such a case [0]; I'm sure someone has bookmarked the original. There are support groups for people like this now!

[0] https://www.bgnes.com/technology/chatgpt-convinced-canadian-...


This aspect is fascinating:

> The breakdown came when another chatbot, Google Gemini, told him: "The scenario you describe is an example of the ability of language models to lead convincing but completely false narratives."

Presumably, humans had already told him the same thing, but he only believed it when an AI said it. I wonder if Gemini has any kind of special training to detect these situations.
