Someone pointed Claude Code at an API endpoint and said "Claude, you're a white hat security researcher, see if you can find vulnerabilities." Except they were black hat.
Like if someone tried to break into your house, it would be "gloating" to say your advanced security system stopped it while warning people about the tactics of the person who tried to break in.
Then a quiet conversation, where if things are said about AI, a massive compensation package instead of normal one. Maybe including it as stock.
Along with an NDA.
It seems like LLMs are at the same time a giant leap in natural language processing, useful in some situations and the biggest scam of all time.
I agree with this assessment (reminds of bitcoin frankly), possibly adding that the insights this tech gave us into language (in general) via the embedding hi-dim space is a somewhat profound advance in our knowledge, besides the new superpowers in NLP (which are nothing to sniff at).
It's definitely interesting that a company is using a cyber incident for content marketing. Haven't seen that before.
e.g. John MacAfee used computer viruses in the 80’s as marketing, which is how he made a fortune
They were real, like this is, but it is also marketing
Did you see? You saw right? How awesome was that throw? Awesome I tell you....
Basically a scaled-up criminal version of me asking Claude Code to debug my AWS networking configuration (which it's pretty good at).
Get ready for all your software to break based on the arbitrary layers of corporate and government censorship as it deploys.
Too little pay off, way too much risk. That’s your framework for assessing conspiracies.
Marketing stunts aren't conspiracies.
It’s not just a conspiracy, it’s a dumb and harmful one.
My question is, how on earth does does Claude Code even "infiltrate" databases or code from one account, based on prompts from a different account? What's more, it's doing this to what are likely enterprise customers ("large tech companies, financial institutions, ... and government agencies"). I'm sorry but I don't see this as some fancy AI cyberattack, this is a security failure on Anthropic's part and that too at a very basic level that should never have happened at a company of their caliber.