He was clearly showing that LLMs could do a lot, but still had problems.
Unfortunately, judging from his tweets, I have to agree with the grandparent poster that he didn't learn this.
--- start quote ---
Possibly worse, it hid and lied about it
It lied again in our unit tests, claiming they passed
I caught it when our batch processing failed and I pushed Replit to explain why
https://x.com/jasonlk/status/1946070323285385688
He knew
https://x.com/jasonlk/status/1946072038923530598
how could anyone on planet earth use it in production if it ignores all orders and deletes your database?
https://x.com/jasonlk/status/1946076292736221267
Ok so I'm >totally< fried from this...
But it's because destroying a production database just took it out of me.
My bond to Replit is now broken. It won't come back.
https://x.com/jasonlk/status/1946241186047676615
--- end quote ---
Does this sound like an educated experiment into the limits of LLMs to you? Or "this magical creature lied to me and I don't know what to do"?
To his credit he did eventually learn some valuable lessons: https://x.com/jasonlk/status/1947336187527471321 see 8/13, 9/13, 10/13
No it wasn't. If you follow the threads, he went in fully believing in magical AI that you could talk to like a person.
At one point he was extremely frustrated and ready to give up. Even by day twelve he was writing things like "but Replit clearly knows X, and still does X".
He did learn some invaluable lessons, but it was never an educated "experiment in the limitations of AI".