We could all do with a bit more humility in dealing with this topic and with each other.
Why bother being "human"? We are all just a bunch of cells exchanging chemicals and electrical signals. That's all there is to it. There is no reasoning, just signals going back and forth.
It stands to reason that the bigger the model, the more likely you are to get an answer to the question you're asking. Even the apparently "tricky" questions.
Even things like translating code from one programming language to another...
Anyway, maybe we are ALL stochastic parrots (including ChatGPT-10) and that's all we'll ever be...bravo.
Edit: I meant to say that I don't need access to the training data. By experimenting with inputs and outputs you can get a basic picture. I don't need to see brain scans to say something about your personality either.
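To illustrate the black-box point, here's a minimal sketch of that kind of input/output probing (query_model is a hypothetical stand-in for whatever chat API you'd be experimenting with; nothing here is tied to a real library):

    # Black-box probing: characterize a model purely from its
    # input/output behavior, with no access to weights or training data.

    def query_model(prompt: str) -> str:
        """Hypothetical stand-in for a call to the model being probed."""
        raise NotImplementedError("wire up the model you're probing")

    def probe_consistency(question: str, paraphrases: list[str]) -> float:
        """Ask the same question several ways; score answer agreement."""
        answers = [query_model(p) for p in [question, *paraphrases]]
        baseline = answers[0].strip().lower()
        agree = sum(a.strip().lower() == baseline for a in answers[1:])
        return agree / len(paraphrases)

    # Usage: a score near 1.0 suggests a stable underlying "picture";
    # a low score suggests the output is sensitive to surface phrasing.
    # score = probe_consistency(
    #     "What is the capital of Australia?",
    #     ["Which city is Australia's capital?",
    #      "Name Australia's capital city."],
    # )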
Judging someone's personality is a subjective process, not an objective one.
Those two papers are critical of LLMs and discuss what researchers believe they can and cannot do. I'm not saying you need to agree with them, but reading them should give you a good primer on why some researchers are not as excited as HN users are.
Obviously there's much more out there, but those three are a pretty good read.
[1]: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...
Stopped reading right there. The author clearly has no clue how neural networks work or what makes them tick.