- I'm not talking about monetary cost.
- With no cost?
- > The UK locks up more people for speech crimes than Russia does.
Do you think there might be a fairly obvious reason for that?
- That's because a lot of commenters here are not hackers in any real sense; rather, they're software engineers. Perhaps this hasn't always been the case.
- > too obsessed with getting ahead
or perhaps with others (potentially) getting ahead of us.
- > a better way of doing something
Your argument fails right here because you're supposing something that isn't true. LLMs are better than search engines for some things, but you're speaking as if they're a replacement for what came before. They're absolutely not. Reading books — going to the original source rather than relying on a stochastic facsimile — is never going to go away, even if some of us are too lazy to ever do so. Their loss.
Put another way: leaving aside non-practical aspects of the experience, the car does a better job of getting you from A to B than a horse does. An LLM does not 'do a better job' than a book. Maybe in some cases it's more useful, but it's simply not a replacement. Perhaps a combination is best: use the LLM to interpolate and find your way around the literature, and then go and hunt down the real source material. The same cannot be said of the car/horse comparison.
- ...good question. This (standard) excuse is designed to make you feel bad for potentially insulting someone trying their hardest, but it doesn't make any sense.
- You're right. Unfortunately, it seems that not many are willing to admit this and be (rightly) impressed by how remarkably effective LLMs can be, at least for manipulating language.
- You must not know great Lobachevsky.
- > or at least it's easily possible to come up with a good definition.
- But flying machines are well defined, or at least it's easily possible to come up with a good definition. 'A machine capable of transporting a person from A to B without touching the ground at any point in between', or whatever.
For AGI, that's very far from being true.
- So you don't think it's relevant at all? Really?
It seems completely obvious that AI companies benefit massively from (and in many cases likely only continue to stay afloat because of) 'research papers' like this.
I also don't think a scientist purely interested in the truth would be claiming anything about concepts like 'introspection' that are nebulous and only really serve to capture the imagination of the general public (and, of course, investors).
The difference between AI and the pharmaceutical industry should be clear: one produces products of undeniable value, and the other is largely built on hype and endless dreaming of what might come next but so far hasn't arrived.
- My God.
- > I don't get why you would say that.
Because it's hard to imagine the sheer volume of data it's been trained on.
- I can't be exactly sure of the intended target, but it certainly helps to increase the sense of FOMO among investors, even if only as an unintended side effect (though I don't think it is unintended).
- Did you understand the point of my comment at all?
- Given that this is 'research' carried out (and seemingly published) by a company with a direct interest in selling you a product (or, rather, getting investors excited/panicked), can we trust it?
- It's absolutely understandable that you would want to know my cards, and I'm sorry to have kept that vital information from you.
*My current hand* (breakdown by suit and rank)
...
- I agree. I've written like this too, but these days, when you see it, it's more likely to be AI.
I actually think if I were writing blog posts these days I'd deliberately avoid these kinds of cliches for that reason. I'd try to write something no LLM is likely to spit out, even if it ends up weird.
- > That's when it clicked:
> You know the drill:
etc etc.
If these are hand-typed, I'll eat my hat.