- It feels to me like there's a distinction between "on one occasion, one person in group X did Y" and "group X does Y", and it's the second of those that (for some choices of Y, including "attacking police with sledgehammers") could justify calling group X a terrorist group.
Obviously "on one occasion, a person in group X did Y" is evidence for "group X does Y". If Samuel Corner attacked a police sergeant with a sledgehammer during one Palestine Action, er, action, then that's the sort of thing we expect to see more often if PA is generally in favour of attacking police with sledgehammers. (Either as a matter of explicit open policy, or as a nudge-nudge-wink-wink thing where everyone in PA knows that if they start smashing up police as well as property then their PA comrades will think better of them rather than worse.)
But it falls way short of proof. Maybe Samuel Corner sledgehammered a cop because Palestine Action is a terrorist organization after all; but maybe Samuel Corner sledgehammered a cop because Samuel Corner is a thug or an idiot or was drunk or whatever. Or maybe Samuel Corner sledgehammered a cop because the cops were already being violent with the Palestine Action folks and he was doing his (ill-advised) best to protect the others from the police. (This, as I understand it, is his account of things.)
(An Oxford University graduate attacked a police officer with a sledgehammer. I take it you would not say that that makes the University of Oxford a terrorist organization, and you wouldn't say that even if he'd done it while attending, say, a university social function rather than while smashing up alleged military hardware. It matters how typical the action is of the organization, what the group's leadership thinks of the action, etc.)
I took a look at the video. It's not easy to tell what's going on, but it looks to me as follows. One of the PA people is on the ground, being forcibly restrained and tasered by a police officer, complaining loudly about what the police officer is doing. (It isn't obvious to me whether or not her complaints are justified[1].) There is another police officer, whom I take to be Kate Evans, nearby, kneeling on the ground and helping to restrain this PA person. Samuel Corner approaches with his sledgehammer and attacks that second police officer. I can't tell from the video exactly what he's trying to do (e.g., whether he's being as violent as possible and hoping to kill or maim, or whether he's trying to get the police officer off the other person with minimal force but all he's got is a sledgehammer).
[1] I get the impression that she feels she has the right not to suffer any pain while being forcibly restrained by police, which seems like a rather naive view of things. But I also get the impression that the police were being pretty free with their tasering. But it's hard to tell exactly what's going on, and I imagine it was even harder in real time, and I am inclined to cut both her and the police some slack on those grounds.
It's highly misleading, even though not technically false, to say that Corner attacked Kate Evans "while she was on the ground"; she certainly was on the ground in the sense that she was supported by the floor, and even in the sense that she wasn't standing up -- I think she was crouching -- but it's not like she was lying on the ground injured or inactive; she was fighting one of the other PA people, and she was "on the ground" because that PA person was (in a stronger sense) "on the ground" too.
For the avoidance of doubt, I do not approve of attacking police officers with sledgehammers just because they are restraining someone you would prefer them not to be restraining, even if you think they're doing it more violently than necessary. And I have a lot of sympathy with police officers not being super-gentle when the people they're dealing with are armed with sledgehammers.
But the story here looks to me more like "there were a bunch of PA people, who had sledgehammers because they were planning to smash up military hardware; the cops arrived and wrestled and tasered them, and one of the PA people lost his temper and went for one of the cops to try to defend his friend who he thought was being mistreated, and unfortunately he was wielding a sledgehammer at the time" than like "PA is in the business of attacking cops with sledgehammers".
None of that makes Kate Evans any less injured. But I think those two possibilities say very different things about Palestine Action. Carrying sledgehammers because you want to smash equipment is different from carrying sledgehammers because you want to smash people. Attacking police because they are a symbol of the state is different from attacking police because they are attacking your friend. One person doing something bad in the heat of the moment because he thinks his friend is being mistreated is different from an organization setting out to do that bad thing.
There are plenty of documented cases of police being violent (sometimes with deadly effect) with members of the public. Sometimes they have good justification for it, sometimes not so much. Most of us don't on those grounds call the police a terrorist organization. Those who do say things along those lines do so because they think that actually the police are systematically violent and brutal.
I think the same applies to organizations like Palestine Action. So far as I can tell, they aren't systematically violent and brutal. Mostly they smash up hardware that they think would otherwise be used to oppress Palestinians. (I am making no judgement as to whether they're right about that, which is relevant to whether they're a Good Thing or a Bad Thing but not to whether they're terrorists.) Sometimes that leads to skirmishes with the police. On one occasion so far, one of them badly injured a police officer. It's very bad that that happened, but this all seems well short of what it would take to justify calling the organization a terrorist one.
- I don't think you should be posting the answer to the current question unobfuscated, especially when there's no way for curious readers to get any other question after having that one spoiled (other than waiting for, at the moment, 7 hours or so).
It's not quite true that "it genuinely could have been from any OT book". For one thing, when you make a guess it doesn't just tell you whether you got the book right, it also tells you whether you got the right section (major prophets, gospels, etc.). So if you guess that something's from Isaiah, after that you will at least know whether Ezekiel is still in the running.
I agree that this one could be from almost-but-not-quite anywhere in the OT, but bear in mind that the target audience (1) may well have read the whole thing multiple times and (2) may well have a good idea of the "tone" of various different books.
(I've been a godless heathen for many years, though I was a Christian for many years before that. In the present case my first guess was the same as yours but I didn't try your second guess, not only because of the can't-be-in-that-section thing but because I have a pretty good idea what sort of thing is in that book and I don't think there's much there that reads like this verse. It did, none the less, take me quite a lot of guesses. If I were still in the habit of reading the Bible a lot I expect I'd have done better. Which is kinda the point.)
- I'm not sure I agree. "GSM" is three syllables, versus five for "grammes per square metre". You can write it correctly using only characters everyone knows how to type quickly on their keyboard, versus either finding a way to get that superscript ² or else typing something like g/m^2 which is uglier and longer. And you can use it comfortably even if you are a complete mathematical ignoramus (you just need to know things like "larger numbers mean heavier paper" and "cheap printer paper is about 80gsm" and so forth) without the risk of turning g/m² into the nonsensical g/m2 or something.
(But arguably whoever decided on "gsm" should just have used "g", with the "per square metre" left implicit.)
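(For a rough sense of scale, my own back-of-the-envelope arithmetic rather than anything from the comment above: an A4 sheet is 210 mm × 297 mm, so

    0.210 m × 0.297 m ≈ 0.0624 m²
    0.0624 m² × 80 g/m² ≈ 5 g per sheet

which is why a 500-sheet ream of ordinary 80gsm printer paper weighs about 2.5 kg.)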
- > no one actually wants to buy a tungsten cube
Apparently some people do and don't even regret the purchase: https://thume.ca/2019/03/03/my-tungsten-cube/
- Those metrics are all aggregate ones. A group containing Bill Gates plus one destitute homeless person $1M in debt has great metrics of that sort. Total debt is a tiny fraction of total income. Income per person is huge, and doesn't stop being huge when you adjust for price differences or hours worked or anything else you care to adjust for. But that destitute homeless person with a $1M debt is still destitute and homeless and $1M in debt.
I haven't commented on "repayment behaviour" because your other comments don't actually mention that. Maybe there's something behind one of the links you posted that explains what you mean by it. I did have a quick look at the not-paywalled ones and didn't see anything of the kind.
(The above isn't a claim that actually the US economy is in a very real sense tanking, or that not-very-rich Americans are heading for destitution, or anything else so concrete. Just pointing out why the things you've been posting don't seem like they address the objection being made.)
- I am not 100% convinced by this. The matchup between their painting-based economic index (it's the first component from a PCA, the data for each painting being a vector of pixel counts for colours in each of 108 HSV-based bins) and GDP growth is pretty dubious, and in places where the two vary together the painting-based metric frequently changes several years before the allegedly-corresponding change in GDP growth.
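For concreteness, the construction being described is roughly the following (a minimal sketch in Python; the 12×3×3 hue/saturation/value split giving 108 bins is my guess at their binning, not something taken from the paper):

    import numpy as np
    from PIL import Image
    from sklearn.decomposition import PCA

    def hsv_histogram(path, bins=(12, 3, 3)):
        # One painting -> a 108-dimensional vector of colour-bin frequencies.
        img = Image.open(path).convert("HSV")
        pixels = np.asarray(img).reshape(-1, 3)   # rows of (H, S, V), each 0..255
        hist, _ = np.histogramdd(pixels, bins=bins, range=[(0, 256)] * 3)
        return hist.ravel() / hist.sum()

    def painting_index(paths):
        # The "economic index" is then the first principal component of the
        # stacked per-painting histograms.
        X = np.stack([hsv_histogram(p) for p in paths])
        return PCA(n_components=1).fit_transform(X).ravel()

That first component is just whatever single direction explains the most variance in the colour histograms; nothing about the construction ties it to anything economic.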
They have ad hoc explanations for the divergences and try to make lemonade out of the lemons by claiming that their index reveals "higher-frequency fluctuations that traditional series smooth over" but I am willing to bet that if they had had to predict the divergences before doing the calculations they wouldn't have been able to.
I think this is probably mostly pareidolia.
- Right. But (1) no longer needing the skill of thinking seems not obviously a good thing, and (2) in scenarios where in fact there is no need for humans to think any more I would be seriously worried about doomy outcomes.
(Maybe no longer needing the skill of thinking would be fine! Maybe what happens then is that people who like thinking can go on thinking, and people who don't like thinking and were already pretty bad at it outsource their thinking to AI systems that do it better, and everything's OK. But don't you think it sounds like the sort of transformation where if someone described it and said "... what could possibly go wrong?" you would interpret that as sarcasm? It doesn't seem like the sort of future where we could confidently expect that it would all be fine.)
- For the avoidance of doubt, I was not claiming that AI is the worst thing ever. I too think that complaints about that are generally overblown. (Unless it turns out to kill us all or something of the kind, which feels to me like it's unlikely but not nearly as close to impossible as I would be comfortable with[1].) I was offering examples of ways in which LLMs could plausibly turn out to do harm, not examples of ways in which LLMs will definitely make the world end.
Getting worse at mental arithmetic because of having calculators didn't matter much because calculators are just unambiguously better at arithmetic than we are, and if you always have one handy (which these days you effectively do) then overall you're better at arithmetic than if you were better at doing it in your head but didn't have a calculator. (Though, actually, calculators aren't quite unambiguously better because it takes a little bit of extra time and effort to use one, and if you can't do easy arithmetic in your head then arguably you have lost something.)
If thinking-atrophy due to LLMs turns out to be OK in the same way as arithmetic-atrophy due to calculators has, it will be because LLMs are just unambiguously better at thinking than we are. That seems to me (a) to be a scenario in which those exotic doomy risks become much more salient and (b) like a bigger thing to be losing from our lives than arithmetic. Compare "we will have lost an important part of what it is to be human if we never do arithmetic any more" (absurd) with "we will have lost an important part of what it is to be human if we never think any more" (plausible, at least to me).
[1] I don't see how one can reasonably put less than 50% probability on AI getting to clearly-as-smart-as-humans-overall level in the next decade, or less than 10% probability on AI getting clearly-much-smarter-than-humans-overall soon after if it does, or less than 10% probability on having things much smarter than humans around not causing some sort of catastrophe, all of which means a minimum 0.5% chance of AI-induced catastrophe in the not-too-distant future. And those estimates look to me like they're on the low side.
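Explicitly, that 0.5% is just the product of the three minima:

    0.5 × 0.1 × 0.1 = 0.005 = 0.5%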
- As someone else said, we don't know for sure. But it's not like there aren't some at-least-kinda-plausible candidate harms. Here are a few off the top of my head.
(By way of reminder, the question here is about the harms of LLMs specifically to the people using them, so I'm going to ignore e.g. people losing their jobs because their bosses thought an LLM could replace them, possible environmental costs, having the world eaten by superintelligent AI systems that don't need humans any more, use of LLMs to autogenerate terrorist propaganda or scam emails, etc.)
People become like those they spend time with. If a lot of people are spending a lot of time with LLMs, they are going to become more like those LLMs. Maybe only in superficial ways (perhaps they increase their use of the word "delve" or the em-dash or "it's not just X, it's Y" constructions), maybe in deeper ways (perhaps they adapt their _personalities_ to be more like the ones presented by the LLMs). In an individual isolated case, this might be good or bad. When it happens to _everyone_ it makes everyone just a bit more similar to one another, which feels like probably a bad thing.
Much of the point of an LLM as opposed to, say, a search engine is that you're outsourcing not just some of your remembering but some of your thinking. Perhaps widespread use of LLMs will make people mentally lazier. People are already mostly very lazy mentally. This might be bad for society.
People tend to believe what LLMs tell them. LLMs are not perfectly reliable. Again, in isolation this isn't particularly alarming. (People aren't perfectly reliable either. I'm sure everyone reading this believes at least one untrue thing that they believe because some other person said it confidently.) But, again, when large swathes of the population are talking to the same LLMs which make the same mistakes, that could be pretty bad.
Everything in the universe tends to turn into advertising under the influence of present-day market forces. There are less-alarming ways for that to happen with LLMs (maybe they start serving ads in a sidebar or something) and more-alarming ways: maybe companies start paying OpenAI to manipulate their models' output in ways favourable to them. I believe that in many jurisdictions "subliminal advertising" in movies and television is illegal; I believe it's controversial whether it actually works. But I suspect something similar could be done with LLMs: find things associated with your company and train the LLM to mention them more often and with more positive associations. If it can be done, there's a good chance that eventually it will be. Ewww.
All the most capable LLMs run in the cloud. Perhaps people will grow dependent on them, and then the companies providing them -- which are, after all, mostly highly unprofitable right now -- decide to raise their prices massively, to a point at which no one would have chosen to use them so much at the outset. (But at which, having grown dependent on the LLMs, they continue using them.)
- The "obvious" thing to try, which presumably some people are trying pretty hard right now[1], is to (1) use a mathematically-tuned LLM like this one to propose informal Next Things To Try, (2) use an LLM (possibly the same LLM) to convert those into proof assistant formalism, (3) use the proof assistant to check whether what the LLM has suggested is valid, and (4) hook the whole thing together to make a proof-finding-and-verifying machine that never falsely claims to have proved something (because everything goes through that proof assistant) and therefore can tolerate confabulations from LLM #1 and errors from LLM #2 because all those do is waste some work.
[1] IIRC, AlphaProof is a bit like this. But I bet that either there's a whole lot of effort on this sort of thing in the major AI labs, or else there's some good reason to expect it not to work that I haven't thought of. (Maybe just the "bitter lesson", I guess.)
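For what it's worth, the loop I have in mind looks roughly like this (a sketch only; the callables are hypothetical stand-ins for the two LLMs and the proof assistant, not anyone's actual API):

    from typing import Callable, List, Optional, Tuple

    def prove(goal: str,
              propose: Callable[[str, List[str]], str],    # LLM #1: informal "next thing to try"
              formalise: Callable[[str], str],             # LLM #2: turn the idea into proof-assistant syntax
              check: Callable[[str, List[str], str], Tuple[bool, List[str]]],  # the proof assistant
              done: Callable[[str, List[str]], bool],      # is the goal fully proved yet?
              max_attempts: int = 1000) -> Optional[List[str]]:
        # Only steps the proof assistant accepts are kept, so confabulations from
        # LLM #1 and translation errors from LLM #2 just waste an iteration; they
        # can never produce a false claim of proof.
        verified: List[str] = []
        for _ in range(max_attempts):
            idea = propose(goal, verified)
            candidate = formalise(idea)
            ok, new_steps = check(goal, verified, candidate)
            if ok:
                verified.extend(new_steps)
                if done(goal, verified):
                    return verified
        return None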
It would doubtless be challenging to get such a system to find large difficult proofs, because it's not so easy to tell what's making progress and what isn't. Maybe you need LLM #3, which again might or might not be the same as the other two LLMs, to assess what parts of the attempt so far seem like they're useful, and scrub the rest from the context or at least stash it somewhere less visible.
It is, of course, also challenging for human mathematicians to find large difficult proofs, and one of the reasons is that it's not so easy to tell what's making progress and what isn't. Another major reason, though, is that sometimes you need a genuinely new idea, and so far LLMs aren't particularly good at coming up with those. But a lot of new-enough-ideas[2] are things like "try a version of this technique that worked well in an apparently unrelated field", which is the kind of thing LLMs aren't so bad at.
[2] Also a lot of the new-enough-ideas that mathematicians get really happy about. One of the cool things about mathematics is the way that superficially-unrelated things can turn out to share some of their structure. If LLMs get good at finding that sort of thing but never manage any deeper creativity than that, it could still be enough to produce things that human mathematicians find beautiful.
- I think it's fair to say that summing the series directly would be slow, even if it's not slow when you already happen to have summed the previous n-1 terms.
Not least because for modestly-sized target sums the number of terms you need to sum is more than is actually feasible. For instance, if you're interested in approximating a sum of 100 then you need something on the order of exp(100) or about 10^43 terms. You can't just say "well, it's not slow to add up 10^43 numbers, because it's quick if you've already done the first 10^43-1 of them".
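Spelling out the arithmetic (assuming the series in question is the harmonic series, which is what that exp(100) figure suggests):

    H_n = 1 + 1/2 + ... + 1/n ≈ ln(n) + γ,   γ ≈ 0.5772
    H_n ≈ 100  =>  n ≈ e^(100 - γ) ≈ 1.5 × 10^43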
- Nah, look at their posting history. In the last hour they've posted a whole slew of comments with the same sort of tone and the same AI-ish stylistic quirks, all in quite surprisingly quick succession if the author is actually reading the things they're commenting on and thinking about them before posting. (And their comments before this posting spree are quite different in style.) I won't say it's impossible for this to be human work, but it sure doesn't look like it.
- For that sort of task: no, Tao isn't all that much better than a "regular researcher" at relatively easy work. But the tougher the problems you set them at, the more advantage Tao will have.
... But mathematics gets very specialized, and if it's a problem in a field the other guy is familiar with and Tao isn't, they'll outperform Tao unless it's a tough enough problem that Tao takes the time to learn a new field for it, in which case maybe he'll win after all through sheer brainpower.
Yes, Tao is very very smart, but it's not like he's 100x better at everything than every other mathematician.
- Could you name a specific person whose estimate of when we might get AGI has doubled twice since 2022? Or do you mean you found one person with a really short estimate in 2022, another person with a longer one in 2024, and another with a longer one now?
Also, if you compare with 50 years ago, the interval experts commonly predict before AGI has (better than) halved since then.
(Of course the experts could turn out to be hilariously wrong, for fusion or AI or both. I just don't think your comparison is anything like apples-to-apples.)
- That description is obviously written by an AI. Has anyone actually checked whether it's an accurate description rather than just yet another LLM Making Stuff Up?
(Also, I don't think there's anything very NSFW on the far end of that link, although it describes something used for making NSFW writing.)
- Fun fact: in the original radio-series version of HHGttG the name was "Paul Neil Milne Johnstone" and allegedly he was an actual person known to Douglas Adams, who was Not Amused at being used in this way, hence the name-change in the books.
(I do not know whether said actual person actually wrote poetry or whether it was anywhere near as bad as implied. Online sources commonly claim that he did and it was, but that seems like the sort of thing that people might write without actually knowing it to be true.)
[EDITED to add:] Actually, some of those online sources do in fact give what looks like good reason to believe that he did write actual poetry, and some reason to suspect it wasn't all that bad. I haven't so far found anything that seems credibly to be an actual poem written by Johnstone. There is something on-screen at the appropriate point in the TV series, but it seems very unlikely that it's a real poem written by Paul Johnstone. There's a Wikipedia talk page for Johnstone (even though there's no longer an actual article) which quotes what purport to be two lines from one of his poems, on which the on-screen Terrible Poetry may be loosely based. It doesn't seem obviously very bad poetry, but it's hard to tell from so small a sample.
- Note that at the time this was written the word "quaint" had both (1) roughly its modern meaning -- unusual and quirky, with side-orders of prettiness and (at the time) ingenuity, fastidiousness, and pride -- and also (2) a rather different meaning, equivalent to a shorter word ending in -nt.
So, even less couched than some readers might realise.
- I think companies always prioritized their own interests.
A company can increase its profits (1) by improving its products and services, so that it will get more customers or customers willing to pay more, or (2) by increasing how much of its revenue is profit by (e.g.) cutting corners on quality or raising prices or selling customers' personal information to third parties.
Either of those can work. Yes, a noble idealistic company might choose #1 over #2 out of virtue, but I think that if most companies picked #1 in the past it's because they thought they'd get richer that way.
I think what's happened is that for some reason #2 has become easier or more profitable, relative to #1, over time. Or maybe it used not to be so clearly understood that #2 was a live option, and #1 seemed safer, but now everyone knows that you can get away with #2 so they do that.
- What looks like the relevant table has a summary line saying "geometric mean: 1.45x" so I think that in this case "45% slower" means "times are 1.45x as long".
(I think I would generally use "x% slower" to mean "slower by a factor of 1+x/100", and "x% faster" to mean "faster by a factor of 1+x/100", so "x% slower" and "x% faster" are not inverses, you can perfectly well be 300% faster or 300% slower, etc. I less confidently think that this is how most people use such language.)
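A worked example of that convention, with made-up timings:

    baseline 10 s, new run 14.5 s: 14.5/10 = 1.45, i.e. "45% slower"
    "45% faster" would instead mean 10/1.45 ≈ 6.9 s
    "300% slower" means 4x the time; "300% faster" means a quarter of it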
- It doesn't particularly matter, but it looks to me as if there are a couple of errors in the fragment of transcript provided by the author.
It says "the regular exercise thereof" where the scan looks to me much more like "the regular course thereof".
And -- this one is smaller but gave me more trouble -- there's a misplaced comma: it should be after "thereof", not after "intercept". (The sentence structure is a bit weird even with the comma in the right place, but having it in the wrong place makes it even more confusing.)
- "At the end of July, Amazon reported second quarter results which beat Wall Street expectations on several counts, including a 13% year over year increase in sales to $167.7bn (£125bn)."
The stock-owning class is in a boom. The working class is in a recession.
(People who have stock-market investments and need to work for a living are somewhere in between.)
The average of one billionaire gaining £20M and a hundred middle-class folks each losing £100k is solidly positive, so this looks like a "growing economy" by many of the usual metrics.
I am not sure whether the people who are benefiting from all this have noticed how often this sort of dynamic in the past has led to torches and pitchforks and the like.
- > The text adds some pieces of information you wouldn't get from the images alone
But this is exactly where being AI-written bothers me! I don't really mind the style (the LLMs have learned to write a particular way because 1. people write that way and 2. other people like it) and I don't have the "boooo stochastic parrot plagiarism machines booooooooo" sense of disgust at AI that some people have, but I do know that when LLMs write things those things are ... not always true.
(Of course when people write things they aren't always true either, but the LLMs get things wrong more than humans do.)
Which means that when the article tells me something interesting -- flour sacks! mobile cinemas! exhibited in galleries! -- I can't trust it. And that, for me, is the main damage that outsourcing your writing to an LLM does: it destroys trust.
- (I would say you'd been violent to me if you'd slapped me in the face. I would rather be slapped in the face than have my house ransacked and smashed up. Some not-violent things are worse than some violent things.)
If you dropped a bomb on a weapons factory that had, or plausibly could have had, people in it then that would unquestionably be an act of violence. If you somehow knew that there was nothing there but hardware then I wouldn't call it an act of violence.
(In practice, I'm pretty sure that when you drop a bomb you scarcely ever know that you're not going to injure or kill anyone.)
I'm not claiming that this is the only way, or the only proper way, to use the word "violence". But, so far as I can tell from introspection, it is how I would use it.
There are contexts in which I would use the word "violence" to include destruction that only affects things and not people. But they'd be contexts that already make it clear that it's things and not people being affected. E.g., "We smashed up that misbehaving printer with great violence, and very satisfying it was too".