- i mean that the business models of google and facebook would go poooof
- in fact, cookies legible to anything except the single sandboxed webpage running on your local browser would be illegal and thus never exist to begin with
Seriously, why can't we just have a law that makes it entirely illegal to retain any personally identifiable information in any form that is legible to the retainer?
You can store my data for me, but only encrypted, and it can be decrypted only in a sandbox. And the output of the sandbox can be sent only back to me, the user. Decrypting the personal data for any other use is illegal. If an audit shows a failure here, the company loses 1% of revenue the first time, then 2%, then 4%, etc.
And companies must offer to let you store all of your own data on your own cloud machine. You just have to open a port to them with some minimum guarantees of uptime, etc. They can read/write a subset of data. The schema must be open to the user.
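To make the "illegible to the retainer" idea concrete, here's a minimal Python sketch of just the storage half, assuming a symmetric key that never leaves the user's machine. The BlindStore class and the names in it are purely my illustration (not any company's actual API), and a real scheme would still need the sandbox, audit, and open-schema pieces above.

```python
# Toy sketch, not a real design: the service stores only ciphertext and
# holds no keys, so the retained data is illegible to the retainer.
# Requires `pip install cryptography`.
from cryptography.fernet import Fernet


class BlindStore:
    """Hypothetical service-side store: it can hold and return blobs,
    but it has no keys, so it can never read what it stores."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, user_id: str, blob: bytes) -> None:
        self._blobs[user_id] = blob  # only ciphertext ever arrives here

    def get(self, user_id: str) -> bytes:
        return self._blobs[user_id]


# User side: the key is generated and kept locally.
key = Fernet.generate_key()
fernet = Fernet(key)

store = BlindStore()
store.put("alice", fernet.encrypt(b'{"watch_history": ["cat videos"]}'))

# Only the key holder (the user, or a sandbox acting strictly on the
# user's behalf) can turn the stored blob back into plaintext.
print(fernet.decrypt(store.get("alice")).decode())
```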
Any systems that have been developed from personal user data (e.g. recommendation engines, trained models) must be destroyed. Same applies: if you're caught using a system that was trained in the past on aggregated data across multiple users, you face the same percentage fines.
The only folks who maybe get a pass are public healthcare companies for medical studies.
Fixed.
(But yeah it'll never happen because most of the techies are eager to screw over everyone else for their own gain. And they'll of course tell you it's to make the services better for you.)
- If morality never factors into your own decisions, you don't get to be upset when it doesn't factor into other people's. In other words, society just sucks when everyone thinks this way, even if it is true that resolving it is hard.
- TLDR: AI company uses AI to write blog post about abusive AI chrome extension
(Yes it really is AI-written / AI-assisted. If your AI detectors don’t go off when you read it you need to be retrained.)
- Yes, it is, 100%. One does not even need an AI detector; it's obvious from the first sentence: "brought a lot of context, more scars, and more pattern recognition." "Like a lineman sees a frayed cable." Lol. This is the sloppiest slop there is.
But you got downvoted for pointing out that it was slop. I got similarly downvoted a couple of days ago. Hacker News folk seem uninterested in having it pointed out when AI is being used to generate posts.
I'd guess it's some combination of a) they like using AI themselves, and b) they can't distinguish AI writing themselves. And they turn to all manner of excuses like "AI detectors do not work" or "non-native speakers need a way to produce articles, too". It's a crappy time to be a humanist, or really to care about anything, it seems.
- The article is cool; there's no doubt. But it could have been written without AI, and it would be better to write the article in a human voice than to proliferate AI slop. Is it really so horrible to take the time to write things ourselves?
If you read this article and don't observe the tells of AI content, you have a problem (or maybe you don't, because no one cares anymore).
The tells in this article: there are lots of parts that look like AI - the specific pattern of lists, the "not this but that" construction, particular phrases that are otherwise relatively unlikely.
For example, the strange parallelism here (including the rhyming endings): "Sunscreen balms – Licked off immediately / Fabric nose shields – She rubbed them off constantly / Keeping her indoors – Reduced her quality of life drastically / Reapplying medication constantly – Exhausting and ineffective". The style is cloying and unnatural.
"That solution didn't exist. So we decided to create it."
"For the holidays, I even made her a bright pink version, giving her a fashionable edge." -- wtf is a fashionable edge? A fashionable edge over what?
"I realized this wasn't just Billie's story—it was a problem affecting dogs everywhere."
Sure, these could just be clichéd style (and increasingly we will probably see that, as the AI garbage infects the writing style of actual humans), but they look like AI. It's not as bad as some, but it's there.
Everyone should be disclosing the use of AI. And every time someone uses AI, he should say "I don't care enough about you the reader to actually put the time into writing this myself."
- Folks should be disclosing when they're using AI to write articles. AI style is garbage. It not only pollutes the internet but will steadily infect the writing style of others.
- erm..no, because i don't suck
- There is a lot of dislike for AI detection in these comments. Pangram Labs (PL) claims very low false positive rates. Here's their own blog post on the research: https://www.pangram.com/blog/pangram-predicts-21-of-iclr-rev...
I increasingly see AI-generated slop across the internet - on Twitter, in NYTimes comments, in blog/Substack posts from smart people. Most of it is obvious AI garbage and it's really f*ing annoying. It largely has the same obnoxious style and really bad analogies. Here's an (impossible to realize) proposal: any time AI-generated text is used, we should get to see the whole interaction chain that led to its production. It would be like a student writing an essay who asks a parent or friend for help revising it. There's clearly a difference between revisions and substantial content contribution.
The notion that AI is ready to be producing research or peer reviews is just dumb. If AI correctly identifies flaws in a paper, the paper was probably real trash. Much of the time, errors are quite subtle. When I review, after I write my review and identify subtle issues, I pass the paper through AI. It rarely finds the subtle issues. (Not unlike the time it tried to debug my code and spent all its time focused on an entirely OK floating point comparison.)
For anecdotal issues with PL: I am working on a 500-word conference abstract. I spent a long while working on it but then dropped it into Opus 4.5 to see what would happen. It made very minimal changes to the actual writing, but the abstract (to me) reads a lot better even with its minimal rearrangements. That surprises me. (But again, these were very minimal rearrangements: I provided ~550 words and got back a slightly reduced 450 words.) Perhaps more interestingly, PL's characterizations are unstable. If I check the original Claude output, I get "fully AI-generated, medium". If I drop in my further refined version (where I clean up Claude's output), I get "fully human". Some of the aspects which PL says characterize the original as AI-generated (particular n-grams in the text) are actually from my original work.
The realities are these: a) AI content sucks (especially in style); b) people will continue to use AI (often to produce crap) because doing real work is hard and everyone else is "sprinting ahead" using the semi-undetectable (or at least plausibly deniable) AI garbage; c) slowly the style of AI will almost certainly infect the writing style of actual people (ugh) - this is probably already happening; I think I can feel it in my own writing sometimes; d) AI detection may not always work, but AI-generated content is definitely proliferating. This *is* a problem, but in the long run we likely have few solutions.
- This is AI generated, which is annoying.
- Perhaps it's that a global solution in the language of set theory was hard to find, but distributed systems, which need to provide guarantees from local node behavior alone, without access to global state, offered an alternate perspective. They weren't designed to do so, but they ended up being useful.
- One difference is that if it were found that a psychiatrist or other professional had encouraged a patient's delusions or suicidal tendencies, then that person would likely lose his/her license and potentially face criminal penalties.
We know that humans should be able to consider the consequences of their actions and thus we hold them accountable (generally).
I'd be surprised if comparisons in the self-driving space have not been made: if Waymo is better than the average driver but still gets into an accident, who should be held accountable?
Though we also know that with big corporations, even clear negligence that leads to mass casualties does not often result in criminal penalties (e.g., Boeing).
- Is this also partially AI generated? What's with the repeated short phrases? Is this just everyone's style now?
- Pretty sure this article is at least partially AI-written. Interesting content. Annoying style.
- I had a convo with a senior CS prof at Stanford two years ago. He was excited about LLM use in paper writing to, e.g., "lower barriers" to idk, "historically marginalized groups" and to "help non-native English speakers produce coherent text". Etc, etc - all the normal tech folk gobbledygook, which tends to forecast great advantage with minimal cost...and then turn out to be wildly wrong.
There are far more ways to produce expensive noise with LLMs than signal. Most non-psychopathic humans tend to want to produce veridical statements. (Except salespeople, who have basically undergone forced sociopathy training.) At the point where a human has learned to produce coherent language, he's also learned lots of important things about the world. At the point where a human has learned academic jargon and mathematical nomenclature, she has likely also learned a substantial amount of math. Few people want to learn the syntax of a language with little underlying understanding. Alas, this is not the case with statistical models of papers!
- Perhaps what is meant is "blame the development of LLMs." We don't "blame guns" for shootings, but certainly with reduced access to guns, shootings would be fewer.
- That whole koi blog post is sloppy AI garbage, even if it's accurate. So obnoxious.
But let me entertain it for a moment: prior to knowing, e.g., that plastics or CO2 are bad for the environment, how should one know that they are bad for the environment? Fred, the first person to realize this, would run around saying "hey guys, this is bad".
And here is where I think it gets interesting: the folks making all the $ producing the CO2 and plastics are highly motivated to say "sorry Fred, your science is wrong". So when it finally turns out that Fred was right, were the plastics/CO2 companies morally wrong in hindsight?
You are arguing that morality is entirely socially determined. This may be partially true, but IMO, only economically. If I must choose between hurting someone else and dying, I do not think there is a categorically moral choice there. (Though Mengzi/Mencius would say that you should prefer death; see the fish and the bear's paw passage in 告子上.) So, to the extent that your life or life-preserving business (i.e. source of food/housing) demands hurting others (producing plastics, CO2), then perhaps it is moral to do so. But to the extent that your desire for fancy cars and first-class plane tickets demands producing CO2...well (ibid.).
The issue is that the people who benefit economically are highly incentivized to object to any new moral reckoning (e.g. tracking people is bad; privacy is good; selling drugs is bad; building casinos is bad). To the extent that we care about morality (and we seem to), those folks benefitting from these actions can effectively lobby against moral change with propaganda. And this is, in fact, exactly what happens politically. Politics is, after all, an attempt to produce a kind of morality. It may depend on whom you follow, but my view would be that politics should be an approach to utilitarian management of resources, in service of the people. But others might say we need to be concerned for the well-being of animals. And still others would say that we must be concerned with the well-being of capital, or even AIs! In any case, large corporations effectively lobby against any moral reckoning against their activities and thus avoid regulation.
The problem with your "socially determined morality" (though admittedly, I increasingly struggle to see a practical way around this) is that, though it is in some ways true (since society is economics and therefore impacts one's capacity to live), you end up in a world in which everyone can exploit everyone else maximally. There is no inherent truth in what the crowd believes (though again, crowd beliefs do affect short-term and even intermediate-term economics, especially in a hyper-connected world). The fact that most white people in the 1700s believed that it was not wrong to enslave black people does not make that right. The fact that many people believed tulips were worth millions of dollars does not make it true in the long run.
Are we running up against truth vs practicality? I think so. It may be impractical to enforce morality, but that doesn't make Google moral.
Overall, your arguments are compatible with a kind of nihilism: there is no universal morality; I can adopt whatever morality is most suitable to my ends.
I make one final point: how should a socially determined morality handle slavery and plastics? It takes a truly unfeeling sort of human to enslave another human being. It is hard to imagine that none of these people felt that something was wrong. Though Google is not enslaving people, nor are its actions tantamount to Nazism, there is plenty of recent writing about the rise of technofascism. The EAs would certainly sacrifice the "few" of today's people for the nebulous "many" of the future over which they will rule. But they have constructed a narrative in which the future's many need protection. There are moral philosophies (e.g. utilitarianism) that would justify this. And this is partially because we have insufficient knowledge of the future, and also because the technologies of today make the possible futures of tomorrow highly variable.
I propose instead that, especially in this era of extreme individual power (i.e. the capacity to be "loud"; see below), a different kind of morality is useful: the wielding of power is bad. As your power grows, so too does the responsibility to consider its impact on others and to more aggressively judge every action one takes under the Veil of Ignorance. Any time we affect the lives of others around us, we are at greater risk of violating this morality. See, e.g., Tools for Conviviality or Silence is a Commons (https://www.hackerneue.com/item?id=44609969). Google and the tech companies are being extremely loud, and you'd have to be an idiot not to see that it's harmful. If your mental contortions allow you to say "harm is moral because the majority don't object," well, that looks like nihilism and certainly doesn't get us anywhere "good". But my "good" cannot be measured, and your good is GDP, so I suppose I will lose.