
Before the bubble pops, which I think is inevitable, there will be many stories like this one, and a lot of people will be scammed, manipulated, and harmed. It might take years for the general consensus about these tools to turn negative. All the while, the wealthy and powerful will continue to reap the benefits, and those on slightly lower rungs will fight to take their place. And even if public perception shifts, the power might be so concentrated that it could be impossible to dislodge without violent means.

What a glorious future we've built.


autoexec
> It might take years for the general consensus about these tools to turn negative.

The only thing I'm seeing offline is people who already think AI is trash, untrustworthy, and harmful, while also occasionally finding it convenient when the stakes are extremely low (random search results, mostly) or fun as a toy ("Look, I'm a Ghibli character!").

I don't think it'll take long for the masses to sour on AI. The more aggressively companies push it on them, or the more it negatively impacts their lives when someone they depend on (and who should know better) uses it and it screws up, the quicker that will happen.

gdbsjjdn
I work in Customer Success so I have to screenshare with a decent number of engineers working for customers - startups and BigCos.

The number of them who just blindly put shit into an AI prompt is incredible. I don't know if they were better engineers before LLMs? But I just watch them blindly pass flags that don't exist to CLIs and then throw their hands up. I can't imagine it's faster than a (non-LLM) Google search or using the -h flag, but they just turn their brains off.

An underrated concern (IMO) is the impact of COVID on cognition. I think a lot of people who got sick have gotten more tired and find this kind of work more challenging than they used to. Maybe they have a harder time "getting in the zone".

Personally, I still struggle with Long COVID symptoms. This includes brain fog and difficulty focusing. Before the pandemic I would say I was in the top 10% of engineers for my narrow slice of expertise - always getting exceptional perf reviews, never having trouble moving roles and picking up new technologies. Nowadays I find it much harder to get started in the morning, and I have to take more breaks during the day to reset my focus. At 5PM I'm exhausted and I can't keep pushing at a problem into the evening.

I can see how the same kind of cognitive fatigue would make LLM "assistance" appealing, even if it's wrong, because it's so much less work.

bluefirebrand
Reading this, I'm wondering if I'm suffering from "Long COVID"

I've recently had tons of memory issues and brain fog. I thought it was related to stress, and it's severe enough that I'm on medical leave from work right now.

My memory is absolutely terrible

Do you know if it's possible to test or verify whether it's COVID-related?

gdbsjjdn
I haven't had a lot of success so far in getting a diagnosis; there are a lot of different things that could be wrong. Chronic Fatigue Syndrome is one place to start. I'm seeing an allergist about MCAS, and I've had limited success taking antihistamines and anti-inflammatory drugs.

Mostly you talk to your doctor and read stuff and advocate for more testing to figure out why you're not able to function like before. Even if it's not "Long COVID" it definitely sounds like something is causing these problems and you should get it looked at.

Lu2025
> An underrated concern (IMO) is the impact of COVID on cognition

Car accidents came down from the COVID uptick, but only slightly. Aviation... ugh. And there is some evidence it accelerates Alzheimer's and other dementias. We are so screwed.

tokioyoyo
Counter data point: the people around me use ChatGPT for basically anything and say it's good enough.
glotzerhotze
Same here. People use it like Google for searching answers. It's a shortcut for them to not have to screen results and reason about them.
amalcon
This is precisely the problem: users still need to screen and reason about the results of LLMs. I am not sure what is generating this implied permission structure, but it does seem to exist.

(I don't mean to imply that parent doesn't know this, it just seems worth saying explicitly)

tokioyoyo
It’s only a problem for people who care about its precision. If it’s right about 80-90% of stuff, it’s good enough.
Lu2025
> say it’s good enough

How do they know?

tokioyoyo
Doesn't matter. If they feel it's "good enough", that's already "good enough". The supermajority of the world doesn't revolve around truth-seeking, fact-checking, or curiosity.
intended
The things I have noted offline include an HK case where someone got a link to a Zoom call with what seemed to be his teammates and CFO, and then transferred money as per the CFO's instructions.

The error here was to click on a phishing email.

But something I have seen myself is Tim Cook talking about a crypto coin right after the 2024 Apple keynote, on a YT channel that showed the Apple logo. It took me a bit to realize and reassure myself that it was a scam, even though it was a video from the shoulders up.

The bigger issue we face isn’t the outright fraud and scamming, it’s that our ability to make out fakes easily is weakened - the Liar’s dividend.

It’s by default a shot in the arm for bullshit and lies.

On some days I wonder if the inability to sort between lies, misinformation, initial ideas, fair debate, argument, theory and fact at scale - is the great filter.

LilBytes
The tragic part of fraud is that it's not too different from occupational health and safety.

The rules and standards we take for granted were built with blood. For fraud? They're built on a path of lost livelihoods and manipulated good intent.

pyman
How do you know this is fraud and not the actions of former employees in Kenya [1] who were exploited [2] to train the models?

[1] https://www.cbsnews.com/amp/news/ai-work-kenya-exploitation-...

[2] https://www.theguardian.com/technology/2023/aug/02/ai-chatbo...

_Algernon_
We got the boring version of the cyberpunk future. No cool body mods, neon cityscapes, or space travel. Just megacorps manipulating the masses to their benefit.
filoeleven
The cool body mods are coming!

The work at the Levin Lab ( https://drmichaellevin.org/ ) is making great progress on the basic science that supports this. They can make two-headed planaria, regenerate frog limbs, and cure cancer in tadpoles, all via bioelectric communication with cellular networks. No gene editing.

Levin believes this stuff will be widely available to humans within the next 10 years, and has talked about how widespread body-modding is something we're going to have to wrestle with societally. He is of course very close to the work, but his cautious nature and the lab's astounding results give that 10-year prediction some weight. From his blog:

> We were all born into physical and mental limitations that were set at arbitrary levels by chance and genetics. Even those who have “perfect” standard human health and capabilities are limited by anatomical decisions that were not made with anyone’s well-being or fulfillment in mind. I consider it to be a core right of sentient beings to (if they wish) move beyond the involuntary vagaries of their birth and alter their form and function in whatever way suits their personal goals and potential. (Copied from https://thoughtforms.life/faqs-from-my-academic-work/)

Terr_
> cellular networks

I often like to point out--satisfying a contrarian streak--that our original human equipment is literally the most mind-bogglingly complicated nanotechnology beyond our understanding, packed with dozens of incredible features we cannot imitate with circuits or chrome.

So as much as I like the aesthetics of cyberpunk metal arms, keeping our OEM parts is better. If we need metal bodies at a construction site, let them be remote-controlled bodies that stay there for the next shift to use.

fc417fc802
At this point the biochemists have managed to create amino-acid-based structures that are stronger than the vast majority of building materials. Surely enhanced organic parts are the way to go?

I see no reason to expect that superwood is incompatible with in-place biological synthesis from scratch. That's entirely organic, and there's no question that its material properties far exceed those of our OEM specifications.

For most of our use-cases, we can probably do even better, since we don't necessarily want or require the product to remain "alive."

For example, dental enamel is a really neat crystalline material, and a biological process makes it before withdrawing to use it as a shield.

tclancy
In retrospect, it should have been obvious. I guess I should have known it would all be more Repo Man than Blade Runner. I just didn’t imagine so many people cheering for the non-Wolverines side in Red Dawn.

(Now I want to change the Blade Runner reference to something with Harry Dean Stanton in it just for consistency)

Lu2025
Oh well, at least the futuristic sunglasses are back in fashion.
thunky
> Before the bubble pops, which I think is inevitable

Curious what you think a popping bubble looks like?

A stock market crash and recession, where innocent bystanders lose their retirements? Or only AI speculators taking the brunt of the losses?

Will Google, Meta, etc. stop investing in AI because nobody uses it post-crash? Or will it be just as prevalent as (or more prevalent than) today, but with profits concentrated in the winning/surviving companies?

imiric OP
We've seen this before in 1983 and 2000. Many companies will fold, and those that don't will take a substantial hit. Public sentiment about "AI" will sour, but after that a new breed of more practical tools will emerge under different, more fairly marketed branding.

I do think that the industry and this technology will survive, and we'll enjoy many good applications of it, but it will take a few more years of hype and grifting to get there.

Unless, of course, I'm entirely wrong and the predicted AI 2027 timeline[1] comes to pass, and we have ASI by the end of the decade, in which case the world will be much different. But I'm firmly in the skeptical camp about this, as it seems like another product of the hype machine.

[1]: I just took a closer look at ai-2027.com and here's their prediction for 2029 in the conservative scenario:

> Robots become commonplace. But also fusion power, quantum computers, and cures for many diseases. Peter Thiel finally gets his flying car. Cities become clean and safe. Even in developing countries, poverty becomes a thing of the past, thanks to UBI and foreign aid.

Yeah, these people are full of shit.

thunky
> We've seen this before in 1983 and 2000. Many companies will fold, and those that don't will take a substantial hit.

Makes sense, but if the negative effect of the bubble popping is largely limited to AI startups and speculators, while the rest of us keep enjoying the benefits of it, then I don't see why the average person should be too concerned about a bubble.

In 2000, cab drivers were recommending tech stocks. I don't see this kind of thing happening today.

> Yeah, these people are full of shit.

I think it's fair to keep LLMs and AGI separate when we're talking about "AI". LLMs can make a huge impact even if AGI never happens. We're already seeing it now, imo.

AI 2027 says:

  - Early 2026: Coding Automation
  - Late 2026: AI Takes Some Jobs

These things are already happening today without AGI.
fc417fc802
> In 2000, cab drivers were recommending tech stocks. I don't see this kind of thing happening today.

Nontechnical acquaintances with little to no financial background have been (rather cluelessly) debating Nvidia versus other ML hardware stocks. I'd say we're in exactly the same territory.

thunky
Counter-anecdote: several people I talk to think the only way to use "AI" is via the Google "AI Summary" at the top of search results.
fc417fc802
> Cities become clean and safe. Even in developing countries, poverty becomes a thing of the past, thanks to UBI and foreign aid.

The other things on that list seem fairly reasonable (if uncertain). Those last two not only depend on far-reaching political transformations in a specific direction, but even then fail to account for lag time in the real world. If you started moving in the right direction in (say) 2027, it would presumably take many years to get there.

It's a weird mix of "already happening", "well yeah, obviously", and "clearly full of shit".

imiric OP
> The other things on that list seem fairly reasonable (if uncertain).

Nah. Thinking that poverty will be significantly reduced, let alone eliminated, in 4 years is simply delusional, primarily because any reduction of poverty won't happen because of AI, but in spite of it. All AI does is concentrate wealth among the wealthy and increase inequality. This idea that wealth will trickle down is a fantasy that has been sold by those in power for decades.

And UBI? That's another pipe dream. There have been very limited pilots around the world, but no indication that it's something governments are willing to adopt globally. Let alone those where "socialism" is a boogeyman.

The entire document is full of similar claims that AI will magically solve all our problems. Never mind the fact that they aggrandize the capabilities of the technology and assume exponential growth is guaranteed. Not only are the timelines wrong, the predictions themselves have no basis in reality. It's pure propaganda produced by tech bros who can't see the world outside of their bubble.

fc417fc802
That's ... exactly what I said? That the poverty and UBI and "woooo clean cities" claims are obvious bullshit.

However the other things (the ones I didn't quote) seem quite reasonable on the whole. Robots are well on their way to becoming commonplace already. Quantum computers exist, although it remains to be seen how far and how fast they scale in practice. Fusion power continues to make incremental gains, which machine learning techniques have noticeably accelerated. Cures for many diseases easily checks out - ML has been broadly applied to protein structure prediction with great success for a while now. Helicopters obviously already exist, but quite a few autonomous electric flying cars are in the works and appear likely to be viable ... at least eventually.

But people were also hating on media piracy, video games, and the internet in general.

The dotcom bubble popped, but the general consensus didn't become negative.

imiric OP
Sure. I was referring more to the general consensus about products from companies that are currently riding the AI hype train, not about machine learning in general.

When the dot-com bubble burst in 2000, and after the video game crash in 1983, most of the companies within the bubble folded, and those that didn't took a large hit and barely managed to survive. If the technology has genuine use cases then the market can recover, but it takes a while to earn back consumers' trust, and the products after the crash are much more practical and marketed more fairly.

So I do think that machine learning has many potentially revolutionary applications, but we're currently still near the Peak of Inflated Expectations. After the bubble pops, the Plateau of Productivity will showcase the applications with actual value and benefit to humanity. I just hope we get there sooner rather than later.

morngn
The bubble won't pop on anything that's correlated with scammers. Exhibit A: Bitcoin. The problem is not one of public knowledge or the will of the people; it's Congress being irresponsible because it's captured by the two parties. You can't politicize scamming in a way that benefits either party, so nothing happens. And the scammers themselves may be big donors (e.g. SBF's ties to the Dem party, certain AI players' purchase of Trump's favor with respect to their business interests, etc.). Scammers all the way down.
imiric OP
Good point. I suppose that if grifters can get in positions of power, then the bubble can just keep growing.

Though cryptocurrencies are slightly different because of how they work. They're inherently decentralized, so even though there have been many smaller bubble pops along the way (Mt. Gox, FTX, NFTs, every shitcoin rug pull, etc.), inevitably more will appear with different promises, attracting others interested in potential riches.

I don't think the technology as a whole will ever burst, particularly because I do think there are valid and useful applications of it. Bitcoin in particular is here to stay. It will just keep attracting grifters and victims, just like any other mainstream technology.

morngn
“Bitcoin in particular is here to stay.”

It's here to stay not because it solves a legitimate problem or makes people's lives better, but because, like cancer, there is no cure. Bitcoin and other crypto are mostly for crime. It's not usable as actual money given its volatility and other properties.

Grandmothers having their life savings stolen by scammers to the tune of tens of billions annually: that is the primary use case for Bitcoin. That, and churning out a handful of SBF-style gamer-turned-politically-connected billionaires. Nakamoto was smart enough to remain anonymous, lest history remember his name as the person responsible.
