throwaway0123_5 · 251 karma

  1. Given what I understand about the nature of competitive programming competitions, using an LLM seems kind of like using a calculator in an arithmetic competition (if such a thing existed) or a dictionary in a spelling bee.
  2. This is also why I'm skeptical of claims that it would be impossible (or nearly so) for governments to meaningfully regulate AI R&D/deployment (regardless of whether or not they should). The "you can't regulate math" arguments. Yeah, you can't regulate math, but using the math depends on some of the most complex technologies humanity has produced, with key components handled by only one or a few companies in only a handful of countries (US, China, Taiwan, South Korea, Netherlands, maybe Japan?). US-China cooperation could probably achieve any level of regulation they want up to and including "shut it all down now." Likely? Of course not. But also not impossible if the US and China both felt sufficiently threatened by AI.

    The only thing that IMO would be really hard to regulate would be the distribution of open-weight models existing at the time regulations come into effect, although I imagine even that would be substantially curtailed by severe enough penalties for doing so.

  3. I think you're overestimating how much real damage someone can cause with Burp Suite and "a few youtube videos." I'd imagine that if you picked a random person off the street, subjected them to a full month's worth of cybersecurity YouTube videos, and handed them an arsenal of traditional security tools, they would still be borderline useless as a black-hat hacker against all but the absolute weakest targets. But if instead you gave them an AI that is functionally a professional security researcher in its own right (not saying we're there yet, but hypothetically), the story is clearly very different.

    > Yeah, I'll concede, some physical tools like TNT or whatever should probably not be available to Joe Public. But digital tools?

    Digital tools can affect the physical world though, or at least seriously affect the people who live in the physical world (stealing money, blackmailing with hacked photos, etc.).

    To see if there's some common ground to start a debate from, do you agree that at least in principle there are some kinds of intelligence that are too dangerous to allow public access to? My extreme example would be an AI that could guide an average IQ novice in producing biological weapons.

  4. If they're truly Chinese state-sponsored actors, does it really matter if their actions/methods are exposed? What is Anthropic going to do, send the Anthropic Police Force to China to arrest them?

    I suppose I could see this argument if their methods were very unique and otherwise hard to replicate, but it sounds like they had Claude do the attack mostly autonomously.

  5. This definition makes sense, but in the context of LLMs it still feels misapplied. What the model providers call "guardrails" are supposed to prevent malicious uses of the LLMs, and anyone trying to maliciously use the LLM is "explicitly trying to get off the road."
  6. > I'd really hate to see the world go down the path of gatekeeping tools behind something like ID or career verification.

    This is already done for medicine, law enforcement, aviation, nuclear energy, mining, and I think some biological/chemical research stuff too.

    > It's a tradeoff we need to be willing to make.

    Why? I don't want random people being able to buy TNT or whatever they need to be able to make dangerous viruses*, nerve agents, whatever. If everyone in the world has access to a "tool" that requires little/no expertise to conduct cyberattacks (if we go by Anthropic's word, Claude is close to or at that point), that would be pretty crazy.

    * On a side note, AI potentially enabling novices to make bioweapons is far scarier than it enabling novices to conduct cyberattacks.

  7. Are you saying this because you think that people should still try to learn things for personal interest in a world where AI makes learning things to make money pointless (I agree completely, though what I spend time learning would change), or do you disagree with their assessment of where AI capabilities are heading?
  8. > So like these are less serious issues if you are paid an extra $1-200k/ year

    Ok but to be fair most people in the US aren't making "extra $1-200k / year" over a person in Europe. They aren't even making $100k / year to begin with.

  9. Making all of those things cheaper is great, as long as the automation isn't also making everyone poorer at an equal or faster rate. It doesn't really help if house prices and food prices are cut in half if most people lose their employment because of automation.

    I think the concern is that true human+ AGI and advanced robotics would obsolete so many roles that it doesn't matter if things can be made more efficiently, because nobody will have any money at all. If/when AI can do my job better than me, it isn't giving me leverage, it is removing all leverage I have as someone who puts food on the table through labor.

    In the interim before that happens, sure, the automation is great for some people who can best leverage it.

  10. I don't disagree.

    Social/economic stratification (to a certain degree) makes sense as long as there is a reasonable amount of social mobility. AGI paired with advanced robotics seems as though it would all but eliminate social mobility. What would your options be? Politics, celebrity, or a small number of jobs where the human element is essential? I think the economic system needs to dramatically change if/when we reach that point (and ideally before, so people don't suffer in the transition).

  11. I think the problem is if/when AGI enables "someone else" to not need human employees for ~anything. The people that own physical capital (land, farms, mines, etc.) would have robots and GPT-N to extract value from it. The people who survive based on their labor are SOL. I think it is reasonable that many people won't be excited about that kind of automation.
  12. Is the suggestion that AGI (or even current AI) lowers the barrier of entry to making a company so much that regular people can just create a company in order to make money (and then buy food/shelter)? If so, I think there are a lot of problems with that:

    1) It doesn't solve the problem of obtaining physical capital. So you're basically limited to just software companies.

    2) If the barrier to entry to creating a software product that can form the basis of a company is so low that a single person can do it, why would other companies (the ones with money and physical capital) buy your product instead of just telling their GPT-N to create them the same software?

    3) Every other newly-unemployed person is going to have the same idea. Having everyone be a solo-founder of a software company really doesn't seem viable, even if we grant that it could be viable for a single person in a world where everyone has a GPT-N that can easily replicate the company's software.

    On a side note, one niche where I think a relatively small number of AI-enabled solo founders will do exceptionally well is in video games. How well a video game will do depends a lot on how fun it is to humans and the taste of the designer. I'm skeptical that AIs will have good taste in video game design, and even if they do, I think it would be tough for them to evaluate how fun mechanics would be for a person.

  13. Let's say I have a robot or two with a genius-level intellect. In theory it could manufacture a car for me or cook my dinner or harvest the crops needed for my dinner. But I don't own the mine where the metals needed to make a car come from. I don't own a farm where the food for my dinner comes from. Unless the distribution of resources changes significantly, it doesn't really help me that I have a genius robot. It needs actual physical resources to create wealth.

    Right now the people that own those resources also depend on human labor to create wealth for them. You can't go from owning a mine and a farm to having a mega-yacht without people. You have to give at least some wealth to them to get your wealth. But if suddenly you can go from zero to yacht without people, because you're rich enough to have early access to lots of robots and advanced AI, and you still own the mine/farm, you don't need to pay people anymore.

    Now you don't need to share resources at all. Human labor no longer has any leverage. To the extent most people get to benefit from the "magic machine," it seems to me like it depends almost entirely on the benevolence of the already wealthy. And it isn't zero cost for them to provide resources to everyone else either. Mining materials to give everyone a robot and a car means fewer yachts/spaceships/mansions/moon-bases for them.

    TL;DR: I don't think we get wealth automatically because of advanced AI/robotics. Social/economic systems also need to change.

  14. I don't see how owning a robot helps me with obtaining the essentials of life in this scenario. There's no reason for a corporation to hire my robot if it has its own robots and can make/power them more efficiently with economy of scale. I can't rent it out to other people if they also have their own robots. If I already own a farm/house (and solar panels to recharge the robots) I guess it can keep me alive. But for most people a robot isn't going to be able to manufacture food and shelter for them out of nothing.
  15. In terms of quality of life, much/most of the value of intelligence is in how it lets you affect the physical world. For most knowledge workers, that takes the form of using intelligence to increase how productively some physical asset can be exploited. The owner of the asset gives some of the money/surplus earned to the knowledge worker, and they can use the money to effect change in the physical world by paying for food, labor, shelter, etc.

    If the physical asset owner can replace me with a brain in a jar, it doesn't really help me that I have my own brain in a jar. It can't think food/shelter into existence for me.

    If AI gets to the point where human knowledge is obsolete, and if politics don't shift to protect the former workers, I don't think widespread availability of AI is saving those who don't have control over substantial physical assets.

  16. > plus there is all the energy needed for me to live in a modern civilization and make the source material available to me for learning (schools, libraries, internet)

    To be fair, this is true of LLMs too, and arguably more true for them than it is for humans. LLMs would've been pretty much impossible to achieve w/o massive amounts of digitized human-written text (though now ofc they could be bootstrapped with synthetic data).

    > but a modern human uses between 20x and 200x as much energy in supporting infrastructure than the food calories they consume, so we're at about 1 to 10 GWh, which according to GPT5 is in the ballpark for what it took to train GPT3 or GPT4

    But if we're including all the energy for supporting infrastructure for humans, shouldn't we also include it for GPT? Mining metals, constructing the chips, etc.? Also, the "modern" is carrying a lot of the weight here. Pre-modern humans were still pretty smart and presumably nearly as efficient in their learning, despite using much less energy.
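    For what it's worth, the quoted 1-10 GWh figure is roughly reproducible, though only under some illustrative assumptions not stated in the quote (≈2000 kcal/day of food, ~30 years of learning, and the quoted 20x-200x infrastructure multiplier); a quick sanity-check sketch:

    ```python
    # Rough sanity check of the quoted human-energy estimate. All inputs are
    # illustrative assumptions: ~2000 kcal/day of food, ~30 years of learning,
    # and the quoted 20x-200x infrastructure multiplier.
    KCAL_TO_KWH = 4184 / 3.6e6  # 1 kcal expressed in kWh

    food_kwh_per_year = 2000 * KCAL_TO_KWH * 365   # ~850 kWh/year of food energy
    food_kwh_total = food_kwh_per_year * 30        # ~25 MWh over 30 years

    low_gwh = food_kwh_total * 20 / 1e6    # 20x multiplier -> ~0.5 GWh
    high_gwh = food_kwh_total * 200 / 1e6  # 200x multiplier -> ~5 GWh
    print(f"{low_gwh:.2f} GWh to {high_gwh:.2f} GWh")
    ```

    So the quote's range holds up as an order-of-magnitude claim, but as noted above, it only tells you something if the GPT-side figure is built with the same "include all supporting infrastructure" accounting.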

  17. Anti-AI shill? A cofounder of OpenAI?
  18. The only place I commonly see escalators besides these places is airports; I don't think they've expanded that much?
  19. I'm a heavy Notion user and haven't once used the AI features. I use AI on a near-daily basis outside Notion, but it just isn't something I need from Notion. On the other hand, at least it isn't that intrusive in Notion, unlike in some other apps.
  20. Not quite enough random capital letters for me.
