- Frontier models are good at offensive capabilities.
Scary good.
But the good ones are not open. It's not even a matter of money. I know OpenAI's are invite-only, for instance. Pretty sure there's vetting and tracking going on behind those invites.
- Hate to be “that guy”, but this isn’t regulation or government. Rather it’s the free market with actors willing to do anything to make gentrification profits. Which are, unfortunately for our society, quite sizable.
You can have flophouses. Officially or under the table. But you can’t have them in areas with the vultures circling so to speak.
Not saying gentrification is good or bad. Just saying, if gentrification profits are there to be had, it’s a bit foolish to expect people to not do whatever it takes to secure those profits.
- Which is not to say there are no bitter grad students or colleagues at MIT.
Which was gopher_space’s material point.
- Low percentage of grad students
- And to be frank, both are pretty far-fetched. His thing was plasma and fusion.
If people want a conspiracy theory, tell them to go with alien civilizations wanting to prevent humans from achieving fusion.
- Don't know about either of those. But militarily, it's suicide to set up all your generation in one area.
Always overwhelm the enemy when possible. Even when he's planning.
- Well, yeah..
Devil's always in the details. But in this instance, even a partial win is still a win. Something is better than nothing.
- This.
Don't believe for a second that Sora will allow you to make racist content with Disney characters.
That said, there are a lot of other models out there that care about neither licensing nor alignment. So those will allow you to make racist content. Then you can do whatever you like with that generated content.
A lot of IP owners will learn that there is more than one way to skin a cat. It's easier than people think to turn a children's character, like, say, Hermione, into a raging racist. And there's very little, technically speaking, that they can do to stop it.
But yes, on OpenAI specific properties, they can definitely stop it dead in its tracks. They can even get better at stopping it over time. In fact, the more users try to generate it, the better the system will get at stopping it.
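To make that last point concrete, here's a minimal, purely hypothetical sketch of that feedback loop (this is not OpenAI's actual pipeline; the prompts, labels, and scikit-learn classifier are all illustrative assumptions): every attempt the filter catches becomes a new labeled example, so the filter tightens exactly where people keep probing it.

```python
# Hypothetical sketch, NOT OpenAI's actual system: a toy moderation filter that
# retrains on every attempt it blocks, so probing it makes it stricter.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Seed data: prompts labeled allowed (0) or disallowed (1). Purely illustrative.
prompts = [
    "draw a cat in a park",
    "write a poem about rain",
    "make <character> say slurs",
    "generate a hateful caricature of <group>",
]
labels = [0, 0, 1, 1]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(prompts, labels)

def moderate(prompt: str, threshold: float = 0.5) -> bool:
    """Return True if the request is blocked."""
    blocked = clf.predict_proba([prompt])[0][1] >= threshold
    if blocked:
        # Each caught attempt is added as a new positive example, so the
        # next fit is stricter around it (in practice this retraining
        # would be periodic and human-reviewed, not per-request).
        prompts.append(prompt)
        labels.append(1)
        clf.fit(prompts, labels)
    return blocked

print(moderate("make <character> shout slurs at <group>"))  # with this toy data, likely True
```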
- >I don’t want to live in that world; where curiosity and thought are cast aside in favor of faster results.
To be fair, we, all of us, have been living in that world for quite some time now. Not really sure how we'd ever slow our advance down that road.
- Let's be honest, it's not even that.
It's more like "relative to historic trends, many people want houses [in desirable areas] who can't afford the current prices."
Building tons of new houses outside of the hot areas that all these people want to live in would still elicit cries of a shortage and an affordability crisis. Because there are currently affordable places outside of hot areas, but not very many takers.
It's a really tough nut to crack. Because how do you reorient the demand to those areas that have the supply? It's not easy. We can't seem to do it currently, and there's no real plan to do it even if we could somehow build even more housing. We'd have to build lots of housing only in hot areas. Which sounds easy enough until you realize the economics don't make sense, and even on the off chance that you could, it would only generate more demand.
First order of business however should be to find a clever way to stop abuses like the ones outlined in the article. The housing that would free up in the hot areas would not be near enough to meet the demand, but if we stop that nonsense at least we're not "digging the hole deeper" so to speak.
- Because AI, or rather, an LLM, is the consensus of many human experts as encoded in its embedding. So it is better, but only for those who are already expert in what they're asking.
The problem is, you have to know enough about the subject on which you're asking a question to land in the right place in the embedding. If you don't, you'll just get bunk. (I know it's popular to call AI bunk "hallucinations" these days, but really, if it were being spouted by a half-wit human we'd just call it "bunk".)
So you really have to be an expert in order to maximize your use of an LLM. And even then, you'll only be able to maximize your use of that LLM in the field in which your expertise lies.
A programmer, for instance, will likely never be able to ask a coherent enough question about economics or oncology for an LLM to give a reliable answer. Similarly, an oncologist will never be able to give a coherent enough software specification for an LLM to write an application for him or her.
That's the Achilles' heel of AI today as implemented by LLMs.
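As a rough analogy for what "landing in the right place" means (my own illustration, using a standalone embedding model rather than the LLM's internal representation, and assuming the `openai` Python SDK with an API key configured): the expert's phrasing of a question sits measurably closer to the relevant reference material than the layperson's does.

```python
# Analogy only: a separate embedding model, not the LLM's internal state.
# Assumes `pip install openai numpy` and OPENAI_API_KEY in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    """Embed a list of strings; returns one vector per string."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [np.array(d.embedding) for d in resp.data]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative reference passages a question might "land" near.
references = {
    "expert oncology": "Adjuvant FOLFOX or CAPOX chemotherapy for resected stage III colon cancer.",
    "generic health": "Eating well and exercising regularly helps you stay healthy.",
}
questions = {
    "layperson": "Is chemo bad for you?",
    "expert": "What is the evidence for FOLFOX versus CAPOX as adjuvant therapy in stage III colon cancer?",
}

ref_vecs = dict(zip(references, embed(list(references.values()))))
for name, q_vec in zip(questions, embed(list(questions.values()))):
    scores = {r: round(cosine(q_vec, v), 3) for r, v in ref_vecs.items()}
    # The expert phrasing should score much closer to the oncology passage.
    print(name, scores)
```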
- >Maybe the media benefits from it?
Nah.
We just love our dog whistles.
Abortion!!
Immigrants!!
The billionaires are stealing from you!!
Libtards!!
Right wing shooters!!
Oh yeah, almost forgot..
Something, something blacks!!!
That's how you win elections these days.
You can try to win an election on something substantive, say, infrastructure repair and updates. But the wedge-issue-fueled campaigns are gonna wipe the floor with you.
We the people have voted on this over and over. The data is irrefutable. We love our wedge issues.
- >It’s already in use, or models like it, by companies selling access to The State
Doesn't that pretty much cover Palantir as well?
- This.
Anthropic is not going to interrupt their competitors if their competitors don't want to use Trainium. Neither would you, I, nor anyone else. The only potential is downside; there's no upside for them at all in doing so.
From Anthropic's perspective, if the rest of us can't figure out how to make Trainium work? Good.
Amazon will fix the difficulty problem with time, but that's time Anthropic can use to press their advantages and entrench themselves in the market.
- Mmmm..
I don’t know, man.
I actually don’t mind 14+14 for corps. Because corps could conceivably never “die”. (In fact, I wouldn’t even be too opposed to getting rid of the +14 part).
But for individual people who make things, I think if they’re alive, it should be theirs. And I’m a guy who’s not a creative.
I just think, if you come up with a painting, or a story, or a video game, why should a big corporation be able to swoop in and just copy it while you’re alive without paying you?
The copyright should lapse after a reasonable amount of time following your death. But while you’re alive, what you made should be yours.
- >Even in mainland China [..] Apple does not pre-install any apps from anyone.
That's because China has no regulation obliging them to do so.
China takes the other, more comprehensive, route to privacy invasion. Sucking up every bit of data at the router.
- >I guess it's counting on deepseek being banned
And the people making the bets are in a position to make sure the banning happens. The US government system being what it is.
Not that our leaders need any incentive to ban Chinese tech in this space. Just pointing out that it's not necessarily a "bet".
"Bet" imply you don't know the outcome and you have no influence over the outcome. Even "investment" implies you don't know the outcome. I'm not sure that's the case with these people?
- The deaths of masons and builders. All the way back to Hammurabi.
BTW, Hammurabi was particularly dastardly in his building code specifications. You could, of course, be put to death if a building or wall collapsed and killed someone. But that was just table stakes. Even Ur-Nammu had that much figured out.
Hammurabi added on to the punishment by forcing you to rebuild the wall..
to the specifications of reputable builders..
at your own expense..
and then be put to death.
Don't even get me started on Asian "building codes" back in the day.
HN user Arainach is right, no one was guessing, or intuiting, while building in a lot of these empires. It was wayyy too risky. Pretty much everyone was following rules passed down by the builders for centuries. In some cases, millennia. Only an actual ruler would dare even consider deviating from the known good building forms.
- You can expect to be able to buy exactly that many Chinese GPUs or neural processors.