- Like anyone wants an OS that not only gatekeeps the software you run but surveils everything you do.
- Also, I am unimpressed.
- Gross.
- This stops short of smirking at the users who continue to pay because they are ignorant of just how shitty the software they are using is. Until everything blows up with the inevitable data breach or data loss incident, that is.
- This is definitely going to be one of those studies that fails to replicate.
- People who don't see the utility in an Alexa just see the listening device its owners have paid to place in their own home, and might be tempted to smugly imagine that they would never be so stupid. But consider: do you own an Android or iPhone? You know, the ones with geolocation services, a camera, and a microphone? Do you also keep it near you almost all the time? You can probably see where I am going with this.
- I hope we never have to find out how wrong you are.
- From the very first sentence, I can already tell we are doomed if the right breakthroughs are made. It's like people's brains short-circuit when they imagine all the power and wealth that they might command through AI, and they don't take the extra two seconds to consider that any superintelligent system that can deliver on these promises will, almost by definition, very quickly be outside of their control.
Even during the interval in which these systems remain under human control, we are talking about people like Altman, Musk, and Zuckerberg unilaterally wielding unprecedented economic power. What evidence is there, in their behavior or in human history generally, to believe that this would be anything but bad for the majority of the world's population?
Meanwhile, these companies have successfully nerd-sniped a small army of engineers who sit right at that sweet spot of technical excellence and naivete. I won't say this is difficult, since those traits seem highly correlated in that population as a whole, but these engineers have become the willing instruments of masters who will discard them at the first opportunity.
A global commitment to banning the development of AGI is the only sane response, and the number of people to whom that premise itself sounds insane tells you just how fucked we are if they pull this off even halfway.
- This seems to contradict the article, which among other criticisms, specifically says that these drones are more expensive and less reliable than mortars.
- Imagine what happens if we awaken an actual god (AGI or ASI, depending on your definition). I have no doubt that it would have any trouble enlisting willing human accomplices for whatever purposes it wishes; rather, I doubt it would have any trouble at all. I expect it would understand how to play the role of the unknowable, all-knowing entity that is here to save us from ourselves, no matter what its actual objectives might be (and I doubt they would be benevolent).
- If we achieve true AGI, we enter a state where there is more value in withholding the technology than in selling it, even to the highest bidder. It would conceivably enable a winner-takes-all scenario of the highest order.
- It really makes you wonder about the motivations of anyone who would drive society down this road just to see what's at the end of it, when it's already pretty clear that it's nothing good for the majority of us.
- They "tempt" us into cognitive offloading? That is just about the entire value proposition.
- Isn't it more likely that Meta has been infiltrated by Mossad, just as it no doubt has been by other intelligence services, and that these insiders are used to exfiltrate location data on specific targets?
- This is a condensed version of Altman's greatest hits when it comes to his pitch for the promise of AI as he (allegedly) conceives it, and in that sense it is nothing new. What is conspicuous is the not-so-subtle reframing. No longer is AGI just around the corner; instead, one gets the sense that OpenAI has already looked around that corner and seen nothing there. No, this is one of what I expect will be many more public statements intended to cool things down a bit, and to reset (investor) expectations that the timelines are going to be longer than previously implied.
- The value of LLMs is that they do things for you, so yes, the incentive is to have them take over more and more of the process. I can also see a future, not far over the horizon, where those who grew up with nothing but AI are much less discerning and capable, and the AI becomes more and more a crutch as human capability withers from extended disuse.
- The implication is that they are hoping to bridge the gap between current AI capabilities and something more like AGI in the time it takes the senior engineers to leave the industry. At least, that's the best I can come up with, because they are kicking out all of the bottom rungs of the ladder here in what otherwise seems like a very shortsighted move.
- I like Thomas, but I find his arguments include the same fundamental mistake I see made elsewhere. He acknowledges that the tools need an expert to be used properly, and, as he illustrates, he refined his expertise over many years. He is of the last generation of experienced programmers who learned without LLM assistance. How is someone just coming out of school going to get the encouragement and space to independently develop the experience they need to break out of the "vibe coding" phase? I can almost anticipate an interjection along the lines of "well, we used to build everything with our hands and now we have tools, it's just different," but this is an order of magnitude different. This is asking a robot to design and assemble a shed for you: you never even see the saw, nails, and hammer being used, let alone understand enough about how the different materials interact to get much more than a "vibe" for how much weight the roof might support.
- Screw the rich, they can afford it.