- And NYC. Just saw one this morning.
- > This feels forced, there are obvious and good reasons for running that experiment. Namely, learning how it fails, and generating some potentially viral content for investor relations. The second one seems like an extremely good business move. It's also a great business move for the WSJ: get access to some of that investor money via an obviously sponsored content bit that could go viral.
That's... exactly what the author said in the post, but with the argument that those are cynical and terrible reasons. I think it's pretty clear the "you" in "why would you want an AI vending machine" is supposed to be an actual user of a vending machine.
- I think we're using very different senses of "deterministic," and I'm not sure the one you're using is relevant to the discussion.
Those proprietary blobs are either correct or not. If there are bugs, they fail in the same way for the same input every time. There's still no sense in which ongoing human verification of routine usage is a requirement for operating the thing.
- Those are completely deterministic systems, of bounded scope. They can be ~completely solved, in the sense that all possible inputs fall within the understood and always correctly handled bounds of the system's specifications.
There's no need for ongoing, consistent human verification at runtime. Any problems with the implementation can wait for a skilled human to do whatever research is necessary to develop the specific system understanding needed to fix it. This is really not a valid comparison.
- Engineers at Google are much less likely to be doing green-field generation of large amounts of code. It's much more incremental, carefully measured changes to mature, complex software stacks, done within the Google ecosystem, which diverges heavily from the OSS-focused world of startups, where most training data comes from.
- I think the OP meant something far simpler (and perhaps less interesting), which is that you simply cannot encounter key errors due to missing fields, since all fields are always initialized with a default value when deserializing. That's distinct from what a "required" field is in protobuf.
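A plain-Python analogy for that proto3 deserialization behavior (the message and field names here are illustrative, not from any real schema): every field gets its type's zero value when absent from the wire, so field access can never raise a KeyError the way a raw dict lookup would.

```python
# Sketch of proto3-style "every field has a default" semantics using a
# dataclass. Names (UserProto, deserialize) are hypothetical.
from dataclasses import dataclass

@dataclass
class UserProto:
    # proto3-style defaults: fields missing from the wire come back as
    # the type's zero value, never as a missing key.
    name: str = ""
    age: int = 0

def deserialize(payload: dict) -> UserProto:
    """Ignore unknown keys; missing fields fall back to their defaults."""
    known = {k: v for k, v in payload.items()
             if k in UserProto.__dataclass_fields__}
    return UserProto(**known)

msg = deserialize({"name": "alice"})  # "age" absent on the wire
print(msg.age)                        # 0, not a KeyError
```

Whether a field was genuinely set to its default or simply absent is indistinguishable here, which is exactly why this is a different question from protobuf's old `required` marker.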
- The company's main site is truly bonkers. 27 repetitions of the term "Tier 1," half of them applied to nonsensical things. The CEO's bio lists his League of Legends ranking, twice. 14 available products listed for a supposedly 4-month-old company. A 24-point feature comparison against ChatGPT, almost none of them even remotely related to anything ChatGPT is targeting.
Honestly this seems like the product of a guy on a fast track to a major nervous breakdown.
- That probably describes some corners of Tesla's market, but 99% of people buying Teslas and FSD are doing it because it is (was?) a cool car with a potentially cool feature. You're letting the wildly unrepresentative sample of "loud people on the Internet" distort your perception of the world at large.
- A lot of arguing "against LLMs" is not arguing "shovels aren't useful," it's arguing "maybe shovels aren't actually going to replace all human labor, and sinking so much capital into it we're starting to conceptualize it in terms of 'percent of global GDP' might not be such a great idea."
- While it's definitely more complicated than necessary, and silly that the game doesn't explain it at all, it also... doesn't really seem that complicated? Certainly not enough to live up to the amount of text spent building it up, or the amount of text explaining it, for that matter. You've got some offense and defense stats, each card draws a random number from 0 to its applicable stat (determined by type), highest number wins.
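The rule as described really is that small. A minimal sketch of how I read it (function and stat names are mine, not the game's, and the tie handling is my assumption since the description only says "highest number wins"):

```python
# Sketch of the card-combat resolution rule: each card rolls a uniform
# integer from 0 to its applicable stat, and the higher roll wins.
import random

def resolve(attacker_stat: int, defender_stat: int, rng=random) -> str:
    atk = rng.randint(0, attacker_stat)   # 0..offense stat, inclusive
    dfn = rng.randint(0, defender_stat)   # 0..defense stat, inclusive
    if atk > dfn:
        return "attacker"
    if dfn > atk:
        return "defender"
    return "tie"                          # equal rolls: assumed a draw
```

Which stat applies (offense vs. defense) would be picked by card type before calling this; that lookup is the only other moving part.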
- Here's someone publishing almost all of their raw chat logs to Substack, if you care to read: