- You made the mistake of looking at the code, though. If you didn't look at the code, you wouldn't have known those bugs existed.
- > shall, in consultation with the Special Advisor for AI and Crypto
It's funny to me that they categorise AI and crypto together like this, two technologies that have nothing to do with each other (other than both being favoured by grifters).
- But nevertheless, a human did still do it. How gruelling and exploitative the process was, or how many humans it took, is beside the point. The fact that the image existed meant that there was an attainable skill that a person could learn in order to do that same thing, and that was inspiring.
Since the days of cave paintings, that experience has been available to all humans. In the year 2025, it died, and I will never experience it again.
- That's weird, Gemini told me not to do this.
- This reminds me of how I thought I didn't like the taste of butter growing up, because my family called margarine "butter" and never bought the real stuff.
- For anyone confused by this, what you're probably forgetting is that children make no distinction between slop and high quality content. You know all those bad 3D knock-off YouTube videos that everyone was in a moral panic about a few years ago? Disney wasn't upset that those were damaging their brand. They were upset that they weren't making any money from them. But they just found a way to undercut all the sweatshops in Bangladesh pumping that stuff out: recruit children to make videos for children.
- They didn't say what model they used. The difference between GPT 3.5 and GPT 4 is night and day. This is exactly what I'd expect from 3.5, but 4 wouldn't make this mistake.
Note: I haven't updated this comment template recently, so the versions may be a bit outdated.
- HN has historically been very pro-Musk. The negativity is recent.
- I have a similar faith in Musk to yours. I was arguing with one of his detractors recently, who said something to the effect of "Musk said we would have humans on Mars by 2025. He's a grifter. He'll say anything to drum up investment." They had a table of people laughing along with them, until I asked how much money they had in the bank, and whether it equaled even one one-thousandth of Musk's net worth. That shut them up pretty fast.
- What you're probably failing to grasp is that all technology is good, and AI is technology, therefore AI is good. Notable examples are the printing press and the automobile. Would you prefer a world without those things? How ridiculous!
Please ignore "technology" such as leaded gasoline and CFCs. No one could have known those were harmful, anyway.
- Can I ask what you do? I suspect there is a type of job that AI excels at, and it makes everyone in that job unreasonably bullish on AI.
- > then the game is who has the smartest agi, who can offer it cheapest, who can specialise it for my niche etc.
I always thought the use case for developing AGI was "if it wants to help us, it will invent solutions to all of our problems". But it sounds like you're imagining a future in which companies like Google and OpenAI each have their own AGI, which they somehow enslave and offer to us as a subscription? Or has the definition of AGI shifted?
- > Maybe you don't love your mom enough to do this
I actually love my mom enough not to do this.
- Is it really here to stay? If the wheels fell off the investment train and ChatGPT etc. disappeared tomorrow, how many people would be running inference locally? I suspect most people either wouldn't meet the hardware requirements or would be too frustrated with the slow token generation to bother. My mom certainly wouldn't be talking to it anymore.
Remember that a year or two ago, people were saying something similar about NFTs: that they were the future of sharing content online and we should all get used to it. Now, they still might exist, it's true, but they're much less pervasive and annoying than they once were.
- They should include users who used a double hyphen, too -- not everyone has easy access to em dashes.
- > LLM's are wrong way more often but are also more versatile than a calculator.
LLMs are wrong infinitely more than calculators, because calculators are never wrong (unless they're broken).
If you input "1 + 3" into your calculator and get "4", but you actually wanted to know the answer to "1 + 2", the calculator wasn't "wrong". It gave you the answer to the question you asked.
Now you might say "but that's what's happening with LLMs too! It gave you the wrong answer because you didn't ask the question right!" But an LLM isn't an all-seeing oracle. It can only interpolate between points in its training data. And if the correct answer isn't in its training data, then no amount of "using it with care" will produce the correct answer.
- Even worse is if it's in the other room and your fingers can't reach the keys. It delivers no answers at all!
- I do wonder if the calculator would have been as successful if it regularly delivered wrong answers.
- Windows doesn't even feel like a native Windows app anymore.
- > I'm spending an unreasonable amount of time debunking false claims and crap research from colleagues who aren't experts in my field
Same. It's become quite common now to have someone post "I asked ChatGPT and it said this" along with a complete nonsense solution. Like, not even something that's partially correct. Half of the time it's just a flat-out lie.
Some of them will even try to implement their nonsense solution, and then I get a ticket to fix the problem they created.
I'm sure that person then goes on to tell their friends how ChatGPT gives them superpowers and has made them an expert overnight.