- I don't understand where $2000 comes from.
Relatively heavy Cursor usage in my experience is around 100 USD/month. You can set a spending limit on on-demand billing.
- Hard disagree.
Composer is extremely dumb compared to Sonnet, let alone Opus. I see no reason to use it. Yes, it's cheaper, but your time is not free.
- I just use instances, nothing proprietary from them
- Oracle Cloud has really good pricing and many locations. That's why I use it.
- I'd say for it to be called a new pretrained model, it'd need to be trained from scratch (like Llama 1, 2, and 3).
But it's just semantics.
- I think it's more likely to be the old base model checkpoint further trained on additional data.
- It's trivial for a human who knows what a PC looks like. Maybe they'd mistake DisplayPort for HDMI.
- I think the main reason is that when they architected it, RDRAM seemed like the better choice based on price and bandwidth at the time, and they underestimated the performance issues it would cause (RDRAM has amazing bandwidth but atrocious latency; the sketch after this comment illustrates the tradeoff).
By the time the N64 launched, SDRAM was better and cheaper, but they decided it was too late to make the switch. Allegedly SGI wanted to make changes but Nintendo refused.
Basically, they made the wrong bet and didn't want to change it that close to release.
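A rough sketch of that bandwidth-versus-latency tradeoff. All figures here are made up purely for illustration; they are not real N64 or RDRAM numbers:

```python
# Illustration only: how a fixed per-access latency erodes effective bandwidth
# for small transfers. The numbers below are hypothetical, not measured values.

def effective_bandwidth(peak_mb_s: float, latency_ns: float, burst_bytes: int) -> float:
    """Effective MB/s when every burst of `burst_bytes` pays a fixed setup latency."""
    transfer_ns = burst_bytes / (peak_mb_s * 1e6) * 1e9   # time to move the burst at peak rate
    return burst_bytes / ((latency_ns + transfer_ns) * 1e-9) / 1e6

# High peak bandwidth, high latency (RDRAM-like profile, invented numbers)
print(effective_bandwidth(peak_mb_s=500, latency_ns=600, burst_bytes=64))   # ~88 MB/s
# Lower peak bandwidth, much lower latency (SDRAM-like profile, invented numbers)
print(effective_bandwidth(peak_mb_s=300, latency_ns=150, burst_bytes=64))   # ~176 MB/s
```

With small bursts, the lower-latency part wins despite the lower peak rate.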
- The RAMBUS speed is the main issue: the RDP can literally be stalled over 70% of the time waiting for memory. It's an extremely flawed design.
They could have used SDRAM, it would have performed much better, and I believe the cost would have been about the same.
If you wanted to cut something, cut the antialiasing. While very cool, it's a bit wasted on CRTs. Worst of all, for some reason they added a blur filter that smears the picture horizontally. Luckily it can be deblurred by applying the inverse operation (a toy sketch follows below).
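As a toy sketch of what "applying the inverse operation" means, assume the blur is a plain average of each pixel with its right-hand neighbour. The real N64 VI filter is more involved; the only point here is that a known, fixed blend is exactly reversible:

```python
# Toy illustration of removing a horizontal blur by applying the inverse
# operation. Assumes a simple two-pixel average; the actual N64 filter differs.

def blur(row):
    # b[x] = (p[x] + p[x+1]) / 2 for all but the last pixel, which is passed through.
    return [(row[i] + row[i + 1]) / 2 for i in range(len(row) - 1)] + [row[-1]]

def deblur(blurred):
    # Invert from the right: p[n-1] = b[n-1], then p[x] = 2*b[x] - p[x+1].
    out = [0.0] * len(blurred)
    out[-1] = blurred[-1]
    for x in range(len(blurred) - 2, -1, -1):
        out[x] = 2 * blurred[x] - out[x + 1]
    return out

original = [10.0, 200.0, 30.0, 180.0, 50.0]
assert deblur(blur(original)) == original   # the horizontal smear is fully recoverable
```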
- There are some misconceptions here.
It's incorrect to think that because it is trained on buggy human code, it will make the same mistakes. It predicts the most likely token. Say 100 programmers write a function: most of them (unless it's something very tricky) won't forget to free that particular allocation, so the most likely tokens are the ones that don't leak (see the toy example below).
In addition, this is not GPT-3. There's a massive amount of reinforcement learning at play, which rewards good code, particularly verifiably good code (which includes code that doesn't leak). There's also a massive amount of synthetic data, which can be generated in a way that is provably correct.
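A toy illustration of the "most likely token" argument, with a made-up corpus: if most training examples free the buffer, a predictor that picks the most common continuation reproduces the non-leaking pattern:

```python
# Illustration only: the corpus and counts below are invented.
from collections import Counter

# Hypothetical continuations seen in training data after `buf = malloc(n); ...`
training_continuations = (
    ["use(buf); free(buf); return 0;"] * 92   # most programmers free the buffer
    + ["use(buf); return 0;"] * 8             # a few forget and leak
)

model = Counter(training_continuations)
most_likely = model.most_common(1)[0][0]
print(most_likely)   # -> "use(buf); free(buf); return 0;"
```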
- In my opinion, programming has never been this much fun. The vast majority of code is repetitive stuff that is now a breeze. I can build so much more, and with more beautiful code, because refactoring is effortless.
I think it's like going from pre-industrial-revolution manual labor to modern tools and machines.
- Unless we go extinct, I would assume eventually it will happen. Maybe in tens of thousands of years.
- The AI market is all the jobs that could be replaced by AI in the future. People paying 20 USD/month for ChatGPT are a drop in the bucket.
- AWS charges probably around 100 times what bandwidth actually costs. Maybe more.
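A back-of-the-envelope check, with the wholesale transit price as an explicit assumption (the result scales directly with whatever figure you plug in):

```python
# Rough sanity check of the markup claim. AWS standard-tier internet egress is
# roughly $0.09/GB; the wholesale transit price below is an assumption.
aws_per_gb = 0.09                      # USD per GB of egress (first pricing tier)
transit_per_mbps_month = 0.50          # assumed wholesale IP transit, USD per Mbps per month

seconds_per_month = 30 * 24 * 3600
gb_per_month = (1e6 / 8) * seconds_per_month / 1e9   # ~324 GB moved by a sustained 1 Mbps
wholesale_per_gb = transit_per_mbps_month / gb_per_month

print(round(aws_per_gb / wholesale_per_gb))          # ~58x with these assumptions
```

If large buyers pay well under the assumed $0.50/Mbps, the multiple lands around 100x or more, which is the order of magnitude claimed.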
- Considering GPT-5 was only recently released, it's very unlikely OpenAI will reach these scores in just a couple of months. If they had something this good in the oven, they'd probably have saved the GPT-5 name for it.
Or maybe Google just benchmaxxed and this doesn't translate into real-world performance at all.
- That's probably the worst benchmark you could choose.
- That was a great read
- If you actually try Llama, you'll see it's significantly worse than the top dogs.
Something like 90th percentile usage is what I'd call relatively heavy.