WithinReason
Karma: 10,254
WithinReasonHNs at duck dot com
- I'm sure 2 LLMs wouldn't hallucinate the same thing, especially when using RAG, so I'm confident in the accuracy of the information.
- Good to know!
- I verified it with Grok, it says the same thing
- According to the comments this cable supports VRR:
https://www.amazon.co.uk/dp/B094XR43M5/
Official site: https://www.cablematters.com/pc-1398-154-cable-matters-8k-di...
- Hisense U8QG (over USB-C), no VRR though
- That's an overly strong claim; an LLM could also be used to normalise style
- Not sure what the fuss in this thread is about, this is a completely believable claim. In Table 5 he gets 83.26% with labels only (which I assume means not using the teacher) and 91.40% with the teacher. This is a nice result, though not hugely groundbreaking I'd say. Maybe training longer or using some clever normalisation would even close the gap. It's not something you can call 224x compression though, so I would remove that claim everywhere.
This is basically a variation of distillation through the entire network, not just the last layer as is typical.
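For anyone unfamiliar with the distinction, here's a minimal sketch of whole-network (feature) distillation next to classic last-layer logit distillation, assuming a PyTorch-style teacher/student pair; the architectures, projection layers, and loss weights are illustrative, not taken from the paper.

```python
# Sketch: last-layer (logit) distillation vs. matching intermediate features
# "through the entire network". Everything here is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """Tiny MLP that also returns its intermediate activations."""
    def __init__(self, widths):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(a, b) for a, b in zip(widths[:-1], widths[1:])]
        )

    def forward(self, x):
        feats = []
        for layer in self.layers[:-1]:
            x = F.relu(layer(x))
            feats.append(x)
        logits = self.layers[-1](x)
        return logits, feats

teacher = Net([784, 512, 512, 10]).eval()   # pretrained in practice
student = Net([784, 128, 128, 10])
# Project student features up to teacher width so they can be compared.
proj = nn.ModuleList([nn.Linear(128, 512) for _ in range(2)])

def distill_loss(x, y, T=4.0, alpha=0.5, beta=0.1):
    with torch.no_grad():
        t_logits, t_feats = teacher(x)
    s_logits, s_feats = student(x)

    # (1) supervised loss on labels only
    loss = F.cross_entropy(s_logits, y)

    # (2) classic last-layer distillation: match softened teacher logits
    loss = loss + alpha * T * T * F.kl_div(
        F.log_softmax(s_logits / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    )

    # (3) "through the entire network": also match intermediate features
    for p, s_f, t_f in zip(proj, s_feats, t_feats):
        loss = loss + beta * F.mse_loss(p(s_f), t_f)
    return loss

# usage: distill_loss(torch.randn(32, 784), torch.randint(0, 10, (32,)))
```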
- "Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns"
I think the AI is making fun of us
- That's what the upvote/downvote system is for.
- The archaeon Methanopyrus kandleri can grow at 122°C, while among bacteria, Geothermobacterium ferrireducens can grow at temperatures up to 100°C.
- Look up what kind of tracking UK ISPs are mandated to do by law and how easy it is to request that information. Your VPN can't possibly be worse than that.
- MLID says 2028+: https://www.youtube.com/shorts/Srvv_Zd_k4c
- OLED has the same HW as the LCD, with only very minor differences
- - power consumption
  - display quality
  - sharp edges
- George Hotz is unpacking his new Framework right now and he's not happy: https://www.twitch.tv/georgehotz
- Lots of people on this board are philosophically opposed to them, so it was a reasonable question, especially in light of your description of them.
- > this is pretty much what LLMs are doing
I think this is the part where we disagree. Have you used LLMs, or is this based on something you read?
- I understand that after reading the paper, but it's not in the title and that's what people read first. Omitting it from the title might have given you a much more favorable reception.
It's not easy to get noticed when you're not from a big lab, so don't get discouraged. It's nice work.