
WithinReason
10,254 karma
WithinReasonHNs at duck dot com

  1. > On the 224× compression language, the claim is specifically about task-specific inference paths, NOT about compressing the entire model or eliminating the teacher.

    I understand that after reading the paper, but that qualification isn't in the title, and the title is what people read first. Leaving the 224× claim out of the title might have given you a much more favorable reception.

    It's not easy to get noticed when you're not from a big lab, so don't get discouraged. It's nice work.

  2. I'm sure two LLMs wouldn't hallucinate the same thing, especially when using RAG, so I'm confident in the accuracy of the information.
  3. Good to know!
  5. I verified it with Grok; it says the same thing.
  5. Hisense U8QG (over USB-C), no VRR though
  6. That's an overly strong claim; an LLM could also be used to normalise style.
  7. Not sure what the fuss in this thread is about; this is a completely believable claim. In Table 5 he gets 83.26% with labels only (which I assume means not using the teacher) and 91.40% with the teacher. This is a nice result, though not hugely groundbreaking I'd say. Maybe training longer or some clever normalisation would even close the gap. It's not something you can call 224× compression, though, so I would remove that claim everywhere.

    This is basically a variation of distillation applied through the entire network, not just the last layer as is typical (see the sketch after this list).

  8. "Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns"

    I think the AI is making fun of us

  9. That's what the upvote/downvote system is for.
  10. The archaeon Methanopyrus kandleri can grow at 122°C, while among bacteria, Geothermobacterium ferrireducens can grow at temperatures up to 100°C.
  11. Look up what kind of tracking UK ISPs are mandated to do by law and how easy it is to request that information. Your VPN can't possibly be worse than that.
  12. The OLED has the same hardware as the LCD, with only very minor differences.
  13. - power consumption

    - display quality

    - sharp edges

  14. George Hotz is unpacking his new Framework right now and he's not happy: https://www.twitch.tv/georgehotz
  15. Lots of people on this board are philosophically opposed to them, so it was a reasonable question, especially in light of your description of them.
  16. > this is pretty much what LLMs are doing

    I think this is the part where we disagree. Have you used LLMs, or is this based on something you read?
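
A minimal sketch of the distinction raised in comment 7, between ordinary last-layer (logit) distillation and distillation applied through the entire network. This is not from the paper; the function names, the loss weights, and the assumption that student and teacher features already share shapes are all illustrative.

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Classic knowledge distillation: match the teacher only at the last layer."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

def layerwise_distillation_loss(student_feats, teacher_feats,
                                student_logits, teacher_logits, labels, beta=0.1):
    """Distillation through the whole network: additionally match intermediate
    hidden states layer by layer (assumes shapes already line up, or that a
    projection has been applied beforehand)."""
    loss = logit_distillation_loss(student_logits, teacher_logits, labels)
    for s, t in zip(student_feats, teacher_feats):
        loss = loss + beta * F.mse_loss(s, t.detach())
    return loss

if __name__ == "__main__":
    # Toy shapes, purely illustrative.
    labels = torch.randint(0, 10, (8,))
    s_logits, t_logits = torch.randn(8, 10), torch.randn(8, 10)
    s_feats = [torch.randn(8, 64) for _ in range(3)]
    t_feats = [torch.randn(8, 64) for _ in range(3)]
    print(layerwise_distillation_loss(s_feats, t_feats, s_logits, t_logits, labels))
```

On that reading, the "labels only" row in Table 5 would correspond to training with just the hard cross-entropy term, and the teacher row to adding the distillation terms.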

This user hasn’t submitted anything.
