
elcomet
2,672 karma
elcomet at mailbox dot org

  1. The heat pump will pull heat from inside the house? This sounds terrible for efficiency in winter, as you will need to reheat the room
  2. Yeah, this sounds disingenuous. I can also set my heater to 70 °C if I want; that does not increase its size...
  3. I don't understand all the hype around generating SVGs with an LLM. The task is not really useful, doesn't seem that interesting in a single shot as it's really hard and no human could do it (it would be more useful if the model had visual feedback and could correct the result).

    Also, since it has become a popular task, companies will add examples to their training sets, so you're just benchmarking who has the better text-to-SVG training set, not the overall quality of the model.

  4. I'd say that thoughts and reasoning are two different things; you're moving the goalposts.

    But what makes computer hardware fundamentally incompatible with thinking, compared to a brain?

  5. How can you know?
  6. Can you explain more? Which things are impossible in Blender?
  7. Too many acronyms. What are FE and BFF?
  8. Arc browser unifies the tabs and bookmarks in a very clever way.
  9. I'm wondering if you can prompt it to work like this: make minimal changes, and run the tests at each step to make sure the code is still working.
  10. If it's Chromium-based, they will need to remove Manifest V2 at some point to stay close to the upstream version.
  11. You can offload tensors to CPU memory. It will make your model run much slower, but it will work (see the offload sketch after this list).
  12. I can't tell if this is a joke or if you're serious.
  13. Book clubs, art clubs, movie clubs... lots of options.
  14. What do you mean? Async/await uses threads
  15. Multi-step reasoning means that the LLM is given a question (maths here) and generates an answer that consists of many intermediate words before returning the solution. Here, we don't want to tell the LLM how to solve the problem word by word. We want to tell it only at the end, "correct" or "incorrect", and have the model learn on its own to generate the intermediate steps that reach the solution.

    That's typically a setup where RL is desirable (even necessary): we have sparse rewards (only at the end) and give the model no details on how to reach the solution. It's similar to training models to play chess against a specific opponent. (A small sketch of such a sparse reward follows this list.)

  16. He means 10 humans voting for the answer
  17. Those are tree-search techniques; they are not metrics to assess the "human" complexity of a line. They could be used for this purpose, but out of the box they just give you a winning probability.
  18. I'm wondering why there isn't a cheaper alternative to the Pi without the features that most find useless, or a more powerful one for the same price.
  19. It's possible; the question is how to choose which submodel will be used for a given query.

    You can use a dedicated LLM, or a larger general-purpose LLM, to do this routing.

    Also, some work suggests using smaller LLMs to generate multiple responses and a stronger, larger model to rank them, which is much more efficient than generating them (see the sketch after this list).
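
On comment 11: the comment doesn't name a library, so this is a minimal sketch assuming the Hugging Face transformers + accelerate stack, where device_map="auto" keeps as many layers as fit on the GPU and offloads the remaining tensors to CPU RAM (optionally spilling to disk). The model name is only an example.

    # Sketch: run a model that doesn't fit in VRAM by offloading tensors to CPU RAM.
    # Assumes transformers + accelerate are installed; the model name is illustrative.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "mistralai/Mistral-7B-v0.1"  # example model, swap for your own
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # device_map="auto" fills the GPU first and places the remaining layers on CPU;
    # generation still works, just much slower because weights cross the PCIe bus.
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="auto",
        offload_folder="offload",  # optional disk spill for very large models
    )

    inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))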
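
On comment 15: a small sketch of the sparse-reward setup described there. The model generates whatever intermediate reasoning it wants, and the only training signal is a single correct/incorrect score on the final answer; the function names and the "Answer:" convention are illustrative, not taken from any specific framework.

    import re

    def reward(completion: str, ground_truth: str) -> float:
        """Return 1.0 if the final answer is correct, else 0.0.

        The intermediate reasoning gets no direct supervision; the model has
        to learn useful intermediate steps from this end-of-episode signal.
        """
        # Assume the model is prompted to finish with "Answer: <value>".
        match = re.search(r"Answer:\s*(\S+)\s*$", completion)
        if match is None:
            return 0.0
        return 1.0 if match.group(1) == ground_truth else 0.0

    # A policy-gradient trainer (PPO/GRPO-style) would sample completions,
    # score them with `reward`, and update the model toward high-reward ones.
    print(reward("23 + 19 = 42. Answer: 42", "42"))  # 1.0
    print(reward("23 + 19 = 41. Answer: 41", "42"))  # 0.0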
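
On comment 19: a toy sketch of the "generate with small models, rank with a larger one" idea. The two model calls are stand-ins; in practice they would be real inference calls (local models or an API).

    import random

    def small_generate(prompt: str, n: int) -> list[str]:
        # Stand-in: pretend a cheap, small model sampled n candidate answers.
        return [f"candidate {i} for: {prompt}" for i in range(n)]

    def large_score(prompt: str, candidate: str) -> float:
        # Stand-in: a stronger model scores a finished candidate. Scoring is
        # roughly one forward pass, much cheaper than generating token by token.
        return random.random()

    def best_of_n(prompt: str, n: int = 8) -> str:
        candidates = small_generate(prompt, n)
        return max(candidates, key=lambda c: large_score(prompt, c))

    print(best_of_n("What is 17 * 24?"))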

This user hasn’t submitted anything.
