- The heat pump will pull heat from inside the house? That sounds terrible for efficiency in winter, as you will need to reheat the room.
- Yeah, this sounds disingenuous. I can also set my heater to 70°C if I want; that does not increase its size...
- I don't understand all the hype around generating SVG with LLMs. The task isn't really useful, and in a single shot it doesn't seem that interesting since it's really hard and no human could do it either (it would be more useful if the model had visual feedback and could correct the result).
Also, since it has become a popular task, companies will add examples to their training sets, so you're just benchmarking who has the better text-to-SVG training data, not the overall quality of the model.
- I'd say that thoughts and reasoning are two different things; you're moving the goalposts.
But what makes computer hardware fundamentally incompatible with thinking, compared to a brain?
- How can you know?
- Can you explain more? Which things are impossible in Blender?
- Too many acronyms. What are FE and BFF?
- Arc browser unifies the tabs and bookmarks in a very clever way.
- I'm wondering if you can prompt it to work like this: make minimal changes, and run the tests at each step to make sure the code still works.
- If it's Chromium-based, they will need to remove Manifest V2 at some point to stay close to the upstream version.
- You can offload tensors to CPU memory. It will make your model run much slower, but it will work.
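A minimal PyTorch sketch of the idea (toy layers standing in for a real model): the layer kept in CPU memory still computes fine, it just forces activation transfers between devices, which is where the slowdown comes from.

```python
import torch

# Keep one "layer" on the GPU (if available) and offload another to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

layer_on_gpu = torch.nn.Linear(16, 16).to(device)
layer_on_cpu = torch.nn.Linear(16, 16).to("cpu")  # offloaded to CPU memory

x = torch.randn(1, 16, device=device)
h = layer_on_gpu(x)
# Activations must be moved to the CPU before the offloaded layer runs.
out = layer_on_cpu(h.to("cpu"))
```

Libraries like llama.cpp or Hugging Face Accelerate automate this split per layer, but the mechanism is the same.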
- I can't tell if this is a joke or if you're serious.
- Book clubs, art clubs, movie clubs... Lots of options.
- What do you mean? Async/await uses threads
- Multi-step reasoning means that the LLM is given a question (a maths question here) and generates an answer consisting of many intermediate words before returning the solution. Here, we don't want to tell the LLM how to solve the problem word by word. We only want to tell it at the end, "correct" or "incorrect", and have the model learn on its own to generate the intermediate steps that reach the solution.
That's typically a setup where RL is desirable (even necessary): we have sparse rewards (only at the end) and give no details to the model on how to reach the solution. It's similar to training models to play chess against a specific opponent.
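The sparse-reward part can be sketched as an outcome-only reward function: the intermediate reasoning gets no feedback, only the final answer is checked. The "Answer:" marker below is an assumption for illustration, not a standard.

```python
def outcome_reward(model_output: str, correct_answer: str) -> float:
    # Only look at what follows the last "Answer:" marker;
    # every intermediate step before it earns no reward signal.
    final = model_output.rsplit("Answer:", 1)[-1].strip()
    return 1.0 if final == correct_answer else 0.0

print(outcome_reward("3*4=12, 12+5=17. Answer: 17", "17"))  # 1.0
print(outcome_reward("3*4=7, 7+5=12. Answer: 12", "17"))    # 0.0
```

The RL algorithm then has to do the credit assignment, figuring out which intermediate steps made the final answer correct.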
- He means 10 humans voting for the answer
- Those are tree-search techniques; they are not metrics that assess the "human" complexity of a line. They could be used for that purpose, but out of the box they just give you a winning probability.
- I'm wondering why there isn't a cheaper alternative to the Pi without those features that most people find useless, or a more powerful one for the same price.
- It's possible; the question is how to choose which submodel will handle a given query.
You can use a specific router model, or a larger general-purpose LLM, to do this routing.
Also, some work suggests using smaller LLMs to generate multiple responses and a stronger, larger model to rank them (ranking a response is much cheaper than generating one).
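The ranking setup can be sketched as best-of-n sampling. All function names here are hypothetical placeholders, not real LLM calls:

```python
def generate_candidates(prompt: str, n: int) -> list[str]:
    # Stand-in for sampling n responses from a small, cheap LLM.
    return [f"{prompt} -> candidate {i}" for i in range(n)]

def rank_score(prompt: str, response: str) -> float:
    # Stand-in for a larger model scoring a (prompt, response) pair;
    # scoring one sequence is far cheaper than generating one.
    return sum(map(ord, response))  # toy heuristic

def best_of_n(prompt: str, n: int = 4) -> str:
    # Generate with the small model, pick the candidate the ranker prefers.
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=lambda r: rank_score(prompt, r))
```

The efficiency argument is that the large model only runs a forward pass per candidate to score it, instead of decoding a full response token by token.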