- It requires tax increases, and for the average earner the UBI payment will typically just balance out the tax increase, meaning they don't directly profit.
UBI isn't about giving everyone free money. It's about giving everyone a safety net, so that they can take bigger economic risks and aren't pushed into crime or bullshit work.
The upper half of society will see only the indirect benefits, like greater employment/investment choices due to more entrepreneurialism.
- That discussion also makes me worry that they may try to use LLMs or LLM-based metrics to measure the size of the gap as a proxy for the value of the content.
The landlord of the marketplace should probably not dabble in the appraisal of products, whether for factuality or value.
- > without punishing regular browsing humans.
As a content consumer, I'm also hoping to be part of the ecosystem. I already use Patreon a lot as "AdBlock absolution", but it doesn't fix the market dynamics. Major content platforms tend to stagnate or worsen over time, because they'd rather sell impressions to advertisers than a good product to consumers.
- What makes you think the secrets are small enough to fit inside people's heads, rather than something like a huge codebase of data-scraping and filtering pipelines, or a DB of manual labels?
- Please consider also describing the business model on the website, even if it's hidden away in a FAQ. I have so much subscription fatigue now that I just don't try things out if a subscription is inevitable. I'm happy to pay for good products, just not happy to be forced to pay a fixed rate for continued access even if my usage dwindles.
If you are thinking of adding a one-off-donation-style purchase method, consider sending annual reminders to renew it. At least in my case, I'm happy to pay repeatedly if development continues; I'm just unwilling to make an upfront ongoing commitment.
- I don't think retrofitting existing languages/ecosystems is necessarily a lost cause. Static enforcement requires rewrites, but runtime enforcement gets you most of the benefit at a much lower cost.
As long as all library code is compiled/run from source, a compiler/runtime can replace system calls with wrappers that check caller-specific permissions, and it can refuse to compile or insert runtime panics if the language's escape hatches would be used. It can be as safe as the language is safe, so long as you're ok with panics when the rules are broken.
It'd take some work to document and distribute capability profiles for libraries that don't care to support it, but a similar effort was proven possible with TypeScript. A toy sketch of the runtime-enforcement idea follows.
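To illustrate, here's a minimal Python sketch of what the runtime half could look like. Everything in it (the CAPABILITIES table, guarded_open) is hypothetical, standing in for wrappers a compiler/runtime would insert around system calls automatically:

```python
# Toy sketch of runtime capability enforcement; all names are made up.
import builtins
import inspect

# Capability profile: which top-level packages may touch the filesystem.
CAPABILITIES = {
    "trusted_lib": {"fs.read", "fs.write"},
    "logging_lib": {"fs.write"},
}

def _caller_package(depth=2):
    """Best-effort lookup of the package whose code invoked the wrapper."""
    frame = inspect.stack()[depth].frame
    module = frame.f_globals.get("__name__", "")
    return module.split(".")[0]

def guarded_open(path, mode="r", **kwargs):
    """Stand-in for a compiler-inserted wrapper around open()."""
    needed = "fs.write" if any(c in mode for c in "wax+") else "fs.read"
    caller = _caller_package()
    if needed not in CAPABILITIES.get(caller, set()):
        # The "runtime panic": fail loudly instead of letting an
        # unauthorized library reach the filesystem.
        raise PermissionError(f"{caller} lacks capability '{needed}'")
    return builtins.open(path, mode, **kwargs)
```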
- The last major product innovation was PWA support, starting in 2016.
Browsers used to try new ideas like RSS, widgets, and shared/social browsing sessions: interfaces meant to facilitate low-friction integration with the rest of your life, and to multiplex data sources so that it's not a hassle to have many providers for [news, entertainment, social] experiences.
It's likely no coincidence that this innovation languished once monopolies started pumping money into the ecosystem.
- > It's interesting that there are no reasoning models yet
This may be merely a naming distinction, leaving the name open for a future release based on their recent research, such as Coconut[1]. They did RL post-training, and when fed logic problems it appears to do a significant amount of step-by-step thinking[2]; it just doesn't wrap it in <thinking> tags.
[1] https://arxiv.org/abs/2412.06769 "Training Large Language Models to Reason in a Continuous Latent Space"
[2] https://www.youtube.com/watch?v=12lAM-xPvu8 (skip through this - it's recorded in real time)
- > Or is Behemoth just going through post-training that takes longer than post-training the distilled versions?
This is likely the main explanation. RL fine-tuning repeatedly alternates between inference, to generate and score responses, and training on those responses (sketched below). In inference mode they can parallelize across responses, but each response is still generated one token at a time; likely 5+ minutes per iteration if they're aiming for 10k+ token CoTs like other reasoning models.
There's also likely an element of strategy involved. We've already seen OpenAI hold back releases to time them to undermine competitors' releases (see o3-mini's release date & pricing vs R1's). Meta probably wants to keep that option open.
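For intuition, here's a minimal sketch of that loop; policy, reward_fn, and prompts are placeholders, not any real training API:

```python
# Hypothetical sketch of the RL fine-tuning loop described above.
def rl_finetune(policy, reward_fn, prompts, iterations, group_size=16):
    for _ in range(iterations):
        # Inference phase: parallel across prompts and responses, but each
        # response is still decoded one token at a time, so long CoTs
        # dominate the wall clock of every iteration.
        responses = [policy.generate(p, n=group_size) for p in prompts]
        rewards = [[reward_fn(p, r) for r in group]
                   for p, group in zip(prompts, responses)]
        # Training phase: update the policy toward high-reward responses
        # (e.g. with a PPO- or GRPO-style objective).
        policy.update(prompts, responses, rewards)
    return policy
```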
- My thoughts go out to the poor engineers who got put on call because someone scheduled a product release on the day before the biggest holiday of their year.
- It's not even "nearly as good as o1". They only compared to the older 4o.
You can safely assume Qwen2.5-Max will score worse than all of the recent reasoning models (o1, DeepSeek-R1, Gemini 2.0 Flash Thinking).
It'll probably become a very strong model if/when they apply RL training for reasoning. However, all the successful recipes for this are closed source, so it may take some time. In the meantime they could do SFT on another model's reasoning chains (rough sketch below), though the DeepSeek-R1 technical report noted that this isn't as good as RL training.
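A rough sketch of what that SFT-on-reasoning-chains setup could look like; teacher, student, and the problem fields are hypothetical placeholders:

```python
# Sketch of SFT-style distillation from a teacher reasoning model.
def build_distillation_set(teacher, problems, samples_per_problem=4):
    dataset = []
    for problem in problems:
        for _ in range(samples_per_problem):
            cot, answer = teacher.generate(problem.prompt)
            # Rejection sampling: keep only chains that reach the
            # reference answer, so the student imitates correct reasoning.
            if answer == problem.reference_answer:
                dataset.append({"prompt": problem.prompt,
                                "completion": cot + "\n" + answer})
    return dataset  # then fine-tune the student on these pairs
```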
- Vegetarian keto is certainly possible, but vegan would be very tough. Only 2 of my 6 regular meals[1] have meat in them, and I'd probably replace those with tofu and mushrooms if I could tolerate them. There's a world of keto/vege analogues to try for noodles, breads, and pizza bases. IMO some of them are nicer than the carby versions.
I also struggle with willpower, and it took me ~10 big attempts over ~14 years before I managed to stay on it long enough to fix my metabolism. I just wanted to spread the message of hope that every attempt gets easier. Mindset plays a big role - I've seen a few people push themselves really hard, then declare it impossible and never give it another shot. If you know you're playing a long game, take a break when you're really suffering, and don't beat yourself up over failures; it'll be easier to try again next time you have the energy.
[1] I've lots of intolerances - whitelisting was easier than blacklisting. Here's the list: flaxmeal porridge, keto bread + cream/cottage cheese, omelette w/ pizza toppings, egg & cheese salad, caesar salad (w/ chicken), mince+vege+cheese mealprep'd casserole
- I'm at 3 years with occasional breaks. At a certain point my weight wouldn't go lower and I started feeling terrible. I think I was producing more ketones than I could use. I'm not sure exactly what fixed it, but now I'm sustaining a low-carb, low-but-nonzero-ketone mode, and still getting 50-75% of the mental/energy/anti-inflammatory advantages.
I think it was either changing my diet to focus on veges instead of meats (still 15-30g net carbs/day though), or adding artificial sweetener to maybe fool my body into making insulin? The science says that shouldn't happen, but idk what else it could be.
- Don't give up! Induction gets easier every time, and you learn lots of tricks/recipes, like keto-ade to feel better during induction, and making oats/flaxmeal tasty for cheap & quick breakfasts. You don't have to commit to long streaks, or feel bad about sunk cost when you cheat. All that progress accumulates.
I've been in and out so often now I can happily switch between keto at home & unrestricted on vacation/occasions. At worst I get 1 day of dopiness starting carbs, and 1 day of mild cravings stopping them, but usually I don't even notice.
- > it's not clear if that was the author's actual intention
The paper[1] doesn't appear to have any other connections to the book/response/memes. A clear distinction is that the UB paper very directly and prominently states the question, rather than cloaking it in allusion or having a lengthy preface trying to contextualize it.
[1] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p34...
- > Her channel has strayed far beyond the topics she has credibility in.
I appreciate that she makes her videos easily verifiable by prominently showing her research; it made it easy to notice when this started happening and tune out. A lot of opinion-faucets on the internet try to be irrefutable by hiding their sources.
I don't trust Sabine intrinsically, but I trust that I can notice when she under-researches a topic or makes a leap of logic. She conveys enough good information that I find it worth my time to watch.
- The difference is sources. Sabine shows her sources prominently on screen, with searchable citations to find the original. She makes it clear in her phrasing whether she's paraphrasing a source, or passing her own judgement.
It's easy to know whether to internalize what she says when you view it critically. Ask "does the presented research seem legit, complete, and impartial?" and "is her conclusion logical?". She gives you the receipts to check. This is not the same as deciding whether to put blind faith into a comedian's off-the-cuff anecdotes and opinions.
I often disagree with her conclusions, but at least she makes it very easy to validate her chain of thought, find where our views diverge, and absorb only the information I trust.
- I've seen similar reactions and I can't help but think she's intentionally communicating provocatively to make people engage their brains.
You shouldn't just "take her seriously"; you should take what she says *critically*. Hear the information and opinions, then decide for yourself whether to accept them.
- The patients know this. Asking for consent would still yield the vast majority of the data.
It would also mean more people analyzing it - I know at least one big pharma company that won't touch patient data that was taken without consent because they don't want to be associated with such unethical practices.
- In recent LLMs, filtered internet text is at the low end of the quality spectrum. The higher end is curated scientific papers, synthetic and rephrased text, RLHF conversations, reasoning CoTs, etc. English/Chinese/Python/JavaScript dominate here.
The issue is that when there's a difference in training data quality between languages, LLMs likely associate that difference with the languages if not explicitly compensated for.
IMO it would be far more impactful to generate and publish high-quality data in minority languages for current model trainers than to train new models that are simply enriched with a higher percentage of low-quality internet scrapings in those languages.