- I really take issue with studies like this that put meat and meat products together.
Unprocessed meat is what humans have been eating for hundreds of thousands of years.
Meat products are new commercial inventions and contain stuff like preservatives, volume expanders, flavor enhancers and coloring agents. They also typically contain added sugars, sodium, maltodextrin and corn syrup.
One can't seriously lump these together and call them the same, run a study where participants might be eating SPAM, and then conclude that "red meat is bad".
Given the choice between "Domino's vegetarian pizza", "IKEA's meatballs" and "steak that is fried, salted and peppered", which one do you think will be the healthiest option?
- The fat is an excellent source of energy though, and it's very hard to get fat by eating fat because it's essentially hormonally inert, i.e. eating fat doesn't trigger insulin, which is the hormone that enables body fat accumulation.
So the problem with steak isn't the steak itself; it's the "steak dinner", where the meat comes with sides such as french fries and drinks such as beer.
- Steak is actually an excellent source of protein (and fat, if you get the fattier steak as you should).
Just because vegetables, lentils or nuts contain protein doesn't mean it's the same as, or equivalent to, the protein in an animal product.
Meat is actually super easy for humans to digest and has no downsides. All vegetables, on the other hand, contain plenty of anti-nutrients such as phytates and oxalates.
Everything in the human body (skin, connective tissue, tendons, hair, nails, muscles) is essentially built out of protein and collagen. Fats are essential for hormone function.
- After we've completed the knowledge transfer from the public domain, across all potential sources of information, from books to open source code to private data banks and LLMs, then what comes next? Destroying said works so that nobody else can access them? Privatizing knowledge, hoarding all the data, limiting access, selling ads?
- It's really sad if you think about it: delivering the best desktop UX is all about just not actively sabotaging it on purpose.
Win the game by doing nothing while the competition drives itself into the ground via enshittification.
After 40 years of computing this is the best we can do? No wonder we can't have nice things.
Maybe the Windows users should be called "victims" at this point.
- There are several levels here.
In your C++ (or C) program you have one (or more) allocators. These are just pieces of code that carve blocks of memory into smaller chunks for the program to use. Typically the allocators get their memory from the OS in pages, using some OS system call such as sbrk or mmap.
For the sake of argument, let's say I write an allocator that has a limit of 2 MiB, while my system has 64 GiB of RAM. The allocator can then fail some request when its internal 2 MiB has been exhausted. In the C world it'd return a nullptr. In the C++ world it would normally throw bad_alloc.
If this happens, does it mean the process is out of memory? Or that the system is out of memory? No, it doesn't.
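To make that concrete, here's a minimal sketch of such a limited allocator (the BoundedAllocator name and its layout are invented for illustration, not taken from any real codebase):

    #include <cstddef>
    #include <cstdlib>
    #include <new>

    // Toy allocator with a hard 2 MiB budget, regardless of how much RAM
    // the machine actually has.
    class BoundedAllocator {
    public:
        explicit BoundedAllocator(std::size_t limit) : limit_(limit) {}

        void* allocate(std::size_t size) {
            if (used_ + size > limit_)
                throw std::bad_alloc();   // C++-style failure; a C-style API would return nullptr
            void* p = std::malloc(size);  // backing memory could equally come from mmap'd pages
            if (!p)
                throw std::bad_alloc();
            used_ += size;
            return p;
        }

    private:
        std::size_t limit_;
        std::size_t used_ = 0;
    };

    int main() {
        BoundedAllocator alloc(2 * 1024 * 1024);  // 2 MiB budget
        try {
            alloc.allocate(4 * 1024 * 1024);      // exceeds the budget
        } catch (const std::bad_alloc&) {
            // This allocator is "out of memory"; the process and the OS are not.
        }
    }

The catch block fires because this allocator's own budget is spent, even though the process and the OS could happily hand out far more.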
That being said, where things get murky is that there are allocators that, in the absence of limits, will just map more and more pages from the OS. The OS can "overcommit", which is to say it hands out more pages than can actually fit into the available physical memory (after taking into account what the OS itself uses, etc.). And when the overall system memory demand grows too high, it will just pick some process and kill it. On Linux this is the infamous OOM killer, which uses a per-process badness score (exposed as oom_score and adjustable via oom_score_adj) to determine what to kill.
And yes, for the OOM killer there's very little you can do.
But an allocation failure (nullptr or bad_alloc) does not mean an OOM condition is happening in the system.
- "Are we playing word games here? If a process has a set amount of memory, and it's out of it, then that process is OOM, if a VM is out of memory, it's OOM. Yes, OOM is typically used for OS OOM, and Linus is talking about rust in the kernel, so that's what OOM would mean."
As I just explained an allocator can have its own limits.
A process can have multiple allocators. There's no direct logical step that says that because some allocator failed some allocation, the process itself can never allocate anything again.
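Here's a rough sketch of that point using the standard std::pmr facilities (the buffer sizes and names are arbitrary, chosen only for illustration):

    #include <array>
    #include <cstddef>
    #include <memory_resource>
    #include <new>
    #include <vector>

    int main() {
        // A tiny arena backed by a 1 KiB buffer. With null_memory_resource as
        // its upstream, it is NOT allowed to fall back to the heap when full.
        std::array<std::byte, 1024> buffer;
        std::pmr::monotonic_buffer_resource arena(
            buffer.data(), buffer.size(), std::pmr::null_memory_resource());

        std::pmr::vector<int> small(&arena);
        try {
            small.resize(1'000'000);  // far more than 1 KiB: this allocator fails
        } catch (const std::bad_alloc&) {
            // The arena is exhausted...
        }

        // ...but the default allocator (the global heap) still works fine.
        std::vector<int> big(1'000'000);  // succeeds; the process is not OOM
    }

One allocator in the process has hit its limit; another has not.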
"Of course there is, would you treat being out of bread similar to being out of oxygen? Again this can be explained by the context being kernel development and not application development."
The parent comment is talking about overcommitment and OOM as if these were situations that are completely out of the program's control. They aren't.
- Speaking as a long-time C++ programmer, I really don't get this mindset.
First off, an allocation failure (typically indicated by a bad_alloc exception in C++ code, or a nullptr in C-style code) does not mean that the system (or even the process) as a whole is out of memory.
It just means that this particular allocator could not satisfy the allocation request. The allocator could have a "ulimit" or similar limit that is completely independent of actual process/system limitations.
Secondly, what reason is there to treat an allocation failure any differently than any other resource allocation failure?
A normal structure for a program is to catch these exceptions at a higher level in the stack, close to some logical entry point such as a thread entry point or UI action handler, where they can be understood and possibly shown to the user or logged. It shouldn't really matter whether the failure is a failure to allocate a socket or a failure to allocate memory.
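As a rough sketch of that structure (the function names and the deliberately oversized reservation are invented for illustration only):

    #include <exception>
    #include <iostream>
    #include <new>
    #include <thread>
    #include <vector>

    // Hypothetical unit of work; any step could fail to acquire a resource
    // (memory, a socket, a file handle, ...) and throw.
    void handle_request() {
        std::vector<char> buffer;
        buffer.reserve(buffer.max_size());  // in practice this throws std::bad_alloc
    }

    // Thread entry point: one place, close to the logical entry point, where
    // resource failures are understood, logged, or surfaced to the user.
    void worker_main() {
        try {
            handle_request();
        } catch (const std::bad_alloc&) {
            std::cerr << "request failed: could not allocate memory\n";
        } catch (const std::exception& e) {
            std::cerr << "request failed: " << e.what() << '\n';
        }
    }

    int main() {
        std::thread t(worker_main);
        t.join();
    }

The memory failure and any other resource failure end up in the same place, handled the same way.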
You could make the case that if the system is out of memory, the exception propagation itself is going to fail. Maybe... but IMHO, on the code path taken when the stack is unwound due to an exception, you should only release resources, not allocate more, anyway.
- Same here. I'm "happy" that I'm old "enough" to be able to wrap up my career in a few years' time and likely get out of this mess before this "agentic AI slop" becomes the expected workflow.
On my personal project I do sometimes chat with ChatGPT and it works as a rubber duck: I explain, put my thoughts into words, and typically I solve my problem just by thinking it through while expressing it in words. But I must also admit that ChatGPT is very good at producing prose, and I often use it to recommend names for abstractions/concepts, modules, functions, enums etc. So there's some value there.
But when it comes to code I want to understand everything that goes into my project. So at the end of the day I'm always going to be the "bottleneck", whether I think through the problem myself and write the code, or I review and try to understand the AI-generated code slop.
It seems to me that the AI slop generation workflow is a great fit for the industry though: more quantity rather than quality, and continuous churn. Make it cheaper to replace code so that the replacement can be replaced a week later with another vibe-coded slop. Quality might drop, bugs might proliferate, but who cares?
And to be fair, code itself has no value, it's ephemeral, data and its transformations are what matter. Maybe at some point we can just throw out the code and just use the chatbots to transform the data directly!
- As long as you have power structures in place in society where labor is weak and the capital class is strong, the capital class is going to use all its wealth and power to extract everything it can from the rest of society.
Regardless of whether or not you bring manufacturing back, you also have to fix these socioeconomic issues before everyone can prosper.
- This is putting the cart before the horse.
The economy should be a tool for society, one that benefits everyone. Instead it's becoming more and more a playground for the rich to extract wealth, and the proletariat's only purpose is to serve the bourgeoisie, lest they be discarded to the outskirts of the economy, and often to the literal slums of society, while their peers shout "you're just not working hard enough".
- Yet more and more people are struggling to afford even basic necessities, and one can only dream of the luxury of the '50s, when a single working-class income could cover housing, a car and a family, with enough left over for leisure. Where has all the economic surplus gone? Right... to the bourgeoisie, the capital-owning class that extracts ever more of the wealth generated by society.
- I wish people who ship crappy software didn't ship it and would let someone else ship something better instead.
It really sucks when the first mover / incumbent is some crappy, half-assed solution.
But unfortunately we live in a world where quality is largely irrelevant and other USPs matter more. For example, these little weekend projects became successful despite their distinct lack of quality:
Linux kernel - free Unix.
JavaScript - scripting in the browser.
Python - a sane "Perl".
Today on GitHub alone you can probably find 100 projects that are more fully featured and of higher quality than any of these were when they launched, but nobody cares.
- I've been trying to switch to Wayland and KDE Plasma for some time, but it's just so glitchy. Graphics bugs such as the task switcher showing black or flickery preview thumbnails, or Firefox bringing down the whole system when opening a single 4K PNG, indicate that it's still, unfortunately, very much an alpha.
Maybe in another decade or so.
- Yeah, there's the "progressive overload" + volume camp.
It can work... the problem is that if you do too little you get no results, and if you do too much you burn out. So you have to manage both volume and intensity so that you get progressive overload. This is difficult.
The easier way is to just ignore volume in the first place and train as hard as you can (go to failure, or very close to it) for maximal effort, i.e. increase the intensity, then RECOVER, then go back to the gym when you're no longer sore.
This is a much easier routine to follow, and it will produce development, assuming the other factors (quality of sleep and nutrition) are in check.
So a shortcut summary: forget about volume, focus on intensity, and let volume follow your capacity to recover. Avoid injuries and burnout while stimulating growth.
Using the bench press example again, in a volume program I might do
6 sets of 6 reps for a total of 36 reps. Since I'm doing so much volume, it's clear that my first 5 sets will not be challenging, because with this many sets I HAVE to save my energy for all 6 of them. MAYBE the last rep or two of the last set will start to challenge me. So I'd say that with this volume workout you get 2 reps out of 36 that are "progressive". That's roughly 5%, and 95% of my work is just junk that produces only wear and tear.
With the high-intensity method I continue with drop sets after I fail. So... let's say I do my initial set of 8 reps until I fail, drop the weight and do 3 more reps until I fail, then drop the weight again and do 2 more reps. And then I'm done; that's the workout. My total is 13 reps, but at least 5 of them are in the zone that challenges me. That's 5/13, or about 38%.
- Volume itself is meaningless. The only thing that matters is the intensity of the workout. In fact, you want maximum intensity with minimum volume, to get less wear and tear and more recovery while maximizing the growth stimulus.
First intensity. Then recovery. These two dictate the volume. If volume exceeds recovery, injury and burnout will follow.
- False,
failing to lift is not the same as lifting until failure.
Consider: if I load up the bench press to 200 kg, I won't get a single rep. If I try to rep it I'll fail, but I'm not lifting until failure.
If I load it up with a smaller weight, let's say 100 kg, and crank out rep after rep, I'll get much closer to "lifting until failure".
When I reach the end, the last rep is a rep I won't complete. But I'm still not at the point where I can't do any more; the weight is just too big, so I must reduce the weight and go again. When I do this I get even closer to "lifting until failure".
It's like integration: the smaller the infinitesimal, the closer you get to the true value when you sum up (integrate) all the parts.
- I struggled a lot with my nutrition, and eating "regular food" always made me fat. I tried various keto and low-carb variants but never made them work and always hit a wall after 2-3 weeks. UNTIL I discovered intermittent fasting. After having done intermittent fasting for about 5 years, I started another low-carb/keto journey, but this time I went all in on fat and protein. No holding back. And I also cut out excessive vegetables (especially the raw stuff). So now I'm eating all the eggs, meat, butter and bacon I want. About a year in, the results so far: dropped 4 kg of body fat and put on 2 kg of muscle.