ben-schaaf
Battery efficiency comes from a million little optimizations in the technology stack, most of which come down to using the CPU as little as possible. As such, the instruction set architecture and process node usually aren't that important for your battery life.

If you fully load the CPU and calculate how much energy an AI 340 needs to perform a fixed workload, and compare that to an M1, you'll probably find similar results. But that only matters for your battery life if you're doing things like Blender renders, big compiles or gaming.

Take for example this battery life gaming benchmark for an M1 Air: https://www.youtube.com/watch?v=jYSMfRKsmOU. 2.5 hours is about what you'd expect from an x86 laptop, possibly even worse than the fw13 you're comparing here. But turn down the settings so that the M1 CPU and GPU are mostly idle, and bam you get 10+ hours.

Another example would be a ~5 year old mobile Qualcomm chip. It's on a worse process node than an AMD AI 340, much, much slower, with significantly worse performance per watt, and yet it barely gets hot and sips power.

All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.

It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable GPU video decoding on my fw16 and haven't noticed the fans on YouTube.


jonwinstanley
A huge reason for the low power usage is the iPhone.

Apple spent years incrementally improving the efficiency and performance of their phone chips. Intel and AMD were more desktop-based, so power efficiency wasn't the goal. By the time Apple's chips got so good they could transition into laptops, x86 wasn't in the same ballpark.

Also, the iPhone is the most lucrative product of all time (I think), and Apple poured a tonne of that money into R&D and into hiring top engineers away from Intel, AMD, and ARM, building one of the best silicon teams around.

twilo
Apple purchased Palo Alto Semi which made the biggest difference. One of their best acquisitions ever in my opinion… not that they make all that many of those anyway.
Apple actually makes a lot more acquisitions than you think, but they are rarely very high-profile or much talked about: https://en.wikipedia.org/wiki/List_of_mergers_and_acquisitio...
simonh
> One of their best acquisitions ever in my opinion…

NeXT? But yes, I completely get what you’re saying, I just couldn’t resist. It was an amazingly long-sighted strategic move, for sure.

linotype
I almost feel like NeXT was a reverse acquisition, like Apple became NeXT with an Apple logo.
pjmlp
Pretty much so, I would say.
nxobject
Equally (arguably) importantly, Johny Srouji joined Apple the same year as the PA Semi acquisition, 2008, and led the Apple A4. (He previously worked at IBM on POWER7, which is a fascinating switch in market segment.)
Cthulhu_
I vaguely remember Intel tried to get into the low-power / smartphone / tablet space at the time with their Atom line [0] in the late 00s, but due to core architecture issues they could never reach the efficiency of ARM-based chips.

[0] https://en.wikipedia.org/wiki/Intel_Atom

usr1106
Intel and Nokia partnered around 2007-09 to introduce x86 phone SoCs and the required software stack. Remember MeeGo? Nokia engineers were horrified by the power consumption and were convinced it wouldn't work. But Nokia management wanted to move to a dual-supplier model at all costs, instead of just relying on TI.

MeeGo proceeded far too slowly, and in 2011 Elop chose his former employer's Windows instead. Nokia's decline only accelerated, and Intel hired many Nokia engineers.

Soon Nokia wasn't making phones anymore, and Intel never even managed to ship its first mass-selling phone product.

ARM-based SoCs were 10 years ahead in power saving. The ARM ecosystem didn't make any fatal mistakes, and Intel never caught up.

pjmlp
Symbian was using ARM, though. And no one at the Espoo office was that happy with Elop, except for the board members who invited him.
aidenn0
I don't think it was core architecture issues. My impression is that over the years their efforts to get into low-power devices never got the full force of their engineering prowess.
kimixa
I worked for an IP vendor that was in some Atom SoCs (over a decade ago now, though). From what I remember, the perf/W was actually pretty competitive with contemporary ARM devices when we supplied the IP, but the chips took so long to actually end up in products that they ended up behind; other customers were already on the next generation by that point, even if the initial projects started at about the same time. And the Atoms were buggy as hell; we never had more problems with dumb cache/fabric/memory-controller issues.

To me the Atom team always felt like a dead end inside Intel. Everyone seemed to be trying to get into a different, higher-status team ASAP; our engineering contacts often changed monthly, if we even knew who our "contacts" were meant to be at any time. I think any product developed like that would struggle.

RossBencina
I thought they just acquired P.A. Semi, job done.
simonh
When they bought PA Semi, the company was working on IBM Power architecture chips. It was very much the team Apple was after, not any one particular technology.
lstodd
That was part of it, yes.

But do not forget how focused they (AMD/Intel, especially in the Opteron days -- edit) were on the server market.

skeezyboy
> and Apple poured a tonne of that money into R&D and taking the top engineers from Intel, AMD, and ARM, building one of the best silicon teams.

How much silicon did Apple actually create? I thought they outsourced all the components.

twilo
They bought P.A. Semi (Palo Alto Semiconductor) in 2008, which is where all their ARM chip designs came from.

https://en.wikipedia.org/wiki/P.A._Semi

kube-system
Besides their SoCs, Apple has also made dedicated silicon for secure enclaves, Wi-Fi, Bluetooth, ultra-wideband and cellular radios, and motion coprocessors.
giantrobot
Apple bought PA Semi a long time ago. They have a significant silicon development group. Their ARM architecture license (they were an early investor in ARM) means they get to do basically whatever they want with the ARM ISA. The SoCs in pretty much all their devices are designed in-house.
ljosifov
Were they ARM investors back when they needed a CPU for the Newton? Was that before or after, e.g., the iPaq PDAs? And later, when it looked like Apple might be in danger of going under, did they sell their ARM stake and get a cash injection that way?

I remember the iPaq PDA fondly. I wrote a demo to select a song by voice query from a playlist of a few thousand author-album-song entries. The WiFi add-on was a big plastic "sleeve" that the iPaq slid into, not the other way around. It could run the ASR engine for a whole 10 minutes or so before it drained the battery flat, haha. :-)

giantrobot
IIRC Apple originally invested in ARM during the development of the Newton. The original Newtons used ARM 610 CPUs. I don't know exactly when they sold their ARM stake but they kept their architecture license.

The Newton was long before the iPaq; the MessagePad was released in 1993.

ljosifov
On selling of the ARM stake - asked ChatGPT:

Q> And latter - was it that it looked that Apple maybe in danger of going under, and then they sold their ARM stake and got a cash injection that way?

A> And yes. In the late-1990s turnaround, Apple sold down its ARM stake in multiple tranches after ARM’s 1998 IPO, realizing hundreds of millions of dollars that helped shore up finances (alongside the well-known $150 million Microsoft deal in Aug 1997).

skeezyboy
What about all the components and sensors?
simonh
Apple has bought startups with various technologies, like Anobit, which developed advanced flash memory controllers, and has funded development efforts by partners. For example, Apple worked hand in glove with Sharp to develop the tech for their 5K display panels. They also now have their own cellular chip designs in some models, in their quest for independence from Qualcomm. That’s all from memory, I’m sure there are many more examples.
skeezyboy
So they didn't design all the components and sensors, then.
brokencode
Outsourced to who? The only companies with the engineers you’d need are the other CPU makers like Intel, AMD, Qualcomm, and Nvidia. And none of them make a CPU as efficient as Apple does.
skeezyboy
The CPU, yes, but what about the rest of the iPhone?
brokencode
They design much more in house than any other smartphone brand, except maybe Samsung.

CPU, GPU, neural processor, image signal processor, U1 chip for device tracking, Secure Enclave for biometrics, a 5G modem (only used in the 16e so far)…

They don’t manufacture the chips in house of course. They contract that out to TSMC and other companies.

ChrisGreenHeur
Arm exists; it is unknown how much tech Apple gets from Arm.
brokencode
Arm licenses their designs to everybody. They are okay, but you are never going to make market leading processors by using the Arm designs.
fennecbutt
And TSMC (and therefore ASML etc.); Apple usually reserves the newest upcoming node for their own production.
DanielHB
I don't think it's so much the efficiency of their chips for their hardware (phones) as the efficiency of their OS for their chips and hardware design (like unified memory).
zipityzi
It is likely the hardware efficiency of their chips. Apple SoCs running industry-standard benchmarks run very cool, yet still show dominant performance. The OS efficiency helps, but even under extreme stress tests like SPEC, the Apple SoCs dominate in perf and power.

See Lunar Lake on TSMC N3B, 4+4, on-package DRAM versus the M3 on TSMC N3B, 4+4, on-package DRAM: https://youtu.be/ymoiWv9BF7Q?t=531

The 258V (TSMC N3B) has a worse perf / W 1T curve than the Apple M1 (TSMC N5).

jhoechtl
> It is likely the hardware efficiency of their chips. Apple SoCs running industry-standard benchmarks run very cool, yet still show dominant performance

Dieselgate?

Eric_WVGG
I have heard that Apple Silicon chips are designed around the retain-release cycle that goes back to NeXT and is still here today (hidden by ARC compilation), but I don't think that's the whole story. Back when the M1s came out, many benchmarks showed virtualized Windows blowing the doors off of market-equivalent x86 CPUs.

Also, there are the obvious benefits of being TSMC's best customer. And when you design a chip for low power consumption, you've got a higher ceiling when you introduce cooling.

waffletower
The SoC benefits are being ignored by some people here. Apple doesn't control every piece of software, as some here posit; however, OS optimizations and the use of extra-efficiency cores (which require SoC design but also need specific OS support) are part of the performance.
jimbokun
Textbook Innovator’s Dilemma.
alt227
> A huge reason for the low power usage is the iPhone.

No, the main reason for better battery life is the RISC architecture. PCs on the ARM architecture show the same gains.

BearOso
Those PC ARM chips like Snapdragon were designed first and foremost for mobile, too.
alt227
Any downvoters care to actually leave me a reply telling me why?

I'm not wrong!

tacticalturtle
You might find these posts informative:

https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter

https://chipsandcheese.com/p/why-x86-doesnt-need-to-die

All instructions across x86 and Arm are decoded to micro-operations, which are implementation-specific. You could have an implementation that prioritizes performance, or an implementation that prioritizes power consumption, regardless of the ISA.

Decoding instructions, particularly on a modern die, doesn’t consume a significant amount of area or power, even for complicated variable-length instructions.

ben-schaaf OP
You are wrong. The Snapdragon X Elite is actually a great example: unlike the M1, its performance isn't particularly great, and it eats 50 W under load. That makes its CPU cores a fair bit less efficient than AMD's, even on the same production node. If Apple Silicon didn't exist, you might instead argue that x86-64 is more efficient than ARM.

If all that's true, then why does Snapdragon have better battery life? As I said in my comment, the great battery life comes from when the CPU isn't being used. It's everything else around it. That's where AMD is still significantly behind.

JustExAWS
Because it’s a take that sounds like someone who has been reading comp.sys.mac.advocacy from 1995, when the PPC vs x86 wars were going on (and when PPC chips were already behind in performance), up through 2005, when Apple gave up and went to Intel.
RajT88
> All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

Apple is vertically integrated and can optimize at the OS level and for the many applications they ship with the device.

Compare that to how many cooks are in the kitchen in Wintel land. A perfect example is trying to get to the bottom of why your Windows laptop won't go to sleep and cooks itself in your backpack. Unless something's changed, last I checked it was a circular firing squad between the laptop manufacturer, Microsoft, and various hardware vendors, all blaming each other.

diggan
> Apple is vertically integrated and can optimize

> Compare that to how many cooks are in the kitchen in Wintel land. Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack

So, I was thinking like this as well, and after I lost my Carbon X1 I felt adventurous, but not too adventurous, and wanted a laptop that "could just work". The thinking was "If Microsoft makes both the hardware and the software, it has to work perfectly fine, right?", so I bit my lip and got a Surface Pro 8.

What a horrible laptop that was, even while I was trialing just running Windows on it. It overheated almost immediately by itself, just idling, and it STILL suffers from the issue where the laptop sometimes wakes itself while in my backpack, so when I actually needed it, of course it was hot and out of battery. I've owned a lot of shit laptops through the years, even some without keys in the keyboard, back when I was dirt-poor, but the Surface Pro 8 is the worst of them all; I regret buying it a lot.

I guess my point is that just because Apple seems really good at the whole "vertically integrated" concept, it isn't magic by itself. Microsoft continues to fuck up the very same thing even though they control the entire stack, so you'll still end up with backpack laptops turning themselves on and not turning off properly.

I'd wager you could let Microsoft own every piece of physical material in the world, and they'd still not be able to make a decent laptop.

gowld
Apple has been vertically integrated for 50 years. Microsoft has been horizontally integrated for 50 years.

That's why Apple is good at making a whole single system that works by itself, and Microsoft is good at making a system that works with almost everything almost everyone has made almost ever.

kalleboo
Microsoft has been vertically integrated for nearly 25 years with the Xbox. I wonder if their internally-siloed nature doesn't allow them to learn from individual teams' success.
rerdavies
Once every decade or so, they build a thinly disguised low-end PC, and then spend another 8 years shipping a thinly disguised obsolete low-end PC.

I don't think that really counts as vertical integration.

williamDafoe
The 2019 Macs were vertically integrated, and Apple could do NOTHING good with the Intel PowerPig i9 CPUs. My i9 once ran down from 100% charge to 0% in 90 minutes PLUGGED IN ON A 95W CHARGER! I was hosting a meeting. The M1-M4 CPUs forsake multithreading and downclock, and this is one of the many ways they save power. Video codecs are particularly power-efficient on mobile chips!
danielbarla
I used a 2019 MacBook Pro for quite a while, and it was my first (and so far only) dip into Apple-land. While I appreciated the really solid build quality, great screen, etc, the battery life was pretty abysmal. We're talking easily under 2 hours if I had to be in a video call, which basically meant taking a charger to any meeting of decent length.

The second-biggest disappointment was when I ran my team's compute-heavy workload locally, expecting blistering performance from the i9, only to find that the CPU got throttled to under 50% (I seem to recall 47%, but my memory is fuzzy) within 6 seconds of starting the workload. And this was essentially a brand-new laptop, so it likely wasn't blocked fan intakes. I fail to see the point of putting a CPU in a laptop that your thermal design simply can't handle.

0xffff2
Surprised to hear this. Back in the Surface Pro 4 days, the hardware was great. I made it through college doing 95% of my work on a Surface Pro 4 tablet with the magnetic keyboard and almost always made it through the entire day without having to plug it in.
RajT88
My wife swears by her Surface Pros, and she has owned a few.

I've had a few Surface Book 2s for work, and they were fine except that they needed more RAM, and there was some issue with the connection between the screen and the base which made USB headsets hinky.

ben-schaaf OP
This is easy to disprove. The Snapdragon X Elite has significantly better battery life than what AMD or Intel offer, and yet it's got the same number of cooks in the kitchen.

> Perfect example is trying to get to the bottom of why your windows laptop won't go to sleep and cooks itself in your backpack

Same thing happens in Apple land: https://www.hackerneue.com/item?id=44745897. My Framework 16 hasn't had this issue, although the battery does deplete slowly due to shitty modern standby.

williamDafoe
The X Elite is not better than Ryzen 5. Not better at all! It's why I own an HX365 AMD laptop...
ben-schaaf OP
Do you have a source on that? From every benchmark I've looked at, the X Elite gets similar battery life to Apple Silicon, pretty far ahead of AMD.
bitwize
Microsoft is pushing "Modern Standby" over actual sleep, so laptops can download and install updates while closed at night.
reaperducer
> Microsoft is pushing "Modern Standby" over actual sleep, so laptops can download and install updates while closed at night.

Apple has this. It's called Power Nap. But for some reason, it doesn't cause the same problems reported by people here on HN.

diffeomorphism
It does cause the same problem but seems to be somewhat less frequent.
bitwize
It doesn't cause the same problems because Apple's Power Nap is something you have to enable. It's an option for users who find it useful. It's not replacing traditional S3 sleep, wherein virtually everything is unpowered except a trickle to keep the RAM alive. Microsoft is supplanting traditional sleep with Modern Standby. You can disable Modern Standby, but only with registry jiggery-pokery, and Microsoft is pressuring OEMs to remove S3 support altogether.
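
For reference, the registry value people usually mean is PlatformAoAcOverride under the Power key (that's from memory, and newer Windows builds have reportedly stopped honoring it, so verify before relying on it). A minimal Win32 sketch of the tweak:

  #include <windows.h>

  int main() {
      // Set HKLM\SYSTEM\CurrentControlSet\Control\Power\PlatformAoAcOverride = 0
      // to ask Windows to fall back from Modern Standby (S0 low-power idle) to S3,
      // where the firmware still supports it. Needs admin rights and a reboot.
      HKEY key = nullptr;
      const DWORD zero = 0;
      if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                        "SYSTEM\\CurrentControlSet\\Control\\Power",
                        0, KEY_SET_VALUE, &key) == ERROR_SUCCESS) {
          RegSetValueExA(key, "PlatformAoAcOverride", 0, REG_DWORD,
                         reinterpret_cast<const BYTE*>(&zero), sizeof(zero));
          RegCloseKey(key);
      }
      return 0;
  }
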
kube-system
Also on the HN front page today:

> Framework 16

> The 2nd Gen Keyboard retains the same hardware as the 1st Gen but introduces refreshed artwork and updated firmware, which includes a fix to prevent the system from waking while carried in a bag.

RajT88
There are some reports of this with MacBooks as well. But my (non-scientific) impression is that a lot more people in Wintel land are seeing it. All of my work laptops, and a few of my personal laptops, have done this to me since I started using Windows 10/11.
amazingman
I remember a time when this was supposed to be Wintel's advantage. It's really strange to now be in a time where Apple leads the consumer computing industry in hardware performance, yet is utterly failing at evolving the actual experience of using their computers. I'm pretty sure I'm not the only one who would gladly give up a bit of performance if it were going to result in a polished, consistent UI/UX based on the actual science of human interface design rather than this usability hellscape the Alan Dye era is sending us into.
galad87
macOS is a resource hungry pig, I wouldn't bet too much on it making a difference.
aurareturn

  All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
This isn't true. Yes, uncore power consumption is very important, but so is CPU efficiency under load. The faster the CPU can finish a task, the sooner it can go back to sleep, aka race to sleep.
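
A rough back-of-envelope with made-up numbers, just to illustrate the shape of the race-to-sleep argument:

  #include <cstdio>

  int main() {
      // Hypothetical figures: the same fixed task, observed over a 10 s window.
      const double idle_w = 0.3;                  // package idle power
      const double fast_w = 8.0, fast_s = 1.0;    // fast core: high power, done quickly
      const double slow_w = 4.0, slow_s = 3.0;    // slow core: lower power, takes longer
      const double window_s = 10.0;

      const double fast_j = fast_w * fast_s + idle_w * (window_s - fast_s);
      const double slow_j = slow_w * slow_s + idle_w * (window_s - slow_s);
      std::printf("fast: %.1f J, slow: %.1f J\n", fast_j, slow_j);
      // Prints "fast: 10.7 J, slow: 14.1 J": finishing sooner wins here, but only
      // because the idle floor is low and the perf/W gap between the cores is modest.
      return 0;
  }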

Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

Another thing that makes Apple laptops feel way more efficient is that they use a true big.LITTLE design, while AMD's and Intel's little cores are actually designed for area efficiency, not power efficiency. In Intel's case, they stuff in as many little cores as possible to win MT benchmarks. In real-world applications the little cores are next to useless, because most applications prefer a few fast cores over many slow cores.

yaro330
> Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

This is false; in cross-platform tasks it's on par with, if not worse than, the latest x86 arches. As others pointed out, 2.5 h of gaming is about what you'd expect from a similarly built x86 machine.

They are winning due to lower idle and low-load consumption, which they achieve by integrating everything as much as possible, something that's basically impossible for AMD and Intel.

> The faster the CPU can finish a task, the faster it can go back to sleep, aka race to sleep.

That may have been true when CPU manufacturers left a ton of headroom on the V/F curve, but it's not really true anymore. A Zen 4 core's power draw shoots up sharply past 4.6 GHz and nearly triples as you approach 5.5 GHz (compared to 4.6); are you gonna complete the task 3 times faster at 5.5 GHz?

aurareturn

  This is false, in cross platform tasks it's on par if not worse than latest X86 arches.
This is Cinebench 2024, a cross-platform application: https://imgur.com/a/yvpEpKF

  They are willing due to lower idle and low load consumption, which they achieve by integrating everything as much as possible - something that's basically impossible for AMD and Intel.
Weird, because LNL achieved similar idle wattage to Apple Silicon.[0] Why do you say it's impossible?

  May have been true when CPU manufacturers left a ton of headroom on the V/F curve, but not really true anymore. Zen 4 core's power draw shoots up sharply pass 4.6 GHz and nearly triples when you approach 5.5 GHz (compared to 4.6), are you gonna complete the task 3 times faster at 5.5 GHz?
Honestly not sure how your statement is relevant.

[0]https://www.notebookcheck.net/Dell-XPS-13-9350-laptop-review...

atwrk
> This is Cinebench 2024, a cross-platform application: https://imgur.com/a/yvpEpKF

You sure like that table, don't you? Trying to find the source of those Blender numbers, I came across many Reddit posts of yours with that exact same table. Sadly those also don't have a source; they are not from the notebookcheck source.

aurareturn
The reason why I keep reposting this table is that people post incorrect statements about AMD/Apple so often, often with zero data backing them up.

For the Blender numbers, the M4 Pro figures came from Max Tech's review.[0] I don't remember where I got the Strix Halo numbers from. Could have been from another YouTube video or some old Notebookcheck article.

Anyway, Blender has official GPU benchmark numbers now:

M4 Pro: 2497 [1]

Strix Halo: 1304 [2]

So the M4 Pro is roughly 90% faster in the latest Blender. The most likely reason Blender's official numbers favor the M4 Pro even more is more recent optimizations.

Sources:

[0]https://youtu.be/0aLg_a9yrZk?si=NKcx3cl0NVdn4bwk&t=325

[1] https://opendata.blender.org/devices/Apple%20M4%20Pro%20(GPU...

[2] https://opendata.blender.org/devices/AMD%20Radeon%208060S%20...

yaro330
> Weird because LNL achieved similar idle wattage as Apple Silicon.[0] Why do you say it's impossible?

And where is LNL now? How's the company that produced it? Even under Pat Gelsinger they said that LNL is a one-off and they're not gonna make any more of them. It's commercially infeasible.

> Honestly not sure how your statement is relevant.

How is you bringing up synthetics relevant to race to idle?

Regardless, a number of things can be done on Strix Halo to improve performance; the first would be switching to an optimized Linux distro, or at least kernel. That would claw back 5-20% depending on the task. It would also improve single-core efficiency: I've seen my 7945HX drop from 14-15 W idle on Windows to about 7-8 W on Linux, because Windows likes to jerk the CCDs around non-stop and throw tasks around willy-nilly, which means the second CCD and the I/O die never properly idle.

aurareturn

  And where is LNL now? How's the company that produced it? Even under Pat Gelsinger they said that LNL is a one off and they're not gonna make any more of them. It's commercially infeasible.
Why does it matter that LNL is bad economically? LNL shows that it's definitely possible to achieve the same or even better idle wattage than Apple Silicon.

  How is you bringing up synthetics relevant to race to idle?
I truly don't understand what you mean.
ben-schaaf OP
> This is Cinebench 2024, a cross platform application: https://imgur.com/a/yvpEpKF

Cool, now compare the M1 to the AI 340. The AI 340 has slightly better single-core and better multi-core performance. If battery life were all about race to idle like you claim, then the AI 340 should be better than the M1.

See also the Snapdragon X Elite, which is significantly slower than the AI 340 and uses more power under load, so in total has much less efficient cores, and yet it still beats the AI 340 on battery life.

williamDafoe
I did MIPS-per-watt calculations in 2017, and Apple (the A10, I think) was 2-3x better than Intel. See "How to Build a Computer" by Donald Gillies (SlideShare slides). I was shocked; I didn't expect this at all!
jandrewrogers
> Apple Silicon is 2-4x more efficient than AMD and Intel CPUs during load while also having higher top end speed.

This is not true. For high-throughput server software x86 is significantly more efficient than Apple Silicon. Apple Silicon optimizes for idle states and x86 optimizes for throughput, which assumes very different use cases. One of the challenges for using x86 in laptops is that the microarchitectures are server-optimized at their heart.

ARM in general does not have the top-end performance of x86 if you are doing any kind of performance engineering. I don't think that is controversial. I'd still much rather have Apple Silicon in my laptop.

aurareturn

  For high-throughput server software x86 is significantly more efficient than Apple Silicon.
In the server space, x86 has the highest performance right now. Yes. That's true. That's also because Apple does not make server parts. Look for Qualcomm to try to win the server performance crown in the next few years with their Oryon cores.

That said, Graviton is at least 50% of all AWS deployments now. So it's winning vs x86.

  ARM in general does not have the top-end performance of x86 if you are doing any kind of performance engineering. I don't think that is controversial.
I think you'll have to define what top-end means and what performance engineering means.
ksec
I don't think the point of Amazon using ARM was performance; it was purely cost optimisation. At one point, nearly 40% of Intel's server revenue was coming from Amazon. They just figured out that at their scale it would be cheaper to do it themselves.

But I am purely guessing that ARM has raised their price per core, so it makes less financial sense to do a yearly CPU update. ARM is also going into the server CPU business, meaning they now have some incentive to keep it all to themselves. Which makes Nvidia's move really smart, as they decided to go for the ISA licence and do it themselves.

aurareturn
Server CPUs do not win on performance alone. They win on performance/$, LTV/$, etc. That's why Graviton is winning on AWS.
rollcat
> It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable gpu video decoding on my fw16 and haven't noticed the fans on youtube.

I've worked in video delivery for quite a while.

If I were to write the law, decision-makers wilfully forcing software video decoding where hardware is available would be made to sit on these CPUs with their bare buttocks. If that sounds inhumane, then yes, this is the harm they're bringing upon their users, and maybe it's time to stop turning the other cheek.

throwawaylaptop
I run Linux Mint MATE on a 10-year-old laptop. Everything works fine, but watching YouTube makes my wireless USB-dongle mouse stutter a LOT. Basically, if CPU usage goes up, the mouse goes to hell.

Are you telling me that for some reason it's not using any hardware acceleration available while watching YouTube? How do I fix it?

olyjohn
It's probably the 2.4GHz WiFi transmitter interfering with the 2.4GHz mouse transmitter. You probably notice it during YouTube because it's constantly downloading. Try a wired mouse.
throwawaylaptop
Interesting theory. The wired mouse is trouble-free, but I figured that was because of a better sampling rate and less overhead overall. Maybe I'll try a Bluetooth mouse or some other frequency, or the laptop on wired Ethernet, to see if the theory pans out.
Sohcahtoa82
> Maybe I'll try a bluetooth mouse

Bluetooth is also 2.4 GHz.

lostmsu
Or just switch to 5GHz or 6GHz range.
dismalaf
The easiest way is to use Chrome or a Chrome-based browser, since they bundle codecs with the browser. If you're using Firefox, you need to make sure you have the codecs. I don't know enough about Mint specifically to say whether it automatically installs codecs or not.
lights0123
You specifically don't want to use the bundled codecs since those would be CPU decode only.
throwawaylaptop
Interesting. I'll look into that more.
dismalaf
Straight up false. I have both Chrome and Vivaldi installed on Linux, and both have hardware video decoding OOTB...

You check it by putting chrome://gpu in the address bar.

throwawaylaptop
I'm using Brave, and it seems the "enable hardware acceleration" box is checked.
throwup238
> All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.

A good demonstration is the Android kernel. By far the biggest difference between it and the stock Linux kernel is power management. Many subsystems down to the process scheduler are modified and tuned to improve battery life.

qcnguy
And the more relevant case for laptops is macOS, which is heavily optimized for battery life and power draw in ways that Linux just isn't, and neither is Windows. A lot of the problems here can't actually be fixed by Intel, AMD, or anyone designing x86 laptops, because getting that level of efficiency requires the ability to strongly lead the app developer community. It also requires highly competent operating system developers focusing on the issue for a very long time, and being able to co-design the operating system, firmware and hardware together. Microsoft barely cares about Windows anymore, the Linux guys have only ever cared about servers, and that leaves Apple alone in the market. I doubt anything will change anytime soon.
curt15
>And the more relevant case for laptops is macOS, which is heavily optimized for battery life and power draw in ways that Linux just isn't, neither is Windows.

What are some examples of power draw savings that Linux is leaving on the table?

qcnguy
There's no equivalent of App Nap, if I recall correctly, and drivers often aren't aggressive about shutting down unused devices, or don't do it at all. Linux has historically had a lot of problems with reliable suspend too.
john01dav
Power efficiency is very important for servers too, for cost rather than battery life. But energy is energy. So I suspect the power draw is in userland systems that are specific to the desktop, like desktop environments, and using a simpler desktop environment may be worthwhile.
qcnguy
It's important but not relative to performance. Perf/watt thinking has a much longer history in mobile and laptop spaces. Even in servers most workloads haven't migrated to ARM.

I used Ubuntu around 2015-2018 and got hit with a nasty defect around the GNOME Online Accounts integration (please correct me if the words are wrong here). For some reason, it got stuck in a loop or a bad state on my machine. I have since decided that I will never add any of my online accounts (Facebook, Google, or anything else) to GNOME.
umbra07
I assumed the same thing, until I tested my hypothesis. KDE Plasma 6 uses less power at idle than just `Hyprland` (a tiling WM) without anything like a notification daemon, idler, status bar, etc.
pdimitar
Were you able to find out why? This is very interesting and I'd never guess it.
deaddodo
If x86 just officially said “we’re cutting off 32-bit legacy” one day (similar to how Apple did), they could toss out 95% of the crap that makes them power inefficient. Just think of the difference dropping A10 offered for memory efficiency.

“Modern Standby” could be made to actually work, ACPI states could be fixed, a functional wake-up state built anew, etc. Hell, while it would allow pared down CPUs, you could have a stop-gap where run mode was customized in firmware.

Too much credit is given to Apple for “owning the stack”, and too little attention is paid to the legacy x86 cruft that lets you run classic Doom and Commander Keen on modern machines.

fluoridation
>If x86 just officially said “we’re cutting off 32-bit legacy” one day (similar to how Apple did), they could toss out 95% of the crap that makes them power inefficient.

Where do you get this from? I could understand that they could get rid of the die area devoted to x86 decoding, but as I understand it x86 and x86-64 instructions get interpreted by the same execution units, which are bitness blind. What makes you think it's x86 support that's responsible for the vast majority of power inefficiency in x86-64 processors?

hajile
Intel has proposed APX to address this. It does away with some of the 32-bit garbage that complicates design for no good payoff. Most importantly, it increases the register count from 16 to 32 and allows 3-register instructions (almost all x86 instructions are 1-register or 2-register instructions). This would strip out tons of MOV instructions, which AMD64 proved has a decent impact on performance.

Reduced I-cache, uop-cache, and decoder pressure would also have a beneficial impact. On the flip side, APX instructions would all be an entire byte longer than their AMD64 counterparts, so some of the benefits would be more muted than they might first appear, and choosing between 16 registers with shorter instructions and 32 registers with longer instructions is yet another tradeoff for compilers to make (and another step down the path of being completely unoptimizable by humans).

delfinom
From what I understood, it's not "32-bit instructions" that are the problem; it's the load of crap associated with those 32-bit processors. There's more to x86 than just the instruction set. Operating systems need to carry that baggage on x86 if they want to let users run on both old and new processors.
anonymars
> “Modern Standby” could be made to actually work, ACPI states could be fixed, a functional wake-up state built anew, etc. Hell, while it would allow pared down CPUs, you could have a stop-gap where run mode was customized in firmware.

I'm confused, how is any of this related to "x86" and not the diverse array of third party hardware and software built with varying degrees of competence?

stuaxo
It's a shame they are so bad at upstreaming stuff, and run on older kernels (which in turn makes upstreaming harder).
prmoustache
> It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding.

To be fair, usually Linux itself has hardware acceleration available, but browser vendors tend to disable GPU rendering except on controlled/known perfectly working combinations of OS/hardware/drivers, and they do much less testing on Linux. In most cases you can force-enable GPU rendering in about:config, try it out yourself, and leave it on unless you get recurring crashes.

deaddodo
The only browser I’ve ever had issues with enabling video acceleration on Linux is Firefox.

All the Blink-based ones just work as long as the proper libraries are installed and said libraries properly detect hardware support.

goneri
I run Fedora, and for legal reasons they ship a version that has this problem. Have you tried Mozilla's Flatpak build? I use it instead and it resolves all my problems.
int_19h
When I enabled HW acceleration on my Linux laptop to see how much it would improve battery life in Linux, my automated test (which is basically just browsing Reddit) would start crashing every 20 minutes or so.
mayama
I disable CPU turbo boost on Linux. The fans rarely start on the laptop and the system is generally cool. Even working on development and compilation, I rarely need the extra perf. On my 10-year-old laptop I also cap the max clock to 95% to stop the fans from constantly starting. YMMV.
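
For anyone wanting to try the same, this is roughly the sysfs interface involved; a sketch assuming an intel_pstate system (acpi-cpufreq exposes a cpufreq/boost file instead, and all of these writes need root):

  #include <fstream>
  #include <string>

  // Disable turbo boost (intel_pstate; with acpi-cpufreq, write 0 to
  // /sys/devices/system/cpu/cpufreq/boost instead).
  void disable_turbo() {
      std::ofstream("/sys/devices/system/cpu/intel_pstate/no_turbo") << 1;
  }

  // Cap one core's maximum frequency to a percentage of its hardware maximum.
  void cap_max_freq(int cpu, int percent) {
      const std::string base =
          "/sys/devices/system/cpu/cpu" + std::to_string(cpu) + "/cpufreq/";
      long max_khz = 0;
      std::ifstream(base + "cpuinfo_max_freq") >> max_khz;
      std::ofstream(base + "scaling_max_freq") << (max_khz * percent / 100);
  }
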
just6979
This is a big reason. Apple tunes their devices not to push the extreme edges of the possible performance, so they don't fall off that cliff of inefficiency. Combined with really great perf/watt, they can run them at "90%" and stay nice and cool, sipping power (relatively), while most Intel/AMD machines are allowed to push their parts to "110%" much more often. That might give them a leg up in raw performance (for some workloads), but it runs into the gross inefficiencies of pushing the envelope, so that marginal performance increase takes 2-3x more power.

If you manually go in and limit a modern Windows laptop's max performance to just under what the spec sheet indicates, it'll be fairly quiet and cool. In fact, most have a setting to do this, but it's rarely on by default because the manufacturers want to show off performance benchmarks. Of course, that's while also touting battery life that is not possible when in the mode that allows the best performance...

This doesn't cover other stupid battery-life eaters like Modern Standby (it's still possible to disable it with registry tweaks! do it!), but if you don't need absolute max perf for renders or compiling or whatever, put your Windows or Linux laptop into "cool & quiet" mode and enjoy some decent extra battery.

It would also be really interesting to see what Apple Silicon could do under some extreme overclocking fun with sub-zero cooling or such. It would require a firmware & OS that allows more tuning and tweaking, so it's not going to happen anytime soon, but it could actually be a nice brag for Apple if they did let it happen.

koala_man
I once saw a high resolution CPU graph of a video playing in Safari. It was completely dead except for a blip every 1/30th of a second.

Incredible discipline. The Chrome graph in comparison was a mess.

novok
The Safari team explicitly targets performance. I just wish they weren't so bad about extensions and ad blocking; then I'd use it as my daily driver. But those paper cuts make me go back to Chromium browsers all the time.
impure-aqua
I find Orion has similar power efficiency but avoids those papercuts: https://kagi.com/orion/
lenkite
Hell, Apple CPUs are even optimized for Apple software memory-management calls like retain/release on objects. It seems that if you want optimal performance and power efficiency, you need to own both the hardware and the software.

Looks like general purpose CPUs are on the losing train.

Maybe Intel should invent a desktop+mobile OS and design bespoke chips for it.

NobodyNada
> Apple CPU's are even optimized for Apple software GC calls like Retain/Release objects.

I assume this is referring to the tweet from the launch of the M1 showing off that retaining and releasing an NSObject is like 3x faster. That's more of a general case of the ARM ISA being a better fit for modern software than x86, not some specific optimization for Apple's software.

x86 was designed long before desktops had multi-core processors and out-of-order execution, so for backwards compatibility reasons the architecture severely restricts how the processor is allowed to reorder memory operations. ARM was designed later, and requires software to explicitly request synchronization of memory operations where it's needed, which is much more performant and a closer match for the expectations of modern software, particularly post-C/C++11 (which have a weak memory model at the language level).

Reference counting operations are simple atomic increments and decrements, and when your software uses these operations heavily (like Apple's does), it can benefit significantly from running on hardware with a weak memory model.
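
For the curious, here's a minimal sketch of what an ARC-style refcount looks like in portable C++ (my own illustration, not Apple's actual implementation; the orderings are just the standard choice for this pattern):

  #include <atomic>

  struct RefCounted {
      std::atomic<int> refs{1};

      void retain() {
          // The increment needs no ordering of its own; it only has to be atomic.
          refs.fetch_add(1, std::memory_order_relaxed);
      }

      void release() {
          // The decrement that takes the count to zero must synchronize with
          // every earlier release before the object can safely be destroyed.
          if (refs.fetch_sub(1, std::memory_order_acq_rel) == 1) {
              delete this;
          }
      }

      virtual ~RefCounted() = default;
  };

On x86 both of those compile to locked read-modify-writes, which act as full barriers no matter which ordering you ask for; an ARM core can honor the relaxed increment with a much cheaper atomic add, which is presumably part of why retain/release-heavy code benefits.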

stinkbeetle
> I assume this is referring to the tweet from the launch of the M1 showing off that retaining and releasing an NSObject is like 3x faster. That's more of a general case of the ARM ISA being a better fit for modern software than x86, not some specific optimization for Apple's software.

It's not really even the ISA, mainly the implementation. Atomics on Apple cores are about 3x faster than on Intel (6 cycles back-to-back latency vs 18). AMD's atomics have 6-cycle latency.

aurareturn

  It seems if you want optimal performance and power efficiency, you need to own both hardware and software.
Does Apple optimize the OS for its chips and vice versa? Yes. However, Apple Silicon hardware is just that good and that far ahead of x86.

Here's an M4 Max running macOS running Parallels running Windows when compared to the fastest AMD laptop chip: https://browser.geekbench.com/v6/cpu/compare/13494385?baseli...

M4 Max is still faster even with 14 out of 16 possible cores being used. You can't chalk that up to optimizations anymore because Windows has no Apple Silicon optimizations.

lenkite
Not really sure whether it makes a difference, but the Parallels VM is running Windows Pro, while the ASUS gaming laptop is running Windows Home.
sabdaramadhan
Don't most gaming laptops come with Home Single Language built in? (I've never had a gaming laptop before.)
lenkite
I believe this depends on the OEM.
sabdaramadhan
> Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

Intel is busy fixing up their shit after what happened with their 13th & 14th gen CPUs. Imagine them making an OS called IntelOS, where the only thing you can run it on is an Intel CPU.

davsti4
> Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

Or, contribute efficiency updates to popular open projects like Firefox, Chromium, etc...

lelanthran
> Maybe Intel should invent desktop+mobile OS and design bespoke chips for those.

Wouldn't it be easier for Intel to heavily modify the Linux kernel instead of writing their own stack?

They could even go as far as writing the sleep utilities for laptops, or even their own window manager, to take advantage of the specific mods in their ISA.

hajile
Intel was working with Nokia, investing heavily in MeeGo, until it was killed by Elop+Microsoft.

If it hadn't been killed, it might have become something interesting today.

WorldPeas
They /did/ do this, but notice the "was" at the top of the page: https://www.clearlinux.org/
hoppp (dead)
> most of which comes down to using the CPU as little as possible.

At least on mobile platforms, Apple advocates the other way with race to sleep: do the calculation as fast as you can with powerful cores, so the whole chip can go back to sleep earlier and take naps more often.

creshal
Intel promoted the same idea under the name HUGI (Hurry Up and Go Idle) about 15 years ago, when ultrabooks were the new hot thing.

But when Apple says it, software devs actually listen.

redwall_hp
Apple was talking about batching tasks for battery life when they shipped Grand Central Dispatch back in 2009. It was a major part of that year's WWDC keynote. Race to Zero was also a major part of how they designed networking for iOS.
int_19h
Peer pressure. When everybody else does it and you don't, your app sticks out like a sore thumb and makes users unhappy.

The other aspect of it is that paid software is more prevalent in macOS land, and the prices are generally higher than on Windows. But the flip side of that is that user feedback is taken more seriously.

nikanj
And then Microsoft adds an animated news tracker to the left corner of the taskbar, making sure the CPU never gets to idle.
ben-schaaf OP
Race to sleep is all about using the CPU as little as possible. Given that modern AMD chips are faster than the Apple M1, this clearly doesn't account for the disparity in battery life.
mrtksn
Which should also mean that using that M1 machine with Linux will give an Intel/AMD-like experience, not the M1-with-macOS experience.
ben-schaaf OP
Yes and no. The optimizations made for battery life are a combination of software and hardware. You'll get bad battery life on an M1 with Linux when watching YouTube without hardware acceleration, but if you're just idling (and if Linux idles properly) then it should be similar to macOS.
ToucanLoucan
I honestly don't see myself ever leaving MacBooks at this point. It's the whole package: the battery life is insane (I've literally never had a dead laptop when I needed it, no matter what I'm doing or where I'm at); it runs circles around every other computer I own, save for my beastly gaming PC; there's the stability and consistency of macOS, and the underlying Unix architecture for a lot of tooling; all the way down to the build quality being damn near flawless, save for the annoying lack of ports (though increasingly, I find myself needing ports less and less).

Like, would I prefer an older-style Macbook overall, with an integrated card reader, HDMI port, ethernet jack, all that? Yeah, sure. But to get that now I have to go to a PC laptop and there's so many compromises there. The battery life isn't even in the same zip code as a Mac, they're much heavier, the chips run hot even just doing web browsing let alone any actual work, and they CREAK. Like my god I don't remember the last time I had a Windows laptop open and it wasn't making all manner of creaks and groans and squeaks.

The last one would be solved, I guess, if you went for something super high end, or at least I hope it would be. But I dunno; if I'm dropping $3k+ either way, I'd just as soon stay with the MacBook.

AlexandrB
> Like, would I prefer an older-style Macbook overall, with an integrated card reader, HDMI port, ethernet jack, all that? Yeah, sure.

Modern MacBook pros have 2/3 (card reader and HDMI port), and they brought back my beloved MagSafe charging.

prewett
I was all for MagSafe, but after buying an M2, I realized that the USB-C charging was better. I found the cables came out almost as well as the MagSafe if I stepped on them, but you can plug them in to either side. I seem to always be on the wrong side, so the MagSafe cable has to snake around to the other side.
ToucanLoucan
No shit! I'm still rocking the M1 Pro for personal and the M2 Air for work so I do have magsafe back for one of them at least, but just USB-C besides that.

But yeah IMHO there's just no comparison. Unless you're one of those folks who simply cannot fucking stand Mac, it's just no contest.

solardev
Even the high-end ones (Razers, Asus, Surface Books, Lenovos) are mere lookalikes and don't run anywhere near as well as the MacBooks. They're hot and heavy and loud, full of driver issues and discrete-graphics-switching headaches, and of course there's the endless ads and AI spam of modern Windows. No comparison at all...
whatevaa
Turning down the settings will get you a worse experience, especially if you turn them down so far that the CPU and GPU are "mostly idle". Not comparable.
sys_64738
Sounds like death by (2^10 - 24) cuts for the x86 architecture.
