Nothing in x86 prohibits an implementation that is no less efficient than what you could build with ARM instead.
x86 and ARM have historically served very different markets. I think the pattern of efficiency differences in past implementations is better explained by market forces than by ISA specifics.
Linux can actually meet or even exceed Windows's power efficiency, at least at some tasks, but it takes a lot of work to get there. I'd start with powertop and TLP.
As usual, the Arch wiki is a good place to find more information: https://wiki.archlinux.org/title/Power_management
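If you want a concrete starting point, here's a minimal sketch (package and service names assume a systemd-based distro; adjust for yours):

    # Apply powertop's tunable suggestions in one shot
    sudo powertop --auto-tune

    # Or browse them interactively first (see the "Tunables" tab)
    sudo powertop

    # Install and enable TLP
    sudo apt install tlp                      # Debian/Ubuntu; pacman/dnf elsewhere
    sudo systemctl enable --now tlp.service

Note that powertop's auto-tune does not persist across reboots, while TLP reapplies its settings on every boot.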
I've used Linux laptops since ~2007, and am well aware of the issues. 12x is well beyond normal.
I don't think I ever saw 50W at all, even under load; they probably run an Ultra U1xxH that stays permanently turbo-boosted for some reason. Given the level of tinkering (with schedulers and interrupt frequencies), it's likely self-imposed at this point, but you never know.
If nothing were wrong, it would sit at something like 1.5GHz with most of the cores powered down.
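A quick way to check whether that's the case (Intel-specific output assumed; turbostat ships with the kernel tools package on most distros):

    # Per-core frequency right now, in kHz
    grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq

    # Average frequency, C-state residency and package power over 5 seconds
    sudo turbostat --quiet sleep 5

On a healthy idle system most cores should show deep C-state residency and low average frequency.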
My TLP and LPMD configs: https://gist.github.com/vient/f8448d56c1191bf6280122e7389fc1...
TLP: I don't remember the details now; as I recall, the scaling governor doesn't do anything on modern CPUs when an energy-performance policy is in use. CPU_MAX_PERF_ON_BAT=30 seems to be crucial for battery savings, sacrificing performance (not too much for everyday use, really) for joules in the battery. CPU_HWP_DYN_BOOST_ON_BAT=0 further prohibits using turbo on battery, just in case.
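For reference, a minimal sketch of those settings as a TLP drop-in (the file name is arbitrary; the EPP line is my addition, not something from the gist above):

    # /etc/tlp.d/10-battery.conf
    CPU_MAX_PERF_ON_BAT=30                  # cap performance at 30% on battery
    CPU_HWP_DYN_BOOST_ON_BAT=0              # no dynamic turbo boost on battery
    CPU_ENERGY_PERF_POLICY_ON_BAT=power     # bias the hardware EPP hint toward saving energy

    # apply without rebooting:
    sudo tlp start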
LPMD: again, I did not use it much in the end, so I'm not sure what is even written in that config. It may need additional care to run alongside TLP.
Also, I used these boot parameters. For performance, I think, the beneficial ones are *mitigations, nohz_full, and rcu*:
    quiet splash sysrq_always_enabled=1 mitigations=off i915.mitigations=off transparent_hugepage=always iommu=pt intel_iommu=on nohz_full=all rcu_nocbs=all rcutree.enable_rcu_lazy=1 rcupdate.rcu_expedited=1 cryptomgr.notests no_timer_check noreplace-smp page_alloc.shuffle=1 tsc=reliable
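In case it helps, a sketch of how to apply such parameters, assuming GRUB:

    # /etc/default/grub -- add the parameters you want to the default line,
    # e.g. (note that mitigations=off trades security for performance):
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off nohz_full=all rcu_nocbs=all"

    # regenerate the config and reboot
    sudo update-grub                              # Debian/Ubuntu
    # sudo grub-mkconfig -o /boot/grub/grub.cfg   # Arch and others

    # verify after reboot
    cat /proc/cmdline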
Given that videos spin up the fans, there is actually a problem with your GPU setup on Linux (most likely hardware video decoding isn't being used), and I expect there'd be an improvement if you managed to fix it.
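If it's an Intel iGPU (the i915 parameter above suggests so), a quick way to check whether hardware decoding actually works (vainfo is in libva-utils, intel_gpu_top in intel-gpu-tools):

    # Should list H.264/HEVC/VP9 decode profiles if VA-API is set up
    vainfo

    # Watch the "Video" engine while a video plays; if it stays at 0%,
    # decoding is happening on the CPU instead
    sudo intel_gpu_top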
Another thing is that Chrome on Linux tends to consume an exorbitant amount of power with all its background processes, inefficient rendering and disk IO, so updating it to one of the latest versions and enabling "Memory Saver" might help a lot.
Switching to another scheduler, reducing the interrupt rate, etc., probably helps too.
Linux on my current laptop cut battery life 12x compared to Windows, and a bunch of optimizations like these improved the situation to something like 6x, i.e. it's still very bad.
> Is x86 just not able to keep up with the ARM architecture?
Yes and no. x86 is inherently inefficient, and much of the progress over the last two decades has been about offloading computation to more advanced and efficient coprocessors. That's how we got GPUs, and DMA on M.2 and Ethernet controllers.
That said, it's unlikely that x86 specifically is what wastes your battery. I would rather blame Linux: I suspect its CPU frequency/power drivers misbehave on some CPUs, and unfortunately I have no idea how to fix it.
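If you want to check for yourself which driver and policy are in charge, the relevant knobs live in sysfs (the last file exists only with intel_pstate in active mode):

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver      # e.g. intel_pstate
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor    # e.g. powersave
    cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference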