5% is 5% at any scale.
Here the time scale is larger, which does make the 5% significant, whereas it isn't significant at the scale of a few days.
If that moves the needle on your ability to purchase the car, you probably shouldn't be buying it.
5% is 5%.
- Doing 1 hour of effort to save 5% on your $20 lunch is foolhardy for most people. $1/hr is well below US minimum wage.
- Doing 1 hour of effort to save 5% on your $50k car is wise. $2500/hr is well above what most people are making at work.
It's not about whether the $2500 affects my ability to buy the car. It's about whether the time it takes me to save that 5% ends up being worthwhile to me given the actual amount saved.
The question is really "given the person-hours it takes to apply the savings, and the real value of the savings, is the savings worth the person-hours spent?"
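Put as a back-of-envelope formula, it's the implied hourly rate of the effort that matters, not the percentage. A tiny sketch using the lunch/car numbers from above (the helper name is just for illustration):

```typescript
// Implied hourly rate of chasing a discount: (price * fraction saved) / hours spent.
function savingsPerHour(price: number, fractionSaved: number, hoursSpent: number): number {
  return (price * fractionSaved) / hoursSpent;
}

console.log(savingsPerHour(20, 0.05, 1));     // $1/hr  -- well below US minimum wage
console.log(savingsPerHour(50_000, 0.05, 1)); // $2,500/hr -- well above most salaries
```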
I'm sure you can use your imagination to substitute "lunch" and "car" with other examples where the absolute change makes a difference despite the percent change being the same.
Even taking it literally... The 5% might not tip the scales on whether or not I can purchase the car, but I'll spend a few hours of my time comparing prices at different dealers to save $2500. Most people would consider it dumb if you didn't shop around when making a large purchase.
On the other hand, I'm not going to spend a few hours of my time at lunch so that I can save an extra $1 on a meal.
Given his numbers, let's say he saves 100 TB of bandwidth over a year. At AWS egress pricing... that's roughly $5,000 total saved.
And arguably, NPM is buying at least some of that savings by adding CPU costs for publishers at package time.
Feels like... not enough to warrant a risky ecosystem change to me.
NPM uses at least 5 petabytes per week. 5% of that is 250 terabytes.
So $15,000 a week, or $780,000 a year in savings could’ve been gained.
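As a rough check on both estimates, here's the egress arithmetic with an assumed blended rate of $0.06/GB; AWS's published tiers run roughly $0.05-0.09/GB, and whatever npm actually pays at its volume is unknown (and surely lower):

```typescript
// Back-of-envelope egress savings. RATE_PER_GB is an assumption, not npm's real rate.
const RATE_PER_GB = 0.06; // USD

const smallEstimate = 100_000 * RATE_PER_GB;  // ~100 TB saved per year
const weeklyEstimate = 250_000 * RATE_PER_GB; // 250 TB saved per week (5% of 5 PB)
const yearlyEstimate = weeklyEstimate * 52;

console.log(smallEstimate);  // ~$6,000/year (the "$5,000" figure assumes ~$0.05/GB)
console.log(weeklyEstimate); // ~$15,000/week
console.log(yearlyEstimate); // ~$780,000/year
```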
For reference, the total bandwidth used by the top 5000 packages is 4_752_397 GiB.
Packages with >= 1 GiB of bandwidth/week: that turns out to be 437 packages (there's a header row, so it's rows 2-438), which together use 4_205_510 GiB.
So 88% of the top 5000 bandwidth is consumed by downloading the top 8.7% (437) packages.
5% is about 210 TiB.
Limiting to the top 100 packages by bandwidth results in 3_217_584 GiB, which is 68% of total bandwidth used by 2% of the total packages.
5% is about 161 TiB.
Less than 1% of the top 5000 packages took 53% of the bandwidth.
5% would be about 127 TiB (rounded up).
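For anyone who wants to reproduce those shares, here's a sketch assuming the data is a simple two-column CSV (`package,weekly_gib`); the file name, column layout, and the cutoff of 49 packages for "less than 1%" are my assumptions, not the actual dataset's format:

```typescript
import { readFileSync } from "node:fs";

// Parse a per-package bandwidth dump (hypothetical file name and columns).
const rows = readFileSync("top-5000-bandwidth.csv", "utf8")
  .trim()
  .split("\n")
  .slice(1) // drop the header row
  .map((line) => {
    const [pkg, gib] = line.split(",");
    return { pkg, gib: Number(gib) };
  })
  .sort((a, b) => b.gib - a.gib); // largest consumers first

const totalGiB = rows.reduce((sum, r) => sum + r.gib, 0);

// Share of total bandwidth taken by the top `n` packages, and 5% of that slice.
function topShare(n: number): void {
  const sliceGiB = rows.slice(0, n).reduce((sum, r) => sum + r.gib, 0);
  const pctOfTotal = ((sliceGiB / totalGiB) * 100).toFixed(1);
  console.log(`top ${n}: ${sliceGiB} GiB (${pctOfTotal}% of total); 5% of that is ${Math.round(sliceGiB * 0.05)} GiB`);
}

topShare(437); // the packages pulling >= 1 GiB/week
topShare(100);
topShare(49);  // "less than 1%" of the top 5000
```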
Even that's addressable if there's motivation, though: something like transcoding server-side during publication, just for popular packages, would probably get 80% of the benefit with no client-side increase in publication time.
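For what that could look like, here's a minimal sketch of the decision logic; the function names, the popularity cutoff, and the idea of using a gzip-compatible recompressor such as zopfli are all assumptions on my part, not anything the registry actually does:

```typescript
const WEEKLY_DOWNLOAD_CUTOFF = 1_000_000; // arbitrary "popular" threshold

// Hypothetical registry internals -- placeholders, not real npm APIs.
async function weeklyDownloads(_pkg: string): Promise<number> {
  return 0; // would come from the registry's own download stats
}
async function recompressGzipHarder(tarball: Uint8Array): Promise<Uint8Array> {
  return tarball; // would re-run the deflate stream through e.g. zopfli
}

export async function storeTarball(pkg: string, published: Uint8Array): Promise<Uint8Array> {
  // Long tail: keep the publisher's tarball as-is; no extra CPU for anyone.
  if ((await weeklyDownloads(pkg)) < WEEKLY_DOWNLOAD_CUTOFF) return published;

  // Popular package: pay the CPU once on the registry side rather than at
  // `npm publish`, and serve the smaller (still gzip-compatible) result.
  return recompressGzipHarder(published);
}
```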
The more bandwidth that Cloudflare needs, the more leverage they have at the peering table. As GitHub's largest repo (the @types / DefinitelyTyped repo owned by Microsoft) gets larger, the more experience the owner of GitHub (also Microsoft) gets in hosting the world's largest git repos.
I would say this qualifies as one of those cases, as npmjs is hosted on Azure. The more resources that NPM needs, the more Microsoft can build towards parity with AWS's footprint.
But it is possible to do it more gradually, i.e. by sneaking it in with a new API that's used by a new npm version, or similar.
But it was his choice to make, and it's fine that he didn't see enough value in pursuing such a tiny file-size change.
This is a solid example of how things change at scale. Concerns I wouldn't even think about for my personal website become things I need to think about for a download site being hit by 50,000 of my customers, and become big deals when operating at the scale of npm.
You'll find those arguments to be the pointless nitpicking of entrenched interests who just don't want to make any changes, until you experience your very own "oh man, I really thought this change was perfectly safe and now my entire customer base is trashed" moment, and then suddenly things like "hey, we need to consider how this affects old signatures, and the speed of decompression, and just generally whether this is worth the non-zero risks for what are in the end not really that substantial benefits" start to sound like prudence rather than obstruction.
I do not say this as the wise Zen guru sitting cross-legged and meditating from a position of being above it all; I say it looking at my own battle scars from the Perfectly Safe things I've pushed out to my customer base, only to discover some tiny little nit caused me trouble. Fortunately I haven't caused any true catastrophes, but that's as much luck as skill.
Attaining the proper balance between moving forward even though it incurs risk and just not changing things that are working is the hardest part of being a software maintainer, because both extremes are definitely bad. Everyone tends to start out in the former situation, but then when they are inevitably bitten it is important not to overcorrect into terrified fear of ever changing anything.