I could never get Docker to work on my ADSL when it was 2 Mbps (FTTN got it up to 20) though it was fine in the Montreal office which had gigabit.
Here the scale of time is larger, which does make the 5% significant, while it isn't at the scale of a few days.
If that moves the needle on your ability to purchase the car, you probably shouldn't be buying it.
5% is 5%.
- Doing 1 hour of effort to save 5% on your $20 lunch is foolhardy for most people. $1/hr is well below US minimum wage.
- Doing 1 hour of effort to save 5% on your $50k car is wise. $2500/hr is well above what most people are making at work.
It's not about whether the $2500 affects my ability to buy the car. It's about whether the time it takes me to save that 5% ends up being worthwhile to me given the actual amount saved.
The question is really "given the person-hours it takes to apply the savings, and the real value of the savings, is the savings worth the person-hours spent?"
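Put as arithmetic, it's just dollars saved per person-hour spent. A quick sketch using the lunch/car numbers from above (the hour counts are illustrative, not anyone's real figures):

    // Effective "hourly wage" of a savings effort: dollars saved per hour spent chasing it.
    function savingsPerHour(price: number, discountRate: number, hoursSpent: number): number {
      return (price * discountRate) / hoursSpent;
    }

    console.log(savingsPerHour(20, 0.05, 1));      // lunch: $1/hr
    console.log(savingsPerHour(50_000, 0.05, 1));  // car, 1 hour of effort: $2500/hr
    console.log(savingsPerHour(50_000, 0.05, 3));  // car, a few hours at dealers: ~$833/hr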
I'm sure you can use your imagination to substitute "lunch" and "car" with other examples where the absolute change makes a difference despite the percent change being the same.
Even taking it literally... The 5% might not tip the scale of whether or not I can purchase the car, but I'll spend a few hours of my time comparing prices at different dealers to save $2500. Most people would consider it dumb if you didn't shop around when making a large purchase.
On the other hand, I'm not going to spend a few hours of my time at lunch so that I can save an extra $1 on a meal.
Given his numbers, let's say he saves 100 TB of bandwidth over a year. At AWS egress pricing... that's $5,000 total saved.
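Spelling that out (the ~$0.05/GB rate is my assumption for a blended egress price; AWS pricing is tiered and big CDN deals are cheaper):

    // Back-of-the-envelope: 100 TB/year saved, priced at an assumed flat egress rate.
    const savedTB = 100;
    const assumedEgressPerGB = 0.05;   // $/GB; an assumption, not a quoted AWS price
    const savedPerYear = savedTB * 1_000 * assumedEgressPerGB;
    console.log(`~$${savedPerYear.toLocaleString()} per year`);   // ~$5,000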
And arguably, NPM is paying for at least some of that savings by adding CPU costs to publishers at package time.
Feels like... not enough to warrant a risky ecosystem change to me.
NPM uses at least 5 petabytes per week. 5% of that is 250 terabytes.
So $15,000 a week, or $780,000 a year in savings could’ve been gained.
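The arithmetic, with the $/GB rate as an assumption (about $0.06/GB is what lands the weekly figure on $15,000):

    // 5% of ~5 PB/week, priced at an assumed flat egress rate.
    const weeklyPB = 5;
    const savedFraction = 0.05;
    const assumedEgressPerGB = 0.06;                                // $/GB, assumed
    const savedGBPerWeek = weeklyPB * 1_000_000 * savedFraction;    // 250_000 GB = 250 TB
    const weeklySavings = savedGBPerWeek * assumedEgressPerGB;      // $15,000
    const yearlySavings = weeklySavings * 52;                       // $780,000
    console.log({ weeklySavings, yearlySavings });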
For reference, total bandwidth used by all 5000 packages is 4_752_397 GiB.
Packages with >= 1 GiB bandwidth/week: that turns out to be 437 packages (there's a header row, so it's rows 2-438), which use 4_205_510 GiB.
So 88% of the top 5000 bandwidth is consumed by downloading the top 8.7% (437) packages.
5% of that is about 210_000 GiB (~205 TiB).
Limiting to the top 100 packages by bandwidth results in 3_217_584 GiB, which is 68% of total bandwidth used by 2% of the total packages.
5% of that is about 161_000 GiB (~157 TiB).
Less than 1% of the top 5000 packages took 53% of the bandwidth.
5% of that would be about 127_000 GiB (~124 TiB, rounded up).
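For anyone who wants to reproduce those cuts, it's just a filter-and-sum over the spreadsheet. A sketch below; the file name, column layout, and the "top 49" stand-in for the <1% cutoff are all assumptions on my part:

    import { readFileSync } from "node:fs";

    // Hypothetical export: "package,weekly_bandwidth_gib" with a header row.
    const rows = readFileSync("top-5000-by-bandwidth.csv", "utf8")
      .trim()
      .split("\n")
      .slice(1) // skip the header row
      .map((line) => {
        const [name, gib] = line.split(",");
        return { name, gib: Number(gib) };
      })
      .sort((a, b) => b.gib - a.gib);

    const totalGiB = rows.reduce((sum, r) => sum + r.gib, 0);

    // For a slice of packages: its share of total bandwidth and what a 5% saving on it would be.
    function report(label: string, slice: typeof rows): void {
      const gib = slice.reduce((sum, r) => sum + r.gib, 0);
      const share = ((gib / totalGiB) * 100).toFixed(1);
      const fivePercentTiB = Math.round((gib * 0.05) / 1024);
      console.log(`${label}: ${slice.length} packages, ${share}% of bandwidth, 5% ≈ ${fivePercentTiB} TiB`);
    }

    report(">= 1 GiB/week", rows.filter((r) => r.gib >= 1));
    report("top 100", rows.slice(0, 100));
    report("top 49 (<1% of 5000)", rows.slice(0, 49));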
Even that's addressable, though, if there's motivation: something like transcoding server-side during publication, just for popular packages, would probably get 80% of the benefit with no client-side increase in publication time.
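As a rough sketch of what that policy could look like (the threshold, names, and recompression step are all illustrative; this isn't anything npm actually does):

    // Only spend extra server-side CPU re-compressing tarballs for packages that are
    // popular enough for the bandwidth saving to matter.
    interface PackageStats {
      name: string;
      weeklyBandwidthGiB: number;
    }

    // Illustrative cutoff: per the numbers above, ~437 packages clear 1 GiB/week
    // and account for ~88% of top-5000 bandwidth.
    const RECOMPRESS_THRESHOLD_GIB = 1;

    async function onPublish(pkg: PackageStats, tarball: Uint8Array): Promise<Uint8Array> {
      if (pkg.weeklyBandwidthGiB < RECOMPRESS_THRESHOLD_GIB) {
        return tarball; // keep the publisher's own gzip output untouched
      }
      // Placeholder for a slower, better compressor (gzip -9, zopfli, etc.) run by the
      // registry itself, so publishers never pay the extra CPU at publish time.
      return recompressAtMaxEffort(tarball);
    }

    async function recompressAtMaxEffort(tarball: Uint8Array): Promise<Uint8Array> {
      return tarball; // stub so the sketch type-checks
    }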
The more bandwidth that Cloudflare needs, the more leverage they have at the peering table. And the larger GitHub's largest repo (the @types / DefinitelyTyped repo, owned by Microsoft) gets, the more experience the owner of GitHub (also Microsoft) gets at hosting the world's largest git repos.
I would say this qualifies as one of those cases, as npmjs is hosted on Azure. The more resources that NPM needs, the more Microsoft can build towards parity with AWS's footprint.
5% is 5% at any scale.