NPM serves at least 5 petabytes of bandwidth per week. 5% of that is 250 terabytes.
So $15,000 a week, or $780,000 a year in savings could’ve been gained.
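As a sanity check on that headline figure, here's the arithmetic as a sketch. The ~$60/TB rate is an assumption backed out of the quoted numbers (250 TB × rate = $15,000/week), not a published npm or AWS price:

```python
# Back-of-envelope savings from a 5% bandwidth reduction.
WEEKLY_BANDWIDTH_TB = 5_000   # ~5 PB/week, per the estimate above
SAVINGS_FRACTION = 0.05       # assumed compression gain
COST_PER_TB_USD = 60          # assumed egress rate implied by the quoted figures

saved_tb_per_week = WEEKLY_BANDWIDTH_TB * SAVINGS_FRACTION  # 250 TB
weekly_savings = saved_tb_per_week * COST_PER_TB_USD        # $15,000
yearly_savings = weekly_savings * 52                        # $780,000

print(saved_tb_per_week, weekly_savings, yearly_savings)
```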
For reference, the total weekly bandwidth used by all top 5000 packages is 4_752_397 GiB.
Packages using >= 1 GiB of bandwidth/week: that turns out to be 437 packages (there's a header row, so rows 2-438), which together use 4_205_510 GiB.
So 88% of the top 5000 bandwidth is consumed by downloading the top 8.7% (437) packages.
5% of that is about 205 TiB.
Limiting to the top 100 packages by bandwidth gives 3_217_584 GiB, i.e. 68% of the total bandwidth consumed by 2% of the packages.
5% of that is about 157 TiB.
Less than 1% of the top 5000 packages accounted for 53% of the bandwidth.
5% of that would be about 123 TiB.
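The slice-by-slice numbers above reduce to the same two formulas (share of total bandwidth, and 5% of the slice converted to TiB), which can be sketched like this:

```python
# Shares of top-5000 bandwidth, and the 5% saving each slice would yield.
GIB_PER_TIB = 1024
TOTAL_GIB = 4_752_397  # weekly bandwidth of all top-5000 packages

slices = {
    "top 437 (>= 1 GiB/week)": 4_205_510,
    "top 100": 3_217_584,
}

results = {}
for name, gib in slices.items():
    share = gib / TOTAL_GIB               # fraction of total bandwidth
    saved_tib = gib * 0.05 / GIB_PER_TIB  # 5% saving, in TiB
    results[name] = (share, saved_tib)
    print(f"{name}: {share:.0%} of bandwidth, 5% saving ≈ {saved_tib:.0f} TiB")
```

(Note the unit trap: 5% of 4_205_510 GiB is ~210,275 GiB, which is ~205 TiB when divided by 1024, not 210.)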
Even that's addressable if there's motivation: something like transcoding server-side at publication time, just for popular packages, would probably capture 80% of the benefit with no increase in client-side publish time.
Given his numbers, let's say he saves 100 TB of bandwidth over a year. At AWS egress pricing... that's about $5,000 saved in total.
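That $5,000 figure checks out under an assumed ~$0.05/GB egress rate (AWS internet data-transfer-out pricing is roughly in that range; npm's actual CDN pricing would differ):

```python
# Rough annual value of 100 TB saved, at an assumed ~$0.05/GB egress rate.
SAVED_TB = 100
COST_PER_GB_USD = 0.05  # assumption; AWS internet egress is roughly in this range

annual_saving_usd = SAVED_TB * 1_000 * COST_PER_GB_USD
print(annual_saving_usd)  # 5000.0
```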
And arguably, NPM would only get that savings by shifting CPU costs onto publishers at packaging time.
Feels like... not enough to warrant a risky ecosystem change to me.