I've long wished this approach (backporting security fixes) were commercialized instead of the always-up-to-date-even-if-incompatible push, and beyond Red Hat, SUSE, and Canonical (with LTS), nobody had been doing it for product teams until recently (Chainguard seems to be doing this now).
But if you ignore speed, you also fail: others will build less secure products and conquer the market, and your product has no future.
The real engineering trick is to be fast and build new things, which is why we need supply chain commoditized stewards (for a fee) that will solve this problem for you and others at scale!
Which is a bit silly, considering that if you want fast, most packages land in testing/unstable pretty quickly.
I believe the sweet spot is Debian-like stable as the base platform to build on top of, and then similar commercial support for any dependencies you need more recent versions of on top.
If you need the latest packages, you have to do that work anyway.
> I believe the sweet spot is Debian-like stable as the base platform to build on top of, and then commercial-support in a similar way for any dependencies you must have more recent versions on top.
That is, if the company can build packages properly. Also, too-old OS deps sometimes throw a wrench in the works.
Though frankly, "latest Debian testing" has a far smaller chance of breaking something than "latest piece of software that couldn't figure out how to upstream to Debian".
The latter has a huge maintenance burden; the former is, as I said already, the sweet spot. (And let's not talk about combining stable/testing: any machine I tried that on quickly got into a non-upgradeable mess.)
I am not saying it is easy, which is exactly why I think it should be a commercial service that you pay for, so it can actually survive.
> supply chain commoditized stewards (for a fee)
I agree with this, but open source licenses allow anyone who purchases a stewarded implementation to distribute it freely. I would love to see a software distribution model in which we could pay for vetted libraries, from bodies that we trust, which would become FOSS after a time period; even a month would be fine.
There are flaws in my argument, but it is a safer option than the current normal practices.
I guess the takeaway is that, doubly so, trusting Rust code to be memory safe simply because it is Rust isn't sensible. All its protections can simply be invalidated, and an end user would never know.
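To make that concrete, here's a minimal sketch (the function name is hypothetical) of how a perfectly safe-looking API can hide unsafe code that none of Rust's guarantees cover, and that a downstream user would never see in the signature:

```rust
// A function with a completely safe signature whose implementation
// hides unsafe code. Nothing at the call site hints at this.
pub fn get_first(bytes: &[u8]) -> u8 {
    // Undefined behavior if `bytes` is empty -- the caller can't tell.
    unsafe { *bytes.as_ptr() }
}

fn main() {
    println!("{}", get_first(&[42, 7])); // prints 42
    // get_first(&[]) would be undefined behavior,
    // despite the safe-looking signature.
}
```

This is exactly why auditing dependencies matters: the compiler enforces memory safety only outside unsafe blocks, and any crate in the tree can contain them.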
Then things like this appear:
https://www.phoronix.com/news/First-Linux-Rust-CVE
And I'm all warm and feeling schadenfreude.
Hearing "yes, it's safer" without the accompanying "everyone on the planet not using Rust is a moron!!!" is a nice change.
Frankly, the whole Cargo side of Rust has the same issues that Node has, and that's silly beyond comprehension. Memory safety is almost a non-concern compared to installing random, unvetted stuff. Cargo vet seems barely helpful here.
I'd want any language that cares about security and code safety to have a human audit every single diff on every single package, and to host those specific crates on locked-down servers.
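For the hosting half of that, Cargo's source replacement mechanism already allows it: a sketch of a .cargo/config.toml that routes all crates.io downloads through a vetted internal registry (the mirror name and URL here are hypothetical placeholders):

```toml
# .cargo/config.toml -- force every crates.io dependency to come from
# a vetted, locked-down mirror instead of the public registry.
# "vetted-mirror" and its URL are hypothetical.
[source.crates-io]
replace-with = "vetted-mirror"

[source.vetted-mirror]
registry = "sparse+https://crates.example.internal/index/"
```

The auditing half (a human reviewing every diff) still has to happen on the mirror's side; this config only guarantees that builds can't pull anything the mirror hasn't published.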
No, I don't care about "but that will slow down development and change!". Security needs to be front and center.
And until the Rust community addresses this, and its habit of requiring 234234 packages, it's a toy.
And yes, it can be done. And no, it doesn't require money. Debian's been doing just this very thing for decades, on a far, far, far larger scale. Debian developers gatekeep. They package. They test and take bug reports on specific packages. This is a solved problem.
Caring about 'memory safe!' is grand, but ignoring the rest of the ecosystem is absurd.