Even if Debian wasn't perfect before systemd was introduced, at least I knew there was a very high probability that I could trust it to function well.
That stopped being the case after systemd was introduced. I've had far too many problems caused by systemd, to the point where all trust I had in newer versions of Debian has been lost.
Initially, I thought that maybe the problem was with me. But as I investigated the issues I was having with systemd, I'd see so many bug reports, forum posts, mailing list threads, IRC logs, blog articles, and other online discussions from people who were having the same kinds of problems with systemd.
Debian offered a much better user experience before it switched to systemd, and a much worse user experience since.
Any specific issues? I didn’t see any mentioned. No offense. One factor may be that Arch prioritizes not patching upstream, which helped keep it out of the line of fire here, and it doesn’t go overboard with default configs, which I’ve long appreciated.
Not to distro-war; I’m very grateful for Debian. My background is finding Linux in the mid-00s and breaking many SuSE, Ubuntu, and one or two Debian systems before finding something I could understand, repair, and maintain in 2008: Arch.
systemd strengthened its ability to stay relevant alongside enterprise Linux, made it even easier to package for, and has been a useful tool for me in diagnosing service issues and managing badly behaved software.
I’m not sure how often it’s posted here, but Benno Rice, formerly of the FreeBSD Core Team, has an excellent and amusing discussion of systemd’s technical merits.
IMO he makes a couple good points (and a couple poor ones), but it’s about everything except technical merits. It’s more about social and philosophical aspects.
Magic, indeed.
That is not to say systemd isn't a hypertrophied pig, but it does do important work.
What could be done to prevent supply chain attacks more broadly?
But xz isn't the only thing libsystemd links to. Removing it frees sshd from many other potential risks as well.
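As far as I know, the only reason those sshd patches pulled in libsystemd was sd_notify(), and the underlying readiness protocol is just a single datagram sent to the UNIX socket named by $NOTIFY_SOCKET. Something along these lines is enough (a rough sketch, not OpenSSH's or systemd's actual code; error handling is simplified):

    /* A minimal sketch of the systemd readiness protocol: one datagram
     * sent to the UNIX socket named by $NOTIFY_SOCKET. Not OpenSSH's or
     * systemd's actual code; error handling is simplified. */
    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/un.h>
    #include <unistd.h>

    int notify_ready(void)
    {
        const char *path = getenv("NOTIFY_SOCKET");
        if (!path || !*path)
            return 0;  /* not running under systemd; nothing to do */

        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        if (strlen(path) >= sizeof(sa.sun_path))
            return -1;
        memcpy(sa.sun_path, path, strlen(path));
        if (sa.sun_path[0] == '@')  /* abstract-namespace socket */
            sa.sun_path[0] = '\0';

        int fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0);
        if (fd < 0)
            return -1;

        socklen_t len = offsetof(struct sockaddr_un, sun_path) + strlen(path);
        ssize_t n = sendto(fd, "READY=1", strlen("READY=1"), 0,
                           (struct sockaddr *)&sa, len);
        close(fd);
        return n < 0 ? -1 : 0;
    }

A couple of dozen lines of plain socket code, no liblzma, no libsystemd, no transitive surprises.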
Still the question remains: what technology could be implemented to mitigate this type of attack (beyond sshd)?
For example, Linux sandboxing is poor, and SELinux is not usually enforced.
Why should sshd be allowed to call an xz function directly when xz isn't even an immediate dependency?
I'm not sure what all that would entail with the ifunc stuff, but I remember encountering a glibc linking change moving from Red Hat 6 to RH7 that did something similar and broke the build process for some legacy code.
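For anyone who hasn't run into it, this is roughly what an ifunc looks like at the source level. The important property is that the resolver runs inside the dynamic loader, before main(), which is why it made such a convenient hook for the backdoor. A toy GCC/glibc example on x86-64, nothing to do with liblzma's real resolvers:

    /* A toy GNU IFUNC, unrelated to liblzma's real resolvers. The
     * resolver runs inside the dynamic loader, before main(), and
     * decides which implementation the symbol `add` will point to.
     * Build with GCC on glibc/x86-64: gcc -O2 ifunc_demo.c */
    #include <stdio.h>

    static int add_generic(int a, int b) { return a + b; }
    static int add_avx2(int a, int b)    { return a + b; }  /* pretend-optimized */

    typedef int (*add_fn)(int, int);

    /* Called by the loader at relocation time, not by our own code. */
    static add_fn resolve_add(void)
    {
        __builtin_cpu_init();  /* CPU detection is safe this early */
        return __builtin_cpu_supports("avx2") ? add_avx2 : add_generic;
    }

    int add(int a, int b) __attribute__((ifunc("resolve_add")));

    int main(void)
    {
        printf("add(2, 3) = %d\n", add(2, 3));
        return 0;
    }

The call site looks like an ordinary function call; which implementation it lands in was decided by resolve_add() at load time.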
UNIX? (Do one thing and do it well.)
When your program links with 20 libraries, I have a very hard time believing that security is one of your goals.
There was a pull request to stop linking liblzma into libsystemd a month before the backdoor was found:
https://github.com/systemd/systemd/pull/31550
This was likely one of many things that pushed the attackers to work faster, and forced them into making mistakes.
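As I understand that PR, the idea is to turn liblzma (and the other compressors) from a hard link-time dependency of libsystemd into something loaded lazily with dlopen()/dlsym() only when compression is actually requested, so a process like sshd never maps liblzma at all. Very roughly this shape (simplified and hypothetical, not systemd's actual code):

    /* A simplified sketch of dlopen-based lazy loading: resolve liblzma
     * at the moment xz support is actually needed instead of linking it
     * at build time. Not systemd's actual code; names are illustrative.
     * Build with: gcc lazy_lzma.c -ldl */
    #include <dlfcn.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t (*lzma_version_number_fn)(void);

    static void *lzma_dl;

    /* Only touch liblzma when a caller actually asks for xz support. */
    static int ensure_liblzma(void)
    {
        if (lzma_dl)
            return 0;
        lzma_dl = dlopen("liblzma.so.5", RTLD_NOW | RTLD_LOCAL);
        return lzma_dl ? 0 : -1;
    }

    int main(void)
    {
        if (ensure_liblzma() < 0) {
            fprintf(stderr, "xz support unavailable: %s\n", dlerror());
            return 1;
        }
        lzma_version_number_fn version =
            (lzma_version_number_fn)dlsym(lzma_dl, "lzma_version_number");
        if (version)
            printf("runtime liblzma: %u\n", (unsigned)version());
        return 0;
    }

The trade-off is a runtime failure mode (the library might be missing) in exchange for not dragging the dependency into every process that links libsystemd.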
They felt threatened by the idea that open source maintenance can ever go wrong and started attacking me. They argued closed source was worse.
That was not my point at all. I was not raising a weakness of open source. I was just pointing out that linking to libsystemd had that kind of problem.
For example, dropping the libsystemd dependency.
The UNIX philosophy was right all along - each tool does one simple thing.
Most things we want to do are necessarily complex, so dogmatically adhering to "one simple thing" inevitably drives you toward a towering heap of composed dependencies.
If you want to get rid of the supply chain, you want everything to be deliberately non-composable, so that everything has to be reinvented from scratch for each specific solution.
But this implies removing dependencies on various libraries, and keeping such an important process small is already worthwhile, even though it will still load PAM libraries, which leaves it somewhat prone to issues.