Cargo does have lock files by default. But we really need better tooling for auditing (and enforcing that auditing has happened) to properly solve this.
So it's ultimately a trade off rather than a strictly superior solution.
Also, nothing in Rust prevents you from doing the same thing. In fact, I would argue that Cargo makes this process easier.
The distros try, but it only takes one complex problem with a project that holds strong opinions and you may be left without a fix.
GNOME Keyring secrets being available to any process running under your UID, unless that process opts into a proxy, is one example.
How every browser and busybox is exempted from AppArmor is another.
It is not uncommon to punt the responsibility to users.
However, when you install Servo, you just install a single artefact. You don't need to juggle different versions of these different packages to make sure they're all compatible with each other, because the Servo team have already done that and compiled the result as a single static binary.
This creates a lot of flexibility. If the Servo maintainers think they need to make a breaking change somewhere, they can just do that without breaking things for other people. They depend internally on the newer version, but other projects can still continue using the older version, and end-users and distros don't need to worry about how best to package the two incompatible versions and how to make sure that the right ones are installed, because it's all statically built.
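Cargo even lets a single crate depend on both major versions at once via dependency renaming. A hypothetical sketch, with `regex_old = { package = "regex", version = "0.2" }` declared in Cargo.toml:

```rust
// Both semver-incompatible versions of the same crate, side by side in one binary.
fn main() {
    let new_re = regex::Regex::new(r"^\d+$").unwrap(); // regex 1.x
    let old_re = regex_old::Regex::new(r"^\d+$").unwrap(); // regex 0.2, renamed in the manifest
    assert!(new_re.is_match("42") && old_re.is_match("42"));
}
```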
And it's like this all the way down. The regex crate is a fairly standard package in the ecosystem for working with regexes, and most people will just depend on it directly if they need that functionality. But again, it's not just a regex library, but a toolkit made up of the parts needed to build a regex library, and if you only need some of those parts (maybe fast substring matching, or a regex parser without the implementation), then those are available. They're all maintained by the same person, but split up in a way that makes the package very flexible for others to take exactly what they need.
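For example, if all you need is fast substring search, you can depend on just that piece. A minimal sketch assuming the memchr crate, which is one of those parts:

```rust
// Depends only on the small `memchr` crate, not the full `regex` engine.
use memchr::memmem;

fn main() {
    let haystack = b"the quick brown fox";
    // SIMD-accelerated substring search, the same primitive regex uses internally.
    match memmem::find(haystack, b"brown") {
        Some(pos) => println!("found at byte offset {pos}"),
        None => println!("not found"),
    }
}
```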
In theory, all this is possible with traditional distro packages, but in practice, you almost never actually see this level of modularity because of all the complexity it brings. With Rust, an application can easily lock its dependencies, and only upgrade on its own time when needed (or when security updates are needed). But with the traditional model, the developers of an application can't really rely on the exact versions of dependencies being installed - instead, they need to trust that the distro maintainers have put together compatible versions of everything, and that the result works. And when something goes wrong, the developers also need to figure out which versions exactly were involved, and whether the problem exists only with a certain combination of dependencies, or is a general application problem.
All this means that it's unlikely that Servo would exist in its current form if it were packaged and distributed under the traditional package manager system, because that would create so much more work for everyone involved.
The Rust approach is to split off a minimal subset of functionality from your project into an independent sub-crate, which can then be depended on and audited independently of the larger project. You don't need to pull in all of ripgrep[1] to get access to its engine[2] (which is further disentangled for more granular use).
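A minimal sketch of what that looks like in practice, assuming the grep-regex and grep-searcher crates that the engine is split into:

```rust
// Use ripgrep's search engine without the ripgrep CLI.
use grep_regex::RegexMatcher;
use grep_searcher::Searcher;
use grep_searcher::sinks::UTF8;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let matcher = RegexMatcher::new(r"fn\s+\w+")?;
    let haystack = b"fn main() {}\nstruct Foo;\nfn helper() {}\n";
    Searcher::new().search_slice(
        &matcher,
        haystack,
        UTF8(|line_number, line| {
            print!("{line_number}: {line}"); // `line` keeps its terminator
            Ok(true) // keep searching
        }),
    )?;
    Ok(())
}
```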
Beyond the specifics of how you acquire and keep that code you depend on up to date (including checking for CVEs), the work to check the code from your dependencies is roughly the same and scales with the size of the code. More, smaller dependencies vs one large dependency makes no difference if the aggregate of the former is roughly the size of the monolith. And if you're splitting off code from a monolith, you're running the risk of using it in a way that it was never designed to work (for example, maybe it relies on invariants maintained by other parts of the library).
In my opinion, more, smaller dependencies managed by a system that keeps track of the specific version of the code you depend on, with structured data that allows you to perform checks on all your dependencies at once in an automated way, is much better engineering practice than "copy some code from some project". Vendoring is anathema to proper security practices (unless you have other mechanisms to deal with the vendoring, at which point you have a package manager by another name).
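As a toy illustration of what that structured data enables (the advisory list here is made up; real tools like cargo-audit consume the RustSec database):

```rust
// Toy sketch: scan Cargo.lock for packages matching a hard-coded advisory list.
use serde::Deserialize;

#[derive(Deserialize)]
struct Lockfile {
    package: Vec<Package>, // the [[package]] entries in Cargo.lock
}

#[derive(Deserialize)]
struct Package {
    name: String,
    version: String,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical advisories: (crate name, affected version).
    let advisories = [("leftpad-rs", "0.1.0"), ("totally-fine", "2.3.1")];

    let lock: Lockfile = toml::from_str(&std::fs::read_to_string("Cargo.lock")?)?;
    for pkg in &lock.package {
        if advisories.iter().any(|&(n, v)| n == pkg.name && v == pkg.version) {
            println!("ADVISORY: {} {} is affected", pkg.name, pkg.version);
        }
    }
    Ok(())
}
```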
Not having a dependency management system isn't a solution to supply chain attacks; auditing your dependencies is.
How do you do that practically? Do you read the source of every single package before doing a `brew update` or `npm update`?
What if these sources include binary packages?
The popular Javascript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3
Can anyone even review it in a month? And they publish a new update weekly.
You’re looking at the number of dependents. The React package has no dependencies.
Asides:
> Do you read the source of every single package before doing a `brew update` or `npm update`?
Yes, some combination of doing that or delegating it to trusted parties is required. (The difficulty should inform dependency choices.)
> What if these sources include binary packages?
Reproducible builds, or don’t use those packages.
Indeed.
My apologies for misinterpreting the link that I posted.
Consider "devDependencies" here
https://github.com/facebook/react/blob/main/package.json
As far as I know, these 100+ dev dependencies are installed by default. Yes, you can probably avoid it, but it will likely break something during the build process, and most people just stick to the default anyway.
> Reproducible builds, or don’t use those packages.
A lot of things are not reproducible/hermetic builds. Even GitHub Actions is not reproducible https://nesbitt.io/2025/12/06/github-actions-package-manager...
Most frontend frameworks are not reproducible either.
> don’t use those packages.
And do what?
devDependencies should only be installed if you're developing the React library itself. They won't be installed if you just depend on React.
> The popular Javascript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3
You’re looking at dependents. The core React package has no dependencies.
Not all dependencies are created equal. A dependency with millions of users under active development with a corporate sponsor that has a posted policy with an SLA to respond to security issues is an example of a low-risk dependency. Someone's side project with only a few active users and no way to contact the author is an example of a high-risk dependency. A dependency that forces you to take lots of indirect dependencies would be a high-risk dependency.
Here's an example dependency policy for something security critical: https://github.com/tock/tock/blob/master/doc/ExternalDepende...
Practically, unless your code is super security-sensitive (something like a root of trust), you won't be able to review everything. You end up going for "good" dependencies that are lower risk. You throw automated fuzzing and linting tools at them, and these days ask AI to audit them as well.
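For example, a minimal cargo-fuzz target looks like this (a sketch assuming libfuzzer-sys; the UTF-8 check is a stand-in for whatever you actually want to fuzz):

```rust
// fuzz/fuzz_targets/parse.rs
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Throw arbitrary bytes at the code under test; any panic is a finding.
    let _ = std::str::from_utf8(data);
});
```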
You always have to ask: what are the odds I do something dumb and introduce a security bug, versus the odds I pull in a dependency with a security bug? If there's already "battle hardened" code out there, it's usually lower risk to take the dep than to do it yourself.
This whole thing is not a science, you have to look at it case-by-case.
(So yes, I'm stating that the 99% of JS devs who _do_ precisely that are not being serious. But at the same time, I understand they just follow the "best practices" that the ecosystem pushes downstream, so it's understandable that most don't want to swim against the current when the whole ecosystem itself is not being serious either.)
There are several ways to do this. What you mentioned is the brute-force method of security audits. That may be impractical as you allude to. Perhaps there are tools designed to catch security bugs in the source code. While they will never be perfect, these tools should significantly reduce the manual effort required.
Another obvious approach is to crowdsource the verification. This can be achieved through security advisory databases like Rust's rustsec [1] service. Rust has tools (cargo-audit) that use the data from rustsec to do the audit. There's even a way to embed the dependency-tree information in the target binary. Similar tools must exist for other languages too.
> What if these sources include binary packages?
Binaries can be audited if reproducible builds are enforced. Otherwise, it's an obvious supply chain risk. That's why distros and corporations prefer to build their software from source.
(The next most useful step, in the case where someone in your dependency tree is pwned, is to not have automated systems that update to the latest version frequently. Hang back a few days at least so that any damage can be contained. Cargo does not update to the latest version of a dependency on a build because of its lockfile: you need to run an update manually.)
That doesn't necessarily help you in the case of supply chain attacks. A large proportion of them are spread through compromised credentials, so even if the author of a package is reputable, you may still get malware through that package.
We need better tooling to enable crowdsourcing and make it accessible for everyone.
Someone committed malicious code to Amazon Q Developer.
AWS published a malicious version of their own extension.
https://aws.amazon.com/security/security-bulletins/AWS-2025-...
Some people use pnpm, which only runs install scripts for a whitelisted subset of packages, so an appreciable fraction of the npm ecosystem (those who use pnpm rather than npm or yarn) does not run scripts by default.
Cargo compiles and runs `build.rs` for all dependencies, and there's no real alternative which doesn't.
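For context, a build script is just ordinary Rust that Cargo compiles and executes on the builder's machine before the crate itself builds, e.g.:

```rust
// build.rs — nothing constrains what this does: a compromised dependency's
// build script could read files or phone home with the building user's privileges.
fn main() {
    // A benign example: tell Cargo when to rerun this script.
    println!("cargo:rerun-if-changed=build.rs");
}
```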
Oh lord.
It does unlock some interesting things, to be sure, like sqlx's macros that check a query at compile time by connecting to the database and verifying the query against it. If this sounds like the compiler connecting to a database, well, that's because it is.
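Roughly like this (a sketch assuming sqlx with the Postgres and macros features, tokio, and a DATABASE_URL reachable at both compile time and run time):

```rust
use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let pool = PgPoolOptions::new()
        .connect(&std::env::var("DATABASE_URL")?)
        .await?;

    // query! is expanded at compile time: the macro connects to DATABASE_URL,
    // prepares the statement, and type-checks the result columns.
    let row = sqlx::query!("SELECT 1 AS answer")
        .fetch_one(&pool)
        .await?;
    println!("{:?}", row.answer);
    Ok(())
}
```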
Or is there something that cargo does to manage it differently (due diligence?).