
At risk like node/npm with all the supply-chain attacks then?

Or is there something that cargo does to manage it differently (due diligence?).


You can use `cargo vendor` to copy your dependencies into your own tree C-style, and audit them all if you want. Mozilla does this for Firefox.
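As a sketch of that workflow: running `cargo vendor` copies every dependency's source into a local `vendor/` directory and prints a config snippet along these lines, which tells Cargo to build from the vendored copies instead of fetching from crates.io (directory names here are the defaults):

```toml
# .cargo/config.toml -- snippet printed by `cargo vendor`;
# check the vendor/ directory into version control to freeze and audit it.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```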

Cargo does have lock files by default. But we really need better tooling for auditing (and enforcing that auditing has happened) to properly solve this.

I think the broader point being made here is that the C-style approach is to extract a minimal subset of the dependency and tightly review it and integrate it into your code. The Rust/Python approach is to use cargo/pip and treat the dependency as a black box outside your project.
Advocates of the C approach often gloss over the increased maintenance burden, especially when it comes to security issues. In essence, you’re signing up to maintain a limited fork & watch for CVEs separately from upstream.

So it's ultimately a trade off rather than a strictly superior solution.

Also, nothing in Rust prevents you from doing the same thing. In fact, I would argue that Cargo makes this process easier.

But that's what Linux distros are for, package maintainers watch the CVEs for you, and all you have to do is "apt upgrade"
Not sure I follow. Suppose you tore out a portion of libxml2 for use in your HTTP server. A CVE is filed against libxml2 that is related to the subset you tore out. Obviously, your server doesn't link against libxml2. How exactly would distro maintainers know to include your package in their list?
That's assuming you're using dynamically linked libraries/shared libraries. They're talking about "vendoring" the library into a statically linked binary or its own app-specific DLL.
Be very careful with that assumption.

The distros try, but hit one complex problem with a project that holds strong opinions, and you may never get a fix.

The GNOME keyring secrets being available to any process running under your UID, unless that process opts into a proxy, is one example.

Looking at how every browser and busybox is exempted from apparmor is another.

It is not uncommon to punt the responsibility to users.

In theory yes, but in practice I don't think you could build something like Servo very easily like that. Servo is a browser, but it's also purposefully designed to be a browser-developer's toolkit. It is very modular, and lots of pieces (like the aforementioned CSS selector library) are broken out into separate packages that anyone can then use in other projects. And Servo isn't alone in this.

However, when you install Servo, you just install a single artefact. You don't need to juggle different versions of these different packages to make sure they're all compatible with each other, because the Servo team have already done that and compiled the result as a single static binary.

This creates a lot of flexibility. If the Servo maintainers think they need to make a breaking change somewhere, they can just do that without breaking things for other people. They depend internally on the newer version, but other projects can still continue using the older version, and end-users and distros don't need to worry about how best to package the two incompatible versions and how to make sure that the right ones are installed, because it's all statically built.

And it's like this all the way down. The regex crate is a fairly standard package in the ecosystem for working with regexes, and most people will just depend on it directly if they need that functionality. But again, it's not just a regex library, but a toolkit made up of the parts needed to build a regex library, and if you only need some of those parts (maybe fast substring matching, or a regex parser without the implementation), then those are available. They're all maintained by the same person, but split up in a way that makes the package very flexible for others to take exactly what they need.

In theory, all this is possible with traditional distro packages, but in practice, you almost never actually see this level of modularity because of all the complexity it brings. With Rust, an application can easily lock its dependencies, and only upgrade on its own time when needed (or when security updates are needed). But with the traditional model, the developers of an application can't really rely on the exact versions of dependencies being installed - instead, they need to trust that the distro maintainers have put together compatible versions of everything, and that the result works. And when something goes wrong, the developers also need to figure out which versions exactly were involved, and whether the problem exists only with a certain combination of dependencies, or is a general application problem.

All this means that it's unlikely that Servo would exist in its current form if it were packaged and distributed under the traditional package manager system, because that would create so much more work for everyone involved.

And advocates of the opposite approach created the dependencies hellscape that NPM is nowadays.
I mean, that's exactly what you are doing with every single dependency you take on regardless of language.
Let's be real about dependencies https://wiki.alopex.li/LetsBeRealAboutDependencies seems to give a different perspective on C dependencies though.
> the C-style approach is to extract a minimal subset of the dependency and tightly review it and integrate it into your code. The Rust/Python approach is to use cargo/pip and treat the dependency as a black box outside your project.

The Rust approach is to split off a minimal subset of functionality from your project into an independent sub-crate, which can then be depended on and audited independently from the larger project. You don't need to get all of ripgrep[1] in order to get access to its engine[2] (which is further disentangled for more granular use).

Beyond the specifics of how you acquire and keep that code you depend on up to date (including checking for CVEs), the work to check the code from your dependencies is roughly the same and scales with the size of the code. More, smaller dependencies vs one large dependency makes no difference if the aggregate of the former is roughly the size of the monolith. And if you're splitting off code from a monolith, you're running the risk of using it in a way that it was never designed to work (for example, maybe it relies on invariants maintained by other parts of the library).

In my opinion, more, smaller dependencies managed by a system capable of keeping track of the specific version of code you depend on, with structured data that allows you to perform checks on all your dependencies at once in an automated way, is a much better engineering practice than "copy some code from some project". Vendoring is anathema to proper security practices (unless you have other mechanisms to deal with the vendoring, at which point you have a package manager by another name).

[1]: https://crates.io/crates/ripgrep

[2]: https://crates.io/crates/grep/
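The "automated checks over structured dependency data" idea above can be sketched in a few lines: cross-reference a lockfile-style list of (name, version) pairs against a list of advisories, which is roughly what tools like cargo-audit do against the RustSec database. The package names and versions here are made up for illustration.

```rust
// Return the names of dependencies that match a known advisory.
// `deps` plays the role of a parsed lock file; `advisories` plays the
// role of a vulnerability database keyed by (crate, affected version).
fn affected<'a>(deps: &[(&'a str, &str)], advisories: &[(&str, &str)]) -> Vec<&'a str> {
    let mut out = Vec::new();
    for (name, ver) in deps {
        if advisories.iter().any(|(an, av)| an == name && av == ver) {
            out.push(*name);
        }
    }
    out
}
```

The point is not the ten lines of code but that the inputs are structured: because the package manager records exact names and versions, the check can run over every dependency at once, which "copy some code from some project" makes impossible.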

Supply-chain attacks aren't really a property of the dependency management system

Not having a dependency management system isn't a solution to supply chain attacks, auditing your dependencies is

> auditing your dependencies is

How do you do that practically? Do you read the source of every single package before doing a `brew update` or `npm update`?

What if these sources include binary packages?

The popular Javascript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3

Can anyone even review it in a month? And they publish a new update weekly.

> The popular Javascript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3

You’re looking at the number of dependents. The React package has no dependencies.

Asides:

> Do you read the source of every single package before doing a `brew update` or `npm update`?

Yes, some combination of doing that or delegating it to trusted parties is required. (The difficulty should inform dependency choices.)

> What if these sources include binary packages?

Reproducible builds, or don’t use those packages.

> You’re looking at the number of dependents. The React package has no dependencies.

Indeed.

My apologies for misinterpreting the link that I posted.

Consider "devDependencies" here

https://github.com/facebook/react/blob/main/package.json

As far as I know, these 100+ dev dependencies are installed by default. Yes, you can probably avoid it, but it will likely break something during the build process, and most people just stick to the default anyway.

> Reproducible builds, or don’t use those packages.

A lot of things are not reproducible/hermetic builds. Even GitHub Actions is not reproducible https://nesbitt.io/2025/12/06/github-actions-package-manager...

Most frontend frameworks are not reproducible either.

> don’t use those packages.

And do what?

> As far as I know, these 100+ dev dependencies are installed by default.

devDependencies should only be installed if you're developing the React library itself. They won't be installed if you just depend on React.

> And do what?

Keep on keepin on

The best tool for your median software-producing organization, who can’t just hire a team of engineers to do this, is update embargoes. You block updating packages until they’ve been on the registry for a month or whatever by default, allowing explicit exceptions if needed. It would protect you from all the major supply-chain attacks that have been caught in the wild.
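The embargo policy described above is simple enough to sketch in code; the constant and function names here are invented for illustration, and in practice the "days on the registry" figure would come from registry metadata.

```rust
// A new version may be adopted only once it has sat on the registry for
// the embargo period, unless someone grants an explicit exception.
const EMBARGO_DAYS: u64 = 30;

fn update_allowed(published_days_ago: u64, explicit_exception: bool) -> bool {
    explicit_exception || published_days_ago >= EMBARGO_DAYS
}
```

The value of the rule is that most in-the-wild supply-chain attacks are detected and yanked within days of publication, so a fresh-version quarantine blocks them without requiring anyone to read source.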

> The popular Javascript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3

You’re looking at dependents. The core React package has no dependencies.

In security-sensitive code, you take dependencies sparingly, audit them, and lock to the version you audited and then only take updates on a rigid schedule (with time for new audits baked in) or under emergency conditions only.

Not all dependencies are created equal. A dependency with millions of users under active development with a corporate sponsor that has a posted policy with an SLA to respond to security issues is an example of a low-risk dependency. Someone's side project with only a few active users and no way to contact the author is an example of a high-risk dependency. A dependency that forces you to take lots of indirect dependencies would be a high-risk dependency.

Here's an example dependency policy for something security critical: https://github.com/tock/tock/blob/master/doc/ExternalDepende...

Practically, unless your code is super security-sensitive (something like a root of trust), you won't be able to review everything. You end up going for "good" dependencies that are lower risk. You throw automated fuzzing and linting tools at them, and these days ask AI to audit them as well.

You always have to ask: what are the odds I do something dumb and introduce a security bug vs what are the odds I pull a dependency with a security bug. If there's already "battle hardened" code out there, it's usually lower risk to take the dep than do it yourself.

This whole thing is not a science, you have to look at it case-by-case.

If that is really the case (I don't know the numbers for React), then in projects with sane security criteria, either they would only jump between versions that have passed a complete verification process (think industry certifications), or such an enormous number of dependencies would simply render the framework an undesirable tool, and they would avoid it. What's not serious is living the life and incorporating 15-17K dependencies blindly because YOLO.

(so yes, I'm stating that 99% of JS devs who _do_ precisely that, are not being serious, but at the same time I understand they just follow the "best practices" that the ecosystem pushes downstream, so it's understandable that most don't want to swim against the current when the whole ecosystem itself is not being serious either)

> How do you do that practically? Do you read the source of every single package before doing a `brew update` or `npm update`?

There are several ways to do this. What you mentioned is the brute-force method of security audits. That may be impractical as you allude to. Perhaps there are tools designed to catch security bugs in the source code. While they will never be perfect, these tools should significantly reduce the manual effort required.

Another obvious approach is to crowd source the verification. This can be achieved through security advisory databases like Rust's rustsec [1] service. Rust has tools that can use the data from rustsec to do the audit (cargo-audit). There's even a way to embed the dependency tree information in the target binary. Similar tools must exist for other languages too.

> What if these sources include binary packages?

Binaries can be audited if reproducible builds are enforced. Otherwise, it's an obvious supply chain risk. That's why distros and corporations prefer to build their software from source.

[1] https://rustsec.org/

More useful than reading the code, in most cases, is looking at who's behind the code. Can you identify the author? Do they have an identity and reputation in the space? Are you looking at the version of the package they manage? People often freak out about the number of packages in such ecosystems but what matters a lot more is how many different people are in your dependency tree, who they are, and how they operate.

(The next most useful step, in the case where someone in your dependency tree is pwned, is to not have automated systems that update to the latest version frequently. Hang back a few days or so at least so that any damage can be contained. Cargo does not update to the latest version of a dependency on a build because of its lockfiles: you need to run an update manually.)

> More useful than reading the code, in most cases, is looking at who's behind the code. Can you identify the author? Do they have an identity and reputation in the space?

That doesn't necessarily help you in the case of supply chains attacks. A large proportion of them are spread through compromised credentials. So even if the author of a package is reputable, you may still get malware through that package.

Normally it would only be the diff from a previous version. But yes, it's not really practical for small companies or individuals at the moment. Larger companies do exactly this.

We need better tooling to enable crowdsourcing and make it accessible for everyone.

> Larger companies do exactly this.

Someone committed malicious code in Amazon Developer Q.

AWS published a malicious version of their own extension.

https://aws.amazon.com/security/security-bulletins/AWS-2025-...

I don't know much about Node, but Cargo has a lock file with hashes, which prevents dependency substitution unless the developer decides to update the lock file. Updating the lock file carries the same risks as the initial decision to depend on something.
Edit: I misremembered a Rust crates capability (pre- and post-install hooks), so my comment was useless and misleading.
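The lock-file-with-hashes mechanism can be sketched in a few lines: record a digest per dependency at first resolution, then refuse any later download whose bytes hash differently. Illustrative only: real Cargo records SHA-256 checksums in `Cargo.lock`; the standard library's `DefaultHasher` stands in here to keep the sketch dependency-free.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Compute a digest of a dependency's bytes (stand-in for SHA-256).
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// Accept a downloaded dependency only if it hashes to the locked value.
fn verify(locked: u64, downloaded: &[u8]) -> bool {
    digest(downloaded) == locked
}
```

This is why substitution attacks require a lock-file update: a swapped artifact fails the hash check, so the only way to pull in different bytes is the same deliberate act (updating the lock file) as adding the dependency in the first place.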
Rust crates run arbitrary code more often at build/install time than npm packages do.

Some people use pnpm, which only runs install scripts for a whitelisted subset of packages, so an appreciable fraction of the npm ecosystem (those that use pnpm rather than npm or yarn) does not run scripts by default.

Cargo compiles and runs `build.rs` for all dependencies, and there's no real alternative which doesn't.

Rust crates can run arbitrary code at build time: https://doc.rust-lang.org/cargo/reference/build-scripts.html
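For illustration, here is a minimal build.rs sketch: Cargo compiles and runs this program before building the crate, and treats `cargo:`-prefixed lines on stdout as directives. The cfg name `has_foo` is invented for this example.

```rust
// Directives this build script wants to send to Cargo.
fn build_directives() -> Vec<&'static str> {
    vec![
        // Re-run the script only when this file changes.
        "cargo:rerun-if-changed=build.rs",
        // Enable a custom cfg the crate can test with #[cfg(has_foo)].
        "cargo:rustc-cfg=has_foo",
    ]
}

fn main() {
    // Nothing constrains what runs here: file IO, subprocesses, or
    // network access are all possible at build time.
    for directive in build_directives() {
        println!("{directive}");
    }
}
```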
> Build scripts communicate with Cargo by printing to stdout.

Oh lord.

Wrote an entire crate to clean up that mess (and provide traditional autoconf-ish features for build.rs): https://crates.io/crates/rsconf
Geez, thank you.
Aren't procedural macros and build.rs arbitrary code being executed at build time?
Pretty much, yes. And they don’t have much as far as isolation goes. It’s a bit frightening honestly.

It does unlock some interesting things to be sure, like sqlx's macros that validate queries at compile time by connecting to the database and checking them against it. If this sounds like the compiler connecting to a database, well, it's because it is.
