When I first started investigating supply chain attacks back in 2019, they were still considered an edge case. A curiosity for security researchers, sure, but not something most development teams worried about. That changed dramatically.
The fundamental problem is trust. Every time you run pip install or npm install, you're extending an enormous amount of trust to strangers on the internet. You trust that the package maintainer is who they claim to be. You trust that their account hasn't been compromised. You trust that the package hasn't been modified in transit. You trust that the dependencies of your dependencies are equally trustworthy.
That's a lot of trust for a single command.
How These Attacks Actually Work
The most common technique I see in the wild is what we call dependency confusion. It exploits a simple quirk in how package managers resolve names. If your company has an internal package called company-utils and an attacker publishes a package with the same name to the public npm registry with a higher version number, many build systems will happily pull down the malicious public version instead.
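The quirk is easy to see in miniature. Here's a hypothetical sketch of a naive resolver that merges candidates from a private and a public index and keeps the highest version number; the registry contents and package name are illustrative:

```python
# Hypothetical sketch of why dependency confusion works: a naive resolver
# merges candidates from a private and a public index and keeps whichever
# has the highest version number. Registry contents are illustrative.

INTERNAL_INDEX = {"company-utils": ["1.2.0"]}           # your private package
PUBLIC_INDEX = {"company-utils": ["1.2.0", "99.0.0"]}   # attacker's upload: 99.0.0

def naive_resolve(name: str) -> tuple[str, str]:
    """Pick the highest version across both indexes -- roughly what a
    misconfigured build system or proxy does when it treats the public
    registry as just another source."""
    candidates = []
    for source, index in (("internal", INTERNAL_INDEX), ("public", PUBLIC_INDEX)):
        for version in index.get(name, []):
            numeric = tuple(int(part) for part in version.split("."))
            candidates.append((numeric, source, version))
    _, source, version = max(candidates)
    return source, version

source, version = naive_resolve("company-utils")
print(source, version)  # public 99.0.0 -- the attacker's version wins
```

The fix is to make the resolver prefer the internal index unconditionally, or to reserve your internal names (npm scopes, for instance) so the public registry can never answer for them.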
This isn't theoretical. In 2021, a researcher named Alex Birsan used this exact technique to demonstrate vulnerabilities in Apple, Microsoft, and dozens of other major companies. He didn't even have to be clever about it—the attack essentially runs itself once the package is published.
Typosquatting is another favorite. Attackers register packages with names that are common misspellings of popular libraries. Think reqeusts instead of requests, or electorn instead of electron. Developers are human. They make typos. And when they do, they might end up running code that exfiltrates environment variables, installs backdoors, or worse.
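Catching the obvious cases is mostly string comparison. This is a minimal sketch using plain Levenshtein edit distance against a short list of popular names; the list and threshold are illustrative, and real tooling would use a far larger corpus, handle hyphen/underscore swaps, and deal with false positives from legitimately similar names:

```python
# Minimal typosquat detector: flag requested names within edit distance 2
# of a known-popular package name. List and threshold are illustrative.

POPULAR = {"requests", "electron", "numpy", "express", "lodash"}

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def looks_like_typosquat(name: str) -> bool:
    if name in POPULAR:
        return False  # exact match to a known-good name
    return any(levenshtein(name, good) <= 2 for good in POPULAR)

print(looks_like_typosquat("reqeusts"))  # True  -- a transposition away from "requests"
print(looks_like_typosquat("requests"))  # False -- the real thing
```

A check like this runs cheaply as a pre-install hook, turning a silent typo into a loud failure.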
Why Traditional Security Tools Miss This
Your antivirus isn't going to catch this. Neither is your firewall, your SIEM, or your EDR solution. The malicious code arrives through your normal package installation process, often signed and delivered over HTTPS from legitimate infrastructure. It looks exactly like a normal software update because, in many ways, it is one—just not the update you intended.
The code often doesn't do anything obviously malicious on first inspection either. Sophisticated attackers hide their payloads in obfuscated post-install scripts, encode them in what looks like localization data, or simply wait until the package has been running in production for weeks before phoning home.
What Actually Helps
The only reliable defense is controlling what packages can enter your environment in the first place. This means maintaining explicit allowlists of approved packages, blocking known-malicious packages, and implementing a review process for new dependencies.
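The enforcement piece can start very small. Here's a sketch of an allowlist gate you might run in CI before any install step; the allowlist contents and package names are assumptions, and a real setup would pin exact versions and hashes rather than bare names:

```python
# Sketch of a CI allowlist gate run before dependency installation.
# Allowlist contents are illustrative; real setups pin versions and hashes.

ALLOWLIST = {"requests", "flask", "sqlalchemy"}  # packages your review approved

def check_requirements(lines: list[str]) -> list[str]:
    """Return requirement lines naming packages outside the allowlist."""
    violations = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        # crude name extraction: stop at the first version-specifier
        name = line.split(";")[0]
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            name = name.split(sep)[0]
        if name.strip().lower() not in ALLOWLIST:
            violations.append(line)
    return violations

reqs = ["requests==2.31.0", "left-pad==1.3.0", "# a comment"]
print(check_requirements(reqs))  # ['left-pad==1.3.0'] -- not on the approved list
```

Failing the build on a non-empty violations list is what turns the allowlist from a document into a control.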
It also means implementing what I call "soak time"—a waiting period before new package versions are allowed into your build pipeline. Many malicious packages are caught and removed within 24-48 hours of publication. If you're not pulling packages the moment they're released, you dramatically reduce your exposure.
None of this is easy. It requires tooling, it requires process changes, and it requires buy-in from development teams who are used to having unrestricted access to package registries. But the alternative—trusting the entire open source ecosystem implicitly—is increasingly untenable.
The question isn't whether your organization will encounter a supply chain attack. It's whether you'll notice when it happens.