Last fall, our threat intelligence team noticed a striking pattern on PyPI. Someone was registering packages at an unusually high rate: roughly one every fifteen minutes, around the clock. The names weren't random. Each one was a careful misspelling of a popular package.
Over the next 48 hours, we watched as 200 packages were published. requets, reequests, requessts. djanngo, djano, djangoo. flassk, flaask, flaskk. The attacker had clearly done their homework, identifying the most commonly mistyped variations of the most commonly installed packages.
The Payload
Each malicious package contained the same basic structure: a setup.py that would execute during installation, a small loader, and an obfuscated payload. The loader was clever: it checked for various indicators that it might be running in a sandbox or analysis environment before proceeding.
If the checks passed, the payload would exfiltrate environment variables (looking specifically for AWS credentials, API keys, and database connection strings), establish persistence through a cron job, and download a second-stage payload from a command-and-control server.
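The same hunting logic can be turned around for defense. The sketch below flags environment variables that resemble the credentials this payload targeted; the patterns are illustrative assumptions, not the attacker's actual target list.

```python
import os
import re

# Illustrative patterns for credential-like environment variables;
# a real scanner would use a vetted, broader rule set.
CREDENTIAL_PATTERNS = [
    re.compile(r"AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY|SESSION_TOKEN)"),
    re.compile(r".*(API_KEY|TOKEN|SECRET|PASSWORD).*"),
    re.compile(r".*(DATABASE_URL|DB_URI|CONNECTION_STRING).*"),
]

def suspicious_env_vars(environ=os.environ) -> list[str]:
    """Return names of environment variables matching credential patterns."""
    return sorted(
        name for name in environ
        if any(p.fullmatch(name) for p in CREDENTIAL_PATTERNS)
    )
```

Running a check like this in your build environment shows exactly what an install-time payload would have been able to steal.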
The C2 infrastructure was professionally operated. The domains had been registered months earlier and had built up enough reputation to avoid most blocklists. The servers were hosted across multiple providers and countries. Whoever was behind this had done it before.
Why It Works
You might think that developers would notice when they install reqeusts instead of requests. But that's not how these attacks typically succeed.
Most installations don't happen through a developer carefully typing a package name in a terminal. They happen through requirements files that get copy-pasted between projects, through AI coding assistants that occasionally hallucinate package names, through documentation that contains typos, through automated systems that scrape dependencies from upstream sources.
In a CI/CD pipeline, there's often no human reviewing the actual packages being installed. The build either succeeds or fails. A typosquatted package that installs successfully and provides roughly similar functionality to the real package might never be noticed at all.
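One cheap mitigation is to screen requested names before they ever reach pip. A minimal sketch using the standard library's difflib, with a tiny illustrative sample of popular packages (a real deployment would use a proper download-ranking feed):

```python
from difflib import get_close_matches
from typing import Optional

# Tiny illustrative sample; in practice, load the top few thousand
# packages by download count.
POPULAR = ["requests", "django", "flask", "numpy", "pandas"]

def possible_typosquat(name: str) -> Optional[str]:
    """Return the popular package `name` closely resembles, if any."""
    if name in POPULAR:
        return None  # exact match is the real package
    matches = get_close_matches(name, POPULAR, n=1, cutoff=0.85)
    return matches[0] if matches else None
```

A check like this, wired into the pipeline as a pre-install gate, would have flagged every name in this campaign.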
Response Challenges
We reported the packages to PyPI, and they were removed within a few hours. But "removed" doesn't mean "uninstalled from systems that already downloaded them." Organizations that had pulled any of these packages during that 48-hour window were already compromised.
There's no central notification system for this. PyPI doesn't know who installed what; packages are distributed through a CDN without authentication. The attacker doesn't need to maintain their packages long-term; they just need them to exist long enough for vulnerable systems to pull them.
We published indicators of compromise and reached out to affected organizations we could identify, but realistically, most victims probably never learned about the incident.
What Would Have Helped
Allowlist-based package management would have prevented this entirely. If your build systems can only install packages that have been explicitly approved, a typosquatted package can't be installed regardless of what someone accidentally types.
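A minimal sketch of such a check, assuming an approved set maintained in version control (the package names here are illustrative):

```python
import re

# Illustrative allowlist; a real one would be a reviewed,
# version-controlled file of approved packages.
APPROVED = {"requests", "django", "flask"}

def check_requirements(lines) -> list[str]:
    """Return requirement names that are NOT on the allowlist."""
    rejected = []
    for line in lines:
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        # Take the distribution name: everything before a version
        # specifier, extras bracket, or environment marker.
        name = re.split(r"[=<>!~\[; ]", line, maxsplit=1)[0].lower()
        if name not in APPROVED:
            rejected.append(name)
    return rejected
```

Failing the build when this returns anything nonempty turns the allowlist into an enforced control rather than a convention.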
Soak time would have reduced exposure significantly. Most malicious packages are detected and removed within 24-48 hours. If you're not installing packages until they've been available for at least that long, you're protected against the majority of typosquatting campaigns.
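A soak-time gate can be a small pure function. In this sketch the upload timestamp is passed in to keep it self-contained; in practice it would come from PyPI's JSON API (the upload_time_iso_8601 field on each release file), and the 48-hour threshold is an assumption you would tune.

```python
from datetime import datetime, timedelta, timezone

# Assumed policy threshold; tune to your risk tolerance.
MIN_AGE = timedelta(hours=48)

def old_enough(upload_time_iso: str, now=None) -> bool:
    """True if the release has been public for at least MIN_AGE."""
    uploaded = datetime.fromisoformat(upload_time_iso.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return now - uploaded >= MIN_AGE
```

The trade-off is real: you also wait 48 hours for legitimate security fixes, so most teams pair this with an explicit override path.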
Hash verification would have detected the discrepancy for anyone who previously installed the legitimate packages and then accidentally installed the typosquatted version.
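pip supports this natively through hash-pinned requirements files and the --require-hashes flag; conceptually, the check is just a digest comparison, as in this sketch (function name is illustrative):

```python
import hashlib

def matches_recorded_hash(artifact: bytes, recorded_sha256: str) -> bool:
    """Compare an artifact's SHA-256 digest to the digest recorded when
    the legitimate package was first approved. A typosquatted package,
    even one mimicking the real API, produces a different digest."""
    return hashlib.sha256(artifact).hexdigest() == recorded_sha256
```

With hashes pinned in requirements.txt, pip refuses to install any file whose digest doesn't match, so a lookalike package fails the build instead of silently replacing the real one.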
None of these are novel ideas. They've been standard practice in high-security environments for years. But for most development teams, the friction of implementing them has outweighed the perceived risk. Incidents like this one are slowly changing that calculation.