cachetools
Extensible memoizing collections and decorators
This package has a good security score with no known vulnerabilities.
Community Reviews
Solid caching primitives with excellent decorator ergonomics
The cache eviction policies are configurable and predictable, which is critical for production systems. I've used it extensively for API response caching and expensive computation memoization without surprises. The TTLCache is particularly useful for time-sensitive data.
The main friction point is the lack of async support; you'll need workarounds or alternative libraries for async/await code. Documentation is functional but sparse on advanced patterns such as custom eviction policies or thread-safety considerations. Error messages are standard Python fare, nothing exceptional but rarely cryptic.
Best for: Synchronous Python applications needing straightforward function memoization or in-memory caching with standard eviction policies.
Avoid if: You're building async-first applications or need distributed caching across multiple processes or machines.
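To make the memoization pattern concrete, here is a minimal sketch of TTLCache-backed caching with the @cached decorator, which the review praises for time-sensitive data. It assumes cachetools is installed; the function name get_rates, the maxsize of 1024, and the 300-second TTL are illustrative, not recommendations.

```python
import time
from cachetools import TTLCache, cached

# Hypothetical expensive lookup; maxsize and ttl are illustrative values.
@cached(cache=TTLCache(maxsize=1024, ttl=300))
def get_rates(currency):
    # Imagine an expensive API call or computation here.
    time.sleep(0.01)
    return {"currency": currency, "fetched_at": time.time()}

first = get_rates("EUR")
second = get_rates("EUR")  # served from the cache within the TTL window
assert first is second     # same object: the call never re-executed
```

Within the TTL, repeated calls with the same arguments return the cached object; after 300 seconds the entry expires and the next call recomputes it.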
Solid caching primitive with minimal security surface area
The main security consideration is cache poisoning via untrusted input keys. The library doesn't sanitize or validate keys—you must ensure cache keys from user input are properly bounded and can't exhaust memory. TTLCache with maxsize limits helps, but you need discipline around key generation. Error handling is straightforward with standard Python exceptions that don't leak sensitive data.
The API is well-designed with clear bounds checking and predictable eviction behavior. No TLS/network concerns since it's purely in-memory. Thread safety is opt-in via an explicit lock (e.g. passing a Lock or RLock to the @cached decorator), which I appreciate—explicit is better than implicit. The code is readable enough to audit yourself if needed.
Best for: In-memory caching of trusted computations where you control cache key generation and need minimal dependencies.
Avoid if: You need distributed caching, persistence, or built-in input sanitization for user-controlled cache keys.
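The "bounded keys" discipline the review calls for can be sketched as follows: derive a fixed-size key from untrusted input before it touches the cache, and rely on maxsize to cap entry count. This assumes cachetools is installed; bounded_key, lookup, and expensive_compute are hypothetical names, not part of the library.

```python
import hashlib
from cachetools import LRUCache

cache = LRUCache(maxsize=256)  # hard upper bound on the number of entries

def bounded_key(user_input: str) -> str:
    # Fixed-size digest: arbitrarily long untrusted input cannot
    # inflate per-entry memory through the key itself.
    return hashlib.sha256(user_input.encode("utf-8")).hexdigest()

def expensive_compute(text: str) -> int:
    # Stand-in for real work (e.g. a parse or a remote call).
    return len(text)

def lookup(user_input: str):
    k = bounded_key(user_input)
    if k in cache:
        return cache[k]
    result = expensive_compute(user_input)
    cache[k] = result
    return result
```

maxsize bounds how many entries survive (LRU eviction handles the rest), while the digest bounds how large each key can be; both limits are needed to keep user-controlled input from exhausting memory.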
Solid caching primitives with minimal security surface area
The API is simple and predictable. Cache implementations behave like standard dicts with size/time constraints. Error handling is minimal but appropriate—you get standard KeyError and ValueError where expected. There's no logging or exception exposure that could leak sensitive data. Input validation is basic: the library trusts you not to cache sensitive objects inappropriately, which is the right design choice for a low-level primitive.
The main security consideration is on you: cachetools won't prevent you from accidentally caching authentication tokens or sensitive data across user contexts. The @cached decorator's key function must be carefully designed in multi-tenant scenarios. No built-in thread safety on standard caches (use locks explicitly), though this is well-documented.
Best for: Projects needing simple, auditable in-memory caching with minimal dependency risk and full control over cache key generation.
Avoid if: You need distributed caching, built-in encryption, or automatic cache invalidation across services.
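The multi-tenant key-function concern above can be illustrated with the library's own key and lock hooks on @cached: include the tenant in the cache key so one tenant's result is never served to another, and pass an explicit lock for thread safety. A minimal sketch, assuming cachetools is installed; run_query and its tenant_id/query parameters are illustrative.

```python
import threading
from cachetools import TTLCache, cached
from cachetools.keys import hashkey

cache = TTLCache(maxsize=512, ttl=60)
lock = threading.Lock()

# The tenant id is part of the cache key, so identical queries from
# different tenants occupy distinct entries (names are illustrative).
@cached(cache, key=lambda tenant_id, query: hashkey(tenant_id, query), lock=lock)
def run_query(tenant_id, query):
    # Stand-in for a per-tenant database or API call.
    return f"result:{tenant_id}:{query}"

a = run_query("tenant-a", "SELECT 1")
b = run_query("tenant-b", "SELECT 1")
assert a != b  # same query, separate entries per tenant
```

Omitting tenant_id from the key function here is exactly the cross-context leak the review warns about; the explicit lock covers the cache's lack of built-in thread safety.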