lru-memoizer
Memoize function results using an LRU cache.
This package has a good security score with no known vulnerabilities.
Community Reviews
Solid async-aware memoization with LRU eviction, minimal observability
Configuration is straightforward with sensible defaults. You get control over cache size (`max`), TTL, and custom hash functions for serializing complex arguments. The load/normalizeArguments pattern lets you intercept and transform arguments before hashing, which is useful for objects that need deep comparison. Error handling is transparent: rejected promises aren't cached by default, so transient failures don't poison your cache.
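The custom hash hook matters most when arguments are objects. The sketch below is hand-rolled and independent of lru-memoizer's internals; it shows one way to build a key-order-stable hash, since plain `JSON.stringify` can give two deep-equal objects different cache keys depending on how they were constructed.

```javascript
// Illustrative sketch (not lru-memoizer code): a hash function that
// serializes objects with sorted keys, so deep-equal arguments always
// produce the same cache key regardless of property insertion order.
function stableHash(value) {
  if (value === null || typeof value !== 'object') return JSON.stringify(value);
  if (Array.isArray(value)) return '[' + value.map(stableHash).join(',') + ']';
  return '{' + Object.keys(value).sort()
    .map((k) => JSON.stringify(k) + ':' + stableHash(value[k]))
    .join(',') + '}';
}

// Deep-equal objects built in different key order hash identically:
stableHash({ a: 1, b: { c: 2 } }) === stableHash({ b: { c: 2 }, a: 1 }); // true
```

A function like this can be passed wherever the library accepts a custom hash for complex argument shapes.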
The main limitation is observability. There are no built-in metrics for hit/miss rates, evictions, or cache size, so you'll need to wrap the memoized function or instrument it separately for production visibility. Also, the `clone` option for cached values can surprise you with memory usage: if you cache large objects, it creates a deep copy on every hit.
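The wrapping approach can be sketched without the library at all. Everything below is hand-rolled for illustration; `instrumentedMemoize` is a stand-in, not part of lru-memoizer's API.

```javascript
// Illustrative sketch: since the library exposes no metrics, wrap the
// loader yourself and count how often it actually runs vs. how often
// the cache answers. A real setup would feed `stats` to your metrics
// pipeline instead of reading it directly.
function instrumentedMemoize(load, hash) {
  const cache = new Map();
  const stats = { hits: 0, misses: 0 };
  function memoized(arg) {
    const key = hash(arg);
    if (cache.has(key)) {
      stats.hits++;
      return cache.get(key);
    }
    stats.misses++;
    const value = load(arg);
    cache.set(key, value);
    return value;
  }
  memoized.stats = stats; // expose counters for scraping/logging
  return memoized;
}

const square = instrumentedMemoize((n) => n * n, String);
square(4); // miss: loader runs
square(4); // hit: served from cache
square.stats; // { hits: 1, misses: 1 }
```

The same shape works as an outer wrapper around an lru-memoizer-produced function, with the wrapper counting calls and the inner function doing the actual caching.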
Best for: Memoizing async operations like database queries or API calls where you need LRU eviction and promise deduplication.
Avoid if: You need detailed cache metrics and observability out of the box, or require custom eviction policies beyond LRU.
Simple, effective memoization with minimal setup overhead
The documentation is minimal but sufficient for straightforward use cases. The README covers the essentials with clear examples. I did have to dig into the source code once when dealing with custom cache key generation, which could be better documented. Error messages are generally clear when you misconfigure options, though debugging cache misses requires adding your own logging.
Community support is sparse; don't expect quick answers on Stack Overflow. However, the package is stable enough that I've rarely needed help. The biggest gotcha is understanding how the `hash` function generates cache keys for complex arguments; this tripped me up initially when objects weren't being cached as expected.
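The gotcha the reviewer describes can be reproduced without the library. The snippet below is a hand-rolled demonstration of why object arguments miss (or wrongly share) cache entries under naive keying; the `hash` helper at the end is a hypothetical example, not library code.

```javascript
// Keying a cache on object identity means a "same" argument built
// twice never hits:
const byIdentity = new Map();
byIdentity.set({ id: 1 }, 'cached');
byIdentity.has({ id: 1 }); // false: a fresh but deep-equal object misses

// While naive String() conversion collapses every plain object to
// "[object Object]", wrongly sharing one cache entry:
String({ id: 1 }) === String({ id: 2 }); // true

// The fix is a hash that serializes the fields that actually matter
// (illustrative helper, not part of lru-memoizer):
const hash = (q) => `${q.table}:${q.id}`;
hash({ table: 'users', id: 1 }); // "users:1", stable across call sites
```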
Best for: Caching expensive async operations like database queries or API calls in Node.js applications where you need automatic memory management.
Avoid if: You need advanced features like cache warming, distributed caching, or detailed cache analytics and monitoring capabilities.
Functional memoization with LRU eviction, but lacks modern DX polish
The developer experience falls short of modern standards, though. TypeScript support is minimal: the community-maintained types are basic and don't preserve your function signatures, forcing you to re-annotate return types. Error messages when you misconfigure options are cryptic at best. The documentation covers the basics but lacks practical examples for common scenarios such as custom key generation or handling complex argument types.
The API for customizing cache keys (the `hash` option) works but isn't intuitive: keys are stringified by default, which can cause issues with object arguments. The separate sync and async modes also trip people up initially. It's serviceable for backend services that need simple function-level caching, but expect to spend time reading the source code to understand the edge cases.
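To make the async side concrete, here is a hand-rolled sketch of promise caching with in-flight deduplication and no caching of rejections, the behavior the reviews describe. This is an illustration of the pattern, not lru-memoizer's actual implementation.

```javascript
// Illustrative sketch: cache the Promise rather than the resolved value,
// so concurrent calls for the same key share one pending promise and the
// loader runs once. Rejections evict their entry so transient failures
// aren't cached.
function memoizeAsync(load, hash) {
  const cache = new Map(); // key -> Promise
  return function (arg) {
    const key = hash(arg);
    if (!cache.has(key)) {
      const pending = load(arg).catch((err) => {
        cache.delete(key); // don't poison the cache with failures
        throw err;
      });
      cache.set(key, pending);
    }
    return cache.get(key);
  };
}

let calls = 0;
const fetchUser = memoizeAsync(async (id) => { calls++; return { id }; }, String);
Promise.all([fetchUser(1), fetchUser(1)]).then(() => {
  // calls === 1: the second call reused the in-flight promise
});
```

A sync mode, by contrast, would cache plain return values directly, which is why mixing the two up produces confusing results.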
Best for: Backend services needing simple function memoization with automatic memory management through LRU eviction.
Avoid if: You need robust TypeScript support, complex cache key strategies, or are building a TypeScript-first application requiring strong type inference.