* Refactor caching system to use pluggable stores
This commit modernizes the caching implementation by introducing a pluggable store interface that allows different cache backends. Key changes:
- Add Store interface for custom cache implementations
- Create default TTL-based store for backwards compatibility
- Add example LRU store for memory-bounded caching
- Support cache store configuration via options pattern
- Make cache cleanup logic implementation-specific
- Add comprehensive tests and documentation
The main goals were to:
1. Prevent unbounded memory growth through pluggable stores
2. Enable distributed caching support
3. Maintain backwards compatibility
4. Improve testability and maintainability
Signed-off-by: franchb <hello@franchb.com>
* Add custom cache stores docs and navigation
Signed-off-by: franchb <hello@franchb.com>
* Use GetOrCompute for atomic cache access
This commit introduces an atomic GetOrCompute method to the cache interface and refactors all cache implementations to use it. This prevents race conditions and duplicate computations when multiple goroutines request the same uncached key simultaneously.
The changes eliminate a time-of-check to time-of-use race condition in the original caching implementation, where separate Get/Set operations could lead to duplicate renders under high concurrency.
With GetOrCompute, the entire check-compute-store operation happens atomically while holding the lock, ensuring only one goroutine computes a value for any given key.
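The atomic check-compute-store pattern can be sketched like this (a minimal illustration; the cache type and names here are hypothetical, not the framework's exact API):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// cache is a minimal sketch of a store with atomic GetOrCompute.
type cache struct {
	mu sync.Mutex
	m  map[string]string
}

func newCache() *cache {
	return &cache{m: make(map[string]string)}
}

// GetOrCompute checks, computes, and stores while holding the lock,
// so only one goroutine ever computes the value for a given key.
func (c *cache) GetOrCompute(key string, compute func() string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.m[key]; ok {
		return v
	}
	v := compute()
	c.m[key] = v
	return v
}

func main() {
	c := newCache()
	var computes int32
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.GetOrCompute("page", func() string {
				atomic.AddInt32(&computes, 1)
				return "rendered"
			})
		}()
	}
	wg.Wait()
	fmt.Println(computes) // exactly 1: concurrent callers share one render
}
```

With separate Get/Set, all eight goroutines could miss the cache at once and render eight times; here the lock serializes the check and store, so the compute function runs exactly once.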
The API change is backwards compatible as the framework handles the GetOrCompute logic internally. Existing applications will automatically benefit from the improved concurrency behavior without code changes.
* Rename to WithCacheStore
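Only the option name WithCacheStore comes from this commit; the surrounding config plumbing below is a hypothetical sketch of how such an option typically wires a custom store in via the options pattern:

```go
package main

import "fmt"

// Store mirrors the pluggable interface; illustrative only.
type Store interface {
	GetOrCompute(key string, compute func() string) string
}

type config struct {
	store Store
}

// Option is a functional option mutating the config.
type Option func(*config)

// WithCacheStore plugs a custom cache backend into the config.
func WithCacheStore(s Store) Option {
	return func(c *config) { c.store = s }
}

// mapStore is a toy unbounded store used for demonstration.
type mapStore map[string]string

func (m mapStore) GetOrCompute(key string, compute func() string) string {
	if v, ok := m[key]; ok {
		return v
	}
	v := compute()
	m[key] = v
	return v
}

func newApp(opts ...Option) *config {
	c := &config{}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	app := newApp(WithCacheStore(mapStore{}))
	fmt.Println(app.store.GetOrCompute("k", func() string { return "v" }))
}
```

Callers who pass no option would fall back to the default TTL store, preserving backwards compatibility.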
---------
Signed-off-by: franchb <hello@franchb.com>
Co-authored-by: maddalax <jm@madev.me>