htmgo/framework/h/cache/interface.go
franchb cfcfe7cb21
Use GetOrCompute for atomic cache access
The commit introduces an atomic GetOrCompute method to the cache interface and refactors all cache implementations to use it. This prevents race conditions and duplicate computations when multiple goroutines request the same uncached key simultaneously.

The changes eliminate a time-of-check to time-of-use race condition in the original caching implementation, where separate Get/Set operations could lead to duplicate renders under high concurrency.

With GetOrCompute, the entire check-compute-store operation happens atomically while holding the lock, ensuring only one goroutine computes a value for any given key.

The API change is backwards compatible, as the framework handles the GetOrCompute logic internally. Existing applications will automatically benefit from the atomic behavior without any code changes.
2025-07-03 17:46:09 +03:00

28 lines
1,017 B
Go

package cache

import (
	"time"
)
// Store defines the interface for a pluggable cache.
// This allows users to provide their own caching implementations, such as LRU, LFU,
// or even distributed caches. The cache implementation is responsible for handling
// its own eviction policies (TTL, size limits, etc.).
type Store[K comparable, V any] interface {
	// Set adds or updates an entry in the cache. The implementation should handle the TTL.
	Set(key K, value V, ttl time.Duration)

	// GetOrCompute atomically gets an existing value or computes and stores a new value.
	// This method prevents duplicate computation when multiple goroutines request the same key.
	// The compute function is called only if the key is not found or has expired.
	GetOrCompute(key K, compute func() V, ttl time.Duration) V

	// Delete removes an entry from the cache.
	Delete(key K)

	// Purge removes all items from the cache.
	Purge()

	// Close releases any resources used by the cache, such as background goroutines.
	Close()
}