Documentation
Overview ¶
Package lru provides generic, thread-safe LRU cache implementations.
Three cache types are provided:
- Cache: A standard LRU cache with fixed capacity
- Expirable: An LRU cache with per-entry TTL expiration
- Sharded: A sharded LRU cache for reduced lock contention under high concurrency
All are safe for concurrent use and support eviction callbacks.
Basic Usage ¶
Create a cache and store values:
	cache := lru.MustNew[string, int](100)
	cache.Set("key", 42)
	value, found := cache.Get("key")
Memoization with GetOrSet ¶
Compute values on cache miss:
	result, err := cache.GetOrSet("key", func() (int, error) {
		return expensiveComputation()
	})
For expensive computations where concurrent cache misses for the same key should only trigger a single computation, use Cache.GetOrSetSingleflight:
	result, err := cache.GetOrSetSingleflight("key", func() (int, error) {
		return expensiveAPICall()
	})
Expirable Cache ¶
Create a cache where entries expire after a duration:
	cache := lru.MustNewExpirable[string, int](100, 5*time.Minute)
	cache.Set("key", 42)
	value, ttl, found := cache.GetWithTTL("key")
TTL is fixed per write; reads do not reset the TTL (no sliding expiration). Each entry's expiration time is set when written via Expirable.Set or Expirable.GetOrSet and is not extended by subsequent reads.
Per-entry TTL can be set using the WithTTL option:
	cache.Set("shortLived", 42, lru.WithTTL(30*time.Second))
	cache.Set("longLived", 100, lru.WithTTL(1*time.Hour))
Expired entries are removed lazily on access or during write operations. Call Expirable.RemoveExpired to explicitly purge all expired entries.
Eviction Callbacks ¶
Register a callback to be notified when entries are evicted:
	cache.OnEvict(func(key string, value int) {
		fmt.Printf("evicted: %s=%d\n", key, value)
	})
Callbacks are invoked for capacity evictions, explicit removals via Cache.Remove, and Cache.Clear. For Expirable.Clear, callbacks are only invoked for entries that have not yet expired. However, capacity-based evictions trigger the callback even if the evicted entry has already expired.
Callbacks are invoked after the cache's internal lock is released and may be called concurrently from multiple goroutines. Callback implementations must be safe for concurrent use.
Example (Basic) ¶
This example demonstrates basic usage of the LRU cache.
package main

import (
	"fmt"

	"github.com/rselbach/lru"
)

func main() {
	// Create a new LRU cache with a capacity of 3 items
	cache := lru.MustNew[string, int](3)

	// Add items to the cache
	cache.Set("one", 1)
	cache.Set("two", 2)
	cache.Set("three", 3)

	// Get an item from the cache
	value, found := cache.Get("two")
	if found {
		fmt.Printf("Value for 'two': %d\n", value)
	}

	// Adding a fourth item will evict the least recently used item ("one")
	cache.Set("four", 4)

	// "one" is no longer in the cache
	_, found = cache.Get("one")
	fmt.Printf("Is 'one' in the cache? %t\n", found)

	// Print all keys in the cache (most recently used first)
	fmt.Printf("Cache keys: %v\n", cache.Keys())
}

Output:

Value for 'two': 2
Is 'one' in the cache? false
Cache keys: [four two three]
Example (Eviction) ¶
This example demonstrates eviction of items when the cache is at capacity.
package main

import (
	"fmt"

	"github.com/rselbach/lru"
)

func main() {
	// Create a small cache with a capacity of 2
	cache := lru.MustNew[string, string](2)

	// Add two items to fill the cache
	cache.Set("A", "Item A")
	cache.Set("B", "Item B")

	// Print current keys
	fmt.Printf("After adding A, B: %v\n", cache.Keys())

	// Access A to make B the least recently used
	cache.Get("A")
	fmt.Printf("After accessing A: %v\n", cache.Keys())

	// Add C, which should evict B
	cache.Set("C", "Item C")
	fmt.Printf("After adding C: %v\n", cache.Keys())

	// Verify B is gone
	_, hasB := cache.Get("B")
	fmt.Printf("Contains B? %t\n", hasB)
}

Output:

After adding A, B: [B A]
After accessing A: [A B]
After adding C: [C A]
Contains B? false
Example (EvictionCallback) ¶
This example demonstrates using the eviction callback to track which items are evicted from the cache.
package main

import (
	"fmt"

	"github.com/rselbach/lru"
)

func main() {
	// Create a cache with a small capacity
	cache := lru.MustNew[string, int](3)

	// Keep track of evicted items
	evictedKeys := make([]string, 0)
	evictedValues := make([]int, 0)

	// Set the eviction callback
	cache.OnEvict(func(key string, value int) {
		evictedKeys = append(evictedKeys, key)
		evictedValues = append(evictedValues, value)
		fmt.Printf("Evicted: %s=%d\n", key, value)
	})

	// Fill the cache to capacity
	cache.Set("a", 1)
	cache.Set("b", 2)
	cache.Set("c", 3)

	// Adding a fourth item will evict the least recently used one (a)
	cache.Set("d", 4)

	// Explicitly remove an item
	cache.Remove("b")

	// Clear the cache - this will evict all remaining items
	cache.Clear()

	// Print all evicted items in the order they were evicted
	fmt.Printf("All evicted keys: %v\n", evictedKeys)
	fmt.Printf("All evicted values: %v\n", evictedValues)
}

Output:

Evicted: a=1
Evicted: b=2
Evicted: d=4
Evicted: c=3
All evicted keys: [a b d c]
All evicted values: [1 2 4 3]
Example (ExpirableBasic) ¶
This example demonstrates basic usage of the Expirable cache with time-to-live functionality.
package main

import (
	"fmt"
	"time"

	"github.com/rselbach/lru"
)

func main() {
	// Create a new Expirable cache with a capacity of 3 items and a TTL of 1 hour
	cache := lru.MustNewExpirable[string, int](3, time.Hour)

	// Add items to the cache
	cache.Set("one", 1)
	cache.Set("two", 2)
	cache.Set("three", 3)

	// Get an item from the cache
	value, found := cache.Get("two")
	if found {
		fmt.Printf("Value for 'two': %d\n", value)
	}

	// Check if a key exists in the cache
	if cache.Contains("three") {
		fmt.Println("'three' is in the cache")
	}

	// Print all keys in the cache (most recently used first)
	fmt.Printf("Cache keys: %v\n", cache.Keys())
}

Output:

Value for 'two': 2
'three' is in the cache
Cache keys: [two three one]
Example (ExpirableEvictionCallback) ¶
This example demonstrates using eviction callbacks with the Expirable cache.
package main

import (
	"fmt"
	"sync"
	"time"

	"github.com/rselbach/lru"
)

func main() {
	// Create a timer simulation function for testing
	createTimedCache := func() (*lru.Expirable[string, int], func(time.Duration)) {
		cache := lru.MustNewExpirable[string, int](3, time.Minute)
		var mutex sync.Mutex
		simulatedTime := time.Now()

		// Set the time function
		cache.SetTimeNowFunc(func() time.Time {
			mutex.Lock()
			defer mutex.Unlock()
			return simulatedTime
		})

		// Create a function to advance time
		advanceTime := func(duration time.Duration) {
			mutex.Lock()
			defer mutex.Unlock()
			simulatedTime = simulatedTime.Add(duration)
			fmt.Printf("Time advanced by %v\n", duration)
		}

		return cache, advanceTime
	}

	// Create our cache and time advancement function
	cache, advanceTime := createTimedCache()

	// Set up eviction tracking
	evictedItems := make(map[string]int)
	cache.OnEvict(func(key string, value int) {
		evictedItems[key] = value
		fmt.Printf("Evicted: %s=%d\n", key, value)
	})

	// Add items to the cache
	cache.Set("a", 1)
	cache.Set("b", 2)
	cache.Set("c", 3)

	// This should evict the least recently used item (a)
	cache.Set("d", 4)
	fmt.Printf("After capacity eviction: %v\n", cache.Keys())

	// Advance time to expire all items
	advanceTime(time.Minute + time.Second)

	// Expired items won't be automatically removed until a write operation
	fmt.Printf("After time advance, items still in cache (lazy): %v\n", cache.Keys())

	// Explicit removal of expired items will trigger callbacks
	removed := cache.RemoveExpired()
	fmt.Printf("Items removed by RemoveExpired: %d\n", removed)

	// Add a new item after expiration
	cache.Set("e", 5)

	// Print all evicted items
	fmt.Printf("Total evicted items: %d\n", len(evictedItems))
}

Output:

Evicted: a=1
After capacity eviction: [d c b]
Time advanced by 1m1s
After time advance, items still in cache (lazy): []
Evicted: d=4
Evicted: c=3
Evicted: b=2
Items removed by RemoveExpired: 3
Total evicted items: 4
Example (GetOrSet) ¶
This example demonstrates using GetOrSet for memoizing expensive computations.
package main

import (
	"fmt"
	"math"

	"github.com/rselbach/lru"
)

func main() {
	// A simulated expensive computation
	computeCount := 0
	computeExpensive := func(n int) (float64, error) {
		computeCount++
		return math.Pow(float64(n), 2), nil
	}

	cache := lru.MustNew[int, float64](10)

	// First call computes the value
	result, err := cache.GetOrSet(5, func() (float64, error) {
		return computeExpensive(5)
	})
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Printf("Result: %.1f (computed: %t)\n", result, computeCount == 1)

	// Second call gets from cache
	result, err = cache.GetOrSet(5, func() (float64, error) {
		return computeExpensive(5)
	})
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Printf("Result: %.1f (from cache: %t)\n", result, computeCount == 1)

	// Different key computes a new value
	result, err = cache.GetOrSet(10, func() (float64, error) {
		return computeExpensive(10)
	})
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Printf("Result: %.1f (computed: %t)\n", result, computeCount == 2)
}

Output:

Result: 25.0 (computed: true)
Result: 25.0 (from cache: true)
Result: 100.0 (computed: true)
Example (GetWithTTL) ¶
This example demonstrates using GetWithTTL to retrieve a value along with its remaining TTL.
package main

import (
	"fmt"
	"time"

	"github.com/rselbach/lru"
)

func main() {
	// Use a simulated clock so the TTLs in the output are deterministic
	startTime := time.Date(2023, 1, 1, 12, 0, 0, 0, time.UTC)
	currentTime := startTime

	cache := lru.MustNewExpirable[string, string](5, 1*time.Hour)

	// Replace the timeNow function with our simulated time
	cache.SetTimeNowFunc(func() time.Time {
		return currentTime
	})

	// Function to advance our simulated time
	advanceTime := func(duration time.Duration) {
		currentTime = currentTime.Add(duration)
		fmt.Printf("Time is now: %s\n", currentTime.Format(time.Kitchen))
	}

	// Start the example
	fmt.Printf("Time is now: %s\n", currentTime.Format(time.Kitchen))

	// Add items to the cache
	cache.Set("key1", "value1")
	cache.Set("key2", "value2")

	// Check TTLs
	_, ttl1, _ := cache.GetWithTTL("key1")
	_, ttl2, _ := cache.GetWithTTL("key2")
	fmt.Printf("key1 TTL: %s\n", ttl1.Round(time.Second))
	fmt.Printf("key2 TTL: %s\n", ttl2.Round(time.Second))

	// Advance time by 20 minutes
	advanceTime(20 * time.Minute)

	// Check TTLs again - both should still be valid
	_, ttl1, found1 := cache.GetWithTTL("key1")
	_, ttl2, found2 := cache.GetWithTTL("key2")
	fmt.Printf("key1 TTL: %s (exists: %t)\n", ttl1.Round(time.Second), found1)
	fmt.Printf("key2 TTL: %s (exists: %t)\n", ttl2.Round(time.Second), found2)

	// Advance time past the TTL
	advanceTime(41 * time.Minute) // Now at 1:01 PM (past the 1 hour TTL)

	// Both should be expired now.
	// Note: accessing expired items removes them automatically.
	_, _, found1 = cache.GetWithTTL("key1")
	_, _, found2 = cache.GetWithTTL("key2")
	fmt.Printf("key1 exists: %t\n", found1)
	fmt.Printf("key2 exists: %t\n", found2)

	// Items were already removed by the GetWithTTL calls above
	removed := cache.RemoveExpired()
	fmt.Printf("Removed %d expired entries\n", removed)
}

Output:

Time is now: 12:00PM
key1 TTL: 1h0m0s
key2 TTL: 1h0m0s
Time is now: 12:20PM
key1 TTL: 40m0s (exists: true)
key2 TTL: 40m0s (exists: true)
Time is now: 1:01PM
key1 exists: false
key2 exists: false
Removed 0 expired entries
Example (LruAndExpiration) ¶
This example demonstrates cache eviction based on both LRU and expiration.
package main

import (
	"fmt"
	"sync"
	"time"

	"github.com/rselbach/lru"
)

func main() {
	// Create a new cache with a small capacity
	cache := lru.MustNewExpirable[string, string](2, 1*time.Minute)

	// Override the time function for testing
	var mutex sync.Mutex
	simulatedTime := time.Now()
	cache.SetTimeNowFunc(func() time.Time {
		mutex.Lock()
		defer mutex.Unlock()
		return simulatedTime
	})

	// Function to advance time
	advanceTime := func(duration time.Duration) {
		mutex.Lock()
		defer mutex.Unlock()
		simulatedTime = simulatedTime.Add(duration)
	}

	// Add two items to fill the cache
	cache.Set("A", "Item A")
	cache.Set("B", "Item B")
	fmt.Printf("After adding A, B: %v\n", cache.Keys())

	// Access A to make B the least recently used
	_, _ = cache.Get("A")
	fmt.Printf("After accessing A: %v\n", cache.Keys())

	// Add C, which should evict B due to LRU
	cache.Set("C", "Item C")
	fmt.Printf("After adding C: %v\n", cache.Keys())

	// Advance time past expiration for all entries
	advanceTime(61 * time.Second) // Now past the 1 minute TTL

	// Only D remains after this Set: all other items have expired,
	// and the Set operation removes expired entries
	cache.Set("D", "Item D")
	fmt.Printf("After adding D: %v\n", cache.Keys())
}

Output:

After adding A, B: [B A]
After accessing A: [A B]
After adding C: [C A]
After adding D: [D]
Index ¶
- Constants
- type Cache
- func (c *Cache[K, V]) Capacity() int
- func (c *Cache[K, V]) Clear()
- func (c *Cache[K, V]) Contains(key K) bool
- func (c *Cache[K, V]) Get(key K) (V, bool)
- func (c *Cache[K, V]) GetOrSet(key K, compute func() (V, error)) (V, error)
- func (c *Cache[K, V]) GetOrSetSingleflight(key K, compute func() (V, error)) (V, error)
- func (c *Cache[K, V]) Keys() []K
- func (c *Cache[K, V]) Len() int
- func (c *Cache[K, V]) OnEvict(f OnEvictFunc[K, V])
- func (c *Cache[K, V]) Peek(key K) (V, bool)
- func (c *Cache[K, V]) Remove(key K) bool
- func (c *Cache[K, V]) Set(key K, value V)
- type Expirable
- func (c *Expirable[K, V]) Capacity() int
- func (c *Expirable[K, V]) Clear()
- func (c *Expirable[K, V]) Contains(key K) bool
- func (c *Expirable[K, V]) Get(key K) (V, bool)
- func (c *Expirable[K, V]) GetOrSet(key K, compute func() (V, error), opts ...SetOption) (V, error)
- func (c *Expirable[K, V]) GetOrSetSingleflight(key K, compute func() (V, error), opts ...SetOption) (V, error)
- func (c *Expirable[K, V]) GetWithTTL(key K) (V, time.Duration, bool)
- func (c *Expirable[K, V]) Keys() []K
- func (c *Expirable[K, V]) Len() int
- func (c *Expirable[K, V]) OnEvict(f OnEvictFunc[K, V])
- func (c *Expirable[K, V]) Peek(key K) (V, bool)
- func (c *Expirable[K, V]) Remove(key K) bool
- func (c *Expirable[K, V]) RemoveExpired() int
- func (c *Expirable[K, V]) Set(key K, value V, opts ...SetOption)
- func (c *Expirable[K, V]) SetTTL(ttl time.Duration) error
- func (c *Expirable[K, V]) SetTimeNowFunc(f func() time.Time)
- func (c *Expirable[K, V]) TTL() time.Duration
- type OnEvictFunc
- type SetOption
- type Sharded
- func MustNewSharded[K comparable, V any](capacity int) *Sharded[K, V]
- func MustNewShardedWithCount[K comparable, V any](capacity, shardCount int) *Sharded[K, V]
- func NewSharded[K comparable, V any](capacity int) (*Sharded[K, V], error)
- func NewShardedWithCount[K comparable, V any](capacity, shardCount int) (*Sharded[K, V], error)
- func (s *Sharded[K, V]) Capacity() int
- func (s *Sharded[K, V]) Clear()
- func (s *Sharded[K, V]) Contains(key K) bool
- func (s *Sharded[K, V]) Get(key K) (V, bool)
- func (s *Sharded[K, V]) GetOrSet(key K, compute func() (V, error)) (V, error)
- func (s *Sharded[K, V]) GetOrSetSingleflight(key K, compute func() (V, error)) (V, error)
- func (s *Sharded[K, V]) Keys() []K
- func (s *Sharded[K, V]) Len() int
- func (s *Sharded[K, V]) OnEvict(f OnEvictFunc[K, V])
- func (s *Sharded[K, V]) Peek(key K) (V, bool)
- func (s *Sharded[K, V]) Remove(key K) bool
- func (s *Sharded[K, V]) Set(key K, value V)
- func (s *Sharded[K, V]) ShardCount() int
Examples ¶
Constants ¶
const DefaultShardCount = 16
DefaultShardCount is the default number of shards for a Sharded cache.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Cache ¶
type Cache[K comparable, V any] struct {
	// contains filtered or unexported fields
}
Cache represents a thread-safe, fixed-size LRU cache. A Cache must be created with New or MustNew; the zero value is not ready for use.
func MustNew ¶
func MustNew[K comparable, V any](capacity int) *Cache[K, V]
MustNew creates a new LRU cache with the given capacity. It panics if the capacity is less than or equal to zero.
func New ¶
func New[K comparable, V any](capacity int) (*Cache[K, V], error)
New creates a new LRU cache with the given capacity. The capacity must be greater than zero.
func (*Cache[K, V]) Get ¶
func (c *Cache[K, V]) Get(key K) (V, bool)
Get retrieves a value from the cache by key. It returns the value and a boolean indicating whether the key was found. This method also moves the item to the most recently used position in the LRU list.
func (*Cache[K, V]) GetOrSet ¶
func (c *Cache[K, V]) GetOrSet(key K, compute func() (V, error)) (V, error)
GetOrSet retrieves a value from the cache by key, or computes and sets it if not present. The compute function is only called if the key is not present in the cache. Note: if multiple goroutines call GetOrSet concurrently for the same missing key, compute may be called multiple times, but only one result will be cached.
func (*Cache[K, V]) GetOrSetSingleflight ¶
func (c *Cache[K, V]) GetOrSetSingleflight(key K, compute func() (V, error)) (V, error)
GetOrSetSingleflight retrieves a value from the cache by key, or computes and sets it if not present. Unlike Cache.GetOrSet, if multiple goroutines call GetOrSetSingleflight concurrently for the same missing key, the compute function is called exactly once and all callers receive the same result. This is useful when the compute function is expensive (e.g., database queries, API calls).
The singleflight deduplication only applies to concurrent in-flight calls; once a value is cached, subsequent calls return the cached value without invoking singleflight.
func (*Cache[K, V]) Keys ¶
func (c *Cache[K, V]) Keys() []K
Keys returns a slice of all keys in the cache. The order is from most recently used to least recently used.
func (*Cache[K, V]) OnEvict ¶
func (c *Cache[K, V]) OnEvict(f OnEvictFunc[K, V])
OnEvict sets a callback function that will be called when an entry is evicted from the cache. The callback will receive the key and value of the evicted entry.
The callback is invoked after the cache's internal lock is released and may be called concurrently from multiple goroutines. It must be safe for concurrent use.
func (*Cache[K, V]) Peek ¶
func (c *Cache[K, V]) Peek(key K) (V, bool)
Peek retrieves a value from the cache by key without updating its position in the LRU list. This is useful for checking a value without affecting eviction order. It returns the value and a boolean indicating whether the key was found.
type Expirable ¶
type Expirable[K comparable, V any] struct {
	// contains filtered or unexported fields
}
Expirable represents a thread-safe, fixed-size LRU cache with expiry functionality. Each entry has an absolute expiration time set when written via Expirable.Set or Expirable.GetOrSet. The TTL is not refreshed on reads (no sliding expiration). An Expirable must be created with NewExpirable or MustNewExpirable; the zero value is not ready for use.
func MustNewExpirable ¶
MustNewExpirable creates a new LRU cache with the given capacity and TTL. It panics if the capacity or TTL is less than or equal to zero.
func NewExpirable ¶
NewExpirable creates a new LRU cache with the given capacity and TTL. Each entry expires a fixed duration after it is written via Set or GetOrSet. Reads (Get, Peek, GetWithTTL) do not extend an entry's TTL. The capacity must be greater than zero, and the TTL must be greater than zero.
func (*Expirable[K, V]) Clear ¶
func (c *Expirable[K, V]) Clear()
Clear removes all items from the cache.
If an eviction callback is set, it is called only for entries that have not yet expired at the time of clearing.
func (*Expirable[K, V]) Contains ¶
func (c *Expirable[K, V]) Contains(key K) bool
Contains reports whether a key exists in the cache and has not expired.
Note: This method does not remove expired entries from the cache. Use Expirable.RemoveExpired to explicitly purge expired entries.
func (*Expirable[K, V]) Get ¶
func (c *Expirable[K, V]) Get(key K) (V, bool)
Get retrieves a value from the cache by key. It returns the value and a boolean indicating whether the key was found and not expired. This method also updates the item's position in the LRU list. Expired items are removed when accessed.
func (*Expirable[K, V]) GetOrSet ¶
func (c *Expirable[K, V]) GetOrSet(key K, compute func() (V, error), opts ...SetOption) (V, error)
GetOrSet retrieves a value from the cache by key, or computes and sets it if not present or expired. The compute function is only called if the key is not present in the cache or is expired. Note: if multiple goroutines call GetOrSet concurrently for the same missing/expired key, compute may be called multiple times, but only one result will be cached.
Options can be passed to customize the entry, such as WithTTL to override the cache's default TTL for this specific entry.
func (*Expirable[K, V]) GetOrSetSingleflight ¶
func (c *Expirable[K, V]) GetOrSetSingleflight(key K, compute func() (V, error), opts ...SetOption) (V, error)
GetOrSetSingleflight retrieves a value from the cache by key, or computes and sets it if not present or expired. Unlike Expirable.GetOrSet, if multiple goroutines call GetOrSetSingleflight concurrently for the same missing/expired key, the compute function is called exactly once and all callers receive the same result. This is useful when the compute function is expensive (e.g., database queries, API calls).
The singleflight deduplication only applies to concurrent in-flight calls; once a value is cached, subsequent calls return the cached value without invoking singleflight.
Options can be passed to customize the entry, such as WithTTL to override the cache's default TTL for this specific entry.
func (*Expirable[K, V]) GetWithTTL ¶
func (c *Expirable[K, V]) GetWithTTL(key K) (V, time.Duration, bool)
GetWithTTL retrieves a value and its remaining TTL from the cache by key. It returns the value, the remaining TTL, and a boolean indicating whether the key was found and not expired. Expired items are removed when accessed.
func (*Expirable[K, V]) Keys ¶
func (c *Expirable[K, V]) Keys() []K
Keys returns a slice of all keys in the cache that haven't expired. The order is from most recently used to least recently used.
func (*Expirable[K, V]) Len ¶
func (c *Expirable[K, V]) Len() int
Len returns the current number of non-expired items in the cache.
Note: This method does not remove expired entries; it only excludes them from the count. Use Expirable.RemoveExpired to explicitly purge expired entries.
func (*Expirable[K, V]) OnEvict ¶
func (c *Expirable[K, V]) OnEvict(f OnEvictFunc[K, V])
OnEvict sets a callback function that will be called when an entry is evicted from the cache. The callback will receive the key and value of the evicted entry. This includes both manual removals and automatic evictions due to capacity or expiry.
The callback is invoked after the cache's internal lock is released and may be called concurrently from multiple goroutines. It must be safe for concurrent use.
func (*Expirable[K, V]) Peek ¶
func (c *Expirable[K, V]) Peek(key K) (V, bool)
Peek retrieves a value from the cache by key without updating its position in the LRU list. This is useful for checking a value without affecting eviction order. It returns the value and a boolean indicating whether the key was found and not expired.
Note: Unlike Expirable.Get, expired items are not removed from the cache. Use Expirable.RemoveExpired to explicitly purge expired entries.
func (*Expirable[K, V]) Remove ¶
Remove deletes an item from the cache by key. It returns whether the key was found and removed.
func (*Expirable[K, V]) RemoveExpired ¶
RemoveExpired explicitly removes all expired items from the cache. Returns the number of items removed. This method will call the eviction callback for each expired item if one is set.
func (*Expirable[K, V]) Set ¶
Set adds or updates an item in the cache. If the key already exists, its value is updated. If the cache is at capacity, the least recently used item is evicted. Expired items are removed lazily on access or via RemoveExpired.
Options can be passed to customize the entry, such as WithTTL to override the cache's default TTL for this specific entry.
func (*Expirable[K, V]) SetTTL ¶
func (c *Expirable[K, V]) SetTTL(ttl time.Duration) error
SetTTL updates the default TTL for future cache entries. It does not affect existing entries.
func (*Expirable[K, V]) SetTimeNowFunc ¶
func (c *Expirable[K, V]) SetTimeNowFunc(f func() time.Time)
SetTimeNowFunc replaces the function used to get the current time. This is primarily useful for testing. Passing nil resets to time.Now.
type OnEvictFunc ¶
type OnEvictFunc[K comparable, V any] func(key K, value V)
OnEvictFunc is a function that is called when an entry is evicted from the cache.
type SetOption ¶
type SetOption func(*setOptions)
SetOption is a functional option for Expirable.Set, Expirable.GetOrSet, and Expirable.GetOrSetSingleflight.
type Sharded ¶
type Sharded[K comparable, V any] struct {
	// contains filtered or unexported fields
}
Sharded represents a thread-safe, sharded LRU cache. It distributes keys across multiple Cache instances to reduce lock contention under high concurrency. Each shard is an independent LRU cache with its own lock, allowing concurrent operations on different shards. A Sharded must be created with NewSharded, MustNewSharded, NewShardedWithCount, or MustNewShardedWithCount; the zero value is not ready for use.
func MustNewSharded ¶
func MustNewSharded[K comparable, V any](capacity int) *Sharded[K, V]
MustNewSharded creates a new sharded LRU cache with the given total capacity. It panics if the capacity is less than or equal to zero.
func MustNewShardedWithCount ¶
func MustNewShardedWithCount[K comparable, V any](capacity, shardCount int) *Sharded[K, V]
MustNewShardedWithCount creates a new sharded LRU cache with the given total capacity and number of shards. It panics if the capacity or shard count is less than or equal to zero.
func NewSharded ¶
func NewSharded[K comparable, V any](capacity int) (*Sharded[K, V], error)
NewSharded creates a new sharded LRU cache with the given total capacity. The capacity is distributed evenly across DefaultShardCount shards. The capacity must be greater than zero.
func NewShardedWithCount ¶
func NewShardedWithCount[K comparable, V any](capacity, shardCount int) (*Sharded[K, V], error)
NewShardedWithCount creates a new sharded LRU cache with the given total capacity and number of shards. The capacity is distributed evenly across all shards. Both capacity and shardCount must be greater than zero.
func (*Sharded[K, V]) Clear ¶
func (s *Sharded[K, V]) Clear()
Clear removes all items from all shards.
func (*Sharded[K, V]) Get ¶
func (s *Sharded[K, V]) Get(key K) (V, bool)
Get retrieves a value from the cache by key. It returns the value and a boolean indicating whether the key was found. This method also updates the item's position in the LRU list within its shard.
func (*Sharded[K, V]) GetOrSet ¶
func (s *Sharded[K, V]) GetOrSet(key K, compute func() (V, error)) (V, error)
GetOrSet retrieves a value from the cache by key, or computes and sets it if not present. The compute function is only called if the key is not present in the cache. Note: if multiple goroutines call GetOrSet concurrently for the same missing key, compute may be called multiple times, but only one result will be cached.
func (*Sharded[K, V]) GetOrSetSingleflight ¶
func (s *Sharded[K, V]) GetOrSetSingleflight(key K, compute func() (V, error)) (V, error)
GetOrSetSingleflight retrieves a value from the cache by key, or computes and sets it if not present. Unlike Sharded.GetOrSet, if multiple goroutines call GetOrSetSingleflight concurrently for the same missing key, the compute function is called exactly once and all callers receive the same result. This is useful when the compute function is expensive (e.g., database queries, API calls).
The singleflight deduplication only applies to concurrent in-flight calls; once a value is cached, subsequent calls return the cached value without invoking singleflight.
func (*Sharded[K, V]) Keys ¶
func (s *Sharded[K, V]) Keys() []K
Keys returns a slice of all keys in the cache. The order is from most recently used to least recently used within each shard, with shards processed in order. Note that the global LRU order is not preserved across shards.
The result is a point-in-time snapshot and is not atomic with respect to concurrent updates.
func (*Sharded[K, V]) Len ¶
func (s *Sharded[K, V]) Len() int
Len returns the current number of items in the cache across all shards. The result is a point-in-time snapshot and may not reflect concurrent updates.
func (*Sharded[K, V]) OnEvict ¶
func (s *Sharded[K, V]) OnEvict(f OnEvictFunc[K, V])
OnEvict sets a callback function that will be called when an entry is evicted from any shard. The callback will receive the key and value of the evicted entry.
Warning: The callback may be invoked concurrently from multiple shards. Ensure the callback is safe for concurrent use.
func (*Sharded[K, V]) Peek ¶
func (s *Sharded[K, V]) Peek(key K) (V, bool)
Peek retrieves a value from the cache by key without updating its position in the LRU list. This is useful for checking a value without affecting eviction order. It returns the value and a boolean indicating whether the key was found.
func (*Sharded[K, V]) Remove ¶
func (s *Sharded[K, V]) Remove(key K) bool
Remove deletes an item from the cache by key. It returns whether the key was found and removed.
func (*Sharded[K, V]) Set ¶
func (s *Sharded[K, V]) Set(key K, value V)
Set adds or updates an item in the cache. If the key already exists, its value is updated. If the shard is at capacity, the least recently used item in that shard is evicted.
func (*Sharded[K, V]) ShardCount ¶
func (s *Sharded[K, V]) ShardCount() int
ShardCount returns the number of shards in the cache.