spring-data-redis
sync = true flag only considers cache name for synchronization. [DATAREDIS-678]
David Tanner opened DATAREDIS-678 and commented
https://docs.spring.io/spring/docs/current/spring-framework-reference/html/cache.html

Synchronized caching

"In a multi-threaded environment, certain operations might be concurrently invoked for the same argument (typically on startup). By default, the cache abstraction does not lock anything and the same value may be computed several times, defeating the purpose of caching.

For those particular cases, the sync attribute can be used to instruct the underlying cache provider to lock the cache entry while the value is being computed. As a result, only one thread will be busy computing the value while the others are blocked until the entry is updated in the cache."
However, the cache is not locked per key/entry; it is locked only on the cache name. https://github.com/spring-projects/spring-data-redis/blob/1.8.x/src/main/java/org/springframework/data/redis/cache/RedisCache.java#L381
This also goes against the documentation in Cacheable.java "Synchronize the invocation of the underlying method if several threads are attempting to load a value for the same key. The synchronization leads to a couple of limitations:"
Based on the documentation, the lock should also be based on the cache key. Currently, this behavior takes away the most compelling reason to use the sync attribute: a slow, high-throughput method with a wide variety of parameters.
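To make the distinction concrete, here is a minimal in-JVM sketch of per-key synchronization as the Cacheable javadoc describes it. This is illustrative only and is not how RedisCache is implemented; the actual lock lives in Redis and is keyed by cache name alone.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Illustrative sketch: per-key synchronization modeled with in-JVM locks.
// Threads loading *different* keys never block each other; only threads
// computing the *same* entry contend.
public class PerKeyCache {
    private final ConcurrentHashMap<String, Object> store = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public Object get(String key, Supplier<Object> loader) {
        Object cached = store.get(key);
        if (cached != null) {
            return cached;
        }
        // Lock is scoped to the key, not to the whole cache.
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            // Re-check after acquiring the lock: another thread may have
            // finished computing the value while we waited.
            return store.computeIfAbsent(key, k -> loader.get());
        } finally {
            lock.unlock();
        }
    }
}
```

With a per-cache lock, by contrast, every sync=true load on the cache serializes behind a single lock regardless of key.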
Affects: 1.8.6 (Ingalls SR6)
Issue Links:
- DATAREDIS-1100 Spring Cacheable sync=true not working across multiple instances
3 votes, 5 watchers
Bradley Plies commented
An additional undesirable consequence of the current implementation: if many applications use different key prefixes but happen to give one of their caches the same name, they will contend for the same lock.
+Example+
- App 1: keyPrefix = cache:app1, cacheNames: users, orders, notes
- App 2: keyPrefix = cache:app2, cacheNames: users, products
Both apps can conflict with each other over cacheName=users even though they ordinarily use different prefixes for their keys. It shouldn't be necessary for every app to know the cache names of all other apps to avoid a collision, and cache names shouldn't need extra decoration (e.g. cacheName=app1Users) in hopes of being unique.
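The collision is easy to see if you compare how the data key and the lock key are built. The monitor output later in this thread shows the lock key is derived from the cache name alone ("someCacheName~lock"); the prefix scheme below is illustrative.

```java
// Sketch of the cross-application collision described above. Data keys carry
// the configured prefix, but the lock key is derived from the cache name only,
// so two apps sharing a cache name contend for the same lock.
public class LockKeyCollision {

    static String dataKey(String prefix, String cacheName, String key) {
        // Data keys are namespaced by the app-specific prefix.
        return prefix + ":" + cacheName + ":" + key;
    }

    static String lockKey(String cacheName) {
        // The prefix is NOT part of the lock key.
        return cacheName + "~lock";
    }
}
```

Two apps with prefixes cache:app1 and cache:app2 produce distinct data keys for their "users" caches, yet the identical lock key "users~lock".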
+Abandoned Lock+ We encountered a case today where the lock wasn't deleted even after +clean+ service shutdowns. It effectively froze the entire cache and caused connections to accumulate and spin-wait for a lock release that never came.
https://github.com/spring-projects/spring-data-redis/blob/master/src/main/java/org/springframework/data/redis/cache/DefaultRedisCacheWriter.java#L266
private void checkAndPotentiallyWaitUntilUnlocked(String name, RedisConnection connection) {

    if (!isLockingCacheWriter()) {
        return;
    }

    try {
        while (doCheckLock(name, connection)) {
            Thread.sleep(sleepTime.toMillis());
        }
    } catch (InterruptedException ex) {
        // Re-interrupt current thread, to allow other participants to react.
        Thread.currentThread().interrupt();
        throw new PessimisticLockingFailureException(
                String.format("Interrupted while waiting to unlock cache %s", name), ex);
    }
}
redis-cli> monitor
1503511180.523051 [0 10.99.122.148:54946] "EXISTS" "someCacheName~lock"
1503511180.523440 [0 10.99.122.149:53998] "EXISTS" "someCacheName~lock"
1503511180.525419 [0 10.99.122.147:48400] "EXISTS" "someCacheName~lock"
1503511180.527089 [0 10.99.122.148:55252] "EXISTS" "someCacheName~lock"
1503511180.527229 [0 10.99.122.148:55376] "EXISTS" "someCacheName~lock"
1503511180.527389 [0 10.99.122.148:54236] "EXISTS" "someCacheName~lock"
1503511180.527463 [0 10.99.122.149:55332] "EXISTS" "someCacheName~lock"
1503511180.540654 [0 10.99.122.149:55168] "EXISTS" "someCacheName~lock"
1503511180.546141 [0 10.99.122.149:53064] "EXISTS" "someCacheName~lock"
1503511180.546407 [0 10.99.122.149:53172] "EXISTS" "someCacheName~lock"
1503511180.547578 [0 10.99.122.146:38110] "EXISTS" "someCacheName~lock"
1503511180.547670 [0 10.99.122.146:37834] "EXISTS" "someCacheName~lock"
1503511180.547820 [0 10.99.122.146:38398] "EXISTS" "someCacheName~lock"
1503511180.553981 [0 10.99.122.149:54374] "EXISTS" "someCacheName~lock"
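One common mitigation for abandoned locks (not what DefaultRedisCacheWriter does) is to give the lock a TTL so a crashed or stopped process cannot freeze the cache forever; in Redis this is typically done with "SET <name>~lock <token> NX PX <ttl>". The map below simulates that semantics in-JVM purely for illustration.

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: a lock whose entry expires after a TTL, so an owner
// that dies without unlocking cannot block waiters indefinitely. Time is
// passed in explicitly to keep the sketch deterministic.
public class ExpiringLock {
    private final ConcurrentHashMap<String, Long> locks = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public ExpiringLock(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    /** Try to take the lock; an expired entry counts as free. */
    public boolean tryLock(String name, long nowMillis) {
        long expiresAt = nowMillis + ttlMillis;
        // Atomically: keep a live deadline, replace a missing/expired one.
        Long winner = locks.compute(name, (k, deadline) ->
                (deadline == null || deadline <= nowMillis) ? expiresAt : deadline);
        return winner == expiresAt;
    }

    public void unlock(String name) {
        locks.remove(name);
    }
}
```

A TTL trades the "frozen cache" failure mode for a bounded window in which two processes might compute the same value, which matches the cache abstraction's best-effort stance on sync.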
DATAREDIS-582 proposes an opt-out of this behavior.
Bradley Plies commented
Just a follow-up: we encountered this again today, this time without any service stop/start. So perhaps abandoned locks can also manifest unluckily due to Redis connection pool settings:
spring.redis.pool.max-active=8 # Max number of connections that can be allocated by the pool at a given time. Use a negative value for no limit.
spring.redis.pool.max-idle=8 # Max number of "idle" connections in the pool. Use a negative value to indicate an unlimited number of idle connections.
spring.redis.pool.max-wait=-1 # Maximum amount of time (in milliseconds) a connection allocation should block before throwing an exception when the pool is exhausted. Use a negative value to block indefinitely.
spring.redis.pool.min-idle=0 # Target for the minimum number of idle connections to maintain in the pool. This setting only has an effect if it is positive.
spring.redis.timeout=0 # Connection timeout in milliseconds.
knyttl commented
We just experienced this issue and found that it critically affects our application. The fix should be quite easy (extending the lock key). Is there a reason this issue has been abandoned for two years?
I created a work-in-progress proposal: https://github.com/spring-projects/spring-data-redis/pull/476 - however, I am unable to correctly solve the clean method.
Mark Paluch commented
Locking is a pretty sensitive topic given the possibilities we have. Each lock needs to be represented in Redis in some way. Using a per-cache lock was the only viable option to avoid excessive per-key locking since each lock would require additional Redis interaction for locking/unlocking and scanning for keys.
That being said, we're happy to learn how to consistently implement locking for all functions without introducing too much overhead.
In any case, RedisCacheWriter is an interface, so you can apply customizations or implement your own RedisCacheWriter that suits your needs.
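One piece of such a custom writer could be deriving the lock name from both the cache name and the entry key, so that only callers computing the same entry contend. The helper below is hypothetical (plain Java, not the real Spring Data Redis types); the "~lock" suffix matches the keys seen in the monitor output above.

```java
// Hypothetical helper for a custom RedisCacheWriter: a per-entry lock name.
// A real implementation would call something like this before taking the
// lock in its put/putIfAbsent paths, instead of locking on the cache name.
public class PerEntryLockNames {

    static String lockName(String cacheName, byte[] key) {
        StringBuilder sb = new StringBuilder(cacheName).append(':');
        // Hex-encode the serialized key so the lock name is a valid,
        // unambiguous Redis key regardless of the key serializer used.
        for (byte b : key) {
            sb.append(String.format("%02x", b));
        }
        return sb.append("~lock").toString();
    }
}
```

As Mark notes, the trade-off is real: each per-entry lock means extra Redis round trips for locking/unlocking, and operations like clean/clear would need to scan for many lock keys instead of checking one.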
Using the https://github.com/redisson/redisson client solves this problem.