guava
Is there a bug in Guava cache's LRU algorithm?
I ran an experiment, shown below:
Why is k_0, the first key I put, not evicted, while k_2 is?
Does the testMap.getIfPresent call in the secondary thread affect the behavior of expireAfterWrite?
The cache is partitioned into multiple LRUs, hashed to by key, and sized by the concurrency level (default 4). If you set that to 1 then it will be strict LRU but lower performance; otherwise it is roughly equivalent. This is indicated in CacheBuilder's javadoc.
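For intuition, a single segment behaving as a strict LRU can be sketched with a plain LinkedHashMap. This is only an illustration of the policy, not Guava's implementation; Guava hashes keys across several such segments.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// One strict-LRU segment, roughly what concurrencyLevel(1) gives you.
class LruSegment<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    LruSegment(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true: reads reorder entries
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict the least recently used on overflow
    }
}
```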
Do you mean setting concurrencyLevel to 1? I tried CacheBuilder.newBuilder().maximumSize(4).concurrencyLevel(1), but the result is still the same. When debugging with breakpoints, the concurrencyLevel value does not seem to affect the size of segments. Debug screenshot without setting concurrencyLevel:
Or is my understanding wrong?
The map’s toString() is unordered; the LRU order is retained on the segment by linked-list chains embedded in the entries. You should see it evict in LRU order via the removal cause.
FYI, LRU is a good starting point but often has low hit rates. A lot of work went into researching new algorithms that are robustly near optimal. The successor caching library, Caffeine, is a rewrite based on everything we’ve learned.
I debugged the evict-related methods as you suggested; see the screenshot below:
But the code ends up calling the accessQueue field to remove its first element. I am using expireAfterWrite, though, not expireAfterAccess. Is this a bug? And why is the accessTime of every element in accessQueue 9223372036854775807?
Screenshot below:
Version: guava-20.0.jar
You'd need to provide a failing unit test instead of screenshots to debug on my side. I don't recall ever observing Guava performing significantly differently from LRU in simulations, though. I think since it only promises to be LRU-like and is not actively maintained, any non-critical bug won't be fixed. I haven't actively worked on this code for many years and the latest JavaDoc recommends Caffeine going forward, which I am most familiar with.
Test code:

import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

public class Test3 {
    public static void main(String[] args) throws Exception {
        final Cache<String, String> testMap = CacheBuilder.newBuilder()
                .maximumSize(4)
                .expireAfterWrite(2000, TimeUnit.SECONDS)
                .removalListener(new RemovalListener<String, String>() {
                    public void onRemoval(RemovalNotification<String, String> notification) {
                        System.out.println(notification.getKey() + "/"
                                + notification.getValue() + "/" + notification.getCause());
                    }
                }).build();
        new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 8; i++) {
                    testMap.getIfPresent("k_" + i % 2);
                    try { Thread.sleep(800); } catch (InterruptedException e) {}
                }
            }
        }).start();
        for (int i = 0; i < 6; i++) {
            testMap.put("k_" + i, "v_" + i);
            System.out.println(testMap.asMap());
            Thread.sleep(1000);
        }
    }
}
Output:

{k_0=v_0}
{k_1=v_1, k_0=v_0}
{k_1=v_1, k_2=v_2, k_0=v_0}
{k_1=v_1, k_3=v_3, k_0=v_0, k_2=v_2}
k_2/v_2/SIZE
{k_1=v_1, k_3=v_3, k_4=v_4, k_0=v_0}
k_3/v_3/SIZE
{k_1=v_1, k_5=v_5, k_4=v_4, k_0=v_0}
The reader thread's intent is that k_0 and k_1 are the most recently used items, so they should not be size evicted. Therefore the least recently used are k_2 and k_3 when the cache overflows. The System.out.println(testMap.asMap()) printout is unordered. The test shows that the cache is correctly evicting the LRU entries.
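That outcome can be reproduced with a plain-JDK sketch, using an access-ordered LinkedHashMap to stand in for one Guava segment (names here are illustrative, not Guava's): keeping k_0 and k_1 hot makes k_2 and k_3 the size-eviction victims, in that order.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Records the eviction order the way a RemovalListener seeing cause SIZE
// would, so it can be checked against LRU expectations.
class EvictionLog {
    static List<String> run() {
        final List<String> evicted = new ArrayList<>();
        Map<String, String> cache = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                if (size() > 4) {               // maximumSize(4)
                    evicted.add(eldest.getKey()); // plays the role of RemovalCause.SIZE
                    return true;
                }
                return false;
            }
        };
        for (int i = 0; i < 6; i++) {
            cache.put("k_" + i, "v_" + i);
            cache.get("k_0"); // keep k_0 and k_1 hot, like the reader thread
            cache.get("k_1");
        }
        return evicted;
    }
}
```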
I am using the expireAfterWrite method, not expireAfterAccess. Are these two methods the same?
No, but in your test case the expiration period is large enough to never occur. The duration of expireAfterWrite will be reset when the entry is created or updated (e.g. Cache.put). The duration of expireAfterAccess will also be reset when the entry is read (e.g. Cache.getIfPresent).
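A minimal sketch of the two policies, assuming per-entry write and access timestamps (the names ExpiryEntry and ExpirySketch are hypothetical, not Guava's):

```java
import java.util.HashMap;
import java.util.Map;

// Per-entry clocks: writeNanos is refreshed on put only,
// accessNanos on put and on get.
class ExpiryEntry<V> {
    V value;
    long writeNanos;
    long accessNanos;
}

class ExpirySketch<K, V> {
    private final Map<K, ExpiryEntry<V>> map = new HashMap<>();
    private final long afterWriteNanos;
    private final long afterAccessNanos;

    ExpirySketch(long afterWriteNanos, long afterAccessNanos) {
        this.afterWriteNanos = afterWriteNanos;
        this.afterAccessNanos = afterAccessNanos;
    }

    void put(K key, V value, long now) {
        ExpiryEntry<V> e = new ExpiryEntry<>();
        e.value = value;
        e.writeNanos = now;   // a write resets both clocks
        e.accessNanos = now;
        map.put(key, e);
    }

    V getIfPresent(K key, long now) {
        ExpiryEntry<V> e = map.get(key);
        if (e == null) return null;
        if (now - e.writeNanos > afterWriteNanos
                || now - e.accessNanos > afterAccessNanos) {
            map.remove(key);  // expired
            return null;
        }
        e.accessNanos = now;  // a read refreshes only the access clock
        return e.value;
    }
}
```

With only expireAfterWrite bounded, reads never extend an entry's lifetime; with only expireAfterAccess bounded, each read pushes expiration out.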
So are you saying that removals caused by insufficient space cannot satisfy the LRU rule?
Expiration is independent of LRU and expired items will be removed prior to size eviction. Internally, expireAfterAccess will reuse the LRU queue because it follows the same ordering rules (reorder on read or write) and is bounded by time instead of length. In your test case LRU is being satisfied.
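A rough illustration of that reuse (plain JDK; the maintain helper is hypothetical, not Guava's code): a queue ordered by access serves both policies, because the least recently accessed entry sits at the head. Expiration drains expired heads first; size eviction then pops remaining heads while the cache is over capacity.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Keys are kept in access order; values are their last-access timestamps.
class AccessQueueSketch {
    static void maintain(LinkedHashMap<String, Long> accessTimes,
                         long now, long ttl, int maxSize) {
        // 1) expired entries are removed before size eviction
        Iterator<Map.Entry<String, Long>> it = accessTimes.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> head = it.next();
            if (now - head.getValue() > ttl) {
                it.remove();
            } else {
                break; // the head is the oldest access; if it is fresh, the rest are too
            }
        }
        // 2) size eviction pops the least recently accessed survivors
        it = accessTimes.entrySet().iterator();
        while (accessTimes.size() > maxSize && it.hasNext()) {
            it.next();
            it.remove();
        }
    }
}
```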
@ben-manes Is there anything Guava team should be doing here? I haven't run all the text through Google Translate, but it sounds like this is a bug report that you're saying is working as intended?
@amalloy please close