[GraphBolt] CPU RAM Feature Cache for DiskBasedFeature
🚀 Feature
When we use a DiskBasedFeatureStore, we will need to cache frequently accessed items in a CPU cache so that the disk read bandwidth requirements are reduced.
Motivation
This will improve performance immensely on large datasets whose data do not fit in CPU RAM.
- [ ] #7492 develops the cache primitive
- [ ] The FeatureStore and Feature classes need to be extended to support cached features in a way that allows overlap (a rough sketch of such a wrapper follows this list).
- [ ] The cache needs to be incorporated into gb.DataLoader.
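To make the idea concrete, here is a minimal sketch of what a CPU-cached feature wrapper could look like from the Python side. The class name `CachedFeature`, its constructor arguments, and the plain LRU dictionary are illustrative assumptions for this issue, not the API or policy that the eventual implementation uses; it only assumes the wrapped feature exposes a batched `read(ids)` like other GraphBolt features.

```python
from collections import OrderedDict

import torch


class CachedFeature:
    """Illustrative wrapper: serve hot rows from a CPU RAM cache and fall
    back to the underlying (e.g. disk-based) feature for misses."""

    def __init__(self, fallback_feature, max_cached_rows):
        self._fallback = fallback_feature          # e.g. a DiskBasedFeature
        self._max_rows = max_cached_rows
        self._cache = OrderedDict()                # id -> 1-D tensor, LRU order

    def read(self, ids: torch.Tensor) -> torch.Tensor:
        rows = [None] * len(ids)
        miss_positions, miss_ids = [], []
        for pos, idx in enumerate(ids.tolist()):
            if idx in self._cache:
                self._cache.move_to_end(idx)       # refresh LRU recency
                rows[pos] = self._cache[idx]
            else:
                miss_positions.append(pos)
                miss_ids.append(idx)
        if miss_ids:
            # One batched read for all misses, so the disk backend can submit
            # them together (e.g. via io_uring).
            fetched = self._fallback.read(torch.tensor(miss_ids))
            for pos, idx, row in zip(miss_positions, miss_ids, fetched):
                rows[pos] = row
                self._cache[idx] = row
                if len(self._cache) > self._max_rows:
                    self._cache.popitem(last=False)  # evict least recently used
        return torch.stack(rows)
```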
@mfbalin what is the difference between manually caching frequently accessed items with DiskBasedFeature and using TorchBasedFeature with in_memory=False, where a cache is automatically applied by the OS?
Actually, this raises a more basic question for me: in what kind of scenario do we prefer DiskBasedFeature over TorchBasedFeature with in_memory=False? What are the advantages of DiskBasedFeature?
@Rhett-Ying io_uring is more efficient and faster than using mmap. With io_uring, you need fewer threads to saturate the SSD bandwidth. When it comes to caching, the OS caches pages, usually 4KB in size; however, feature dimension * dtype_bytes is usually smaller than that. Thus, when the OS caches a page, it caches unnecessary vertex features along with the requested one, which makes the cache less effective.
And I believe we can use a better caching strategy than the one used inside the Linux kernel. For example, see this paper on a state-of-the-art simple caching policy: https://dl.acm.org/doi/10.1145/3600006.3613147
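To make the 4KB page argument above concrete, here is a back-of-the-envelope calculation; the 256-dimensional float32 feature is just an assumed example, not a number from a specific dataset.

```python
# Example numbers, not taken from a specific dataset.
page_size = 4096                        # bytes the OS caches per page fault
feature_dim = 256
dtype_bytes = 4                         # float32
row_bytes = feature_dim * dtype_bytes   # 1024 bytes per vertex feature
rows_per_page = page_size // row_bytes  # 4 vertex features share one page

# With random vertex IDs, only the requested row in each faulted page is
# useful, so roughly (rows_per_page - 1) / rows_per_page of the page-cache
# capacity holds features nobody asked for.
wasted_fraction = (rows_per_page - 1) / rows_per_page
print(f"{wasted_fraction:.0%} of each cached page is wasted")  # 75%
```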
As the indices of feature data are random and scattered, each read requires a separate I/O request to be submitted to the submission queue unless there is explicit optimization at the application level. As for the cache, it also requires app-level optimization to make io_uring perform comparably to mmap, which gets the OS cache automatically.

> With io_uring, you need fewer threads to saturate the SSD bandwidth.

Is this achieved by submitting many I/O requests to the submission queue and waiting for completion?
Yes, that is how io_uring works: you batch your requests and submit them with a single Linux system call. When we also have a cache, it will outperform the mmap approach significantly.
I am not sure it's easy and clean to implement a caching policy at the application level. The trade-off between the performance improvement and the added code complexity needs to be taken into consideration.
@pyynb Please read the paper @mfbalin suggested for caching policy: https://dl.acm.org/doi/10.1145/3600006.3613147.
Last month, we compared three different cache libraries and various cache eviction policies. Regarding the eviction policies, we found that the hit rate of the S3-FIFO cache was higher than LRU's, but its time usage was slightly higher; both are significantly better than the other eviction methods (see the document for details). As for the cache libraries, cachelib performed the best. However, cachelib uses the CXX11 ABI and Torch does not support the CXX11 ABI (as stated in the TorchConfig.cmake file), so cachelib is not compatible with Torch. The performance of the cachetools and cachemoncache libraries was not very good (see the document for details), so we have decided to temporarily suspend the development of the cache for DiskBasedFeature. https://docs.google.com/document/d/1idVOwZTc_wX9u1UUFC4Ms-lBTkmEavEDnDocE1-Qp5E/edit
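The linked document has the actual measurements; the snippet below only illustrates how such a hit-rate comparison can be run, using cachetools (one of the libraries mentioned above) and a synthetic Zipfian access trace. S3-FIFO itself is not in cachetools, so plain FIFO stands in here just to show the shape of the harness; the workload parameters are made up.

```python
import numpy as np
from cachetools import FIFOCache, LRUCache  # pip install cachetools


def hit_rate(cache, trace):
    """Replay an access trace against a cachetools cache and report hits."""
    hits = 0
    for key in trace:
        if key in cache:
            hits += 1
        else:
            cache[key] = True  # the value is irrelevant for hit-rate purposes
    return hits / len(trace)


# Synthetic skewed workload: vertex IDs drawn from a Zipf distribution,
# standing in for the real sampled-minibatch feature accesses.
rng = np.random.default_rng(0)
trace = rng.zipf(1.2, size=200_000) % 100_000

capacity = 10_000  # cache holds 10% of the distinct IDs
for name, cache in [("LRU", LRUCache(capacity)), ("FIFO", FIFOCache(capacity))]:
    print(name, f"{hit_rate(cache, trace):.2%}")
```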
Thank you for the preliminary study.
I have decided to implement a parallel S3-FIFO cache in the upcoming weeks. Assigning the issue to myself.
#7492 implements the S3-FIFO caching policy and the FeatureCache classes. The design is made to be easily extensible in case we want to try more caching policies in the future. @frozenbugs @Rhett-Ying
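For readers who have not looked at the paper, below is a heavily simplified, single-threaded sketch of the S3-FIFO idea (a small probationary FIFO, a main FIFO, and a ghost FIFO of recently evicted keys, with a small per-key frequency counter). It is meant only as a reading aid for the policy; the actual #7492 code is a parallel C++ implementation and differs in its details.

```python
from collections import deque


class S3FIFO:
    """Simplified S3-FIFO sketch: small FIFO + main FIFO + ghost FIFO."""

    def __init__(self, capacity):
        self.small_cap = max(1, capacity // 10)    # ~10% of total capacity
        self.main_cap = capacity - self.small_cap
        self.small, self.main = deque(), deque()
        self.ghost = deque(maxlen=self.main_cap)   # evicted keys only, bounded
        self.freq = {}                             # resident key -> counter

    def access(self, key):
        """Return True on a hit; on a miss, insert the key and return False."""
        if key in self.freq:                       # resident in small or main
            self.freq[key] = min(self.freq[key] + 1, 3)
            return True
        if key in self.ghost:                      # seen recently: promote
            self.ghost.remove(key)
            self._insert_main(key)
        else:
            self._insert_small(key)
        return False

    def _insert_small(self, key):
        while len(self.small) >= self.small_cap:
            victim = self.small.popleft()
            if self.freq.get(victim, 0) > 0:       # reused while probationary
                self._insert_main(victim)
            else:                                  # evict, remember in ghost
                self.freq.pop(victim, None)
                self.ghost.append(victim)
        self.small.append(key)
        self.freq.setdefault(key, 0)

    def _insert_main(self, key):
        while len(self.main) >= self.main_cap:
            victim = self.main.popleft()
            if self.freq.get(victim, 0) > 0:       # give it another round
                self.freq[victim] -= 1
                self.main.append(victim)
            else:
                self.freq.pop(victim, None)        # evict for good
        self.main.append(key)
        self.freq[key] = 0
```

In the earlier `CachedFeature` sketch, an `S3FIFO` instance could replace the plain LRU dictionary to decide which vertex features stay resident in CPU RAM.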