Implement automatic garbage collection for the disk cache
Broken out from https://github.com/bazelbuild/bazel/issues/4870.
Bazel can use a local directory as a remote cache via the `--disk_cache` flag.
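For reference, pointing Bazel at such a directory today looks like this (the path is just an example):

```sh
# Use a local directory as the disk cache.
# ~/.cache/bazel-disk-cache is an arbitrary example path.
bazel build //... --disk_cache=~/.cache/bazel-disk-cache
```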
We want it to also be able to automatically clean the cache after a size threshold has been reached. It probably makes sense to evict based on least-recently-used (LRU) semantics.
@RNabel would you want to work on this?
@RNabel @davido
I will look into implementing this, unless someone else is faster than me.
I don't have time to work on this right now. @davido, if you don't get around to working on this in the next 2-3 weeks, I'm happy to pick this up.
Hi, I would also very much like to see this feature implemented! @davido, @RNabel, did you get anywhere with your experiments?
Not finished, but I had an initial stab: https://github.com/RNabel/bazel/compare/baseline-0.16.1...RNabel:feature/5139-implement-disk-cache-size (this is mostly plumbing and figuring out where to put the logic; it definitely doesn't work yet).
I figured the simplest solution is an LRU relying on the file system for access times and modification times. Unfortunately, access times are not available on Windows through Bazel's file system abstraction. One alternative would be a simple database, but that feels like overkill here. @davido, what do you think is the best solution here? I'm also happy to write up a brief design doc for discussion.
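To make that concrete, here is a minimal sketch of mtime-based LRU trimming as a standalone shell script (rather than the in-Bazel Java plumbing above); the cache path and the 10 GiB threshold are illustrative assumptions, and it relies on GNU find:

```sh
#!/usr/bin/env bash
# Sketch: trim a cache directory down to MAX_KB by deleting the
# least-recently-modified files first (mtime as an LRU proxy).
# CACHE_DIR and MAX_KB are hypothetical values for illustration.
CACHE_DIR="$HOME/.cache/bazel-disk-cache"
MAX_KB=$((10 * 1024 * 1024))  # 10 GiB, counted in 1K blocks

used_kb=$(du -sk "$CACHE_DIR" | cut -f1)

# Emit "mtime<TAB>size_kb<TAB>path" for every file, oldest first,
# then delete until the total drops below the threshold.
find "$CACHE_DIR" -type f -printf '%T@\t%k\t%p\n' | sort -n |
while IFS=$'\t' read -r _mtime kb path; do
  (( used_kb <= MAX_KB )) && break
  rm -f -- "$path"
  (( used_kb -= kb ))
done
```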
What do you guys think about just running a local proxy service that has this functionality already implemented? For example: https://github.com/Asana/bazels3cache or https://github.com/buchgr/bazel-remote? One could then point Bazel to it using --remote_http_cache=http://localhost:XXX. We could even think about Bazel automatically launching such a service if it is not running already.
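For illustration, such a setup might look roughly like this; the `--dir` and `--max_size` flags and the default port 8080 are my reading of the bazel-remote README, so double-check against its current docs:

```sh
# Run bazel-remote with a 50 GiB cap on its cache directory
# (flag names per the bazel-remote README; verify before use).
bazel-remote --dir "$HOME/.cache/bazel-remote" --max_size 50 &

# Point Bazel at the local proxy (bazel-remote's default HTTP port is 8080).
bazel build //... --remote_http_cache=http://localhost:8080
```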
I think @aehlig solved this problem for the repository cache. Maybe you can borrow his implementation here as well. @buchgr, I feel this is core Bazel functionality and in my humble opinion outsourcing it isn’t the right direction. People at my company are often amazed Bazel doesn’t have this fully supported out of the box.
> I think @aehlig solved this problem for the repository cache. Maybe you can borrow his implementation here as well.
@ittaiz, what solution are you talking about? What we have so far for the repository cache is that the file gets touched on every cache hit (see e0d80356eed), so that a cleanup could delete the oldest files; the cleanup itself, however, is not yet implemented, for lack of time.
For the repository cache, it is also a slightly different story, as cleanup should always be manual; upstream might have disappeared, so the cache might be the last copy of the archive available to the user, and we don't want to remove that on the fly.
> outsourcing it isn’t the right direction
I would be interested to learn more about why you think so.
@aehlig sorry, my bad. You are indeed correct. @buchgr, I think so because I think a disk cache is a really basic feature of Bazel, and the fact that it doesn’t work like this by default is IMHO a leaky abstraction (of how exactly the cache works), influenced greatly by the fact that Googlers work mainly (almost exclusively?) with remote execution. I’ve explained Bazel to tens, maybe hundreds, of people. All of them were surprised the disk cache isn’t out of the box (eviction-wise and also plans-wise, like we discussed).
@ittaiz the disk cache is indeed a leaky abstraction that was mainly added because it was easy to do so. I agree that if Bazel should have a disk cache in the long term, then it should also support read/write through to a remote cache and garbage collection.
However, I am not convinced that Bazel should have a disk cache built in; this functionality could instead be handled by another program running locally. So I am trying to better understand why this should be part of Bazel. Please note that there are no immediate plans to remove it, and we will not do so without a design doc for an alternative. I am mainly interested in kicking off a discussion.
Thanks for the clarification and I appreciate the discussion. I think that users don’t want to operate many different tools and servers locally. They want a build tool that works. The main disadvantage I see is that it sounds like you’re offering a cleaner design at the user’s expense.
> I think that users don’t want to operate many different tools and servers locally.
I partly agree. I'd argue that in many companies that would change, as you would typically have an IT department configuring workstations and laptops.
> The main disadvantage I see is that it sounds like you’re offering a cleaner design at the user’s expense.
I think that also depends. I'd say that if one only wants to use the local disk cache, then I agree that providing two flags is as frictionless as it gets. However, I think it's possible that most disk cache users will also want remote caching/execution, and that for them this might not be noteworthy additional work.
So I think there are two possible future scenarios for the disk cache:
1. Add garbage collection to the disk cache and be done with it.
2. Add garbage collection, remote read fallback, remote write, and async remote writes.
I think 1) makes sense if we think that the disk cache will be a standalone feature that a lot of people will find useful on its own; if so, I think it's worth the effort to implement this in Bazel. For 2) I am not so sure, as I can see several challenges that might be better solved in a separate process:
- Async remote writes are the idea that Bazel writes blobs to the disk cache and then asynchronously (to the build) writes them to the remote cache, thereby removing the upload time from the build's critical path. This is difficult to implement in Bazel, partly because there are no guarantees about the lifetime of the server process and partly because of lots of edge cases.
- We might want to move authentication for remote caching/execution out of Bazel in the long term. We currently support Google Cloud authentication, we are about to add AWS, and if we are successful I think it's likely that we will need to add many more in the future; these authentication SDKs are quite large and increase the binary size. So we might end up with a separate proxy process anyway.
- It's unconventional and potentially insecure that one has to pass authentication flags and secrets to Bazel itself. It seems to me that a separate process running as a different user that hides the authentication secrets from the rest of the system using OS security mechanisms is a better idea.
- Once we implement a virtual remote filesystem in Bazel (planned for Q4), Bazel will no longer need to download cached artifacts, and the combination of a local disk cache and remote cache might become less attractive because downloads should no longer be a bottleneck (if it works out as expected).
So a standard local caching proxy that runs as a separate process, can be operated independently, and/or can be launched automatically by Bazel for improved usability might be an idea worth thinking about.
Is there any plan to roll out the "virtual remote filesystem" soon? I am interested to learn more about it and can help if needed. We are hitting a network speed bottleneck.
yep, please follow https://github.com/bazelbuild/bazel/issues/6862
Any plan of implementing the max size feature or a garbage collector for the local cache?
This is a much needed feature in order to use Remote Builds without the Bytes, since naively cleaning up the disk cache results in build failures.
Any updates on this?
+1 We would like to be able to set the max size for the cache. Currently we rely on users doing this manually. We could add a script to do this but it feels like it would be a good feature for Bazel to have.
+1 on this, I had to write a script to keep my local disk from filling up. (By doing this I also discovered that something creates non-writable directories in `.cache/bazel`, which seems bad in general.)
+1 on this feature request. I need it so I can run Bazel inside a Docker container.
+1.
Some of you have mentioned that you have implemented your own workarounds; it would be great to post them in this thread, because mine is just terrible: when my OS complains that it has 0 bytes left, I delete `~/.cache/bazel` and the next build is very slow.
On Linux, I was using the `find` command to delete the oldest files. I use something like:

```sh
find /PATH_TO_DIRECTORY -type f -mtime +60 -delete
```

The `+60` means to delete files not changed in the last 60 days, so adjust this value depending on how quickly the cache fills.
Take care with that command. It is dangerous! It can easily delete too many files.
I had a workaround similar to @nouiz's, but in a crontab:

```
@daily find /usr/local/google/home/gcmn/.cache/bazel* -mtime +12 -type f -delete
```

but it ended up causing really hard-to-debug issues (see https://github.com/bazelbuild/bazel/issues/12630).
Note that there are two Bazel caches here. The one stored in `~/.cache/bazel` by default is not the disk cache referenced in this bug. It's the output directory for builds (see https://docs.bazel.build/versions/master/output_directories.html). This will contain:
- `install/`, unpacked data files from the Bazel binary (0.5G on my machine)
- `cache/`, which is not mentioned anywhere in the documentation and I have no idea what it does, but based on file paths like `cache/repos/v1/content_addressable/sha256/` it is, I guess, some kind of content-addressable indexing of repos :shrug: (300M on my machine)
- One directory per workspace root from which you've invoked Bazel (as in the path to the workspace on your machine). Each of these is an "output base". On my machine these are typically 0.5-1.5G each, but it's going to depend on what you're building.
Probably you don't want to delete the first two directories (well, as I said, I have no idea what the second one is for, but best not to touch it). They don't seem to grow in size over time either.

Based on my experience in https://github.com/bazelbuild/bazel/issues/12630, the cache entries for the individual workspace roots are not at the file level, however. That is, you can't just delete a single file in this "cache" and expect a correct build. They're at some directory level that is more granular than the whole directory, but I'm honestly not sure how much more. To make things more interesting, the timestamps of files are stubbed out in some places, so mtime is going to behave poorly on them.

So I think the thing to do here is to look at the mtime of `$OUTPUT_BASE/lock`. This should reflect the last time this entire directory was actually used and would help you clean up old directories. I'm pretty sure you could delete things in a more granular fashion, but it would require more investigation to see how to do so safely. Like, some of these are fetches of entire external repositories that Bazel will refetch if they're not present (but will get very upset if only part of them is present).
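As a sketch of that whole-output-base approach, assuming the default output root layout under `~/.cache/bazel/_bazel_$USER/` and an arbitrary 30-day threshold (both assumptions, adjust for your setup):

```sh
# Sketch: remove entire output bases whose lock file hasn't been
# touched in 30+ days. install/ and cache/ contain no lock file,
# so the glob skips them.
for lock in "$HOME"/.cache/bazel/_bazel_"$USER"/*/lock; do
  [ -e "$lock" ] || continue           # glob matched nothing
  if [ -n "$(find "$lock" -mtime +30)" ]; then
    rm -rf -- "$(dirname "$lock")"     # remove the whole output base
  fi
done
```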
Now moving to the Bazel disk cache, which is actually what's referenced in this bug. You determine the location of this directory with `--disk_cache`. Personally, I set `build --disk_cache=~/.cache/bazel-disk-cache` in `~/.bazelrc` so it always goes there. I think my aforementioned cron was behaving fine with this cache, for which individual files are entire cache entries (at least I didn't notice anything like the other issue). For now, I've disabled my cron and will reinvestigate the next time Bazel brings my machine to a screeching halt by using all my disk space.
The general theme here is that Bazel has caches, but they're missing a pretty key feature of caches: eviction. Without it, users are left implementing weird and hacky workarounds. I wish someone from the Bazel team could at least endorse some workaround (like a safe script to run on a cron).
`~/.cache/bazel` was definitely growing (apparently without bound?) for me.
I changed jobs so I'm not using bazel anymore, but this is the script I was running from cron: https://gist.github.com/mr-salty/a66119941e797d9eb49b15ea211ea968
It's mostly just `find` but takes care of some subtle issues. I never did track down how I ended up with non-writable directories in my cache, maybe something involving Docker... so most people may not need that. Feel free to use it as needed, but Bazel should really take care of this itself.
@nouiz thank you for the valiant effort; but at least on my machine, I can see from this command that some if not all of my bazel files were made on January 1st, 1970:

```sh
find ~/.cache/bazel/ -type f -mtime +12000 | xargs ls -la
```

If anyone wants to use `find -mtime` to get old files, run something like that first to check that you are not just going to delete everything.
Thanks for the warning. I do not recall having such a problem with the dates/times. It is good to know!
I think you want to use access time (`-atime`) instead of modified time (`-mtime`); there could be older files that don't change frequently but are still used during each build.
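If your filesystem does record access times, the earlier command can be adapted as follows (same example path as above; see the NFS caveat in the next reply):

```sh
# Delete disk-cache files not *accessed* in the last 60 days.
# Only meaningful if the filesystem actually updates atime.
find ~/.cache/bazel-disk-cache -type f -atime +60 -delete
```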
Good point. But atime doesn't always work: on NFS servers, updating atime is disabled most of the time. I do not recall all the details of why I used mtime; I found something that works well enough for me and used it ;)
TF frequently forces a rebuild of everything, so for me it wouldn't have helped.
> at least on my machine, I can see from this command that some if not all of my bazel files were made on January 1st, 1970.
Yeah, this is what I mentioned above:

> the timestamps of files are stubbed out in some places
Apparently they do this to avoid the build depending on the timestamp of the file (and therefore a loss in hermeticity/caching).