[SPARK-44639][SS][YARN] Use Java tmp dir for local RocksDB state storage on Yarn
What changes were proposed in this pull request?
Update the RocksDB state store to keep its local data underneath java.io.tmpdir, instead of under the directory returned by Utils.getLocalDir, when running on YARN.
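A minimal sketch of the directory-selection logic, assuming a hypothetical LocalDirUtils helper (names here are illustrative, not the actual code in the PR). One grounded detail it relies on: YARN sets the CONTAINER_ID environment variable inside every container it launches, which is how Spark itself detects that it is running in one.

```scala
import java.io.File

object LocalDirUtils {
  /**
   * Directory for data that must not outlive this executor. Inside a YARN
   * container, java.io.tmpdir points under the per-container directory,
   * which the NodeManager removes when the container exits -- even if the
   * JVM crashed and shutdown hooks never ran. The env map is injectable so
   * tests can simulate running inside a container.
   */
  def executorTempDir(sparkLocalDir: => String,
                      env: Map[String, String] = sys.env): File = {
    // YARN sets CONTAINER_ID in the environment of every container it launches.
    if (env.contains("CONTAINER_ID")) {
      new File(System.getProperty("java.io.tmpdir"))
    } else {
      // Elsewhere, keep the existing behavior: the configured Spark local dir
      // (what Utils.getLocalDir(conf) would return in Spark itself).
      new File(sparkLocalDir)
    }
  }
}
```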
Why are the changes needed?
On YARN, the local RocksDB directory is placed in a directory created inside the root application folder such as
/tmp/nm-local-dir/usercache/<user>/appcache/<app_id>/spark-<uuid>/StateStoreId(...)
The problem with this is that if an executor crashes for some reason (like OOM) and the shutdown hooks don't get run, this directory sticks around until the application finishes. Long-running jobs can therefore slowly accumulate more and more temporary space until the node manager eventually goes unhealthy.
Because this data will only ever be accessed by the executor that created it, it makes more sense to store the data inside the container folder, which the node manager always cleans up when that YARN container is torn down. YARN sets java.io.tmpdir to a path inside this directory, such as
/tmp/nm-local-dir/usercache/<user>/appcache/<app_id>/<container_id>/tmp/StateStoreId(...)
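For illustration, a small check one could run on a YARN executor to see the two candidate locations side by side. This is a sketch, not part of the PR; LOCAL_DIRS is the environment variable YARN uses to expose the application-level local dirs to a container.

```scala
object TmpDirCheck {
  def main(args: Array[String]): Unit = {
    // Inside a YARN container this prints a container-scoped path like
    // .../appcache/<app_id>/<container_id>/tmp
    println(s"java.io.tmpdir = ${System.getProperty("java.io.tmpdir")}")
    // LOCAL_DIRS holds application-level dirs like .../appcache/<app_id>
    // (comma-separated when multiple disks are configured)
    println(s"LOCAL_DIRS     = ${sys.env.getOrElse("LOCAL_DIRS", "<not set>")}")
  }
}
```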
It looks like only YARN sets the tmpdir property; other resource managers (standalone and k8s) always rely on the local dirs setting/env vars. I went back and forth on whether to add a new Util function (e.g. getExecutorLocalDir) or add a flag to getLocalDir, but just kept it local to the RocksDB provider for now. There may be more cases where it makes sense to use the tmpdir on YARN rather than the local dirs for data that doesn't need to persist beyond executor shutdown.
Does this PR introduce any user-facing change?
There shouldn't be any effective change, other than preventing disk space from filling up on node managers in the scenarios described above.
How was this patch tested?
New UT
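A hedged sketch of what such a test could look like against the hypothetical helper sketched earlier (the actual test in the PR may differ); it uses ScalaTest, which Spark's suites are built on.

```scala
import org.scalatest.funsuite.AnyFunSuite

class LocalDirUtilsSuite extends AnyFunSuite {
  test("prefers java.io.tmpdir inside a YARN container") {
    // Simulate the env var YARN sets inside a container.
    val yarnEnv = Map("CONTAINER_ID" -> "container_1700000000000_0001_01_000002")
    val dir = LocalDirUtils.executorTempDir("/configured/local/dir", yarnEnv)
    assert(dir.getPath === System.getProperty("java.io.tmpdir"))
  }

  test("falls back to the configured local dir elsewhere") {
    val dir = LocalDirUtils.executorTempDir("/configured/local/dir", Map.empty)
    assert(dir.getPath === "/configured/local/dir")
  }
}
```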
@HeartSaVioR
The config name is a little awkward, and I hate adding more configs, but I'm not familiar with resource managers other than YARN or with the effects of making this change the default. I think on YARN there are no real drawbacks to this: the node manager picks one of the local dirs to put the container dir in, so this just makes Spark deterministically use that same path instead of potentially a different configured local dir.
Yeah, this seems to be working around a specific resource manager in a corner case, I'm not sure if this is a good solution.
I agree; I don't like adding a config either. I think it should just be changed globally to use the Java tmp dir instead of the configured local dirs, but I don't know how local directories behave on k8s or standalone. I think the current behavior on YARN is incorrect because it uses application-level directories for executor/container-specific temp files that should be deleted when that executor/container finishes or fails. This is not like shuffle data behind a shuffle service, which should outlive the executor/container.
After looking through the code a little bit, it looks like setting java.io.tmpdir is only a YARN thing, so I updated this to just always use the tmpdir when running on YARN, and removed the new ugly config.
Updated to create a util function for this behavior, to make it clearer what's happening and less specific to RocksDB, though the RocksDB state store is currently the only use case.
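For context, a hypothetical call site showing how a state store provider could route its working directory through the helper sketched earlier; getLocalDir stands in for Spark's Utils.getLocalDir(conf), and storeId is an illustrative identifier.

```scala
import java.io.File

object RocksDbDirExample {
  // Hypothetical call site: pick the RocksDB working directory through the
  // helper above, so it lands under the container dir when running on YARN.
  def rocksDbWorkingDir(getLocalDir: => String, storeId: String): File =
    new File(LocalDirUtils.executorTempDir(getLocalDir), s"StateStoreId-$storeId")
}
```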