[SPARK-46912] Use worker JAVA_HOME and SPARK_HOME instead of from submitter
What changes were proposed in this pull request?
Replace the submitter's JAVA_HOME and SPARK_HOME with the worker's own values when building `localCommand`.
Why are the changes needed?
There is a problem when submitting a job in cluster mode to a standalone cluster: the worker starts the driver JVM using the submitter's JAVA_HOME instead of its own.
Does this PR introduce any user-facing change?
No
Was this patch authored or co-authored using generative AI tooling?
No
Hm, how does JAVA_HOME get from the 'submitter' - what do you mean, the application submitter? but the worker is already running by that point
- The submitter is the client machine that runs the `spark-submit` command (with `--deploy-mode cluster`).
- The worker is already running at that point, but the driver is not. When the master receives a submit request, it launches a driver on a worker; the driver's command then inherits the environment variables from the submit command and uses them in its session. It sounds odd, but that is what I am seeing in my case.
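The fix described above can be sketched as a small environment-merging step. This is a hypothetical illustration, not the actual Spark code: the class and method names (`EnvOverride`, `buildLocalEnv`) are made up for the example, and it only shows the idea that, for a fixed set of host-local keys, the worker's own values should override whatever the submitter captured.

```java
import java.util.HashMap;
import java.util.Map;

public class EnvOverride {
    // Hypothetical: keys that describe the host the process runs on,
    // so the worker's values must win over the submitter's.
    static final String[] WORKER_LOCAL_KEYS = {"JAVA_HOME", "SPARK_HOME"};

    // Start from the environment captured at submit time, then
    // overwrite host-local keys with the worker's own values.
    static Map<String, String> buildLocalEnv(Map<String, String> submitterEnv,
                                             Map<String, String> workerEnv) {
        Map<String, String> merged = new HashMap<>(submitterEnv);
        for (String key : WORKER_LOCAL_KEYS) {
            String workerValue = workerEnv.get(key);
            if (workerValue != null) {
                merged.put(key, workerValue); // worker's value wins
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> submitter = new HashMap<>();
        submitter.put("JAVA_HOME", "/opt/jdk-submitter");
        submitter.put("SPARK_HOME", "/opt/spark-submitter");
        submitter.put("SPARK_LOG_DIR", "/var/log/spark"); // unrelated key, kept as-is

        Map<String, String> worker = new HashMap<>();
        worker.put("JAVA_HOME", "/opt/jdk-worker");
        worker.put("SPARK_HOME", "/opt/spark-worker");

        Map<String, String> env = buildLocalEnv(submitter, worker);
        System.out.println(env.get("JAVA_HOME"));
        System.out.println(env.get("SPARK_HOME"));
        System.out.println(env.get("SPARK_LOG_DIR"));
    }
}
```

Without such an override, the driver launched on the worker would look for a JDK and a Spark installation at paths that only exist on the submitter's machine.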
We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable. If you'd like to revive this PR, please reopen it and ask a committer to remove the Stale tag!