Globus Compute executor
This is a proof-of-concept for #467. It takes advantage of the fact that Globus Compute provides an implementation of Python's `concurrent.futures.Executor`.
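For context, this is roughly what that Executor interface looks like when the Globus Compute SDK is used directly, independently of cubed (a minimal sketch, not code from this PR; the endpoint UUID is just the one from my config below and would need to be replaced with your own):

```python
from globus_compute_sdk import Executor


def double(x):
    return 2 * x


# Executor implements concurrent.futures.Executor, so submit() returns a
# standard concurrent.futures.Future whose result is computed on the endpoint.
with Executor(endpoint_id="1345cf05-8762-4c24-8def-a131252cf2de") as gce:
    future = gce.submit(double, 21)
    print(future.result())  # 42
```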
I tested it by running a Globus Compute endpoint locally, following https://globus-compute.readthedocs.io/en/latest/endpoints/endpoints.html.
To set up the execution environment, I created a `globus-compute-worker` conda environment and ran `pip install cubed globus-compute-endpoint` in it.
Here's the `config.yaml` for the endpoint (stored in `~/.globus_compute/<endpoint-name>/`):
```yaml
display_name: null
engine:
  max_workers_per_node: 1
  provider:
    init_blocks: 1
    max_blocks: 1
    min_blocks: 0
    type: LocalProvider
    worker_init: |
      conda activate globus-compute-worker
      conda env list
  type: GlobusComputeEngine
```
Here's my `cubed.yaml` (stored in `$(pwd)/globus-compute`):
```yaml
spec:
  allowed_mem: "2GB"
  executor_name: "globus-compute"
  executor_options:
    endpoint_id: '1345cf05-8762-4c24-8def-a131252cf2de'
```
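For comparison, the same spec could presumably also be built programmatically; a rough sketch, assuming the keys under `spec` map one-to-one onto `cubed.Spec` keyword arguments (I haven't verified this form against this branch):

```python
import cubed

# Equivalent of the cubed.yaml above, expressed directly as a Spec
# (assumes Spec accepts executor_name/executor_options like the config does).
spec = cubed.Spec(
    allowed_mem="2GB",
    executor_name="globus-compute",
    executor_options={"endpoint_id": "1345cf05-8762-4c24-8def-a131252cf2de"},
)
```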
Then I successfully ran:
```shell
CUBED_CONFIG=$(pwd)/globus-compute python add-asarray.py
```
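`add-asarray.py` is one of the cubed example scripts; for completeness, here is a minimal sketch of that kind of computation, assuming the spec (and hence the Globus Compute executor) is picked up from `CUBED_CONFIG` rather than passed explicitly:

```python
import cubed.array_api as xp

# Two small 3x3 arrays split into 2x2 chunks; the spec (allowed_mem,
# executor) comes from the CUBED_CONFIG file rather than being passed here.
a = xp.asarray([[1, 2, 3], [4, 5, 6], [7, 8, 9]], chunks=(2, 2))
b = xp.asarray([[1, 1, 1], [1, 1, 1], [1, 1, 1]], chunks=(2, 2))
c = xp.add(a, b)

# compute() executes the plan on the configured executor, i.e. the
# Globus Compute endpoint in this case.
print(c.compute())
```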