
[Core feature] Decouple submitterPod resources from ray task pod_template


Motivation: Why do you think this is important?

Currently the ray plugin uses the pod_template provided to the task as the basis for all pod specs:

  • The RayCluster head
  • RayCluster workers
  • The Kubernetes Job that runs ray job submit (the submitter)

This is a pain point when the RayCluster head and workers are meant to be scheduled on GPU nodes: I do not want to waste an entire GPU node on the submitter.
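
For illustration, a minimal sketch of the coupling (field names follow flytekitplugins-ray; task-level resources stand in for a full pod_template here):

from flytekit import task, Resources
from flytekitplugins.ray import RayJobConfig, HeadNodeConfig, WorkerNodeConfig

@task(
    task_config=RayJobConfig(
        head_node_config=HeadNodeConfig(),
        worker_node_config=[WorkerNodeConfig(group_name="gpu-group", replicas=2)],
    ),
    # These task-level settings become the basis for the head pod, the worker
    # pods, AND the submitter pod -- so the submitter also requests a GPU.
    requests=Resources(gpu="1", mem="32Gi"),
    limits=Resources(gpu="1", mem="32Gi"),
)
def train() -> None:
    ...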

Goal: What should the final outcome look like, ideally?

It is not currently possible to configure the RayCluster pod templates and the submitter pod template separately. If it were, the submitter could be scheduled with appropriately minimal resource requests, leaving out configuration that is irrelevant to it (in my use case, only the Ray head/workers need the GPU, the shared-memory volume mount, the service account, etc.).

I found #4170, which looks like an attempt to address this issue, but it hasn't seen any progress since October 2023. At a high level its approach makes sense to me: the pod_template provided to the task configures the submitter job, and the Ray head/workers get new config fields that set their resources explicitly. In my opinion that change is headed in the right direction, but it would be improved by a slight adaptation that lets the user provide an entire pod template alongside the resources; otherwise it won't be possible to configure things like volume mounts and env vars on the Ray head/workers.

Describe alternatives you've considered

I don't see an alternative to adding separate config parameters for the separate pod specs. Hard-coding the submitter pod spec to minimal resource requests (e.g. just a small request/limit for CPU and memory) doesn't seem like a good idea, because someone may well want a GPU for the submitter, and it wouldn't make sense to preclude that use case IMO.

I do see this PR that adds a Resource config to

Propose: Link/Inline OR Additional context

No response

Are you sure this issue hasn't been raised already?

  • [X] Yes

Have you read the Code of Conduct?

  • [X] Yes

jpoler avatar Aug 16 '24 17:08 jpoler

Thank you for opening your first issue here! 🛠

welcome[bot] avatar Aug 16 '24 17:08 welcome[bot]

Decoupling the submitterPod resources from the Ray task pod_template means configuring the RayCluster head, workers, and submitter pod separately. That would allow you to allocate minimal resources to the submitter while giving the RayCluster head and workers the resources they need.

According to Flyte Issue #4377 and Flyte Issue #4674, you can specify different configurations for the head node and worker nodes using RayJobConfig. However, the current implementation does not support a separate configuration for the submitter pod. The proposed solution adds new configuration fields that explicitly set resources and pod templates for the Ray head and worker nodes, while the submitter pod is configured separately.

Here's an example configuration that demonstrates how to set up different resources for the Ray head, workers, and submitter pod (note that PodConfig and submitter_pod_config are part of the proposed API, not the current plugin):

from flytekit import task, Resources
from flytekitplugins.ray import RayJobConfig, HeadNodeConfig, WorkerNodeConfig

ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(
        requests=Resources(mem="64Gi", cpu="4"),
        limits=Resources(mem="64Gi", cpu="4"),
        pod_template_name="ray_head_node",
    ),
    worker_node_config=[
        WorkerNodeConfig(
            group_name="cpu-group",
            replicas=4,
            requests=Resources(mem="256Gi", cpu="64"),
            limits=Resources(mem="256Gi", cpu="64"),
            pod_template_name="ray_worker_cpu",
        ),
        WorkerNodeConfig(
            group_name="gpu-group",
            replicas=2,
            requests=Resources(mem="480Gi", cpu="60", gpu="2"),
            limits=Resources(mem="480Gi", cpu="60", gpu="2"),
            pod_template_name="ray_worker_gpu",
        ),
    ],
    # Hypothetical: neither PodConfig nor submitter_pod_config exists in the
    # plugin today; this only illustrates the proposed decoupled submitter.
    submitter_pod_config=PodConfig(
        requests=Resources(mem="2Gi", cpu="1"),
        limits=Resources(mem="2Gi", cpu="1"),
        pod_template_name="ray_submitter",
    ),
)

@task(task_config=ray_config)
def ray_task():
    # Task implementation goes here.
    pass

Would you like more details on any specific part of this configuration?


runllm[bot] avatar Aug 16 '24 17:08 runllm[bot]

How about adding a submitter spec to the RayCluster proto, and adding resources to the head, worker, and submitter specs?
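
Roughly, the proposed shape (sketched as Python dataclasses for brevity rather than actual protobuf; everything beyond the existing head_group_spec/worker_group_spec fields is illustrative):

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SubmitterSpec:
    # New: resource requests/limits that apply only to the submitter pod.
    requests: Dict[str, str] = field(default_factory=dict)  # e.g. {"cpu": "1"}
    limits: Dict[str, str] = field(default_factory=dict)

@dataclass
class RayCluster:
    head_group_spec: Optional[object] = None  # existing field; would gain resources
    worker_group_spec: List[object] = field(default_factory=list)  # existing; same
    submitter_spec: Optional[SubmitterSpec] = None  # the new field proposed here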

pingsutw avatar Aug 22 '24 20:08 pingsutw

@jpoler would you be open to contributing?

kumare3 avatar Sep 11 '24 04:09 kumare3

@pingsutw mind if I take this issue off your hands? I'm working on Flyte + Ray at work and we'll need this change.

Sovietaced avatar Oct 21 '24 03:10 Sovietaced

How about using a subset of TaskExecutionMetadata instead of just resources? That is what's used when creating the podSpec for tasks. TaskNodeOverrides may work too, but I'm hoping we can set interruptible separately for the head node and the worker nodes.

amitani avatar Nov 05 '24 01:11 amitani

> How about using a subset of TaskExecutionMetadata instead of just resources? That is what's used when creating the podSpec for tasks. TaskNodeOverrides may work too, but I'm hoping we can set interruptible separately for the head node and the worker nodes.

We ended up adding support for plumbing through the whole pod spec, which I think will be sufficient.

Sovietaced avatar Nov 19 '24 01:11 Sovietaced

The flytepropeller and flytekit changes have landed. I think we're just waiting for a flytekit release at this point, which should hopefully come in December.

Sovietaced avatar Nov 19 '24 01:11 Sovietaced

This has landed in v1.14.0.
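
For anyone landing here later, usage should look roughly like this (a sketch from my reading of the v1.14 changes; exact field names may differ):

from flytekit import task, Resources
from flytekitplugins.ray import RayJobConfig, HeadNodeConfig, WorkerNodeConfig

@task(
    task_config=RayJobConfig(
        head_node_config=HeadNodeConfig(
            requests=Resources(cpu="4", mem="64Gi", gpu="1"),
            limits=Resources(cpu="4", mem="64Gi", gpu="1"),
        ),
        worker_node_config=[
            WorkerNodeConfig(
                group_name="gpu-group",
                replicas=2,
                requests=Resources(cpu="60", mem="480Gi", gpu="2"),
                limits=Resources(cpu="60", mem="480Gi", gpu="2"),
            ),
        ],
    ),
    # Task-level resources now apply only to the submitter pod, which can
    # therefore stay small.
    requests=Resources(cpu="1", mem="2Gi"),
    limits=Resources(cpu="1", mem="2Gi"),
)
def ray_task() -> None:
    ...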

Sovietaced avatar Jan 22 '25 22:01 Sovietaced