[request] Do we have a plan to merge the Kubernetes part into kubeflow/pytorch-operator?

Open gaocegege opened this issue 5 years ago • 22 comments

cc @Jeffwan

I think pytorch-operator and elasticjob-operator share a similar scope. I am wondering if we can collaborate to support both PyTorch and PyTorch Elastic well.

gaocegege avatar Aug 11 '20 02:08 gaocegege

Hi @gaocegege, thanks for bringing this up! We've been looking into adding support for TorchElastic parameters to the existing Kubeflow pytorch operators, and we are planning to bundle TorchElastic in the pytorch docker image for the PyTorch 1.7 release, which shouldn't be a blocker for the Kubeflow work but certainly makes things smoother for users.

We welcome collaboration! And we can certainly use some help from someone who is knowledgeable about Kubeflow in making the pytorch operators support elastic parameters.

kiukchung avatar Aug 11 '20 21:08 kiukchung

@kiukchung Thanks for your reply! We would be glad to support it in the PyTorch operator, and we can help propose and implement it in the Kubeflow community.

gaocegege avatar Aug 12 '20 01:08 gaocegege

/cc @johnugeorge @andreyvelich @kuikuikuizzZ

gaocegege avatar Aug 12 '20 01:08 gaocegege

That would be awesome! I think we can approach this from both sides: you from the Kubeflow side and us from the PyTorch/TorchElastic side.

We can help from the torchelastic and pytorch side by reviewing/expediting relevant PRs and prioritizing any changes required to TorchElastic or PyTorch (in general, Facebook-side) user-facing APIs and documentation to make the work on your end smoother (e.g. bundling torchelastic in the pytorch docker image). Let us know if you have any specific asks.

Looking forward to seeing elastic support added to Kubeflow pytorch operators!

cc) @drdarshan, @chauhang, @tierex, @jspisak

kiukchung avatar Aug 12 '20 03:08 kiukchung

Yeah, we are going to put together some action items over the next few days, and we may have some specific asks once we are ready. Thanks for your help!

gaocegege avatar Aug 12 '20 03:08 gaocegege

+1 @gaocegege - thanks for reaching out on this! Kubeflow is a high-priority platform for us, so anything we can do to make it easier for users to leverage advanced features would be great, especially if we can collaborate to make it happen.

cc @chauhang

jspisak avatar Aug 12 '20 03:08 jspisak

@gaocegege @kiukchung I can write a proposal for it. It needs some interface changes. We can have a discussion by the end of the month.

Jeffwan avatar Aug 13 '20 17:08 Jeffwan

sounds good @Jeffwan, looking forward to the proposal! Keeping this issue open to continue the conversation on the same thread...

kiukchung avatar Aug 13 '20 23:08 kiukchung

@Jeffwan Thanks! Looking forward to it.

gaocegege avatar Aug 14 '20 02:08 gaocegege

ping @Jeffwan

Any progress on this issue?

gaocegege avatar Sep 16 '20 06:09 gaocegege

@gaocegege did you have some thoughts on how we can merge elasticjob-operator with pytorch-operator? FWIW, torchelastic is being bundled with the PT docker image (https://github.com/pytorch/pytorch/blob/master/Dockerfile#L55) in the next PT release, so it should be simpler to add an option to the existing pytorch-operator that wraps the entrypoint with python -m torchelastic.distributed.launch and sets some launch args via env vars. I'm guessing this is why @kuikuikuizzZ asked for this feature in https://github.com/pytorch/elastic/issues/128 (I've got an open PR for this, waiting for someone on my team to review it).
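
For illustration, here is a minimal sketch of what that wrapping could look like on the operator side; the helper name and the ELASTIC_* env var names are placeholders, not an existing API, and it relies on Kubernetes env var expansion in the container command:

package controller

import (
	corev1 "k8s.io/api/core/v1"
)

// wrapWithElasticLaunch rewrites a container's command so that the user's
// entrypoint runs under the torchelastic launcher. The $(ELASTIC_*) values
// would have to be injected by the operator as env vars on the container.
func wrapWithElasticLaunch(c *corev1.Container) {
	userEntrypoint := c.Command // e.g. ["python", "/workspace/train.py"]
	c.Command = append([]string{
		"python", "-m", "torchelastic.distributed.launch",
		"--nnodes", "$(ELASTIC_MIN_SIZE):$(ELASTIC_MAX_SIZE)",
		"--nproc_per_node", "$(ELASTIC_NPROC_PER_NODE)",
	}, userEntrypoint...)
}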

cc) @jspisak , @chauhang

kiukchung avatar Oct 03 '20 05:10 kiukchung

cc @Jeffwan

gaocegege avatar Oct 04 '20 01:10 gaocegege

FWIW torchelastic is being bundled with the PT docker (https://github.com/pytorch/pytorch/blob/master/Dockerfile#L55) in the next PT release

That's good news. The operator will only be used for orchestration; it's the user's responsibility to wrap the entrypoint. On the operator side, I think we can determine whether a job is a torch elastic job by checking its args or envs. The common logic will be similar, but adding/removing pods will be allowed for elastic jobs (it's not allowed for general torch jobs).
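
As a rough sketch of that detection idea (the marker env var name is hypothetical):

package controller

import (
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// isElasticJob classifies a container as a torch elastic job by inspecting
// its command/args and env, as described above.
func isElasticJob(c corev1.Container) bool {
	for _, s := range append(append([]string{}, c.Command...), c.Args...) {
		if strings.Contains(s, "torchelastic.distributed.launch") {
			return true
		}
	}
	for _, e := range c.Env {
		// Hypothetical marker env var set by the user or the operator.
		if e.Name == "ENABLE_TORCH_ELASTIC" && e.Value == "true" {
			return true
		}
	}
	return false
}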

This is a little bit delayed; I was busy with a testing-infra issue in the kubeflow org. Currently the presubmit jobs block all PRs, and it will take some time to get all the issues fixed.

What's the release schedule for torchelastic 0.2.1 and for the pytorch version that bundles torchelastic?

Jeffwan avatar Oct 08 '20 18:10 Jeffwan

sorry for the late reply.

@Jeffwan -

  1. torchelastic 0.2.1 has been released as of 10/05/2020 (https://pypi.org/project/torchelastic/#history)
  2. As of PT 1.7, torchelastic 0.2.1+ is bundled with the main PT docker image

What do you recommend as next steps? cc) @chauhang, @jspisak, @BorisValkov5

kiukchung avatar Dec 15 '20 20:12 kiukchung

Awesome! I think it LGTM:

add an option to the existing pytorch-operator to wrap the entrypoint with python -m torchelastic.distributed.launch and set some launch args via env vars

gaocegege avatar Dec 23 '20 10:12 gaocegege

I looked through the code bases of elastic and the elasticjob operator, and I propose the following design in pytorch-operator:

// PyTorchJobSpec is a desired state description of the PyTorchJob.
type PyTorchJobSpec struct {
+	// A switch to enable dynamic workers (elastic training).
+	EnableDynamicWorker *bool `json:"enableDynamicWorker,omitempty"`

+	// Lower and upper bounds on the number of workers for an elastic job.
+	MinReplicas *int32 `json:"minReplicas,omitempty"`
+	MaxReplicas *int32 `json:"maxReplicas,omitempty"`

	// Specifies the duration (in seconds) since startTime during which the job can remain active
	// before it is terminated. Must be a positive integer.
	// This setting applies only to pods where restartPolicy is OnFailure or Always.
	// +optional
	ActiveDeadlineSeconds *int64 `json:"activeDeadlineSeconds,omitempty"`

	// Number of retries before marking this job as failed.
	// +optional
	BackoffLimit *int32 `json:"backoffLimit,omitempty"`

	// Defines the policy for cleaning up pods after the PyTorchJob completes.
	// Defaults to None.
	CleanPodPolicy *common.CleanPodPolicy `json:"cleanPodPolicy,omitempty"`

	// Defines the TTL for cleaning up finished PyTorchJobs (temporary
	// before Kubernetes adds the cleanup controller).
	// It may take extra ReconcilePeriod seconds for the cleanup, since
	// reconcile gets called periodically.
	// Defaults to infinite.
	TTLSecondsAfterFinished *int32 `json:"ttlSecondsAfterFinished,omitempty"`

	// A map of PyTorchReplicaType (type) to ReplicaSpec (value). Specifies the PyTorch cluster configuration.
	// For example,
	//   {
	//     "Master": PyTorchReplicaSpec,
	//     "Worker": PyTorchReplicaSpec,
	//   }
	PyTorchReplicaSpecs map[PyTorchReplicaType]*common.ReplicaSpec `json:"pytorchReplicaSpecs"`
}

The switch EnableDynamicWorker is similar to https://github.com/kubeflow/tf-operator/blob/master/pkg/apis/tensorflow/v1/types.go#L67, but maybe we could have a declarative API DynamicWorker instead. It is used to decide whether we should inject the corresponding entrypoint and args.

The other two fields, MinReplicas and MaxReplicas, are similar to the definitions in elasticjob.
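
As a rough sketch (buildElasticArgs is hypothetical, not part of the proposal itself), these fields could translate into launcher flags roughly like this, sitting alongside the PyTorchJobSpec above:

import "fmt"

// buildElasticArgs turns the proposed fields into arguments for
// torchelastic.distributed.launch; its --nnodes flag accepts a "min:max"
// range for elastic jobs.
func buildElasticArgs(spec PyTorchJobSpec) []string {
	if spec.EnableDynamicWorker == nil || !*spec.EnableDynamicWorker {
		return nil
	}
	if spec.MinReplicas == nil || spec.MaxReplicas == nil {
		return nil // a real implementation would apply defaults here
	}
	return []string{
		"-m", "torchelastic.distributed.launch",
		"--nnodes", fmt.Sprintf("%d:%d", *spec.MinReplicas, *spec.MaxReplicas),
	}
}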

The only remaining question is whether we should add these fields and logic to PyTorchJob v1 or v2alpha1. /cc @andreyvelich @johnugeorge @Jeffwan

gaocegege avatar Dec 28 '20 16:12 gaocegege

Thanks @gaocegege. Should we follow the same approach as TFJob and use a plain bool instead of the pointer *bool?

EnableDynamicWorker bool `json:"enableDynamicWorker,omitempty"`

The only question is if we should add these fields and logic in PyTorchJob v1 or v2alpha1?

From my perspective, all new APIs should follow the Kubernetes API versioning policy, with the progression alpha -> beta -> stable.

Currently, we only have the v1 version of the API: https://github.com/kubeflow/pytorch-operator/tree/master/pkg/apis/pytorch. What do you think @gaocegege @johnugeorge?

andreyvelich avatar Jan 04 '21 11:01 andreyvelich

I think it should be a pointer since it can be nil.
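
For illustration, the distinction the pointer buys (exampleSpec is just a sketch):

type exampleSpec struct {
	// With a pointer the API can distinguish three states:
	//   nil   -> field not set by the user (the operator applies a default)
	//   false -> explicitly disabled
	//   true  -> explicitly enabled
	// A plain bool with omitempty would collapse "unset" and "false", since
	// false is the zero value and is dropped during serialization.
	EnableDynamicWorker *bool `json:"enableDynamicWorker,omitempty"`
}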

gaocegege avatar Jan 04 '21 12:01 gaocegege

cc) @drdarshan

kiukchung avatar Jan 08 '21 18:01 kiukchung

Elastic has been merged into PyTorch 1.9, which has now been released, so I think we can support it in pytorch-operator now.

gaocegege avatar Aug 04 '21 08:08 gaocegege

FYI, there's a proposal, https://github.com/kubeflow/community/pull/522, to support Torch Elastic in Kubeflow, and we implemented the MVP here: https://github.com/kubeflow/training-operator/pull/1453

It is tested with the echo and imagenet examples (with some changes to support CPU training). The c10d and etcd rendezvous backends are both supported and tested.

worker-0 is not fault-tolerant with the c10d backend, but it is static and will not be re-ranked to another new worker. With the etcd backend, all workers are fault-tolerant and elastic. There were some issues, but they have already been fixed in the upstream master branch:

  • https://github.com/pytorch/pytorch/issues/67616
  • https://github.com/pytorch/pytorch/issues/67742

It should work well once there is a new release that includes these patches.

If you are interested in it, please comment in the PR. Thanks!

cc @Jeffwan

gaocegege avatar Nov 12 '21 07:11 gaocegege

HPA support has been added for PyTorch Elastic in the training operator in https://github.com/kubeflow/training-operator/pull/1701

johnugeorge avatar Dec 22 '22 07:12 johnugeorge