
Distributed mnist is unexpectedly slow

Open panchul opened this issue 5 years ago • 7 comments

I ran the mnist example with 2 workers on a 2-node Kubernetes cluster running on 2 VMs and expected it to be faster compared with the 1-worker case. However, the total time actually increased, and it got slower the more workers I added. I made several test runs; the timing is reproducible:

  • 1 master 1 worker : 100 seconds
  • 1 master 2 workers: 2 minutes 56 seconds (176 seconds)
  • 1 master 6 workers: 7 minutes 59 seconds (479 seconds)

No GPUs (they are explicitly disabled in the container spec template). Here is the node information:

$ k get nodes
NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-12102812-vmss000000   Ready    agent   28d   v1.15.10
aks-nodepool1-12102812-vmss000001   Ready    agent   28d   v1.15.10

Below is the minimally-modified pytorch-operator/examples/mnist/v1/pytorch_job_mnist_gloo.yaml I used:

```yaml
apiVersion: "kubeflow.org/v1"
kind: "PyTorchJob"
metadata:
  name: "pytorch-dist-mnist-gloo"
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: alek8106/pytorch-dist-mnist-test:1.0
              args: ["--backend", "gloo", "--no-cuda"]
              resources:
                limits:
              #    nvidia.com/gpu: 1
    Worker:
      #replicas: 1
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: alek8106/pytorch-dist-mnist-test:1.0
              args: ["--backend", "gloo", "--no-cuda"]
              resources:
                limits:
               #   nvidia.com/gpu: 1
```

panchul commented May 11 '20 22:05

Issue-Label Bot is automatically applying the labels:

| Label | Probability |
| ----- | ----------- |
| kind/bug | 0.78 |


issue-label-bot[bot] commented May 11 '20 22:05

How about the bandwidth in your cluster?

gaocegege commented May 12 '20 01:05

@gaocegege, it is a local network, with no unusual bottlenecks.

panchul commented May 26 '20 18:05

@panchul I ran into a similar problem when using DataParallel(...) in my code, but I did not find a good solution. Distributed deep learning workloads depend heavily on network bandwidth. If there is no bottleneck on the network, try enlarging the batch size based on the number of workers (see the sketch below).

Refer to https://github.com/pytorch/pytorch/issues/3917 for more details.
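For illustration, here is a minimal sketch of what scaling the per-worker batch size could look like in the training script; the variable names and the base batch size are placeholders, not code from the mnist example (the operator exports WORLD_SIZE to every replica):

```python
# Sketch only: enlarge the per-worker batch size with the number of replicas,
# so each iteration does more compute for the same fixed-size gradient exchange.
import os

base_batch_size = 64                                 # batch size used with a single worker
world_size = int(os.environ.get("WORLD_SIZE", "1"))  # set for each PyTorchJob replica

batch_size = base_batch_size * world_size
# Note: the effective global batch size grows as well, so the learning rate
# usually needs retuning (e.g. with the linear scaling rule).
print(f"replicas={world_size}, per-worker batch_size={batch_size}")
```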

xq2005 commented Jun 18 '20 04:06

I am running into the same problem. Have you solved it?

lwj1980s commented Nov 06 '20 06:11

After each iteration (i.e. one batch), all of the replicas send out their gradients, which are roughly the size of the model. For example, if the model is 100MB:

  • 1 node: no need to send gradients
  • 2 nodes: 2 x 100MB = 200MB per iteration

You can check your network bandwidth and compare it with the model size to see whether the network is the bottleneck. If the network is the cause, you can use a bigger batch size as xq2005 said, or use no_sync in DDP.
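For reference, a sketch of the no_sync pattern with gradient accumulation, assuming a model already wrapped in DistributedDataParallel; the function and argument names here are placeholders, not code from the mnist example:

```python
# Sketch only: accumulate gradients locally for `accum_steps` batches and let
# DDP all-reduce them once per accumulation window instead of every batch.
from contextlib import nullcontext

from torch.nn.parallel import DistributedDataParallel as DDP


def train_epoch(ddp_model: DDP, optimizer, loader, loss_fn, accum_steps: int = 4):
    ddp_model.train()
    optimizer.zero_grad()
    for step, (data, target) in enumerate(loader):
        sync_now = (step + 1) % accum_steps == 0
        # no_sync() suppresses the gradient all-reduce for this backward pass;
        # gradients still accumulate locally in param.grad.
        ctx = nullcontext() if sync_now else ddp_model.no_sync()
        with ctx:
            loss = loss_fn(ddp_model(data), target) / accum_steps
            loss.backward()
        if sync_now:
            optimizer.step()
            optimizer.zero_grad()
```

Note that accumulating gradients this way also increases the effective batch size, so it interacts with the batch-size advice above.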

jalola commented Jun 25 '21 07:06

Ref https://github.com/kubeflow/training-operator/issues/1454

gaocegege commented Oct 28 '21 09:10