
OpenMP barrier slowness in neural network examples with high core count.

Open · mratsim opened this issue 3 years ago · 1 comment

The following benchmark, reduced to a single call to `linear` (which is just a thin wrapper around BLAS), takes 1.6 s without `-d:openmp` and 15 s with `-d:openmp`:

```nim
import ../src/arraymancer

# Learning XOR function with a neural network.
proc main() =
  # Autograd context / neuralnet graph
  let ctx = newContext Tensor[float32]
  let bsz = 32 # batch size

  let x_train_bool = randomTensor([bsz * 100, 2], 1).astype(bool)
  let y_bool = x_train_bool[_,0] xor x_train_bool[_,1]
  let x_train = ctx.variable(x_train_bool.astype(float32))
  let y = y_bool.astype(float32)

  # We will build the following network:
  # Input --> Linear(out_features = 3) --> relu --> Linear(out_features = 1) --> Sigmoid --> Cross-Entropy Loss

  let layer_3neurons = ctx.variable(
                        randomTensor(3, 2, 2.0f) -. 1.0f,
                        requires_grad = true
                      )

  let classifier_layer = ctx.variable(
                          randomTensor(1, 3, 2.0f) -. 1.0f,
                          requires_grad = true
                        )

  # Stochastic Gradient Descent
  let optim = newSGD[float32](
      layer_3neurons, classifier_layer, 0.01f
    )

  # Learning loop
  for epoch in 0..10000:
    for batch_id in 0..<100:

      # minibatch offset in the Tensor
      let offset = batch_id * 32
      let x = x_train[offset ..< offset + 32, _]
      let target = y[offset ..< offset + 32, _]

      # Building the network
      let n1 = linear(x, layer_3neurons) # <-- problematic line (linear without bias).
```

It seems like the machine stalls on OpenMP barriers, and the more cores you have, the more problematic it is.

[screenshot]
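To illustrate why high core counts make it worse, here is a stand-alone sketch (plain Nim with the `||` OpenMP iterator, not Arraymancer code) comparing a serial and an OpenMP-annotated element-wise loop over a buffer the size of a single 32x2 float32 minibatch. With only 64 elements of work per call, each parallel region is dominated by waking the thread team and the implicit barrier at the end of the region, and that overhead grows with the number of cores:

```nim
# Illustrative micro-benchmark only. Compile once plainly and once with
# `--passC:-fopenmp --passL:-fopenmp` (roughly what -d:openmp enables)
# and compare the two timings.
import std/monotimes, std/times

const N = 64              # one 32x2 float32 minibatch
const Iters = 100_000

proc scaleSerial(x: var seq[float32]) =
  for i in 0 ..< x.len:
    x[i] *= 2.0f

proc scaleOpenMP(x: var seq[float32]) =
  # `||` emits `#pragma omp parallel for`: every call forks the thread team
  # and joins it at an implicit barrier, even for 64 elements.
  for i in 0 || (x.len - 1):
    x[i] *= 2.0f

proc bench(name: string, op: proc (x: var seq[float32])) =
  var x = newSeq[float32](N)
  let start = getMonoTime()
  for _ in 0 ..< Iters:
    op(x)
  echo name, ": ", inMilliseconds(getMonoTime() - start), " ms"

bench("serial", scaleSerial)
bench("openmp", scaleOpenMP)
```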

mratsim · Jan 03 '21 13:01

At first I thought the culprit was setZero being used inside a parallel region, meaning it either couldn't be parallelized or we would end up with too many threads.

But then, for this script:

```nim
import ../src/arraymancer

# Learning XOR function with a neural network.
proc main() =
  # Autograd context / neuralnet graph
  let ctx = newContext Tensor[float32]
  let bsz = 32 # batch size

  let x_train_bool = randomTensor([bsz * 100, 2], 1).astype(bool)
  let y_bool = x_train_bool[_,0] xor x_train_bool[_,1]
  let x_train = ctx.variable(x_train_bool.astype(float32))
  let y = y_bool.astype(float32)

  # We will build the following network:
  # Input --> Linear(out_features = 3) --> relu --> Linear(out_features = 1) --> Sigmoid --> Cross-Entropy Loss

  let layer_3neurons = ctx.variable(
                        randomTensor(3, 2, 2.0f) -. 1.0f,
                        requires_grad = true
                      )

  let classifier_layer = ctx.variable(
                          randomTensor(1, 3, 2.0f) -. 1.0f,
                          requires_grad = true
                        )

  # Stochastic Gradient Descent
  let optim = newSGD[float32](
      layer_3neurons, classifier_layer, 0.01f
    )

  # Learning loop
  for epoch in 0..10000:
    for batch_id in 0..<100:

      # minibatch offset in the Tensor
      let offset = batch_id * 32
      let x = x_train[offset ..< offset + 32, _]
      let target = y[offset ..< offset + 32, _]

      # Building the network
      let n1 = relu linear(x, layer_3neurons)
      let n2 = linear(n1, classifier_layer)
      let loss = n2.sigmoid_cross_entropy(target)
```

We have:

[screenshot]

which I suspected might be due to nested parallelism here: https://github.com/mratsim/Arraymancer/blob/bdcdfe13e47d3f6e0c5247e16640e45982b83ecc/src/arraymancer/nn_primitives/nnp_sigmoid_cross_entropy.nim#L49-L51
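To make the suspicion concrete, this is the oversubscription pattern I mean, as a hypothetical sketch rather than the actual nnp_sigmoid_cross_entropy code: an outer OpenMP loop whose body calls another OpenMP-parallelized primitive, so every outer thread either spins up its own team or at least pays an extra region entry/exit on each iteration:

```nim
import std/math

# Hypothetical sketch of nested OpenMP parallelism (not the Arraymancer code).
# Both loops expand to `#pragma omp parallel for`.
proc sigmoidInPlace(row: var seq[float32]) =
  for j in 0 || (row.len - 1):     # inner parallel region
    row[j] = 1.0f / (1.0f + exp(-row[j]))

proc applyRows(rows: var seq[seq[float32]]) =
  for i in 0 || (rows.len - 1):    # outer parallel region
    sigmoidInPlace(rows[i])        # nested region on every outer iteration
```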

But even after rewriting it with map-reduce fusion, and ultimately making it serial, we're still stuck in those barriers for seemingly no reason: https://github.com/mratsim/Arraymancer/blob/979f5d5894ec225220f4ec3c02d248a1b1eb7131/src/arraymancer/nn_primitives/nnp_sigmoid_cross_entropy.nim#L44-L57

[screenshot]
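For reference, the fused serial form boils down to a single pass that accumulates the numerically stable per-element loss max(x, 0) - x*y + ln(1 + exp(-|x|)) without materializing any intermediate tensors. A sketch of the idea (not the exact Arraymancer code):

```nim
import std/math

# One serial pass, no temporaries, no OpenMP regions. Uses the stable identity
#   -(y*ln(sigmoid(x)) + (1-y)*ln(1-sigmoid(x))) = max(x, 0) - x*y + ln(1 + exp(-|x|))
proc fusedSigmoidCrossEntropy(input, target: seq[float32]): float32 =
  assert input.len == target.len
  var acc = 0.0f
  for i in 0 ..< input.len:
    let x = input[i]
    let y = target[i]
    acc += max(x, 0.0f) - x * y + ln(1.0f + exp(-abs(x)))
  result = acc / input.len.float32
```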

Then it seemed like copyFromRaw was the biggest culprit left.
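copyFromRaw is essentially a copy of a raw buffer into freshly allocated tensor storage. As a sketch of the general pattern (not a claim about Arraymancer's exact implementation), an OpenMP-chunked copy pays a fork/join per call even when the buffer is only a few hundred bytes, whereas a single copyMem is just a handful of instructions:

```nim
# Illustrative only: two ways to copy a small raw buffer.
proc copyChunkedOpenMP(dst: var seq[float32], src: ptr UncheckedArray[float32], len: int) =
  # Parallel element-wise copy: forks/joins a thread team on every call.
  for i in 0 || (len - 1):
    dst[i] = src[i]

proc copySerial(dst: var seq[float32], src: ptr UncheckedArray[float32], len: int) =
  # Single memcpy: no threads, no barrier.
  copyMem(dst[0].addr, src, len * sizeof(float32))
```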

Conclusion

I'm unsure whether the issue is a misuse of OpenMP parallel sections or inherent to OpenMP's design, but we are clearly reaching its limits. I suspect Facebook came to the same conclusion for PyTorch when they introduced their C10 threadpool, as did Halide with its custom threadpool.

In other words, we likely need to introduce Weave sooner rather than later: OpenMP doesn't cut it, is hard to debug/profile, and is a pain to install on Mac.
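As a rough illustration of the alternative (a sketch against Weave's public API, not an integration plan): Weave keeps one persistent work-stealing threadpool alive for the whole program, so a small parallel loop becomes a cheap task submission instead of an OpenMP-style fork/join of the whole thread team on every call:

```nim
import weave

# Sketch only: an element-wise op scheduled on Weave's persistent threadpool.
proc scale(buf: ptr UncheckedArray[float32], len: int) =
  parallelFor i in 0 ..< len:
    captures: {buf}
    buf[i] *= 2.0f

proc main() =
  var x = newSeq[float32](64)
  init(Weave)                                   # start the threadpool once
  scale(cast[ptr UncheckedArray[float32]](x[0].addr), x.len)
  syncRoot(Weave)                               # wait for outstanding tasks
  exit(Weave)                                   # shut the pool down once, at the end

main()
```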

mratsim · Jan 03 '21 14:01