
[Bug]: Wrong gradients in the CUDA implementation of Layer Norm

Open arrufat opened this issue 1 year ago • 10 comments

What Operating System(s) are you seeing this problem on?

Linux (x86-64)

dlib version

19.24

Python version

N/A

Compiler

GCC 12.3.0

Expected Behavior

I expect the tests for LayerNorm to pass in both the CPU and CUDA implementations.

Current Behavior

This test passes (with CUDA enabled, of course): https://github.com/davisking/dlib/blob/46e59a2174228922d19d2887756d6dbfef80dc04/dlib/test/dnn.cpp#L603-L653

In the first part (before the #if DLIB_USE_CUDA), we check that the Layer Normalization actually does what it says on CPU: it normalizes each sample.
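To make that concrete, here is a rough, self-contained sketch of the per-sample normalization I mean (plain C++, not dlib's actual implementation; gamma and beta are applied per element here purely for illustration, the real parameter shapes may differ):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative per-sample layer normalization (not dlib's implementation).
// x holds num_samples samples of sample_size elements each (k*nr*nc flattened).
void layer_norm_reference(
    const std::vector<float>& x, std::vector<float>& y,
    const std::vector<float>& gamma, const std::vector<float>& beta,
    std::size_t num_samples, std::size_t sample_size, float eps = 1e-5f)
{
    y.resize(x.size());
    for (std::size_t n = 0; n < num_samples; ++n)
    {
        const float* xs = &x[n * sample_size];
        float* ys = &y[n * sample_size];

        // Mean and variance are computed over this sample only.
        float mean = 0;
        for (std::size_t i = 0; i < sample_size; ++i) mean += xs[i];
        mean /= sample_size;

        float var = 0;
        for (std::size_t i = 0; i < sample_size; ++i) var += (xs[i] - mean) * (xs[i] - mean);
        var /= sample_size;

        // Normalize, then scale and shift.
        const float invstd = 1 / std::sqrt(var + eps);
        for (std::size_t i = 0; i < sample_size; ++i)
            ys[i] = gamma[i] * (xs[i] - mean) * invstd + beta[i];
    }
}
```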

In the second part, I compute the CUDA version and check that it is equal to the CPU version. That works. Then, I compute the gradients on both CPU and GPU and check whether they are equal.

Both of these tests pass. However, this one passes on CPU but not on GPU: https://github.com/davisking/dlib/blob/46e59a2174228922d19d2887756d6dbfef80dc04/dlib/test/dnn.cpp#L2007-L2012
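For context, test_layer essentially does a numerical gradient check: it perturbs the inputs, measures the change in a scalar loss, and compares that with the gradients the layer's backward pass reports. A generic sketch of that idea (not dlib's actual test_layer code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Generic central-difference gradient check (illustrative only, not dlib's test_layer).
// loss(x) is any scalar function of the inputs; analytic_grad is what backward() reported.
// Returns the largest absolute difference between numerical and analytic gradients.
float gradient_check(
    const std::function<float(const std::vector<float>&)>& loss,
    std::vector<float> x,
    const std::vector<float>& analytic_grad,
    float h = 1e-2f)
{
    float max_err = 0;
    for (std::size_t i = 0; i < x.size(); ++i)
    {
        const float orig = x[i];
        x[i] = orig + h;  const float lp = loss(x);
        x[i] = orig - h;  const float lm = loss(x);
        x[i] = orig;
        const float numeric = (lp - lm) / (2 * h);
        max_err = std::max(max_err, std::abs(numeric - analytic_grad[i]));
    }
    return max_err;
}
```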

I have stared at both implementations for a while and I can't see what I am doing wrong. If someone could take an extra look, I'd appreciate it.

Steps to Reproduce

Just run the test suite with CUDA enabled.

Anything else?

No response

arrufat avatar Dec 26 '23 13:12 arrufat

Yeah I noticed this a bit ago but didn't figure it out when I looked. Need to look again though and figure it out. Although maybe you will figure it out first :) Must just be some subtle typo somewhere.

davisking avatar Dec 29 '23 14:12 davisking

Yes, I will try to figure it out. I will check what test_layer is doing to see why it's not passing... The CPU version passes, but not the GPU version. And yet the test for equality between the CPU and GPU results passes too, which is really odd. Maybe rewriting the whole thing from scratch would be easier than trying to find the typo, haha.

arrufat avatar Dec 29 '23 14:12 arrufat

Yeah hard to say, bugs can be anywhere and you never know until after the fact :shrug: :D

davisking avatar Dec 29 '23 14:12 davisking

It's really weird. If I change this line to the size that test_layer uses: https://github.com/davisking/dlib/blob/46e59a2174228922d19d2887756d6dbfef80dc04/dlib/test/dnn.cpp#L605

resizable_tensor x(4, 2, 2, 4);

Then this check fails: the values are entirely different. But with the previous size, the values are the same: https://github.com/davisking/dlib/blob/46e59a2174228922d19d2887756d6dbfef80dc04/dlib/test/dnn.cpp#L650
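The comparison itself is just an elementwise max-absolute-difference between the two tensors, along these lines (the helper below is only a sketch; mat() gives a matrix view of a tensor):

```cpp
#include <dlib/dnn.h>

// Sketch of the kind of equality check used in the test (names are placeholders).
bool tensors_close(const dlib::tensor& a, const dlib::tensor& b, float tol = 1e-5f)
{
    return dlib::max(dlib::abs(dlib::mat(a) - dlib::mat(b))) < tol;
}
```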

arrufat avatar Dec 31 '23 15:12 arrufat

I will tackle this again at some point. Sometimes, when things like this happen and I have absolutely no idea why, I question myself a lot...

arrufat avatar Feb 14 '24 01:02 arrufat

Nah, everyone thinks that from time to time. Don't sweat it :)

davisking avatar Feb 14 '24 03:02 davisking

I could be naive here, but is there a reason why LayerNorm isn't using cuDNN? Would cudnnNormalizationForwardInference, cudnnNormalizationForwardTraining and cudnnNormalizationBackward work? It looks like those functions can be used for batch norm, layer norm and group norm.

pfeatherstone avatar Feb 25 '24 11:02 pfeatherstone

Oh, I wasn't aware it was possible to do Layer Normalization with cuDNN. This link says that cudnnNormalizationForwardTraining is deprecated in v9: https://docs.nvidia.com/deeplearning/cudnn/api/cudnn-ops-library.html#cudnnnormalizationforwardtraining. Could you please show me how to use it for LayerNorm? I can't see any mention of it in the API.

It seems weird that this layer, now used in most Transformer-based networks, has no cuDNN implementation…

arrufat avatar Feb 25 '24 12:02 arrufat

Somewhere in the docs I read that it could be used for multiple types of normalization... I agree, it's hard to believe cuDNN doesn't have first-class support for it. Maybe libraries like PyTorch, GGML, etc. have moved away from cuDNN and just use vanilla CUDA or https://github.com/NVIDIA/cccl to write their kernels from scratch. I don't know. Also, a lot of Transformers use different normalization functions now, RMSNorm for example.
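For reference, RMSNorm skips the mean subtraction and only divides by the root mean square; roughly something like this (illustrative only):

```cpp
#include <cmath>
#include <cstddef>

// Illustrative RMSNorm over one sample: no mean subtraction, just divide by the RMS.
void rms_norm_reference(const float* x, float* y, const float* gamma,
                        std::size_t n, float eps = 1e-6f)
{
    float mean_sq = 0;
    for (std::size_t i = 0; i < n; ++i) mean_sq += x[i] * x[i];
    mean_sq /= n;
    const float inv_rms = 1 / std::sqrt(mean_sq + eps);
    for (std::size_t i = 0; i < n; ++i)
        y[i] = gamma[i] * x[i] * inv_rms;
}
```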

pfeatherstone avatar Feb 25 '24 12:02 pfeatherstone

Ah, maybe you're referring to this? That's for the FC or Conv mode in Batch Norm, which dlib already uses: https://docs.nvidia.com/deeplearning/cudnn/api/cudnn-ops-library.html#cudnnnormmode-t

arrufat avatar Feb 25 '24 12:02 arrufat