
numerical difference for SDPA between non-dtensor vs dtensor, when math attention and fp16 are used

Open · tianyu-l opened this issue 9 months ago • 1 comment

Higher loss (9.5602 vs. 9.3164) was observed for the DTensor case after 10 steps on the llama2 debug model. This happens even without applying rotary embedding, so it is not caused by the complex-number multiplication issue mentioned in #267.
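As a starting point, here is a minimal sketch (not from the issue) of how one might check how much of such a gap is plain fp16 roundoff in the math SDPA decomposition, before involving DTensor at all. The shapes are illustrative assumptions, not the debug model's config, and torch.nn.attention.sdpa_kernel requires a recent PyTorch:

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel  # recent PyTorch

torch.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"

# Illustrative shapes only; the debug model's real dims may differ.
bsz, heads, seqlen, head_dim = 8, 16, 128, 64
q = torch.randn(bsz, heads, seqlen, head_dim, device=device)
k = torch.randn_like(q)
v = torch.randn_like(q)

with sdpa_kernel(SDPBackend.MATH):
    # fp32 reference vs fp16 compute, both through the math decomposition.
    # Note: fp16 matmul on CPU needs a recent PyTorch build; the original
    # observation was presumably on GPU.
    out_fp32 = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    out_fp16 = F.scaled_dot_product_attention(
        q.half(), k.half(), v.half(), is_causal=True
    )

err = (out_fp16.float() - out_fp32).abs().max().item()
print(f"max abs error of fp16 math SDPA vs fp32 reference: {err:.6f}")
```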

Note: to apply math attention with DTensor, one needs to set _allow_implicit_replication to true, because a non-DTensor causal mask is generated when is_causal=True for SDPA.
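Below is a hedged sketch of the DTensor side on a single-rank mesh, assuming the implicit_replication context manager from torch.distributed._tensor.experimental (which toggles this _allow_implicit_replication flag in recent PyTorch). The single-process gloo/CPU setup and shapes are assumptions chosen to keep the repro self-contained; they are not torchtitan's actual parallel config:

```python
import os
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.distributed._tensor import Replicate, distribute_tensor
from torch.distributed._tensor.experimental import implicit_replication
from torch.distributed.device_mesh import init_device_mesh
from torch.nn.attention import SDPBackend, sdpa_kernel

# Single-process "distributed" setup so the script runs standalone.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group("gloo", rank=0, world_size=1)
mesh = init_device_mesh("cpu", (1,))

torch.manual_seed(0)
# fp16 on CPU needs a recent PyTorch build; swap to a GPU device/mesh to
# mirror the original setup.
q = torch.randn(2, 4, 32, 16, dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Plain-tensor result through the math backend.
with sdpa_kernel(SDPBackend.MATH):
    ref = F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Same computation with replicated DTensor inputs; implicit_replication()
# lets the plain causal mask created by is_causal=True be treated as
# replicated instead of erroring out.
dq = distribute_tensor(q, mesh, [Replicate()])
dk = distribute_tensor(k, mesh, [Replicate()])
dv = distribute_tensor(v, mesh, [Replicate()])
with sdpa_kernel(SDPBackend.MATH), implicit_replication():
    dout = F.scaled_dot_product_attention(dq, dk, dv, is_causal=True)

diff = (dout.to_local() - ref).abs().max().item()
print(f"max abs diff, DTensor vs plain tensor (math SDPA, fp16): {diff:.6f}")

dist.destroy_process_group()
```

On a 1-rank replicated mesh the two results would ideally match bit-for-bit; any residual difference points at how the DTensor dispatch path decomposes the math SDPA rather than at parallelism itself.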

This issue doesn't seem urgent, as math attention is only a fallback for flash attention and memory-efficient attention.

tianyu-l avatar May 08 '24 21:05 tianyu-l

Is the numeric difference seen in backward only or in forward too?

kwen2501 avatar Jun 13 '24 00:06 kwen2501