Why bias in Q, K, V projection of SpatialSelfAttention?
https://github.com/CompVis/stable-diffusion/blob/21f890f9da3cfbeaba8e2ac3c425ee9e998d5229/ldm/modules/attention.py#L99
As I understand it, the other attention implementations in this module set bias=False
for their Q, K, V projections, while SpatialSelfAttention does not. Why is it different?
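To illustrate what I mean, here is a minimal sketch (not the repo's actual code) contrasting the two styles, assuming the SpatialSelfAttention projections at the linked line are 1x1 Conv2d layers (whose bias defaults to True) and the other attention classes use nn.Linear with bias=False:

```python
import torch.nn as nn

in_channels = 64

# SpatialSelfAttention-style projection (assumed): a 1x1 Conv2d,
# which keeps PyTorch's default bias=True
q_conv = nn.Conv2d(in_channels, in_channels, kernel_size=1, stride=1, padding=0)
print(q_conv.bias is not None)  # True -> projection carries a bias term

# CrossAttention-style projection: bias explicitly disabled
q_lin = nn.Linear(in_channels, in_channels, bias=False)
print(q_lin.bias is None)       # True -> no bias term
```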
Any explanation will be greatly appreciated.