lightweight-gan

When should I use --attn-res-layers, and by what principle should I set the values of this parameter?

Dok11 opened this issue 3 years ago • 8 comments

Why is the default [32], and when is it necessary to increase the number of items (e.g. [32,64,128]) or the values themselves? I see that it uses more memory, so I assume it must increase quality, but what is the tradeoff?

Dok11 avatar Dec 23 '20 10:12 Dok11

Yes, using attention does improve quality, at least as reflected in the FID scores, which tend to go lower. The tradeoff is higher memory usage and longer training time.
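As a rough back-of-the-envelope sketch of where that extra cost goes (my own illustration, not code from the repository): the number of spatial positions an attention layer has to cover grows quadratically with the feature-map side length, which is what drives the extra memory and training time as you add higher resolutions to --attn-res-layers.

```python
# Rough illustration only: the spatial positions an attention layer covers
# grow quadratically with resolution, which is where the extra memory and
# training time go when higher resolutions are added to --attn-res-layers.
for res in [32, 64, 128, 256]:
    positions = res * res
    print(f'{res:>3}x{res}: {positions:>6} positions per attention layer')
```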

Mut1nyJD avatar Dec 23 '20 12:12 Mut1nyJD

Which is better: changing [32] to [96], or to [32,64]? What is the difference?

Dok11 avatar Dec 23 '20 12:12 Dok11

> Which is better: changing [32] to [96], or to [32,64]? What is the difference?

I think it should be a power of 2, so 96 would not be valid.

https://github.com/lucidrains/lightweight-gan/blob/845eb9d872e020c3c083e1d966811e52d8b26229/lightweight_gan/lightweight_gan.py#L396-L401
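As an illustration of why a value like 96 cannot work (this is my paraphrased reading of the linked code, not the exact source): the generator's feature maps are produced by repeated doubling, so only powers of two ever occur as layer resolutions, and a non-power-of-two entry would never match any layer.

```python
# Paraphrased illustration, not the exact lightweight-gan code: feature-map
# resolutions come from repeated doubling, so only powers of two can match.
image_size = 256  # assumed target image size for this example

resolutions = []
res = 4
while res <= image_size:
    resolutions.append(res)
    res *= 2

print(resolutions)        # [4, 8, 16, 32, 64, 128, 256]
print(96 in resolutions)  # False -> an attention layer at 96 would never be attached
print(64 in resolutions)  # True
```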

woctezuma avatar Dec 23 '20 14:12 woctezuma

Of course, but my question is about the difference between one large value vs. two smaller values.

Dok11 avatar Dec 23 '20 16:12 Dok11

@Dok11 I think you are misunderstanding the value: it puts attention layers into the neural network graph at the resolutions you specify, so the more resolutions, the better, of course, since you get attention at different levels. It's the same as with convolutions. If you can only afford one, it depends on your training data: if it has a lot of global structure, a lower-resolution layer is beneficial; if it has a lot of local structure, a higher-resolution layer is more beneficial.
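A toy sketch of that idea (hypothetical, not the repository's construction code): each entry in attn_res_layers simply toggles whether an attention block is attached at that feature-map size, so [32,64] attaches blocks at two different levels of the generator, while a single value attaches only one.

```python
# Hypothetical sketch of how a generator could decide where to attach
# attention blocks, given attn_res_layers. Not the exact lightweight-gan code.
def plan_blocks(image_size, attn_res_layers):
    res = 4
    plan = []
    while res <= image_size:
        plan.append((res, res in attn_res_layers))  # (resolution, has_attention)
        res *= 2
    return plan

print(plan_blocks(256, [32]))       # attention only at 32x32 (coarse, global structure)
print(plan_blocks(256, [32, 64]))   # attention at 32x32 and 64x64 (adds a more local level)
```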

Mut1nyJD avatar Dec 23 '20 16:12 Mut1nyJD

I thought the same, but I hoped someone could help me with some examples of these values. For example, for one purpose we would use [32], and for other purposes/images we would use [8,16,32,64]. Maybe there is a reason to create a synthetic dataset to test these parameters in practice? Like this: [image]

Dok11 avatar Dec 23 '20 17:12 Dok11

@Mut1nyJD I still don't fully understand the attention layers, but I think I have a reasonable question. Changing attn-res-layers from [32] to [32,64,128,256] increases the model file size by no more than two megabytes. So does it really have to improve quality? Yes, training requires more memory and time, so I am confused: training is slower, but the model stays (almost) the same size. I think that means the model doesn't increase its own capacity. How will the model make more detailed images at the same size? If you know some sources with a simple description of this technique, please let me know.

Dok11 avatar Jan 23 '21 11:01 Dok11

Implementation of GSA in the code is from:

  • https://github.com/lucidrains/global-self-attention-network
  • https://openreview.net/forum?id=KiFeuZu24k (rejected by ICLR 2021, by the way)

Based on lucidrains' repository, one could refer to this for prior work:

  • https://github.com/cmsflash/efficient-attention
  • https://arxiv.org/abs/1812.01243

Efficient attention is an attention mechanism that substantially optimizes the memory and computational efficiency while retaining exactly the same expressive power as the conventional dot-product attention.

Apparently, it is a cheaper way to have attention. It brings an attention mechanism into the model but does not increase its size much, since it adds few new features (parameters).
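For intuition, here is a minimal PyTorch sketch of the efficient-attention idea from arXiv:1812.01243 (my own simplification, not the exact GSA module used in lightweight-gan): all of its parameters live in a few 1x1 convolutions, so the parameter count depends only on the channel dimensions and not on the feature-map resolution. That is consistent with the observation above: adding more entries to --attn-res-layers barely changes the checkpoint size, while still costing compute and activation memory at every added resolution.

```python
import torch
from torch import nn

class EfficientSelfAttention(nn.Module):
    """Simplified sketch of efficient attention (Shen et al., arXiv:1812.01243).

    Softmax is applied to queries and keys separately, so a small global
    context matrix is built once and the cost is linear in the number of
    pixels instead of quadratic. All parameters live in 1x1 convolutions,
    so the module size is independent of the feature-map resolution.
    """

    def __init__(self, dim, dim_head=64):
        super().__init__()
        self.to_q = nn.Conv2d(dim, dim_head, 1, bias=False)
        self.to_k = nn.Conv2d(dim, dim_head, 1, bias=False)
        self.to_v = nn.Conv2d(dim, dim_head, 1, bias=False)
        self.to_out = nn.Conv2d(dim_head, dim, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.to_q(x).flatten(2).softmax(dim=1)   # (b, d, h*w), softmax over channels
        k = self.to_k(x).flatten(2).softmax(dim=-1)  # (b, d, h*w), softmax over positions
        v = self.to_v(x).flatten(2)                  # (b, d, h*w)
        context = k @ v.transpose(1, 2)              # (b, d, d): global summary, size independent of h*w
        out = context.transpose(1, 2) @ q            # (b, d, h*w)
        return x + self.to_out(out.reshape(b, -1, h, w))

# The same module works at 32x32 or 256x256 and has the same number of
# parameters either way -- only the compute and activation memory change.
attn = EfficientSelfAttention(dim=256)
print(sum(p.numel() for p in attn.parameters()))  # same count regardless of resolution
print(attn(torch.randn(1, 256, 32, 32)).shape)    # torch.Size([1, 256, 32, 32])
print(attn(torch.randn(1, 256, 64, 64)).shape)    # torch.Size([1, 256, 64, 64])
```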

woctezuma avatar Jan 23 '21 11:01 woctezuma