walkwithfastai.github.io
Fix: add the import necessary for using attention layers as bottlenecks
Adds `from fastai.layers import _get_norm`, which the following code needs in order to run:
```python
def BatchNormZero(nf, ndim=2, **kwargs):
    "BatchNorm layer with `nf` features and `ndim` initialized depending on `norm_type`. Weights initialized to zero."
    return _get_norm('BatchNorm', nf, ndim, zero=True, **kwargs)
```
After this fix it becomes possible to train a timm-based UNet with `attention` or `double_attention` as the bottleneck.
Thanks!