Creating a model with `concat_pool=True` creates a model that can't run on multiple TPU cores
Setting `concat_pool=True` in `cnn_learner` (which is the default) creates a model that doesn't run on multiple TPU cores. Setting `concat_pool=False` works fine, however.
Based on the code, `concat_pool=True` uses fastai's `AdaptiveConcatPool2d`, while setting it to `False` uses `nn.AdaptiveAvgPool2d`, so it might be that the fastai version triggers a lowering issue (or something similar) that prevents the model from running at all.
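
For reference, here is a minimal sketch of the difference between the two heads. The class below mirrors what fastai's `AdaptiveConcatPool2d` does (concatenate adaptive max and average pooling); `concat_pool=False` uses plain `nn.AdaptiveAvgPool2d` instead:

```python
import torch
import torch.nn as nn

class AdaptiveConcatPool2d(nn.Module):
    "Roughly what fastai's AdaptiveConcatPool2d does: concat adaptive max and avg pooling."
    def __init__(self, size=1):
        super().__init__()
        self.ap = nn.AdaptiveAvgPool2d(size)
        self.mp = nn.AdaptiveMaxPool2d(size)

    def forward(self, x):
        # Output has twice the channels of either pooling alone.
        return torch.cat([self.mp(x), self.ap(x)], dim=1)

# concat_pool=False puts plain average pooling in the head instead:
avg_pool_head = nn.AdaptiveAvgPool2d(1)
```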
Workaround: set `concat_pool=False` when creating the custom head for a `Learner` for transfer learning (e.g. `cnn_learner`).
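
A minimal sketch of the workaround, assuming a fastai v2 `DataLoaders` object named `dls` already exists (`dls` and the choice of `resnet34` are just placeholders):

```python
from fastai.vision.all import cnn_learner, resnet34, accuracy

# `dls` is assumed to be a DataLoaders built elsewhere.
learn = cnn_learner(
    dls,
    resnet34,
    metrics=accuracy,
    concat_pool=False,  # use nn.AdaptiveAvgPool2d instead of AdaptiveConcatPool2d in the head
)
```

With `concat_pool=False` the head uses plain average pooling, which (per the report above) runs fine on multiple TPU cores.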