Cannot use checkpoint.WeightCheckpoint in evaluation metrics
🐛 The Bug
I was trying to save model weights using the checkpoint evaluation metric, but it failed with the following error:
```
TypeError                                 Traceback (most recent call last)
<ipython-input-101-10f676e4e0b3> in <module>()
     10
     11 print('Computing accuracy on the whole test set')
---> 12 results.append(cl_strategy.eval(scenario.test_stream))

6 frames

/usr/lib/python3.7/copy.py in deepcopy(x, memo, _nil)
    167     reductor = getattr(x, "__reduce_ex__", None)
    168     if reductor:
--> 169         rv = reductor(4)
    170     else:
    171         reductor = getattr(x, "__reduce__", None)

TypeError: can't pickle generator objects
```
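The last frame shows the root cause: the metric value passed to the loggers is (or contains) a generator, and `copy.deepcopy` falls back to pickling, which cannot handle generators. A minimal, Avalanche-independent sketch of the same failure (in PyTorch, `model.parameters()` returns a generator):

```python
import copy

import torch.nn as nn

model = nn.Linear(4, 2)

# parameters() is a generator; deepcopy falls back to __reduce_ex__,
# i.e. pickle, which raises for generator objects.
weights = model.parameters()
copy.deepcopy(weights)  # TypeError: can't pickle generator objects
```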
🐜 To Reproduce
Follow the evaluation tutorial page and build the evaluation plugin with a WeightCheckpoint metric:
```python
from avalanche.evaluation.metrics import (
    accuracy_metrics,
    loss_metrics,
    timing_metrics,
    forgetting_metrics,
    forward_transfer_metrics,
    confusion_matrix_metrics,
    checkpoint,
)
from avalanche.training.plugins import EvaluationPlugin

eval_plugin = EvaluationPlugin(
    accuracy_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    loss_metrics(minibatch=True, epoch=True, experience=True, stream=True),
    timing_metrics(epoch=True, epoch_running=True),
    forgetting_metrics(experience=True, stream=True),
    forward_transfer_metrics(experience=True, stream=True),
    confusion_matrix_metrics(
        num_classes=scenario.n_classes,
        save_image=True,
        stream=True,
    ),
    checkpoint.WeightCheckpoint(),
    loggers=[interactive_logger, text_logger, tb_logger],
)
```
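Attach the plugin to a strategy and run the usual train/eval loop from the tutorial (the `Naive` strategy and the `model`/`optimizer`/`criterion` names below are the tutorial's placeholders, not part of this report); the `eval` call is where the traceback above is raised:

```python
from avalanche.training import Naive

cl_strategy = Naive(
    model, optimizer, criterion,
    train_mb_size=64, train_epochs=1, eval_mb_size=64,
    evaluator=eval_plugin,
)

results = []
for experience in scenario.train_stream:
    cl_strategy.train(experience)
    print('Computing accuracy on the whole test set')
    results.append(cl_strategy.eval(scenario.test_stream))  # raises TypeError
```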
Training then proceeds normally, but the weights fail to be saved.
🐝 Expected behavior
I guessed that the model weights would be saved somewhere, but I am unsure how to use the metric properly.
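Until the metric is fixed, a workable stop-gap is to snapshot the weights manually outside the plugin (a minimal sketch; the file name is arbitrary):

```python
import torch

# state_dict() is an ordinary dict of tensors, so it serializes fine,
# unlike the generator that WeightCheckpoint trips over.
torch.save(cl_strategy.model.state_dict(), 'weights_after_eval.pt')
```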
You should see it on TensorBoard, but we should also save it in memory. @AndreaCossu can we fix a limited set of allowed types for metric values, so that we can check that loggers are able to save everything, or warn the user otherwise?
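A sketch of what such a check could look like (all names here are hypothetical, not existing Avalanche API): loggers would validate each metric value against a whitelist of serializable types and warn on anything else, which would have flagged the generator above before `deepcopy` failed.

```python
import numbers
import warnings

import torch

# Hypothetical whitelist of value types every logger is known to handle.
SUPPORTED_METRIC_TYPES = (numbers.Number, str, torch.Tensor)

def validate_metric_value(name: str, value) -> None:
    """Warn when a metric emits a value loggers may not be able to save."""
    if not isinstance(value, SUPPORTED_METRIC_TYPES):
        warnings.warn(
            f"Metric '{name}' emitted a value of type {type(value).__name__}; "
            "loggers may fail to serialize it (e.g. generators cannot be "
            "deep-copied or pickled)."
        )
```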
I agree, this is something long overdue.
Hi @AntonioCarta and @AndreaCossu. I can work on this. Can you please expand on why fixing the set of types for metrics would circumvent this issue?