candle
How to freeze VarMap Vars?
Hello everybody,
Is there a way to freeze all the Var tensors in a VarMap, similar to the snippet below? In other words, something like implementing the Iterator trait over the VarMap, detaching the contained tensors from the graph, and then adding back only the Vars that should remain trainable.
```python
# Freeze all the pre-trained layers
for param in model.parameters():
    param.requires_grad = False
```
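For candle, a minimal sketch of the same idea might look like the following. It assumes `candle_nn::VarMap::all_vars` and `Tensor::detach` as found in recent candle versions; exact signatures may differ between releases, and `frozen_tensors` is a hypothetical helper, not part of the candle API:

```rust
use candle_core::Tensor;
use candle_nn::VarMap;

// Hypothetical helper: collect a detached copy of every tensor in the map.
// all_vars() returns each Var registered in the VarMap; detach() produces
// a tensor that no longer tracks gradients, so the pre-trained weights are
// effectively frozen when used through these detached copies.
fn frozen_tensors(varmap: &VarMap) -> Vec<Tensor> {
    varmap
        .all_vars()
        .iter()
        .map(|var| var.as_tensor().detach())
        .collect()
}
```

In practice, the simplest way to freeze weights in candle is often to not optimize them at all: candle optimizers such as `candle_nn::AdamW` take an explicit `Vec<Var>`, so passing only the new trainable Vars (and omitting the pre-trained ones) leaves the rest untouched during training.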
Originally posted by @mohamed-180 in https://github.com/huggingface/candle/issues/891#issuecomment-2214407719