ViLT
`self.mask_token` at line 553 in `vision_transformer.py`
Hi, I'm really impressed with your work and I'm getting a lot of help from it!
But there is an error in `vilt/modules/vision_transformer.py` when `mask_it == True`. The error occurs because `self.mask_token` at line 553 in `vision_transformer.py` is never initialized.
So I wonder: what does `self.mask_token` mean?
Thanks.
Hi @hspark92,
`mask_it` (default `False`) is a legacy argument for the MPP (masked patch projection) objective.
I refer you to https://github.com/dandelin/ViLT/issues/18
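For readers landing here: below is a minimal sketch of how a learnable [MASK] token is typically defined and applied to patch embeddings in MPP-style objectives. The class and argument names are illustrative assumptions, not ViLT's actual code.

```python
import torch
import torch.nn as nn

class MaskedPatchEmbed(nn.Module):
    """Illustrative sketch: one learnable [MASK] vector shared by all
    masked patch positions (names and shapes are assumptions)."""

    def __init__(self, embed_dim: int = 768):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        nn.init.trunc_normal_(self.mask_token, std=0.02)

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x:    (batch, num_patches, embed_dim) patch embeddings
        # mask: (batch, num_patches) boolean, True where a patch is masked
        B, N, _ = x.shape
        mask_tokens = self.mask_token.expand(B, N, -1)
        w = mask.unsqueeze(-1).type_as(mask_tokens)
        # Keep unmasked embeddings, swap masked ones for the [MASK] token.
        return x * (1.0 - w) + mask_tokens * w
```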
Hi @dandelin,
Thank you for your reply. It's helpful!
But I wonder why `self.mask_token` is a trainable parameter. If `self.mask_token` is constant, will it affect training?
Thanks,
@hspark92
I set it as a trainable parameter following BERT's way of treating [MASK] tokens. I'm not sure whether constant [MASK] tokens' embeddings affect the training or not.
But if experiments with fixed [MASK] token embeddings exist in the NLP domain, then I believe ViLT's MPP would follow that trend.
Thanks.
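For what it's worth, in PyTorch the difference between the two options in this exchange comes down to whether the tensor is a trainable `nn.Parameter` (updated by the optimizer) or a fixed tensor that receives no gradients. A small runnable comparison, with illustrative names:

```python
import torch
import torch.nn as nn

embed_dim = 768  # illustrative embedding size

# BERT-style: a trainable parameter, updated by the optimizer during MPP.
trainable_mask = nn.Parameter(torch.zeros(1, 1, embed_dim))

# Fixed alternative: a plain tensor (inside an nn.Module it would be
# registered via register_buffer) that stays constant during training.
fixed_mask = torch.zeros(1, 1, embed_dim)

print(trainable_mask.requires_grad)  # True  -> learned during training
print(fixed_mask.requires_grad)      # False -> stays constant
```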
I get it, thank you for the answer!
Hi, I ran into the same problem: `self.mask_token` has not been defined, so I can't run the pre-training code. How did you solve it? Thanks!
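One possible workaround, under the assumption that the repo's `vilt/modules/vision_transformer.py` defines a `VisionTransformer` class with an `embed_dim` attribute (names assumed, not verified here): define the missing token yourself, following the BERT-style initialization discussed above.

```python
import torch
import torch.nn as nn
from vilt.modules import vision_transformer as vit

# Hypothetical patch (class/attribute names are assumptions): subclass the
# repo's VisionTransformer and define the missing [MASK] token before
# running the MPP pre-training code.
class PatchedVisionTransformer(vit.VisionTransformer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, self.embed_dim))
        nn.init.trunc_normal_(self.mask_token, std=0.02)
```

Equivalently, the two `mask_token` lines could be added directly to the original class's `__init__`.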