
Can't download the weights of the model

Open nastya236 opened this issue 3 years ago • 0 comments

I want to evaluate the model (I downloaded the weights from Google Drive); however, during testing the following error occurs:

RuntimeError: Error(s) in loading state_dict for Train_MIME:
	Missing key(s) in state_dict: "encoder.enc.multi_head_attention.query_linear.weight", 
"encoder.enc.multi_head_attention.key_linear.weight", "encoder.enc.multi_head_attention.value_linear.weight", 
"encoder.enc.multi_head_attention.output_linear.weight", "encoder.enc.positionwise_feed_forward.layers.0.conv.weight", 

"encoder.enc.positionwise_feed_forward.layers.0.conv.bias", "encoder.enc.positionwise_feed_forward.layers.1.conv.weight",
 "encoder.enc.positionwise_feed_forward.layers.1.conv.bias", "encoder.enc.layer_norm_mha.gamma", 
"encoder.enc.layer_norm_mha.beta", "encoder.enc.layer_norm_ffn.gamma", "encoder.enc.layer_norm_ffn.beta", 
"emotion_input_encoder_1.enc.enc.multi_head_attention.query_linear.weight", 
"emotion_input_encoder_1.enc.enc.multi_head_attention.key_linear.weight", 
"emotion_input_encoder_1.enc.enc.multi_head_attention.value_linear.weight", 
"emotion_input_encoder_1.enc.enc.multi_head_attention.output_linear.weight", 
"emotion_input_encoder_1.enc.enc.positionwise_feed_forward.layers.0.conv.weight", 
"emotion_input_encoder_1.enc.enc.positionwise_feed_forward.layers.0.conv.bias", 
"emotion_input_encoder_1.enc.enc.positionwise_feed_forward.layers.1.conv.weight", 
"emotion_input_encoder_1.enc.enc.positionwise_feed_forward.layers.1.conv.bias", 
"emotion_input_encoder_1.enc.enc.layer_norm_mha.gamma", "emotion_input_encoder_1.enc.enc.layer_norm_mha.beta", 
"emotion_input_encoder_1.enc.enc.layer_norm_ffn.gamma", "emotion_input_encoder_1.enc.enc.layer_norm_ffn.beta", 
"emotion_input_encoder_2.enc.enc.multi_head_attention.query_linear.weight", 
"emotion_input_encoder_2.enc.enc.multi_head_attention.key_linear.weight", 
"emotion_input_encoder_2.enc.enc.multi_head_attention.value_linear.weight", 
"emotion_input_encoder_2.enc.enc.multi_head_attention.output_linear.weight", 
"emotion_input_encoder_2.enc.enc.positionwise_feed_forward.layers.0.conv.weight", 
"emotion_input_encoder_2.enc.enc.positionwise_feed_forward.layers.0.conv.bias", 
"emotion_input_encoder_2.enc.enc.positionwise_feed_forward.layers.1.conv.weight", 
"emotion_input_encoder_2.enc.enc.positionwise_feed_forward.layers.1.conv.bias", 
"emotion_input_encoder_2.enc.enc.layer_norm_mha.gamma", "emotion_input_encoder_2.enc.enc.layer_norm_mha.beta", 
"emotion_input_encoder_2.enc.enc.layer_norm_ffn.gamma", "emotion_input_encoder_2.enc.enc.layer_norm_ffn.beta". 

	Unexpected key(s) in state_dict: "encoder.enc.0.multi_head_attention.query_linear.weight", 
"encoder.enc.0.multi_head_attention.key_linear.weight", "encoder.enc.0.multi_head_attention.value_linear.weight", 
"encoder.enc.0.multi_head_attention.output_linear.weight", "encoder.enc.0.positionwise_feed_forward.layers.0.conv.weight", 
"encoder.enc.0.positionwise_feed_forward.layers.0.conv.bias", 
"encoder.enc.0.positionwise_feed_forward.layers.1.conv.weight", 
"encoder.enc.0.positionwise_feed_forward.layers.1.conv.bias", "encoder.enc.0.layer_norm_mha.gamma", 
"encoder.enc.0.layer_norm_mha.beta", "encoder.enc.0.layer_norm_ffn.gamma", "encoder.enc.0.layer_norm_ffn.beta", 
"emotion_input_encoder_1.enc.enc.0.multi_head_attention.query_linear.weight", 
"emotion_input_encoder_1.enc.enc.0.multi_head_attention.key_linear.weight", 
"emotion_input_encoder_1.enc.enc.0.multi_head_attention.value_linear.weight", 
"emotion_input_encoder_1.enc.enc.0.multi_head_attention.output_linear.weight", 
"emotion_input_encoder_1.enc.enc.0.positionwise_feed_forward.layers.0.conv.weight", 
"emotion_input_encoder_1.enc.enc.0.positionwise_feed_forward.layers.0.conv.bias", 
"emotion_input_encoder_1.enc.enc.0.positionwise_feed_forward.layers.1.conv.weight", 
"emotion_input_encoder_1.enc.enc.0.positionwise_feed_forward.layers.1.conv.bias", 
"emotion_input_encoder_1.enc.enc.0.layer_norm_mha.gamma", "emotion_input_encoder_1.enc.enc.0.layer_norm_mha.beta", 
"emotion_input_encoder_1.enc.enc.0.layer_norm_ffn.gamma", "emotion_input_encoder_1.enc.enc.0.layer_norm_ffn.beta", 
"emotion_input_encoder_2.enc.enc.0.multi_head_attention.query_linear.weight", 
"emotion_input_encoder_2.enc.enc.0.multi_head_attention.key_linear.weight", 
"emotion_input_encoder_2.enc.enc.0.multi_head_attention.value_linear.weight", 
"emotion_input_encoder_2.enc.enc.0.multi_head_attention.output_linear.weight", 
"emotion_input_encoder_2.enc.enc.0.positionwise_feed_forward.layers.0.conv.weight", 
"emotion_input_encoder_2.enc.enc.0.positionwise_feed_forward.layers.0.conv.bias", 
"emotion_input_encoder_2.enc.enc.0.positionwise_feed_forward.layers.1.conv.weight", 
"emotion_input_encoder_2.enc.enc.0.positionwise_feed_forward.layers.1.conv.bias", 
"emotion_input_encoder_2.enc.enc.0.layer_norm_mha.gamma", "emotion_input_encoder_2.enc.enc.0.layer_norm_mha.beta",
 "emotion_input_encoder_2.enc.enc.0.layer_norm_ffn.gamma", "emotion_input_encoder_2.enc.enc.0.layer_norm_ffn.beta". 


I rewrote the saved model's state_dict, renaming the keys enc.0 -> enc, and that works for me. Is this the correct way to load the model's weights?
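For reference, a minimal sketch of that key-renaming workaround (the checkpoint path and model class are placeholders, and the key layout is taken from the traceback above; this assumes the only mismatch is the extra ".0" submodule index, which typically appears when the encoder layer was saved inside a length-1 nn.ModuleList):

```python
def rename_keys(state_dict):
    """Strip the '.0' layer index so checkpoint keys like
    'encoder.enc.0.x' match the model's 'encoder.enc.x'."""
    return {k.replace(".enc.0.", ".enc."): v for k, v in state_dict.items()}

# Typical usage (paths and model are hypothetical):
# import torch
# state_dict = torch.load("save/model.pth", map_location="cpu")
# model.load_state_dict(rename_keys(state_dict))
```

This only remaps names; the tensors themselves are untouched, so as long as the architecture is otherwise identical the loaded weights should behave exactly as trained.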

Thank you for your help,
Anastasiia

nastya236 · Nov 14 '21 21:11