
[BUG] Finetuned model has wrong type_map

Open · iProzd opened this issue 1 year ago · 8 comments

Bug summary

When doing finetuning, the user-defined type_map (e.g. ['H', 'O']) will be overridden by the type_map in the pretrained model (e.g. the whole periodic table), which is confusing for users.
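For concreteness, a toy sketch of the symptom with hypothetical values (not actual DeePMD-kit output):

```python
# Hypothetical values illustrating the symptom.
user_type_map = ["H", "O"]                     # what the user writes for finetuning
pretrained_type_map = ["H", "He", "Li", "Be"]  # stands in for the whole periodic table

# Expected: the finetuned model exposes the user-defined type_map.
# Observed: it exposes the pretrained model's type_map instead.
finetuned_type_map = pretrained_type_map
assert finetuned_type_map != user_type_map
```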

DeePMD-kit Version

3.0.0a

TensorFlow Version

2.6.0

How did you download the software?

Built from source

Input Files, Running Commands, Error Log, etc.

See above.

Steps to Reproduce

See above.

Further Information, Files, and Links

No response

iProzd · Mar 13 '24 03:03

Idea: the easiest way is to add a virtual Model, "adapt type map model", which just adapts the input atom_type from the outer model type_map to the inner model type_map and forwards everything else like #3450.

njzjz · Mar 13 '24 04:03
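For illustration, a minimal Python sketch of such an adapter, using hypothetical names rather than the actual DeePMD-kit classes: it only translates atom types given in the outer type_map into the indices of the wrapped model's inner type_map and forwards everything else unchanged.

```python
import numpy as np

# Hypothetical sketch, not the DeePMD-kit implementation: a virtual model
# that adapts atom types from an outer type_map (e.g. ["H", "O"]) to the
# indices of the wrapped model's inner type_map (e.g. the whole periodic
# table) and forwards everything else.
class AdaptTypeMapModel:
    def __init__(self, inner_model, outer_type_map, inner_type_map):
        self.inner_model = inner_model
        self.outer_type_map = list(outer_type_map)
        # for each outer type index, the corresponding inner type index
        self.index_map = np.array(
            [inner_type_map.index(t) for t in outer_type_map], dtype=np.int64
        )

    def forward(self, coord, atype, **kwargs):
        # atype holds indices into the outer type_map; remap and forward
        inner_atype = self.index_map[atype]
        return self.inner_model.forward(coord, inner_atype, **kwargs)

    def get_type_map(self):
        # expose the user-facing (outer) type_map
        return self.outer_type_map
```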

> Idea: the easiest way is to add a virtual Model, "adapt type map model", which just adapts the input atom_type from the outer model type_map to the inner model type_map and forwards everything else like #3450.

I see, but this will cause the model to be wrapped repeatedly each time it is fine-tuned, as discussed with @wanghan-iapcm and @anyangml.

iProzd · Mar 13 '24 04:03

> repeatedly each time it is fine-tuned

Indeed, I don't understand why it needs to change the type map each time it is fine-tuned...

njzjz · Mar 13 '24 08:03

> Indeed, I don't understand why it needs to change the type map each time it is fine-tuned...

For the LinearModel, let's say we have two pre-trained models: model A with ["H", "O", "Na"] and model B with ["H", "O", "K"]. Now if we want to finetune a LinearModel built from these two, the new type map becomes ["H", "O"].

anyangml · Mar 13 '24 08:03

> Now if we want to finetune a LinearModel built from these two, the new type map becomes ["H", "O"].

This is not correct. The type map should be the union of the two models' type maps.

njzjz · Mar 13 '24 16:03

> This is not correct. The type map should be the union of the two models' type maps.

I think the combined model should only handle the common types. Suppose the new type map is the union of the two; then there will be unseen types for each individual model.

anyangml · Mar 14 '24 00:03

> I think the combined model should only handle the common types.

A model doesn't need to evaluate all types. A typical example is DPLR. Pairwise potentials may also target only certain types.

njzjz · Mar 14 '24 01:03
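For illustration, a minimal sketch of the union approach (hypothetical helpers, not the DeePMD-kit implementation): build the union type_map for the combined model and, for each sub-model, a mapping from union indices to its own indices, with -1 marking types that sub-model does not cover.

```python
# Hypothetical sketch of a union type_map for a combined (linear) model.
def union_type_map(type_maps):
    union = []
    for tm in type_maps:
        for t in tm:
            if t not in union:
                union.append(t)
    return union

def sub_model_index_map(union, sub_type_map):
    # -1 marks a union type that this sub-model does not cover
    return [sub_type_map.index(t) if t in sub_type_map else -1 for t in union]

tm_a = ["H", "O", "Na"]   # model A from the example above
tm_b = ["H", "O", "K"]    # model B from the example above
union = union_type_map([tm_a, tm_b])      # ["H", "O", "Na", "K"]
map_a = sub_model_index_map(union, tm_a)  # [0, 1, 2, -1]
map_b = sub_model_index_map(union, tm_b)  # [0, 1, -1, 2]
```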

> Indeed, I don't understand why it needs to change the type map each time it is fine-tuned...

Because the user may provide a new type_map that is not consistent with the model's type_map.

wanghan-iapcm · Mar 14 '24 01:03