protein-sequence-embedding-iclr2019
Unable to download links from the page
Hi, I am trying to run your pretrained models on my own machine. However, there is an architecture mismatch between the saved models and my torch version. For example, when I use the pfam_lm_lstm2x1024_tied_mb64.sav model on the pfam.fasta files, it gives the following error:
AttributeError: 'LSTM' object has no attribute '_flat_weights'. Did you mean: '_all_weights'?
The only suggestion I have received is to downgrade torch to version 1.3.0, which I have not been able to install. Do you have any updated pretrained models for use?
You would need to try downgrading pytorch (what version are you using? I'd recommend the earliest version you can install, and in any case something below 2.0). You could then port the model to a newer pytorch by loading it under the old version, saving the state dict, and reloading that state dict under the new version. You may need to adjust the key names in the state dict for this to work; a rough sketch is below. We do not have updated pretrained models.
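For concreteness, here is a minimal sketch of that conversion, assuming the .sav file is a fully pickled model object and that you can reconstruct the architecture from this repo's code. The intermediate filename and the `build_model()` call are placeholders introduced for illustration, not part of the repo.

```python
# Step 1: run under an old pytorch (1.x, < 2.0) that can still unpickle the
# original model object. Unpickling resolves classes by name, so the repo's
# code must be importable here.
import torch

model = torch.load('pfam_lm_lstm2x1024_tied_mb64.sav', map_location='cpu')

# Save only the parameters; plain tensors load cleanly across pytorch versions.
torch.save(model.state_dict(), 'pfam_lm_lstm2x1024_tied_mb64.state_dict.pt')
```

```python
# Step 2: run under your newer pytorch.
import torch

# `build_model()` is a placeholder: construct the same architecture using the
# model classes in this repo (here, the 2x1024 tied-weight LSTM language model).
model = build_model()

state_dict = torch.load('pfam_lm_lstm2x1024_tied_mb64.state_dict.pt',
                        map_location='cpu')

# If parameter names changed between pytorch versions, remap them here, e.g.:
# state_dict = {k.replace('old.prefix.', 'new.prefix.'): v
#               for k, v in state_dict.items()}

# strict=False reports mismatches instead of raising, which helps you find the
# key names that still need adjusting.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print('missing keys:', missing)
print('unexpected keys:', unexpected)
```

The reason to go through the state dict is that it contains only tensors, which load across pytorch versions far more reliably than a pickled module object tied to the internals of the version that saved it.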