yamabuki-chan

20 comments by yamabuki-chan

(To the best of my knowledge,) jackhmmer scans all of the sequences in the database, which means there is >100 GB read from storage for one search. I assume that...
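
For a rough sense of the scale, you can just sum the sizes of the FASTA databases that one search has to stream through. A minimal sketch (the download directory below is a placeholder; adjust it to your setup):

```python
# Rough check of how much sequence data one full search would have to scan,
# assuming the databases live under DOWNLOAD_DIR (path is a placeholder).
from pathlib import Path

DOWNLOAD_DIR = Path("/path/to/alphafold_databases")  # adjust to your setup

total_bytes = sum(
    f.stat().st_size
    for pattern in ("*.fasta", "*.fa")
    for f in DOWNLOAD_DIR.rglob(pattern)
)
print(f"FASTA databases on disk: {total_bytes / 1e9:.0f} GB")
```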

I think it is because the confidence score for monomer mode is 'plddt', while for multimer mode it is 'iptm+ptm'. https://github.com/deepmind/alphafold/blob/c42a96f3a5b6179484b5f0b936e3dd0c9b08fde1/run_alphafold.py#L267 The values in the B-factor column are 'plddt' in both modes. https://github.com/deepmind/alphafold/blob/c42a96f3a5b6179484b5f0b936e3dd0c9b08fde1/run_alphafold.py#L231
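
So if you just want the per-residue confidence regardless of the mode, you can read it back from the B-factor column. A minimal sketch with Biopython (the file name assumes the default `ranked_0.pdb` output):

```python
# Recover per-residue pLDDT from the B-factor column of an AlphaFold output
# PDB; both monomer and multimer models write pLDDT there.
from Bio.PDB import PDBParser

structure = PDBParser(QUIET=True).get_structure("pred", "ranked_0.pdb")
plddt_per_residue = {}
for residue in structure.get_residues():
    if "CA" in residue:  # one value per residue; all atoms share the same pLDDT
        chain_id = residue.get_parent().id
        plddt_per_residue[(chain_id, residue.id[1])] = residue["CA"].get_bfactor()

mean_plddt = sum(plddt_per_residue.values()) / len(plddt_per_residue)
print(f"mean pLDDT: {mean_plddt:.1f}")
```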

I think the easiest way is to prepare dummy databases and a dummy obsolete.txt file. I uploaded my dummy dbs here: https://github.com/yamule/alphafold/tree/sep/dummy_database Please check readme.txt (it is better to specify the paths as absolute paths).
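
For illustration only, a dummy database can be as small as a placeholder FASTA per database plus an empty obsolete file; the exact file names and which flags to point at them should follow readme.txt in the repository above (the names below are just examples):

```python
# Illustrative sketch of building dummy databases: one tiny placeholder FASTA
# per database you want to skip, plus an empty obsolete file. File names here
# are examples; follow readme.txt in the linked repository for the real set.
from pathlib import Path

dummy_dir = Path("/absolute/path/to/dummy_database")  # use absolute paths
dummy_dir.mkdir(parents=True, exist_ok=True)

dummy_fasta = ">dummy\nGGGGG\n"  # one short placeholder sequence
for name in ("uniref90.fasta", "mgnify.fasta", "uniprot.fasta"):
    (dummy_dir / name).write_text(dummy_fasta)

(dummy_dir / "obsolete.txt").write_text("")  # empty obsolete-PDB list
```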

I think the problem was caused by PyTorch version 1.9.0. Upgrading PyTorch to version 1.10.2 solved it, and the error message now provides much more useful information.

I reported this to the PDB help desk, and they said that there is an issue on their side and they will fix it. But I don't know how it will be fixed...

Do you have the unrelaxed PDB? And does it have many atom clashes? In the AlphaFold code, **I thought** that was caused by disulfide bonds being newly created or discarded during relaxation (so...
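
If you want to check quickly, something like this counts suspiciously close heavy-atom pairs in the unrelaxed model (the file name and the 2.0 Å cutoff are placeholders, not AlphaFold defaults):

```python
# Rough clash check on the unrelaxed model: count heavy-atom pairs from
# non-adjacent residues that sit closer than an (assumed) 2.0 Å cutoff.
from Bio.PDB import PDBParser, NeighborSearch

structure = PDBParser(QUIET=True).get_structure("m", "unrelaxed_model_1.pdb")
atoms = [a for a in structure.get_atoms() if a.element != "H"]

def bonded_neighbours(a, b):
    """True for pairs within one residue or in sequence-adjacent residues."""
    ra, rb = a.get_parent(), b.get_parent()
    if ra is rb:
        return True
    ca, cb = ra.get_parent(), rb.get_parent()
    return ca is cb and abs(ra.id[1] - rb.id[1]) <= 1

clashes = [
    (a, b) for a, b in NeighborSearch(atoms).search_all(2.0)
    if not bonded_neighbours(a, b)
]
print(f"suspiciously close heavy-atom pairs: {len(clashes)}")
```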

It may be a permissions issue. https://github.com/deepmind/alphafold/issues/202

'uniprot.fasta' is generated from 2 files: 'uniprot_trembl.fasta(.gz)' and 'uniprot_sprot.fasta(.gz)'. 'uniprot_trembl.fasta(.gz)' is quite big, so it may take some hours. If you don't find 'uniprot_sprot.fasta(.gz)', it might mean the download process...
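
As far as I know, 'uniprot.fasta' is just the two source files decompressed and concatenated, roughly like this (paths are placeholders):

```python
# Sketch of how 'uniprot.fasta' gets produced: decompress the two source
# files and concatenate them. TrEMBL is huge, so this step takes a while.
import gzip
import shutil
from pathlib import Path

sources = [Path("uniprot_trembl.fasta.gz"), Path("uniprot_sprot.fasta.gz")]
with open("uniprot.fasta", "wb") as out:
    for gz in sources:
        with gzip.open(gz, "rb") as f:
            shutil.copyfileobj(f, out)
```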

It seems that the MSA is too large. How about using the reduced_dbs preset (--db_preset=reduced_dbs)?