torchdrug
[Bug] AttributeError: can't set attribute
In the retrosynthesis tutorial, this synthon completion code
synthon_optimizer = torch.optim.Adam(synthon_task.parameters(), lr=1e-3)
synthon_solver = core.Engine(synthon_task, synthon_train, synthon_valid,
                             synthon_test, synthon_optimizer,
                             gpus=[0], batch_size=128)
synthon_solver.train(num_epoch=1)
synthon_solver.evaluate("valid")
synthon_solver.save("g2gs_synthon_model.pth")
gives the error below:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [13], in <cell line: 11>()
7 synthon_optimizer = torch.optim.Adam(synthon_task.parameters(), lr=1e-3)
8 synthon_solver = core.Engine(synthon_task, synthon_train, synthon_valid,
9 synthon_test, synthon_optimizer,
10 gpus=[0], batch_size=128)
---> 11 synthon_solver.train(num_epoch=1)
12 synthon_solver.evaluate("valid")
13 synthon_solver.save("g2gs_synthon_model.pth")
File /usr/local/lib/python3.9/dist-packages/torchdrug/core/engine.py:155, in Engine.train(self, num_epoch, batch_per_epoch)
152 if self.device.type == "cuda":
153 batch = utils.cuda(batch, device=self.device)
--> 155 loss, metric = model(batch)
156 if not loss.requires_grad:
157 raise RuntimeError("Loss doesn't require grad. Did you define any loss in the task?")
File /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.9/dist-packages/torchdrug/tasks/retrosynthesis.py:592, in SynthonCompletion.forward(self, batch)
589 all_loss = torch.tensor(0, dtype=torch.float32, device=self.device)
590 metric = {}
--> 592 pred, target = self.predict_and_target(batch, all_loss, metric)
593 node_in_pred, node_out_pred, bond_pred, stop_pred = pred
594 node_in_target, node_out_target, bond_target, stop_target, size = target
File /usr/local/lib/python3.9/dist-packages/torchdrug/tasks/retrosynthesis.py:984, in SynthonCompletion.predict_and_target(self, batch, all_loss, metric)
981 with reactant.graph():
982 reactant.reaction = batch["reaction"]
--> 984 graph1, node_in_target1, node_out_target1, bond_target1, stop_target1 = self.all_edge(reactant, synthon)
985 graph2, node_in_target2, node_out_target2, bond_target2, stop_target2 = self.all_stop(reactant, synthon)
987 graph = self._cat([graph1, graph2])
File /usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
24 @functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
File /usr/local/lib/python3.9/dist-packages/torchdrug/tasks/retrosynthesis.py:557, in SynthonCompletion.all_edge(self, reactant, synthon)
555 assert (graph.num_edges % 2 == 0).all()
556 # node / edge features may change because we mask some nodes / edges
--> 557 graph, feature_valid = self._update_molecule_feature(graph)
559 return graph[feature_valid], node_in_target[feature_valid], node_out_target[feature_valid], \
560 bond_target[feature_valid], stop_target[feature_valid]
File /usr/local/lib/python3.9/dist-packages/torchdrug/tasks/retrosynthesis.py:398, in SynthonCompletion._update_molecule_feature(self, graphs)
395 bond_type[edge_mask] = new_graphs.bond_type.to(device=graphs.device)
397 with graphs.node():
--> 398 graphs.node_feature = node_feature
399 with graphs.edge():
400 graphs.edge_feature = edge_feature
File /usr/local/lib/python3.9/dist-packages/torchdrug/data/graph.py:160, in Graph.__setattr__(self, key, value)
158 if hasattr(self, "meta_dict"):
159 self._check_attribute(key, value)
--> 160 super(Graph, self).__setattr__(key, value)
File /usr/local/lib/python3.9/dist-packages/torchdrug/core/core.py:84, in _MetaContainer.__setattr__(self, key, value)
82 if types:
83 self.meta_dict[key] = types.copy()
---> 84 self._setattr(key, value)
File /usr/local/lib/python3.9/dist-packages/torchdrug/core/core.py:93, in _MetaContainer._setattr(self, key, value)
92 def _setattr(self, key, value):
---> 93 return super(_MetaContainer, self).__setattr__(key, value)
AttributeError: can't set attribute
Hi, I have the same problem. May I ask if you have solved it?
Hi @FanmengWang, I couldn't figure it out yet!
Seems like I finally solved the issue, but then hit another one. The problem is with setting an attribute in retrosynthesis.py (graphs.node_feature = node_feature), because in molecule.py node_feature is defined only as a read-only property:
@property
def node_feature(self):
    return self.atom_feature
So we need to add this:
@node_feature.setter
def node_feature(self, value):
    self._node_feature = value
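Alternatively (just a guess on my part, not necessarily what the upstream fix does), the setter could delegate to atom_feature, so that the value you set is the same one the existing getter returns:
@node_feature.setter
def node_feature(self, value):
    # assumed variant: keep the property backed by atom_feature, matching the getter above
    self.atom_feature = value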
Then the attribute error is gone. However, now I get another issue: rdkit.Chem.rdchem.KekulizeException: Can't kekulize mol. Unkekulized atoms: 20. It doesn't seem to be caused by adding these lines of code. You might get a different atom index, since I've shortened the dataset so it's a bit different. Let me know if you have the same issue, please.
Hi @DimGorr, I tried these code changes and ended up with the same exception:
08:53:36 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
08:53:36 Epoch 0 begin
08:53:37 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
08:53:37 bond accuracy: 0.690909
08:53:37 bond ce loss: 0.947608
08:53:37 node in accuracy: 0.030303
08:53:37 node in ce loss: 2.71172
08:53:37 node out accuracy: 0
08:53:37 node out ce loss: 3.50957
08:53:37 stop accuracy: 0.546075
08:53:37 stop bce loss: 0.718129
08:53:37 total accuracy: 0.0170648
08:53:37 total loss: 7.88703
[08:53:37] Can't kekulize mol. Unkekulized atoms: 4
---------------------------------------------------------------------------
KekulizeException Traceback (most recent call last)
Input In [16], in <cell line: 14>()
10 synthon_optimizer = torch.optim.Adam(synthon_task.parameters(), lr=1e-3)
11 synthon_solver = core.Engine(synthon_task, synthon_train, synthon_valid,
12 synthon_test, synthon_optimizer,
13 gpus=[0], batch_size=128)
---> 14 synthon_solver.train(num_epoch=1)
15 synthon_solver.evaluate("valid")
16 synthon_solver.save("g2gs_synthon_model.pth")
File /usr/local/lib/python3.9/dist-packages/torchdrug/core/engine.py:155, in Engine.train(self, num_epoch, batch_per_epoch)
152 if self.device.type == "cuda":
153 batch = utils.cuda(batch, device=self.device)
--> 155 loss, metric = model(batch)
156 if not loss.requires_grad:
157 raise RuntimeError("Loss doesn't require grad. Did you define any loss in the task?")
File /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.9/dist-packages/torchdrug/tasks/retrosynthesis.py:592, in SynthonCompletion.forward(self, batch)
589 all_loss = torch.tensor(0, dtype=torch.float32, device=self.device)
590 metric = {}
--> 592 pred, target = self.predict_and_target(batch, all_loss, metric)
593 node_in_pred, node_out_pred, bond_pred, stop_pred = pred
594 node_in_target, node_out_target, bond_target, stop_target, size = target
File /usr/local/lib/python3.9/dist-packages/torchdrug/tasks/retrosynthesis.py:984, in SynthonCompletion.predict_and_target(self, batch, all_loss, metric)
981 with reactant.graph():
982 reactant.reaction = batch["reaction"]
--> 984 graph1, node_in_target1, node_out_target1, bond_target1, stop_target1 = self.all_edge(reactant, synthon)
985 graph2, node_in_target2, node_out_target2, bond_target2, stop_target2 = self.all_stop(reactant, synthon)
987 graph = self._cat([graph1, graph2])
File /usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
24 @functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
File /usr/local/lib/python3.9/dist-packages/torchdrug/tasks/retrosynthesis.py:557, in SynthonCompletion.all_edge(self, reactant, synthon)
555 assert (graph.num_edges % 2 == 0).all()
556 # node / edge features may change because we mask some nodes / edges
--> 557 graph, feature_valid = self._update_molecule_feature(graph)
559 return graph[feature_valid], node_in_target[feature_valid], node_out_target[feature_valid], \
560 bond_target[feature_valid], stop_target[feature_valid]
File /usr/local/lib/python3.9/dist-packages/torchdrug/tasks/retrosynthesis.py:384, in SynthonCompletion._update_molecule_feature(self, graphs)
382 valid = [mol is not None for mol in mols]
383 valid = torch.tensor(valid, device=graphs.device)
--> 384 new_graphs = type(graphs).from_molecule(mols, **self.feature_kwargs)
386 node_feature = torch.zeros(graphs.num_node, *new_graphs.node_feature.shape[1:],
387 dtype=new_graphs.node_feature.dtype, device=graphs.device)
388 edge_feature = torch.zeros(graphs.num_edge, *new_graphs.edge_feature.shape[1:],
389 dtype=new_graphs.edge_feature.dtype, device=graphs.device)
File /usr/local/lib/python3.9/dist-packages/torchdrug/utils/decorator.py:115, in deprecated_alias.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
112 warnings.warn("%s(): argument `%s` is deprecated in favor of `%s`" % (func.__name__, key, value))
113 kwargs[value] = kwargs.pop(key)
--> 115 return func(*args, **kwargs)
File /usr/local/lib/python3.9/dist-packages/torchdrug/data/molecule.py:716, in PackedMolecule.from_molecule(cls, mols, atom_feature, bond_feature, mol_feature, with_hydrogen, kekulize)
714 mol = Chem.AddHs(mol)
715 if kekulize:
--> 716 Chem.Kekulize(mol)
718 for atom in mol.GetAtoms():
719 atom_type.append(atom.GetAtomicNum())
KekulizeException: Can't kekulize mol. Unkekulized atoms: 4
Hi @bhadreshpsavani @DimGorr, I fixed this problem (AttributeError: can't set attribute) in a similar way and then hit the same new problem as you. I used try...except to work around it:
if kekulize:
    try:
        Chem.Kekulize(mol)
    except:
        # fall back to the non-kekulized form when RDKit cannot kekulize the molecule
        kekulize = False
for atom in mol.GetAtoms():
    atom_type.append(atom.GetAtomicNum())
This approach may seem odd, but it works: we get results similar to the authors' for the synthon completion part.
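If it helps, here is a small standalone check (my own sketch, independent of torchdrug) to see whether a given molecule kekulizes under your installed RDKit:
from rdkit import Chem

def kekulizes(smiles):
    # True if RDKit can parse and kekulize the molecule, False otherwise.
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    try:
        Chem.Kekulize(mol)  # the same call that fails inside PackedMolecule.from_molecule
        return True
    except Chem.rdchem.KekulizeException:
        return False

print(kekulizes("c1ccccc1"))  # benzene -> True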
@FanmengWang @bhadreshpsavani After applying except: pass, I got an error different from yours:
Traceback (most recent call last):
File "tests.py", line 133, in <module>
predictions = synthon_task.predict_reactant(batch, num_beam=10, max_prediction=5)
File "/root/anaconda3/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/root/anaconda3/lib/python3.7/site-packages/torchdrug/tasks/retrosynthesis.py", line 913, in predict_reactant
new_graph = self._apply_action(graph, action, logp)
File "/root/anaconda3/lib/python3.7/site-packages/torchdrug/tasks/retrosynthesis.py", line 865, in _apply_action
meta_dict=meta_dict, **data_dict)
File "/root/anaconda3/lib/python3.7/site-packages/torchdrug/data/molecule.py", line 617, in __init__
offsets=offsets, atom_type=atom_type, bond_type=bond_type, **kwargs)
File "/root/anaconda3/lib/python3.7/site-packages/torchdrug/data/graph.py", line 1104, in __init__
num_relation=num_relation, **kwargs)
File "/root/anaconda3/lib/python3.7/site-packages/torchdrug/data/molecule.py", line 58, in __init__
super(Molecule, self).__init__(edge_list=edge_list, **kwargs)
File "/root/anaconda3/lib/python3.7/site-packages/torchdrug/data/graph.py", line 86, in __init__
self.edge_feature = torch.as_tensor(edge_feature, device=self.device)
File "/root/anaconda3/lib/python3.7/site-packages/torchdrug/data/graph.py", line 161, in __setattr__
self._check_attribute(key, value)
File "/root/anaconda3/lib/python3.7/site-packages/torchdrug/data/graph.py", line 1179, in _check_attribute
(key, self.num_edge, value.shape))
ValueError: Expect edge attribute `edge_feature` to have shape (1228, *), but found torch.Size([1160, 18])
Love this tutorial :) Please let me know if you find out how to resolve it.
@DimGorr Hi, I reviewed the tutorial and found that this part is not actually necessary for the retrosynthesis task, so I am skipping it for now. Maybe you can also ignore this part for the time being:
batch = []
reaction_set = set()
for sample in synthon_valid:
    if sample["reaction"] not in reaction_set:
        reaction_set.add(sample["reaction"])
        batch.append(sample)
        if len(batch) == 4:
            break
batch = data.graph_collate(batch)
batch = utils.cuda(batch)
reactants, synthons = batch["graph"]
reactants = reactants.ion_to_molecule()
predictions = synthon_task.predict_reactant(batch, num_beam=10, max_prediction=5)
synthon_id = -1
i = 0
titles = []
graphs = []
for prediction in predictions:
    if synthon_id != prediction.synthon_id:
        synthon_id = prediction.synthon_id.item()
        i = 0
        graphs.append(reactants[synthon_id])
        titles.append("Truth %d" % synthon_id)
    i += 1
    graphs.append(prediction)
    if reactants[synthon_id] == prediction:
        titles.append("Prediction %d-%d, Correct!" % (synthon_id, i))
    else:
        titles.append("Prediction %d-%d" % (synthon_id, i))
# reset attributes so that pack can work properly
mols = [graph.to_molecule() for graph in graphs]
graphs = data.PackedMolecule.from_molecule(mols)
graphs.visualize(titles, save_file="uspto50k_synthon_valid.png", num_col=6)
@FanmengWang Are you sure we can skip this part? It is the beam search part and, for instance, num_synthon_beam is used even in the init of Retrosynthesis in retrosynthesis.py. And this part modifies mols and graphs, which seem like they might be used in retrosynthesis.py and cause this error. Please let me know if I misunderstood something.
@DimGorr Hi, I think we actually just need g2gs_reaction_model.pth and g2gs_synthon_model.pth for the retrosynthesis task. Please let me know if I misunderstood it:
solver.load("g2gs_reaction_model.pth", load_optimizer=False)
solver.load("g2gs_synthon_model.pth", load_optimizer=False)
solver.evaluate("valid")
@bhadreshpsavani @DimGorr Hi, after the CenterIdentification and SynthonCompletion parts, do you also get the following problem in the retrosynthesis part? Thanks!
Traceback (most recent call last):
File "C:/Users/Mn/Desktop/learn_torchdrug/retrosynthesis.py", line 102, in <module>
solver.evaluate("valid")
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchdrug\core\engine.py", line 206, in evaluate
pred, target = model.predict_and_target(batch)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchdrug\tasks\task.py", line 27, in predict_and_target
return self.predict(batch, all_loss, metric), self.target(batch)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchdrug\tasks\retrosynthesis.py", line 1101, in predict
reactant = self.synthon_completion.predict_reactant(synthon_batch, self.num_synthon_beam, self.max_prediction)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchdrug\tasks\retrosynthesis.py", line 885, in predict_reactant
synthon = synthon[feature_valid]
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchdrug\data\graph.py", line 1474, in __getitem__
return self.subbatch(index)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchdrug\data\graph.py", line 1638, in subbatch
return self.graph_mask(index, compact=True)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchdrug\data\graph.py", line 1616, in graph_mask
data_dict, meta_dict = self.data_mask(node_index, edge_index, graph_index=index)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchdrug\data\graph.py", line 1433, in data_mask
v = v[graph_index]
RuntimeError: CUDA error: device-side assert triggered
@FanmengWang Hi! Concerning your first note, yeah I agree, I mixed something up; the beam search is used in a different part, so it should be fine. And yes, I have the same error.
@DimGorr Ok, I see. If you have any idea about this problem, please tell me. Thanks!
File /usr/local/lib/python3.9/dist-packages/torchdrug/core/core.py:93, in _MetaContainer._setattr(self, key, value)
92 def _setattr(self, key, value):
---> 93 return super(_MetaContainer, self).__setattr__(key, value)
AttributeError: can't set attribute
This is due to a mistake in the commit. @DimGorr Your solution is right. I just fixed it in d282313.
I just found another bug in the dataset. You may get an incorrect node feature shape for the synthon solver. This is fixed in a1df0a2. If you are using an old commit, you can use the following lines to construct the dataset instead,
synthon_dataset = datasets.USPTO50k("~/molecule-datasets/", as_synthon=True,
                                    atom_feature="synthon_completion",
                                    kekulize=True)
which replaces the argument node_feature with atom_feature.
File /usr/local/lib/python3.9/dist-packages/torchdrug/data/molecule.py:716, in PackedMolecule.from_molecule(cls, mols, atom_feature, bond_feature, mol_feature, with_hydrogen, kekulize)
714 mol = Chem.AddHs(mol)
715 if kekulize:
--> 716 Chem.Kekulize(mol)
718 for atom in mol.GetAtoms():
719 atom_type.append(atom.GetAtomicNum())
KekulizeException: Can't kekulize mol. Unkekulized atoms: 4
I didn't reproduce this problem on my side, at least for molecules in USPTO50k. I guess it might be an issue with old versions of RDKit. I am using version 2020.09.1.0 installed from conda, and the training part is okay.
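In case it helps, you can check which RDKit build you are actually running with:
from rdkit import rdBase
print(rdBase.rdkitVersion)  # prints the RDKit version string, e.g. 2020.09.1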
File "/root/anaconda3/lib/python3.7/site-packages/torchdrug/data/graph.py", line 161, in __setattr__ self._check_attribute(key, value) File "/root/anaconda3/lib/python3.7/site-packages/torchdrug/data/graph.py", line 1179, in _check_attribute (key, self.num_edge, value.shape)) ValueError: Expect edge attribute `edge_feature` to have shape (1228, *), but found torch.Size([1160, 18])
This also happens to me. I am looking into it.
File "/root/anaconda3/lib/python3.7/site-packages/torchdrug/data/graph.py", line 161, in __setattr__ self._check_attribute(key, value) File "/root/anaconda3/lib/python3.7/site-packages/torchdrug/data/graph.py", line 1179, in _check_attribute (key, self.num_edge, value.shape)) ValueError: Expect edge attribute `edge_feature` to have shape (1228, *), but found torch.Size([1160, 18])
This is a compatibility issue caused by an update in f8d2de6. I just fixed it in 342c87e.
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchdrug\data\graph.py", line 1616, in graph_mask data_dict, meta_dict = self.data_mask(node_index, edge_index, graph_index=index) File "D:\Anaconda3\envs\pytorch\lib\site-packages\torchdrug\data\graph.py", line 1433, in data_mask v = v[graph_index] RuntimeError: CUDA error: device-side assert triggered
This is actually the same issue as #115 when run on CPU. It is already fixed in #115.
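More generally, a device-side assert hides the real error. If you hit one elsewhere, rerunning the failing step on CPU (gpus=None in core.Engine), or making CUDA launches synchronous, usually exposes the actual out-of-range index:
import os
# Set this before any CUDA work starts: kernels then run synchronously, so the
# Python traceback points at the operation that actually triggered the assert.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"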
@KiddoZhu Ok, thanks! I will try it again.
This issue is resolved with RDKit version 2020.09.1.0 installed from conda.
Maybe we should add this to the installation step so others don't face the same issue.
I am closing the issue.