Custom GraphGym config not working
🐛 Describe the bug
Registering custom configs in GraphGym does not work. Even the custom configs that are specified in the example cannot be accessed. To reproduce:
- Clone PyG from master
- Try to access the example custom configs in `graphgym\main.py` by adding the following after line 31:
  print(cfg.example_arg)
- Run `run_single.sh`

If I do this I get the following error:
Traceback (most recent call last):
File "...\pytorch_geometric\graphgym\main.py", line 32, in <module>
print(cfg.example_arg)
File "...\.conda\envs\test\lib\site-packages\yacs\config.py", line 141, in __getattr__
raise AttributeError(name)
AttributeError: example_arg
Environment
- PyG version: 2.1.0
- PyTorch version: 1.12.1
- OS: Windows 11
- Python version: 3.9.12
- CUDA/cuDNN version: Running on CPU
- How you installed PyTorch and PyG (`conda`, `pip`, source): via `pip install git+https://github.com/pyg-team/pytorch_geometric`
Thanks for reporting. I think this is fully intentional; the yacs library we use internally does not support this either. We only want users to specify config parameters that GraphGym uses internally. What would be the use case for this?
Sorry, maybe my minimal working example was a bit too minimal.
I want to write a custom encoder whose options can then be set via the `.yaml` file. But if I try to set a new value for a custom config, the following error occurs (here I tried to set a new value for the example custom config given here by adding a line containing `example_arg: test` to the `.yaml` file):
Traceback (most recent call last):
File "C:\Users\morit\workspace\pytorch_geometric\graphgym\main.py", line 27, in <module>
load_cfg(cfg, args)
File "C:\Users\morit\.conda\envs\test\lib\site-packages\torch_geometric\graphgym\config.py", line 503, in load_cfg
cfg.merge_from_file(args.cfg_file)
File "C:\Users\morit\.conda\envs\test\lib\site-packages\yacs\config.py", line 213, in merge_from_file
self.merge_from_other_cfg(cfg)
File "C:\Users\morit\.conda\envs\test\lib\site-packages\yacs\config.py", line 217, in merge_from_other_cfg
_merge_a_into_b(cfg_other, self, self, [])
File "C:\Users\morit\.conda\envs\test\lib\site-packages\yacs\config.py", line 491, in _merge_a_into_b
raise KeyError("Non-existent config key: {}".format(full_key))
KeyError: 'Non-existent config key: example_arg'
and when I try to access it, for example in a custom encoder:
import torch

from torch_geometric.graphgym.config import cfg
from torch_geometric.graphgym.register import register_node_encoder


@register_node_encoder('example')
class ExampleNodeEncoder(torch.nn.Module):
    def __init__(self, emb_dim, num_classes=None):
        super().__init__()
        # Some dummy code to throw the error
        self.example = cfg.example_arg

        self.encoder = torch.nn.Embedding(num_classes, emb_dim)
        torch.nn.init.xavier_uniform_(self.encoder.weight.data)

    def forward(self, batch):
        # Encode just the first dimension if more exist
        batch.x = self.encoder(batch.x[:, 0])
        return batch
I get a similar error to the one above.
I thought this is what the custom configs are for, or did I misunderstand something?
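For reference, the kind of custom config registration I mean looks roughly like this (a minimal sketch modelled on the example config in `graphgym/custom_graphgym/config/`; the exact contents of that file may differ):

```python
from torch_geometric.graphgym.register import register_config


@register_config('example')
def set_cfg_example(cfg):
    # Register a default so the key exists and can later be
    # overridden from the .yaml file, e.g. `example_arg: test`.
    cfg.example_arg = 'example'
```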
I think you need to register the new attribute in cfg as well. For example:
@register_node_encoder('example')
class ExampleNodeEncoder(torch.nn.Module):
    pass


cfg.example_arg = default_value
That works, thank you. But then what is the use case for config/example.py? I tested it, and it also works when specifying the attribute only where you suggested.
By the way: While testing this I found another bug and tried to fix it here: #5243
Oh, you are right. You can also register a new config and initialize cfg parameters. I think both approaches work fine here. @JiaxuanYou can give more insights on which way is preferred.
I just ran into the same issue. I am trying to create custom config args to specify in the yaml file, so that I can also use these custom configs in my other custom graphgym modules.
It does seem like these custom configs are supposed to be set in lines 448-450 in torch_geometric/graphgym/config.py in set_cfg():
# Set user customized cfgs
for func in register.config_dict.values():
    func(cfg)
However, it does not work as intended, because register.config_dict is still empty the first time set_cfg() is run. This is because importing register_config first goes through torch_geometric/graphgym/__init__.py, which imports a number of other modules; these import the config module first and therefore initialize cfg without ever applying the user-defined configs.
I was able to fix this issue by running set_cfg(cfg) again in my main before running load_cfg(cfg, args).
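A minimal sketch of the entry point with that fix (assuming the standard GraphGym `main.py` structure; the name of the package holding the custom modules and the argument parsing may look different in your setup):

```python
from torch_geometric.graphgym.cmd_args import parse_args
from torch_geometric.graphgym.config import cfg, load_cfg, set_cfg

import custom_graphgym  # noqa  (importing this runs the register_* calls)

args = parse_args()
set_cfg(cfg)         # re-apply defaults; register.config_dict is now populated
load_cfg(cfg, args)  # the .yaml file can now set the custom keys
```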
@JiaxuanYou Can you take a look?
A bit more on this: it turns out that running set_cfg(cfg) before load_cfg(cfg, args) only solves part of the problem. It is still not possible to create custom modules that use custom configs, because those configs are not yet available when the modules are registered. For example, if I try to create a custom activation function:
from functools import partial

import torch.nn as nn

from torch_geometric.graphgym.config import cfg
from torch_geometric.graphgym.register import register_act


class CustomActivation(nn.Module):
    def __init__(self, custom_arg):
        super().__init__()
        self.custom_arg = custom_arg

    def forward(self, x):
        ...


register_act("custom_act", partial(CustomActivation, custom_arg=cfg.custom_act_arg))
This will not work, because custom_act_arg does not yet exist.
I solved the issue by creating all my custom configs in the module's __init__.py. In your example, I would create it in torch_geometric/graphgym/custom_graphgym/act/__init__.py. Then the attribute already exists when you try to use it, but with this approach the torch_geometric/graphgym/custom_graphgym/config module has no purpose anymore. I guess it could be removed altogether before anyone else gets confused.
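A minimal sketch of what I mean, reusing the activation example from above (the file path and the default value are just placeholders for illustration):

```python
# custom_graphgym/act/__init__.py
from functools import partial

import torch.nn as nn

from torch_geometric.graphgym.config import cfg
from torch_geometric.graphgym.register import register_act

# Create the custom config here so the key already exists when the
# activation is registered below and when the .yaml file is merged.
cfg.custom_act_arg = 1.0  # placeholder default


class CustomActivation(nn.Module):
    def __init__(self, custom_arg):
        super().__init__()
        self.custom_arg = custom_arg

    def forward(self, x):
        return x  # placeholder


register_act('custom_act', partial(CustomActivation, custom_arg=cfg.custom_act_arg))
```

Note that `partial` binds the value of `cfg.custom_act_arg` at registration time, so if the argument should be overridable from the `.yaml` file it is probably safer to read `cfg.custom_act_arg` inside `__init__` instead.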
Any updates on this? For me, what worked (partially) was to run set_cfg(cfg) as @do-lania suggested.