
omegaconf.errors.ConfigKeyError: Key 'config' not in 'InferredConfig'

Open mjiangtao opened this issue 7 months ago • 1 comment

When I run `new_model = TabularModel.load_model("test_save1")` (line 114 of my script 22.py), it raises `omegaconf.errors.ConfigKeyError: Key 'config' not in 'InferredConfig'`. The config printed before the failure is:

[5 rows x 3 columns]
{'target': ['CVDdeath2024'], 'continuous_cols': ['Age', 'BMI', 'Waist', 'Hipline', 'Neckcircumference', 'meanSBP', 'meanDBP', 'HR', 'ALT', 'AST', 'GGT', 'ALP', 'TP', 'TBIL', 'BT', 'UREA', 'CREA', 'UA', 'TG', 'CHOL', 'LDLc1', 'HDLc1', 'CK', 'WBC', 'RBC', 'HGB', 'PLT', 'HbA1C', 'eGFR', 'SIRI', 'PRI', 'QRSI', 'QT', 'QTc', 'PEleaxis', 'REleaxis', 'TEleaxis', 'Ravl', 'SV1', 'SV3', 'SV4', 'RV5', 'RV6', 'Smax', 'SmaxD', 'CbulbD', 'ICADmean', 'IMTmean', 'CCAD', 'CPT'], 'categorical_cols': ['Sex', 'Currentsmoking', 'Currentdrinking', 'Lackexercise', 'Eduaction', 'overweightobesity', 'HP', 'DM', 'Dyslipidemia', 'CAD', 'Stroke', 'Electrocardiogram', 'stenosis_Grade', 'CAS', 'dzT'], 'date_columns': [], 'encode_date_columns': True, 'validation_split': 0.2, 'continuous_feature_transform': None, 'normalize_continuous_features': True, 'quantile_noise': 0, 'num_workers': 0, 'pin_memory': True, 'handle_unknown_categories': True, 'handle_missing_values': True, 'pickle_protocol': 2, 'dataloader_kwargs': {}, 'task': 'classification', 'head': 'LinearHead', 'head_config': {'layers': ''}, 'embedding_dims': None, 'embedding_dropout': 0.0, 'batch_norm_continuous_input': True, 'learning_rate': 0.001, 'loss': 'CrossEntropyLoss', 'metrics': ['f1_score', 'accuracy'], 'metrics_prob_input': [False, False], 'metrics_params': [{'num_classes': 2, 'average': 'macro'}, {}], 'target_range': None, 'virtual_batch_size': None, 'seed': 42, '_module_src': 'models.tab_transformer', '_model_name': 'TabTransformerModel', '_backbone_name': 'TabTransformerBackbone', '_config_name': 'TabTransformerConfig', 'input_embed_dim': 32, 'embedding_initialization': 'kaiming_uniform', 'embedding_bias': False, 'share_embedding': True, 'share_embedding_strategy': 'add', 'shared_embedding_fraction': 0.25, 'num_heads': 8, 'num_attn_blocks': 6, 'transformer_head_dim': None, 'attn_dropout': 0.1, 'add_norm_dropout': 0.1, 'ff_dropout': 0.1, 'ff_hidden_multiplier': 4, 'transformer_activation': 'GEGLU', 'batch_size': 512, 'data_aware_init_batch_size': 2000, 'fast_dev_run': False, 'max_epochs': 20, 'min_epochs': 10, 'max_time': None, 'accelerator': 'auto', 'devices': -1, 'devices_list': None, 'accumulate_grad_batches': 1, 'auto_lr_find': False, 'auto_select_gpus': True, 'check_val_every_n_epoch': 1, 'gradient_clip_val': 0.0, 'overfit_batches': 0.0, 'deterministic': False, 'profiler': None, 'early_stopping': 'valid_loss', 'early_stopping_min_delta': 0.001, 'early_stopping_mode': 'min', 'early_stopping_patience': 3, 'early_stopping_kwargs': {}, 'checkpoints': 'valid_loss', 'checkpoints_path': 'saved_models', 'checkpoints_every_n_epochs': 1, 'checkpoints_name': None, 'checkpoints_mode': 'min', 'checkpoints_save_top_k': 1, 'checkpoints_kwargs': {}, 'load_best': True, 'track_grad_norm': -1, 'progress_bar': 'rich', 'precision': '32', 'trainer_kwargs': {}, 'project_name': 'GATE Dev', 'run_name': 'gate_w_t_softmax_w_init', 'exp_watch': 'gradients', 'log_target': 'wandb', 'log_logits': True, 'exp_log_freq': 100, 'optimizer': 'Adam', 'optimizer_params': {}, 'lr_scheduler': None, 'lr_scheduler_params': {}, 'lr_scheduler_monitor_metric': 'valid_loss', 'categorical_dim': 15, 'enable_checkpointing': True}

The kwargs passed to load_from_checkpoint contain that same dict nested under 'config', plus the inferred config:

{'config': {...same config dict as above...}, 'inferred_config': {'categorical_dim': 15, 'continuous_dim': 50, 'output_dim': 2, 'output_cardinality': [2], 'categorical_cardinality': [3, 3, 3, 3, 5, 3, 3, 3, 3, 3, 3, 3, 5, 3, 3], 'embedding_dims': [[3, 2], [3, 2], [3, 2], [3, 2], [5, 3], [3, 2], [3, 2], [3, 2], [3, 2], [3, 2], [3, 2], [3, 2], [5, 3], [3, 2], [3, 2]], 'embedded_cat_dim': 32}}

Traceback (most recent call last):
  File "D:\mjt\pytorch_tabular\22.py", line 114, in <module>
    new_model = TabularModel.load_model("test_save1")
  File "D:\mjt\pytorch_tabular\src\pytorch_tabular\tabular_model.py", line 470, in load_model
    model = model_callable.load_from_checkpoint(
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\pytorch_lightning\utilities\model_helpers.py", line 125, in wrapper
    return self.method(cls, *args, **kwargs)
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\pytorch_lightning\core\module.py", line 1582, in load_from_checkpoint
    loaded = _load_from_checkpoint(
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\pytorch_lightning\core\saving.py", line 86, in _load_from_checkpoint
    checkpoint[cls.CHECKPOINT_HYPER_PARAMS_KEY].update(kwargs)
  File "C:\Users\User\.conda\envs\tabr\lib\_collections_abc.py", line 832, in update
    self[key] = other[key]
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\dictconfig.py", line 310, in __setitem__
    self._format_and_raise(
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\base.py", line 231, in _format_and_raise
    format_and_raise(
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\_utils.py", line 819, in format_and_raise
    _raise(ex, cause)
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\_utils.py", line 797, in _raise
    raise ex.with_traceback(sys.exc_info()[2])  # set env var OC_CAUSE=1 for full trace
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\dictconfig.py", line 308, in __setitem__
    self.__set_impl(key=key, value=value)
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\dictconfig.py", line 318, in __set_impl
    self._set_item_impl(key, value)
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\basecontainer.py", line 545, in _set_item_impl
    self._validate_set(key, value)
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\dictconfig.py", line 180, in _validate_set
    target = self._get_node(key) if key is not None else self
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\dictconfig.py", line 475, in _get_node
    self._validate_get(key)
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\dictconfig.py", line 164, in _validate_get
    self._format_and_raise(
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\base.py", line 231, in _format_and_raise
    format_and_raise(
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\_utils.py", line 899, in format_and_raise
    _raise(ex, cause)
  File "C:\Users\User\.conda\envs\tabr\lib\site-packages\omegaconf\_utils.py", line 797, in _raise
    raise ex.with_traceback(sys.exc_info()[2])  # set env var OC_CAUSE=1 for full trace
omegaconf.errors.ConfigKeyError: Key 'config' not in 'InferredConfig'
    full_key: config
    object_type=InferredConfig

I have tried many methods to deal with it, but I can't fix it. Can you help me solve the problem? Thank you.
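
(For reference, line 114 of 22.py in the traceback is the load_model call; a minimal save-then-load flow of this shape would look roughly like the sketch below. The column lists are abbreviated, the data file is hypothetical, and the exact training script is not shown in the issue, so treat it as illustrative only.)

```python
import pandas as pd

from pytorch_tabular import TabularModel
from pytorch_tabular.config import DataConfig, OptimizerConfig, TrainerConfig
from pytorch_tabular.models import TabTransformerConfig

df = pd.read_csv("data.csv")  # hypothetical input file

data_config = DataConfig(
    target=["CVDdeath2024"],
    continuous_cols=["Age", "BMI"],   # abbreviated; full list is in the dump above
    categorical_cols=["Sex", "DM"],   # abbreviated
)
model_config = TabTransformerConfig(task="classification")
trainer_config = TrainerConfig(batch_size=512, max_epochs=20)
optimizer_config = OptimizerConfig()

tabular_model = TabularModel(
    data_config=data_config,
    model_config=model_config,
    optimizer_config=optimizer_config,
    trainer_config=trainer_config,
)
tabular_model.fit(train=df)

# Save the trained model, then reload it; the reload is where the error appears.
tabular_model.save_model("test_save1")
new_model = TabularModel.load_model("test_save1")
```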

mjiangtao · Apr 28 '25

Hello @mjiangtao! I'm Dosu, your friendly bot here to help you solve bugs, answer questions, and assist you in becoming a contributor. I'm here while you wait for a human maintainer. Let's tackle that issue together!

The error you're encountering, omegaconf.errors.ConfigKeyError: Key 'config' not in 'InferredConfig', means that the code tried to set a key named 'config' on an InferredConfig object, which does not define such a field. This issue typically arises when the configuration structure expected by the code does not match the structure that was actually saved or provided.
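
To see why OmegaConf refuses the key: a structured config built from a dataclass only accepts the fields that dataclass declares. Here is a minimal, self-contained sketch (InferredLike is a made-up stand-in, not the real InferredConfig):

```python
from dataclasses import dataclass

from omegaconf import OmegaConf
from omegaconf.errors import ConfigKeyError


@dataclass
class InferredLike:
    # Stand-in for InferredConfig: only these names are legal keys.
    categorical_dim: int = 0
    continuous_dim: int = 0


cfg = OmegaConf.structured(InferredLike)
cfg.categorical_dim = 15  # fine: the field exists

try:
    cfg["config"] = {"task": "classification"}  # not a declared field -> rejected
except ConfigKeyError as e:
    print(e)  # Key 'config' not in 'InferredLike'
```

The traceback above follows exactly this path: _load_from_checkpoint calls checkpoint['hyper_parameters'].update(kwargs), and because the stored hyper-parameters container is typed as InferredConfig, the incoming 'config' entry is rejected the same way.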

Here are a few steps you can take to resolve this issue:

  1. Check Configuration Structure: Ensure that the configuration file or object you are using has the correct structure. The InferredConfig class in pytorch_tabular expects certain attributes, such as categorical_dim, continuous_dim, output_dim, etc. Make sure these attributes are correctly defined in your configuration [1].

  2. Update Configuration Handling: If you are manually updating or merging configurations, ensure that the merge process correctly integrates the base configuration with the inferred configuration. The safe_merge_config function is used in pytorch_tabular to merge configurations safely, ensuring that existing values are preserved [2].

  3. Verify Model Loading Process: When loading a model, ensure that the configuration is correctly passed and updated. The TabularModel class uses the safe_merge_config function to handle configuration updates during model loading [2].

  4. Check for Missing Keys: If the error persists, manually inspect the configuration object (or the saved checkpoint's hyper-parameters) to ensure all required keys are present. You might need to add missing keys or adjust the configuration structure to match the expected format; a concrete inspection sketch follows this list.
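
As a concrete way to do the inspection in steps 1 and 4, you can open the checkpoint that load_model reads and look at what was stored under hyper_parameters. This is only a sketch: the checkpoint filename inside test_save1 is an assumption, so adjust it to whatever save_model actually wrote into that folder.

```python
import torch

# Hypothetical path: point this at the .ckpt file inside your saved-model folder.
# On recent PyTorch you may also need torch.load(..., weights_only=False) to
# unpickle non-tensor objects such as OmegaConf containers.
ckpt = torch.load("test_save1/model.ckpt", map_location="cpu")

hparams = ckpt.get("hyper_parameters")
print(type(hparams))  # plain dict, DictConfig, or an InferredConfig-typed container?
if hparams is not None:
    print(list(hparams.keys()))

# If the stored hyper-parameters expose only InferredConfig fields
# (categorical_dim, continuous_dim, output_dim, ...), then writing a new
# 'config' key into them is rejected, which matches the error in this issue.
```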

By following these steps, you should be able to resolve the ConfigKeyError and ensure that your configuration is correctly handled during the model loading process.

To continue talking to Dosu, mention @dosu.



dosubot[bot] · Apr 28 '25

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] · Jun 27 '25