Auto-PyTorch
[refactor] `_search` function in autoPytorch/api/base_task.py
Currently, comments explain what each section of the code is doing, but we can split those sections into separate functions. In fact, an overly long function typically ends up violating flake8's C901 ("too complex") check. You can also read here about why writing a long function is not good practice.
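For context, C901 is the McCabe cyclomatic-complexity check that ships with flake8; it is enabled by passing `--max-complexity=N` (or setting the `max-complexity` config option) and it reports any function whose number of branch points exceeds the limit. A minimal, hypothetical illustration (not code from this repository):

```python
# With `flake8 --max-complexity=3`, flake8 reports this function as
# "C901 'dispatch' is too complex (5)": each if/elif adds one branch to the
# cyclomatic complexity, just like the many blocks inside `_search` do.
def dispatch(command: str) -> str:
    if command == "init":
        return "initialised"
    elif command == "run":
        return "running"
    elif command == "stop":
        return "stopped"
    elif command == "status":
        return "idle"
    return "unknown command"
```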
Currently, the code looks like this:
```
def _search(self, ...):
    # Initialise information needed for the experiment
    process 1
    # Print debug information to log
    process 2
    # Handle time resource allocation
    process 3
    # Make sure that at least 2 models are created for the ensemble process
    process 4
    # ============> Run dummy predictions
    process 5
    # ============> Run traditional ml
    process 6
    # ============> Starting ensemble
    process 7
    # ==> Run SMAC
    process 8
    post process
    return results
```
However, we could split it up like this (I do not think we need to split out everything):
```
def _init_experiment_info(self, ...):
    process 1
    return results

def _run_dummy_predictions(self, ...):
    process 5
    return results

def _run_traditional_ml(self, ...):
    process 6
    return results

def _start_ensemble(self, ...):
    process 7
    return results

def _run_smac(self, ...):
    process 8
    return results

def _post_processing(self, ...):
    post processing
    return results

def _search(self, ...):
    self._init_experiment_info(...)
    # Print debug information to log
    process 2
    # Handle time resource allocation
    process 3
    # Make sure that at least 2 models are created for the ensemble process
    process 4
    self._run_dummy_predictions(...)
    self._run_traditional_ml(...)
    self._start_ensemble(...)
    self._run_smac(...)
    self._post_processing(...)
```
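For illustration, one of the extracted helpers and the slimmed-down `_search` could look roughly like the sketch below. This is only a hypothetical skeleton: the `self._logger` attribute, the placeholder body, and the parameter names are assumptions for the example, not the actual Auto-PyTorch implementation.

```python
import logging


class BaseTaskSketch:
    """Hypothetical stand-in for `BaseTask`, only to show the extraction pattern."""

    def __init__(self) -> None:
        self._logger = logging.getLogger(__name__)  # assumed logging attribute

    def _run_dummy_predictions(self, num_run: int) -> int:
        """Run the dummy baseline so the ensemble always has at least one model.

        The real body would simply be the code that currently sits under the
        ``# ============> Run dummy predictions`` comment in ``_search``.
        """
        self._logger.info("Starting to create dummy predictions.")
        num_run += 1  # placeholder for the moved block
        return num_run

    def _search_sketch(self, num_run: int = 0) -> None:
        """`_search` becomes a short, readable sequence of named steps."""
        num_run = self._run_dummy_predictions(num_run)
        # ... `_run_traditional_ml`, `_start_ensemble`, `_run_smac`, etc. follow
```

Keeping each helper's inputs and outputs explicit (here `num_run`) also makes the data flow between the former comment blocks visible at the call site.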
Suggestions
- Make each function shorter than 40 lines, excluding the docstring
- Prohibit an indentation depth of more than 3 levels, as illustrated in the diagram and the sketch below
```
^
|  40 lines are visible on one screen
|
class foo():
    def func(self):
        indent level 0 (normal)
            indent level 1 (normal)
                indent level 2 (can happen)
                    indent level 3 (a lot -> think about splitting the function)
                        indent level 4 (too much -> should be prohibited)
|
|
v
```
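As a concrete, hypothetical example of the indentation rule (unrelated to the Auto-PyTorch code base), deeply nested blocks can usually be flattened by pulling the innermost work into a helper and replacing one nesting level with a guard clause:

```python
# Before: the innermost statement sits at indent level 4.
def process_all(batches):
    results = []                                 # indent level 0
    for batch in batches:
        if batch:                                # indent level 1
            for item in batch:                   # indent level 2
                if item is not None:             # indent level 3
                    results.append(item * 2)     # indent level 4 -> should be prohibited
    return results


# After: the inner loop lives in its own helper and a guard clause (`continue`)
# removes one nesting level, so nothing goes deeper than indent level 2.
def _double_valid_items(batch):
    return [item * 2 for item in batch if item is not None]


def process_all_flat(batches):
    results = []                                 # indent level 0
    for batch in batches:
        if not batch:                            # indent level 1
            continue                             # indent level 2
        results.extend(_double_valid_items(batch))  # indent level 1
    return results
```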
Would you like to resolve this issue, @nabenabe0928?