
Possible memory leak when running in parallel


When running in parallel, RAM usage during the first iteration of the machine learning process was stable at around 9 GB. Once the second iteration started, RAM usage kept growing until my system ran out of memory. This does not happen when n_workers=1 in the JSON files.
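
For reference, here is a minimal sketch of how the growth can be watched from outside the run (it uses psutil, which is not part of neuralxc, and is purely a hypothetical diagnostic; pass the PID of the fit process on the command line):

import sys
import time
import psutil

# Print the resident memory of the given PID plus all of its child
# processes (the parallel workers) every 30 seconds, so per-iteration
# growth is visible in the terminal.
proc = psutil.Process(int(sys.argv[1]))
while True:
    rss = proc.memory_info().rss
    rss += sum(c.memory_info().rss for c in proc.children(recursive=True))
    print(f"RSS: {rss / 1024**3:.2f} GB")
    time.sleep(30)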

Here are my JSON files.

hyperparameters.json

{
    "hyperparameters": {
        "var_selector__threshold": 1e-10,
        "scaler__threshold": null,
        "estimator__n_nodes": 8, 
        "estimator__n_layers": 3,
        "estimator__b": 1e-2,
        "estimator__alpha": 0.001,
        "estimator__max_steps": 2001,
        "estimator__valid_size": 0,
        "estimator__batch_size": 0,
        "estimator__activation": "GELU"
    },
    "cv": 3,
    "n_workers": 2,
    "threads_per_worker": 1,
    "n_jobs": 1
}
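
As far as I understand, n_workers and threads_per_worker are handed to Dask for the parallel part of the fit; a rough sketch of the equivalent cluster setup (assuming dask.distributed is the backend, which I have not verified in the neuralxc source) would be:

from dask.distributed import Client, LocalCluster

# Two single-threaded worker processes, mirroring n_workers=2 and
# threads_per_worker=1 in the JSON above.
cluster = LocalCluster(n_workers=2, threads_per_worker=1)
client = Client(cluster)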

basis_sgdml_asp.json

{
    "preprocessor": {
	"basis": "ccpvdz-jkfit",
	"extension": "chkpt",
    "application": "pyscf",
    "spec_agnostic": false,
    "projector_type":"pyscf",
    "symmetrizer_type":"trace"
    },
    "engine_kwargs": {
        "xc": "PBE",
        "basis": "ccpvdz"
    },
    "n_workers": 2
}
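
As noted above, the leak does not show up when the worker count is 1, i.e. setting

    "n_workers": 1

in both files works around the problem, at the cost of running serially.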

alecpwills, Apr 13 '21 19:04