deep-symbolic-optimization
How can parallel computing improve speed in a single-task scenario?
When I set `n_cores_batch` to a number greater than 1, the program throws an error. I have a task with a large amount of data, and I want to speed up the training process, but this simple parameter change doesn't seem to achieve that. Do you have any suggestions? Thank you; I'm looking forward to your response.
The JSON config:

```json
{
    "task" : {
        "task_type" : "regression",
        "dataset" : "./dso/task/regression/data/Constant-1.csv",
        "function_set" : ["add", "sub", "mul", "div", "sin", "cos", "exp", "log", "poly"]
    },
    "training" : {
        "n_cores_batch" : 2
    }
}
```
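
For context on why this can help a single task: `n_cores_batch` is meant to parallelize the evaluation of each batch of candidate expressions across worker processes, so one task still speeds up when per-candidate evaluation on a large dataset is the bottleneck. A toy sketch of that general pattern (not DSO's actual code; `reward` here is a hypothetical stand-in for evaluating one candidate):

```python
import math
from multiprocessing import Pool

def reward(candidate):
    # Hypothetical stand-in for the expensive per-candidate evaluation
    # (in DSO this would be scoring an expression against the dataset).
    return -abs(math.sin(candidate) - 0.5)

if __name__ == "__main__":
    batch = list(range(1000))          # a "batch" of candidate expressions
    with Pool(processes=2) as pool:    # two workers, like n_cores_batch: 2
        rewards = pool.map(reward, batch)
    print(max(rewards))
```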
The error:

```
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:

    if __name__ == '__main__':
        freeze_support()
        ...

The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
```
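
This error is Python's multiprocessing bootstrapping check rather than anything DSO-specific: under the spawn start method (the default on Windows and, since Python 3.8, on macOS), each worker process re-imports the main module, so a script that starts workers at import time recurses instead of launching cleanly. The fix is the main guard the message describes, applied to whatever script launches training; see the sketch after the reply below.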
Sorry for the delayed reply! `n_cores_batch` only works if you are otherwise running DSO in a single process; if you are already parallelizing DSO some other way, it won't work. Are you running with the CLI or the Python interface?
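
If it turns out to be the Python interface, the usual fix for the bootstrapping error above is to put the training call behind a main guard in your own script. A minimal sketch, assuming the `DeepSymbolicOptimizer` interface shown in the project README (adjust the config path to your own file):

```python
from dso import DeepSymbolicOptimizer

def main():
    # Config with "n_cores_batch" : 2, as in the question.
    model = DeepSymbolicOptimizer("path/to/config.json")
    model.train()

if __name__ == "__main__":
    # Required when n_cores_batch > 1 under the spawn start method:
    # worker processes re-import this module, and the guard keeps them
    # from re-launching training themselves.
    main()
```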