pymoo
Multiprocessing process-based parallelization fails on 2nd generation of NSGA-II -> pymoo 0.6.0
I am trying to execute the following multi-objective optimization with an NSGA-II algorithm, parallelized via the multiprocessing pool's starmap. This problem previously worked with the 0.5.0 implementation of parallelization.
import multiprocessing

from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.mixed import MixedVariableSampling, MixedVariableMating, MixedVariableDuplicateElimination
from pymoo.core.problem import StarmapParallelization
from pymoo.optimize import minimize

# on Windows this needs to run under an `if __name__ == "__main__":` guard
pool = multiprocessing.Pool(n_processes)
runner = StarmapParallelization(pool.starmap)

problem = MyProblem(  # my ElementwiseProblem subclass
    elementwise_runner=runner,
)

algorithm = NSGA2(
    pop_size=10,
    n_offsprings=2,
    sampling=MixedVariableSampling(),
    mating=MixedVariableMating(
        eliminate_duplicates=MixedVariableDuplicateElimination()
    ),
    eliminate_duplicates=MixedVariableDuplicateElimination(),
)

results = minimize(
    problem,
    algorithm,
    termination=("n_gen", 10),
    seed=1,
    save_history=True,
    verbose=True,
)
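A minimal placeholder for MyProblem (the real problem is not shown here; the class name, variables, and objectives below are only illustrative of a mixed-variable setup in pymoo 0.6.0):

from pymoo.core.problem import ElementwiseProblem
from pymoo.core.variable import Real, Integer


class MyProblem(ElementwiseProblem):

    def __init__(self, **kwargs):
        # placeholder mixed variables; kwargs forwards elementwise_runner to the base class
        variables = {
            "x": Real(bounds=(0.0, 10.0)),
            "n": Integer(bounds=(1, 5)),
        }
        super().__init__(vars=variables, n_obj=2, **kwargs)

    def _evaluate(self, x, out, *args, **kwargs):
        # for mixed-variable problems, x is a dict keyed by variable name
        out["F"] = [x["x"] ** 2 + x["n"], (x["x"] - 2.0) ** 2]

After minimize returns, the pool is released with pool.close().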
I receive the following error:
Traceback (most recent call last):
File "c:/Users/first.last/Documents/mypkg/src/my_module/__main__.py", line 128, in optimizer
results = minimize(
File "C:\Users\first.last\Documents\mypkg\.venv\lib\site-packages\pymoo\optimize.py", line 67, in minimize
res = algorithm.run()
File "C:\Users\first.last\Documents\mypkg Gears\.venv\lib\site-packages\pymoo\core\algorithm.py", line 141, in run
self.next()
File "C:\Users\first.last\Documents\mypkg\.venv\lib\site-packages\pymoo\core\algorithm.py", line 161, in next
self.evaluator.eval(self.problem, infills, algorithm=self)
File "C:\Users\first.last\Documents\mypkg\.venv\lib\site-packages\pymoo\core\evaluator.py", line 84, in eval
self._eval(problem, pop[I], evaluate_values_of, **kwargs)
File "C:\Users\first.last\Documents\mypkg\.venv\lib\site-packages\pymoo\core\evaluator.py", line 105, in _eval
out = problem.evaluate(
File "C:\Users\first.last\Documents\mypkg\.venv\lib\site-packages\pymoo\core\problem.py", line 190, in evaluate
_out = self.do(X, return_values_of, *args, **kwargs)
File "C:\Users\first.last\Documents\mypkg\.venv\lib\site-packages\pymoo\core\problem.py", line 230, in do
self._evaluate_elementwise(X, out, *args, **kwargs)
File "C:\Users\first.last\Documents\mypkg\.venv\lib\site-packages\pymoo\core\problem.py", line 248, in _evaluate_elementwise
elems = self.elementwise_runner(f, X)
File "C:\Users\first.last\Documents\mypkg\.venv\lib\site-packages\pymoo\core\problem.py", line 34, in __call__
return list(self.starmap(f, [[x] for x in X]))
File "C:\Users\first.last\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 372, in starmap
return self._map_async(func, iterable, starmapstar, chunksize).get()
File "C:\Users\first.last\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 771, in get
raise self._value
File "C:\Users\first.last\AppData\Local\Programs\Python\Python38\lib\multiprocessing\pool.py", line 537, in _handle_tasks
put(task)
File "C:\Users\first.last\AppData\Local\Programs\Python\Python38\lib\multiprocessing\connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "C:\Users\first.last\AppData\Local\Programs\Python\Python38\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "C:\Users\first.last\Documents\mypkg\.venv\lib\site-packages\pymoo\core\problem.py", line 38, in __getstate__
state.pop("starmap")
KeyError: 'starmap'
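The KeyError appears to come from pickling the runner a second time: the first pickling (presumably the deep copy made for save_history after the first generation) goes through __getstate__ and drops the starmap attribute, so the restored copy no longer has it, and the next pickling attempt hits the unguarded state.pop("starmap"). A minimal sketch of that mechanism, assuming the 0.6.0 class shown below:

import pickle

from pymoo.core.problem import StarmapParallelization

runner = StarmapParallelization(map)           # any picklable callable stands in for pool.starmap
restored = pickle.loads(pickle.dumps(runner))  # first round trip works; "starmap" is popped from the copied state
pickle.dumps(restored)                         # second pickling raises KeyError: 'starmap' in pymoo 0.6.0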
The error can be cleared by modifying this code in the pymoo/core/problem.py file
from:
class StarmapParallelization:

    def __init__(self, starmap) -> None:
        super().__init__()
        self.starmap = starmap

    def __call__(self, f, X):
        return list(self.starmap(f, [[x] for x in X]))

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("starmap")
        return state
to:
class StarmapParallelization:

    def __init__(self, starmap) -> None:
        super().__init__()
        self.starmap = starmap

    def __call__(self, f, X):
        return list(self.starmap(f, [[x] for x in X]))

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("starmap", None)
        return state
This provides a default value for the state.pop() call in the __getstate__ method, so the pop no longer raises a KeyError when the starmap attribute has already been removed.
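With the default in place, the same round trip from the sketch above no longer raises (assuming the patched class is the one installed):

import pickle

from pymoo.core.problem import StarmapParallelization

runner = StarmapParallelization(map)
restored = pickle.loads(pickle.dumps(runner))
pickle.dumps(restored)  # no KeyError once state.pop("starmap", None) is used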
I have confirmed that this modification still produces multiple processes in each generation, as intended.
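As a generic sanity check (independent of pymoo), a standalone snippet like the following confirms that a multiprocessing pool really spreads starmap calls over several worker processes:

import multiprocessing
import os


def which_pid(i):
    # return the PID of the worker that handled this element
    return os.getpid()


if __name__ == "__main__":
    with multiprocessing.Pool(4) as pool:
        pids = pool.starmap(which_pid, [[i] for i in range(20)])
    print(sorted(set(pids)))  # more than one PID means work was distributed across processes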
I have run into the same error. Is this going to be fixed in a future version?
It will be fixed in the next version, 0.6.0.1. Sorry for the delay.
I have released the new version. Please let me know if it is fixed now.
I can confirm that version 0.6.0.1 solves the issue.
Great! I am glad the parallelization speeds things up!