ipyparallel and pymoo don't work together
Hello all, I am trying to use ipyparallel with a Genetic Algorithm module (pymoo), and for that I need to push the class 'MyProblem' to the engines (I don't really know why the engines are not aware of it; I have the same problem with functions defined in the main program). When I push it with 'dview["MyProblem"] = MyProblem', I get a recursion error.
Below is a simplified version of the code that reproduces the error. Do you have any idea?
import numpy as np
import ipyparallel as ipp

from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize


# Very simplified problem
class MyProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=1,
                         n_obj=1,
                         n_constr=0,
                         xl=np.array([0]),
                         xu=np.array([1]))

    def _evaluate(self, x, out, *args, **kwargs):
        # Fitness functions (values to minimize)
        f1 = -x[0]
        out["F"] = [f1]


# Start the clusters
nb_core = 16
rc = ipp.Cluster(n=nb_core).start_and_connect_sync()
rc.wait_for_engines(n=nb_core)
dview = rc[:]

# Send the class definition to all engines for further use
dview["MyProblem"] = MyProblem
Thanks in advance for your help. Patrick.
This seems related to child classes that call super() and appears to be a bug in the pickling/canning process.
import ipyparallel as ipp


class A:
    def __init__(self):
        self.x = 1


class B(A):
    def __init__(self):
        pass


class C(A):
    def __init__(self):
        super().__init__()


# Start the clusters
rc = ipp.Cluster(n=2).start_and_connect_sync()
rc.wait_for_engines(n=2)
dview = rc[:]

dview["A"] = A
dview["B"] = B
dview["C"] = C  # <-- RecursionError: maximum recursion depth exceeded
You might try dview.use_cloudpickle() if you have more complex local objects to push. Otherwise, it's often more robust to just define classes remotely instead of pushing them if you can, e.g. via view.run. A sketch of both workarounds follows.
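For what it's worth, here is a minimal sketch of both workarounds applied to the A/C classes from the reproduction above. It assumes cloudpickle is installed in both the client and engine environments, and the file name "classes.py" mentioned in the comment is purely illustrative.

import ipyparallel as ipp


class A:
    def __init__(self):
        self.x = 1


class C(A):
    def __init__(self):
        super().__init__()


rc = ipp.Cluster(n=2).start_and_connect_sync()
rc.wait_for_engines(n=2)
dview = rc[:]

# Workaround 1: switch the view's serializer to cloudpickle before pushing.
# (cloudpickle must be importable by the client and by every engine.)
dview.use_cloudpickle()
dview["C"] = C  # pushing the super()-calling subclass no longer recurses

# Workaround 2: define the class on the engines instead of pushing it.
# The source is sent as a string here; dview.run("classes.py") would do
# the same with the definitions kept in a separate file.
dview.execute("""
class A:
    def __init__(self):
        self.x = 1

class C(A):
    def __init__(self):
        super().__init__()
""", block=True)

With the second approach the class is created in each engine's namespace directly, so nothing involving super() ever has to pass through the pickling/canning machinery.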