dispy
How to update globals on all nodes?
I know that dispy can update globals by using the multiprocessing module on one node. But I want to share globals that can be updated by any node in the cluster. Any ideas? Thanks a lot.
I don't quite understand what the question is, so this may not work: you could run another function (e.g., by creating another cluster, as done in the MapReduce example) and have its jobs update the nodes (use submit_node on each node to update them).
@pgiri Thanks for your reply. I have read the MapReduce example in the docs, but I don't think it is what I want. For example, if I have a variable (say, a counter) in the client, could any node read/write it simultaneously (with a lock)?
```python
def compute():
    # How to read and update the COUNTER here (maybe need a lock)?
    COUNTER += 1
    return 0

if __name__ == '__main__':
    import dispy
    # Here is a COUNTER I want to share with all nodes.
    COUNTER = 100
    cluster = dispy.JobCluster(compute)
    jobs = []
    for i in range(20):
        job = cluster.submit()
        jobs.append(job)
    for job in jobs:
        job()  # waits for job to finish and returns results
        stdout = job.stdout
        print(stdout)
    cluster.print_status()  # shows which nodes executed how many jobs etc.
```
See node_shvars.py in examples.
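node_shvars.py shares variables only among jobs running on the same node, and the mechanism behind it is multiprocessing shared memory. A minimal standalone sketch of that idea, without dispy itself (the names `job` and `run` and the counts are illustrative, not from the dispy example):

```python
# Sketch of the per-node sharing that node_shvars.py demonstrates: a
# counter in shared memory that several worker processes on ONE machine
# update under a lock. Plain multiprocessing, no dispy involved.
import multiprocessing

def job(shvar):
    # Each "job" increments the node-local shared counter atomically.
    with shvar.get_lock():
        shvar.value += 1

def run(njobs=20, start=100):
    shvar = multiprocessing.Value('i', start)  # like COUNTER = 100
    procs = [multiprocessing.Process(target=job, args=(shvar,))
             for _ in range(njobs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return shvar.value

if __name__ == '__main__':
    print(run())  # 20 jobs starting from 100 -> 120
```

This only works within one machine: processes on other nodes cannot see this shared memory, which is exactly the limitation raised below.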
@pgiri Thanks for your reply. I think the example in node_shvars.py can only share variables on one node (not with jobs on OTHER nodes). So how do I share variables across all nodes in the cluster? Thanks again.
I am not sure I understand your question, but in case you are asking about replacing in-memory data, see the latest release and the example replace_inmem.py.