ddR
Worker-to-worker communication
I read the article "dmapply: A functional primitive to express distributed
machine learning algorithms in R" and found the following diagram:

It was a surprise to see peer-to-peer communication there. I started investigating the ddR code and found the following: ddR.R#L278-L285. So essentially all communication goes through the master. Am I missing something, or is the diagram just misleading?
I'm not that experienced with snow clusters. Could peer-to-peer communication potentially be implemented on top of the parallel package framework?
To be concrete: would it be possible to add something like an MPI allreduce operation? If someone more experienced can share ideas, I can try to implement it.
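For reference, here is a minimal sketch of what I mean, using only the base parallel package (not the ddR API): an "allreduce" emulated through the master, which gathers each worker's local value, reduces, and broadcasts the result back. The variable names (`local_x`, `allreduce_sum`) are mine, purely for illustration. As far as I can tell this master-mediated pattern is the best parallel/snow can do without something like Rmpi underneath.

```r
library(parallel)

cl <- makeCluster(2)

# Give each worker a local value (here just its index squared).
clusterApply(cl, seq_along(cl), function(i) {
  assign("local_x", i^2, envir = .GlobalEnv)
})

# Master-mediated "allreduce": gather -> reduce -> broadcast.
# All traffic passes through the master, unlike MPI_Allreduce,
# where workers can exchange partial results directly.
allreduce_sum <- function(cl) {
  parts <- clusterEvalQ(cl, local_x)       # gather: workers -> master
  total <- Reduce(`+`, parts)              # reduce on the master
  clusterExport(cl, "total",
                envir = environment())     # broadcast: master -> workers
  total
}

allreduce_sum(cl)
stopCluster(cl)
```

With a real allreduce, the reduction could use a tree or ring among the workers, which is exactly what the current master-only socket layout seems to rule out.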