Results: 401 comments of roryqi

> If you have time, could u help review this proposal? @jerqi

I have left some comments.

> > If you have time, could u help review this proposal? @jerqi
>
> I have left some comments.

Updated the comments.

LGTM, I have no other suggestions. cc @colinmjj @duanmeng

> Gentle ping @jerqi

Should you ping @colinmjj @duanmeng instead?

When a shuffle has 0 reduce partitions, is it a meaningful shuffle? What is the situation here? Is it a map-only application?
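For context, a shuffle ends up with 0 reduce partitions when the partitioner behind it reports 0 partitions, for example when the default parallelism resolves to 0. A minimal sketch of how to reproduce that condition, assuming a local Spark setup (the config value below is set only to provoke the case being asked about; exact behavior may vary by Spark version):

```scala
import org.apache.spark.sql.SparkSession

object ZeroReducePartitionShuffle {
  def main(args: Array[String]): Unit = {
    // Hypothetical repro: force the default parallelism to 0 so that operators
    // which fall back to it end up building a partitioner with 0 partitions.
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("zero-reduce-partition-shuffle")
      .config("spark.default.parallelism", "0")
      .getOrCreate()
    val sc = spark.sparkContext

    // groupByKey() without an explicit partition count falls back to the
    // default parallelism, so the resulting shuffle has 0 reduce partitions.
    val shuffled = sc.parallelize(Seq(("a", 1), ("b", 2)), 2).groupByKey()
    println(s"reduce partitions = ${shuffled.getNumPartitions}")

    spark.stop()
  }
}
```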

Could you click on your `job 4` and give me the details of `job 4`?

Is your config option `spark.default.parallelism` set to zero?
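If it helps to confirm, here is a small, hypothetical check that prints both the configured value (if any) and the parallelism Spark actually uses; it assumes the standard `spark` session available in `spark-shell`:

```scala
// In spark-shell (the `spark` SparkSession is already provided):
val configured = spark.conf.getOption("spark.default.parallelism")
println(s"spark.default.parallelism (configured) = ${configured.getOrElse("<not set>")}")
println(s"sc.defaultParallelism (effective)      = ${spark.sparkContext.defaultParallelism}")
```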

Could we ask the Hudi community why this partitioner has no partitions? It's tricky that a partitioner has no partitions.
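To make the question concrete, this is what "a partitioner with no partitions" means at the Spark API level. A purely illustrative sketch (`ZeroPartitioner` is not a real class in Hudi, Spark, or this project):

```scala
import org.apache.spark.Partitioner

// Hypothetical illustration only: a Partitioner that reports 0 partitions.
// Any shuffle built on top of it has 0 reduce partitions, which is the
// case being asked about.
class ZeroPartitioner extends Partitioner {
  override def numPartitions: Int = 0

  // With no partitions there is nowhere to route a key.
  override def getPartition(key: Any): Int =
    throw new IllegalStateException("partitioner has no partitions")
}
```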

> I'm confused why not reuse the exclude-node-file? And the decommission operation maybe controlled by coordinator will be better?
>
> Do u have any ideas on it? @jerqi

Agree...

> 1. Exclude is not equals to decommission, we can refer to **HDFS**
> 2. Coordinator should be stateless, if not, we need ensure multi instance data synchronization
> 3. I don't think store **exclude-node-file**...
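One way to read point 1 is that exclusion and decommission are separate server states rather than one flag, much like in HDFS. A purely illustrative sketch of that distinction, assuming nothing about the real implementation (none of these names come from the actual codebase):

```scala
// Hypothetical state model for illustration only -- not the project's real design.
sealed trait ServerState
object ServerState {
  // Normal participation in partition assignments.
  case object Active extends ServerState
  // Excluded: operator marked the node as unusable; new assignments skip it immediately.
  case object Excluded extends ServerState
  // Decommissioning: node is being drained; existing shuffle data is still served
  // until the applications using it finish, then the node can be removed.
  case object Decommissioning extends ServerState
}
```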