John Huang

Results: 80 comments of John Huang

> Can we remove the option 'local' from deploy-mode?
>
> We can only set cluster/client at deploy mode; the local should belong to the master config.

Thanks @ruanwenjun. I've...
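A minimal sketch of the distinction discussed above, assuming a plain `spark-submit` invocation (the class name and jar are placeholders, not from the PR):

```shell
# 'local' is a master setting, not a deploy mode.
# --deploy-mode only accepts 'client' or 'cluster'.

# Local execution is expressed via --master:
spark-submit --master local[2] --class org.example.App app.jar

# Deploy mode chooses where the driver runs when a cluster manager is used:
spark-submit --master yarn --deploy-mode cluster --class org.example.App app.jar
```

This is why the PR discussion moves 'local' out of the deploy-mode option and into the master configuration.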

> @pegasas please run `mvn spotless:apply` to fix the code style

Thanks, done.

![image](https://github.com/apache/dolphinscheduler/assets/13224827/3ecbc5f3-4da9-43cb-b3bf-d912e8385c52)

> > Can we remove the option 'local' from deploy-mode?
> >
> > We can only set cluster/client at deploy mode; the local should belong to the master config.
>
> Maybe...

> Hi, @pegasas, please fix CI

It seems there's been a lot of refactoring since the last merge. I will fix it and re-commit.

> > Hi, @pegasas, please fix CI
>
> It seems there's been a lot of refactoring since the last merge. I will fix it and re-commit.

Fixed.

> @pegasas please check the failed CI
>
> ![image](https://github.com/apache/dolphinscheduler/assets/13224827/2d2d05e8-3480-432e-942c-f32a6ad6e2a8)

Done. Spotless ran successfully on my local machine.

https://spark.apache.org/docs/3.5.0/running-on-kubernetes.html#customized-kubernetes-schedulers-for-spark-on-kubernetes

![image](https://github.com/apache/dolphinscheduler/assets/13224827/6f720ddd-df71-4b0d-8309-a851e662ae8f)

Maybe adding `spark.kubernetes.namespace` to your script will work.
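A rough sketch of how that property could be passed, assuming a standard Spark-on-Kubernetes submission (the API server URL, namespace, image, class, and jar path are all placeholders, not taken from the issue):

```shell
# Submit a Spark job into a specific Kubernetes namespace.
# spark.kubernetes.namespace tells the driver/executor pods where to be created.
spark-submit \
  --master k8s://https://example-k8s-apiserver:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.namespace=spark-jobs \
  --conf spark.kubernetes.container.image=example/spark:3.5.0 \
  --class org.example.App \
  local:///opt/spark/app.jar
```

The same property can also live in `spark-defaults.conf` instead of the command line.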

I would like to have a try on this issue.

> **Current workaround** for me is to pass `--master ... --deploy-mode cluster` in the extra options. Since _spark-submit_ will use the last values, this will send the task to the local cluster....
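A sketch of the workaround quoted above, assuming the behavior the commenter describes (spark-submit keeping the last occurrence of a repeated flag); the hosts, class, and jar are placeholders:

```shell
# Suppose the scheduler injects its own --master/--deploy-mode first.
# Repeating the flags in the task's "extra options" places them later on the
# command line, and (per the quoted comment) spark-submit uses the last values,
# so the later pair wins:
spark-submit \
  --master yarn --deploy-mode client \
  --master spark://my-cluster:7077 --deploy-mode cluster \
  --class org.example.App app.jar
```

This relies on the duplicated-flag behavior described in the comment; a proper fix would let the task definition set these values directly instead of overriding them.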