Dhruv Batheja
Hi @achille-roussel, I would like to pick this up :) I'll start digging this weekend. Please point me towards any useful starting points/docs apart from #216. Cheers.
I can dig into it 🎈
Hi @HariSekhon, could you please show some love 😢 Here is a detailed error log if that helps ([link](https://travis-ci.org/github/mysql-time-machine/replicator/builds/667410013#L3039) to the failing travis build):
```
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after...
```
Same problem. Please fix this. Can't move to Django2 without DRF Docs ❤️
Any ETA on merging and releasing this?
Hey @GilShmaya 1. If you deploy your FlinkCluster as a _Job Cluster / Application Cluster_, cancelling the job from the Flink console will cancel the cluster too. (That is the intended behaviour.)...
Hi @guruguha Are you sure you set all the [high availability related flink properties](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/ha/kubernetes_ha/)? Also, the serviceAccount with which the `FlinkCluster` runs should have permissions to create and edit configmaps...
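For reference, a minimal RBAC sketch granting such permissions might look like the following (all names here are illustrative, not taken from your setup):

```yaml
# Illustrative Role + RoleBinding letting the FlinkCluster's service
# account manage the ConfigMaps that Kubernetes HA uses for leader
# election and checkpoint metadata. Names and namespace are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: flink-ha-configmap-access
  namespace: flink
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: flink-ha-configmap-access
  namespace: flink
subjects:
  - kind: ServiceAccount
    name: flink-service-account
    namespace: flink
roleRef:
  kind: Role
  name: flink-ha-configmap-access
  apiGroup: rbac.authorization.k8s.io
```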
Hey @guruguha The configmap is what helps the job recover, so it shouldn't be deleted. Can you try to delete the job manager pod for a...
Hey @guruguha These are the required HA flinkProperties for 1.16 (depends on the Flink version): [link](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/ha/kubernetes_ha/#configuration)
```
kubernetes.cluster-id:
high-availability: kubernetes
high-availability.storageDir: hdfs:///flink/recovery
```
Also, there should be no job-submitter pod when you use Application mode. I noticed you mentioned you need a jobSubmitter; can you share your FlinkCluster yaml?