KubeFATE
no such file or directory: '/opt/app-root/pyvenv.cfg'
When deploying FATE on Spark with KubeFATE, setting `dependent_distribution: true` in service_conf.yaml causes the error: no such file or directory: '/opt/app-root/pyvenv.cfg'
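For context, the setting in question lives in FATE Flow's service_conf.yaml. A minimal sketch of the relevant fragment (the exact nesting and neighboring keys vary by FATE version, so treat this as illustrative only):

```yaml
# service_conf.yaml (fragment, illustrative)
# When true, FATE Flow packages its Python dependencies and ships
# them to the Spark workers instead of assuming the workers' images
# already have them pre-installed.
dependent_distribution: true
```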
https://github.com/FederatedAI/FATE/issues/4255#issue-1347541803
We don't have the bandwidth to do this in version 1.9.0; we can support it in the next release, 1.10.0.
We can prioritize this task so that you can use the feature early, on a feature branch.
Hi @lvying0019, could you elaborate on why you want dependent_distribution to be true?
Specifically, which dependencies do you want to distribute to the Spark workers?
In our Docker build, we have a base image in which we have already run pip install for almost all of the FATE dependencies: https://github.com/FederatedAI/FATE-Builder/blob/main/docker-build/base/basic/Dockerfile
The fateflow image is built from that base image, with pyspark installed on top: https://github.com/FederatedAI/FATE-Builder/blob/main/docker-build/modules/fateflow-spark/Dockerfile
The Spark images (master and worker) are also built from the base image, so they inherit all of the Python dependencies: https://github.com/FederatedAI/FATE-Builder/blob/main/docker-build/modules/spark-base/Dockerfile
My current understanding is that dependent_distribution does not need to be set to true when you are using KubeFATE to deploy the FATE cluster, so we would like to understand why you want to set it to true. Thanks.
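Since the base image already pip-installs the FATE dependencies, one way to confirm that a Spark worker image inherits them is to run a small import check inside the worker container. A minimal sketch (`find_missing` is a hypothetical helper, and the module names here are illustrative stand-ins, not FATE's actual dependency list):

```python
import importlib.util

def find_missing(modules):
    """Return the names from `modules` that cannot be imported in the
    current interpreter, i.e. are not installed in this image."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Illustrative check: the stdlib modules are always present, while the
# fake name is not, so only the fake name is reported as missing.
print(find_missing(["json", "ssl", "not_a_real_dependency"]))
# → ['not_a_real_dependency']
```

Running the same check inside the spark-worker container with the real dependency list would show whether anything actually needs to be distributed at job-submission time.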
Hello, we set dependent_distribution = true mainly because in real production the Spark cluster we connect to already exists; it is not a cluster we create ourselves. Moreover, production Spark clusters generally run in YARN cluster mode, so the flag has to be set to true. I see that FATE 1.10 is preparing to support this use case, and I believe KubeFATE will support it in a later release as well.
This issue has been fixed by three submissions across three repos: https://github.com/FederatedAI/KubeFATE/pull/806 https://github.com/FederatedAI/FATE-Flow/pull/352 https://github.com/FederatedAI/FATE-Builder/pull/19
The fix will be released with v1.10.0 at the end of December 2022.