Manfei
# keras_dataset.py

## Client Command

```bash
${SPARK_HOME}/bin/spark-submit \
  --master ${RUNTIME_SPARK_MASTER} \
  --deploy-mode client \
  --conf spark.driver.host=172.16.0.200 \
  --conf spark.driver.port=54321 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
  --name analytics-zoo-autoestimator \
  --conf spark.kubernetes.container.image=${RUNTIME_K8S_SPARK_IMAGE} \
  --conf...
```
# train.py

train.py's name needs to be updated in the README.

## Client Command

```bash
${SPARK_HOME}/bin/spark-submit \
  --master ${RUNTIME_SPARK_MASTER} \
  --deploy-mode client \
  --conf spark.driver.host=172.16.0.200 \
  --conf spark.driver.port=54321 \
  --conf...
```
# gan_train_and_evaluate.py

## Client Command

```bash
${SPARK_HOME}/bin/spark-submit \
  --master ${RUNTIME_SPARK_MASTER} \
  --deploy-mode client \
  --conf spark.driver.host=172.16.0.200 \
  --conf spark.driver.port=54321 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=${RUNTIME_K8S_SERVICE_ACCOUNT} \
  --name analytics-zoo-autoestimator \
  --conf spark.kubernetes.container.image=${RUNTIME_K8S_SPARK_IMAGE} \
  --conf...
```
OK, I will fix it now.
The following error has been fixed in the new PR (https://github.com/intel-analytics/analytics-zoo/pull/4617) and the path has been updated; the other errors are still being worked on: "And when run ./ppml/scripts/generate-keys.sh, it get error: base64: ./keys/keystore.jks:...
This error has been fixed in the new PR (https://github.com/intel-analytics/analytics-zoo/pull/4627) by adding interaction and prompt information: "2.3.1. Run ./build-docker-image.sh, if no proxy setting needed, the script would be fail. Should...
A new fix for "keytool: command not found" has been added in PR #4627; please follow this doc to complete the Prerequisite section and create the "keys" and "password": https://github.com/ManfeiBai/analytics-zoo/blob/patch-12/docs/readthedocs/source/doc/PPML/Overview/ppml.md#21-prerequisite
> deploy-distributed-standalone-spark.sh

Could we use "sudo deploy-distributed-standalone-spark.sh" to run the script on the Azure VM, rather than "./deploy-distributed-standalone-spark.sh"?
Thanks, I'm working on it now.
Only the 0~20 example cases pass; I suspect the cond_fn part of the fori_loop logic needs to be changed.
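For context on that suspicion: a fori_loop is conventionally lowered to a while-style loop whose cond_fn compares a carried loop index against the upper bound, so a wrong comparison or off-by-one there would explain why only small ranges pass. Below is a minimal pure-Python sketch of that expected cond_fn logic (illustrative only; this is not the project's actual implementation, and the function names are placeholders):

```python
def fori_loop(lower, upper, body_fn, init_val):
    """Sketch of a fori_loop lowered to a while-style loop.

    The carry is (index, value); cond_fn must keep looping
    while index < upper, i.e. exactly (upper - lower) iterations.
    """
    def cond_fn(carry):
        i, _ = carry
        return i < upper  # a wrong bound/comparison here breaks larger ranges

    def body(carry):
        i, v = carry
        return i + 1, body_fn(i, v)

    carry = (lower, init_val)
    while cond_fn(carry):
        carry = body(carry)
    return carry[1]


# Sums the indices 0..9 into the carried value.
result = fori_loop(0, 10, lambda i, v: v + i, 0)
```

Checking the real cond_fn against this reference behavior (iteration count equals `upper - lower`, including the zero-iteration case) should quickly show whether the bound handling is the bug.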