Jiao Wang
@Cancerhzc This issue has been fixed recently. Please pip install the latest nightly build wheels into your conda environment and try again. You can refer to https://analytics-zoo.github.io/master/#PythonUserGuide/install/#install-the-latest-nightly-build-wheels-for-pip to download the nightly build...
On HDFS, there is no such issue. For log_dir, I think the BigDL RecordWriter only checks whether the file path has an "hdfs" prefix; if it does not, it uses a Java FileOutputStream, which cannot support...
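As a rough illustration, here is a minimal Python sketch of the prefix dispatch described above (BigDL's real RecordWriter is Scala, and the details may differ; the function and return strings are hypothetical):

```python
def open_summary_stream(log_dir: str) -> str:
    """Hypothetical sketch of the prefix check described above."""
    if log_dir.startswith("hdfs"):
        # Treated as a Hadoop path -> Hadoop's FSDataOutputStream in BigDL.
        return "hadoop FSDataOutputStream"
    # Everything else falls back to java.io.FileOutputStream,
    # which is where the limitation above shows up.
    return "java FileOutputStream"

print(open_summary_stream("hdfs://namenode:9000/logs"))  # hadoop FSDataOutputStream
print(open_summary_stream("/tmp/logs"))                  # java FileOutputStream
```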
The save model API uses hadoop fs to copy the file. But for TensorBoard logging, according to a comment in BigDL, FSDataOutputStream (Hadoop FSDataOutputStream) couldn't flush data to the local file system in time, so reading summaries will...
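A practical workaround implied by this is to point the summary log_dir at HDFS rather than a local path. A minimal sketch, assuming the BigDL 0.x Python API (the HDFS host/path and app name are placeholders, and the optimizer construction is elided):

```python
from bigdl.optim.optimizer import TrainSummary, ValidationSummary

# Pointing log_dir at HDFS makes BigDL write summaries through hadoop fs,
# avoiding the local-file flush issue described above.
train_summary = TrainSummary(log_dir="hdfs://namenode:9000/bigdl_logs",
                             app_name="my_app")
val_summary = ValidationSummary(log_dir="hdfs://namenode:9000/bigdl_logs",
                                app_name="my_app")

# optimizer is your existing BigDL Optimizer instance (construction elided):
# optimizer.set_train_summary(train_summary)
# optimizer.set_val_summary(val_summary)
```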
When run "bash work/start-scripts/start-spark-local-sql-sgx.sh", get this error: py4j.protocol.Py4JJavaError: An error occurred while calling o32.json. : org.apache.spark.sql.AnalysisException: Path does not exist: file:/examples/src/main/resources/people.json; at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:558) at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545) at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241) at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241) at...
When run "bash work/start-scripts/start-spark-local-sql-sgx.sh", also get this error: 21/09/01 22:24:33 INFO DAGScheduler: Job 13 failed: runJob at PythonRDD.scala:153, took 453.136422 s Traceback (most recent call last): File "/ppml/trusted-big-data-ml/work/spark-2.4.6/examples/src/main/python/sql/basic.py", line 212,...
When running ./deploy-distributed-standalone-spark.sh, it uses the root user. But on an Azure VM, the root user cannot actually be used. Can we provide a deploy script that uses a non-root sudo user?
distributed-check-status.sh also needs to support a non-root user.
When running the workload on a cluster, we need to replace `--master 'local[4]'` with lines like these:

```
--master 'spark://your_master_url' \
--conf spark.authenticate=true \
--conf spark.authenticate.secret=your_secret_key \
```

What should `your_secret_key`...
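As far as I know, `spark.authenticate.secret` is just a shared string that must match across the driver and executors; any sufficiently random value works. A minimal sketch of one way to generate it (the 32-byte length is my choice, not a Spark requirement):

```python
import secrets

# Generate a 64-hex-char random secret; pass the printed value via
# --conf spark.authenticate.secret=<value> on every spark-submit.
print(secrets.token_hex(32))
```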
1. The code is https://github.com/jenniew/friesian/blob/wnd_train_twitter/Training/WideDeep/twitter/wnd_train_tf2_generator_horovod.py
2. tensorflow 2.3.0, latest zoo, horovod 0.19.2, ray 1.2.0
3. driver_cores: 10, driver_memory: 30g, num_executor: 8, executor_cores: 10, executor_memory: 30g
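For reference, these resources would typically be mapped onto `init_orca_context` roughly as in the sketch below (a hedged sketch assuming the zoo.orca API; the cluster mode and Ray-on-Spark flag are assumptions, not taken from the linked script):

```python
from zoo.orca import init_orca_context

# Hedged sketch mapping the resources listed above onto init_orca_context.
sc = init_orca_context(
    cluster_mode="yarn-client",   # assumption: YARN client mode
    num_nodes=8,                  # num_executor
    cores=10,                     # executor_cores
    memory="30g",                 # executor_memory
    driver_cores=10,
    driver_memory="30g",
    init_ray_on_spark=True)       # Ray is used for the Horovod-on-Ray run
```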