Jason Dai
> The whole pmem part is commented out in the code: https://github.com/intel-analytics/analytics-zoo/tree/bigdl-2.0/scala/dllib/src/main/scala/com/intel/analytics/bigdl/dllib/feature/pmem
> We need to continue migrating the pmem Java code and supporting the related dependencies.

No need to migrate pmem at this moment.
> On HDFS, there is no such issue. For log_dir, I think that BigDL RecordWriter only checks for a file path prefix of "hdfs"; if the prefix is not "hdfs", it uses Java FileOutputStream, which cannot...
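For reference, a minimal sketch (Python, purely illustrative of the check described above; the real RecordWriter is Scala code) of how that prefix test behaves:

```python
# Illustrative sketch only (not the actual BigDL source): the writer treats a
# log_dir as HDFS-backed solely when the path string starts with "hdfs"; any
# other path falls through to a plain local file stream (Java FileOutputStream
# in BigDL), regardless of its actual scheme.
def uses_hdfs_writer(log_dir: str) -> bool:
    return log_dir.startswith("hdfs")

print(uses_hdfs_writer("hdfs://namenode:8020/bigdl/logs"))  # True  -> Hadoop FileSystem path
print(uses_hdfs_writer("/tmp/bigdl/logs"))                  # False -> local FileOutputStream
print(uses_hdfs_writer("s3://bucket/bigdl/logs"))           # False -> also treated as local
```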
@bendavidsteel See the latest information at https://github.com/intel-analytics/BigDL (in particular, https://github.com/intel-analytics/BigDL/blob/branch-2.0/README.md). At this stage, Analytics Zoo is still actively maintained (no changes for current users); in addition, we are working...
@EmiCareOfCell44 Are you loading a BigDL model? Can you share an example so we can reproduce the issue?
What if ipex is not used?
@xunaichao As mentioned in https://github.com/intel-analytics/analytics-zoo/blob/master/README.md, we have migrated the project to https://github.com/intel-analytics/bigdl; please try https://bigdl.readthedocs.io/en/latest/doc/Orca/QuickStart/orca-tf2keras-quickstart.html instead.
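For orientation, the quickstart flow looks roughly like the sketch below; module paths and argument names follow the BigDL 2.0 Orca docs as best recalled and may differ slightly, so please defer to the linked quickstart for the exact API.

```python
# Hedged sketch of the Orca TF2/Keras quickstart flow (approximate names;
# check the linked quickstart for the authoritative version).
import tensorflow as tf
from bigdl.orca import init_orca_context, stop_orca_context
from bigdl.orca.learn.tf2 import Estimator

init_orca_context(cluster_mode="local", cores=4, memory="4g")

def model_creator(config):
    # each worker builds and compiles its own tf.keras model
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def train_data_creator(config, batch_size):
    # each worker builds its own tf.data.Dataset shard (random data here)
    x = tf.random.uniform((1024, 10))
    y = tf.random.uniform((1024, 1))
    return tf.data.Dataset.from_tensor_slices((x, y)).batch(batch_size)

est = Estimator.from_keras(model_creator=model_creator)
est.fit(data=train_data_creator, epochs=2, batch_size=32)
stop_orca_context()
```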
> Hello,
>
> I am trying to make Analytics-zoo work with my cluster, and I specified all the parameters in the sparkmagic/config.json file, including extraClassPath, spark.jars, pyFiles, etc.
> ...
@yangw1234 please take a look; maybe it is due to an incompatible Ray version?
I would suggest using the new Orca API; see https://analytics-zoo.readthedocs.io/
We are working on explicit support for this scenario in https://github.com/intel-analytics/analytics-zoo/pull/4339; for now, you may do something like:

```python
init_orca_context(cluster_mode="spark-submit", ...)
```

See https://github.com/intel-analytics/analytics-zoo/blob/master/pyzoo/zoo/orca/common.py#L161
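For context, a fuller sketch of that interim workaround (assumed usage; the actual handling of `cluster_mode="spark-submit"` is in the common.py linked above): the script is launched with spark-submit, and `init_orca_context` attaches to the SparkContext that spark-submit already created instead of starting a new one.

```python
# train.py, launched via spark-submit; a sketch of the interim workaround.
# Assumption: with cluster_mode="spark-submit", init_orca_context reuses the
# SparkContext created by spark-submit rather than creating a new one.
from zoo.orca import init_orca_context, stop_orca_context

sc = init_orca_context(cluster_mode="spark-submit")

# ... run Orca / Analytics Zoo workloads on sc here ...

stop_orca_context()
```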