
Issue in h5py

Open fsahba opened this issue 7 years ago • 6 comments

Hello all, I am trying to run the following deep learning pipeline example, "Introducing Deep Learning Pipelines for Apache Spark": https://community.cloud.databricks.com/?o=4841533921079887#notebook/4444902769053336/command/4444902769053351

I have successfully installed all the libraries, but when I run the program in Databricks it says: ImportError: load_weights requires h5py.

I have attached a snapshot of the cell and its error.

I don't understand why it cannot use h5py when it is in the libraries. Did I do something wrong, or did I install an incorrect version of something? Your help is really appreciated. Fari

fsahba avatar Aug 09 '17 15:08 fsahba
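
This message comes from Keras itself: any HDF5 weight saving or loading goes through h5py, so the same error can be reproduced outside the pipeline. A minimal sketch (a toy model and a hypothetical file path, not the notebook's own code):

```python
# Any HDF5 weight I/O in Keras requires h5py, so this cell fails with
# "ImportError: `load_weights` requires h5py." if h5py is not importable
# in the Python environment that runs it.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(10, input_shape=(4,))])
model.save_weights("/tmp/toy_weights.h5")   # save_weights also needs h5py
model.load_weights("/tmp/toy_weights.h5")
```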

Did you attach the h5py library to the cluster? You can see what libraries are attached by going to the "Clusters" tab and clicking on your cluster, and then clicking on the "Libraries" tab.

sueann avatar Aug 09 '17 17:08 sueann
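
One way to confirm that h5py actually reached the cluster is to try the import on both the driver and an executor from a notebook cell. A rough sketch, assuming the `sc` SparkContext that Databricks notebooks provide:

```python
# Sanity check that h5py is importable where the code will actually run.
import h5py
print("driver h5py:", h5py.__version__)

def executor_h5py(_):
    import h5py            # imported on the worker, not the driver
    return h5py.__version__

print("executor h5py:", sc.parallelize([0], 1).map(executor_h5py).collect())
```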

@fsahba would you mind detaching the notebook and re-attaching it after the library is attached to the cluster? Thanks! In general, it is best if h5py is attached to the cluster before Keras. It is likely a Python issue, as we have seen this in a standalone IPython console with Keras.

phi-dbq avatar Aug 09 '17 17:08 phi-dbq

Thanks Sueann and phi-dbq. I followed your suggestions and that problem was fixed, but a new one came up right after. Although I attached py4j to the cluster's libraries and detached and re-attached the notebook, it now gives me the following error: java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;


I guess it is about the Spark dependency and Java, but whatever I tried, I could not fix it. Fari

fsahba avatar Aug 09 '17 20:08 fsahba

You shouldn't need to attach py4j for the example notebook to run. What is your cluster configuration? Do you know the Spark version or the Databricks Runtime version? Are you running it on Community Edition?

sueann avatar Aug 09 '17 20:08 sueann
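
For reference, the Spark version can be read directly from a notebook cell. A small sketch, assuming the `spark` session and `sc` context that Spark 2.x notebooks predefine (the Databricks Runtime version is also shown on the cluster's configuration page):

```python
# Print the Spark version the attached cluster is actually running.
print(spark.version)   # e.g. "2.1.0"
print(sc.version)      # same information via the SparkContext
```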

It is running Spark 2.1, and it is Community Edition 2.51.

fsahba avatar Aug 09 '17 20:08 fsahba

@fsahba from the last error you reported, it sounds like you are mixing different versions of Scala. When you start a cluster on Community Edition, please make sure it is running Scala 2.11.

thunterdb avatar Aug 11 '17 17:08 thunterdb
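
One way to confirm which Scala version the cluster's JVM is running is to query it through py4j from the same notebook. A hedged sketch, assuming the standard `sc` SparkContext; any Scala library attached to the cluster (such as the spark-deep-learning artifact) then needs the matching _2.11 suffix:

```python
# Ask the cluster's JVM which Scala version Spark was built with; a library
# compiled for a different Scala version (e.g. _2.10 on a 2.11 cluster) tends
# to fail with NoSuchMethodError on scala.Predef, as in the error above.
print(sc._jvm.scala.util.Properties.versionString())   # e.g. "version 2.11.8"
```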