spark-deep-learning
Issue with h5py
Hello all, I am trying to run the following deep learning pipeline example, "Introducing Deep Learning Pipelines for Apache Spark": https://community.cloud.databricks.com/?o=4841533921079887#notebook/4444902769053336/command/4444902769053351
I have successfully installed all the libraries, but when I run the program in Databricks it says:
ImportError: load_weights requires h5py.
I have attached a snapshot of the cell and its error:
I don't understand why it cannot use h5py when it is in the libraries. Did I do something wrong, or did I install an incorrect version of something? Your help is really appreciated. Fari
Did you attach the h5py library to the cluster? You can see what libraries are attached by going to the "Clusters" tab and clicking on your cluster, and then clicking on the "Libraries" tab.
@fsahba would you mind detaching the notebook and re-attaching it after the library is attached to the cluster? Thanks! In general, it might be best if h5py is attached to the cluster before Keras. It is likely a Python issue, as we have seen this in a standalone IPython console with Keras.
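A quick way to confirm the attachment worked is to import h5py from a notebook cell, both on the driver and on the executors (Deep Learning Pipelines runs Keras code on workers too). This is only an illustrative sketch; the `check_h5py` helper is not from the original thread.

```python
# Sketch: verify h5py is importable after attaching the library to the
# cluster and re-attaching the notebook. Keras raises
# "ImportError: `load_weights` requires h5py" when this import fails.
import h5py
print("driver h5py:", h5py.__version__)

# check_h5py is a hypothetical helper used to confirm the executors can
# also import h5py; `sc` is the SparkContext provided by the Databricks notebook.
def check_h5py(_):
    import h5py
    return h5py.__version__

print("executor h5py:", sc.parallelize([0], numSlices=1).map(check_h5py).collect())
```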
Thanks Sueaan and phi-dbq. I checked and followed your suggestions, and that problem was fixed, but right after, a new one came up. Although I attached py4j to the cluster libraries and detached and re-attached the notebook, it gives me the following error:
java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
I guess it is about the Spark dependency and Java, but whatever I did, I could not fix it. Fari
You shouldn't need to attach py4j for the example notebook to run. What is your cluster configuration - do you know the Spark version or the Databricks Runtime version? Are you running it on community edition?
It is running Spark 2.1 and it is Community Edition 2.51.
@fsahba from the last error you reported, it sounds like you are mixing different versions of Scala. When you start a cluster on Community Edition, please make sure it is running Scala 2.11.
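One way to confirm which versions the cluster is actually running is to print them from a notebook cell. The snippet below is only a sketch; the Scala version lookup through the py4j gateway is an assumption, and you can instead run `util.Properties.versionString` in a %scala cell.

```python
# Sketch: print the Spark version and the Scala version of the cluster's JVM.
# A scala.Predef$.$conforms NoSuchMethodError typically means a library built
# against one Scala version (e.g. 2.10) is running on a cluster built with
# another (e.g. 2.11).
print("Spark version:", spark.version)

# Assumption: the static forwarder scala.util.Properties.versionString()
# is reachable through the py4j gateway exposed as sc._jvm.
print("Scala:", sc._jvm.scala.util.Properties.versionString())
```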