
METASTORE_AUTO_CREATE_SCHEMA

Open · thbeh opened this issue 7 years ago · 3 comments

Hi,

I have downloaded v1.0.0 to test against MapR v6.0. I started SnappyData and tried to get a local spark-shell to use SnappyData, but got the following. Did I miss anything?

```
[mapr@lab1 quickstartdatadir]$ /opt/mapr/spark/spark-2.1.0/bin/spark-shell \
    --conf spark.snappydata.store.sys-disk-dir=quickstartdatadir \
    --conf spark.snappydata.store.log-file=quickstartdatadir/quickstart.log \
    --conf spark.snappydata.connection=localhost:1527 \
    --packages "SnappyDataInc:snappydata:1.0.0-s_2.11"
Ivy Default Cache set to: /home/mapr/.ivy2/cache
The jars for the packages stored in: /home/mapr/.ivy2/jars
:: loading settings :: url = jar:file:/opt/mapr/spark/spark-2.1.0/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
SnappyDataInc#snappydata added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
        confs: [default]
        found SnappyDataInc#snappydata;1.0.0-s_2.11 in spark-packages
:: resolution report :: resolve 558ms :: artifacts dl 6ms
        :: modules in use:
        SnappyDataInc#snappydata;1.0.0-s_2.11 from spark-packages in [default]
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   1   |   0   |   0   |   0   ||   1   |   0   |
        ---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
        confs: [default]
        0 artifacts copied, 1 already retrieved (0kB/15ms)
Spark context Web UI available at http://192.168.20.71:4040
Spark context available as 'sc' (master = local[*], app id = local-1518468605881).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0-mapr-1710
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_161)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import org.apache.spark.sql.{SnappySession, SparkSession}
import org.apache.spark.sql.{SnappySession, SparkSession}

scala> val snSession = new SnappySession(spark.sparkContext)
snSession: org.apache.spark.sql.SnappySession = org.apache.spark.sql.SnappySession@78e7b83

scala> snSession.sql("create table TestColumnTable (id bigint not null, k bigint not null) using column")
io.snappydata.com.google.common.util.concurrent.ExecutionError: java.lang.NoSuchFieldError: METASTORE_AUTO_CREATE_SCHEMA
  at io.snappydata.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2261)
  at io.snappydata.com.google.common.cache.LocalCache.get(LocalCache.java:4000)
  at io.snappydata.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4004)
  at io.snappydata.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
  at io.snappydata.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4880)
  at org.apache.spark.sql.SnappySession$.getPlan(SnappySession.scala:2092)
  at org.apache.spark.sql.SnappySession$$anonfun$sql$1.apply(SnappySession.scala:182)
  at org.apache.spark.sql.SnappySession$$anonfun$sql$1.apply(SnappySession.scala:182)
  at org.apache.spark.sql.aqp.SnappyContextFunctions.sql(SnappyContextFunctions.scala:91)
  at org.apache.spark.sql.SnappySession.sql(SnappySession.scala:182)
  ... 48 elided
Caused by: java.lang.NoSuchFieldError: METASTORE_AUTO_CREATE_SCHEMA
  at org.apache.spark.sql.hive.HiveClientUtil.newClient(HiveClientUtil.scala:163)
  at org.apache.spark.sql.hive.HiveClientUtil.org$apache$spark$sql$hive$HiveClientUtil$$newClientWithLogSetting(HiveClientUtil.scala:137)
  at org.apache.spark.sql.hive.HiveClientUtil$.newClient(HiveClientUtil.scala:296)
  at org.apache.spark.sql.hive.SnappySharedState.initMetaStore(SnappySharedState.java:114)
  at org.apache.spark.sql.hive.SnappySharedState.snappyCatalog(SnappySharedState.java:143)
  at org.apache.spark.sql.internal.SnappySessionState.catalog$lzycompute(SnappySessionState.scala:369)
  at org.apache.spark.sql.internal.SnappySessionState.catalog(SnappySessionState.scala:365)
  at org.apache.spark.sql.internal.SnappySessionState$$anon$2.<init>(SnappySessionState.scala:110)
  at org.apache.spark.sql.internal.SnappySessionState.analyzer$lzycompute(SnappySessionState.scala:110)
  at org.apache.spark.sql.internal.SnappySessionState.analyzer(SnappySessionState.scala:110)
  at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
  at org.apache.spark.sql.SnappySession.executeSQL(SnappySession.scala:195)
  at org.apache.spark.sql.SnappySession$$anon$3.load(SnappySession.scala:1988)
  at org.apache.spark.sql.SnappySession$$anon$3.load(SnappySession.scala:1982)
  at io.snappydata.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
  at io.snappydata.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
  at io.snappydata.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
  at io.snappydata.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2257)
  ... 57 more

scala>
```

thbeh · Feb 12 '18 20:02

It looks like a version mismatch in the Google common cache (Guava) jars. We haven't tested with MapR, but it should work. Can you give it a try with Spark 2.1.1?
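For example, the same connection test against a stock Apache Spark 2.1.1 download might look like this (the install path is an assumption for illustration; only the connection property and package coordinate come from the command above):

```sh
# Hypothetical: SPARK_211_HOME is an assumed install path for a stock
# Apache Spark 2.1.1 download; it is not part of the MapR distribution.
SPARK_211_HOME=/opt/spark-2.1.1-bin-hadoop2.7

"$SPARK_211_HOME"/bin/spark-shell \
    --conf spark.snappydata.connection=localhost:1527 \
    --packages "SnappyDataInc:snappydata:1.0.0-s_2.11"
```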

ymahajan · Feb 13 '18 21:02

Spark on MapR is still on 2.1.0. I could test with Spark 2.1.1 on MapR, but that would mean no support from MapR. Is there a SnappyData branch that I can build against Spark 2.1.0?

thbeh · Feb 14 '18 01:02

@thbeh Missed the follow-up on this. Yes, the current master should build against Spark 2.1.0 -- just change "sparkVersion" in the top-level build.gradle (store/build.gradle can also be changed, though that is not strictly required since it only uses spark-unsafe, which is fully binary compatible between 2.1.0 and 2.1.1). A sketch of the change follows.
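A minimal sketch, assuming the version is kept in a plain Gradle ext property as the comment above suggests (the surrounding contents of SnappyData's actual build.gradle are not reproduced here and may differ):

```groovy
// Top-level build.gradle (hypothetical excerpt): pin the Spark version
// that SnappyData compiles against to MapR's bundled release.
ext {
    sparkVersion = '2.1.0'  // was '2.1.1'
}
```

After that change, rebuilding with the usual gradle target (e.g. ./gradlew product, if that is the target in your checkout) should produce artifacts compiled against Spark 2.1.0.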

sumwale · Mar 08 '18 13:03