Hao Zhu
Another maybe-related stacktrace is:

```
org.apache.hive.service.cli.HiveSQLException: Error running query: java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: com/nvidia/spark/rapids/RuleNotFoundDataWritingCommandMeta
	at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:44)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:488)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:246)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:79)
	at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:63)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:58)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:246)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:241)
	...
```
The user will try downgrading the JDK 8 version on a test node with STS (Spark Thrift Server) running to see if they can reproduce the issue.
@abellina @tgravescs @medb @mengdong FYI: this switches the Dataproc + Spark RAPIDS setup to a single init script (named `spark-rapids.sh`) instead of the separate `install_gpu_driver.sh` + `rapids.sh`, as requested by our...
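As a hedged usage sketch (the cluster name, region, accelerator type, and bucket path below are illustrative assumptions, not taken from this thread), a user could reference the single consolidated init script when creating a Dataproc cluster like this:

```shell
# Illustrative only: create a Dataproc cluster that runs the single
# spark-rapids.sh init script instead of install_gpu_driver.sh + rapids.sh.
# Cluster name, region, accelerator type, and GCS path are assumptions.
gcloud dataproc clusters create my-rapids-cluster \
    --region=us-central1 \
    --worker-accelerator type=nvidia-tesla-t4,count=1 \
    --initialization-actions=gs://goog-dataproc-initialization-actions-us-central1/spark-rapids/spark-rapids.sh
```

This is a command-line sketch rather than something runnable here, since it requires a GCP project and quota.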
> Please add more description as to what this really is and how user would now call it.

> Does this support MIG properly? I assume the install_gpu_driver stays where...
@medb this LGTM.
@jayadeep-jayaraman This PR is missing from the 22.08 release. Could you help test and merge it first? Thanks!
@nvliyuan fyi as well
@medb Could you help review and approve? Thanks. This is just some default config settings for Spark.
@jlowe We do not need to explicitly set `spark.sql.autoBroadcastJoinThreshold=10m` because this is already the default value, right?
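For reference, Spark's documented default for this setting is 10 MB, so an explicit entry like the following in `spark-defaults.conf` would be redundant (shown purely as an illustrative config fragment):

```
# Illustrative spark-defaults.conf fragment.
# 10m (10 MB) is already Spark's documented default for this setting,
# so this line has no effect and can be omitted.
spark.sql.autoBroadcastJoinThreshold 10m
```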
From the logs, I think it is good enough. I am also double-checking with the user.