
SHOW CREATE TABLE is unsupported

Open yifeng-chen opened this issue 2 years ago • 3 comments

Hi Delta team, I'm trying to run SHOW CREATE TABLE in my local development environment and an error occurs, although the same statement works fine in the Databricks environment.
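
For reference, here's a minimal sketch of how I hit this locally (Spark 3.x with open-source Delta; the table name and schema are just placeholders from my setup):

    // Minimal repro sketch: OSS Spark with the Delta catalog configured.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[*]")
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()

    spark.sql("CREATE TABLE IF NOT EXISTS events (id BIGINT, ts TIMESTAMP) USING delta")

    // Fails here with: AnalysisException: SHOW CREATE TABLE is not supported for v2 tables.
    spark.sql("SHOW CREATE TABLE events").show(truncate = false)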

I'm wondering whether SHOW CREATE TABLE support is proprietary to Databricks.

If so, is there any possible workaround for the OSS version to support the SHOW CREATE TABLE statement?

Thanks.

yifeng-chen avatar Mar 25 '22 15:03 yifeng-chen

Thanks for raising this issue. This is an oversight in Delta; we will fix it. But if you have free time to work on this, feel free to open a PR.
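
In the meantime, a rough workaround is to reconstruct an approximate DDL yourself from the table schema and DESCRIBE DETAIL. This is only a sketch (the table name "events" is a placeholder, and it ignores constraints, column comments, and most table properties):

    // Rebuild an approximate CREATE TABLE statement for a Delta table.
    val tableName = "events"
    // StructType.toDDL renders the columns, e.g. "id BIGINT, ts TIMESTAMP".
    val columnsDdl = spark.table(tableName).schema.toDDL

    // DESCRIBE DETAIL returns one row with partitionColumns, location, etc.
    val detail = spark.sql(s"DESCRIBE DETAIL $tableName").head()
    val partitionCols = detail.getAs[Seq[String]]("partitionColumns")
    val location = detail.getAs[String]("location")

    val partitionClause =
      if (partitionCols.isEmpty) ""
      else partitionCols.mkString("\nPARTITIONED BY (", ", ", ")")

    val ddl =
      s"""CREATE TABLE $tableName ($columnsDdl)
         |USING delta$partitionClause
         |LOCATION '$location'""".stripMargin

    println(ddl)

This won't round-trip everything a proper SHOW CREATE TABLE would emit, but it covers the schema, partitioning, and location until a fix lands.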

zsxwing avatar Mar 29 '22 00:03 zsxwing

Stack trace of the issue, which I'm running into as well:

22/05/11 14:00:08 ERROR SparkExecuteStatementOperation: Error executing query with 98f04258-59d4-43cd-b4a2-373a6684f3d9, currentState RUNNING, 
spark               | org.apache.spark.sql.AnalysisException: SHOW CREATE TABLE is not supported for v2 tables.
spark               |   at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Strategy.apply(DataSourceV2Strategy.scala:350)
spark               |   at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$1(QueryPlanner.scala:63)
spark               |   at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
spark               |   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
spark               |   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:489)
spark               |   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
spark               |   at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:67)
spark               |   at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78)
spark               |   at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:162)
spark               |   at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:162)
spark               |   at scala.collection.Iterator.foreach(Iterator.scala:941)
spark               |   at scala.collection.Iterator.foreach$(Iterator.scala:941)
spark               |   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
spark               |   at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:162)
spark               |   at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:160)
spark               |   at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1429)
spark               |   at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75)
spark               |   at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
spark               |   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
spark               |   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
spark               |   at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:67)
spark               |   at org.apache.spark.sql.execution.QueryExecution$.createSparkPlan(QueryExecution.scala:391)
spark               |   at org.apache.spark.sql.execution.QueryExecution.$anonfun$sparkPlan$1(QueryExecution.scala:104)
spark               |   at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
spark               |   at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143)
spark               |   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
spark               |   at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143)
spark               |   at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:104)
spark               |   at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:97)
spark               |   at org.apache.spark.sql.execution.QueryExecution.$anonfun$executedPlan$1(QueryExecution.scala:117)
spark               |   at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
spark               |   at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143)
spark               |   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
spark               |   at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143)
spark               |   at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:117)
spark               |   at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:110)
spark               |   at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:101)
spark               |   at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
spark               |   at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
spark               |   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
spark               |   at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
spark               |   at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685)
spark               |   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228)
spark               |   at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
spark               |   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
spark               |   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
spark               |   at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618)
spark               |   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
spark               |   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613)
spark               |   at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:325)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:263)
spark               |   at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:78)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:62)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:263)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:258)
spark               |   at java.base/java.security.AccessController.doPrivileged(Native Method)
spark               |   at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
spark               |   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
spark               |   at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:272)
spark               |   at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
spark               |   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
spark               |   at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
spark               |   at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
spark               |   at java.base/java.lang.Thread.run(Thread.java:829)

findinpath avatar May 11 '22 15:05 findinpath

I'll take this.

zpappa avatar Jul 05 '22 17:07 zpappa