starrocks-connector-for-apache-spark

Querying data with spark-connector 1.1.2 throws an error

Open · EuphoriaJJK opened this issue 3 months ago • 0 comments

My Spark version is 3.3.1, the Scala version is 2.12.14, the StarRocks version is 3.2.0, the spark-connector version is starrocks-spark-connector-3.3_2.12-1.1.2.jar, and the mysql-jdbc version is 6.0.6. When I query table data with `starrocksSparkDF.show(10)`, the following error is thrown:

```
Hive Session ID = 7e592fe6-75aa-4025-abac-4ac70fae390d
2024-03-28 14:39:16,623 Executor task launch worker for task 0.0 in stage 0.0 (TID 0) ERROR Recursive call to appender sparklog
2024-03-28 14:39:16,685 Executor task launch worker for task 0.0 in stage 0.0 (TID 0) ERROR Recursive call to appender sparklog
2024-03-28 14:39:16,686 Executor task launch worker for task 0.0 in stage 0.0 (TID 0) ERROR Recursive call to appender sparklog
2024-03-28 14:39:16,707 Executor task launch worker for task 0.0 in stage 0.0 (TID 0) ERROR Recursive call to appender sparklog
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (ugfv executor driver): com.starrocks.connector.spark.exception.StarrocksException
	at com.starrocks.connector.spark.serialization.RowBatch.<init>(RowBatch.java:143)
	at com.starrocks.connector.spark.rdd.ScalaValueReader.hasNext(ScalaValueReader.scala:201)
	at com.starrocks.connector.spark.rdd.AbstractStarrocksRDDIterator.hasNext(AbstractStarrocksRDDIterator.scala:58)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:26)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:364)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:889)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:889)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:136)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$5(Executor.scala:568)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1645)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:571)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
```
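For reproducibility, here is a minimal sketch of how a DataFrame like `starrocksSparkDF` is typically created through this connector's DataFrame read path. The original report does not include the setup code, so the table identifier, FE addresses, and credentials below are placeholders, not the reporter's actual configuration:

```scala
import org.apache.spark.sql.SparkSession

object StarRocksReadRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("starrocks-connector-1.1.2-repro")
      .getOrCreate()

    // Typical DataFrame read via the StarRocks Spark connector.
    // All option values are placeholders for the reporter's environment.
    val starrocksSparkDF = spark.read
      .format("starrocks")
      .option("starrocks.table.identifier", "mydb.mytable")
      .option("starrocks.fe.http.url", "fe_host:8030")
      .option("starrocks.fe.jdbc.url", "jdbc:mysql://fe_host:9030")
      .option("starrocks.user", "root")
      .option("starrocks.password", "")
      .load()

    // With connector 1.1.2 this call fails with the StarrocksException
    // shown above; with 1.1.0 it returns rows normally.
    starrocksSparkDF.show(10)
  }
}
```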

When I switch the spark-connector version to 1.1.0 (starrocks-spark-connector-3.3_2.12-1.1.0), the error above does not occur and the data can be queried normally.

EuphoriaJJK · Mar 28 '24 06:03