SynapseML
java.lang.NoSuchMethodError: spray.json.package$.enrichAny(Ljava/lang/Object;)Lspray/json/RichAny;
SynapseML version
0.10.1
System information
- Language version: Python 3.8.10, Scala 2.12
- Spark version: Apache Spark 3.2.1
- Spark platform: Databricks
Describe the problem
Installed the package via the coordinate com.microsoft.azure:synapseml_2.12:0.10.1 with the resolver https://mmlspark.azureedge.net/maven, but got this error message when trying to instantiate a LightGBM classifier: java.lang.NoSuchMethodError: spray.json.package$.enrichAny(Ljava/lang/Object;)Lspray/json/RichAny;
```
      3 lgbmClassifier = (LightGBMClassifier()
      4     .setFeaturesCol("features")
      5     .setRawPredictionCol("rawPrediction")

/databricks/spark/python/pyspark/__init__.py in wrapper(self, *args, **kwargs)
    112         raise TypeError("Method %s forces keyword arguments." % func.__name__)
    113     self._input_kwargs = kwargs
--> 114     return func(self, **kwargs)
    115 return wrapper
    116

/local_disk0/spark-36f90ef6-a68d-4c36-ba04-d72f939344e4/userFiles-842368d3-9f91-4dec-8c2d-38752346d587/addedFile462316793479631003synapseml_lightgbm_2_12_0_10_1-c15ba.jar/synapse/ml/lightgbm/LightGBMClassifier.py in __init__(self, java_obj, baggingFraction, baggingFreq, baggingSeed, binSampleCount, boostFromAverage, boostingType, catSmooth, categoricalSlotIndexes, categoricalSlotNames, catl2, chunkSize, dataRandomSeed, defaultListenPort, deterministic, driverListenPort, dropRate, dropSeed, earlyStoppingRound, executionMode, extraSeed, featureFraction, featureFractionByNode, featureFractionSeed, featuresCol, featuresShapCol, fobj, improvementTolerance, initScoreCol, isEnableSparse, isProvideTrainingMetric, isUnbalance, labelCol, lambdaL1, lambdaL2, leafPredictionCol, learningRate, matrixType, maxBin, maxBinByFeature, maxCatThreshold, maxCatToOnehot, maxDeltaStep, maxDepth, maxDrop, metric, microBatchSize, minDataInLeaf, minDataPerBin, minDataPerGroup, minGainToSplit, minSumHessianInLeaf, modelString, monotoneConstraints, monotoneConstraintsMethod, monotonePenalty, negBaggingFraction, numBatches, numIterations, numLeaves, numTasks, numThreads, objective, objectiveSeed, otherRate, parallelism, passThroughArgs, posBaggingFraction, predictDisableShapeCheck, predictionCol, probabilityCol, rawPredictionCol, repartitionByGroupingColumn, seed, skipDrop, slotNames, thresholds, timeout, topK, topRate, uniformDrop, useBarrierExecutionMode, useMissing, useSingleDatasetMode, validationIndicatorCol, verbosity, weightCol, xGBoostDartMode, zeroAsMissing)
    387         super(LightGBMClassifier, self).__init__()
    388         if java_obj is None:
--> 389             self._java_obj = self._new_java_obj("com.microsoft.azure.synapse.ml.lightgbm.LightGBMClassifier", self.uid)
    390         else:
    391             self._java_obj = java_obj

/databricks/spark/python/pyspark/ml/wrapper.py in _new_java_obj(java_class, *args)
     64             java_obj = getattr(java_obj, name)
     65         java_args = [_py2java(sc, arg) for arg in args]
---> 66         return java_obj(*java_args)
     67
     68     @staticmethod

/databricks/spark/python/lib/py4j-0.10.9.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1566
   1567         answer = self._gateway_client.send_command(command)
-> 1568         return_value = get_return_value(
   1569             answer, self._gateway_client, None, self._fqn)
   1570

/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
    115     def deco(*a, **kw):
    116         try:
--> 117             return f(*a, **kw)
    118         except py4j.protocol.Py4JJavaError as e:
    119             converted = convert_exception(e.java_exception)

/databricks/spark/python/lib/py4j-0.10.9.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    324             value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325             if answer[1] == REFERENCE_TYPE:
--> 326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
    328                     format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling None.com.microsoft.azure.synapse.ml.lightgbm.LightGBMClassifier.
: java.lang.NoSuchMethodError: spray.json.package$.enrichAny(Ljava/lang/Object;)Lspray/json/RichAny;
	at com.microsoft.azure.synapse.ml.logging.BasicLogging.logBase(BasicLogging.scala:30)
	at com.microsoft.azure.synapse.ml.logging.BasicLogging.logBase$(BasicLogging.scala:29)
	at com.microsoft.azure.synapse.ml.lightgbm.LightGBMClassifier.logBase(LightGBMClassifier.scala:27)
	at com.microsoft.azure.synapse.ml.logging.BasicLogging.logClass(BasicLogging.scala:40)
	at com.microsoft.azure.synapse.ml.logging.BasicLogging.logClass$(BasicLogging.scala:39)
	at com.microsoft.azure.synapse.ml.lightgbm.LightGBMClassifier.logClass(LightGBMClassifier.scala:27)
	at com.microsoft.azure.synapse.ml.lightgbm.LightGBMClassifier.<init>
```
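A NoSuchMethodError like this usually means two versions of the same library (here, spray-json) ended up on the classpath and the wrong one won resolution, which a commenter below also suspects. As an illustration of how to spot that, here is a small sketch that flags duplicate artifact versions in a jar listing; the helper name and the sample jar names are made up for the example, not part of SynapseML:

```python
import re
from collections import defaultdict

def find_duplicate_artifacts(classpath_jars):
    """Group jar file names by artifact and report artifacts that appear
    with more than one version, a classic source of NoSuchMethodError."""
    versions = defaultdict(set)
    for jar in classpath_jars:
        # Split "artifact-version.jar" (or "artifact_version.jar") names.
        m = re.match(r"(.+?)[-_](\d[\w.]*)\.jar$", jar)
        if m:
            versions[m.group(1)].add(m.group(2))
    return {a: sorted(v) for a, v in versions.items() if len(v) > 1}

jars = ["spray-json_2.12-1.3.5.jar", "spray-json_2.12-1.3.6.jar",
        "synapseml_2.12-0.10.1.jar"]
print(find_duplicate_artifacts(jars))  # {'spray-json_2.12': ['1.3.5', '1.3.6']}
```

On Databricks you could feed it a listing of the driver's jar directories to see whether spray-json shows up more than once.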
Code to reproduce issue
```python
lgbmClassifier = (LightGBMClassifier()
    .setFeaturesCol("features")
    .setRawPredictionCol("rawPrediction")
    .setDefaultListenPort(12402)
    .setNumLeaves(5)
    .setNumIterations(10)
    .setObjective("binary")
    .setLabelCol("labels")
    .setLeafPredictionCol("leafPrediction")
    .setFeaturesShapCol("featuresShap"))
```
Other info / logs
No response
What component(s) does this bug affect?
- [ ] area/cognitive: Cognitive project
- [ ] area/core: Core project
- [ ] area/deep-learning: DeepLearning project
- [x] area/lightgbm: Lightgbm project
- [ ] area/opencv: Opencv project
- [ ] area/vw: VW project
- [ ] area/website: Website
- [ ] area/build: Project build system
- [ ] area/notebooks: Samples under notebooks folder
- [ ] area/docker: Docker usage
- [ ] area/models: models related issue
What language(s) does this bug affect?
- [ ] language/scala: Scala source code
- [x] language/python: Pyspark APIs
- [ ] language/r: R APIs
- [ ] language/csharp: .NET APIs
- [ ] language/new: Proposals for new client languages
What integration(s) does this bug affect?
- [x] integrations/synapse: Azure Synapse integrations
- [ ] integrations/azureml: Azure ML integrations
- [ ] integrations/databricks: Databricks integrations
Hey @sibyl1956 :wave:! Thank you so much for reporting the issue/feature request :rotating_light:. Someone from SynapseML Team will be looking to triage this issue soon. We appreciate your patience.
`"io.spray" %% "spray-json" % "1.3.5"` — there may be a problem with your spray-json version; it should be 1.3.5. Please confirm.
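One way to confirm which spray-json version a jar on your classpath actually carries is to read its manifest. A minimal sketch, assuming the jar records an `Implementation-Version` attribute (not every jar does; the helper name is made up for this example):

```python
import zipfile

def jar_manifest_version(jar_path):
    """Return the Implementation-Version from a jar's MANIFEST.MF,
    or None if the attribute is absent."""
    with zipfile.ZipFile(jar_path) as jar:
        manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8")
    for line in manifest.splitlines():
        if line.startswith("Implementation-Version:"):
            return line.split(":", 1)[1].strip()
    return None
```

For example, `jar_manifest_version("/databricks/jars/spray-json_2.12-1.3.5.jar")` (a hypothetical path) should report "1.3.5" if the jar is the expected one.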
Agreeing with @neptune05
Is this issue fixed? I hit the same error, and I can't find the spray-json jars in .ivy2/jars. How can I fix it?
I have the same error. I think it's because another of my uber jars brings in spray-json 1.3.6, even though spray.json.package$.enrichAny exists in both 1.3.5 and 1.3.6. My fix was to compile spray-json 1.3.5 into one of my uber jars, or to copy it into Spark's jars folder. Alternatively, you can try setting spark.driver.userClassPathFirst and spark.executor.userClassPathFirst.
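If you cannot rebuild the uber jar, those two flags can be set as cluster-level Spark configuration (on Databricks, under the cluster's Spark config). A sketch of the config fragment; note that Spark documents the userClassPathFirst properties as experimental, and they can surface other classpath conflicts:

```
spark.driver.userClassPathFirst true
spark.executor.userClassPathFirst true
```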