
Connection refused error while using LightGBMRegressor

Open · jaipal1995 opened this issue 6 years ago · 3 comments

Hi, I have been trying to use the LightGBM regressor. After working through a series of earlier errors, I have ended up with the error below, which I am finally stuck on. I am using: Spark = 2.3.1, lightGBM = 2.2.350, mmlspark = 0.18.

I am trying to train 5 different models for different groups in 5 different threads. I have also tried training all the models sequentially. It ran for around 60 such groups and then failed with the error below. Each group has at least 60 data points. Could either of the following cause the issue?

  1. Having very few data points.
  2. Having an empty partition, or a partition with very few data points (see the diagnostic sketch after this list).
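
A quick way to check suspicion 2 (a minimal sketch, where `df` stands in for one group's training DataFrame): count the rows in each partition. The stack trace below fails inside getNetworkInitNodes, where the per-partition LightGBM workers open sockets to each other, so an empty partition is a plausible way to break that handshake.

    // Diagnostic sketch: print the row count of every partition of df.
    // rows.size consumes the iterator, which is fine for a one-off check.
    val sizes = df.rdd
      .mapPartitionsWithIndex((idx, rows) => Iterator((idx, rows.size)))
      .collect()
    sizes.foreach { case (idx, n) => println(s"partition $idx: $n rows") }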

Cluster configuration: 20 executors, each with 4 GB RAM and 1 core. Tasks per CPU is set to the default value (1). Spark dynamic allocation is turned off. Here are the parameters I have set, in case they are needed:

    val UPPER_LOG_WEIGHT_LIMIT: Double = 3D
    val LOWER_LOG_WEIGHT_LIMIT: Double = 1D

    val lightGBMRegressor: LightGBMRegressor = new LightGBMRegressor()
      .setLabelCol("label")
      .setFeaturesCol("features")
      .setObjective("regression")
      .setMinSumHessianInLeaf(10)
      .setVerbosity(3)
      .setBaggingFraction(0.9f)
      .setFeatureFraction(0.8)
      .setEarlyStoppingRound(50)
      .setBaggingSeed(10011993)
      .setNumLeaves(10)
      .setPredictionCol("prediction")
      .setBoostingType("gbdt")
      .setWeightCol("logWeights")

    val paramGrid: Array[ParamMap] = new ParamGridBuilder()
      .addGrid(lightGBMRegressor.maxDepth, Array(4, 6, 8))
      .addGrid(lightGBMRegressor.learningRate, Array(0.05, 0.1))
      .addGrid(lightGBMRegressor.numLeaves, Array(5, 10, 20, 40))
      .addGrid(lightGBMRegressor.numIterations, Array(200, 400))
      .build()

    val evaluator: RegressionEvaluator = new RegressionEvaluator()
      .setLabelCol("label")
      .setPredictionCol("prediction")
      .setMetricName("rmse")

    val cv = new TrainValidationSplit()
      .setEstimator(lightGBMRegressor)
      .setEvaluator(evaluator)
      .setEstimatorParamMaps(paramGrid)
      .setTrainRatio(0.8)
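
Since each group is small, one way to test suspicion 2 is to collapse a group's rows into a single partition before fitting, so LightGBM needs only one worker and no cross-executor sockets; a minimal sketch, where `groupDF` is an assumed name for one group's data:

    // Hypothetical usage: coalesce(1) guarantees no empty partitions for a
    // tiny per-group dataset, at the cost of single-worker training.
    val model = cv.fit(groupDF.coalesce(1))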

**Stacktrace**

User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 16227.1 failed 4 times, most recent failure: Lost task 1.3 in stage 16227.1 (TID 765430, JMNGD1BAG70C07, executor 6): java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at java.net.Socket.<init>(Socket.java:434)
at java.net.Socket.<init>(Socket.java:211)
at com.microsoft.ml.spark.lightgbm.TrainUtils$.getNetworkInitNodes(TrainUtils.scala:324)
at com.microsoft.ml.spark.lightgbm.TrainUtils$$anonfun$15.apply(TrainUtils.scala:398)
at com.microsoft.ml.spark.lightgbm.TrainUtils$$anonfun$15.apply(TrainUtils.scala:393)
at com.microsoft.ml.spark.core.env.StreamUtilities$.using(StreamUtilities.scala:28)
at com.microsoft.ml.spark.lightgbm.TrainUtils$.trainLightGBM(TrainUtils.scala:392)
at com.microsoft.ml.spark.lightgbm.LightGBMBase$$anonfun$6.apply(LightGBMBase.scala:85)
at com.microsoft.ml.spark.lightgbm.LightGBMBase$$anonfun$6.apply(LightGBMBase.scala:85)
at org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$5.apply(objects.scala:188)
at org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$5.apply(objects.scala:185)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1602)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1590)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1589)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1589)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1823)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1772)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1761)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2131)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1029)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.reduce(RDD.scala:1011)
at org.apache.spark.sql.Dataset.reduce(Dataset.scala:1610)
at com.microsoft.ml.spark.lightgbm.LightGBMBase$class.innerTrain(LightGBMBase.scala:90)
at com.microsoft.ml.spark.lightgbm.LightGBMRegressor.innerTrain(LightGBMRegressor.scala:38)
at com.microsoft.ml.spark.lightgbm.LightGBMBase$class.train(LightGBMBase.scala:38)
at com.microsoft.ml.spark.lightgbm.LightGBMRegressor.train(LightGBMRegressor.scala:38)
at com.microsoft.ml.spark.lightgbm.LightGBMRegressor.train(LightGBMRegressor.scala:38)
at org.apache.spark.ml.Predictor.fit(Predictor.scala:118)
at org.apache.spark.ml.Predictor.fit(Predictor.scala:82)
at org.apache.spark.ml.Estimator.fit(Estimator.scala:61)
at org.apache.spark.ml.tuning.TrainValidationSplit.fit(TrainValidationSplit.scala:173)
at ai.couture.obelisk.retail.legos.plp.PredictProductDailySalesPerBrick$$anonfun$predictSalesPerBrick$1$$anonfun$apply$12.apply(PredictProductDailySalesPerBrick.scala:366)
at ai.couture.obelisk.retail.legos.plp.PredictProductDailySalesPerBrick$$anonfun$predictSalesPerBrick$1$$anonfun$apply$12.apply(PredictProductDailySalesPerBrick.scala:352)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at ai.couture.obelisk.retail.legos.plp.PredictProductDailySalesPerBrick$$anonfun$predictSalesPerBrick$1.apply(PredictProductDailySalesPerBrick.scala:352)
at ai.couture.obelisk.retail.legos.plp.PredictProductDailySalesPerBrick$$anonfun$predictSalesPerBrick$1.apply(PredictProductDailySalesPerBrick.scala:67)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at ai.couture.obelisk.retail.legos.plp.PredictProductDailySalesPerBrick.predictSalesPerBrick(PredictProductDailySalesPerBrick.scala:67)
at ai.couture.obelisk.retail.legos.plp.PredictProductDailySalesA4$.doTransformations(PredictProductDailySalesA4.scala:35)
at ai.couture.obelisk.retail.legos.plp.PredictProductDailySalesA4$.execute(PredictProductDailySalesA4.scala:13)
at ai.couture.obelisk.retail.legos.MainClass$.main(MainClass.scala:155)
at ai.couture.obelisk.retail.legos.MainClass.main(MainClass.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at java.net.Socket.<init>(Socket.java:434)
at java.net.Socket.<init>(Socket.java:211)
at com.microsoft.ml.spark.lightgbm.TrainUtils$.getNetworkInitNodes(TrainUtils.scala:324)
at com.microsoft.ml.spark.lightgbm.TrainUtils$$anonfun$15.apply(TrainUtils.scala:398)
at com.microsoft.ml.spark.lightgbm.TrainUtils$$anonfun$15.apply(TrainUtils.scala:393)
at com.microsoft.ml.spark.core.env.StreamUtilities$.using(StreamUtilities.scala:28)
at com.microsoft.ml.spark.lightgbm.TrainUtils$.trainLightGBM(TrainUtils.scala:392)
at com.microsoft.ml.spark.lightgbm.LightGBMBase$$anonfun$6.apply(LightGBMBase.scala:85)
at com.microsoft.ml.spark.lightgbm.LightGBMBase$$anonfun$6.apply(LightGBMBase.scala:85)
at org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$5.apply(objects.scala:188)
at org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$5.apply(objects.scala:185)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

jaipal1995 · Nov 25 '19

I also encountered this error.

    from pyspark.ml.evaluation import RegressionEvaluator
    from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
    from mmlspark.lightgbm import LightGBMRegressor

    # numIterations, learning_rate, and num_leaves are defined elsewhere
    regressor = LightGBMRegressor(numIterations=numIterations,
                                  labelCol="RegLabel",
                                  featuresCol="Features",
                                  useBarrierExecutionMode=False)

    pg = ParamGridBuilder()\
        .addGrid(regressor.learningRate, learning_rate)\
        .addGrid(regressor.numLeaves, num_leaves)\
        .build()

    re = RegressionEvaluator(predictionCol="prediction", labelCol="RegLabel", metricName="rmse")
    cv = CrossValidator(estimator=regressor,
                        estimatorParamMaps=pg,
                        evaluator=re,
                        numFolds=5)

    cv_model = cv.fit(reg_features_df.select("RegLabel", "Features"))

Failure is at fit(). The stack trace is the same as above. Failed jobs had the error java.lang.Exception: Dataset set feature names call failed in LightGBM with error: Do not support non-ascii characters in feature name. The only column names are "RegLabel" and "Features", neither of which contains non-ASCII characters. Using Databricks 6.2, Spark 2.4.4, Scala 2.11, mmlspark 2.11:1.0.0-rc1.
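
A quick way to double-check that claim (a sketch in Scala, for consistency with the original post, assuming `df` is the training DataFrame):

    // Print any column name containing a character outside the ASCII range;
    // an empty result means the error originates somewhere other than the
    // visible column names.
    df.columns.filterNot(_.forall(_ < 128)).foreach(println)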

mcb0035 · Dec 27 '19

I had an issue with cross-validation getting stuck and sometimes throwing "Connection refused". Have you both tried .setUseBarrierExecutionMode(true) on the regressor? This helped me, but I still have some problems with execution speed.
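
Applied to the regressor from the original post, the suggestion would look like this (a sketch; barrier execution mode schedules all of LightGBM's tasks as one gang, so every worker is up before the network handshake starts):

    val lightGBMRegressor: LightGBMRegressor = new LightGBMRegressor()
      .setLabelCol("label")
      .setFeaturesCol("features")
      .setUseBarrierExecutionMode(true) // all tasks start together or not at all
      // ... remaining setters as in the original post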

kogandaniil · Jan 17 '20

@kogandaniil Hi, have you solved this issue? I tried to run this on Databricks PySpark, but CrossValidator with LightGBM still has the execution speed problem.

stupidoge · Dec 03 '24