Ilya Matiach

Results 261 comments of Ilya Matiach

@swaminaathan-pjm that is very strange, as the latest 0.9.1 jar does support Spark 3. What error are you currently seeing?

@devilwing0723 what version of mmlspark are you using? I recall this issue was fixed recently. There were actually several related issues like this. https://github.com/Azure/mmlspark/pull/676 https://github.com/Azure/mmlspark/pull/578 and one related PR to...

It looks like 0.18.1 does not have the fix: https://mvnrepository.com/artifact/com.microsoft.ml.spark/mmlspark_2.11/0.18.1. It uses LightGBM 2.2.350, but the fix was in 2.2.400. Using the RC 1.0 version or the latest snapshot should have the...

Based on the line number in the stack trace:

```
at com.microsoft.ml.spark.lightgbm.TrainUtils$$anonfun$3.apply(TrainUtils.scala:29)
```

it looks like this error is due to this line:

```
val labels = rows.map(row => row.getDouble(schema.fieldIndex(columnParams.labelColumn)))
```
...

Interestingly, this would actually violate the schema in your dataset above:

```
StructType(List(StructField(label,DoubleType,true),StructField(features,VectorUDT,true)))
```

so I'm not quite sure what is happening there.
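To make the failure mode above concrete, here is a hypothetical Python mock (not Spark code) of what the failing Scala line does: look up the label column's ordinal position in the schema, then read that cell as a double. The names `field_index` and `get_double` mirror Spark's `StructType.fieldIndex` and `Row.getDouble` but are stand-ins for illustration only; a null or non-numeric value at that position would raise even when the declared schema says the column is DoubleType.

```python
# Hypothetical mock of:
#   rows.map(row => row.getDouble(schema.fieldIndex(columnParams.labelColumn)))

schema_fields = ["label", "features"]  # declared column order from the schema


def field_index(name):
    # Mimics StructType.fieldIndex: column name -> ordinal position
    return schema_fields.index(name)


def get_double(row, i):
    # Mimics Row.getDouble: fails if the cell is not actually a double,
    # regardless of what the declared schema claims
    value = row[i]
    if not isinstance(value, float):
        raise TypeError(f"value at index {i} is not a double: {value!r}")
    return value


rows = [(1.0, [0.1, 0.2]), (0.0, [0.3, 0.4])]
labels = [get_double(row, field_index("label")) for row in rows]
print(labels)
```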

@hebo-yang oh this is very interesting. Based on that stack trace it looks like there is some param that is being converted to an int instead of a double in...

@goodwanghan sorry about the trouble you are having. You need to run the autogen step to generate the wrappers; _LightGBMClassifier is automatically generated from the Scala API. " I tried this...

@MarsXDM @MacJei @vinhdiesal @bkowshik not sure why you are having this issue. I need to understand more about your environment to try to reproduce and diagnose the issue. What...

If you download the jar, on Linux you can run `jar xvf mmlspark_2.11-1.0.0-rc1.jar` to extract it. Note there is one "com" folder where all of the Scala code...
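Since a jar is an ordinary zip archive, the same inspection can also be done without the JDK's `jar` tool, for example with Python's `zipfile` module. This is a sketch: it builds a tiny stand-in archive in memory (the entry names are illustrative, not the real mmlspark contents); with a real jar you would pass its file path to `zipfile.ZipFile` instead.

```python
import io
import zipfile

# Build a tiny stand-in "jar" in memory to demonstrate; with a real jar,
# replace `buf` with the path to the downloaded file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/MANIFEST.MF", "Manifest-Version: 1.0\n")
    jar.writestr("com/microsoft/ml/spark/Example.class", b"")

# List the archive entries, like `jar tvf` / `jar xvf` would show.
with zipfile.ZipFile(buf) as jar:
    names = jar.namelist()

# Compiled classes all live under the single top-level "com" folder.
scala_entries = [n for n in names if n.startswith("com/")]
print(scala_entries)
```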

I think this was the question for the Cloudera cluster: https://github.com/Azure/mmlspark/issues/311. There was also an external Azure customer who had this issue, and we were able to work around the...