Robert (Bobby) Evans
@HaoYang670 > Is it expected that null should be returned if all the values are nulls?

For Spark, yes:

```scala
scala> val allNull = Seq[java.lang.Double](null, null).toDF
allNull: org.apache.spark.sql.DataFrame = ...
```
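A fuller spark-shell sketch of that behavior (the aggregate calls here are my own illustration, not part of the original comment):

```scala
// min/max skip nulls entirely, so an all-null input yields a null result
import org.apache.spark.sql.functions.{min, max}

val allNull = Seq[java.lang.Double](null, null).toDF("value")
allNull.agg(min("value"), max("value")).show()
// both aggregates come back as null
```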
@HaoYang670 Yes, for both `GpuMin` and `GpuMax` we are going to have to remove the NaNs before calling min/max on them. @ttnghia is there some way we can make it...
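For context, Spark documents NaN as ordering above every other double value (and NaN as equal to NaN), which is why an implementation that strips NaNs has to be careful with max. A hypothetical spark-shell sketch, not the plugin's actual code:

```scala
// Spark orders NaN above every other double, so max returns NaN when one
// is present, while min returns the smallest non-NaN value.
import org.apache.spark.sql.functions.{col, isnan, max, min}

val df = Seq(1.0, Double.NaN, 3.0).toDF("value")
df.agg(min("value"), max("value")).show()   // min = 1.0, max = NaN

// Dropping NaNs first changes the max, so NaN handling must be
// reintroduced after the raw min/max call.
df.filter(!isnan(col("value"))).agg(max("value")).show()   // max = 3.0
```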
I realize that it currently is undefined because of how the comparison is defined. After talking to others on the cudf team, they were sure that the lexicographical compare code was...
I removed `GpuConcat` because it is covered by #5542
I thought I had filed an issue for this, but I could not find it. I was thinking that we could do a combination of operations when we bind the...
This would still be good to fix
> I believe the desired output is to place all the `NaN` values at the end, right?

That matches my understanding as well.
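A small spark-shell sketch of that ordering (my own illustration):

```scala
// An ascending sort places NaN after every other double, even +Infinity
val df = Seq(Double.NaN, 1.0, Double.PositiveInfinity).toDF("value")
df.orderBy("value").show()
// 1.0, Infinity, NaN  -- NaN sorts last
```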
This is a blocker for Spark to be able to use the JSON reader. Because we do not know all of the columns, the user just gives the ones that...
Not totally. In general we rely on Spark to tell us the schema of the data we want to read and then we pass it on to CUDF to select...
Long term, yes, we would want to be able to prune child columns as well. Unless the change is simple in the short term, I would rather have us concentrate...
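For reference, this is roughly how the user-supplied schema drives a read today (a sketch; the file path is a placeholder):

```scala
// spark-shell sketch: only the columns named in the schema are materialized;
// any other top-level fields present in the JSON must be skipped by the reader.
import org.apache.spark.sql.types._

val schema = StructType(Seq(
  StructField("a", LongType),
  StructField("b", StringType)))

val df = spark.read.schema(schema).json("/path/to/data.json")  // placeholder path
```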