Mathieu Boespflug
I would state that option this way: *reading* an RDD could still be in `IO`, but the RDD *composition* operators would not. The same principle would apply to the dataframes...
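Roughly the shape I have in mind, as a minimal sketch with entirely hypothetical names (this is not sparkle's actual API): composing transformations builds a pure description of the job, and only materializing it touches `IO`.

```haskell
{-# LANGUAGE GADTs #-}

-- Hypothetical names throughout; this sketches the typing discipline only.
data RDD a where
  Parallelize :: [a] -> RDD a
  Map         :: (a -> b) -> RDD a -> RDD b
  Filter      :: (a -> Bool) -> RDD a -> RDD a

-- Pure: composing transformations builds a description, touching no JVM state.
squares :: RDD Int -> RDD Int
squares = Map (\x -> x * x) . Filter even

-- Reading (materializing) is where IO belongs.
collect :: RDD a -> IO [a]
collect = error "would interpret the description against Spark"
```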
Wild guess: are you building on NixOS? That won't work out of the box, because the dynamic linker path is hard-coded to a non-standard location. Note that the .jar is not entirely hermetic. There are three...
New Cabal or Java packages? We already depend on Scala libraries indirectly via Spark (Scala is Spark's implementation language), so I don't think we would have any new...
Only the standard library. Nothing else.
I didn't mean to imply that inline-java would be required. It was merely a more convenient notation for the sake of discussion than `call` and friends.
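For example, the notational gap I mean, adapted from inline-java's documented hello-world (with `call` and friends, the last line would be an explicit method lookup plus marshalled arguments):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes #-}
module Main where

import Data.Text (Text)
import Language.Java (withJVM, reflect)
import Language.Java.Inline

main :: IO ()
main = withJVM [] $ do
    message <- reflect ("hello" :: Text)
    -- Spelled with `call`, this would be an explicit method reference
    -- plus marshalled arguments; the quasiquote is just sugar for that.
    [java| { System.out.println($message); } |]
```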
@robinbb 1-2 days I'd say.
@robinbb it would turn an indirect dependency on the Scala standard library (via Spark) into a direct one. So no new dependencies overall.
> So how do we deal with implicits? The only implicits in the API are evidence for `Ordering` constraints and `ClassTag`. Our bindings do know at runtime the ground type...
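In sketch form, with hypothetical names rather than our actual bindings: recover the evidence objects from the ground element type via an ordinary type class, and pass them explicitly where scalac would resolve them implicitly.

```haskell
{-# LANGUAGE DataKinds #-}

import Data.Int (Int32)
import Data.Proxy (Proxy (..))
import Language.Java

-- The evidence objects a Scala call site receives implicitly.
type JClassTag = J ('Class "scala.reflect.ClassTag")
type JOrdering = J ('Class "scala.math.Ordering")

-- Hypothetical: one instance per ground element type the bindings support.
class ScalaEvidence a where
  classTagOf :: Proxy a -> IO JClassTag
  orderingOf :: Proxy a -> IO JOrdering

instance ScalaEvidence Int32 where
  -- Would fetch e.g. ClassTag$.MODULE$.apply(Integer.TYPE) and
  -- Ordering$Int$.MODULE$ through the JVM; elided in this sketch.
  classTagOf _ = error "fetch scala.reflect.ClassTag for Int"
  orderingOf _ = error "fetch scala.math.Ordering$Int$"
```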
Do you mean `Encoder`? It looks like those can be created explicitly using static methods in the `Encoders` class.
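A minimal sketch, assuming spark-sql and scala-library are on the classpath (untested); the point is that the factory is a plain static call, with no implicit machinery involved:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE QuasiQuotes #-}
module Main where

import Language.Java
import Language.Java.Inline

-- Encoders.STRING() is one of the static factories on
-- org.apache.spark.sql.Encoders; no implicit resolution needed.
stringEncoder :: IO (J ('Class "org.apache.spark.sql.Encoder"))
stringEncoder = [java| org.apache.spark.sql.Encoders.STRING() |]

main :: IO ()
main = withJVM [] (() <$ stringEncoder)
```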
Not sure what's going on here. The next step is to profile the Scala version of the code on the same dataset, to rule out sparkle as the culprit. `zipWithIndex`...