Fan Yang

4 comments by Fan Yang

It is always the same jar; Spark just distributes the compute work to different executors, and many executors may run on the same machine.

Indeed, that might be the reason! We actually already worked around the issue another way: we set this property: https://github.com/bytedeco/javacpp/blob/master/src/main/java/org/bytedeco/javacpp/Loader.java#L999 We set this property to a random folder...
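A minimal sketch of that workaround, assuming the property read at the linked Loader.java line is `org.bytedeco.javacpp.cachedir` (the JavaCPP cache directory override); the idea is to point each executor at its own node-local directory so processes sharing an NFS-mounted home directory do not collide on native-library extraction:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class CacheDirWorkaround {
    public static void main(String[] args) throws Exception {
        // Create a unique, node-local temp directory for this JVM.
        Path cacheDir = Files.createTempDirectory("javacpp-cache-");

        // Property name assumed from the Loader.java line linked above;
        // must be set before JavaCPP's Loader first extracts anything.
        System.setProperty("org.bytedeco.javacpp.cachedir", cacheDir.toString());

        System.out.println(System.getProperty("org.bytedeco.javacpp.cachedir"));
    }
}
```

On Spark, the same property could be passed per executor via `spark.executor.extraJavaOptions` (e.g. `-Dorg.bytedeco.javacpp.cachedir=/tmp/javacpp-cache`) so it takes effect before any user code runs.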

To confirm, so you already made the change in version `1.5.8-SNAPSHOT`?

Yeah, that is very likely. This only fails on our Spark cluster, where I think the home directory is NFS-mounted (I need to confirm this, though). And as mentioned...