David Arroyo Cazorla
Changes in `dist/server-application.conf` and `server/../server-reference.conf` are required; the relevant setting is `#crossdata-server.config.spark.jars = "/opt/sds/crossdata/lib/crossdata-server_${scala.binary.version}-${project.version}-jar-with-dependencies.jar"`
Hi, the new approach that @pfcoperez is referring to will be implemented for 0.12.X ASAP. In the meantime, we have added a PR (#133) to solve the issue.
Hi @jaminglam, #133 has already been merged.
Hi @gohilankit There are several ways to specify filters; the easiest one is just using the [filter function](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.filter). Thus, you should do something like `df.filter(df.age > 3).collect()`. The filter would...
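For illustration, the same filter written against the Scala DataFrame API is sketched below; `df` and the `age` column are placeholders rather than names from the original question:
```
// Minimal sketch: 'df' is any Spark DataFrame (for instance, one loaded through
// the Spark-MongoDB data source); 'age' is a placeholder column name.
val adults = df.filter(df("age") > 3)
adults.collect()
```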
Hi @JohnCunningham, you are right: the Spark-MongoDB connector is not compatible with MongoDB 3.2 yet. Currently, we are using Casbah 2.8.0, which relies on java-driver 2.13 ([compatibility matrix](https://docs.mongodb.com/ecosystem/drivers/driver-compatibility-reference/#java-driver-compatibility)). We'd like...
Hi @NiranjanMudhiraj @lizengfa You can create a MongodbDataFrame and then save it, indicating the config. This is the source code (preview truncated):
```
class MongodbDataFrame(dataFrame: DataFrame) extends Serializable {
  /**
   * It...
```
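A minimal usage sketch of that approach follows. The import paths, `MongodbConfigBuilder`, and the host/database/collection values are assumptions based on recent connector versions, not taken from the comment above; check the README of the version you use:
```
import com.stratio.datasource.mongodb._
import com.stratio.datasource.mongodb.config._
import com.stratio.datasource.mongodb.config.MongodbConfig._

// Build the target configuration (host, database and collection are placeholders).
val saveConfig = MongodbConfigBuilder(Map(
  Host       -> List("localhost:27017"),
  Database   -> "mydb",
  Collection -> "mycollection"
)).build()

// The implicit MongodbDataFrame wrapper adds saveToMongodb to any DataFrame.
dataFrame.saveToMongodb(saveConfig)
```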