Jags Ramnarayan

19 comments by Jags Ramnarayan

If you aren't doing a perf benchmark/test, you could try turning off whole-stage code generation using `snappySession.conf.set("spark.sql.codegen.wholeStage", false)`, or via SQL with `set spark.sql.codegen.wholeStage=false`, until we resolve this.
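Both forms of the workaround can be sketched as below (a sketch, not verified against a specific SnappyData release; `snappySession` is assumed to be an existing `SnappySession` connected to a running cluster):

```scala
// Programmatic form: disable whole-stage code generation for this session only.
// Useful when chasing a codegen bug; re-enable it for performance runs.
snappySession.conf.set("spark.sql.codegen.wholeStage", false)

// SQL form, equivalent to the above:
snappySession.sql("set spark.sql.codegen.wholeStage=false")
```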

Use something like this: `create external table xxx using CSV options(delimiter '..', path '....')`. For the precise syntax, search for how CSV data loading is done in Spark.
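A fuller version of that DDL might look like the following (the table name, delimiter, and path are hypothetical placeholders; check the Spark CSV data source documentation for the exact option names supported by your version):

```sql
-- Hypothetical example: register a CSV file as an external table.
CREATE EXTERNAL TABLE staging_orders
USING csv
OPTIONS (
  delimiter ',',           -- field separator used in the file
  header 'true',           -- first line holds column names
  path '/data/orders.csv'  -- location of the CSV data (illustrative)
);
```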

Looks like you are using the SparkSession instance (`spark.sql`)? Create a SnappySession and execute the same statement with it.
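A minimal sketch of the suggested change, assuming a running SnappyData cluster, an existing `spark` (SparkSession) handle, and a hypothetical table `my_table`:

```scala
import org.apache.spark.sql.SnappySession

// Wrap the existing SparkContext in a SnappySession. SnappyData-specific
// SQL (e.g. column/row tables) should go through this session rather than
// the plain spark.sql entry point.
val snappy = new SnappySession(spark.sparkContext)
snappy.sql("select count(*) from my_table").show()
```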

Not yet.

> On Jun 4, 2019, at 12:40 AM, foxgarden wrote:
>
> @piercelamb
> Does release 1.1 support Spark 2.3? The document just mentioned...

Look for the SSL configuration in the documentation and you should be able to configure mutual authentication. At some point, we will support basic auth (user/password) for all users.

K8s. Here is the early [documentation and deploy info](https://github.com/SnappyDataInc/spark-on-k8s/tree/master/charts/snappydata). Eager to get some feedback ....

We think k8s is more extensible, has a big ecosystem, and works well for apps (microservices), not just data-driven apps. And, today, you can launch a k8s cluster...

Yes, you are correct: no need for HDFS. I am not familiar with Portworx, but presumably it can be mounted as a persistent volume in k8s. If so, yes, SnappyData...

By default we dynamically provision a persistent volume (PV) and a claim, and bind this to the Snappy pods. Essentially, SnappyData uses the /work sub-directory to store all its data/catalog files. And,...
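The dynamically provisioned claim described above is roughly equivalent to a PVC like the following (a generic Kubernetes sketch; the name, access mode, and size are illustrative and not taken from the actual chart):

```yaml
# Illustrative PVC of the kind the chart provisions for each Snappy pod;
# the pod mounts the bound volume and writes data/catalog files under /work.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: snappydata-server-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```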

Not sure how these options work without the leading hyphen (member-timeout, memory-size) ....

> -critical-heap-percentage=90 -eviction-heap-percentage=81 member-timeout=60000 memory-size=50g
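For reference, in a SnappyData conf file (e.g. conf/servers) the per-member options are normally written with a leading hyphen after the hostname, along these lines (hostname and values are illustrative; consult the SnappyData configuration docs for the exact property names):

```
localhost -critical-heap-percentage=90 -eviction-heap-percentage=81 -member-timeout=60000 -memory-size=50g
```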