Oleksii Diagiliev

Results 87 comments of Oleksii Diagiliev

You can also build it with Docker:
```
docker run -it --rm --name my-project -v "$(pwd)":/root -w /root adoptopenjdk/maven-openjdk8:latest mvn clean package -DskipTests
```

Hi @redis0303, please find the documentation for Java here:
https://github.com/RedisLabs/spark-redis/blob/master/doc/java.md
https://github.com/RedisLabs/spark-redis/blob/master/doc/configuration.md

Hi @avirats, you should use either the 'table' or the 'keys.pattern' option. Can you please share the code that throws an error when only 'keys.pattern' is set?

I see. Right, when reading by keys pattern you should either specify the schema explicitly or use the 'infer.schema' option. If neither is set, the error message is misleading.
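For illustration, a minimal sketch of both approaches, assuming a hypothetical `person:*` key pattern with `name` and `age` fields:
```
import org.apache.spark.sql.types._

// Option 1: let spark-redis infer the schema from the stored hashes
val inferred = spark.read
  .format("org.apache.spark.sql.redis")
  .option("keys.pattern", "person:*")
  .option("infer.schema", true)
  .load()

// Option 2: provide the schema explicitly
val explicit = spark.read
  .format("org.apache.spark.sql.redis")
  .option("keys.pattern", "person:*")
  .schema(StructType(Seq(StructField("name", StringType), StructField("age", IntegerType))))
  .load()
```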

Hi @jaky0515, I'm not sure I understood the issue. Can you load the data with two calls, first with `*~*` and then with `*~*~*`?
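For illustration, a minimal sketch of the two-call idea, assuming plain key/value reads via `sc.fromRedisKV` (the actual read API used here isn't shown in the comment):
```
import com.redislabs.provider.redis._

// Load each key pattern with a separate call
val first  = sc.fromRedisKV("*~*")
val second = sc.fromRedisKV("*~*~*")
```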

Hi @terrynice, how do you run the application? Do you submit it to the cluster or run it through some notebook (e.g. Zeppelin)?

@justinrmiller, I agree this might be valuable even though Spark has its own HLL implementation. This is quite an old PR and was probably overlooked.

`sc.fromRedisHash` doesn't support limiting the returned fields. This is possible with [DataFrames support](https://github.com/RedisLabs/spark-redis/blob/master/doc/dataframe.md#reading). You will need to provide a schema with the list of columns you would like to read, e.g. as sketched below.
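A minimal sketch, assuming hashes stored under a hypothetical `person:*` pattern where only the `name` and `age` fields should be read:
```
import org.apache.spark.sql.types._

// Only the columns listed in the schema are read
val schema = StructType(Seq(
  StructField("name", StringType),
  StructField("age", IntegerType)
))

val df = spark.read
  .format("org.apache.spark.sql.redis")
  .schema(schema)
  .option("keys.pattern", "person:*")
  .load()
```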

Sorry for the late response. It is not possible to load only the data that matches your RDD of names, but you can load all the data and then join it with your RDD of names.
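A minimal sketch of the load-then-join approach, assuming string values stored under a hypothetical `user:<name>` key layout:
```
import com.redislabs.provider.redis._
import org.apache.spark.rdd.RDD

// Hypothetical RDD of names to filter by
val names: RDD[String] = sc.parallelize(Seq("alice", "bob"))

// Load all key/value pairs for the assumed key pattern
val all: RDD[(String, String)] = sc.fromRedisKV("user:*")

// Join on the name extracted from the key, keeping only entries for the given names
val matched = all
  .map { case (key, value) => (key.stripPrefix("user:"), value) }
  .join(names.map(name => (name, ())))
  .map { case (name, (value, _)) => (name, value) }
```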

@vimal3271, yeah, it makes sense to implement some generic function to execute an arbitrary operation against Redis, so in your use case it would be used like: ``` rdd.mapWithRedis { case(name,...
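Since `mapWithRedis` is only a proposed API, here is a minimal sketch of the same idea with plain `mapPartitions` and a Jedis connection (the host, port, key names and the RDD's element type are assumptions):
```
import redis.clients.jedis.Jedis

val enriched = rdd.mapPartitions { iter =>
  // One connection per partition; connection details are assumptions
  val jedis = new Jedis("localhost", 6379)
  val result = iter.map { case (name, value) =>
    // Execute an arbitrary Redis command per element, e.g. fetch a related value
    (name, value, jedis.get(s"counter:$name"))
  }.toList // materialize before closing the connection
  jedis.close()
  result.iterator
}
```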