raj-blis
@fe2s Apologies for the delay in getting back. Here is the Scala code that attempts to write a byte array:

```scala
import org.apache.spark.sql.types._
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.Row
...
```
@fe2s - Hi! Do you see anything wrong with the above code snippet?
Hi @fe2s, The input dataframe column is a string of 32 hexadecimal characters. By saving it in binary format (i.e. every 2 hexadecimal characters packed into the corresponding byte) - I...
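For reference, the 2-hex-characters-per-byte packing described above can be sketched in plain Scala (the `hexToBytes` name is illustrative, not part of spark-redis):

```scala
// Pack a hex string into binary: every 2 hex characters become one byte,
// so a 32-character hex string becomes a 16-byte array.
def hexToBytes(hex: String): Array[Byte] =
  hex.grouped(2).map(Integer.parseInt(_, 16).toByte).toArray

val bytes = hexToBytes("ABCD22A54B2E4368A0653A47897E50B0")
// bytes.length is 16; bytes(0) is 0xAB.toByte
```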
Hi @fe2s, For my use-case, I am writing data using Spark-Redis and querying through a C++ application. For the given input - `ABCD22A54B2E4368A0653A47897E50B0`, here is a TCP dump of the...
Hi @fe2s , Thank you for your patience with this matter. Do you have any idea about how much work is involved and how long it will take? We need...
Hi @fe2s That sounds like a good plan. Did you want to support Array[Byte] in the HashRedisPersistence.save method in the master branch, or is this something we should be doing...
@fe2s Got it. The RDD should work too. Note that in my case, the key is an Array[Byte] and the value is a list of Array[Byte]. Can that be supported?
@fe2s That sounds good. I will work on using RDD within our code. Thanks for your help with this. Much appreciated.
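To make the agreed-upon RDD approach concrete, here is a minimal sketch of the data shape discussed above (an Array[Byte] key mapped to a list of Array[Byte] values). The Jedis call shown in the trailing comment is one plausible way to push such pairs from each partition using Jedis's binary overloads; it is an assumption for illustration, not spark-redis's own API:

```scala
// Hedged sketch: binary key -> list of binary values, matching the
// use-case in this thread. Names and values here are illustrative.
def hexToBytes(hex: String): Array[Byte] =
  hex.grouped(2).map(Integer.parseInt(_, 16).toByte).toArray

val pairs: Seq[(Array[Byte], List[Array[Byte]])] = Seq(
  hexToBytes("ABCD22A54B2E4368A0653A47897E50B0") ->
    List(hexToBytes("0102"), hexToBytes("0304"))
)

// Inside a Spark job, each partition could open its own connection and
// push the values with Jedis's binary rpush overload, e.g.:
//   rdd.foreachPartition { it =>
//     val jedis = new redis.clients.jedis.Jedis("localhost", 6379)
//     it.foreach { case (k, vs) => jedis.rpush(k, vs: _*) }
//     jedis.close()
//   }
```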
Hi @fe2s I have verified that this fix works and the data is stored and retrieved correctly in binary format. Thank you for patiently going through this and coming up...