success-m

6 comments by success-m

@rae89 - I went with "LATEST". It seems to work fine now, but we need to make sure the checkpoints don't go away.

@BreakpointsCA - Yes, this is the expected behavior for LATEST. However, the checkpoints are a life-saver: even if the Spark cluster fails, data ingestion resumes from the point...
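For context, a minimal sketch of the setup being discussed, assuming option names along the lines of the kinesis-sql connector (spellings vary between versions and forks); the stream name, endpoint URL, and checkpoint path below are placeholders, not values from the thread:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kinesis-latest").getOrCreate()

// "LATEST" reads only records that arrive after the query starts; the
// checkpoint is what lets a restarted query resume where it left off
// instead of skipping back to the tip of the stream.
val df = spark.readStream
  .format("kinesis")
  .option("streamName", "my-stream")                                // placeholder
  .option("endpointUrl", "https://kinesis.us-east-1.amazonaws.com") // placeholder
  .option("startingPosition", "LATEST")                             // assumed option spelling
  .load()

df.writeStream
  .format("console")
  .option("checkpointLocation", "s3://my-bucket/checkpoints/kinesis-latest") // placeholder path
  .start()
  .awaitTermination()
```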

This seems to work for me:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.streaming.Trigger

df.writeStream
  .trigger(Trigger.ProcessingTime(interval))
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    // Transform and write batchDF
    batchDF.persist()
    // some transformation
    batchDF.unpersist()
    () // for scala v 2.12...
  }
```
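Two details in that snippet are worth noting: persisting `batchDF` keeps Spark from recomputing the batch when it is used more than once inside `foreachBatch` (e.g. written to multiple sinks), and the trailing `()` makes the closure return `Unit`, which helps the Scala 2.12 compiler resolve the `foreachBatch` overload ambiguity.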

@roncemer - Just saw this. Great to know that you are maintaining this now. :)

@roncemer - Any idea why this is happening? My initial guess is Kinesis re-sharding, so I have added the option `.option("kinesis.client.describeShardInterval", "500ms")`, but I don't know if this will fix...
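For illustration, here is where that option would sit in the reader configuration — a sketch reusing the hypothetical source options from the earlier example, with only the quoted option taken from the comment itself:

```scala
// describeShardInterval controls how often the source re-lists the stream's
// shards; a short interval should pick up re-sharding sooner. The stream
// name is a placeholder.
val resharded = spark.readStream
  .format("kinesis")
  .option("streamName", "my-stream")                       // placeholder
  .option("kinesis.client.describeShardInterval", "500ms") // quoted from the comment above
  .load()
```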

@roncemer - I don't have any changes that need to be pushed yet. But yeah, please do add me in; I would like to contribute to the library.