kinesis
putRecord is not emitted after splits and merges
I have a stream where I've split and merged shards a few times, and I have an app pushing data to it using this package (which is awesome, btw). I track the size of the send queue by counting the number of pushes into the stream and subtracting the number of times I've received putRecord from it, but after I did my splitting and merging that stopped working.
The problem seems to start on line 73, where the list of shards is cached forever. Then, on line 228, it only emits putRecord if the ShardId in the response data matches one of the known shards. If you've split all of the shards that existed when the app started running, that condition will never be met.
I can think of two solutions, and I'd gladly submit a pull request for either of them if you gave me your preference. One would be to invalidate the shard cache whenever we receive a response indicating a shard that's not in the list. The other would be to move the emit to line 220, so it gets called regardless of whether the shard exists in the list.
For this particular problem, I don't see the harm in doing approach #2, but I would guess that other problems could show up as a result of the cache being incorrect.
Any thoughts?
Yeah, I haven't gotten around to dealing with splits and merges mid-stream yet - thanks for raising this.
I was considering whether the shard cache should be refreshed on a regular basis, although refreshing it when you receive a response indicating a shard that isn't in the list seems like a nice way of doing this.
I think I prefer that to just emitting regardless, because that way the shard list stays mostly up to date.
Cool, I just sent a pull request that emits the message regardless. This will solve my problem, so I'm happy with it, though I'd be happy to help make the code more flexible if you have a specific direction you want to take that.
I guess I won't close it until the pull request is actually merged. : )
Ok cool - yeah, leave this open and when I get a chance to play around with something that's a bit more robust I'll close it (and the PR)
Glad that that solution works for you at least!