Dropping last message or two before new parquet writer is created
We are using the AvroMessageParser and AvroParquetFileReaderWriterFactory and have noticed that a very small number of messages are being dropped. On further investigation, the offsets of the dropped messages are the offset right before (or sometimes two before) the starting offset of one of the files written to S3.
Ex: if one of the files on S3 is named 1_1_00000000002329440769.gz.parquet (which I take to mean that the first record in that file is from partition 1 at offset 2329440769), then the dropped data was at offset 2329440768.
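For concreteness, here is how I'm decoding the file name; the `<generation>_<partition>_<firstOffset>` layout is my reading of the example above, not something I've confirmed in the code:

```java
// Hypothetical helper illustrating my reading of the file name layout:
// <generation>_<kafkaPartition>_<zero-padded first offset>.<ext>
public final class SecorFileName {
    public static void main(String[] args) {
        String name = "1_1_00000000002329440769.gz.parquet";
        String base = name.substring(0, name.indexOf('.'));   // strip ".gz.parquet"
        String[] parts = base.split("_");
        long generation = Long.parseLong(parts[0]);
        int kafkaPartition = Integer.parseInt(parts[1]);
        long firstOffset = Long.parseLong(parts[2]);
        System.out.printf("generation=%d partition=%d firstOffset=%d%n",
                generation, kafkaPartition, firstOffset);
        // The dropped message sits at firstOffset - 1, i.e. at the tail of
        // the offset range that should have gone into the previous file.
        System.out.println("expected in previous file, offset " + (firstOffset - 1));
    }
}
```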
The previous file, where I would have expected that record to land, is well under our max file size param, so I think it is getting finalized/written because it hit the max file age.
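In other words, my mental model of the two upload triggers is something like the sketch below; the names here are hypothetical, not Secor's actual Uploader code:

```java
// A minimal sketch of the size/age upload triggers as I understand them.
public final class UploadPolicySketch {
    static boolean shouldUpload(long fileSizeBytes, long maxFileSizeBytes,
                                long lastModifiedMillis, long maxFileAgeSeconds) {
        long ageSeconds = (System.currentTimeMillis() - lastModifiedMillis) / 1000;
        // Either trigger finalizes the file: a file well under the size limit
        // can still be uploaded once it hits the age limit, which matches
        // what we are seeing.
        return fileSizeBytes >= maxFileSizeBytes || ageSeconds >= maxFileAgeSeconds;
    }
}
```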
I will try to investigate more and see if I can write a unit test and figure out what is going on. If it turns out this is somehow related to our setup/config I'll add more detail here.
We are running a fairly recent version we built off master: https://github.com/pinterest/secor/commit/359c8b8863248e7870350ea351bf4a0bb8118ebf
Thanks, Jeremy
Do you see a consumer group rebalance during those times? There should be some log messages indicating a rebalance was happening; that is usually a window of time where edge-case bugs can show up.
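If you don't see such lines, one way to make the rebalance window obvious is the stock Kafka listener pattern below; this is just the generic consumer API for illustration, and Secor's own wiring and log messages may differ:

```java
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// Generic rebalance logging, shown only to illustrate the kind of log
// lines to look for around the times the messages went missing.
public class LoggingRebalanceListener implements ConsumerRebalanceListener {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Messages in flight for revoked partitions are the usual edge case.
        System.out.println("Rebalance: partitions revoked " + partitions);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        System.out.println("Rebalance: partitions assigned " + partitions);
    }
}
```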
Also, can you use the SequenceFileReaderWriterFactory during debugging? It's much easier to debug by looking at the sequence files, since records are stored in the sequential order they came in.
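Something like this quick dump works for eyeballing the offsets in arrival order; I'm assuming LongWritable offset keys and BytesWritable message values here, so adjust if your layout differs:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;

// Dumps a local sequence file so you can check for gaps in the offsets.
public class SeqDump {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        try (SequenceFile.Reader reader = new SequenceFile.Reader(
                conf, SequenceFile.Reader.file(new Path(args[0])))) {
            LongWritable key = new LongWritable();
            BytesWritable value = new BytesWritable();
            while (reader.next(key, value)) {
                System.out.println("offset=" + key.get()
                        + " bytes=" + value.getLength());
            }
        }
    }
}
```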
The other possibility is that the parquet file is not flushed to disk before the S3 or HDFS upload starts; take a look at the AvroParquetReaderWriter class to see whether close() and flush() are called on all edge paths.
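The failure mode would look like the sketch below: ParquetWriter only writes its footer on close(), so any path that uploads before close() ships a file missing its last records. The wrapper is a hypothetical illustration, not the actual Secor class:

```java
import java.io.IOException;
import org.apache.parquet.hadoop.ParquetWriter;

// Hypothetical wrapper showing the invariant to check for: every
// upload/trim/shutdown path must close the writer before the local
// file is handed to the S3/HDFS uploader.
public class SafeParquetFileWriter<T> {
    private final ParquetWriter<T> writer;
    private boolean closed = false;

    public SafeParquetFileWriter(ParquetWriter<T> writer) {
        this.writer = writer;
    }

    public void write(T record) throws IOException {
        writer.write(record);
    }

    // Idempotent close; flushes buffered row groups and writes the footer.
    public synchronized void close() throws IOException {
        if (!closed) {
            writer.close();
            closed = true;
        }
    }
}
```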
Thanks for the tips on how to troubleshoot. I'll let you know what I find. And if there is an apparent fix I'll send a PR your way.