[WIP] AMQ-??: add a guard on the record length read from a corrupted file
Can you mark PRs as "draft" for work-in-progress stuff? It makes it easier to know when a PR is ready for review. Also, KahaDB already has a max journal length setting, so I was wondering if that could be used; it may not be usable here, but it has been a while since I looked. I think if a record is bigger than the max length, KahaDB will just write the entire value anyway and the file will end up larger than the configured max. So we may need to enforce a hard cap or something. Something else to keep in mind: the journal max file length can be changed between restarts, so you could have files of different sizes.
@cshannon sorry about that. Sure thing, I'll be more diligent. I discovered this while debugging a test that was randomly failing with an out-of-memory error.
Yes, KahaDB already has a max file length, and the idea is to use it to cap the record length. If the file is corrupted in a way that makes the record length field very large, we may blow up the heap when allocating the read buffer. The idea here is to cap the record size at the configured max file size.
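For illustration, here's a minimal sketch of the kind of guard I have in mind. The class, method, and parameter names are all hypothetical, not the actual KahaDB recovery code; the point is just to fail fast on an implausible length instead of attempting the allocation:

```java
import java.io.IOException;

/** Hypothetical sketch of the guard; not the actual KahaDB recovery code. */
final class RecordLengthGuard {

    /**
     * Validates a record length read from disk before a buffer is
     * allocated for it. maxFileLength would come from the configured
     * journal max file length.
     */
    static void checkRecordLength(int recordLength, int maxFileLength) throws IOException {
        if (recordLength < 0 || recordLength > maxFileLength) {
            // A corrupted length field would otherwise drive a huge
            // allocation (new byte[recordLength]) and an OOM.
            throw new IOException("Invalid record length " + recordLength
                    + "; journal max file length is " + maxFileLength);
        }
    }
}
```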
If your assumption is correct, then my fix won't work: it would reject a record that was accurately written to disk in a file bigger than the configured max. We still need a cap on the record size in my opinion, but I'm not sure what the best approach is at the minute.
I think a multiplier on the 'journalMaxFileLength' would work.
edit: Yeah, the fact that the journalMaxFileLength can be changed between restarts is a challenge.
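To make the multiplier idea concrete, a rough sketch; the multiplier value and all names here are invented for illustration, and it still wouldn't cover files written under an even larger past setting:

```java
/** Hypothetical sketch; the multiplier value and names are made up. */
final class RecordLengthPlausibilityCheck {

    // Allow records up to N times the currently configured max file
    // length, to tolerate files written under a larger historical setting.
    static final int MAX_RECORD_MULTIPLIER = 4; // illustrative value only

    static boolean isPlausibleRecordLength(int recordLength, int journalMaxFileLength) {
        // Widen to long so the cap can't overflow int arithmetic.
        long cap = (long) journalMaxFileLength * MAX_RECORD_MULTIPLIER;
        return recordLength >= 0 && recordLength <= cap;
    }
}
```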