fix #13333: better handling when source file is repeated in compile_commands.json
We should have opt-in/opt-out CLI options so this behavior can be overridden. And it should probably be disabled by default (I feel like there was already a discussion where I was flip-flopping on this).
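A minimal sketch of what such a toggle could look like. The flag names (`--deduplicate`/`--no-deduplicate`) and the settings field are hypothetical, not existing cppcheck options; the default is off, as suggested above:

```cpp
#include <cstring>

struct Settings {
    // Hypothetical field; the behavior is disabled by default.
    bool deduplicateFileSettings = false;
};

// Hypothetical fragment of a CLI parsing loop.
bool parseDedupOption(Settings &settings, const char *arg) {
    if (std::strcmp(arg, "--deduplicate") == 0)          // opt-in
        settings.deduplicateFileSettings = true;
    else if (std::strcmp(arg, "--no-deduplicate") == 0)  // opt-out
        settings.deduplicateFileSettings = false;
    else
        return false; // not handled here
    return true;
}
```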
We should also somehow inform the user when a file is omitted because it is a duplicate (within --check-config?). This is something I need to do in a different scope but haven't gotten to it yet.
Sorry, I overlooked that there is already a message.
And some Python tests for this need to be added.
Something for a future improvement.
Some of the settings could be sanitized so they do not compare as different when only their order differs. Please do not do this for the defines because they need to be reworked from the bottom up first (simplecpp et al.) and that will probably implicitly fix this.
The hash should be dependent on the order of defines/undefs now.
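As a rough illustration of that distinction, here is a sketch with a hypothetical, trimmed-down FileSettings and a boost-style hash combiner: order-insensitive fields (include paths here, as an assumed example) are sorted before hashing, while defines/undefs are hashed in their original order, so reordering those still yields a different hash:

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Hypothetical, trimmed-down stand-in for cppcheck's FileSettings.
struct FileSettings {
    std::string path;
    std::vector<std::string> includePaths; // order does not matter semantically
    std::string defines;                   // order DOES matter (simplecpp)
    std::vector<std::string> undefs;       // order DOES matter
};

static void hashCombine(std::size_t &seed, std::size_t value) {
    // Standard boost-style hash combiner.
    seed ^= value + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}

std::size_t hashFileSettings(const FileSettings &fs) {
    const std::hash<std::string> h;
    std::size_t seed = h(fs.path);

    // Normalize order-insensitive settings so a mere reordering
    // does not yield a different hash.
    std::vector<std::string> includes = fs.includePaths;
    std::sort(includes.begin(), includes.end());
    for (const std::string &inc : includes)
        hashCombine(seed, h(inc));

    // Defines/undefs are hashed as-is: their order stays significant
    // until the simplecpp handling is reworked.
    hashCombine(seed, h(fs.defines));
    for (const std::string &u : fs.undefs)
        hashCombine(seed, h(u));
    return seed;
}
```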
What's a scenario where you'd want it to be disabled?
Scenarios where it omits files from the analysis that it should not (i.e. bugs). The existing handling of duplicates in other areas has been lacking, so there should be a way to disable it. There will be more stuff added to FileSettings in the future.
So a flag to disable all de-duplication?
Just for this added logic, as the hash is made up of many parts.
The existing de-duplication is based on the (real) path being unique, so at worst it will miss a duplicate; it will never accidentally omit files it should not, because there is just a single value to compare.
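A sketch of that contrast, using hypothetical names rather than cppcheck's actual code: the existing de-duplication keys on a single canonical path per file, while the new logic keys on a hash assembled from many FileSettings parts, which is why only the latter warrants an off switch:

```cpp
#include <cstddef>
#include <filesystem>
#include <functional>
#include <set>
#include <string>
#include <vector>

// Hypothetical stand-in for cppcheck's FileSettings.
struct FileSettings {
    std::string path;
    std::string defines; // plus include paths, undefs, ...
};

// Existing approach (other areas): a single value is compared, so at
// worst a duplicate slips through; a file is never wrongly dropped.
std::vector<std::string> deduplicateByPath(const std::vector<std::string> &paths) {
    std::set<std::string> seen;
    std::vector<std::string> result;
    for (const std::string &p : paths) {
        // canonical() resolves symlinks/relative components ("real" path).
        if (seen.insert(std::filesystem::canonical(p).string()).second)
            result.push_back(p);
    }
    return result;
}

// New approach (repeated compile_commands.json entries): the key is a
// hash made up of many parts, so a bug in any of them could wrongly
// omit a file - hence the wish for a way to disable it.
std::vector<FileSettings> deduplicateBySettingsHash(const std::vector<FileSettings> &input) {
    const std::hash<std::string> h;
    std::set<std::size_t> seen;
    std::vector<FileSettings> result;
    for (const FileSettings &fs : input) {
        // Stand-in hash; a real one combines all parts (see earlier sketch).
        const std::size_t key = h(fs.path + '\0' + fs.defines);
        if (seen.insert(key).second)
            result.push_back(fs);
    }
    return result;
}
```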