Reduce the memory used for big maps
Hello @matlabbe, We created a pretty good map of a large area (multi-session mapping, 5 sessions). It ended up with >4 million words in the vocabulary, which led to rtabmap crashing every time it ran in localization mode for a few minutes, or when I called the backup service explicitly. I am aware that you already opened issue #1201; however, I am wondering if there is any intermediate step to reduce the number of words? Some parameters that may be relevant for you to check (set roughly as in the sketch after this list):
- Kp/MaxFeatures: 1500
- Vis/MaxFeatures: 1000
- FeatureType: GFTT/BRIEF
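For reference, a minimal sketch of how these values could be passed to the standalone rtabmap tool; the parameter names exist in rtabmap, the enum value 6 for GFTT/BRIEF follows the Kp/DetectorStrategy documentation, and map.db is a placeholder:

    # Sketch: start rtabmap with the parameters listed above
    rtabmap \
      --Kp/MaxFeatures 1500 \
      --Vis/MaxFeatures 1000 \
      --Kp/DetectorStrategy 6 \
      map.db
    # 6 = GFTT/BRIEF in the Kp/DetectorStrategy enum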
Let me know if I can provide more information to help with this. Thank you for your contribution, and I hope to receive your reply soon.
Also waiting for this critical feature. We cannot load the full vocabulary due to the huge amount of memory it needs to allocate, and we don't manage to get the same accuracy when activating WM/LTM.
@alexk1976 You can try setting Kp/NNStrategy to 0 or 2. With those strategies, the descriptors don't need to be uncompressed from binary to float to build the vocabulary, which saves 50% of memory usage; it comes with reduced nearest-neighbor search performance, of course. However, this feature is still needed.
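A minimal sketch of trying that suggestion, assuming the standalone rtabmap tool and a placeholder database name; per the Kp/NNStrategy parameter documentation, 0 = kNNFlannNaive and 2 = kNNFlannLSH, both of which search binary descriptors directly instead of converting them to float:

    # Sketch: run with a binary-friendly nearest-neighbor strategy
    # (map.db is a placeholder for your database)
    rtabmap --Kp/NNStrategy 2 map.db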
Here are some options:
- Kp/MaxFeatures: 1500 is high; the default is 500. You can fix your current database with: rtabmap-reprocess --Kp/MaxFeatures 500 input.db output.db
- You can try Kp/ByteToFloat: true to save some RAM (a sketch combining these two options follows this list).
- At the other extreme, we can use a fixed dictionary with fewer words (<1M): https://github.com/introlab/rtabmap/issues/942#issuecomment-1345657263
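A minimal sketch combining the first two options above, assuming the standalone rtabmap app accepts these parameters on the command line; input.db and output.db are placeholders:

    # 1) Rebuild the database with a smaller vocabulary (fewer features kept per image)
    rtabmap-reprocess --Kp/MaxFeatures 500 input.db output.db
    # 2) Run on the reprocessed database with byte-to-float conversion enabled
    #    (Kp/ByteToFloat converts each byte instead of each bit of a binary
    #    descriptor to float, shrinking the kd-tree index memory)
    rtabmap --Kp/ByteToFloat true output.db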
Thank you!
I don't think that's a real solution. With an area a bit bigger than ours it's impossible to load the dictionary, even when we set Kp/MaxFeatures=500. If we don't use the FLANN kd-tree, we have performance issues, and a fixed dictionary gives worse accuracy. We need a way to load the full graph but only part of the dictionary.
@alexk1976 Agreed, that is more or less covered by the other issue: https://github.com/introlab/rtabmap/issues/1201