Reason for `AsyncOdometryEstimation::run` using a polling loop
I'd like to use `AsyncOdometryEstimation::run` as a generic event loop to invoke custom callbacks enqueued from my custom odometry class. It appears that `AsyncOdometryEstimation::run` uses a polling loop that retrieves events at a fixed interval (1 ms). Since `ConcurrentVector<>` supports signaling via a condition variable, is there a specific reason for using a polling loop rather than block-waiting on `ConcurrentVector<>`?
If not, I suggest changing `AsyncOdometryEstimation::run` to a blocking loop that uses the signaling feature of `ConcurrentVector<>`. This would make the event loop more responsive and slightly reduce CPU consumption. The basic idea is as follows:
- Define work items to be handled by the event loop:
```cpp
class WorkItem {
public:
  virtual ~WorkItem() = default;
};

class ImuWorkItem : public WorkItem {
public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
  ImuWorkItem(const Eigen::Matrix<double, 7, 1>& imu_data) : data(imu_data) {}

public:
  Eigen::Matrix<double, 7, 1> data;
};

class PointsWorkItem : public WorkItem {
public:
  PointsWorkItem(const PreprocessedFrame::Ptr& frame) : data(frame) {}

public:
  PreprocessedFrame::Ptr data;
};

#ifdef GLIM_USE_OPENCV
class ImageWorkItem : public WorkItem {
public:
  ImageWorkItem(const std::pair<double, cv::Mat>& image) : data(image) {}

public:
  std::pair<double, cv::Mat> data;
};
#endif
```
- Use a single `ConcurrentVector<std::shared_ptr<WorkItem>>` as the queue for all work items rather than separate queues.
- Block wait on the work queue for work items and dispatch them into the corresponding per-type queues:
```cpp
auto work_items = work_queue.get_all_and_clear_wait();
if (work_items.empty()) {
  break;
}

for (const auto& item : work_items) {
  auto imu_item = std::dynamic_pointer_cast<ImuWorkItem>(item);
  if (imu_item) {
    imu_frames.push_back(imu_item->data);
    continue;
  }

  auto points_item = std::dynamic_pointer_cast<PointsWorkItem>(item);
  if (points_item) {
    new_raw_frames.push_back(points_item->data);
    continue;
  }

#ifdef GLIM_USE_OPENCV
  auto image_item = std::dynamic_pointer_cast<ImageWorkItem>(item);
  if (image_item) {
    images.push_back(image_item->data);
    continue;
  }
#endif
}
```
- Process each per-type queue as before:
```cpp
for (const auto& imu : imu_frames) {
  const double stamp = imu[0];
  const Eigen::Vector3d linear_acc = imu.block<3, 1>(1, 0);
  const Eigen::Vector3d angular_vel = imu.block<3, 1>(4, 0);
  odometry_estimation->insert_imu(stamp, linear_acc, angular_vel);
  last_imu_time = stamp;
}

// Same for point clouds and images
```
When I wrote the code, I was worried about the performance of `condition_variable` because IMU data can arrive at a very high frequency (~1000 Hz). I now think the cost of calling `notify_all` 1000 times per second should be trivial, and I plan to rewrite `AsyncOdometryEstimation` once I find spare time. Meanwhile, I still want to avoid unifying items under `WorkItem`, because it would require a heap allocation for each of the many tiny IMU items.