librealsense
How To Use IMU To Get The Change of Roll Angle of L515 When It Is Moving
| Required Info | |
|---|---|
| Camera Model | L515 |
| Firmware Version | 01.05.08.01 |
| Operating System & Version | Ubuntu 18.04 |
| Kernel Version (Linux Only) | 5.4.0-125-generic |
| Platform | PC |
| SDK Version | 2.50.0 |
| Language | C++ |
| Segment | Robot |
Issue Description
Hi! I'm new to IMUs and I have a question about how to use one to rotate my camera's coordinate system, in both 2D and 3D. My camera is an L515 mounted on a robot. When the robot is initialized and stationary (I assume the camera is horizontal at that point), I want to record a reference orientation from the direction of gravitational acceleration. After the robot moves, the camera will have a roll angle. I want to obtain the roll angle of the camera relative to the initial position through the IMU, and use it to rotate the camera's coordinate system back to the horizontal position. Can the L515 do this?
Hi @FeiSong123 It may not be practical to rotate the IMU coordinate system, as it is aligned with the depth coordinate system, as described at https://github.com/IntelRealSense/librealsense/issues/6456#issuecomment-633819091 and https://github.com/IntelRealSense/librealsense/issues/8249#issuecomment-768519918
The coordinate system of the D435i camera model applies to the L515 model too.
If you want to obtain the roll angle of the camera relative to the initial position, then perhaps you could store the original roll angle while the camera is horizontal, then subtract the original angle from the new one to find how far the roll has deflected from the original orientation (new roll - original roll = number of degrees moved through). You would then know how many degrees the camera has to move back through to return from the new position to the original one.
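As a sketch of that bookkeeping, assuming the roll is derived from the accelerometer's gravity vector (the function names and the axis pairing below are my assumptions, not SDK API; check rs-motion for the actual axis mapping on your model):

```cpp
#include <cmath>

// Conversion factor from radians to degrees.
static const double kRadToDeg = 180.0 / 3.14159265358979323846;

// Estimate roll (degrees) from the accelerometer's gravity vector.
// Which two accel axes span the roll plane depends on the camera's
// axis convention; this pairing is an assumption for illustration.
static double roll_from_accel(double ax, double ay)
{
    return std::atan2(ax, ay) * kRadToDeg;
}

// How far the camera has rolled away from a stored reference angle
// (new roll - original roll), wrapped into (-180, 180] degrees.
static double roll_delta_deg(double original_deg, double current_deg)
{
    double d = current_deg - original_deg;
    while (d > 180.0)   d -= 360.0;
    while (d <= -180.0) d += 360.0;
    return d;
}
```

Store `roll_from_accel(...)` once at startup, then call `roll_delta_deg(original, current)` on each new sample to get the correction angle.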
No, I don't want to rotate the IMU coordinate system; I want to rotate the color image and the xyz values.
The yaw value should correspond to the Y angle of the gyro, as shown in the C++ rs-motion SDK example program.
https://github.com/IntelRealSense/librealsense/blob/master/examples/motion/rs-motion.cpp#L139
I am not aware though of a method by which the IMU can be used to rotate RealSense color or depth images, unfortunately.
Somebody else asked a similar question on the OpenCV forum at the link below.
https://forum.opencv.org/t/using-imu-roll-pitch-and-yaw-data-with-opencv/7486
Sorry for my unclear statement. What I want is to obtain the change of roll angle through IMU, and then rotate the coordinate system of the data according to the roll angle.
Whilst you should certainly be able to calculate the change in roll angle, I do not know of a way to use that to rotate the coordinate system. Such a rotation would not normally be required with IMU data, as IMU data is not a visible image like depth or color that needs to be oriented the 'right way up'.
I now know how to rotate the coordinate system of the data; what I don't know is the change of roll angle, which I want to obtain by enabling the GYRO and ACCEL streams.
If you want to know the difference between the original and final roll angle then that could be done by storing the original angle and new angle in separate variables and subtracting the original angle from the new one.
If you needed to know the change of angle over time then you could potentially convert the gyro's radians per second (rad/sec) to degrees per second (deg/sec) to get the angular velocity, as described at https://github.com/IntelRealSense/librealsense/issues/3107
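A minimal sketch of that conversion and integration step (the function name is my own):

```cpp
#include <cmath>

// Accumulate a roll angle by converting a gyro sample from rad/s to
// deg/s and integrating it over the time elapsed since the previous
// sample (dt_s, in seconds).
static double integrate_roll_deg(double angle_deg, double gyro_rad_s, double dt_s)
{
    const double kRadToDeg = 180.0 / 3.14159265358979323846;
    double gyro_deg_s = gyro_rad_s * kRadToDeg; // rad/s -> deg/s
    return angle_deg + gyro_deg_s * dt_s;       // integrate: deg/s * s = deg
}
```

Note that pure gyro integration drifts over time, which is why rs-motion also folds in the accelerometer reading.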
Do I need to obtain the original and final roll angle the way rs-motion.cpp does? And does that require building librealsense from source? I installed it via apt-get as described at https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md but cannot find the #include "example.hpp" and #include "d435.h" headers that rs-motion.cpp includes.
You could get the original and final roll angle by using rs-motion's gyro_angle.z = gyro_data.z; at the start position of the camera and store it in the variable for the 'original' value, and then repeat gyro_angle.z = gyro_data.z; again when the camera has reached its final position and store the result in a different variable for the 'final' value.
The source code version of librealsense is required if you want to build the SDK example programs such as rs-motion yourself using CMake.
The SDK examples such as rs-motion are built in a different way to how you would build a program that you have created yourself, as described at https://github.com/IntelRealSense/librealsense/issues/10715#issuecomment-1193351569
Thank you for your guidance! I have another problem: I want to enable four streams and process them in two threads, with color and depth in the main thread and gyro and accel in a sub-thread. It seems that the frame rate of the IMU is limited by the other two streams. What should I do?
In C++ the recommended approach to implementing this would be to use callbacks, as in the SDK example program rs-callback:
https://github.com/IntelRealSense/librealsense/tree/master/examples/callback
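For context, the callback pattern can be exercised without a camera at all. The toy sketch below (plain std::thread standing in for the SDK's sensor threads; all names are my own) shows why any shared state such as the counters map must be guarded by a mutex when the same callback fires from several threads at once:

```cpp
#include <map>
#include <mutex>
#include <thread>
#include <vector>

// Toy model of the rs-callback threading pattern: each "sensor" thread
// invokes the same callback many times concurrently, so the shared
// counters map is only touched while holding the lock.
static std::map<int, int> run_fake_sensors(int sensors, int frames_per_sensor)
{
    std::map<int, int> counters;
    std::mutex m;

    auto callback = [&](int stream_id) {
        std::lock_guard<std::mutex> lock(m); // callbacks may run concurrently
        counters[stream_id]++;
    };

    std::vector<std::thread> threads;
    for (int s = 0; s < sensors; ++s)
        threads.emplace_back([&, s] {
            for (int f = 0; f < frames_per_sensor; ++f)
                callback(s);
        });
    for (auto& t : threads)
        t.join();
    return counters;
}
```

Without the lock, concurrent `counters[stream_id]++` calls would be a data race; with it, every fake frame is counted exactly once.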
Thanks for your help; I can use them via rs-callback. I have a few questions about it.
Here is my general code:

```cpp
cfg.disable_all_streams();
cfg.enable_stream(RS2_STREAM_DEPTH, IMAGE_WIDTH, IMAGE_HEIGHT, RS2_FORMAT_Z16, 30);
cfg.enable_stream(RS2_STREAM_COLOR, IMAGE_WIDTH, IMAGE_HEIGHT, RS2_FORMAT_BGR8, 30);
// cfg.enable_stream(RS2_STREAM_POSE, RS2_FORMAT_6DOF, 30);
cfg.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F, 400);
cfg.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F, 400);

// Define frame callback
// The callback is executed on a sensor thread and can be called simultaneously from multiple sensors
// Therefore any modification to common memory should be done under lock
auto callback = [&](const rs2::frame& frame)
{
    std::lock_guard<std::mutex> lock(uptateMutex);
    if (rs2::frameset fs = frame.as<rs2::frameset>())
    {
        for (const rs2::frame& f : fs)
            counters[f.get_profile().unique_id()]++;
        // Handle image
        TickMeter tk;
        tk.start();
        // processing data
        Update(fs);
        showImage();
        tk.stop();
        cout << "total time:" << tk.getTimeMilli() << endl;
        tk.reset();
    }
    else if (rs2::motion_frame motionFrame = frame.as<rs2::motion_frame>())
    {
        if (motionFrame.get_profile().stream_name() == "Gyro")
        {
            // Get the timestamp of the current frame
            double ts = motionFrame.get_timestamp();
            // Handle Gyro data
            rs2_vector gyroData = motionFrame.get_motion_data();
            processGyro(gyroData, ts);
            // cout << "gyroData:" << gyroData.x << "\t" << gyroData.y << "\t" << gyroData.z << endl;
            counters[frame.get_profile().unique_id()]++;
        }
        else if (motionFrame.get_profile().stream_name() == "Accel")
        {
            // Handle Accel data
            rs2_vector accelData = motionFrame.get_motion_data();
            processAccel(accelData);
            // cout << "accelData:" << accelData.x << "\t" << accelData.y << "\t" << accelData.z << endl;
            counters[frame.get_profile().unique_id()]++;
        }
    }
};

while (true)
{
    // std::this_thread::sleep_for(std::chrono::microseconds(2500));
    // std::lock_guard<std::mutex> lock(uptateMutex);
    // std::cout << std::endl;
    // for (auto p : counters)
    // {
    //     std::cout << stream_names[p.first] << "[" << p.first << "]: " << p.second << " [frames] || ";
    // }
}
```
I have a few questions about it. The callback has opened at least two threads, and with the while loop below there are at least three threads, right? Are the depth and color streams synchronized and stable at 30 FPS? Do I still need operations such as disabling auto-exposure priority, and how do I do that? I don't know whether the IMU's two streams are too fast; the thread updating the angle alongside the depth and color streams is very slow.
The default pair of values for the IMU streams (on the 400 Series cameras) is 200 for the gyro and 63 for the accel. A faster frequency pairing that is also supported is 400 for the gyro and 250 for the accel.
Each IMU data packet is timestamped using the depth sensor hardware clock to allow temporal synchronization between gyro, accel and depth frames.
In your full script, on the pipeline start line, have you placed the word callback in the brackets so that the script knows callbacks should be used, as in the start line of rs-callback?
https://github.com/IntelRealSense/librealsense/blob/master/examples/callback/rs-callback.cpp#L46
I would certainly recommend trying disabling auto-exposure priority. There is C++ code for doing so at https://github.com/IntelRealSense/librealsense/issues/5290 - remember to set the sensor index number to '1' so that the RGB sensor is accessed instead of the depth sensor (which has an index of '0'), as auto-exposure priority is an RGB option and not a depth option.
I do not know how many threads there should be.
Yes, I have started the callback like this:

```cpp
rs2::pipeline_profile profiles = pipe.start(cfg, callback);
```
When I configure it as:

```cpp
cfg.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F, 400);
cfg.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F, 250);
```

it reports this error:

```
terminate called after throwing an instance of 'rs2::error'
  what():  Couldn't resolve requests
```
The L515 data sheet document lists that the L515's IMU has different supported frequencies (100, 200, 400) than the IMU in the 400 Series cameras.
This is confirmed when checking the L515's selectable IMU frequencies in the RealSense Viewer.
Please try changing the Accel speed to 200 instead of 250.
It can run as long as the frame rates of gyro and accel are each one of 100/200/400; they don't need to be the same.
I have another question here.
I want to obtain gyro and accel separately, like the code below:

```cpp
auto callback = [&](const rs2::frame& frame)
{
    std::lock_guard<std::mutex> lock(uptateMutex);
    if (rs2::frameset fs = frame.as<rs2::frameset>()) // obtain depth and color
    {
        for (const rs2::frame& f : fs)
            counters[f.get_profile().unique_id()]++;
        // Handle image
    }
    else if (rs2::motion_frame motionFrame = frame.as<rs2::gyro_frame>()) // obtain Gyro data
    {
        // Handle Gyro data
        counters[frame.get_profile().unique_id()]++;
    }
    else if (rs2::motion_frame motionFrame = frame.as<rs2::accel_frame>()) // obtain Accel data
    {
        // Handle Accel data
        counters[frame.get_profile().unique_id()]++;
    }
};
```
Is there any way to do this ?
At https://github.com/IntelRealSense/librealsense/issues/6426 a RealSense user wrote an adaptation of rs-callback that streamed only gyro and accel if a bool called useConfig was set to true, and otherwise used the default stream profile of the camera (depth and RGB) if useConfig was set to false.
Is separating out gyro and accel on its own like this what you had in mind for your own project, please?
I think gyro and accel data should be processed in the same thread. When I run on a computer with a CPU frequency of 3.9 GHz, the roll angle (calculated from gyro and accel data) updates smoothly. When I run on a computer with a CPU frequency of 3.7 GHz (and with large fluctuations), the update stutters. So I want to separate them to try to solve the stuttering.
I tried the following callback code on the computer with the lower frequency:

```cpp
auto callback = [&](const rs2::frame& frame)
{
    std::lock_guard<std::mutex> lock(uptateMutex);
    if (rs2::frameset fs = frame.as<rs2::frameset>()) // obtain depth and color
    {
        for (const rs2::frame& f : fs)
            counters[f.get_profile().unique_id()]++;
        // Handle image
    }
    else if (frame.get_profile().stream_name() == "Gyro") // obtain Gyro data
    {
        // Handle Gyro data
        counters[frame.get_profile().unique_id()]++;
    }
    else if (frame.get_profile().stream_name() == "Accel") // obtain Accel data
    {
        // Handle Accel data
        counters[frame.get_profile().unique_id()]++;
    }
};
```
But it doesn't seem to work.
There is a C++ callback case at https://github.com/IntelRealSense/librealsense/issues/6424 where the RealSense user was not achieving the desired IMU frame rate. They also wrote a script based on rs-callback. The solution in their particular case was to disable global time.
Another RealSense callback user found that the color FPS seemed to be dragging down their IMU performance, so it may be worth trying to set color to 60 instead of 30.
Apart from changing the color stream from 30 frames to 60, nothing else changed, but the depth stream seems to get stuck on a certain frame; this happened the same way several times.
Disabling global time didn't work. I also tried another computer whose CPU frequency is 4.0 GHz when the program runs (the CPU frequencies I mentioned earlier are in GHz, not MHz). So this seems to have nothing to do with CPU frequency. One thing we noticed is that the computer that runs normally has an Intel® Core™ i7-8809G CPU @ 3.10GHz × 8 (31.3 GiB memory), while the ones that do not run normally have an Intel® Core™ i9-10880H CPU @ 2.30GHz × 16 (15.5 GiB) and an i5-10400F (15.5 GiB). We don't think the cause is memory, because memory utilization is only 0.7% while the program is running.
If depth frames are repeating for several frames then that suggests that there may be frame drops on the depth stream, as the SDK will return to the last known good frame on a stream if frame drops occur.
Introducing latency into the frame queue by increasing the frame queue capacity above its default size of '1' can help to reduce frame drops.
https://github.com/IntelRealSense/librealsense/wiki/Frame-Buffering-Management-in-RealSense-SDK-2.0#latency-vs-performance
A frame queue capacity of '2' is suggested by the documentation link above when using two streams (depth and color).
Sorry to bother you, but I still want to know how many threads are created by a callback like the following, and whether gyro and accel are processed in the same thread.

```cpp
auto callback = [&](const rs2::frame& frame)
{
    std::lock_guard<std::mutex> lock(uptateMutex);
    if (rs2::frameset fs = frame.as<rs2::frameset>())
    {
        for (const rs2::frame& f : fs)
            counters[f.get_profile().unique_id()]++;
        // Handle image
    }
    else if (rs2::motion_frame motionFrame = frame.as<rs2::motion_frame>())
    {
        if (motionFrame.get_profile().stream_name() == "Gyro")
        {
            // Get the timestamp of the current frame
            double ts = motionFrame.get_timestamp();
            // Handle Gyro data
            rs2_vector gyroData = motionFrame.get_motion_data();
            processGyro(gyroData, ts);
            counters[frame.get_profile().unique_id()]++;
        }
        else if (motionFrame.get_profile().stream_name() == "Accel")
        {
            // Handle Accel data
            rs2_vector accelData = motionFrame.get_motion_data();
            processAccel(accelData);
            counters[frame.get_profile().unique_id()]++;
        }
    }
};
```
At https://github.com/IntelRealSense/librealsense/issues/7098 a RealSense team member provides information about the number of threads created. Within that discussion at https://github.com/IntelRealSense/librealsense/issues/7098#issuecomment-673590603 they advise that each stream endpoint has 2 threads and gyro and accel each have a thread.
They add that there are 3 endpoints for 400 Series cameras and 4 endpoints for L515.
I've been looking into this for a long time. The reason my program stutters is probably that only a single CPU core is being used. Is there any official way to set the number of CPU cores that librealsense uses?
The '-j' option (such as -j8) sets the number of parallel jobs used when compiling librealsense from source; it does not limit how many CPU cores the SDK uses at runtime.
https://github.com/IntelRealSense/librealsense/blob/master/doc/installation.md#building-librealsense2-sdk
However, you can enable programs to make use of multiple CPU cores instead of a single core (but not set the number of cores) by building librealsense from source code and including the CMake build flag -DBUILD_WITH_OPENMP=true.
This will enable librealsense to make use of multiple cores when performing depth-color alignment or YUY to RGB color conversion, at the cost of a slightly higher CPU percentage utilization.
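As a sketch, an OpenMP-enabled source build might look like the following (the clone and build steps are the standard ones from the installation guide; adjust paths and flags to your setup):

```shell
# Fetch the SDK source and create an out-of-tree build directory
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense && mkdir build && cd build

# Configure with OpenMP enabled so alignment / color conversion
# can use multiple CPU cores
cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_WITH_OPENMP=true

# -j8 here only parallelizes the compilation itself
make -j8 && sudo make install
```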
I'd like to use OpenCV to rotate an image by a continuously changing angle. I can see from imshow that when the rotation angle is a fixed value, both srcImage and rotateImage display very smoothly. The problem is that when the rotation angle starts to change, srcImage is still smooth but rotateImage is very laggy. Notably, on one of my computers both images are smooth whether the angle is fixed or changing; on the other computers the problem is as described above. I don't know whether this is an environment-configuration problem, and if so, which part of the environment. Here is my code:
```cpp
Rect bbox;
imageRotationMatix = getRotationMatrix2D(imgCenter, changeAngle, 1.0); // positive angle = counter-clockwise
bbox = RotatedRect(imgCenter, srcImage.size(), changeAngle).boundingRect();
imageRotationMatix.at<double>(0, 2) += bbox.width / 2.0 - imgCenter.x;
imageRotationMatix.at<double>(1, 2) += bbox.height / 2.0 - imgCenter.y;
```
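For reference, the bounding-box growth that this translation adjustment compensates for can be checked with plain trigonometry. A small self-contained sketch (the function name is my own; no OpenCV required):

```cpp
#include <cmath>

// Size of the axis-aligned box that contains a w x h image rotated by
// angle_deg degrees: (w*|cos| + h*|sin|, w*|sin| + h*|cos|).
static void rotated_bbox(double w, double h, double angle_deg,
                         double& bw, double& bh)
{
    double a = angle_deg * 3.14159265358979323846 / 180.0;
    double c = std::fabs(std::cos(a));
    double s = std::fabs(std::sin(a));
    bw = w * c + h * s;
    bh = w * s + h * c;
}
```

At 90 degrees the box dimensions simply swap, which matches what `RotatedRect(...).boundingRect()` returns.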
If the problem occurs when the angle changes in real time, I wonder whether a rotational phenomenon called gimbal lock could be occurring.
https://en.wikipedia.org/wiki/Gimbal_lock
A symptom of a gimbal lock occurring may be if there is a sudden large jump, or 'snap', in the rotation angle from one position to another.
Alternatively, if nothing is happening but the values are still updating then it may be because the calculations in the script that change the value in real-time are causing invalid NaN values (Not a Number) to be generated.
Very strange! I saved srcImage and rotateImage with imwrite and found they were synchronized, but imshow did not display rotateImage in time. Then I changed the waitKey value from 1 to 3, which made it much smoother (the code above takes about 1.5-4 ms).
There is no NaN value in the real-time updated angle, and there is no significant jump. In fact, I tried adding 1 degree to the rotation angle per cycle (33 ms), but it is still too laggy on the lower-performance computers, while adding 10 degrees per cycle remains very smooth on the higher-performance computer.
I don't know whether this imshow problem is related to the computer's graphics processing capability.