iOS app architecture
Great work @matlabbe and thanks for making it available to the community.
I am trying to build an iOS app that uses color and laser data, and in which most of the libraries are Python/C++. Since rtabmap already essentially does that, it would help me a lot to understand how the iOS app works. Of course your open source code helps, but since I'm new to iOS development, I'm wondering: is there any high-level documentation that describes how the iOS app interacts with the main rtabmap C++ library? If not, could you please describe it here in a few sentences?
Thanks!
There is no documentation about the Swift to C++ interface. Basically, Swift calls Objective-C functions that call C functions. We pass the C++ rtabmap object as a pointer to those C functions.
In summary, the Swift code is mainly UI stuff, calling the C++ library under the hood. For ARKit, it forwards the pose and image data to the C++ library, which does the loop closure detection and the OpenGL rendering in C++.
We use the same OpenGL backend for the Android and iOS apps.
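To make that bridging pattern more concrete, here is a minimal Swift sketch of forwarding ARKit pose and image data to the native side through C functions that take the C++ object as an opaque pointer. The function names rtabmap_create and rtabmap_postOdometryEvent are hypothetical placeholders (not the app's actual bridge API) and are stubbed so the sketch is self-contained:

```swift
import ARKit
import Foundation
import CoreVideo
import simd

// Hypothetical C bridge functions, stubbed here so the sketch compiles on its own.
// In the real app, such functions are declared in the Objective-C bridging header
// and cast the opaque pointer back to the C++ rtabmap object.
func rtabmap_create() -> UnsafeMutableRawPointer? { nil }                      // stub
func rtabmap_postOdometryEvent(_ handle: UnsafeMutableRawPointer,
                               _ pose: simd_float4x4,
                               _ image: CVPixelBuffer) { }                     // stub

class RTABMapSessionDelegate: NSObject, ARSessionDelegate {

    // Opaque handle to the C++ object created on the native side.
    private let nativeHandle: UnsafeMutableRawPointer? = rtabmap_create()

    // Called by ARKit for every new frame: forward the camera pose and the
    // captured RGB image to the C++ library through the C bridge.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let handle = nativeHandle else { return }
        rtabmap_postOdometryEvent(handle,
                                  frame.camera.transform,  // 4x4 camera pose
                                  frame.capturedImage)     // RGB frame (CVPixelBuffer)
    }
}
```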
Thanks!
In the iOS code, you include the RTABMapApp.h header. Is this the same as rtabmap/app/android/jni/RTABMapApp.h?
Or is this a header that somehow gets automatically generated during the iOS build?
The iOS build includes the rtabmap/app/android/jni folder. I didn't take the time to move that code into a folder shared between Android and iOS.
Ok thanks!
Hello, I'm new to iOS app development and curious about the overall architecture of the RTAB-Map iOS app. It seems that the RTAB-Map iOS app (without LiDAR) doesn't heavily rely on core algorithms like odometry and loop closure from the RTAB-Map core. Can you confirm whether this is true, and clarify whether the app primarily uses outputs from ARKit? Thank you for your assistance in helping me understand this better.
The RTAB-Map iOS app (without LiDAR) uses these inputs from ARKit: the pose, the RGB frame and the 3D tracked features. Loop closure is still done by the RTAB-Map library (we reproject the 3D tracked features into the RGB frame to extract the corresponding visual descriptors), as well as the 3D mesh reconstruction/texturing.
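Just to illustrate that reprojection step, here is a conceptual Swift sketch (not the app's actual code) that projects ARKit's tracked 3D feature points back into the captured RGB image, which is where the corresponding visual descriptors would then be extracted:

```swift
import ARKit
import UIKit

// Project ARKit's tracked 3D feature points (world coordinates) into pixel
// coordinates of the captured RGB image. Points behind the camera or outside
// the image bounds would still need to be filtered out afterwards.
func projectTrackedFeatures(of frame: ARFrame) -> [CGPoint] {
    guard let featurePoints = frame.rawFeaturePoints else { return [] }

    let camera = frame.camera
    // capturedImage is delivered in sensor (landscape-right) orientation,
    // so project into a viewport of the full image resolution.
    let viewport = camera.imageResolution

    return featurePoints.points.map { worldPoint in
        camera.projectPoint(worldPoint,
                            orientation: .landscapeRight,
                            viewportSize: viewport)
    }
}
```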
Thank you so much for the prompt response!!! I will check based on the information you provided.
For ARKit, it forwards the pose and image data to the C++ library, which does the loop closure detection and the OpenGL rendering in C++.
We use the same OpenGL backend for the Android and iOS apps.
@matlabbe can you please elaborate on that? Where is this "same OpenGL backend" located? Are you referring to this: https://github.com/introlab/rtabmap/tree/master/app/android/jni? How does the iOS app interact with this OpenGL backend?
Ah sorry, you've already answered that above. I'm still unsure how the GL lib part gets built though.
The iOS project directly references the OpenGL ES code from android/jni. All the OpenGL code is built inside the main iOS C++ library of the app.
Thanks @matlabbe! For which features do you use OpenGL ES? Is it mainly for rendering the 3D reconstruction and drawing the passthrough image to the UI? How is the rendering context handled?
Everything that is 3D is drawn in OpenGL ES. The buttons/menus/dialogs are native iOS or Android UI elements (not OpenGL).
On iOS, the OpenGL draw call comes from Swift code (GLKView), calling the internal C++ rtabmap render function here: https://github.com/introlab/rtabmap/blob/8cbee58f0772283d87939e016321235827155b55/app/ios/RTABMapApp/ViewController.swift#L2601-L2609
On Android, the OpenGL draw call comes from Java code (GLSurfaceView.Renderer), calling the internal C++ rtabmap render function here: https://github.com/introlab/rtabmap/blob/8cbee58f0772283d87939e016321235827155b55/app/android/src/com/introlab/rtabmap/Renderer.java#L88-L106
On the C++ side (for both iOS and Android), the main render loop is this function: https://github.com/introlab/rtabmap/blob/8cbee58f0772283d87939e016321235827155b55/app/android/jni/RTABMapApp.cpp#L1315-L1317
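For anyone reading along, here is a minimal Swift sketch of that iOS draw path, with rtabmap_render as a hypothetical placeholder for the C bridge function that invokes the C++ render loop. In the actual app this logic lives in the ViewController linked above; the sketch just isolates the draw-call forwarding:

```swift
import GLKit
import UIKit

// Hypothetical C bridge function, stubbed here so the sketch is self-contained;
// it would forward the call to the shared C++ render loop.
func rtabmap_render(_ handle: UnsafeMutableRawPointer) -> Int32 { 0 }  // stub

class RTABMapRenderer: NSObject, GLKViewDelegate {

    // Opaque handle to the C++ rtabmap object (created elsewhere).
    var nativeHandle: UnsafeMutableRawPointer?

    // GLKView calls this for every frame when this object is set as its
    // delegate; all the actual OpenGL ES drawing happens on the C++ side.
    func glkView(_ view: GLKView, drawIn rect: CGRect) {
        guard let handle = nativeHandle else { return }
        _ = rtabmap_render(handle)
    }
}
```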