The objective of this project was to build a 3D map of the lab space using a ZED stereo camera, a LiDAR, and a Jackal robot. We used RTAB-Map to integrate the odometry, depth, and RGB data from the different sensors, and displayed the result in RViz.
This diagram explains how the data from each component is combined into a 3D map. The ZED stereo camera publishes to two topics, '/rgb/image_rect_color' and '/depth/depth_registered', both of message type 'sensor_msgs/Image'. The Jackal is driven by a PlayStation controller, which publishes velocity commands to '/cmd_vel', and the Jackal publishes its odometry to '/odometry/filtered'. RTAB-Map subscribes to '/odometry/filtered', '/rgb/image_rect_color', and '/depth/depth_registered', and integrates the three into '/rtabmap/MapData'. RViz subscribes to '/rtabmap/MapData' and visualizes the map on the desktop. The LiDAR plays no role in the 3D mapping itself, but it publishes a high-quality point cloud that can be compared against the one generated by RTAB-Map.
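This wiring can be sketched as a ROS launch-file fragment. The topic names on the right come from the description above; the remap targets on the rtabmap side and the camera_info topic are assumptions based on the standard rtabmap_ros interface, so they should be checked against the versions actually installed:

```xml
<launch>
  <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" ns="rtabmap" output="screen">
    <!-- Inputs from the topic graph described above -->
    <remap from="rgb/image"       to="/rgb/image_rect_color"/>
    <remap from="depth/image"     to="/depth/depth_registered"/>
    <remap from="odom"            to="/odometry/filtered"/>
    <!-- camera_info topic name is an assumption; match it to the ZED driver's output -->
    <remap from="rgb/camera_info" to="/rgb/camera_info"/>
  </node>
</launch>
```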
Selection of a feature-detection algorithm was important in this project. Based on the detected features, consecutive images captured by the ZED camera are matched to each other and combined into a map. Loop closure after a full 360° loop also fails if too few features are found in the images. Many algorithms are available, varying in quality and speed. We tried FAST, SURF, and ORB, but SIFT (Scale-Invariant Feature Transform) worked best in this project. A detailed introduction to the algorithm can be found here: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.html.
Many parameters in the ZED stereo camera and RTAB-Map launch files were tuned to improve the quality of the mapping process. The list of tuned parameters and the effect of each change can be found here:
"LINK-will be added soon"
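For reference, RTAB-Map parameters of this kind are typically set as '<param>' entries on the rtabmap node. The fragment below is only illustrative, not the project's actual tuned values, and the numeric detector codes (e.g., 1 = SIFT) are an assumption that should be verified against the installed RTAB-Map version's parameter list:

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <!-- Illustrative values, not the project's tuned settings -->
  <param name="Kp/DetectorStrategy" value="1"/>  <!-- feature detector for loop closure; 1 = SIFT (assumed) -->
  <param name="Vis/FeatureType"     value="1"/>  <!-- keep visual matching consistent with the detector (assumed) -->
</node>
```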