Abstract:
Mobile robots that rely on a single sensor often struggle with challenges such as illumination changes, external disturbances, reflective surfaces, and cumulative error, which limit their environmental perception capabilities as well as the accuracy and reliability of pose estimation. This article adopts a nonlinear optimization method to build IIVL-LM, a tightly coupled localization and mapping system that integrates IMU, infrared camera, RGB camera, and LiDAR measurements at the data level. A real-time luminance conversion model based on RGB image information is proposed: the system feeds the varying luminance values into the visual SLAM model through nonlinear interpolation for real-time mapping, and then fuses features extracted from infrared and RGB keyframes through dynamic weighting. On a simulated indoor rescue scenario dataset, the IIVL-LM system shows a notable performance improvement over several mainstream fusion positioning methods under challenging luminance conditions, especially in low-light environments: the average Root Mean Square Error (RMSE) of the Absolute Trajectory Error (ATE) improves by 23% to 39% (0.006 to 0.013). By keeping at least three sensors active at all times, IIVL-LM remains robust in unknown and open environments while maintaining precision, a capability that is particularly valuable in complex settings such as indoor rescue scenarios.
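As a rough illustration of the luminance-driven dynamic weighting summarized above, the following minimal Python sketch shows one plausible realization: a luminance estimate in [0, 1] derived from the RGB image (here via Rec. 601 luma weights, an assumption) steers a linear interpolation between infrared and RGB feature confidence. The thresholds, function names, and the per-feature-score interface are hypothetical, not the authors' exact formulation.

```python
import numpy as np

def estimate_luminance(rgb_image: np.ndarray) -> float:
    """Estimate scene luminance in [0, 1] from an 8-bit RGB image using
    Rec. 601 luma weights (an assumption; the paper's real-time luminance
    conversion model is not specified in the abstract)."""
    r, g, b = rgb_image[..., 0], rgb_image[..., 1], rgb_image[..., 2]
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    return float(luma.mean() / 255.0)

def fusion_weights(luminance: float, lo: float = 0.2, hi: float = 0.6):
    """Map luminance to (w_rgb, w_ir) by nonlinear clamped interpolation:
    below `lo` trust the infrared camera, above `hi` trust the RGB camera,
    and blend in between. The thresholds are hypothetical."""
    t = float(np.clip((luminance - lo) / (hi - lo), 0.0, 1.0))
    return t, 1.0 - t  # (w_rgb, w_ir)

def fuse_keyframe_scores(rgb_scores: np.ndarray,
                         ir_scores: np.ndarray,
                         luminance: float) -> np.ndarray:
    """Dynamically weighted fusion of per-feature response scores from
    matched RGB and infrared keyframe features (hypothetical interface)."""
    w_rgb, w_ir = fusion_weights(luminance)
    return w_rgb * rgb_scores + w_ir * ir_scores

# Example: in a dim scene the infrared features dominate the fusion.
rgb_image = (np.random.rand(480, 640, 3) * 30).astype(np.uint8)  # dark frame
L = estimate_luminance(rgb_image)
print(fusion_weights(L))  # close to (0.0, 1.0) for a very dark image
```

In this sketch the weighting degrades gracefully rather than switching sensors abruptly, which is one way a system could keep multiple modalities contributing across the full luminance range.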