Publication date:
27/11/2024
Source: PubMed "smart farming"
Sensors (Basel). 2024 Nov 11;24(22):7214. doi: 10.3390/s24227214.

ABSTRACT

Autonomous robot movement rests on quickly estimating the robot's position and surroundings, for which SLAM technology provides essential support. In complex, dynamic environments, single-sensor SLAM methods often suffer from degeneracy. This paper proposes a multi-sensor fusion SLAM method based on the LVI-SAM framework. First, the state-of-the-art SuperPoint feature detector is used to extract feature points in the visual-inertial subsystem, strengthening feature detection in complex scenes. Second, Scan Context is applied to optimize loop-closure detection, improving its performance in those same scenes. Experiments show that the trajectory RMSE on the KITTI 05 sequence and the M2DGR Street07 sequence is reduced by 12% and 11%, respectively, relative to LVI-SAM. In simulated complex animal-farm environments, the proposed method also yields a smaller error between the starting and ending points of the trajectory than LVI-SAM. Together, these comparisons demonstrate that the proposed method achieves more precise and robust localization and mapping in complex animal-farm environments.

PMID:39598990 | PMC:PMC11598613 | DOI:10.3390/s24227214
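
Editor's note: for readers unfamiliar with the components named in the abstract, the following sketches illustrate each step. First, SuperPoint keypoint extraction as used in the visual-inertial front end. This sketch assumes the SuperPointFrontend class and pretrained weights from MagicLeap's public reference implementation (demo_superpoint.py); the paper does not state which SuperPoint port it uses, and the constructor arguments shown are that script's defaults, not values from the paper.

    # Hedged sketch: SuperPoint keypoint extraction from a camera frame,
    # assuming MagicLeap's reference implementation (demo_superpoint.py)
    # and its pretrained weights are available on the path.
    import cv2
    import numpy as np
    from demo_superpoint import SuperPointFrontend  # MagicLeap reference repo

    fe = SuperPointFrontend(weights_path='superpoint_v1.pth',
                            nms_dist=4,         # non-maximum suppression radius (px)
                            conf_thresh=0.015,  # keypoint confidence threshold
                            nn_thresh=0.7,      # descriptor matching threshold
                            cuda=False)

    # SuperPoint expects a grayscale float32 image normalized to [0, 1].
    gray = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)
    img = gray.astype(np.float32) / 255.0

    pts, desc, _ = fe.run(img)  # pts: 3xN (x, y, confidence); desc: 256xN
    print(f'{pts.shape[1]} keypoints detected')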
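Second, the Scan Context descriptor (Kim and Kim, IROS 2018) that the paper uses as its loop-closure cue. This is an illustrative NumPy re-implementation, not the authors' code; the ring/sector counts and maximum range are the original paper's common defaults, not values reported in this article.

    # Hedged sketch of a Scan Context descriptor: project LiDAR points
    # into a polar ring-by-sector grid and keep the maximum height per bin.
    import numpy as np

    def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
        """points: (N, 3) LiDAR points in the sensor frame -> (rings, sectors) matrix."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.hypot(x, y)
        theta = np.arctan2(y, x) + np.pi  # azimuth in [0, 2*pi)
        keep = r < max_range
        ring = np.minimum((r[keep] / max_range * num_rings).astype(int),
                          num_rings - 1)
        sector = np.minimum((theta[keep] / (2 * np.pi) * num_sectors).astype(int),
                            num_sectors - 1)
        desc = np.full((num_rings, num_sectors), -np.inf)
        # Each bin stores the maximum point height, as in the original method.
        np.maximum.at(desc, (ring, sector), z[keep])
        desc[np.isinf(desc)] = 0.0  # empty bins -> 0
        return desc

    def ring_key(desc):
        """Rotation-invariant per-ring summary used for fast candidate search."""
        return desc.mean(axis=1)

Loop candidates are first retrieved by comparing ring keys, then verified by a column-shifted distance between full descriptors, which gives the rotation invariance that makes the method useful for revisits from different headings.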
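Finally, the evaluation metric behind the 12% and 11% figures: trajectory RMSE, i.e., absolute trajectory error after rigidly aligning the estimated track to ground truth. The sketch below is a standard Umeyama-style alignment and assumes both trajectories are already time-synchronized (N, 3) position arrays; the paper does not publish its evaluation script.

    # Hedged sketch: trajectory RMSE after least-squares rigid alignment
    # (Kabsch/Umeyama), the usual ATE metric for KITTI/M2DGR comparisons.
    import numpy as np

    def trajectory_rmse(est, gt):
        """est, gt: (N, 3) synchronized positions -> RMSE after rigid alignment."""
        mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
        E, G = est - mu_e, gt - mu_g
        U, _, Vt = np.linalg.svd(E.T @ G)  # cross-covariance SVD
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
        R = Vt.T @ S @ U.T                 # rotation mapping est frame -> gt frame
        aligned = E @ R.T + mu_g
        err = np.linalg.norm(aligned - gt, axis=1)
        return float(np.sqrt(np.mean(err ** 2)))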