Visual SLAM: Enhancing Direct Visual Odometry Through the Integration of Deep Learning Approaches
This paper investigates direct visual odometry, focusing on hybrid approaches that combine deep learning with classical hand-crafted methods. It introduces a new approach that integrates a deblurring module with a saliency predictor to improve point sampling, which increases trajectory estimation accuracy on blurry frames caused by rapid camera motion or long exposure times in dimly lit conditions. Benchmarking against DSO and SalientDSO on the EuRoC MAV dataset demonstrated consistent improvements, with the proposed system achieving an average Absolute Trajectory Error (ATE) of 0.26 m, compared to 0.335 m for DSO and 0.303 m for SalientDSO. Future research directions include investigating other image improvement methods, such as dehazing, denoising, or image enhancement, to further improve accuracy.
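To make the described pipeline concrete, the sketch below shows the frame pre-processing flow (deblur the frame, predict a saliency map, then sample points biased toward salient regions) feeding a direct VO front end. This is a minimal illustration under stated assumptions, not the paper's implementation: the learned deblurring module and saliency predictor are replaced here by simple stand-ins (unsharp masking and gradient magnitude), and all function names are hypothetical.

```python
# Minimal sketch of the deblur -> saliency -> point-sampling flow described above.
# Assumptions: the real system uses learned networks for deblurring and saliency;
# unsharp masking and gradient magnitude are only placeholders to make this runnable.
import numpy as np
from scipy.ndimage import uniform_filter


def deblur(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the learned deblurring module: simple unsharp masking."""
    blurred = uniform_filter(frame, size=5)
    return np.clip(frame + 1.5 * (frame - blurred), 0.0, 1.0)


def saliency(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the saliency predictor: normalized gradient magnitude."""
    gy, gx = np.gradient(frame)
    s = np.hypot(gx, gy)
    return s / (s.sum() + 1e-12)  # normalize to a sampling distribution


def sample_points(sal: np.ndarray, n: int = 500, seed: int = 0) -> np.ndarray:
    """Draw candidate pixels for the direct VO front end, biased by saliency."""
    rng = np.random.default_rng(seed)
    flat = rng.choice(sal.size, size=n, replace=False, p=sal.ravel())
    return np.stack(np.unravel_index(flat, sal.shape), axis=1)  # (n, 2) (row, col)


if __name__ == "__main__":
    frame = np.random.rand(480, 640)                 # placeholder grayscale frame
    points = sample_points(saliency(deblur(frame)))  # deblur -> saliency -> sample
    print(points.shape)                              # (500, 2) sampled pixel coordinates
```

In a full system, the sampled points would be handed to the direct VO back end (e.g., DSO's photometric bundle adjustment) in place of its default gradient-based candidate selection; that integration is outside the scope of this sketch.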