Visual odometry and vision system measurements based algorithms for rover navigation
Planetary exploration rovers should be capable of operating autonomously, even over long paths, with minimal human input. Control operations must be minimized in order to reduce traverse time, optimize the resources allocated to telecommunications, and maximize the scientific output of the mission.
Knowing the goal position and considering the vehicle dynamics, control algorithms have to provide the appropriate inputs to the actuators. Path planning algorithms use three-dimensional models of the surrounding terrain in order to safely avoid obstacles. Moreover, for the sample-return missions planned for the coming years, rovers will have to demonstrate the capability to return to a previously visited place to collect scientific samples or to deliver a sample to an ascent vehicle.
Motion measurement is a fundamental task in rover control, and the planetary environment presents some specific issues. Wheel odometry suffers from large uncertainty due to wheel slippage on sandy terrain, inertial measurements are affected by drift, and GPS-like positioning systems are not available on other planets. Vision systems have proven to be reliable and accurate motion-tracking methods. One of these methods is stereo Visual Odometry. Stereo processing allows the three-dimensional location of landmarks observed by a pair of cameras to be estimated by means of triangulation, and point-cloud matching between two subsequent frames allows the stereo-camera motion to be computed. Thanks to Visual SLAM (Simultaneous Localization and Mapping) techniques, a rover is able to reconstruct a consistent map of the environment and to localize itself with respect to this map. SLAM techniques offer two main advantages: the construction of a map of the environment and more accurate motion tracking, obtained by solving a large minimization problem that involves multiple camera poses and the measurements of the map landmarks.
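To illustrate the stereo triangulation step, the following minimal Python sketch recovers landmark positions from matched features in a rectified stereo pair. The function name and parameters (focal length f, principal point (cx, cy), baseline) are chosen for this example and are not taken from the thesis.

    import numpy as np

    def triangulate_stereo(pts_left, pts_right, f, cx, cy, baseline):
        """Triangulate 3D landmarks from matched pixel coordinates in a
        rectified stereo pair (illustrative sketch, not the thesis code).

        pts_left, pts_right: (N, 2) arrays of matched pixel coordinates.
        f: focal length in pixels; (cx, cy): principal point in pixels;
        baseline: stereo baseline in metres.
        """
        disparity = pts_left[:, 0] - pts_right[:, 0]  # u_L - u_R
        z = f * baseline / disparity                  # depth from disparity
        x = (pts_left[:, 0] - cx) * z / f
        y = (pts_left[:, 1] - cy) * z / f
        return np.column_stack([x, y, z])             # (N, 3) landmark cloud

Triangulating the same features in two subsequent stereo frames yields two matched point clouds, from which the camera motion can be computed.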
After rover touchdown, one of the key tasks required of the operations center is the accurate measurement of the rover position in inertial and body-fixed coordinate systems, such as the J2000 frame and the Mars Body-Fixed (MBF) frame. For engineering and science operations, high-precision global localization and detailed Digital Elevation Models (DEMs) of the landing site are crucial.
The first part of this dissertation addresses the problem of localizing a rover with respect to geo-referenced and ortho-rectified satellite images, and with respect to a digital elevation model (DEM) derived from satellite imagery. A sensitivity analysis of the outputs of the Visual Position Estimator for Rover (VIPER) algorithm is presented. By comparing the local skyline, extracted from a panoramic image, with a skyline rendered from a Digital Elevation Model (DEM), the algorithm retrieves the camera position and orientation relative to the DEM map. This algorithm has been proposed as part of the localization procedure performed by the Rover Operation Control Center (ROCC), located at ALTEC, to localize the ExoMars 2020 rover after landing and to initialize and verify the outputs of rover guidance and navigation. Images from the Mars Exploration Rover mission and a HiRISE DEM have been used to test the algorithm performance.
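The core of the skyline-matching idea can be sketched as follows. This is a simplified illustration with hypothetical names, not the VIPER implementation; it assumes skylines are represented as elevation-angle profiles sampled over 360 degrees of azimuth.

    import numpy as np

    def skyline_match(observed, rendered):
        """Compare an observed skyline (elevation angle vs. azimuth) with
        one rendered from the DEM at a candidate camera position, searching
        over the unknown heading offset. Returns the best similarity score
        and the azimuth shift at which it occurs."""
        best_score, best_shift = -np.inf, 0
        for shift in range(len(rendered)):
            candidate = np.roll(rendered, shift)
            score = -np.mean((observed - candidate) ** 2)  # negative MSE
            if score > best_score:
                best_score, best_shift = score, shift
        return best_score, best_shift

    # Evaluating skyline_match over a grid of candidate DEM positions and
    # taking the maximum yields the estimated camera position; the
    # corresponding azimuth shift gives its orientation (heading).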
During the rover traverse, Visual Odometry methods can be used to refine the path estimate. The second part of this dissertation presents an experimental analysis of how the landmark distribution in a scene, as observed by a stereo-camera, affects Visual Odometry measurement performance. Translational and rotational tests have been performed at many different positions in an indoor environment. The implemented Visual Odometry algorithm first obtains an initial motion estimate with a linear 3D-to-3D method embedded within a RANdom SAmple Consensus (RANSAC) scheme to remove outliers; the motion estimate is then refined from the inliers by minimizing the Euclidean distance between the triangulated landmarks.
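A minimal sketch of this two-stage scheme is given below, assuming matched 3D landmark clouds P and Q from two consecutive stereo frames. The closed-form Horn/Umeyama solution is used for the linear 3D-to-3D fit, and the RANSAC parameters are illustrative, not those of the thesis.

    import numpy as np

    def rigid_transform_3d(P, Q):
        """Least-squares rotation R and translation t with R @ p + t ~ q
        (Horn/Umeyama closed form; P, Q are (N, 3) matched clouds)."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflections
        R = Vt.T @ D @ U.T
        t = cQ - R @ cP
        return R, t

    def ransac_motion(P, Q, iters=200, thresh=0.05):
        """Fit on minimal 3-point samples, keep the largest inlier set,
        then refit the motion on all inliers."""
        rng = np.random.default_rng(0)
        best_inliers = np.zeros(len(P), dtype=bool)
        for _ in range(iters):
            idx = rng.choice(len(P), size=3, replace=False)
            R, t = rigid_transform_3d(P[idx], Q[idx])
            err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
            inliers = err < thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return rigid_transform_3d(P[best_inliers], Q[best_inliers])

The final refit over all inliers plays the role of the minimization step: it finds the motion that best explains the triangulated landmarks once outlier matches have been discarded.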
The last part of this dissertation has been developed in collaboration with the NASA Jet Propulsion Laboratory and presents an innovative visual localization method for hopping and tumbling platforms. These new mobility systems for the exploration of comets, asteroids, and other small Solar System bodies require new approaches to localization. The choice of a monocular onboard camera for perception is dictated by the rover's limited weight and size. Visual localization near the surface of small bodies is difficult due to large scale changes, frequent occlusions, high-contrast and rapidly changing shadows, and relatively featureless terrains.
A synergistic localization and mapping approach between the mother spacecraft and the deployed hopping/tumbling daughter-craft rover has been studied and developed. Various open-source visual SLAM algorithms have been evaluated; among them, ORB-SLAM2 was chosen and adapted for this application. The capability to save the map built from orbiter observations and to reload it for rover localization has been introduced. Moreover, the map can now be fused with other orbiter sensor pose measurements.
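The map-fusion step can be illustrated by a standard similarity (Sim(3)) alignment of SLAM keyframe positions to metric orbiter pose measurements. The Python sketch below uses the Umeyama closed form; it is an assumption about the kind of alignment involved, not the actual adaptation made to ORB-SLAM2 (which is written in C++).

    import numpy as np

    def align_map_to_orbiter(slam_pos, orbiter_pos):
        """Recover scale s, rotation R and translation t such that
        s * R @ slam + t ~ orbiter, given time-matched (N, 3) keyframe
        positions from SLAM and metric orbiter pose measurements
        (Umeyama method; illustrative sketch, not the thesis code)."""
        mu_s, mu_o = slam_pos.mean(axis=0), orbiter_pos.mean(axis=0)
        S, O = slam_pos - mu_s, orbiter_pos - mu_o
        H = S.T @ O / len(slam_pos)                   # cross-covariance
        U, d, Vt = np.linalg.svd(H)
        Dfix = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
        R = Vt.T @ Dfix @ U.T                         # proper rotation
        s = np.trace(np.diag(d) @ Dfix) / np.mean(np.sum(S ** 2, axis=1))
        t = mu_o - s * R @ mu_s
        return s, R, t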
The accuracy of the collaborative localization method has been estimated: a series of realistic images of an asteroid mockup were captured, and a Vicon motion-capture system was used to provide the trajectory ground truth. In addition, the robustness of the method to illumination changes has been evaluated.
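For reference, a common way to quantify such accuracy is the root-mean-square absolute trajectory error against the ground truth. The sketch below assumes time-synchronized position arrays expressed in a common frame; it is a standard metric, not necessarily the exact error definition used in the thesis.

    import numpy as np

    def ate_rmse(estimated, ground_truth):
        """Root-mean-square absolute trajectory error between
        time-synchronized (N, 3) position arrays, e.g. the SLAM estimate
        and the Vicon ground truth, in a common frame."""
        errors = np.linalg.norm(estimated - ground_truth, axis=1)
        return float(np.sqrt(np.mean(errors ** 2)))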
http://paduaresearch.cab.unipd.it/10175/1/Sebastiano_Chiodini_tesi.pdf