Collaborative Robotics Heads-Up Display
Achieving a robust position and orientation estimate is crucial for intuitive interaction with autonomous systems, especially through augmented reality interfaces, yet existing passive localization methods do not suffice in GPS-denied environments. This project loosely coupled inertial and visual sensors by modifying the monocular ORB SLAM algorithm. Data collected from LIDAR and a motion-capture system were used to evaluate the realized system, and the ORB SLAM code was analyzed and profiled for real-time implementation. SLAM scale uncertainty was corrected with inertial data, and scale-drift correction was attempted by modifying an internally optimized motion model. The result was a more accurate position estimate; further work can improve precision, robustness, and execution speed.
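To illustrate the scale-correction idea mentioned above: a monocular SLAM trajectory is only known up to an unknown metric scale, and inertially derived displacements can anchor it. A minimal sketch of one common approach (a least-squares fit of a single scale factor between matched displacement segments; the function name and data here are hypothetical, not the project's actual implementation):

```python
import numpy as np

def estimate_scale(slam_disp, imu_disp):
    """Closed-form least-squares scale factor s minimizing
    ||s * slam_disp - imu_disp||^2 over matched displacement
    segments (hypothetical sketch, not the project's code)."""
    slam = np.asarray(slam_disp, dtype=float).ravel()
    imu = np.asarray(imu_disp, dtype=float).ravel()
    # s = (slam . imu) / (slam . slam)
    return float(np.dot(slam, imu) / np.dot(slam, slam))

# Example: a SLAM trajectory reported at half the true metric scale.
slam_segments = [[0.5, 0.0, 0.0], [0.0, 0.5, 0.0]]
imu_segments = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
s = estimate_scale(slam_segments, imu_segments)  # -> 2.0
```

A single global factor handles scale uncertainty; the scale *drift* the abstract mentions would require re-estimating such a factor over a sliding window or folding it into the motion model.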
Permanent link to this page: https://digital.wpi.edu/show/9g54xk31h