Index: Robotics & Computer Vision


Simultaneous Localization and Mapping

Localization is the process of determining a robot's position (and typically its orientation) within a pre-existing map. There are many methods of localization, Monte Carlo Localization (MCL) being one of the most popular.
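
For reference, here is a minimal sketch of one MCL update cycle (predict, weight, resample). The motion_model and sensor_model callables, the pose layout, and the variable names are placeholders for illustration, not code from this project.

    import numpy as np

    def mcl_update(particles, weights, motion, measurement, motion_model, sensor_model):
        """One predict/weight/resample cycle of Monte Carlo Localization.

        particles:    (N, 3) array of [x, y, theta] pose hypotheses
        weights:      (N,) importance weights
        motion:       odometry reading since the last update
        measurement:  current sensor reading (e.g., a laser scan)
        motion_model: samples a new pose given an old pose and the motion
        sensor_model: returns p(measurement | pose, map)
        """
        # Predict: propagate each particle through the (noisy) motion model.
        particles = np.array([motion_model(p, motion) for p in particles])

        # Weight: score each particle by how well it explains the measurement.
        weights = np.array([sensor_model(measurement, p) for p in particles])
        weights /= weights.sum()

        # Resample: draw particles in proportion to their weights.
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))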

Mapping is the process of recording information about the environment around a robot. It often assumes that you know (or can find out) where the robot is within that environment with high fidelity, which is practical if you have access to GPS or, even better, differential GPS (dGPS).

SLAM is the combination of the above two methods. In SLAM, you use the partial information about the surroundings to help localize the robot within the map, and you use this location as a given in order to extend the map.

This project looks at SLAM on its own, both EKF-SLAM and FastSLAM, and applies it to our local environment, including several TurtleBot-2 platforms and a set of e-puck robots.
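
As a rough illustration of the EKF-SLAM side, the sketch below shows the prediction step under an assumed unicycle motion model and a state vector holding the robot pose followed by static 2-D landmarks. The state layout, motion model, and variable names are assumptions for illustration, not taken from the project code.

    import numpy as np

    # Assumed EKF-SLAM state layout: robot pose followed by 2-D landmarks.
    #   x = [xr, yr, theta, l1x, l1y, l2x, l2y, ...]

    def ekf_slam_predict(x, P, v, w, dt, Q):
        """Prediction step: only the robot part of the state moves;
        landmarks are static, so their entries are left untouched."""
        theta = x[2]
        x = x.copy()
        x[0] += v * dt * np.cos(theta)
        x[1] += v * dt * np.sin(theta)
        x[2] += w * dt

        # Jacobian of the motion model, linearized about the prior heading.
        F = np.eye(len(x))
        F[0, 2] = -v * dt * np.sin(theta)
        F[1, 2] =  v * dt * np.cos(theta)

        # Process noise Q (3x3) enters only the robot block of the covariance.
        Qfull = np.zeros_like(P)
        Qfull[:3, :3] = Q
        P = F @ P @ F.T + Qfull
        return x, P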

Navigation on Reference Maps

In this work, started while on leave from Trinity in 2013-2014, we look at allowing an Autonomous Underwater Vehicle (AUV) to navigate using a forward-looking sonar to automatically correct for inertial drift over time, without surfacing to get a GPS fix. Some background on the vehicle inspiring this work can be found here.

To be documented


Sensor Fusion of Lidar and Stereo for Urban Mobile Robots

This work was done while on a NASA-ASEE-USRA 2002 Summer Faculty Fellowship in Avionic Systems and Technology, Engineering and Science Directorate, Jet Propulsion Laboratory/California Institute of Technology, entitled ``Sensor Fusion of Lidar and Stereo for Urban Mobile Robots''.

This work builds on some lidar post-processing work done by Andres Castano at JPL. More details on this can be found here.


Camera Stabilization

This work was done while on a NASA-ASEE 2000 Summer Faculty Fellowship in Aeronautics and Space Research, Engineering Directorate, Johnson Space Center, Houston, entitled ``Vision System Stabilization for Mobile Robot Base and Ground Movement''.

Ego-motion (self-motion) of the camera is a considerable problem in outdoor rover applications. Stereo tracking of a single moving target is a difficult problem that becomes even more challenging when rough terrain causes significant and high-acceleration motion of the camera in the world.

This work uses inertial measurements to estimate camera ego-motion and uses these estimates to augment stereo tracking. Some initial results from an outdoor rover are planned for presentation at the 2001 IEEE International Conference on Robotics and Automation, illustrating the efficacy of the method. The compensation introduces fast but predictable transients in image location, while reducing the amplitude of the transients caused by rough terrain.
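
As a rough illustration of the idea, and not code from the fellowship work, the sketch below predicts where a tracked target should appear after a camera rotation estimated from gyro measurements, assuming a calibrated camera and ignoring camera translation over a single frame.

    import numpy as np

    def compensate_ego_motion(uv, K, R_delta):
        """Predict where a tracked image point will appear after a camera
        rotation R_delta estimated from inertial (gyro) measurements.

        uv:      (u, v) pixel location of the target in the previous frame
        K:       3x3 camera intrinsic matrix
        R_delta: camera orientation change between frames (maps vectors
                 expressed in the new camera frame to the old camera frame)

        For pure rotation the predicted shift is exact; camera translation is
        ignored here, which is reasonable for distant targets over one frame.
        """
        ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # pixel -> viewing ray
        ray = R_delta.T @ ray        # express the same ray in the new camera frame
        uvw = K @ ray                # reproject into the new image
        return uvw[:2] / uvw[2]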


Evaluation of feature-based room maps

This paper represents my first published work with an undergraduate.

In this paper, we describe a method for evaluating the consistency and sufficiency of a proposed feature-based room map of unspecified origin with regard to a single sensor measurement. The method evaluates how plausibly the proposed map could have given rise to a given dense sonar scan. The features from which the map is derived can come from any of a multitude of sensors.

These quality ratings allow hypothesized room maps to be ruled out if they are inconsistent with observed data, and allow an autonomous robotic system to arrive at a ``most plausible explanation'' for a room map based on sensor measurements.
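
The sketch below is an illustrative stand-in for this kind of quality rating, not the paper's actual metric: it scores a hypothesized map by the average log-likelihood of the sonar ranges it predicts, under an assumed Gaussian range-error model. The predict_range callable and the sigma value are hypothetical.

    import numpy as np

    def map_consistency(scan, predict_range, sigma=0.05):
        """Score how plausibly a hypothesized room map could give rise to a
        dense sonar scan (illustrative stand-in for the paper's method).

        scan:          iterable of (bearing, measured_range) pairs
        predict_range: function mapping a bearing to the range the hypothesized
                       map would produce (e.g., by ray casting from the robot pose)
        sigma:         assumed standard deviation of the range sensor [m]

        Returns an average log-likelihood; maps inconsistent with the scan
        receive low scores and can be ruled out.
        """
        log_l = 0.0
        for bearing, r_meas in scan:
            r_pred = predict_range(bearing)
            log_l += -0.5 * ((r_meas - r_pred) / sigma) ** 2 \
                     - np.log(sigma * np.sqrt(2 * np.pi))
        return log_l / len(scan)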


Model-based Tracking of 2DOF arm with unknown link lengths

This work was done at the University of Illinois, as an extension to my core Doctoral work on Model-Based Tracking of articulated objects. It involves tracking planar movement of the human arm when the lengths of the arm's links are unknown.

This sequence shows the tracking of an arm with 2 degrees of freedom active. The lengths of each link of the arm are initialized correctly, so the tracking progresses much as if the link lengths had been measured and set. In the estimation process, which is based on the same mathematical framework (extended Kalman filtering) as the previous cases, the joint angles are modeled as moving with constant velocity and the link lengths are modeled as unknown constants. The sequence above illustrates the initiation of tracking, but the full sequence is also available.
This sequence shows the tracking of the same arm, but with the lengths of each link initialized incorrectly, so the link-length estimates must converge toward their true values as tracking proceeds. The sequence above illustrates the initiation of tracking, but the full sequence is also available.
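
The sketch below shows the kind of state vector and process model this description implies: constant-velocity joint angles and constant (but unknown) link lengths in a single extended Kalman filter state. The exact state ordering, measurement model, and noise terms are assumptions for illustration.

    import numpy as np

    # Assumed state layout for the planar 2-DOF arm (illustration only):
    #   x = [theta1, theta2, dtheta1, dtheta2, L1, L2]
    # Joint angles follow a constant-velocity model; link lengths are
    # modeled as unknown constants, so they sit in the state vector but
    # are not changed by the prediction step.

    def arm_predict(x, P, dt, Q):
        F = np.eye(6)
        F[0, 2] = dt   # theta1 integrates dtheta1
        F[1, 3] = dt   # theta2 integrates dtheta2
        x = F @ x
        P = F @ P @ F.T + Q
        return x, P

    def forward_kinematics(x):
        """Planar elbow and wrist positions implied by the current state,
        used to form the measurement prediction against image features."""
        t1, t2, _, _, L1, L2 = x
        elbow = np.array([L1 * np.cos(t1), L1 * np.sin(t1)])
        wrist = elbow + np.array([L2 * np.cos(t1 + t2), L2 * np.sin(t1 + t2)])
        return elbow, wrist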


Development of a Visual Space-Mouse

This work, done by Tobias Kurpjuhn, a researcher from the Technische Universität München (Technical University of Munich) visiting our lab at the University of Illinois, involves visual control of a robotic arm.

This paper proposes an intuitive and convenient visually guided interface for controlling a robot with six degrees of freedom. Two orthogonal cameras are used to track the position and orientation of the user's hand, allowing the user to control the robotic arm in a natural way.
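
As a rough sketch of how two orthogonal views could be combined into a 3-D hand position under a simple scaled-orthographic assumption (the actual interface's calibration and orientation tracking are more involved and are described in the paper), the function below and its scale parameters are hypothetical:

    import numpy as np

    def hand_position(front_uv, side_uv, scale_front, scale_side):
        """Combine hand detections from two orthogonal cameras into a rough
        3-D position (illustrative only).

        front_uv: pixel offset of the hand from the image center in the
                  front-facing camera, which observes the X-Y plane
        side_uv:  pixel offset in the side-facing camera, observing Z-Y
        scale_*:  assumed metres-per-pixel factors at the working distance
        """
        x = front_uv[0] * scale_front
        y_front = front_uv[1] * scale_front
        z = side_uv[0] * scale_side
        y_side = side_uv[1] * scale_side
        y = 0.5 * (y_front + y_side)   # the vertical axis is seen by both cameras
        return np.array([x, y, z])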

Tobias has written a web page, housed at the Computer Vision and Robotics lab at the University of Illinois, about this project. Videos of this project can be found there as well.
