This is an introduction to the set of approaches in the KAMBARA project. Please have a look at the publications for a more technical and detailed description.
Many approaches have been taken to the problem of motion control for underwater vehicles, ranging from traditional control through modern control to a variety of neural-network-based architectures. Most existing systems control only a limited set of motions, yet require detailed dynamic models of the vehicle and a number of simplifying assumptions, which may limit their operating regime and/or robustness. The result is expensive, sensitive, and unsatisfactory.
We seek an alternative. We are developing a method by which Kambara learns to control its own motions directly from experience of its actions in the world. Kambara starts with no explicit models of itself or of the effect that any action may produce. Our method uses a connectionist (artificial neural network) implementation of model-free reinforcement learning. Kambara learns in response to a reward signal, attempting to maximize its total reward over time, as it converges on a correct mapping from how it wishes to move to what specific action it should take.
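To make the reward-driven idea concrete, here is a minimal sketch of model-free reinforcement learning on an invented toy task: tabular Q-learning of thrust actions on a discretised 1-D positioning problem. Kambara's actual controller is a connectionist implementation, and all states, actions, rewards, and gains below are assumptions for illustration only.

```python
import random

N_STATES = 11          # discretised positions 0..10 (hypothetical)
GOAL = 5               # target position
ACTIONS = [-1, +1]     # thrust left / thrust right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# No model of the vehicle: only a table of action values, learned from reward.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply a thrust action; reward is highest at the goal position."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else -abs(nxt - GOAL) / N_STATES
    return nxt, reward

random.seed(0)
for episode in range(500):
    s = random.randrange(N_STATES)
    for _ in range(30):
        # Epsilon-greedy: mostly exploit the current value estimates, sometimes explore.
        a = random.randrange(2) if random.random() < EPS else max(range(2), key=lambda i: Q[s][i])
        s2, r = step(s, ACTIONS[a])
        # Q-learning update: move toward reward plus discounted best future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy maps desired motion to action: thrust toward the goal.
policy = [ACTIONS[max(range(2), key=lambda i: Q[s][i])] for s in range(N_STATES)]
```

After training, the greedy policy commands thrust toward the goal from either side, despite the learner never having been given a model of how actions move the vehicle.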
Adaptive sensor data interpretation
Recent progress in the field of blind signal separation, together with biological insights that hold independently of the species investigated, suggests that elementary sensor data processing is driven by highly adaptive systems. These systems depend mostly on the nature of the sensors (which are usually also actuators) and continuously fine-tune them. This contrasts with an expectation-driven approach, in which sensor data streams are compared directly against a set of predefined features.
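The adaptive, feature-free idea can be sketched with a small blind source separation demo (a FastICA-style fixed-point iteration): the unmixing is learned purely from the statistics of the sensor streams, with no predefined features. The signals, mixing matrix, and iteration counts below are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
# Two hypothetical "source" signals the sensors cannot observe directly.
S = np.vstack([np.sin(2 * np.pi * 1.7 * t),
               np.sign(np.sin(2 * np.pi * 0.9 * t))])
A = np.array([[0.7, 0.3], [0.4, 0.8]])   # unknown mixing
X = A @ S                                # what the sensors actually observe

# Whiten the observations (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = (E @ np.diag(d ** -0.5) @ E.T) @ X

# Symmetric FastICA with a tanh nonlinearity: adapt an unmixing matrix W
# using only the data's own statistics.
W = rng.standard_normal((2, 2))
for _ in range(200):
    g = np.tanh(W @ Xw)
    W = (g @ Xw.T) / Xw.shape[1] - np.diag((1 - g ** 2).mean(axis=1)) @ W
    # Symmetric decorrelation: W <- (W W^T)^(-1/2) W via SVD.
    u, _, vt = np.linalg.svd(W)
    W = u @ vt

recovered = W @ Xw   # estimates of the sources, up to sign and scale
```

Each recovered component ends up strongly correlated with one of the original sources, even though the algorithm was told nothing about sinusoids or square waves.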
Underwater Visual Servo-control
Controlling its own motion is one part of the AUV's challenge; another is autonomously determining where to go. To guide itself, Kambara is equipped with color video cameras, video digitizers and a real-time computing system. Although not without difficulties underwater, we are investigating color stereo perception because of the availability of detectable features and the importance of visual cues to the tasks we envision for Kambara.
We use visual information not to build maps for navigation, but for visual servo control. We apply correlation-based feature tracking in a hierarchical matching scheme to track features from frame to frame. Correlating visual features between the two cameras enables Kambara to triangulate the distance to a feature.
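The two steps just described can be sketched on synthetic data: locate a feature patch in each camera image by normalized cross-correlation, then triangulate depth from the disparity between the two matches. The images, patch positions, focal length, and baseline below are all made-up values for illustration.

```python
import numpy as np

def ncc_match(image, template):
    """Return (row, col) of the best normalized cross-correlation match."""
    th, tw = template.shape
    tz = (template - template.mean()) / template.std()
    best, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r + th, c:c + tw]
            wz = (win - win.mean()) / (win.std() + 1e-9)
            score = (tz * wz).mean()       # correlation in [-1, 1]
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

rng = np.random.default_rng(1)
feature = rng.standard_normal((8, 8))      # the tracked feature patch

# Synthetic stereo pair: the feature sits 8 pixels further left in the
# right image, as it would for an object in front of a stereo rig.
left = rng.standard_normal((40, 60)) * 0.1
right = rng.standard_normal((40, 60)) * 0.1
left[12:20, 30:38] += feature
right[12:20, 22:30] += feature

_, cl = ncc_match(left, feature)
_, cr = ncc_match(right, feature)
disparity = cl - cr                        # pixels

FOCAL_PX = 700.0    # assumed focal length in pixels
BASELINE_M = 0.15   # assumed camera separation in metres
depth = FOCAL_PX * BASELINE_M / disparity  # metres to the feature
```

The same correlation score, computed over an image pyramid, gives the hierarchical coarse-to-fine matching mentioned above; here a single scale keeps the sketch short.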
The motion of the visual features between images directly guides the motion of the AUV, just as you use the motion of the road edge to help you adjust the steering of your car. We are currently implementing simple behaviors to regulate position and velocity relative to visual features. In this manner we intend Kambara to hold station on a reef, swim along a pipe, or perform a repeatable search of the sea floor.
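A station-keeping behavior of this kind can be sketched as a proportional control loop: the pixel offset of a tracked feature from the image centre is converted into a lateral velocity command. The focal length, gain, dynamics, and distances below are invented numbers, not Kambara's actual parameters.

```python
FOCAL_PX = 700.0   # assumed focal length in pixels
TARGET_PX = 0.0    # hold the feature at the image centre
GAIN = 0.4         # proportional gain (1/s), chosen for the demo
DT = 0.1           # control period in seconds

position = 2.0     # vehicle's lateral offset from the station point (m)
depth = 5.0        # distance to the tracked feature (m)

for _ in range(200):
    # Where the feature projects in the image, given the current offset.
    feature_px = FOCAL_PX * position / depth
    error_px = feature_px - TARGET_PX
    # Convert pixel error to a lateral velocity command and integrate
    # the (idealised) vehicle motion over one control period.
    velocity = -GAIN * error_px * depth / FOCAL_PX
    position += velocity * DT
```

Under these idealised dynamics the offset decays geometrically toward zero, i.e. the vehicle holds station on the feature; the same loop with a nonzero pixel setpoint rate would make it swim along a feature such as a pipe.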
Students are a vital part of our project and their individual projects contribute necessary component and algorithm development to our overall program.