Sensor-based path planning is an active field in robotics, and many different methods have been proposed for this problem. Many popular approaches, however, rely heavily on distance information and on an explicit representation of the workspace. Landmark-based methods, on the other hand, mainly exploit angular information. One of the first methods in this area was the “snapshot model”, proposed to explain the “homing” behavior of ants and bees. In the snapshot model, a snapshot is taken from a position in the environment; it represents a sequence of landmarks labeled by their compass bearings. According to this model, the current and the goal snapshots are matched with the aid of a compass, and a route to the goal is computed.
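The flavor of snapshot matching can be illustrated with the average-landmark-vector (ALV) simplification, in which each snapshot is reduced to the mean of the unit bearing vectors and the homing direction is the difference of the two ALVs. This is only a minimal sketch of the general idea, not the control law studied in this paper; the landmark layout is an arbitrary assumption.

```python
import math

def snapshot(position, landmarks):
    """Panoramic snapshot: the compass bearing (radians) to each landmark."""
    return [math.atan2(ly - position[1], lx - position[0]) for (lx, ly) in landmarks]

def average_landmark_vector(position, landmarks):
    """Reduce a snapshot to the mean of the unit bearing vectors."""
    thetas = snapshot(position, landmarks)
    n = len(thetas)
    return (sum(math.cos(t) for t in thetas) / n,
            sum(math.sin(t) for t in thetas) / n)

def home_vector(current, goal_alv, landmarks):
    """Homing direction: current ALV minus the stored goal ALV.
    Approximately points from the current position toward the goal."""
    ax, ay = average_landmark_vector(current, landmarks)
    return (ax - goal_alv[0], ay - goal_alv[1])
```

Note that only bearings enter the computation: the robot stores the goal ALV once and needs neither its own coordinates nor the landmark positions at homing time.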
We follow an angle-based approach similar to the snapshot model. We assume a point robot that is unaware of its position and orientation but is equipped with an infinite-radius panoramic sensor that perfectly computes the direction and identity of every landmark in the scene. This work is a feasibility study of the capabilities of snapshot-based techniques when they are restricted to using only angular information. The problem is studied in isolation and under this ideal model because the objective is to show that, under these assumptions, the robot can reach every goal configuration in a planar, obstacle-free workspace.
To study the capabilities of current angle-based techniques, extensive experiments have been conducted with a simulator that computes the set of points reachable by a simple angle-based control law. As the left side of the following figure shows, this control law cannot move the robot to every point on the plane. For this reason, a second control law has been developed that reaches the complementary set of points in a three-landmark workspace. Based on geometric observations about the perceived angles, a hybrid system has been built that combines the reachability sets of the two control laws: its discrete state is governed by a finite automaton that selects the appropriate control law for the given goal, while its continuous state is driven by the two complementary control laws. With this system, every point on the plane is reachable except for points on the circumscribed circle of the three landmarks, as the right side of the following figure shows. In practice, sensor noise will introduce errors that affect the results presented in this paper. Nevertheless, combining the proposed framework with other methods can lead to fast and reliable navigation.
Results from simulation. The robot’s initial position is point A, and three landmarks L_1, L_2, L_3 are visible in the scene. Each pixel is a candidate goal and is painted gray if it is reachable by (left) the basic control law or (right) the final framework presented in this paper.
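To make the notion of an angle-based control law concrete, the sketch below implements a hypothetical stand-in: the robot moves downhill on a smooth mismatch between its current bearings and the goal snapshot. This is an illustration of the general approach, not either of the two control laws of this paper; the error function, landmark layout, and step size are all assumptions.

```python
import math

def bearing_error(p, goal_bearings, landmarks):
    """Smooth mismatch between the current bearings and the goal snapshot."""
    err = 0.0
    for (lx, ly), g in zip(landmarks, goal_bearings):
        theta = math.atan2(ly - p[1], lx - p[0])
        err += 1.0 - math.cos(theta - g)  # handles angle wrap-around
    return err

def angle_based_step(p, goal_bearings, landmarks, step=0.02, h=1e-5):
    """One fixed-length step downhill on the bearing error (numeric gradient)."""
    gx = (bearing_error((p[0] + h, p[1]), goal_bearings, landmarks)
          - bearing_error((p[0] - h, p[1]), goal_bearings, landmarks)) / (2 * h)
    gy = (bearing_error((p[0], p[1] + h), goal_bearings, landmarks)
          - bearing_error((p[0], p[1] - h), goal_bearings, landmarks)) / (2 * h)
    norm = math.hypot(gx, gy)
    if norm < 1e-12:
        return p  # stationary point: the error cannot be reduced locally
    return (p[0] - step * gx / norm, p[1] - step * gy / norm)
```

Iterating such a step drives the robot to goals where the descent encounters no stationary point along the way; like the basic control law in the figure, it cannot reach every goal, which is precisely what motivates combining two complementary laws in a hybrid system.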
Selecting angular information as the sensory cue for a navigational task reduces the task’s information requirements: landmarks can still be used even when their exact positions cannot be determined. Cameras, a popular sensor for mobile robots, do not readily provide range information, yet computing the bearing angles of image features is a trivial task. Furthermore, wireless hardware has been proposed as a possible sensor for mobile robots; directional wireless antennas can provide angular information about the location of wireless base stations. This study of angle-based control laws strongly suggests that mobile robot navigation can be achieved without metric reconstruction of the scene or explicit computation of distance information.
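For a calibrated pinhole camera, for instance, the horizontal bearing of a feature follows directly from its image column; the principal point and focal length below are hypothetical calibration values chosen only for illustration.

```python
import math

def pixel_to_bearing(u, cx=320.0, fx=500.0):
    """Horizontal bearing (radians) of an image feature at column u under a
    pinhole model: cx is the principal point and fx the focal length, both in
    pixels (assumed, illustrative calibration values)."""
    return math.atan2(u - cx, fx)
```

No range estimate is involved: a single intrinsic calibration converts pixel coordinates to bearing angles, which is all the angle-based framework requires.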