Cooperative Air and Ground Surveillance
A Scalable Approach to the Detection and Localization
of Targets by a Network of UAVs and UGVs
BY BEN GROCHOLSKY, JAMES KELLER, VIJAY KUMAR, AND GEORGE PAPPAS
Unmanned aerial vehicles (UAVs) can be used to
cover large areas searching for targets. However, sensors on UAVs are typically limited in
their accuracy of localization of targets on the
ground. On the other hand, unmanned
ground vehicles (UGVs) can be deployed to accurately
locate ground targets, but they have the disadvantage of not
being able to move rapidly or see through such obstacles as
buildings or fences. In this article, we describe how we can
exploit this synergy by creating a seamless network of
UAVs and UGVs. The keys to this are our framework and
algorithms for search and localization, which are easily scalable to large numbers of UAVs and UGVs and are transparent to the specificity of individual platforms. We describe
our experimental testbed, the framework and algorithms,
and some results.
Introduction
The use of robots in surveillance and exploration is gaining prominence. Typical applications include air- and
ground-based mapping of predetermined areas for tasks
such as surveillance, target detection, tracking, and search
and rescue operations. The use of multiple collaborative
robots is ideally suited for such tasks. A major thrust within this area is the optimal control and use of robotic
resources to reliably and efficiently achieve the goal at
hand. This article addresses this very problem of coordinated deployment of robotic sensor platforms.
Consider the task of reliably detecting and localizing an
unknown number of features within a prescribed search area.
In this setting, it is highly desired to fuse information from
all available sources. It is also beneficial to proactively focus
the attention of resources, minimizing the uncertainty in
detection and localization. Deploying teams of robots working towards this common objective offers several advantages.
Large environments preclude the option for complete sensor
coverage. Attempting to increase coverage leads to tradeoffs
between resolution or accuracy and computational constraints in terms of required storage and processing. A scalable and flexible solution is therefore desirable.
In this article, we present our approach to cooperative
search, identification, and localization of targets using a
heterogeneous team of fixed-wing UAVs and UGVs.
There are many efforts to develop novel UAVs and UGVs
for field applications. Here, we assume standard solutions to
low-level control of UAVs and UGVs and inexpensive off-the-shelf sensors for target detection. Our main contribution is a framework that is scalable to multiple vehicles and
decentralized algorithms for control of each vehicle that are
transparent to the specificity of the composition of the
team and the behaviors of other members of the team. In
contrast to much of the literature that addresses the very
difficult planning problems for coverage and search (see [1],
for example) and for localization (see [2], for example),
our interests are in reactive behaviors that 1) are easily
implemented; 2) are independent of the number or the
specificity of vehicles; and 3) offer guarantees for search
and for localization.
A key aspect of this work is the synergistic integration of
aerial and ground vehicles that exhibit complementary capabilities and characteristics. Fixed-wing aircraft offer broad field of view and
rapid coverage of search areas.
However, minimum limits on operating airspeed and altitude, combined with attitude uncertainty, place a lower limit on their ability to resolve and localize ground features. Ground vehicles, on the other hand, offer
high-resolution sensing over relatively short ranges, with the disadvantage of obscured views and slow coverage.
The use of aerial- and ground-based sensor platforms is
closely related to other efforts to exploit the truly complementary capabilities of air and ground robots. Examples of
such initiatives include the DARPA PerceptOR program [3]
and Fly Spy project [4]. Pursuit-evasion strategies with
ground vehicles and helicopters are described in [5]. The use
of aerial-vehicle-mounted cameras or fixed ground cameras to
guide ground robots is discussed in [6]. However, these
approaches don’t readily lend themselves to scaling up to large
numbers or to tasks other than navigation. Further, none of
these approaches incorporates the level of integration across
aerial and ground vehicles that is captured here.
Our framework and algorithms are built on previous work in
decentralized data fusion using decentralized estimation algorithms derived from linear dynamic models with assumptions of
Gaussian noise [7]. We use the architecture proposed here and in
[8]. In [9], we developed control algorithms that refine the quality of estimates, addressing both the detection and the localization problems. Our approach to active sensing and localization
with UAVs and UGVs, briefly summarized in this article, is discussed in greater detail in [10]. Our work on scalable coordinated coverage with UAVs is also discussed in a previous paper [11].
This article is organized as follows. “Experimental Testbed” describes the demonstration system. “Framework for
Scalable Information-Driven Coordinated Control” details
the technical approach taken and system architecture. “Air-Ground Coordination” describes the application to our network of aerial and ground vehicles. We describe the
characteristics of the UAV and UGV platforms and comparative qualities of feature observations from onboard cameras,
deriving measurement uncertainty for features observed by
vision sensors with uncertain state. These elements are combined and applied to an illustrative example of collaborative
ground feature detection and localization. Concluding
remarks follow.
Experimental Testbed
Figure 1 illustrates the UAVs in use at the GRASP Laboratory of the University of Pennsylvania. Each UAV consists of
an airframe and engine, avionics package, onboard laptop,
and additional sensing payload. We briefly describe the basic
components of our UAVs and UGVs as well as the overall
system architecture. Utilizing off-the-shelf airframe and
autopilot components allows for effort to be directed at mission-level control schemes. A formation flight experiment is
described in [12].
UAV Airframe and Payload
The airframe of each UAV is a quarter-scale Piper Cub J3
model airplane with a wingspan of 104 in (∼2.7 m). The powerful glow fuel engine has a power rating of 3.5 hp, resulting in a maximum cruise speed of 60 kn (∼30 m/s), at altitudes up to 5,000 ft (∼1,500 m), and a flight duration of
15–20 min.
The airframe-engine combination enables having significant scientific payload on board. Figure 1 shows pods that
have been installed underneath each side of the wing containing high-resolution cameras and inertial measurement units
(IMUs) as well as deployable sensors, beacons, and landmarks.
More precisely, each UAV can carry the following internal
and external payloads:
◆ onboard embedded PC
◆ IMU 3DM-G from MicroStrain
◆ external global positioning system (GPS): Superstar
GPS receiver from CMC electronics, 10 Hz data
◆ camera DragonFly IEEE-1394 1024 × 768 at 15
frames/s from Point Grey Research
◆ custom-designed camera-IMU Pod includes the
IMU and the camera mounted on the same plate.
The plate is soft mounted on four points inside the
pod. Furthermore, the pan motion of the pod can be
controlled through an external-user PWM port on
the avionics.
◆ custom-designed deployable Pod could be used to carry sensors, beacons, landmarks, or even robotic agents.

Figure 1. PennUAVs: Two Piper J3 Cub model airplanes fitted with external payload pods.

Figure 2. Multi-UAV and ground station functional architecture.
UAV Avionics and Ground Station
Each UAV is controlled by a highly integrated, user-customizable Piccolo avionics board, which is manufactured
by CloudCap Technologies [13]. The avionics board
comes equipped with the core autopilot, a sensor suite
(which includes GPS), and an IMU consisting of three
gyros, three accelerometers, and two pressure ports, one
for barometric altitude and one for airspeed. A 40-MHz
embedded Motorola MPC 555 Power PC receives the
state information from all sensors and runs core autopilot
loops at a rate of 20 Hz, commanding the elevator,
ailerons, and rudder and throttle actuators as well as external-user payload ports.
Each UAV continuously communicates with the ground
station. The communication occurs at 1 Hz and the range of
the communication can reach up to 6 mi. The ground station
performs differential GPS corrections and updates the flight
plan, which is a sequence of three-dimensional (3-D) waypoints connected by straight lines. The UAVs can also be commanded in a similar way from a supervisory controller (residing
on board the UAV laptop), allowing further decentralization in
the physical layer of the architecture (see Figure 2).
The ground station can concurrently monitor up to ten
UAVs. Direct communication between UAVs can be emulated through the ground or by using the local communication channel on the UAVs (802.11b wireless network card).
The ground station has an operator interface program
(shown in Figure 3), which allows the operator to monitor
flight progress, obtain telemetry data, or dynamically change
the flight plans using georeferenced maps. Furthermore, the
operator interface program can act as a server and enable
multiple instances of the same software to communicate over
a TCP/IP connection. This allows us to monitor or command and control the experiment in real time, remotely.
The UGV Platform
The ground vehicles, shown in Figure 4, are commercial four-wheel-drive model trucks modified and augmented with onboard computers, stereo firewire cameras, GPS, and odometric and inertial sensors. Communication between ground vehicles and to the aerial platform base station is through an ad hoc 802.11b network.

Figure 3. Ground station operator interface showing the flight plan and actual UAV position (August 2003, Fort Benning, Georgia).

Figure 4. Ground robot platforms.
Framework for Scalable Information-Driven
Coordinated Control
In this section, we briefly discuss our framework for modeling and control that leads to an information-driven framework for the execution of multirobot sensing missions. We
use the active sensor network (ASN) architecture proposed in
[14]. The key idea is that the value of a sensing action is
marked by its associated reduction in uncertainty. Mutual information [15] formally captures the utility of sensing actions in these terms. Dependence of the utility on robot
and sensor state and actions allows us to formulate the tasks of
coverage, search, and localization as optimal control problems.
Target Detection
Following this approach, detection and estimation problems
are formulated in terms of summation and propagation of
formal information measures. The feature-detection and
feature-location estimation processes are now presented
along with descriptions of the action utility, control strategy,
and architecture network node structure.
We use certainty grids [16] as the representation for the search and coverage problems. The certainty grid is a discrete-state binary random field in which each element encodes the probability of the corresponding grid cell being in a particular state. For the feature detection problem, the state x of the ith cell C_i can have one of two values: target and no target. This is written as s(C_i) = {target | no target}. The information measure ŷ_{d,i}(k|k), where subscript d denotes detection, stores the accumulated target detection certainty for cell i at time k:

\hat{y}_{d,i}(k|k) = \log P(x) = \log P(s(C_i) = \text{target}). \qquad (1)
Information associated with the likelihood of sensor measurements z—which, again, take one of two values, target or no target—is given by

i_{d,s}(k) = \log P(z(k) \mid x). \qquad (2)

The information measure that incorporates the current probabilities of detected targets is updated by the log-likelihood form of Bayes rule:

\hat{y}_{d,i}(k|k) = \hat{y}_{d,i}(k|k-1) + \sum_{s} i_{d,s}(k) + C, \qquad (3)

where C is a normalization factor.
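For concreteness, a minimal sketch of how the log-likelihood detection update (1)–(3) might be implemented; the grid size, detection probability, and false-alarm rate below are illustrative assumptions rather than the values used on the platforms.

```python
import numpy as np

# Hypothetical 50 x 200 m search area discretized into 1-m cells.
GRID_SHAPE = (50, 200)

# Accumulated detection information per cell, initialized to a uniform
# prior P(target) = 0.5 and stored as y_d = log P(target), as in (1).
y_d = np.full(GRID_SHAPE, np.log(0.5))

# Assumed sensor model: detection probability and false-alarm rate.
P_DETECT = 0.8
P_FALSE_ALARM = 0.1

def detection_update(y_d, cell, z_is_target):
    """Log-likelihood Bayes update of one observed cell, as in (2)-(3)."""
    # Observation information i_d = log P(z | x) for both cell hypotheses.
    if z_is_target:
        log_lik_target, log_lik_empty = np.log(P_DETECT), np.log(P_FALSE_ALARM)
    else:
        log_lik_target, log_lik_empty = np.log(1 - P_DETECT), np.log(1 - P_FALSE_ALARM)

    # Accumulate the information and renormalize against the complementary
    # hypothesis; the normalization plays the role of the constant C in (3).
    log_p_target = y_d[cell] + log_lik_target
    log_p_empty = np.log1p(-np.exp(y_d[cell])) + log_lik_empty
    y_d[cell] = log_p_target - np.logaddexp(log_p_target, log_p_empty)
    return y_d

# Example: two consecutive "target" reports over the same cell.
y_d = detection_update(y_d, (10, 25), z_is_target=True)
y_d = detection_update(y_d, (10, 25), z_is_target=True)
print(np.exp(y_d[10, 25]))   # posterior P(target) for that cell, about 0.98
```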
Target Location Estimation
The coverage algorithm described above allows us to identify cells that have an acceptably high probability of containing features or targets of interest. The localization of features
or targets is the second part of the task. This problem is
posed as a linearized Gaussian estimation problem. As in [9],
the information form of the Kalman filter is used. New target location filters are instantiated as the detection process
reaches a set threshold.
In this problem, we redefine the state vector y_f to be the coordinates of all the features detected by the target detection algorithm, with y_{f,i} denoting the (x, y) coordinates of the feature in a global coordinate system. Note that the target detection algorithm can run concurrently, updating the state vector with new candidate features and coordinates.
The information filter maintains an information state vector ŷ_{f,i}(k|k) and matrix Y_{f,i}(k|k), distinguished by subscript f for each feature i, that relate to the feature estimate mean x̂_{f,i}(k|k) and covariance P_{f,i}(k|k) by

\hat{y}_{f,i}(k|k) = P_{f,i}^{-1}(k|k)\,\hat{x}_{f,i}(k|k) \qquad (4)

Y_{f,i}(k|k) = P_{f,i}^{-1}(k|k). \qquad (5)

Each sensor measurement z contributes an information vector and matrix that captures the mean and covariance of the observation likelihood P(z_s(k) \mid x) \sim N(\mu_s, \Sigma_s):

i_{f,s}(k) = \Sigma_s^{-1}(k)\,\mu_s(k), \qquad I_{f,s}(k) = \Sigma_s^{-1}(k). \qquad (6)

The fusion of N_s sensor measurements with accumulated prior information is simply

\hat{y}_{f,i}(k|k) = \hat{y}_{f,i}(k|k-1) + \sum_{j=1}^{N_s} i_{f,j}(k)

Y_{f,i}(k|k) = Y_{f,i}(k|k-1) + \sum_{j=1}^{N_s} I_{f,j}(k), \qquad (7)

from which the state estimate for the ith target and the covariance associated with it can be easily recovered.
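A small sketch of the information-form fusion step (4)–(7) for a single two-dimensional feature; the prior and observation values are made-up numbers chosen only to mimic a loose aerial prior refined by tighter ground observations.

```python
import numpy as np

def to_information(x_hat, P):
    """Convert mean/covariance to information form, as in (4)-(5)."""
    Y = np.linalg.inv(P)
    return Y @ x_hat, Y

def from_information(y_hat, Y):
    """Recover the state estimate and covariance from the information form."""
    P = np.linalg.inv(Y)
    return P @ y_hat, P

def fuse(y_hat, Y, observations):
    """Add N_s observation contributions, as in (7).

    Each observation is (mu_s, Sigma_s), the mean and covariance of the
    observation likelihood in (6)."""
    for mu_s, Sigma_s in observations:
        S_inv = np.linalg.inv(Sigma_s)
        y_hat = y_hat + S_inv @ mu_s        # i_{f,s} = Sigma_s^-1 mu_s
        Y = Y + S_inv                       # I_{f,s} = Sigma_s^-1
    return y_hat, Y

# Prior from an aerial observation: loose, elongated uncertainty (assumed values).
prior_mean = np.array([12.0, 48.0])                 # (x, y) in metres
y_hat, Y = to_information(prior_mean, np.diag([25.0, 9.0]))

# Two ground observations with much tighter covariance (assumed values).
obs = [(np.array([13.1, 47.2]), np.diag([0.2, 0.5])),
       (np.array([12.9, 47.4]), np.diag([0.2, 0.5]))]
y_hat, Y = fuse(y_hat, Y, obs)

x_hat, P = from_information(y_hat, Y)
print(x_hat, np.sqrt(np.diag(P)))   # fused estimate and its 1-sigma bounds
```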
Decentralization, in the sense that nodes maintain local
knowledge of aggregate system information, is made possible
by the additive structure of the estimate update (3) and (7).
This characteristic allows all nodes in a network to be updated
through propagation of internodal information differences. A
communications manager known as a channel filter implements
this process at each interconnection [7].
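A greatly simplified sketch of the idea behind the channel filter: because the updates (3) and (7) are additive, a node only needs to transmit the difference between its current information and what it has already sent on a link. The class below ignores bidirectional traffic and network loops, which the full treatment in [7] handles.

```python
import numpy as np

class ChannelFilter:
    """Per-link bookkeeping of the information already shared (simplified)."""
    def __init__(self, dim):
        self.common_y = np.zeros(dim)
        self.common_Y = np.zeros((dim, dim))

    def outgoing(self, local_y, local_Y):
        """Difference between local knowledge and what the link has seen."""
        dy, dY = local_y - self.common_y, local_Y - self.common_Y
        self.common_y, self.common_Y = local_y.copy(), local_Y.copy()
        return dy, dY

def receive(local_y, local_Y, dy, dY):
    """Fusing a neighbour's difference is a plain sum, thanks to (7)."""
    return local_y + dy, local_Y + dY

# Example: both nodes share a prior; A adds observations and sends the change.
dim = 2
prior_Y = 0.1 * np.eye(dim)
a_y, a_Y = np.zeros(dim), prior_Y.copy()             # node A
b_y, b_Y = np.zeros(dim), prior_Y.copy()             # node B
link = ChannelFilter(dim)
link.common_Y = prior_Y.copy()                       # the prior is already common

a_y, a_Y = a_y + np.array([1.0, 0.5]), a_Y + np.eye(dim)   # A fuses observations
b_y, b_Y = receive(b_y, b_Y, *link.outgoing(a_y, a_Y))     # B adds the difference
print(np.allclose(a_Y, b_Y) and np.allclose(a_y, b_y))     # both now agree: True
```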
Uncertainty Reducing Control
Equations (2)–(7) detail how sensing processes influence
estimate uncertainty. An entropy-based measure [15] provides a natural quantitative measure of information in
terms of the compactness of the underlying probability distributions. Mutual information measures the information
gain to be expected from a sensor before making an observation. Most importantly, this allows a priori prediction of
the expected information outcome associated with a
sequence of sensing actions.
The control objective is to reduce estimate uncertainty.
Because this uncertainty directly depends on the system state
and action, each vehicle chooses an action that results in a
IEEE Robotics & Automation Magazine
19
maximum increase in utility or the best reduction in the
uncertainty. New actions lead to an accumulation of information and a change in overall utility. Thus, local controllers that
direct the vehicle and sensors according to the mutual information gradient with respect to the system state are implemented on each robotic sensor platform. Analytic gradient
expressions are available for the models used here in terms of
the sensor quality, observer state, and estimate uncertainty.
This is referred to as information surfing since the vehicles are,
in essence, driven by information gain contours.
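A sketch of information surfing for one Gaussian feature estimate and an assumed range-bearing style sensor model: the platform takes steps along a numerically evaluated mutual-information gradient. The noise parameters and step sizes are illustrative assumptions.

```python
import numpy as np

def observation_information(sensor_pos, feature_pos, sigma_r=1.0, sigma_b=0.05):
    """Information contribution H^T R^-1 H of a range-bearing observation.

    Assumed sensor model, linearized about the current feature estimate."""
    dx, dy = feature_pos - sensor_pos
    r2 = dx * dx + dy * dy
    r = np.sqrt(r2)
    H = np.array([[dx / r, dy / r],            # range row of the Jacobian
                  [-dy / r2, dx / r2]])        # bearing row of the Jacobian
    R_inv = np.diag([1.0 / sigma_r**2, 1.0 / sigma_b**2])
    return H.T @ R_inv @ H

def mutual_information(sensor_pos, feature_pos, Y_prior):
    """Expected information gain of one observation from sensor_pos."""
    I_obs = observation_information(sensor_pos, feature_pos)
    _, logdet_post = np.linalg.slogdet(Y_prior + I_obs)
    _, logdet_prior = np.linalg.slogdet(Y_prior)
    return 0.5 * (logdet_post - logdet_prior)

def gradient_step(sensor_pos, feature_pos, Y_prior, step=0.2, eps=1e-3):
    """Move the platform along the numerical mutual-information gradient."""
    grad = np.zeros(2)
    for k in range(2):
        d = np.zeros(2); d[k] = eps
        grad[k] = (mutual_information(sensor_pos + d, feature_pos, Y_prior)
                   - mutual_information(sensor_pos - d, feature_pos, Y_prior)) / (2 * eps)
    return sensor_pos + step * grad / (np.linalg.norm(grad) + 1e-9)

# Example: the feature estimate is uncertain mostly along x, so the gradient
# pulls the sensor towards a vantage point that best resolves that coordinate.
Y_prior = np.linalg.inv(np.diag([25.0, 1.0]))
pos = np.array([0.0, -8.0])
for _ in range(20):
    pos = gradient_step(pos, np.array([10.0, 0.0]), Y_prior)
print(pos)
```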
Scalable Proactive Sensing Network
The network of aerial and ground sensor platforms can now
be deployed for searching for targets and for localization.
Both the search and localization algorithms are driven by
information-based utility measures and, as such, are independent of the source of the information, the specificity of the
sensor obtaining the information, and the number of nodes
that are engaged in these actions. Most importantly, these
nodes automatically reconfigure themselves in this task. They
are proactive in their ability to plan trajectories to yield maximum information instead of simply reacting to observations.
Thus, we are able to realize a proactive sensing network with
decentralized controllers, allowing each node to be seamlessly
aware of the information accumulated by the entire team. Local controllers deploy resources accounting for and, in turn, influencing this collective information. Coordinated sensing trajectories result that transparently benefit from complementary subsystem characteristics. Information aggregation and source abstraction result in nodal storage, processing, and communication requirements that are independent of the number of network nodes. The approach scales to indefinitely large sensor platform teams. This scalability is achieved at a potential performance cost through limiting control to local decision making. Alternative distributed and anonymous control approaches that seek global cooperation and assignment are pursued in [8].

Air-Ground Coordination
We have implemented our approach to active sensing on our network of robotic platforms described earlier. We present further detail of the sensing and control schemes used along with experimental results. The search and localization task consists of two components: first, detection of an unknown number of ground features in a specified search area, ŷ_d(k|k); second, the refinement of the location estimates for each detected feature, Y_{f,i}(k|k). Feature observation uncertainty is investigated, confirming the complementary characteristics of air and ground vehicles. Refinement of the location estimates requires the development of a reactive controller that is based on visual feedback. This is discussed, followed by experimental results for a fixed UAV search pattern. Finally, a reactive controller for generating coordinated UAV search trajectories is presented.

Figure 5. Onboard cameras on the (a) UGVs and (b) UAVs.

Figure 6. A ground feature observed by (a) a UAV and (b) a UGV.
Feature Observation Uncertainty
Figure 5 shows the onboard cameras,
which are the primary air and ground
vehicle mission sensors enabling detection
and localization of features in operational
environments. Figure 6 provides example
images of a ground feature observed from
air- and ground-based cameras. The
accuracy of feature observations depends
on the uncertain camera calibration and
platform pose. A linearized error model
developed in [11] is used here.
We consider points across the image
and illustrate how their corresponding
uncertainties on the ground plane vary.
We also compare the uncertainties in
ground feature localization using a UAV
and UGVs. This comparison reveals the
pros and cons of either platform and
highlights the advantage of combining
sensor information from these different
sources for reliable localization.
We simplify the target detection and localization problem by using colored targets and a simple color blob detection algorithm with our cameras. Figure 6 shows a typical
1.1-m × 1.4-m ground target as seen from a UAV and from
a UGV. We use the geometric information specific to our
sensor platforms. The UAV camera looks down from an altitude of 50 m, having a typical pitch angle of θ = 5°. The UGV cameras nominally look horizontally and are positioned 0.32 m above the ground plane. The variance in roll and pitch was estimated to be 4 deg² and that in heading to be 25 deg². The variance in GPS coordinates is 25 m².
Figure 7 displays ground feature position confidence ellipses
associated with different points in the air and ground
imagery with these parameters. Thus, we can visualize how
uncertainties in target localization vary across the field of
view of the camera and from the different perspectives provided by the vehicles.
This comparison confirms the complementary character of
the air and ground vehicles as camera platforms. Airborne
cameras offer relatively uncertain observations over a wide
field of view. Ground vehicles offer high relative accuracy that
degrades out to an effective range of approximately 5 m. The
ground vehicle field of view encompasses the aerial observation confidence region. This allows feature locations to be
reliably handed off to ground vehicles, alleviating any requirement for ground vehicles to search for ground features.
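A rough sketch, in the spirit of the linearized error model of [11], of how attitude and altitude uncertainty map into ground-plane error for a single pixel ray under a flat-ground assumption; the numeric values echo those quoted above, but the geometry and sensor model are simplified for illustration.

```python
import numpy as np

def ground_range_sigma(h, pitch, ray_angle, sigma_pitch, sigma_h):
    """1-sigma along-track ground error of one pixel ray (assumed model).

    h          -- camera height above the ground plane (m)
    pitch      -- angle of the optical axis from vertical (rad)
    ray_angle  -- angle of the pixel ray from the optical axis (rad)
    sigma_pitch, sigma_h -- 1-sigma attitude and height uncertainty
    """
    a = pitch + ray_angle
    # Ground intersection ahead of the nadir point: x = h * tan(a).
    dx_dpitch = h / np.cos(a) ** 2          # sensitivity to attitude error
    dx_dh = np.tan(a)                       # sensitivity to height error
    var = (dx_dpitch * sigma_pitch) ** 2 + (dx_dh * sigma_h) ** 2
    return np.sqrt(var)

# UAV: 50 m altitude, optical axis 5 deg off vertical, 2 deg attitude sigma,
# 5 m GPS/altitude sigma; errors of a few metres across the field of view.
for ray in np.deg2rad([-20.0, 0.0, 20.0]):
    s = ground_range_sigma(50.0, np.deg2rad(5.0), ray, np.deg2rad(2.0), 5.0)
    print("UAV pixel ray %+5.1f deg -> sigma %.1f m" % (np.rad2deg(ray), s))

# UGV: 0.32 m camera height looking almost horizontally; rays a few degrees
# below the horizon strike the ground within metres, and accuracy degrades
# quickly towards the horizon, consistent with the ~5 m effective range.
for d in [3.0, 6.0, 12.0]:
    s = ground_range_sigma(0.32, np.deg2rad(90.0), -np.deg2rad(d),
                           np.deg2rad(2.0), 0.02)
    print("UGV ray %4.1f deg below horizon -> sigma %.2f m" % (d, s))
```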
Figure 7. Ground feature observation uncertainty for (a) air and (b) ground camera installations. The UAV camera looks down 5◦
off vertical at 50 m altitude. The UGV camera is mounted horizontally 0.32 m above the ground plane. Comparative feature
observation accuracy is illustrated by ground plane confidence ellipses associated with uniformly spaced pixels in the imagery.
Figure 8. Ground vehicle information gain utility measure and iso-utility contours.
Figure 9. An illustration of the gradient controller for deployment of a UGV to localize a ground feature. The control law
schedules the robot’s heading to be in the direction of steepest mutual information gradient. The resulting trajectory and
mutual information contours at two intermediate points are shown in (a) and (b). This strategy is referred to as information
surfing since the platform is driven by the information gain contours.
Optimal Reactive Controller for Localization
Our proactive sensing network includes a reactive optimal controller that actively seeks to improve the quality of estimates of
features or targets. While all vehicles (UAVs and UGVs) can run
this controller, it is particularly relevant to the operation of
UGVs, whose sensors are better equipped for precise localization.
Therefore, we describe this controller and its implementation on
the ground vehicles next. The controller is a gradient control law,
which automatically generates sensing trajectories that actively
reduce the uncertainty in feature estimates by solving
u_i(k) = \arg\max_{u \in U} I_f(u_i(k)), \qquad (8)

where U is the set of available actions and I_f(u_i(k)) is the mutual information gain for the feature location estimates given action u_i(k). For Gaussian error modeling of N_f features,

I_f(u_i(k)) = \sum_{j=1}^{N_f} \log \frac{|Y_{f,j}(k|k-1) + I_{f,i}(u_i(k))|}{|Y_{f,j}(k|k-1)|}. \qquad (9)

Figure 10. Handling ground vehicle sensing field of view and control constraints. The controller is deactivated, allowing the vehicle to simply pursue the minimum turning radius (dotted trajectory), and reactivated when the direction of steepest descent is within the field of view (solid trajectory).
This utility measure is illustrated in Figure 8. The controller involves forward motion at a fixed speed while choosing the steering velocity to enable heading toward the
direction of steepest gradient. Figure 9 illustrates an example
UGV trajectory generated by (8).
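A sketch of a discrete version of (8)–(9): a handful of candidate turn rates are scored by the determinant ratio in (9) and the best one is selected, with the feature treated as unobservable when it falls outside an assumed camera field of view (cf. Figure 10). The action set and sensor model are illustrative assumptions, not the implemented controller.

```python
import numpy as np

def predicted_information(sensor_pos, heading, feature_mean, fov=np.deg2rad(40)):
    """Expected observation information I_{f,i}(u) for one candidate pose.

    Returns a zero matrix when the feature falls outside the assumed
    field of view, mirroring the deactivation logic of Figure 10."""
    rel = feature_mean - sensor_pos
    bearing = np.arctan2(rel[1], rel[0]) - heading
    bearing = np.arctan2(np.sin(bearing), np.cos(bearing))   # wrap to [-pi, pi]
    if abs(bearing) > fov / 2:
        return np.zeros((2, 2))
    r = np.linalg.norm(rel)
    # Assumed camera model: good angular accuracy, poor range accuracy.
    H = np.array([[rel[0] / r, rel[1] / r],
                  [-rel[1] / r**2, rel[0] / r**2]])
    R_inv = np.diag([1.0 / 4.0, 1.0 / 0.01])
    return H.T @ R_inv @ H

def best_action(sensor_pos, heading, turn_rates, feature_mean, Y_prior,
                dt=0.5, speed=1.0):
    """Pick the turn rate that maximizes the information gain of (9)."""
    def gain(omega):
        new_heading = heading + omega * dt
        new_pos = sensor_pos + speed * dt * np.array([np.cos(new_heading),
                                                      np.sin(new_heading)])
        I_obs = predicted_information(new_pos, new_heading, feature_mean)
        _, logdet_post = np.linalg.slogdet(Y_prior + I_obs)
        _, logdet_prior = np.linalg.slogdet(Y_prior)
        return logdet_post - logdet_prior
    return max(turn_rates, key=gain)

# Example: choose among five candidate turn rates (rad/s, assumed limits).
Y_prior = np.linalg.inv(np.diag([9.0, 1.0]))
omega = best_action(np.array([0.0, 0.0]), 0.0,
                    [-0.5, -0.25, 0.0, 0.25, 0.5], np.array([6.0, 2.0]), Y_prior)
print(omega)
```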
When implemented on a nonholonomic robot with constraints imposed on the vehicle turn rate and sensor field of
view, this controller may result in the robot circling a feature
while unable to make observations. To resolve this, the controller is disengaged when the expected feature location is
within the turn constraint and outside the field of view, as
illustrated in Figure 10.
Experimental Results
Results are presented for an experimental investigation of a
collaborative feature localization scenario. Three rectangular
orange features, each measuring 1.1 m × 1.4 m, were placed
in a 50-m × 200-m search area. Figure 3 details a typical
UAV trajectory generated to cover a search area in multiple
SEPTEMBER 2006
passes. The elapsed time for each pass was approximately 100 s.
A sequence of images captured from an altitude of 65 m is
shown in Figure 11. The feature estimates are seamlessly made
available to all vehicles.
Figure 12 illustrates the initial feature uncertainty and the trajectory taken by the ground vehicle to refine the quality of these estimates. Detailed snapshots of the active sensing process are shown in Figure 13. These indicate the proposed control scheme successfully positioning the ground vehicle to take advantage of the onboard sensor characteristics.
It is important to note the performance benefit obtained through collaboration. Assuming independent measurements, in excess of 50 passes (about 80 min of flight time) are required by the UAV to achieve this feature estimate certainty. It would take in excess of half an hour for the ground vehicle with this speed and sensing range to cover the designated search area and achieve a high probability of detecting the features. The collaborative approach of using aerial cues to direct active ground sensing completes this task in under 10 min. Thus, the proactive sensing network has a performance level well in excess of the individual system capabilities.

Figure 11. Aerial images of the test site captured during a typical UAV flyover at 65 m altitude. Three orange ground features highlighted by white boxes are visible during the pass.

Figure 12. Figures indicating (a) initial feature confidence and UGV active sensing trajectory and (b) σ_x and σ_y components of feature estimate standard deviation over time.
Figure 13. Snapshots of the active feature location estimate refinement by an autonomous ground robot equipped with vision,
GPS, and inertial and odometric sensors. This corresponds to the second feature indicated in Figure 12(a). The initial confidence
region obtained through aerial sensing alone is indicated in (a). Any need for an extensive search by the ground vehicle is alleviated since this confidence region is slightly smaller than the ground vehicle onboard camera effective field of view. Compounded
error sources in the ground vehicle sensor system result in feature observations that provide predominantly bearing information
as shown in (b)–(c). The controller successfully drives the ground robot to sensing locations orthogonal to the confidence ellipse
major axis that maximize the expected reduction in estimate uncertainty. False feature detections are rejected as indicated in (d).
Coordinated Multi-UAV Area Coverage
In the previous experiment, the UAV trajectory followed a fixed search pattern selected to provide coverage of the target area. In this section, the reactive information-gathering control concept previously used for UGV feature localization is applied to generate online UAV trajectories for search area coverage. Summation over the sensor field of view captures the value of performing a sensing action at a given location. Directing the UAV camera platforms according to the gradient of this utility measure provides a reactive scheme for area coverage. Coordinated trajectories arise due to coupling through accumulated measurement information, without knowledge of the state of other UAVs. The information objective drives platforms apart and towards unexplored regions as detailed in Figure 14.
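A sketch of the coverage variant under simplifying assumptions: the detection uncertainty of the shared certainty grid, summed over an assumed circular camera footprint, serves as the utility, and each fixed-wing UAV advances along the search area while steering on the numerically evaluated lateral gradient. Footprint size, detection probability, and grid resolution are invented for illustration.

```python
import numpy as np

P_DETECT = 0.8      # assumed per-look detection probability (cf. Figure 14)
FOOTPRINT = 8       # assumed camera footprint radius, in grid cells

def cell_entropy(p):
    """Binary entropy of the per-cell target probability."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def footprint_mask(shape, pos):
    """Grid cells inside the circular footprint centred at pos."""
    ii, jj = np.mgrid[0:shape[0], 0:shape[1]]
    return (ii - pos[0]) ** 2 + (jj - pos[1]) ** 2 <= FOOTPRINT ** 2

def coverage_utility(p_target, pos):
    """Remaining detection uncertainty summed over the camera footprint."""
    return cell_entropy(p_target[footprint_mask(p_target.shape, pos)]).sum()

def coverage_step(p_target, pos, forward=2.0, gain=4.0, eps=1.0):
    """Advance a fixed-wing UAV along the search area while steering laterally
    up the numerical gradient of the coverage utility."""
    dU = (coverage_utility(p_target, pos + np.array([eps, 0.0])) -
          coverage_utility(p_target, pos - np.array([eps, 0.0]))) / (2 * eps)
    new_pos = pos + np.array([np.clip(gain * dU, -forward, forward), forward])
    return np.clip(new_pos, 0.0, np.array(p_target.shape, dtype=float) - 1.0)

def observe(p_target, pos):
    """Shared grid update after a look that finds nothing: observed cells become
    less likely to hide a target, which is what couples the UAV trajectories."""
    mask = footprint_mask(p_target.shape, pos)
    p = p_target[mask]
    p_target[mask] = p * (1 - P_DETECT) / (1 - p * P_DETECT)
    return p_target

# Two UAVs sharing one certainty grid over a 50 x 200 cell search area.
p_target = np.full((50, 200), 0.5)
uavs = [np.array([15.0, 5.0]), np.array([35.0, 5.0])]
for _ in range(60):
    for k in range(len(uavs)):
        uavs[k] = coverage_step(p_target, uavs[k])
        p_target = observe(p_target, uavs[k])
print(uavs)
```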
Figure 14. Two snapshots in time (a) and (b) of the UAV trajectories and information measures relating to the probability of ground features remaining undetected
over a specified search area. Each has four displays indicating the detection probability entropy, extent of coverage, current information gain, and observation information. The UAVs are directed by the information gain gradient. Each UAV camera
is considered to have an 80% detection probability over its projected field of view.
The task is terminated when 99.9% detection confidence is reached.
IEEE Robotics & Automation Magazine
25
Concluding Remarks
We presented our experimental testbed of aerial and ground robots and a framework and a set of algorithms for coordinated control with the goal of searching for and localizing targets in a specified area. While many details were omitted because of space constraints, the details of the hardware and software integration are presented in [10] and [12]. The methods described here lend themselves to decentralized control of heterogeneous vehicles without requiring any tailoring to the specific capabilities of the vehicles or their sensors. The unique features of our approach are as follows. First, the methodology is transparent to the specificity and the identity of the cooperating vehicles. This is because vehicles share a common representation, consisting of a certainty grid that contains information about the probability of detection of targets and an information vector/matrix pair that is used in the information form of the Kalman filter. Observations are propagated through the network, changing both the certainty grid and the information vector/matrix. Second, the computations for estimation and control are decentralized. Each vehicle chooses the action that maximizes the utility, which is the combined mutual information gain from onboard sensors towards the detection and localization processes. Finally, the methodology presented here is scalable to large numbers of vehicles. The computations scale with the dimensionality of the representation and are independent of the number of vehicles. Our experiments demonstrate the performance benefit obtained through collaboration and illustrate the synergistic integration of ground and aerial nodes in this application.
Acknowledgments
This work was in part supported by DARPA MARS
NBCH1020012, ARO MURI DAAD19-02-01-0383, and
NSF CCR02-05336. The authors would like to acknowledge
Daniel Gomez Ibanez, Woods Hole Oceanographic Institute,
and Selcuk Bayraktar, Massachusetts Institute of Technology,
for their help in deploying the UAVs.
Keywords
Decentralized collaborative control, heterogeneous robot
teams, active perception.
References
[1] H. Choset, K.M. Lynch, S. Hutchinson, G.A. Kantor, W. Burgard,
L.E. Kavraki, and S. Thrun, Principles of Robot Motion: Theory, Algorithms, and Implementations. Cambridge, MA: MIT Press, June 2005.
[2] N. Roy, W. Burgard, D. Fox, and S. Thrun, “Coastal navigation –
Mobile robot navigation with uncertainty in dynamic environments,” in
Proc. IEEE Conf. Robotics Automation (ICRA), May 1999, pp. 35–40.
[3] T. Stentz, A. Kelly, H. Herman, P. Rander, and M.R., “Integrated
air/ground vehicle system for semi-autonomous off-road navigation,” in
Proc. AUVSI Symp. Unmanned Systems, 2002.
[4] R. Vaughan, G. Sukhatme, J. Mesa-Martinez, and J. Montgomery, “Fly
spy: lightweight localization and target tracking for cooperating ground
and air robots,” in Proc. Int. Symp. Distributed Autonomous Robot Systems,
2000, pp. 315–324.
[5] R. Vidal, O. Shakernia, H.J. Kim, D.H. Shim, and S. Sastry, “Probabilistic pursuit-evasion games: Theory, implementation and experimental evaluation,” IEEE Trans. Robot. Automat., vol. 18, no. 5, pp.
662–669, Oct. 2002.
[6] R. Rao, V. Kumar, and C. Taylor, “Experiments in robot control from
uncalibrated overhead imagery,” in Proc. 9th Int. Symp. Experimental
Robotics (ISER’04), 2004.
[7] J. Manyika and H. Durrant-Whyte, Data Fusion and Sensor Management:
An Information-Theoretic Approach. Englewood Cliffs, NJ: Prentice Hall,
1994.
[8] B. Grocholsky, “Information-theoretic control of multiple sensor platforms,” Ph.D. dissertation, Univ. Sydney, 2002 [Online]. Available:
http://www.acfr.usyd.edu.au
[9] B. Grocholsky, A. Makarenko, T. Kaupp, and H. Durrant-Whyte,
“Scalable control of decentralised sensor platforms,” in Proc. Information
Processing Sensor Networks: 2nd Int. Workshop, IPSN03, 2003, pp.
96–112.
[10] B. Grocholsky, S. Bayraktar, V. Kumar, C. Taylor, and G. Pappas,
“Synergies in feature localization by air-ground robot teams,” in Proc.
9th Int. Symp. Experimental Robotics (ISER’04), 2004, pp. 353–362.
[11] B. Grocholsky, R. Swaminathan, J. Keller, V. Kumar, and G. Pappas,
“Information driven coordinated air-ground proactive sensing,” in Proc.
IEEE Int. Conf. Robotics Automation (ICRA’05), 2005, pp. 2211–2216.
[12] S. Bayraktar, G. Fainekos, and G.J. Pappas, “Experimental cooperative
control of fixed-wing UAVs,” in Proc. 43rd IEEE Conf. Decision and
Control, 2004, pp. 4292–4298.
[13] B. Vaglienti and R. Hoag, Piccolo System User Guide, 2003 [Online].
Available: http://www.cloudcaptech.com/downloads.htm
[14] A. Makarenko, A. Brooks, S. Williams, H. Durrant-Whyte, and B.
Grocholsky, “An architecture for decentralized active sensor networks,”
in Proc. IEEE Int. Conf. Robotics Automation (ICRA’04), New Orleans,
Louisiana, 2004, pp. 1097–1102.
[15] T. Cover and J. Thomas, Elements of Information Theory. New York:
Wiley, 1991.
[16] A. Makarenko, S. Williams, and H. Durrant-Whyte, “Decentralized
certainty grid maps,” in Proc. IEEE/RSJ Int. Conf. Intelligent Robots Systems (IROS), 2003, pp. 3258–3263.
Ben Grocholsky is a project scientist at the Robotics Institute, Carnegie Mellon University. He received his Ph.D.
from the University of Sydney in 2002 and was previously a
postdoctoral researcher at the University of Pennsylvania
GRASP Laboratory. His research spans active sensor networks, decentralized cooperative control, and unconventional operator interfaces.
James Keller joined the University of Pennsylvania
GRASP Laboratory in 2002 as a project engineer and Ph.D
student. Before this, he enjoyed a 20-year career in the
helicopter industry with the Boeing Company. He is currently working on autonomous path planning for aerial and
underwater vehicles.
Vijay Kumar received his M.Sc. and Ph.D. in mechanical
engineering from the Ohio State University in 1985 and
1987, respectively. He has been on the faculty in the
Department of Mechanical Engineering and Applied
Mechanics with a secondary appointment in the Department of Computer and Information Science at the University of Pennsylvania since 1987. He is currently the UPS
Foundation professor and chair of mechanical engineering
and applied mechanics. His research interests lie in the area
of robotics and networked multiagent systems. He is a Fellow of the IEEE and ASME.
George Pappas received his Ph.D. degree from the University of California at Berkeley in 1998. In 2000, he
joined the University of Pennsylvania Department of Electrical and Systems Engineering, where he is currently an associate professor and director of the GRASP Laboratory. He has
published over 100 articles in the areas of hybrid systems,
hierarchical control systems, distributed control systems,
nonlinear control systems, and geometric control theory,
with applications to flight management systems, robotics,
and unmanned aerial vehicles.
Address for Correspondence: Prof. Vijay Kumar, Department
of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, 3330 Walnut St., Levine Hall, GRW
470, Philadelphia, PA 19104 USA. E-mail: kumar@
grasp.upenn.edu.