GIS TECHNOLOGY IN ENVIRONMENTAL MANAGEMENT:
A BRIEF HISTORY, TRENDS AND PROBABLE FUTURE

Joseph K. Berry
Phone: (970) 215-0825 – Fax: 490-2300 – Email: jberry@innovativegis.com
Web: http://www.innovativegis.com/basis

[draft of an invited book chapter in Global Environmental Policy and Administration, edited by Soden and Steel, for Marcel Dekker, Inc.]
Environmental
management is inherently a spatial endeavor.
Its data are particularly complex, requiring two descriptors: the precise location of what is being described and a clear description of its physical characteristics.
For hundreds of years, explorers produced manually drafted maps which
served to link the “where is what” descriptors. With an emphasis on accurate location of
physical features, early maps helped explorers and navigators chart unexplored
territory.
Today, these
early attributes of maps have evolved from exploratory guides to physical space
into management tools for exploring spatial relationships. This new perspective marks a turning point in
the use of maps, setting the stage for a paradigm shift in environmental
planning and management— from one emphasizing physical descriptions of
geographic space, to one of interpreting mapped data and communicating
spatially-based decision factors. What
has changed is the purpose for which maps are used. Modern mapping systems provide a radically
different approach to addressing complex environmental issues. An understanding of the evolutionary stages
of the new technology, its current expression, and probable trends are
essential for today’s environmental policy-makers and administrators.
EVOLUTIONARY STAGES
Since the
1960's, the decision-making process has become increasingly quantitative, and
mathematical models have become commonplace.
Prior to the computerized map, most spatial analyses were severely
limited by their manual processing procedures.
Geographic information systems (GIS) technology provides the means for
both efficient handling of voluminous data and effective spatial analysis
capabilities (Carter 1989; Coppock and Rhind 1991). From this perspective, GIS is rooted in the
digital nature of the computerized map.
Computer Mapping
The early
1970's saw computer mapping automate
the map drafting process (Brown 1949; McHarg 1969; Steinitz et al. 1976; Berry and Ripple 1994). The points, lines and
areas defining geographic features on a map are represented as an organized set
of X,Y coordinates. These data drive pen plotters that can rapidly redraw the connections in a variety of colors, scales, and projections. The map image,
itself, is the focus of this automated cartography.
The pioneering
work during this period established many of the underlying concepts and
procedures of modern GIS technology (Abler et al. 1971; Muehrcke and Muehrcke 1980; Cuff and Matson 1982; Robertson et al. 1982). An obvious advantage of computer mapping is
the ability to change a portion of a map and quickly redraft the entire
area. Updates to resource maps, such as
a forest fire burn, which previously took several days, can be done in a few
hours. The less obvious advantage is the
radical change in the format of mapped data— from analog inked lines on paper,
to digital values stored on disk.
Spatial Database Management
During the
early 1980's, the change in format and computer environment of mapped data was
utilized. Spatial database management systems (SDBMS) were developed that
linked computer mapping capabilities with traditional database management
capabilities (Burrough 1987; Sheppard 1991).
In these systems, identification numbers are assigned to each geographic feature, such as a timber harvest unit or wildlife management parcel, linking its map location to attribute records in a database. For example, a user is able to point to any location on a map and instantly retrieve information about that location. Alternatively, a user can specify a set of
conditions, such as a specific vegetation and soil combination, and all
locations meeting the criteria of the geographic search are displayed as a map.
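A minimal sketch (in Python, tied to no particular SDBMS) of these two geo-query styles might look like the following; the parcel table, attribute names, and grid are invented for illustration.

# A minimal sketch of the two geo-query styles described above:
# a "point" query and a conditional geographic search.

# Hypothetical attribute table keyed by feature identification number.
parcels = {
    101: {"vegetation": "spruce/fir", "soil": "loam", "owner": "state"},
    102: {"vegetation": "aspen",      "soil": "clay", "owner": "private"},
    103: {"vegetation": "spruce/fir", "soil": "clay", "owner": "federal"},
}

# A greatly simplified map: each cell of a reference grid stores the ID of the
# parcel covering it, linking geography to the attribute table.
parcel_id_grid = [
    [101, 101, 102],
    [101, 103, 102],
    [103, 103, 102],
]

# 1) Point query: "click" a location and retrieve its attributes.
row, col = 1, 1
print(parcels[parcel_id_grid[row][col]])

# 2) Conditional search: display all locations meeting a vegetation/soil combination.
matches = [
    (r, c)
    for r, grid_row in enumerate(parcel_id_grid)
    for c, pid in enumerate(grid_row)
    if parcels[pid]["vegetation"] == "spruce/fir" and parcels[pid]["soil"] == "clay"
]
print(matches)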
During the
early development of GIS, two alternative data structures for encoding maps
were debated (Maffini 1987; Piwowar 1990; Pueker and Christman 1990). The vector
data model closely mimics the manual drafting process by representing map
features as a set of lines which, in turn, are stored as a series of X,Y
coordinates. An alternative structure,
termed raster, establishes an
imaginary reference grid over a project area, then stores resource information
for each cell in the grid. Early debates
in the GIS community attempted to determine the universally best data
structure. The relative advantages and
disadvantages of both were viewed in a competitive manner that failed to
recognize the overall strengths of a GIS approach encompassing both formats.
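The contrast between the two structures can be sketched with a single hypothetical stream feature; the coordinates and grid below are invented for illustration.

# Vector model: the feature is a line stored as an ordered series of X,Y coordinates.
stream_vector = [(0.0, 0.0), (1.5, 0.8), (3.0, 1.2), (4.2, 2.9)]

# Raster model: an imaginary reference grid is laid over the project area and each
# cell records the resource information occurring there.
stream_raster = [
    [0, 0, 0, 0, 1],
    [0, 0, 1, 1, 0],
    [1, 1, 1, 0, 0],
]  # 1 = stream present in the cell, 0 = absent

# The vector form preserves the precise placement of the line; the raster form
# trades that precision for a uniform structure suited to cell-by-cell analysis.
print(len(stream_vector), "vertices versus", sum(map(sum, stream_raster)), "stream cells")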
By the
mid-1980's, the general consensus within the GIS community was that the nature
of the data and the processing desired determine the appropriate data
structure. This realization of the
duality of mapped data structure had significant impact on geographic
information systems. From one
perspective, maps form sharp boundaries that are best represented as
lines. Property ownership, power line
right-of-ways, and road networks are examples where the lines are real and the
data are certain. Other types of maps,
such as soils, ground water flows, and steep slopes, are abstract
characterizations of terrain conditions.
The placement of lines identifying these conditions is subject to judgment, statistical analysis of field data, and broad classification of continuous spatial distributions. From this perspective, the sharp boundary implied by a line is artificial and the data themselves are based on expert opinion or probabilistic estimates.
This era of
rapidly increasing demand for mapped data focused attention on data
availability, accuracy and standards, as well as data structure issues. Hardware vendors continued to improve
digitizing equipment, with manual digitizing tablets giving way to automated
scanners at many GIS facilities. A new industry for map encoding and database design emerged, along with a marketplace for the sale of digital map products. Regional, national and international organizations began addressing the necessary standards for digital maps to ensure compatibility among systems. This period saw GIS database
development move from being expensed as individual project costs to a corporate
investment in a comprehensive information resource.
GIS Modeling
As the
technology continued its evolution, the emphasis turned from descriptive
“geo-query” searches of existing databases to prescriptive analysis of mapped
data. For the most part, the earlier
eras of GIS concentrated on automating traditional mapping practices. If a user had to repeatedly overlay several
maps on a light-table, an analogous procedure was developed within the
GIS. Similarly, if repeated distance and
bearing calculations were needed, systems were programmed with a mathematical
solution. The result of this effort was
GIS functionality that mimicked the manual procedures in a user's daily
activities. The value of these systems was
the savings gained by automating tedious and repetitive operations.
By the
mid-1980's, the bulk of the geo-query operations were available in most GIS
systems and a comprehensive theory of spatial analysis began to emerge. The dominant feature of this theory is that
spatial information is represented numerically, rather than in analog fashion
as inked lines on a map. These digital
maps are frequently conceptualized as a set of "floating maps" with a
common registration, allowing the computer to "look" down and across the
stack of digital maps (Figure 1). The
spatial relationships of the data can be summarized (database geo-queries) or
mathematically manipulated (analytic processing). Because of the analog nature of traditional
map sheets, manual analytic techniques are limited in their quantitative
processing. Digital representation, on
the other hand, makes a wealth of quantitative (as well as qualitative)
processing possible. The application of
this new modeling theory to environmental management is revolutionary. Its application takes two forms— spatial
statistics and spatial analysis.
Geophysicists
have used spatial statistics for many
years to characterize the geographic distribution, or spatial pattern, of field
data (Ripley 1981; Meyers 1988; Cressie 1991 and 1993; Cressie and Ver Hoef
1993). The statistics describe the
spatial variation in the data, rather than assuming a typical response is
everywhere. For example, field
measurements of snow depth can be made at several plots within a
watershed. Traditionally, these data are
analyzed for a single value (the average depth) to characterize the
watershed. Spatial statistics, on the
other hand, uses both plot locations and the recorded measurements to generate
a map of relative snow-depth throughout the entire watershed.
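A minimal sketch of this idea, assuming inverse-distance weighting as the interpolator (one of several possibilities) and invented plot locations and depths:

plots = [((2, 3), 45.0), ((8, 1), 30.0), ((5, 8), 60.0)]  # ((x, y), snow depth in cm)

def idw_depth(x, y, samples, power=2.0):
    """Inverse-distance-weighted estimate of depth at grid cell (x, y)."""
    num = den = 0.0
    for (px, py), depth in samples:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0:
            return depth                     # exactly on a measurement plot
        w = 1.0 / d2 ** (power / 2.0)
        num += w * depth
        den += w
    return num / den

# Spatial statistics: evaluate every cell of a 10 x 10 reference grid.
snow_map = [[idw_depth(x, y, plots) for x in range(10)] for y in range(10)]

# Traditional "one number for the watershed" summary, for contrast.
average_depth = sum(d for _, d in plots) / len(plots)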
More recently,
spatial statistics has evolved from descriptive, to predictive, to optimization
models. Precision farming, for example, uses GIS modeling to investigate the spatial relationships between crop yield and soil nutrients.
Traditional
“whole-field” management involves a similar analysis, except field averages are
used to derive a single application rate for the entire field. In highly variable fields, most areas receive
either too much or too little fertilizer.
Some farmers (encouraged by the chemical industry) hedge their bets on a
good crop by applying fertilizer at a higher rate in hopes of bringing up the
yield in the nutrient poor areas. The
result can be over-application on more than half the field. Precision farming, on the other hand, uses
“site-specific” management involving a “prescription map” derived by spatial
statistics and variable rate technology.
As a spray rig moves through the field, GPS locates its position on the
prescription map and the injected blend of nutrients is adjusted “on-the-fly.”
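A minimal sketch of the variable-rate lookup, with an invented prescription grid and a hypothetical cell size:

prescription = [   # kg/ha of nitrogen per cell, derived beforehand by spatial statistics
    [40, 55, 70],
    [35, 50, 65],
    [30, 45, 60],
]
CELL_SIZE = 20.0            # meters; hypothetical grid resolution
ORIGIN_X, ORIGIN_Y = 0.0, 0.0

def rate_at(gps_x, gps_y):
    """Return the application rate for the cell containing the GPS position."""
    col = int((gps_x - ORIGIN_X) // CELL_SIZE)
    row = int((gps_y - ORIGIN_Y) // CELL_SIZE)
    return prescription[row][col]

# As the spray rig moves, the injected blend is adjusted to the local prescription.
print(rate_at(25.0, 45.0))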
Many other
applications, from retail market forecasting to forest management, are using
spatial statistics to relate mapped variables.
The environmental sciences have a rich heritage in the quantitative
expression of their systems. Spatial
statistics provides a new set of tools for explaining spatially induced
variance— variations in geographic space rather than numeric space. From this perspective, the floating maps in Figure 1 represent the spatial distributions of mapped variables. In traditional mathematical terms, each map is a “variable,” each location is a “case,” and each map value is a “measurement.” The GIS provides a consistent spatial registration of the numbers.
The full impact of this map-ematical treatment of maps is yet to be
determined. The application of such
concepts as spatial correlation, statistical filters, map uncertainty and error
propagation await their translation from other fields.
Spatial analysis, on the other
hand, has a rapidly growing number of current resource and environmental
applications (Ripple 1987; Maguire et al. 1991a; Goodchild et al. 1993; Ripple 1994). For example, a forest
manager can characterize timber supply by considering the relative skidding and
log-hauling accessibility of harvesting parcels. Wildlife managers can consider such factors
as proximity to roads and relative housing density to map human activity and
incorporate this information into habitat delineation. Landscape planners can generate visual
exposure maps for alternative sites for a proposed facility to sensitive
viewing locations, such as recreational areas and scenic overlooks. Soil scientists can identify areas with high
sediment loading potential based on proximity to streams and intervening
terrain slope, vegetative cover and soil type.
Similarly, groundwater and atmospheric scientists can simulate the complex movement of a release as it responds to environmental factors affecting its flow through geographic space.
Just as spatial
statistics has been developed by extending concepts of conventional statistics,
a mathematics supporting spatial analysis has evolved (Uwin 1981; Berry 1987a; Goodchild 1987; Ripple 1989; Johnson 1990; Maguire et al. 1991b). This "map algebra" uses sequential processing of spatial operators to perform complex map analyses (Berry 1987b; Tomlin 1990). It is similar to
traditional algebra in that primitive operations (e.g., add, subtract, exponentiate)
are logically
sequenced on
variables to form equations. However, in
map algebra, entire maps composed of thousands or millions of numbers represent
the variables of the spatial equation.
For example,
the change in lead concentrations in an aquifer can be estimated by evaluating
the algebraic expression
%change = ((new value - old value) / old
value) * 100
using the
average values for two time periods.
Map algebra replaces the simple averages with spatially interpolated
maps based on the same sets of field data used to calculate the averages. The %change equation is evaluated at each map location, resulting in a map of percent change (Figure 2).
Areas of
unusual change can be identified by the standard normal variable (SNV)
expression
SNV = ((%change - %change_average) / %change_standard_deviation) * 100
This normalizes
the map of changes in lead concentration, with areas of statistically unusual
increase having SNV values over 100.
These potentially hazardous areas can be overlaid on demographic maps to
determine the level of environmental risk.
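A minimal sketch of this two-step map algebra, assuming two small interpolated lead-concentration grids with invented values:

import numpy as np

old_map = np.array([[10.0, 12.0, 11.0],
                    [ 9.0, 10.0, 14.0],
                    [ 8.0, 11.0, 13.0]])
new_map = np.array([[10.5, 12.5, 11.0],
                    [ 9.5, 16.0, 21.0],
                    [ 8.0, 11.5, 19.0]])

# The %change equation is evaluated at every map location at once.
pct_change = (new_map - old_map) / old_map * 100.0

# Standard normal variable map: normalize the change map by its own mean and
# standard deviation (times 100, following the chapter's convention).
snv = (pct_change - pct_change.mean()) / pct_change.std() * 100.0

# Locations of statistically unusual increase (SNV over 100), ready to be
# overlaid on demographic maps.
unusual = snv > 100.0
print(np.round(pct_change, 1), unusual, sep="\n")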
Most of the
traditional mathematical capabilities, plus an extensive set of advanced map
processing operations, are available in modern GIS software. You can add, subtract, multiply, divide, exponentiate,
root, log, cosine, differentiate and even integrate maps. After all, maps in a GIS are just organized
sets of numbers. However, with this
“map-ematics,” the spatial coincidence and juxtapositioning of values among and
within maps create new operations, such as effective distance, optimal path
routing, visual exposure density and landscape diversity, shape and
pattern.
For example,
distance is traditionally defined as “the shortest straight line between two
points.” Both a ruler (analog tool) and
the Pythagorean theorem (mathematical tool) adhere to this strict
definition. The simple definition of
distance is rarely sufficient for most environmental applications. Often “…between two points” must be expanded to “…among a set of points” to account for proximity, such as buffers around streams. And “…straight line” needs to
be expanded to “…not necessarily straight lines,” as nothing in the real world
moves in a straight line (even light bends in the atmosphere). In a GIS, the concept of movement replaces
distance by introducing the location of absolute and relative barriers into the
calculations (Muller 1982; Elridge and Jones 1991). An effective butterfly buffer “reaches” out
around a stream capturing an appropriate amount of butterfly habitat (function
of vegetation cover and slope/aspect), instead of simply reaching out a fixed
number of feet regardless of habitat.
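One common way to compute such effective distance is to accumulate movement cost across a friction grid; a minimal sketch, with invented friction values and a single hypothetical starting cell, follows:

import heapq

friction = [          # relative barrier: cost to cross each cell; higher = harder to move through
    [1, 1, 4, 4],
    [1, 2, 4, 1],
    [1, 1, 1, 1],
]
starts = [(0, 0)]     # e.g., cells along a stream

def effective_distance(friction, starts):
    """Accumulated (cost-weighted) distance from the starting cells to every cell."""
    rows, cols = len(friction), len(friction[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    heap = []
    for r, c in starts:
        dist[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # Step cost: average friction of the two cells (one common convention).
                nd = d + (friction[r][c] + friction[nr][nc]) / 2.0
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist

for row in effective_distance(friction, starts):
    print([round(v, 1) for v in row])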
Another example
of advanced spatial analysis tools involves landscape analysis. The ability to quantify landscape structure
is a prerequisite to the study of landscape function and change. For this reason considerable emphasis has
been placed on the development of landscape metrics (Turner, 1990; McGarigal
and Marks 1995). Many of these
relationships are derived through analysis of the shape, pattern and
arrangement of landscape elements spatially depicted as patches (individual
polygons), classes of related patches (polygons of the same type/condition),
and entire landscape mosaics (all polygons).
The convexity index compares each patch’s perimeter to its area, with an
increase in perimeter per unit area indicating more irregularly shaped
parcels. The mean proximity index
indicates the average distance between the patches within a class as a measure
of the relative dispersion. The fractal
dimension of a landscape assesses the proportion of edge versus interior of all
patches, summarizing whether the mosaic is primarily composed of simple shapes
(circle or square like) or complex shapes with convoluted, plane-filling
perimeters. These, plus a myriad of
other landscape indices, can be used to track the fragmentation induced by
timber harvesting and relate the changes to impacts on wildlife habitat.
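As one concrete illustration, a perimeter-to-area (shape) ratio per patch can be sketched as follows; the patch grid is invented and the metric is a simplified stand-in for the indices cited above:

patch_grid = [
    [1, 1, 2, 2],
    [1, 1, 2, 0],
    [3, 3, 3, 0],
]  # 0 = background; 1, 2, 3 = individual patches

def shape_metrics(grid):
    """Perimeter-to-area ratio per patch from a grid of patch identifiers."""
    rows, cols = len(grid), len(grid[0])
    area, perimeter = {}, {}
    for r in range(rows):
        for c in range(cols):
            pid = grid[r][c]
            if pid == 0:
                continue
            area[pid] = area.get(pid, 0) + 1
            # Count cell edges bordering a different patch or the grid boundary.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] != pid:
                    perimeter[pid] = perimeter.get(pid, 0) + 1
    # More perimeter per unit area indicates a more irregular (convoluted) patch.
    return {pid: perimeter[pid] / area[pid] for pid in area}

print(shape_metrics(patch_grid))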
This GIS
modeling “toolbox” is rapidly expanding.
A detailed discussion of all of the statistical and analysis tools is
beyond the scope of this chapter. It
suffices to note that GIS technology is not simply automating traditional
environmental approaches, but radically changing environmental science. It is not just a faster mapper, nor merely an
easier entry to traditional databases.
Its new tools and modeling approach to environmental information combine
to extend record-keeping systems and decision-making models into effective decision
support systems (Parent and Church 1989; Densham 1991; Pereira and Duckstein
1993).
Spatial Reasoning and Dialogue
The 1990's are
building on the cognitive basis, as well as the databases, of current
geographic information systems. GIS is
at a threshold that is pushing beyond mapping, management, and modeling, to spatial reasoning and dialogue. In the past, analysis models have focused on
management options that are technically optimal— the scientific solution. Yet in reality, there is another set of
perspectives that must be considered— the social solution. It is this final sieve of management
alternatives that most often confounds resource and environmental
decision-making. It uses elusive
measures, such as human values, attitudes, beliefs, judgment, trust and
understanding. These are not the usual
quantitative measures amenable to computer algorithms and traditional
decision-making models.
The step from
technically feasible to socially acceptable options is not so much an increase
in scientific and econometric modeling, as it is communication (Calkins 1991;
Epstein 1991; King and Kraemer 1993; Medyckyj-Scott and Hernshaw 1993). Basic to effective communication is
involvement of interested parties throughout the decision-making process. This new participatory environment has two
main elements— consensus building and conflict resolution. Consensus
building involves technically-driven communication and occurs during the
alternative formulation phase. It
involves the resource specialist's translation of the various considerations
identified by a decision team into a spatial model. Once completed, the model is executed under a
wide variety of conditions and the differences in outcome are noted.
From this perspective, a single map rendering of an environmental plan is not the objective. It is how the plan changes as the different
scenarios are tried that becomes information for decision-making. "What if avoidance of visual exposure is
more important than avoidance of steep slopes in siting a new haul road? Where does the proposed route change, if at
all?" Answers to such analytic
queries focus attention on the effects of differing perspectives. Often, seemingly divergent philosophical
views result in only slightly different map views. This realization, coupled with active involvement
in the decision-making process, often leads to group consensus.
If consensus is
not obtained, conflict resolution is
necessary. Such socially-driven
communication occurs during the decision formulation phase. It involves the creation of a "conflicts
map" which compares the outcomes from two or more competing uses. Each management parcel is assigned a numeric
code describing the conflict over the location.
A parcel might be identified as ideal for a wildlife preservation, a
campground and a timber harvest. As
these alternatives are mutually exclusive, a single use must be assigned. The assignment, however, involves a holistic
perspective that simultaneously considers the assignments of all other
locations in a project area.
Traditional scientific
approaches are rarely effective in addressing the holistic problem of conflict
resolution. Most are deterministic models that involve a succession, or cascade, of individual parcel assignments. The final result is strongly biased by the
ordering of parcel consideration, mathematical assumptions and the assignment
of discrete model parameters. Even if a
scientific solution is reached, it is viewed with suspicion by the
layperson. Modern resource information
systems provide an alternative approach involving human rationalization and
tradeoffs. This process involves
statements like, "If you let me harvest this parcel, I will let you set
aside that one as a wildlife preservation." The statement is followed by a persuasive
argument and group discussion. The
dialogue is far from a mathematical optimization, but often closer to an
effective decision. It uses the
information system to focus discussion away from broad philosophical positions,
to a specific project area and its unique distribution of conditions and
potential uses.
THE CURRENT FRONTIER
The elements
for computer mapping and spatial database management are in place, and the
supporting databases are rapidly coming on-line. The emerging concepts and procedures
supporting GIS modeling and spatial reasoning/dialogue are being refined and
extended by the technologists. There are
a growing number of good texts (Star and Estes 1990; Berry 1993; Korte 1993;
Berry 1995b; Douglas 1995) and college courses on GIS technology are becoming
part of most land-related curricula.
What seems to be lacking is a new spatial paradigm among the user
communities. Many are frustrated by the
inherent complexity of the new technology.
Others are confused by new approaches beyond those that simply automate
existing procedures. Fundamental to the
educational renaissance demanded by GIS is a clear understanding of the
questions it can address.
Seven Basic Questions
Seven basic
questions encompassing most GIS applications are identified in Table 1. The questions are progressively ordered from
inventory-related (data) to analysis-related (understanding) as identified by
their function and approach.
Table 1. The seven types of questions addressed by GIS technology. The first three are inventory-related; the latter four are analysis-related, investigating the interrelationships among mapped data beyond simple spatial coincidence.
The most basic
question, "Can you map that?"
is where GIS began thirty years ago— automated cartography. A large proportion of GIS applications still
involve the updating and timely output of map products. As an alternative to a room full of draftspersons
and drafting pens, the digital map has a clear edge. Applications responding to this question are
easily identified in an organization and the "payoffs" in
productivity apparent. Most often, these
mapping applications are restatements of current inventory-related activities.
Questions
involving "Where is what?" exploit
the linkage between the digital map and database management technology. These questions are usually restatements of
current practices as well. They can get
a group, however, to extend their thinking to geographic searches involving
coincidence of data they had not thought possible. The nature and frequency of this type of
question provide valuable insight into system design. For example, if most applications require interactive map queries based on a common database from a dispersed set of offices, a
centralized GIS provides consistency and control over the shared data. However, if the queries are localized and
turnaround is less demanding, a distributed GIS might suffice. The conditions surrounding the first two
questions are the primary determinants of the character and design of the GIS
implemented in an organization. The
remaining questions determine the breadth and sophistication of its
applications. They also pose increasing
demands on the education and computer proficiency of its users.
The third type
of question, "Where has it changed?"
involves temporal analysis. These
questions mark the transition from inventory-related data searches to packaging
information for generating plans and policies.
Such questions usually come from managers and planners, whereas the
previous types of questions support day-to-day operations. A graphic portrayal of changes in geographic
space, whether it is product sales or lead concentrations in well water,
affords a new perspective on existing data.
The concept of "painting" data which is normally viewed as
tables might initially be a bit uncomfortable— it is where GIS evolves from
simply automating current practices to providing new tools.
"What relationships exit?" questions
play heavily on the GIS toolbox of analytic operations. "Where are the
steep areas?", "Can you see the proposed power plant from over
there?", "How far is the town from the contamination spill?",
and "Is vegetation cover more diverse here, or over there?" are a few
examples of this type of question.
Whereas the earlier types involved query and repackaging of base data,
spatial relationship questions involve derived information. Uncovering these questions within an organization is a bit like the eternal question— “Did the chicken or the egg
come first?" If users are unaware
of the different things a GIS can do differently, chances are they are not
going to ask it to do anything different.
Considerable training and education in spatial reasoning approaches are
needed to fully develop GIS solutions to these questions. Their solution, however, is vital to the treatment of the remaining two types of questions.
Suitability
models spring from questions of "Where
is it best?". Often these
questions are the end products of planning and are the direct expression of
goals and objectives. The problem is
that spatial considerations historically are viewed as input to the decision
process— not part of the "thruput."
Potential GIS users tend to specify the composition (base and derived
maps) of "data sandwiches" (map layers) which adorn the walls during
discussion. The idea of using GIS
modeling as an active ingredient in the discussion is totally foreign. Suitability questions usually require the
gentle coaxing of the “visceral visions” locked in the minds of the
decision-makers. They require an
articulation of various interpretations of characteristics and conditions and
how they relate within the context of the decision at hand.
"What effects what?" questions
involve system models— the realm of the scientist and engineer. In a manner of speaking, a system model is
like an organic chemist's view of a concoction of interacting substances,
whereas a suitability model is analogous to simply a recipe for a cake. Whereas suitability models tend to
incorporate expert opinion, a system model usually employs the tracking of
"cause and effect" through empirically derived relationships. The primary hurdle in addressing these
applications is the thought that GIS simply provides spatial summaries for
input and colorful maps of model output.
The last 100 years have been spent developing techniques that best
aggregate spatial complexity, such as stratified random sampling and the
calculation of the average to represent a set of field samples. The idea that GIS modeling retains spatial
specificity throughout the analysis process and responds to spatial
autocorrelation of field data is a challenging one.
"What if...?" questions involve the
iterative processing of suitability or system models. For suitability models, they provide an
understanding of different perspectives on a project— “What if visual impact is
the most important consideration, or if road access is the most important;
where would it be best for development?"
For system models, they provide an understanding of uncertain or special
conditions— “What if there was a 2-inch rainstorm, or if the ground was
saturated; would the surface runoff require a larger culvert?"
In determining
what GIS can do, the first impulse is to automate current procedures. Direct translation of these procedures is sufficient for the first few types of questions. As GIS moves beyond mapping to the
application modeling required to address the latter questions, attention is
increasingly focused on the considerations embedded in the derivation of the
"final" map. The map itself is
valuable, but the thinking behind its creation provides the real insights for
decision-making. From this perspective,
the model becomes even more useful than the graphic output.
GIS Modeling Approach and Structure
Consider the
simple model outlined in the accompanying figure (Figure 3). It identifies the suitable areas for a
residential development considering basic engineering and aesthetic factors. Like any other model it is a generalized
statement, or abstraction, of the important considerations in a real-world
situation. It is representative of one
of the most common GIS modeling types— a suitability model. First, note that the model is depicted as a
flowchart with boxes indicating maps, and lines indicating GIS processing. It is read from left to right. For example, the top line tells us that a map of elevation (ELEV) is used to derive a map of relative steepness (SLOPE), which, in turn, is interpreted for slopes that are better for the development (S-PREF).
Figure 3. Development Suitability Model. Flow chart of GIS processing determining the
best areas for a development as gently sloped, near roads, near water, with
good views of water and a westerly aspect.
Next, note that
the flowchart has been subdivided into compartments by dotted horizontal and
vertical lines. The horizontal lines
identify separate sub-models expressing suitability criteria— the best locations for the development are 1) on gently sloped terrain, 2) near existing roads, 3) near flowing water, 4) with good views of water, and 5) westerly
oriented. The first two criteria reflect
engineering preferences, whereas the latter three identify aesthetic
considerations. The criteria depicted in
the flowchart are linked to a sequence of GIS commands (termed a command macro) which are the domain of the GIS
specialist. The linkage between the
flowchart and the macro is discussed later; for now concentrate on the model’s
overall structure. The vertical lines indicate
increasing levels of abstraction. The
left-most primary maps section
identifies the base maps needed for the application. In most instances, this category defines maps
of physical features described through field surveys— elevation, roads and water. They are inventories of the landscape, and
are accepted as fact.
The next group
is termed derived maps. Like primary maps, they are facts; however, these descriptors are difficult to collect and encode, so the computer is used
to derive them. For example, slope can
be measured with an Abney hand level, but it is impractical to collect this
information for all of the 2,500 quarter-hectare locations depicted in the
project area. Similarly, the distance to
roads can be measured by a survey crew, but it is just too difficult. Note that these first two levels of model
abstraction are concrete descriptions of the landscape. The accuracy of both primary and derived maps
can be empirically verified simply by taking the maps to the field and
measuring.
The next two
levels, however, are an entirely different matter. It is at this juncture that GIS modeling is
moved from fact to judgment—from the description of the landscape (fact) to the prescription of a proposed
land use (judgment). The interpreted
maps are the result of assessing landscape factors in terms of an intended
use. This involves assigning a relative
"goodness value" to each map condition. For example, gentle slopes are preferred
locations for campgrounds. However, if
proposed ski trails were under consideration, steeper slopes would be
preferred. It is imperative that a
common goodness scale is used for all of the interpreted maps. Interpreting maps is like a professor's
grading of several exams during an academic term. Each test (viz. primary or derived map) is graded. As you would expect, some students (viz. map locations) score well on a particular exam, while others
receive low marks.
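A minimal sketch of the top line of the flowchart expressed as a command macro; the operation names (slope, reclassify) and the elevation values are hypothetical stand-ins reduced to toy grid operations, not the syntax of any particular GIS:

elev = [[100, 110, 130],
        [105, 120, 150],
        [110, 140, 180]]   # invented elevation surface (ELEV)

def slope(grid):
    """Crude steepness: largest drop to a right/down neighbor."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            nbrs = []
            if r + 1 < rows: nbrs.append(abs(grid[r][c] - grid[r + 1][c]))
            if c + 1 < cols: nbrs.append(abs(grid[r][c] - grid[r][c + 1]))
            out[r][c] = max(nbrs) if nbrs else 0
    return out

def reclassify(grid, breaks):
    """Assign a 1-9 'goodness' rating from (threshold, rating) breaks."""
    def rate(v):
        for threshold, rating in breaks:
            if v <= threshold:
                return rating
        return 1
    return [[rate(v) for v in row] for row in grid]

# ELEV -> SLOPE -> S-PREF (gentle slopes preferred for the development)
slope_map = slope(elev)
s_pref = reclassify(slope_map, [(10, 9), (20, 5), (40, 3)])
print(s_pref)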
The final suitability map is a composite of the
set of interpreted maps, similar to averaging individual test scores to form an
overall semester grade. In the figure,
the lower map inset identifies the best overall scores for locating a
development, and is computed as the simple average of the five individual
preference maps. However, what if the
concern for good views (V-PREF map) was considered ten times more important in siting the development than the other preferences? The upper map inset depicts the weighted
average of the preference maps showing that the good locations, under this
scenario, are severely cut back to just a few areas in the western portion of
the study area. But what if gentle
slopes (S-PREF map) were considered more important? Or proximity to water (W-PREF map)? Where are best locations under these
scenarios? Are there any consistently
good locations?
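A minimal sketch of the simple versus weighted compositing behind these scenarios, with five invented preference grids on a common 1-to-9 goodness scale:

import numpy as np

s_pref = np.array([[9, 5], [3, 9]])   # slope
r_pref = np.array([[7, 7], [5, 3]])   # road proximity
w_pref = np.array([[3, 9], [9, 5]])   # water proximity
v_pref = np.array([[1, 9], [7, 3]])   # views of water
a_pref = np.array([[5, 5], [9, 7]])   # westerly aspect

layers = np.stack([s_pref, r_pref, w_pref, v_pref, a_pref])

# Equal weighting: the simple average of the five interpreted maps.
simple = layers.mean(axis=0)

# Views considered ten times more important than the other factors.
weights = np.array([1, 1, 1, 10, 1], dtype=float)
weighted = (layers * weights[:, None, None]).sum(axis=0) / weights.sum()

print(simple, weighted, sep="\n")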
The ability to
interact with the derivation of a prescriptive map is what distinguishes GIS
modeling from the computer mapping and spatial database management activities
of the earlier eras. Actually, there are
three types of model modifications that can be made— weighting, calibration and
structural. Weighting modifications
affect the combining of the interpreted maps into an overall suitability map,
as described above. Calibration modifications affect the assignment of the individual
"goodness ratings." For
example, a different set of ranges defining slope “goodness” might be assigned,
and its impact on the best locations noted.
Weighting and
calibration simulations are easy and straightforward— edit a model parameter,
resubmit the macro and note the changes in the suitability map. Through repeated model simulation, valuable
insight is gained into the spatial sensitivity of a proposed plan to the
decision criteria. Structural modifications, on the other hand, reflect changes in
model logic by introducing new criteria.
They involve modifications in the structure of the flowchart and
additional programming code to the command macro. For example, a group of decision-makers might
decide that forested areas are better for a development than open terrain. To introduce the new criterion, a new
sequence of primary, derived and interpreted maps must be added to the
"aesthetics" compartment of the model reflecting the group’s
preference. It is this dynamic
interaction with maps and the derivation of new perspectives on a plan that
characterize spatial reasoning and dialogue.
GIS IN
By their
nature, all land use plans contain (or imply) a map. The issue is determining "what should go
where," and as noted above there is a lot of thinking that goes into a
final map recommendation (Berry and Berry 1988; Gimblett 1990). One cannot simply geo-query a database for the recommendation any more than one can arm a survey crew with a "land use-ometer" to measure the potential throughout a project area. The logic behind a land use model and its
interpretation by different groups are the basic elements leading to an effective
decision. During the deliberations, an
individual map is merely one rendering of the thought process.
The potential
of "interactive" GIS modeling extends far beyond its technical
implementation. It promises to radically
alter the decision-making environment itself.
A "case study" might help in making this claim. The study uses three separate spatial models
for allocating alternative land uses of conservation, research and residential
development. In the study, GIS modeling
is used in consensus building and conflict resolution to derive the "best"
combination of competing uses of the landscape.
The study takes place on the western tip of an island.
A map of
accessibility to existing roads and the coastline formed the basis of the Conservation Areas Model. In determining access, the slope of the
intervening terrain is considered. The
“slope-weighted proximity” from the roads and from the coastline was used. In these calculations, areas that appear
geographically near a road may actually be considered inaccessible if there are
steep intervening slopes. For example,
the coastline might be a “stone's throw away” from the road, but if it lands at
the foot of a cliff it is effectively inaccessible for recreation. The two maps of weighted proximity were
combined into an overall map of accessibility.
The final step of the model involved interpreting relative access into
conservation uses (Figure 4). Recreation
was identified for those areas near both roads and the coast. Intermediate access areas were designated for
limited use, such as hiking. Areas
effectively far from roads were designated as preservation areas.
The
characterization of the Research Areas
Model first used an elevation map to identify individual watersheds. The set of all watersheds was narrowed to just three based on the scientists' requirement for relatively large and wholly contained areas for their research (Figure 5). A sub-model used the prevailing current to
identify coastal areas influenced by each of the three terrestrial research
areas.
The Development Areas Model determined the
“best” locations for residential development.
The model structure used is nearly identical to that of the development
suitability model described in the section above. Engineering, aesthetic, and legal factors
were considered. As before, the
engineering and aesthetic considerations were treated independently, as
relative rankings. An overall ranking
was assigned as the weighted average of the five preference factors. Legal
constraints, on the other hand, were treated as critical factors. For example, an area within the 100-meter setback was considered unacceptable, regardless of its aesthetic or
engineering rankings.
The best areas for development were first
determined through equal consideration of the five criteria.
Figure 6 shows
a composite map containing the simple arithmetic average of the five separate
preference maps used to determine development suitability. The constrained and undesirable locations are
shown as white. Note that approximately
half of the land area is ranked as “Acceptable” or better (gradient of darker
tones). In averaging the five preference
maps, all criteria were considered equally important at this step.
The analysis
was extended to generate a series of weighted suitability maps. Several sets of weights were tried. The group finally decided on:
· view preference times 10 (Most Important)
· coast proximity times 8
· road proximity times 3
· aspect preference times 2, and
· slope preference times 1 (Least Important).
The resulting
map of the weighted averaging is presented in Figure 7. Note that a smaller portion of the land is
ranked as “Acceptable” or better. Also note that the spatial distribution of these prime areas is localized to distinct clusters.
The group of
decision-makers were actively involved in development of all three of the
individual models— conservation, research and development. While looking over the shoulder of the GIS
specialist, they saw their concerns translated into map images. They discussed whether their assumptions made
sense. Debate surrounded the
"weights and calibrations" of the models. They saw the sensitivity of each model to
changes in its parameters. In short,
they became involved and understood the map analysis taking place. The approach is radically different from
viewing a "solution" map with just a few alternatives developed by a
sequestered set of GIS specialists. It
enables decision-makers to be just that— decision-makers, not choice-choosers
constrained to a few pre-defined alternatives.
The involvement of decision-makers in the analysis process contributes
to consensus building. At this stage, the group reached consensus on
the three independent land use possibilities.
Determining “Best Mix” Suitability
The three
analyses, however, determined the best use of the project area considering the
possibilities in a unilateral manner.
What about areas common to two or more of the maps? These areas of conflict are where the
decision-makers need to focus their attention.
Three basic approaches are used in GIS-based conflict resolution— hierarchical dominance, compatible use and
tradeoff. Hierarchical dominance assumes certain land uses are more important
and, therefore, supersede all other potential uses. Compatible
use, on the other hand, identifies harmonious uses and can assign more than
one to a single location. Tradeoff recognizes mutually exclusive
uses and attempts to identify the most appropriate land use for each
location. Effective land use decisions
involve elements of all three of these approaches.
From a map
processing perspective, the hierarchical approach is easily expressed in a
quantitative manner and results in a deterministic solution. Once the political system has identified a
superseding use it is relatively easy to map these areas and simply assign the
dominant use. Similarly, compatible use
is technically easy from a map analysis context, though often difficult from a
policy context. When compatible uses can
be identified, both uses are assigned to all areas with the joint
condition.
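A minimal sketch of the hierarchical dominance rule, using an invented precedence order and invented per-cell candidate uses:

candidates = [
    [{"development"}, {"recreation", "development"}],
    [{"research", "recreation"}, set()],
]

# Precedence decided by the political process: research supersedes recreation,
# which supersedes development.
precedence = ["research", "recreation", "development"]

def dominant_use(uses):
    """Assign the highest-precedence use among those suitable at a location."""
    for use in precedence:
        if use in uses:
            return use
    return "unassigned"

assignment = [[dominant_use(cell) for cell in row] for row in candidates]
print(assignment)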
Most conflict,
however, arises when potential uses for a location are justifiable and
incompatible. In these instances,
quantitative solutions to the allocation of land use are difficult, if not
impossible, to implement. The complex
interaction of the spatial frequency and juxtapositioning of several competing
uses is still most effectively dealt with by human intervention. GIS technology assists decision-making by
deriving a map that indicates the set of alternative uses vying for each location. Once in this graphic form, decision-makers
can assess the patterns of conflicting uses and determine land use
allocations. Also, GIS can aid in these
deliberations by comparing different allocation scenarios and identifying the
areas of change.
Figure 8. Conflicts Map. The Conservation Areas, Research Areas and
Development Areas maps were overlaid to identify locations of conflict which
are deemed best for two or more uses.
In the case
study, the Hierarchical Dominance approach was tried, but resulted in total
failure. At the onset, the group was
uncomfortable with identifying one land use as always being better than
another. However, the approach was
demonstrated by identifying development as least favored, recreation next, and
the researchers' favorite watershed taking final precedence. The resulting map was unanimously rejected as it contained very little area for development, and what areas were available were scattered into disjointed parcels.
It graphically illustrated that even when decision-makers are able to
find agreement in “policy space,” it is frequently muddled in the complex
reality of geographic space.
The alternative approaches of compatible use and tradeoff fared better. Both approaches depend on generating a map
indicating all of the competing land uses for each location— a comprehensive conflicts map. Figure 8 is such a map considering the
Conservation Areas, Research Areas and Development Areas maps. Note that most of the area is without
conflict (lightest tone). In the absence
of the spatial guidance in a conflicts map, the group had a tendency to assume
that every square inch of the project area was in conflict. In the presence of the conflicts map,
however, their attention was immediately focused on the unique patterns of
actual conflict.
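A minimal sketch of building such a conflicts map from three invented suitability grids:

conservation = [[1, 1, 0], [0, 1, 0]]
research     = [[0, 1, 1], [0, 1, 0]]
development  = [[0, 0, 1], [1, 1, 0]]

def conflicts(*named_maps):
    """Code every location with the set of uses vying for it."""
    names, grids = zip(*named_maps)
    rows, cols = len(grids[0]), len(grids[0][0])
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            uses = tuple(n for n, g in zip(names, grids) if g[r][c])
            row.append(uses)          # empty tuple = no proposed use, hence no conflict
        out.append(row)
    return out

conflict_map = conflicts(("conservation", conservation),
                         ("research", research),
                         ("development", development))
for row in conflict_map:
    print(row)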
First, the
areas of actual conflict were reviewed for compatibility. For example, it was suggested that research
areas could support limited use hiking trails, and both activities were
assigned to those locations. However,
most of the conflicts were real and had to be resolved "the hard way."
Figure 9 presents the group's “best” allocation of land use. Dialogue and group dynamics dominated the
tradeoff process. As in all discussions,
individual personalities, persuasiveness, rational arguments and facts affected
the collective opinion. The easiest
assignment was the recreation area in the lower portion of the figure as this
use dominated the area. The next
break-through was an agreement that the top and bottom research areas should
remain intact. In part, this made sense
to the group as these areas had significantly less conflict than the central
watershed. It was decided that all
development should be contained within the central watershed. Structures would
be constrained to the approximately twenty contiguous hectares identified as
best for development, which was consistent with the island's policy of
encouraging “cluster” development. The
legally constrained area between the development cluster and the coast would be
for the exclusive use of the residents.
The adjoining research areas would provide additional buffering and open
space, thereby enhancing the value of the development. In fact, it was pointed out that this
arrangement provided a third research setting to investigate development, with
the two research watersheds serving as control.
Finally, the remaining small “salt and pepper” parcels were absorbed by
their surrounding 'limited or preservation use' areas.
In all, the
group's final map is a fairly rational land use allocation, and one that is
readily explained and justified.
Although the decision group represented several diverse opinions, this
final map achieved consensus. In
addition, each person felt as though they actively participated and, by using
the interactive process, better understood both the area's spatial complexity
and the perspectives of others.
This last step
involving human intervention and tradeoffs might seem anticlimactic to the
technologist. After a great deal of
rigorous GIS modeling, the final assignment of land uses involved a large
amount of group dynamics and subjective judgment. This point, however, highlights the
capabilities and limitations of GIS technology.
Geographic information systems provide significant advances in how we manage and analyze mapped data. The technology rapidly and tirelessly assembles detailed spatial information and allows the incorporation of sophisticated and realistic interpretations of landscape factors, such as weighted proximity and visual exposure. It does not,
however, provide an artificial intelligence for land use decision-making. GIS technology greatly enhances
decision-making capabilities, but does not replace them. In a sense, it is both a toolbox of advanced analysis capabilities and a sandbox to express decision-makers’
concerns, inspirations and creativity.
An Enabling Technology
The movement from
descriptive to prescriptive mapping has set the stage for revolutionary
concepts in map structure, content and use.
The full potential for GIS in decision-making, however, has not been realized, due at least in part to 1) the inherent complexity of a developing technology, 2) the unfamiliar nature of its products, and 3) the “user-abusive” nature of its use.
Digital maps
derived through spatial modeling are inherently different from traditional
analog maps, composed of inked lines, shading and graphic symbols used to
identify the precise placement of physical features. The modeled map is a reflection of the
logical reasoning of the analyst— more a spatial expression (application model)
than a simple geo-query of the coincidence of base map themes (data
sandwich). Until recently, this logic
was concealed in the technical language of the command macro. The general user required a GIS specialist as
a translator at every encounter with the technology. The concept of a dynamic map pedigree uses a
graphical user-interface to communicate the conceptual framework of a spatial
model and facilitate its interactive execution.
As GIS systems adopt a more humane approach, end users become directly
engaged in map analysis and spatial modeling— a situation that is changing the
course of GIS technology.
A Humane GIS
Within an
application model, attention is focused on the considerations embedded in an
analysis, as much as it is focused on the final map's graphical rendering. The map itself is valuable, yet the thinking
behind its creation provides the real insight for generating programs, plans
and policy. A dynamic map pedigree is an emerging concept for communicating
spatial reasoning that links a flowchart of processing (logic) to the actual
GIS commands (macro) (Davies and Medyckyj-Scott 1994; Wang 1994; Berry
1995a). GIS users need to interact with
a spatial model at several levels— casual, interactive, and developer. Figure 10 shows the extension of the flowchart for the development siting model previously described (Figure 3) into an interactive user interface linking the flowchart to the actual GIS code and map database. At one level (casual), a user
can interrogate the model's logic by mouse-clicking on any box (map) or line
(process) and the specifications for that step of the model pops-up. This affords a look into the spatial
reasoning supporting the application and facilitates understanding of the
model.
Figure 10. Dynamic Map Pedigree. The flowchart of a GIS model is dynamically
linked to GIS code through "pop-up" dialog boxes at each step.
At another
level (interactive), a user can change the specifications in any of the dialog
boxes and rerun the model. The updated
macro is automatically time-stamped and integrated into the legend of the new
modeled map. This provides an intuitive
interface to investigating “what if…” scenarios. For example, the processing depicted in
Figure 10 shows changing the averaging of the preference maps so proximity to
roads (R-PREF TIMES 10) is ten times more important in determining
suitability. The suitability map
generated under these weights can be compared to other model runs, and changes
in spatial arrangement of relative development suitability are automatically
highlighted. At the highest level (developer),
a user can modify the logical structure of a model. The flowchart can be edited (e.g., cut/paste,
insert, delete) and the corresponding GIS code written and/or updated.
The dynamic map
pedigree provides three major improvements over current approaches. First, the graphical interface to spatial models releases users from the burden of directly generating GIS code, thereby avoiding its steep learning curve.
Secondly, it furnishes an interactive stamp of model logic and
specifications with each map generated.
Finally, it establishes a general structure for spatial modeling that is
not directly tied to individual GIS systems.
In short, it provides an interface that stimulates spatial reasoning
without requiring a GIS degree to operate— a humane GIS.
Trends, Directions and Challenges
What began in
the 60's as a cartographer's tool has quickly evolved into a revolution in many
disciplines (Thomas and Huggett 1980; Goodchild et al. 1992; Berry 1994; Maguire and Dangermond 1994; Ottens 1994; Rix 1994). As general users become more directly engaged, the nature of GIS applications changes.
Early applications emphasized mapping and spatial database management. Increasingly, applications are emphasizing
modeling of the interrelationships among mapped variables. Most of these applications have involved cartographic modeling, which employs GIS
operations that mimic manual map processing, such as map overlay and simple
buffering around features. The new wave
of applications concentrates on GIS modeling,
which employs spatial statistics and advanced analytical operations. These new applications can be grouped into
three categories: 1) data mining, 2) predictive modeling and 3) dynamic
simulation.
Technological Advances
Data mining uses the GIS to discover relationships
among mapped variables. For example, a
map of dead and dying spruce/fir parcels can be statistically compared to maps
of driving variables, such as elevation, slope, aspect, soil type and depth to
bedrock. If a strong spatial correlation
(coincidence) is identified for a certain combination of driving variables,
this information can be used to direct management action to areas of living
spruce/fir under the detrimental conditions.
Another form of data mining is the derivation of empirical models. For example, the geographic distribution of
lead concentrations in an aquifer can be interpolated from water samples taken
at local wells as described previously.
Areas of unusually high concentrations (more than one standard deviation
above the average) can be isolated. If a time series of samples is considered and the maps of the high concentrations are animated, the contamination will appear to move through the aquifer— forming an empirical ground water model.
A “blob” moving across the map indicates an event, whereas a steady
“stream” indicates a continuous discharge of a pollutant. The locations in front of the animated
feature can be assumed to be the next most likely area to be affected. Data investigation and visualization will
increasingly extend beyond the perspective of traditional map renderings.
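Returning to the first form of data mining described above, a minimal sketch relates a response map to one candidate driving variable with a simple correlation; the dead/dying and elevation grids are invented for illustration.

import numpy as np

dying = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1],
                  [0, 0, 0, 1]], dtype=float)       # 1 = dead/dying spruce/fir parcel
elev  = np.array([[2100, 2250, 2600, 2700],
                  [2150, 2500, 2650, 2750],
                  [2000, 2200, 2300, 2800]], dtype=float)

# Correlate the two maps cell by cell (each location is a "case").
r = np.corrcoef(dying.ravel(), elev.ravel())[0, 1]
print(round(r, 2))   # a strong positive value would point management to high-elevation stands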
Most predictive modeling is currently
non-spatial. Environmental data are collected by sampling large areas, then used to fit a mathematical model, such as a regression equation, linking the variable to be predicted to other, more easily obtained variables. The model is applied by collecting data on
the driving variables for another area or period in time, reducing the
measurements to typical values (arithmetic averages), then evaluating the
prediction equation. An analogous
spatially-based approach was discussed within the context of precision farming
in which map variables of yield and soil nutrients were used.
Another
example, involves the derivation of a prediction equation for the amount of
breakage during timber harvesting.
Breakage is defined in terms of percent slope, tree diameter, tree
height, tree volume and percent defect— with big old rotten trees on steep
slopes having the most breakage. A
traditional non-spatial approach ignores the geographic distribution of
variation in field collected data by assuming that the “average tree on average
terrain” is everywhere. Its prediction
is a single level of breakage for the entire area, extended within a range of
probable error (standard deviation). In
a mathematical sense, the non-spatial approach assumes that the variables are
randomly, or uniformly, distributed throughout a project area and that the
variables are spatially independent.
Both parts of the assumption are diametrically opposed to ecological
theory and evidence. Most environmental
phenomena coalesce into niches responding to a variety of physical and social
factors.
The GIS
modeling solution spatially interpolates the data into maps of each variable,
then solves the equation for all locations in space. This approach generates a map of predicted
breakage with "pockets" of higher and lower breakage than expected
clearly identified. The coincidence of the spatial patterns of the variables is preserved, thereby relaxing the unrealistic assumptions of random/uniform geographic distribution and spatial independence. The direct consideration of the spatial patterns and coincidence among mapped data will increasingly refine environmental predictions and management actions, making them more responsive to the unique conditions in a project area.
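A minimal sketch of the spatial prediction, with a hypothetical breakage equation (the coefficients are invented, not fitted) evaluated at every cell:

import numpy as np

slope_pct  = np.array([[10, 35], [55, 20]], dtype=float)   # percent slope
diameter   = np.array([[30, 45], [60, 35]], dtype=float)   # tree diameter, cm
defect_pct = np.array([[ 5, 15], [30, 10]], dtype=float)   # percent defect

# Hypothetical regression coefficients; a real model would be fit to field data.
breakage = 2.0 + 0.15 * slope_pct + 0.10 * diameter + 0.40 * defect_pct

# Non-spatial, "average tree on average terrain" prediction, for contrast.
whole_area = 2.0 + 0.15 * slope_pct.mean() + 0.10 * diameter.mean() + 0.40 * defect_pct.mean()

print(breakage)                     # per-cell map with "pockets" of higher and lower breakage
print(round(float(whole_area), 1))  # one number for the whole area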
Dynamic simulation allows the
user to interact with a GIS model. If
model parameters are systematically modified and the induced changes in the
final map tracked, the behavior of the model can be investigated. This “sensitivity analysis” identifies the
relative importance of each mapped variable, within the context of the unique geographic setting in which it is applied. In the
timber breakage example, the equation may be extremely sensitive to steep
slopes. However, in a project area with
a maximum slope of only ten percent, tree height might be identified as the
dominant variable. A less disciplined
use of dynamic simulation enables a GIS to act like a spatial spreadsheet and
address “what if…” questions. Such
queries address natural curiosity as much as they provide insights into system
sensitivities. Both simulation versions
aid decision-makers in understanding the linkages among the variables and help
identify critical ranges. The use of
dynamic simulation will increasingly involve decision-makers in the analysis
(thruput) phase of environmental policy and administration.
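A minimal sketch of such a sensitivity run, perturbing one coefficient of the invented breakage equation from the previous sketch and tracking the induced change in the output map:

import numpy as np

slope_pct  = np.array([[10, 35], [55, 20]], dtype=float)
diameter   = np.array([[30, 45], [60, 35]], dtype=float)
defect_pct = np.array([[ 5, 15], [30, 10]], dtype=float)

def breakage(slope_coef):
    """Predicted breakage map for a given (hypothetical) slope coefficient."""
    return 2.0 + slope_coef * slope_pct + 0.10 * diameter + 0.40 * defect_pct

base = breakage(0.15)
for coef in (0.10, 0.20, 0.30):                 # systematically vary the slope term
    change = np.abs(breakage(coef) - base)
    print(coef, round(float(change.mean()), 2), round(float(change.max()), 2))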
Technology Versus Science
In many respects, the emerging applications of data mining, predictive modeling, and dynamic simulation have “the technological cart in front of the scientific horse.” GIS can store tremendous volumes of descriptive data and overlay a myriad of maps for their coincidence. It has powerful tools for expressing the spatial interactions among mapped variables. However, there is a chasm between GIS technology and applied science. The bulk of scientific knowledge lacks spatial specificity in the relationships among variables. Now that there is a tool that can characterize spatial relationships (the cart), the missing piece is an understanding of how those relationships express themselves in complex systems (the horse).
For example, a GIS can characterize the changes in the relative amount of edge in a landscape by computing a set of fractal dimension maps. This, and over sixty other landscape analysis indices, allows tracking of changes in landscape structure, but the impact of those changes on wildlife is beyond current scientific knowledge. Similarly, a GIS can characterize the effective sediment loading distance from streams as a function of slope, vegetative cover and soil type. It is common sense that areas with stable soil on gentle, densely vegetated intervening slopes are effectively farther away from a stream than areas of unstable soil on steep, sparsely vegetated intervening slopes. But how are effective sediment loading distances related to the survival of fish? Neighborhood variability statistics allow us to track the diversity, interspersion and juxtaposition of vegetative cover— but how are these statistics translated into management decisions about elk herd populations?
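A neighborhood diversity statistic of the kind mentioned above can be sketched in a few lines; the cover-type codes and the 3x3 window are assumptions, and production tools such as FRAGSTATS (McGarigal and Marks 1995) compute this and dozens of related indices.

import numpy as np

# Hypothetical cover-type map (one integer code per grid cell)
cover = np.array([
    [1, 1, 2, 2],
    [1, 3, 2, 2],
    [3, 3, 3, 4],
    [3, 4, 4, 4],
])

def neighborhood_diversity(grid):
    # count the distinct cover types within each cell's 3x3 neighborhood
    rows, cols = grid.shape
    out = np.zeros_like(grid)
    for r in range(rows):
        for c in range(cols):
            window = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            out[r, c] = len(np.unique(window))
    return out

print(neighborhood_diversity(cover))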
The mechanics of GIS in integrating multiple phenomena are well established. The functionality needed to relate the
spatial relationships among mapped variables is in place. What is lacking is the scientific knowledge
to exploit these capabilities. Until
recently, GIS was thought of as a manager's technology focused on inventory and
record-keeping. Even the early
scientific applications used it as an electronic planimeter to aggregate data
over large areas for input into traditional, non-spatial models. The future will see a new era of scientific research in which spatial analysis plays an integral part and its results are expressed in GIS modeling terms. The
opportunity to have both the scientific and managerial communities utilizing
the same technology is unprecedented.
Until then, however, frustrated managers will use the analytical power
of GIS to construct their own models based on common (and uncommon) sense.
The direct engagement of general users will increasingly call into question the traditional concept of a map and its use. To more
effectively portray a unified landscape, GIS must step beyond its classical
disciplines. Traditional concepts of a
map 1) distort the reality of a three-dimensional world into a two-dimensional
abstraction, 2) selectively characterize just a few elements from the actual
complexity of the spatial reality, and 3) attempt to portray environmental
gradients and conceptual abstractions as distinct spatial objects. The imposition of our historical concept of a
map constructed of inked lines, shading and symbols thwarts the exploitation of
the full potential of mapped data expressed in digital form.
The concepts of “synergism,” “cumulative effects,” and “ecosystem management” within the environmental and natural resources communities are pushing the envelope of GIS's ability to characterize a unified landscape. Historically, system models have required a discrete, piecemeal approach; a unified landscape, however, is by nature a holistic phenomenon. For example, consider how a hiking trail that maximizes cover type diversity might be identified. An atomistic approach would begin at the trailhead, test the neighborhood around each location and step to a different cover type whenever possible. This approach, however, could commit to a monotonous path after the first few diverse steps. Had a few seemingly sub-optimal steps been made at the start, a much more diverse route might have resulted.
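A toy sketch of this atomistic (greedy) strategy makes the trap easy to see; the cover-type map, trailhead and step limit are invented for illustration.

import numpy as np

# Hypothetical cover-type map and a (row, column) trailhead
cover = np.array([
    [1, 2, 1, 1],
    [1, 3, 1, 1],
    [1, 1, 1, 4],
    [1, 1, 5, 1],
])

def greedy_trail(grid, start, steps):
    # atomistic approach: at each location, test the neighborhood and step
    # to a different cover type whenever possible (ties broken arbitrarily)
    trail = [start]
    r, c = start
    for _ in range(steps):
        candidates = [(r + dr, c + dc)
                      for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                      if 0 <= r + dr < grid.shape[0]
                      and 0 <= c + dc < grid.shape[1]
                      and (r + dr, c + dc) not in trail]
        if not candidates:
            break
        different = [n for n in candidates if grid[n] != grid[r, c]]
        r, c = (different or candidates)[0]
        trail.append((r, c))
    return trail

path = greedy_trail(cover, start=(0, 0), steps=6)
print(path, [int(cover[p]) for p in path])

Because each step looks only one cell ahead, the walk can strand itself among identical cover types after a promising start.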
A holistic modeling approach requires the assimilation of an entire system at the outset of an analysis. Conventional mapping and GIS modeling
approaches characterize the landscape in an atomistic fashion. GIS can benefit from the advancements in
holistic modeling made by artificial intelligence, chaos theory and fuzzy
logic. These approaches attempt to
account for inference, abrupt changes and uncertainty. Instead of a deterministic solution with a
single map portrayal, they establish the "side-bars" of system response. If applied to GIS, these emerging map-ematical
techniques might provide a more realistic description of a system “whose whole
is greater than the sum of its individual parts.”
Equally
important is the recognition of perception
as an additional element of a landscape.
Each individual has a unique set of spiritual, cultural, social and
interpersonal experiences which form their perspective of a landscape. The ability to map these considerations requires
a closer marriage between GIS and the social sciences. As the future of GIS unfolds, maps will be
viewed less as a static description of the landscape and more as an active
process accounting for inherent variability in perception, as well as spatial
descriptors. To move from tool
development to a true discipline, GIS needs an infusion of ideas from a wealth
of “neo-related fields” not traditionally thought of as its bedfellows, such as
the social sciences.
CONCLUSION
Environmental
policy and administration have always required information as their
cornerstone. Early information systems
relied on physical storage of data and manual processing. With the advent of the computer, most of
these data and procedures have been automated during the past two decades. As a result, environmental information
processing has become increasingly quantitative. Systems analysis techniques have developed links between descriptive data of the landscape and the mix of management actions that maximizes a set of objectives.
This mathematical approach to environmental management has been both
stimulated and facilitated by modern information systems technology. The digital nature of mapped data in these
systems provides a wealth of new analysis operations and an unprecedented
ability to spatially model complex environmental issues. The full impact of the new data form and
analytical capabilities is yet to be determined.
Effective GIS
applications have little to do with data and everything to do with
understanding, creativity and perspective.
It is a common observation of the Information Age that the amount of knowledge doubles every 14 months or so. It is believed that, with the advent of the information superhighway, this pace will only quicken. But
does more information directly translate into better decisions? Does the Internet enhance information
exchange or overwhelm it? Does the
quality of information correlate with the quantity of information? Does the rapid boil of information improve or
scorch the broth of decisions?
GIS technology is a prime contributor to the landslide of information, as terabytes of mapped data are feverishly released on an unsuspecting (and seemingly ungrateful) public. From a GIS-centric perspective, the delivery of accurate base data is enough. However, the full impact of the technology is in the translation of "where is what" into "so what." The effects of information rapid transit on
our changing perceptions of the world around us involve a new expression of the
philosophers’ view of the stages of enlightenment— data, information,
knowledge, and wisdom. The terms are
often used interchangeably, but they are distinct from one another in some
subtle and not-so-subtle ways.
The first is
data, the "factoids" of our Information Age. Data are bits of information, typically, but not exclusively, in numeric form, such as cardinal numbers, percentages, statistics, etc. It is exceedingly obvious that data are increasing at an incredible rate. Coupled with this barrage of data is a requirement for the literate citizen of the future to have a firm understanding of averages, percentages and, to a certain extent, statistics. More and more, these types
of data dominate the media and are the primary means used to characterize public
opinion, report trends and persuade specific actions.
The second
term, information, is closely related to data.
The difference is that we tend to view information as more word-based
and/or graphic than numeric. Information is data with
explanation. Most of what is taught in
school is information. Because it
includes all that is chronicled, the amount of information available to the
average citizen substantially increases each day. The power of technology to link us to
information is phenomenal. As proof,
simply "surf" the exploding number of "home pages" on the
Internet.
The
philosophers' third category is knowledge,
which can be viewed as information within a context. Data and information that are used to explain
a phenomenon become knowledge. It probably does not double as quickly, but that has more to do with the learner and processing techniques than with what is available. In other words, data and information become
knowledge once they are processed and applied.
The last
category, wisdom, certainly does not
double at a rapid rate. It is the
application of all three previous categories, and some intangible
additions. Wisdom is rare and timeless,
and is important because it is rare and timeless. We seldom encounter new wisdom in the popular
media, nor do we expect a deluge of newly derived wisdom to spring forth from
our computer monitors each time we log on.
Knowledge and wisdom, like gold, must be aggressively processed from tons of near-worthless overburden. Simply increasing data and information does not assure increasing amounts of the knowledge and wisdom we need to solve pressing environmental and resource problems. Increasing the processing "thruput" by efficiency gains and new approaches might.
How does this
philosophical diatribe relate to GIS technology? What is GIS’s role within the framework?
What does GIS deliver-- data, information, knowledge or wisdom? Actually, if GIS is appropriately presented,
nurtured and applied, it can affect all four.
That is, provided the technology's role is recognized as an additional link that the philosophers failed to note.
Understanding
sits at the juncture between the data/information and knowledge/wisdom stages
of enlightenment. Understanding involves the honest dialogue among various
interpretations of data and information in an attempt to reach common knowledge
and wisdom. Note that understanding is
not a "thing," but a process.
It is how concrete facts are translated into the slippery slope of
beliefs. It involves the clash of
values, tempered by judgment based on the exchange of experience. Technology, and in particular GIS, has a
vital role to play in this process. It
is not sufficient to deliver spatial data and information; a methodology for
translating them into knowledge and wisdom is needed.
Tomorrow's GIS
builds on the cognitive basis, as well as the spatial databases and analytical
operations of the technology. This new
view pushes GIS beyond data mapping, management and modeling, to spatial
reasoning and dialogue focusing on the communication of ideas. In a sense, GIS extends the analytical
toolbox to a social "sandbox," where alternative perspectives are constructed and discussed, and common knowledge and wisdom are distilled.
This step needs
to fully engage the end-user in GIS itself, not just its encoded and derived
products. It requires a democratization
of GIS that goes beyond a graphical user interface and cute icons. It obligates the GIS technocrats to explain
concepts in layman's terms and provide access to their conceptual expressions of
geographic space. In turn, it requires
environmental professionals to embrace the new approaches to spatial reasoning
and dialogue. GIS has an opportunity to
empower people with new decision-making tools, not simply entrap them in a new
technology and an avalanche of data. The
mapping, management and modeling of spatial data is necessary, but not
sufficient for effective solutions. Like
the automobile and indoor plumbing, GIS will not be an important technology in
environmental policy and administration until it fades into the fabric of the
decision-making process and is taken for granted. Its use must become second nature for both
accessing spatial data/information and translating it into the
knowledge/wisdom needed to address increasingly complex environmental
issues.
References
·
Abler, R.J., J. Adams, and P. Gould, 1971. Spatial Organization: The Geographer’s
View of the World, Prentice Hall, Englewood Cliffs NJ.
·
Burgess, T. and R. Webster, 1980. “Optimal Interpolation
and Isarithmic Mapping of Soil Properties: The Semi-Variogram and Punctual
Kriging,” Journal of Soil Science, 31: 315-31.
Brown, L.A., 1949. The Story of Maps, Little, Brown and Company, Boston MA.
·
Burrough, P.A., 1987. Principles of Geographical Information Systems for Land Resources Assessment, Clarendon Press, Oxford UK.
·
Calkins, H.W., 1991. “GIS and Public Policy,” in Geographic
Information Systems: Principles and Applications, ed. D.J. Maguire, M.F.
Goodchild, and D.W. Rhind, Longman Scientific and Technical Press.
·
Carter, J.R., 1989. “On Defining the Geographic Information
Systems,” in Fundamentals of Geographic Information Systems: A Compendium, ed.
W. Ripple, American Society of Photogrammetry and Remote Sensing.
·
Coppock, J. and D. Rhind, 1991. “The History of GIS,” in Geographic
Information Systems: Principles and Applications ed. D.J. Maguire, M.F.
Goodchild, and D.W. Rhind, Longman Scientific and Technical Press.
·
Cressie, N., 1991. Statistics for Spatial Data, John
Wiley and Sons.
·
Cressie, N., 1993. “Geostatistics: A Tool for Environmental
Modelers,” in Environmental Modeling with GIS, ed. M.F. Goodchild, B.O.
Parks, and L.T. Steyaert, Oxford University Press, Oxford UK, 414-421.
·
Cressie, N. and J.M. Ver Hoef, 1993. “Spatial Statistical Analysis of
Environmental and Ecological Data,” in Environmental Modeling with GIS,
ed. M.F. Goodchild, B.O. Parks, and L.T. Steyaert, Oxford University Press,
Oxford UK, 404-413.
·
Cuff, D.J. and M.T. Matson, 1982. Thematic Maps.
·
Davies, C. and D. Medyckyj-Scott, 1994. “GIS Usability: Recommendations Based on the
User’s View,” International Journal of Geographical Information Systems, 8:
175-89.
·
Densham, P.J., 1991. “Spatial Decision Support Systems,” in Geographic
Information Systems: Principles and Applications, ed. D.J. Maguire, M.F.
Goodchild, and D.W. Rhind, Longman Scientific and Technical Press.
·
Eldridge, J. and J. Jones, 1991. "Warped Space: A Geography of Distance
Decay,” Professional Geographer, 43(4): 500-511.
·
Epstein, E.F., 1991. “Legal Aspects of GIS,” in Geographic
Information Systems: Principles and Applications, ed. D.J. Maguire, M.F.
Goodchild, and D.W. Rhind, Longman Scientific and Technical Press.
·
Gimblett, H.R., 1990. “Visualizations: Linking Dynamic Spatial
Models with Computer Graphics Algorithms for Simulating Planning and Management
Decision,” Journal of Urban and Regional Information Systems, 2(2): 26-34.
·
Goodchild, M.F., 1987. “A Spatial Analytical Perspective on
Geographical Information Systems,” International Journal of Geographical
Information Systems, 1(4): 327-34.
·
Goodchild, M.F, R. Haining, and S. Wise,
1992. “Integrating GIS and Spatial Data
Analysis: Problems and Possibilities,” International Journal of Geographical
Information Systems, 6(5): 407-23.
·
Goodchild, M.F., B.O. Parks, L.T. Steyaert
(eds), 1993. Environmental Modeling
with GIS,
·
Johnson, L.B., 1990. “Analyzing Spatial and Temporal Phenomena
Using Geographical Information Systems” A Review of Ecological Applications,”
Landscape Ecology, 4(1): 31-43.
·
King, J.L. and K.L. Kraemer, 1993. “Models, Facts and the Policy Process: The
Political Ecology of Estimated Truth,” in Environmental Modeling with GIS,
ed. M.F. Goodchild, B.O. Parks, and L.T. Steyaert, Oxford University Press,
Oxford UK, 353-360.
·
Korte, G.B., 1993. The GIS Book.
·
Lam, N.S., 1983.
“Spatial Interpolation Methods: A Review,” American Cartographer, 10:
129-49.
·
Leick, A., 1990.
GPS Satellite Surveying, John Wiley and Sons.
·
Maffini, G., 1987. “Raster versus Vector Data Encoding and
Handling: A Commentary,” Photogrammetric Engineering and Remote Sensing,
53(10): 1397-98.
·
Maguire, D.J., M.F. Goodchild, and D.W. Rhind (eds), 1991a. Geographic Information Systems: Principles and Applications, Vol. 2 (Applications), Longman Scientific and Technical Press.
·
Maguire, D.J., M.F. Goodchild, and D.W. Rhind (eds), 1991b. Geographic Information Systems: Principles and Applications, Vol. 1 (Principles), Longman Scientific and Technical Press.
·
Maguire, D.J. and J. Dangermond, 1994. “Future of GIS Technology,” in The CGI
Source Book for Geographic Systems, The Association for Geographic
Information.
·
McGarigal, K. and B. Marks, 1995. FRAGSTATS: Spatial Pattern Analysis
Program for Quantifying Landscape Structure, USDA-Forest Service, Technical
Report PNW-GTR-351.
·
McHarg, I.L., 1969. Design with Nature, Doubleday/Natural
History Press, Garden City, NY.
·
Medyckyj-Scott, D. and H.M. Hearnshaw (eds), 1993. Human Factors in Geographical Information Systems, Belhaven Press.
·
Meyers, D.E., 1988. “Multivariate Geostatistics for Environmental
Monitoring,” Sciences de la Terre, 27: 411-27.
·
Muehrcke, P.C. and J.O. Muehrcke, 1980. Map Use: Reading, Analysis, and Interpretation, JP Publications, Madison WI.
·
Muller, J.C., 1982. “Non-Euclidean Geographic Spaces: Mapping
Functional Distances,” Geographical Analysis, 14: 189-203.
·
Ottens, H., 1994. “Relevant Trends for Geographic Information
Handling,” Geo-Info Systems, 4(8): 23.
·
Parent, P. and R. Church, 1989. "Evolution of Geographic Information Systems as Decision Making Tools," in Fundamentals of Geographic Information Systems: A Compendium, ed. W. Ripple, American Society of Photogrammetry and Remote Sensing.
·
Peucker, T. and N. Chrisman, 1975. "Cartographic Data Structures," American Cartographer, 2(1): 55-69.
·
Piwowar, J.M., et al., 1990. "Integration of Spatial Data in Vector and
Raster Formats in a Geographic Information System,” International Journal of
Geographic Information Systems 4(4): 429-44.
·
Ripley, B.D., 1981. Spatial Statistics, John Wiley and
Sons.
·
Ripple, W. (ed), 1987. GIS for Resource Management: A Compendium, American Society of Photogrammetry and Remote Sensing.
·
Ripple, W. (ed), 1989. Fundamentals of Geographic Information
Systems: A Compendium, American Society of Photogrammetry and Remote
Sensing.
·
Ripple, W. (ed), 1994. The GIS Applications Book: Examples in
Natural Resources, American Society of Photogrammetry and Remote Sensing.
·
Rix, D., 1994.
“Recent Trends in GIS Technology,” in The CGI Source Book for
Geographic Systems, The Association for Geographic Information.
·
Robertson, A., R. Sale, and J. Morrison,
1982. Elements of Cartography, 4th
edition, John Wiley and Sons.
·
Shepherd, J.D., 1991. “Information Integration and GIS,” in
Geographical Information Systems: Principles and Applications ed. D.J.
Maguire, M.F. Goodchild, and D.W. Rhind, Longman Scientific and Technical Press.
·
Star, J. and J. Estes, 1990. Geographic Information Systems: An Introduction, Prentice-Hall.
·
Steinitz, C.F., et al., 1976. "Hand Drawn Overlays: Their History and
Prospective Uses,” Landscape Architecture, 66: 444-55.
·
Thomas, R. and R. Huggett, 1980. Modelling in Geography: A Mathematical
Approach, Harper and Row.
·
Tomlin, C.D., 1990. Geographic Information Systems and
Cartographic Modeling, Prentice Hall.
·
Turner, M.G., 1990. “Spatial and Temporal Analysis of Landscape
Patterns," Landscape Ecology.
·
Unwin, D., 1981.
Introductory Spatial Analysis.
·
Wang, F., 1994.
“Towards a Natural Language User Interface: An Approach of Fuzzy Query,”
International Journal of Geographical Information Systems, 8: 143-62.
·
Webster, R. and T. Burgess, 1980. “Optimal Interpolation and Isarithmic Mapping
of Soil Properties: Changing Drift and Universal Kriging,” Journal of Soil
Science, 31: 505-24.