Applying the Delphi Process
The following extends the brief description in the Beyond Mapping column on “Calibrating and Weighting GIS Model Criteria.” A companion discussion on applying the Analytical Hierarchy Process (AHP) for “weighting GIS model criteria” is posted at— http://www.innovativegis.com/basis/bm/Beyond.htm#BM_supplements.
What is the Delphi Process?
The Delphi Process provides a structured method for developing consensus in areas that are hard to quantify or difficult to predict. It was originally developed in the 1950s by the Rand Corporation for forecasting future scenarios, but has since been used as a generic strategy for group-based decisions in a variety of fields.
The essence of the process is anonymous, iterative interaction with controlled feedback that moves a group of experts toward consensus.
What kind of information is gained?

The process yields consensus calibration ratings for each decision element in a model, reflecting the participants' combined intuition and expert knowledge.
What is involved in the process?
The Delphi Process involves anonymity, iteration with controlled feedback (both qualitative and quantitative) and documentation of group interaction. It includes a series of “rounds” that solicit group responses to a set of questions— the answers are tabulated, and the results are used to form the basis for the next round. Through several iterations, this process synthesizes the responses, most often resulting in consensus that reflects the participants' combined intuition and expert knowledge.
Who should be involved?
In the routing of an electric transmission line described in the Beyond Mapping series (see above reference), “Discipline Experts” identify model criteria and conceptual structuring of the problem. “GIS Experts” provide input on data availability, analysis techniques required and technical structuring of the model.
How are the decision elements identified and calibrated?
Two levels of interactive group discussion are involved in
developing a GIS model. At the first
level, decision elements (map layers) that “drive” the problem are identified
and used to structure the model. At the
second level, the decision elements are calibrated to reflect the appropriate
interpretation of the criteria.
The calibration of the derived maps (Level 2 discussion) reflects the model’s objectives and includes avoiding locations that 1) have high Visual Exposure (VE= V_Exposure_rating), 2) are close to Sensitive Areas (SA= SA_Proximity_rating), 3) are close to Roads (R= R_Proximity_rating) and 4) have high Housing Density (HD= H_Density_rating). The MapCalc command Renumber is used to reclassify the derived maps (set the ratings).
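The reclassification that Renumber performs can be sketched outside MapCalc as well. The following Python fragment shows the idea of mapping continuous cell values into the 1 to 9 preference ratings; the cutoff values are hypothetical stand-ins, since the real ones come out of the Delphi rounds described below:

```python
from bisect import bisect_left

# Hypothetical cutoffs (inclusive upper bounds) separating the nine
# preference levels for a continuous layer such as housing density.
HD_CUTOFFS = [2, 5, 8, 12, 17, 23, 30, 40]   # 8 cutoffs -> 9 levels

def renumber(value, cutoffs):
    """Reclassify a continuous map value into a rating from
    1 (most preferred) to 9 (least preferred), analogous to
    MapCalc's Renumber command."""
    return bisect_left(cutoffs, value) + 1

# Reclassify a few hypothetical housing-density cell values:
ratings = [renumber(v, HD_CUTOFFS) for v in (0, 10, 50)]   # [1, 4, 9]
```

In a real grid-based system the same lookup would be applied to every cell of the derived map.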
The following discussion illustrates a single iteration in applying the Delphi Process to the transmission line routing model described in the Beyond Mapping column posted at— http://www.innovativegis.com/basis/bm/Beyond.htm#BM_supplements.
What constitutes the first round?
What is the content and form of the questionnaire?
The questionnaire contains a series of statements that provide a consistent scale for calibrating the map layers. A calibration scale of 1 (most preferred) to 9 (least preferred) is used. The respondent identifies “cutoff values” for the data ranges for continuous mapped data and directly assigns ratings to map categories for discrete maps. It is critical that a consistent scale is applied independently to all maps and that each map contains at least one 1 (most preferred) and one 9 (least preferred) rating.
The questionnaire comprises a series of questions soliciting cutoff values for continuous maps or direct rating assignments for discrete maps. A calibration question is developed for each map layer in the model. In the routing example, a question involving housing density (continuous) might be written as:
In
terms of a preference to avoid areas of high housing density when routing
electric transmission lines, what cutoffs for housing density are appropriate
for the nine preference levels indicated below?
                  Level    Cutoff      Implied
Most Preferred      1      ________    ________ to ________
                    2      ________    ________ to ________
Good                3      ________    ________ to ________
                    4      ________    ________ to ________
OK                  5      ________    ________ to ________
                    6      ________    ________ to ________
Marginal            7      ________    ________ to ________
                    8      ________    ________ to ________
Least Preferred     9      ________    ________ to ________
Self-rated expertise level on this rating (1= low to 9= high) _____________
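The “Implied” column follows mechanically from the cutoffs: each level’s data range runs from the previous level’s cutoff to its own. A small sketch (with hypothetical cutoff values) makes the bookkeeping explicit:

```python
def implied_ranges(cutoffs, minimum=0):
    """Turn the 'Cutoff' entries (upper bounds, one per preference level)
    into the implied 'from .. to ..' data range for each level."""
    ranges, low = [], minimum
    for high in cutoffs:
        ranges.append((low, high))
        low = high
    return ranges

# Hypothetical housing-density cutoffs for levels 1..3:
# implied_ranges([2, 5, 8]) -> [(0, 2), (2, 5), (5, 8)]
```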
What constitutes the second round?
Working separately, the respondents enter map values for the “Cutoff” and “Implied” columns of each question.
Note: The individual categories defining a discrete map are listed with a column for participants to record their preference level for each map category. Preference levels can be repeated, but each map layer must contain a 1 (most preferred) and a 9 (least preferred) assignment.
How are the individual responses recorded at the completion of the second round?
Face-to-face group meetings are best; however, a conference call coupled with online responses can be used if travel is impractical. In either case, the individual responses are recorded in a spreadsheet for statistical summary.
What information is in the controlled feedback from round 2?

The group’s responses to each question are statistically summarized, typically as the average (mean) value and a measure of the spread in responses, such as the coefficient of variation, and the summary is reported back to the participants.

Note: For discrete map ratings a similar set of statistics is calculated for each map category.
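The controlled feedback is essentially a statistical summary of the spreadsheet. As a sketch, assuming each question’s column has been read into a Python list (one entry per respondent), the summary might be computed as:

```python
from statistics import mean, stdev

def summarize(responses):
    """Mean, standard deviation and coefficient of variation (%) of one
    question's responses across all participants."""
    avg = mean(responses)
    sd = stdev(responses)
    return avg, sd, 100.0 * sd / avg

# Hypothetical Level-5 housing-density cutoffs from six respondents:
avg, sd, cv = summarize([15, 18, 17, 20, 16, 18])
# A large coefficient of variation flags a question needing discussion.
```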
How are the final calibration ratings developed in the third round?
The statistical summary of the group’s responses serves as a catalyst for further discussion. Each question is revisited and the group discusses why a cutoff value should be higher or lower than the mean. The coefficient of variation is an indicator of the amount of compromise needed to reach consensus.
In many cases, group consensus is reached and final ratings can be directly assigned. If not, new response sheets for the questions in conflict are distributed and the participants are asked to re-enter their ratings based on the extended discussion. Members of the group expressing extreme views are asked to develop a brief written statement justifying their position. The process is repeated until an acceptable coefficient of variation is reached and the group mean is assigned, or a deadlock occurs.
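That stopping rule can be sketched as a loop. The re-survey step and the consensus threshold below are illustrative assumptions, not part of the original procedure:

```python
from statistics import mean, stdev

def delphi_consensus(responses, resurvey, cv_target=10.0, max_rounds=5):
    """Repeat rounds until the coefficient of variation (%) of the group's
    responses falls to cv_target or below (consensus: assign the group
    mean), or max_rounds is exhausted (deadlock: return None)."""
    for round_no in range(1, max_rounds + 1):
        avg = mean(responses)
        cv = 100.0 * stdev(responses) / avg
        if cv <= cv_target:
            return avg, round_no
        responses = resurvey(responses, avg)   # redistribute response sheets
    return None, max_rounds

# Hypothetical re-survey: each participant moves halfway toward the mean.
def toward_mean(responses, avg):
    return [r + 0.5 * (avg - r) for r in responses]

rating, rounds = delphi_consensus([10, 20, 30, 40], toward_mean)
```

In practice the re-survey step is the extended group discussion described above, not a mechanical adjustment.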
How are the derived calibration ratings used in a GIS model?

The consensus ratings are entered into the GIS to reclassify the derived maps (in MapCalc, the Renumber command sets the ratings), and the calibrated map layers are then combined to implement the model.
What are the benefits of using the Delphi Process?

The most obvious benefit is the development of the calibration ratings needed to implement a GIS model. Less obvious benefits surround the process itself. First, it engages a group of experts in structured discussion that ensures all interpretations are presented. In addition, it documents the group interactions leading to the calibration ratings. The result is a “consistent, objective and defendable” procedure for calibrating GIS models.