GeoWorld
Editorial Board Industry Outlook
Opening Panel Remarks at GeoTec 2009
Joseph K. Berry
W. M. Keck Visiting Scholar in Geosciences, University of Denver
Principal, Berry & Associates // Spatial Information Systems (BASIS)
1701 Lindenwood Drive, Fort Collins, Colorado, USA 80524
Moderator:

Panelists: Xavier Lopez, Director of Spatial Technologies, Oracle Corporation; Ron Lake, President, Galdos Systems Incorporated; Nigel Waters, Director for the Center of Excellence for Geographic Information Science, George Mason University
___________________________
The following is a synopsis of Dr. Berry’s notes/remarks to the questions on Industry Outlook:
KEY QUESTIONS
1. First we’ll start with a “broad-brush” question. What are the most radical changes that 1) we have seen in geotechnology’s evolution, and 2) we will see in geotechnology’s future?
(Part 1 – Evolution of geotechnology). It wasn’t until the late 1990s that I fully realized the impact of the “perfect geotechnology storm” brought on by the convergence of four critical enabling technologies: 1) the personal computer’s dramatic increase in computing power, 2) the maturation of GPS and RS (remote sensing) technologies, 3) a ubiquitous Internet and 4) the general availability of digital mapped data. If any one of these elements were missing, the current state of geotechnology would be radically different and most certainly not as robust or generally accepted. Much of our advancement, particularly of late, has come from external forces.
Keep in mind that geotechnology is in its fourth decade—the 1970s saw Computer Mapping automate the drafting process through the introduction of the digital map; the 80s saw Spatial Database Mining link digital maps to descriptive records; the 90s saw the maturation of Map Analysis and Modeling capabilities that moved mapped data to effective information by investigating spatial relationships; and finally, the current decade focuses on Multimedia Mapping, which emphasizes data delivery through Internet proliferation of data portals and advanced display mechanisms involving 3D visualization and virtual reality environments, such as Google and Virtual Earths.
In
the early years, GIS was “down the hall and to the right,” sequestered in a
relatively small room populated by specialists …users would rap on the door and
say “Joe sent me for some maps.” Today,
geotechnology is on everyone’s desk and in nearly everyone’s pocket. Contrary to most GIS perspectives, our contributions have been as much a reaction to enabling technologies as they have been proactive in the wild ride to mass adoption.
(Part
2 – Future of geotechnology). The future of our
status as a mega-technology alongside the giants of biotechnology and
nanotechnology will be in large part self-determined …that is, if we step out
of the closet and fully engage other disciplines and domain experts. The era of “maps as data” is rapidly
giving way to the “age of spatial information” where mapped data and
analytical tools effectively support decision-making. The direct relevance of geotechnology isn’t just a wall hanging; it’s an active part of the consideration of geographic
space …whether it’s a personal “what
should we do and where should we go?” decision on a vacation, or a
professional one for locating a pipeline, identifying wildlife management units
or establishing a marketing plan for a new city.
The
key element for developing applications beyond data delivery lies in domain
expertise as much as mapping know-how.
The geometric increase in awareness and use of geotechnology by the
masses will lead to entirely new and innovative applications that we haven’t
even dreamed of …nor can we as geotechnology specialists. The only way we could drop the ball is to
retreat further into our disciplinary cave.
On a technical front, I see a radical change in georeferencing from our 400-year reliance on Cartesian “squares” in 2-D and “cubes” in 3-D to hexagons (2-D) and dodecahedra (3-D) that will lead to entirely new analytic capabilities and modeling applications. To conceptualize the difference, imagine a regular square grid morphing into a grid of hexagons like a tray in a beehive. The sharp corners of the squares are knocked off, resulting in the same distance from the centroid to each of the sides defining the cell …a single consistent step instead of two different types of steps (diagonal and orthogonal) when moving to an adjacent location. Now consider a three-dimensional world with a 12-sided volume (a dodecahedron) replacing the cube …a single consistent step instead of a series of differing steps to all of the surrounding locations.
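To make the “single consistent step” notion concrete, here is a minimal sketch in plain Python (an idealized unit-spacing layout is assumed purely for illustration) that computes the center-to-center step lengths in each grid:

```python
# Contrast neighbor step lengths in a square grid vs. a hexagonal grid.
# Unit center-to-center spacing is an illustrative assumption.
import math

# Square grid: the 8 surrounding cell centers
square_neighbors = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0)]
square_steps = {round(math.hypot(dx, dy), 3) for dx, dy in square_neighbors}
print("square-grid step lengths:", sorted(square_steps))  # [1.0, 1.414]

# Hexagonal grid: the 6 surrounding centers sit at 60-degree intervals
hex_neighbors = [(math.cos(math.radians(a)), math.sin(math.radians(a)))
                 for a in range(0, 360, 60)]
hex_steps = {round(math.hypot(dx, dy), 3) for dx, dy in hex_neighbors}
print("hex-grid step lengths:", sorted(hex_steps))        # [1.0]
```

The square grid yields two distinct step lengths (orthogonal and diagonal), while the hexagonal grid yields exactly one, which is the property that simplifies movement and proximity calculations.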
This
seemingly slight shift in spatial theory, however, will revolutionize our
concept of geographic space. At a minimum, it finally will dispel the false
assumption that the earth is flat …at least in the map world that stacks
two-dimensional map layers like pancakes.
At a maximum, it will enable us to conceptualize, analyze and actualize
spatial conditions within the full dimensionality of the real world.
Now
all we need to do is to figure out a way to fully account for time, as well as
space, in our mapping for a temporally dynamic representation of
geography—but that’s another story to be written by tomorrow’s
geotechnologists.
2. The next question sounds basic: ‘where do we go from here?’ But the answer is anything but simple. As background, we have established the basic means of encoding, analyzing, visualizing and storing geographic information, and the computing power to perform these operations is widespread. We have some nascent standards and a large quantity of data (content) in terms of vector and image data. So do we need to think in terms of an all-the-time integration of business processes that deal with the real world, from land management to building design to environmental protection? But how do we get there? And how do we make it happen?
While
I am sure there are technological waypoints along the path we take from here,
the human element likely will be the most critical determinant of forward
progress. Chief among the human factors is the education component. It’s interesting to note
that our earliest tinkering with geotechnology had a huge tent with zealots
from all disciplines tossing something into the stone soup of an emerging
technology—foresters, engineers, geographers, epidemiologists, hydrologists,
geologists, to mention but a few. As the field matured, the big tent contracted considerably as “specialists” emerged and formal programs of study and certification surfaced.
There
are many positive aspects in this maturation, but there also are some drawbacks. In many universities, the GIS Center of
Excellence is housed in a single college or department, and thereby becomes
lodged in a disciplinary stovepipe. A
“one shoe fits all” approach to geotechnology education is not sufficient. It shouldn’t be the exclusive domain of any discipline due to the breadth of its emphasis, from those “of the computer,” such as Programmers, Developers and Systems Managers, to those more “of the application,” such as Data Providers, GIS Specialists and General Users:
Of the Computer   …← Continuum of Focus within Geotechnology →…   Of the Application

Programmer | Developer | Systems Manager | Data Provider | GIS Specialist | General User
Another
characteristic is the growing gap on campus between the “-ists” and the “-ologists.” The “-ists,” GIS Specialists for example, are seeking in-depth knowledge and view geotechnology as a “Stand Alone” discipline. On the other hand, the “-ologists,” such as Ecologists, are after practical skills and see it as an “Applied” discipline.
An
academic analogy that comes to mind is Statistics. While its inception is rooted in 15th
Century mathematics, it wasn’t until the early 20th
Century that the discipline broadened its scope and societal impact, much like contemporary geotechnology. Today
it is difficult to find a discipline on campus that does not develop at least a
basic literacy in statistics. This level
of intellectual diffusion was not accomplished by funneling most of the student
body through a series of one-size-fits-all courses in the Statistics
Department. Rather, it was accomplished through a dandelion-seeding approach where statistics is embedded into existing disciplinary classes and/or specially tailored courses, such as Introduction to Statistics for Foresters, for Engineers, etc.
This
doesn’t mean that deep-keeled geotechnology curricula are pushed aside. On the contrary, like a Statistics
Department, there is a need for in-depth courses that produce the theorists,
innovators and specialists who grow the technology’s capabilities and
databases. However, it does suggest a less didactic approach than insisting that all who touch GIS “start at the beginning and when you get to the end...stop,” as the King advises in Alice in Wonderland.
In
large part, it can be argued that the outreach to other disciplines is
our most critical waypoint in repositioning geotechnology for the 21st
Century.
3. Many industry analysts and
technology leaders suggest that “cloud computing” is likely to become the next
phase in Web and enterprise computing.
How do you envision geospatial technologies and services interacting with and being deployed in a cloud-computing environment?
My technical skills
are such that I can’t address what cloud computing architecture will look like
or what enabling technologies are involved.
However, I might be able to help some of you get a grasp of what cloud
technology is and what might be its near-term fate.
The usually crisp
Wikipedia definition for cloud computing is riddled with techy-speak, as are
most of the blogs. However, what I am
able to decipher is that there are three distinguishing characteristics—that the
technology
1) involves virtualized resources …meaning that
workloads are allocated among a multitude of interconnected computers acting as
a single device;
2) acts as a service
…meaning that the software and data components are shared over the Internet;
and,
3) is dynamically scalable …meaning that the system can
be readily enlarged.
In other words, cloud
computing is basically the movement of applications, services, and data from
local storage to a dispersed set of servers and datacenters …a particularly advantageous environment for data-heavy applications like GIS.
While there is some
credence in the argument that cloud computing is simply an extension of yesterday’s
buzzwords of object-oriented programming, interoperability, web-services and
mash-ups, it embodies considerable technical advancement (as my esteemed
colleagues can attest). For example, the
cloud offers a huge potential for capitalizing on the spatial analysis,
modeling and simulation functions of a GIS, as well as tossing gigabytes around
with ease …a real step-up from the earlier expressions.
However, there are
four important non-technical aspects to consider: 1) liability concerns, 2)
information ownership, sensitivity and privacy issues, 3) economic and payout considerations, and 4) legacy impediments.
Liability concerns arise from decoupling data and procedures from a single secure computing infrastructure: What happens if it is lost or compromised? What if the data is changed or basically wrong? Who is responsible? Who cares?
The closely related
issues of ownership, sensitivity and privacy raise questions like: Who
owns the data? Who is the data shared with and under what circumstances? How secure is the data? Who determines its accuracy, viability and
obsolescence? Who defines what data is
sensitive? What is personal
information? What is privacy? These lofty questions rival Socrates sitting
on the steps of the Acropolis and asking …what is beauty? …what is truth? But these social questions need to be
addressed if the cloud technology promise ever makes it down to earth.
In addition, a
practical reality needs an economic and payout component. The alchemy of spinning gold from cyberspace
straw continues to mystify me. It
appears that the very big boys like Virtual and Google Earth can do it through
eyeball counts, but what happens to smaller data, software and service
providers that make their livelihood from what could become ubiquitous? What is their incentive? How would a cloud computing marketplace be
structured? How will its transactions be
recorded and indemnified?
Governments, non-profits
and open source consortiums, on the other hand, see tremendous opportunities in
serving-up gigabytes of data and analysis functionality for free. Their perspective focuses on improved access
and capabilities, primarily financed through cost savings. But are they able to justify large transitional investments to retool in the current economic climate?
All these considerations, however, pale in light of legacy impediments, such as the inherent resistance to change and the inertia derived from vested systems and cultures. The old adage “don’t fix it if it ain’t broke” often delays, if not trumps, adoption of new technology. Turning the oil tanker of GIS might take a lot longer than technical considerations suggest—don’t expect GIS to “disappear” into the clouds just yet.
4. Has GIS technology become commoditized?
What are the advantages/disadvantages of commoditization? And how can we
continue to add value to our product?
Commoditization implies the transformation of goods and services into a commodity, thus becoming an undifferentiated product characterized solely by
its price, rather than its quality and features. The product is perceived as the same no
matter who produces it, such as petroleum, notebook paper, or wheat. Non-commodity products, such as televisions,
on the other hand, have many levels of quality. And, the better a TV is perceived to be, the
higher its value and the more it will cost.
So where is geotechnology along this continuum? Like the other two mega-technologies (bio and nano), it has a split personality with both commodity and
non-commodity characteristics. In our
beginning, research dominated and the mere drafting of a map by a plotter was
perceived as a near miracle in the 1970s.
Fast forward to today and digital maps are as commonplace as they are
ubiquitous—a transformation from “knock-your-socks-off” to commodity status.
But we shouldn’t confuse mass adoption of a map
product with commoditization of an entire technology. It is like the product life cycle in
pharmaceuticals from trials, to unique flagship drug, to generic forms and finally to commodity status. While the products might cycle to commodity, the industry doesn’t, as innovation keeps adding value and new product lines.
What is rapidly becoming a commodity in our field is
generic mapped data and Internet delivery.
However, contemporary value-added products and services are extremely differentiated,
such as a propensity map for product sales, a map of wildfire risk, and a real-time
helicopter routing map that avoids enemy detection. The transition reflects a paradigm shift from mapped data to spatial information—from a focus on automating traditional mapping roles and procedures to an emphasis on new ways of integrating spatial relationships into decision-making.
The bottom line is that commoditization of geotechnology is neither good nor bad, neither an advantage nor a disadvantage. It is just a natural progression of product life cycles and renewed advancements in value-added features and services through continued innovation.
If we fail to innovate, the entire industry will become commoditized and
GIS specialists will hawk their gigabytes of graphics in the geotechnology
commodity market next to the wheat exchange in Chicago.
EXTRA QUESTIONS (as needed)
1. Government can be a boon to
geotechnology, as seen here in Vancouver concerning the Winter Olympics and in
the United States with the so-called “stimulus package.” What
role should federal governments have in demanding geotechnology, and is that
role currently expanding or contracting?
Governments
traditionally provide goods and services that cannot be efficiently,
effectively or economically delivered by the private sector. They also can provide a kick-start for emerging
technologies, particularly those with an extensive reach where it is difficult for a gaggle of individual companies to gain a common foothold.
At
another level, governments are consumers of technology just like any
business. These and a myriad of other factors suggest that governments will have an increasing direct role in the consumption of geotechnology, as well as their traditional role in its development through policy, legislation, standards and certifications.
2. As a related question, specific
types of geotechnology can become more or less popular in large governments,
depending on the world situation. For example, military-type applications grew
dramatically in the last decade, and it’s expected that infrastructure spending
will increase now. What other types of geotechnology could receive boosts from upcoming federal
government programs?
Public safety and disaster planning and response will likely be the next stimulated sector. Geotechnology has well established its descriptive mapping capabilities; however, full engagement of its predictive and prescriptive capabilities will become generally accepted. For example, wildfire risk and impact
modeling first applies fire science to create maps of relative threat
levels throughout an area based on terrain, weather, fuels and historical ignition
occurrence. Then wildfire threat is
combined with existing maps of facilities, infrastructure, census data,
sensitive areas and other descriptors to generate probable loss in
economic, social and environmental terms.
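As a rough illustration of that two-step logic, the following toy map-algebra sketch weights hypothetical raster layers into a threat surface and then combines it with exposure layers; the layer names, weights and threshold are invented for illustration, not drawn from any actual fire-science model:

```python
# Toy map-algebra sketch of the wildfire risk/impact logic described above.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)                        # a 100x100-cell study area

# Step 1: relative threat from terrain, weather, fuels and ignition history
slope     = rng.random(shape)             # normalized 0..1 stand-in layers
dryness   = rng.random(shape)
fuel_load = rng.random(shape)
ignitions = rng.random(shape)
threat = 0.3*slope + 0.3*dryness + 0.25*fuel_load + 0.15*ignitions

# Step 2: combine threat with exposure layers to estimate probable loss
structure_value = rng.random(shape) * 1e6     # dollars per cell (stand-in)
population      = rng.integers(0, 50, shape)  # people per cell (stand-in)
economic_loss = threat * structure_value
social_impact = threat * population

print("cells above a 0.7 threat cutoff:", int((threat > 0.7).sum()))
print("probable economic loss ($M):", round(economic_loss.sum() / 1e6, 1))
```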
In a similar manner, crime data and related descriptors can be used to generate crime risk and impact maps. In a potentially more sinister way, maps can be linked with advanced image processing and recognition software to track individuals as they move throughout a network of remote video cameras …a well-established application led by the British.
Heck, in Denver this has already been taken to a new level—robotics. As a parking enforcement officer drives along a street, a camera recognizes your license plate, checks your registration
information and then “asks” the parking meter if time has expired; if so, a
ticket is issued and mailed to you. It
even checks its memory to see if you are in the same place more than two hours
…running out to “feed the meter” is no longer an option in Denver.
It’s a fine line between the “cyber-liberation” of the parking cop and the “geo-slavery” that has taken a lot of the “gaming” out of being a good citizen.
3. Are new laws aimed at protecting privacy, such as the U.S. Health
Insurance Portability and Accountability Act (HIPAA), for example,
inadvertently decreasing GIS-related research? If so, what can be done to
mitigate the effect? Or, more broadly, where is the current “balance point” of
privacy and results, and do you see it swinging in one direction or the other?
With
any significant technological advancement there comes a potential
downside. The threat of “geo-slavery” is real but greatly exaggerated, and for the most part offset by the “cyber-liberation” brought on by shredding the paper map and tossing out the magnetic compass. While the thought that “great-honking computers” will know everything that we do, what we buy and where we go affronts our private-space psyche and smacks of Orwellian control, the reality is much different.
In modern society, spying and control occur without geo-referencing all the time. Your shopping basket is scanned to automatically total the bill, keep inventory, and help determine in-store specials and out-of-store marketing.
If you use a preferred customer card “they” know who you are and, yes,
where you live. But that’s a far step
from a knock on your door and confrontation over purchasing a sleazy brown-bag
novel …it hasn’t happened yet, and Orwell’s book spoke of 1984.
There are numerous ways that individual privacy can be maintained while still retaining the informational content in data and images. Census data has struggled with this dilemma for years by employing “aggregation procedures” as necessary. I believe the “balancing point” teeters on two principles—individual anonymity versus group characterization.
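The census-style “aggregation procedures” mentioned above can be sketched in a few lines; the coordinates, cell size and suppression threshold below are invented for illustration:

```python
# Toy census-style aggregation: roll individual point records up into
# coarse grid cells and suppress thinly populated cells so no single
# person can be picked out. All values here are hypothetical.
from collections import Counter

records = [(39.72, -105.01), (39.73, -105.02), (39.73, -105.01),
           (39.90, -105.20), (40.01, -105.27)]   # individual (lat, lon)

CELL = 0.1          # aggregate to ~0.1-degree cells
MIN_COUNT = 3       # suppress cells reporting fewer than 3 records

def cell_of(lat, lon):
    # snap a coordinate to the southwest corner of its grid cell
    return (round(lat // CELL * CELL, 4), round(lon // CELL * CELL, 4))

counts = Counter(cell_of(lat, lon) for lat, lon in records)
published = {c: n for c, n in counts.items() if n >= MIN_COUNT}
print(published)    # only the well-populated cell is released
```

Individual anonymity is preserved (no single record is published) while the group characterization, counts per cell, survives.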
For example, if you are caught in a routine image of the landscape, your privacy isn’t compromised. Viewsheds are public spaces, as Barbra Streisand discovered several years ago in her suit over high-resolution coastal imagery of her Malibu estate. However, a private (or governmental) plane targeting her estate is an entirely different matter and threatens her privacy—an interpretation that might put a lot of shutter vultures out of business.
Judicial,
political, cultural and societal opinions will continue to fluctuate around
this balance point like a teeter-totter with the current position a bit more on
the individual privacy end. As GIS
specialists, our charge is to develop innovative technologies that are applied
within current policy constraints. Just
because a procedure is technically feasible doesn’t mean it is automatically viable.
4. Google Earth, Microsoft Virtual
Earth and others seem to be growing exponentially, in both the amount of
information available and popularity. Are
such platforms the definitive future of the industry, or do you expect new
technologies to take their place?
The current set of major actors and technological expressions is in the driver’s seat of geotechnology …and will be for some time. The baton transfer of “all things geographic” from flagship GIS companies to more generic information and access companies began a decade ago. The exponential growth of the technology and the breadth of its applications have sealed the transfer—like the boutique store (viz. GIS company) losing customers to the big-box stores (viz. GE and VE).
It might warm your heart to hear that the geotechnology baton is poised for at least two other forward passes. It seems that Twitter and Facebook are awakening to the idea that geography is a really cool way to show friends exactly what you are doing AND WHERE you are doing it. Location stamps on photos and video are becoming as common as time stamps …throw in a link to GE or VE and the transfer is complete.
Less
on the radar is the interest of big database companies in geotechnology (viz.
Oracle and Sybase). They have been
developing “spatial data-blades” for several years and have incorporated considerable
GIS capability into their software. What has been holding them back is the lack of an easy way to automatically stamp data with a universal geographic identifier. The complexity and multiplicity of current referencing systems and transformations keep them “pulling at the bit” at a slow trot.
Soon they will be galloping with geotechnology when a standard
“universal key” is in play.
Personally, I believe that key is already here, but most folks, even in the GIS community, aren’t aware of it. USGS, in conjunction with numerous other groups, has established a raster/grid-based referencing schema for the globe using Lat/Lon WGS84—1m, 10m, 30m, 90m and 1km patterns. This means that there is a consistent “square partition” for every location on the face of the earth. That suggests that any spatially dependent record can be easily plunked into one of these “bins” as a single compound number for column/row, and all other data and database systems will immediately know where it is. No more geodetic hoops to jump through for the geographically challenged among us (viz. “unwashed” IT types) …the gates have sprung open and the race is on.
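For the geographically inclined, here is a minimal sketch of that “universal key” idea: snapping a WGS84 lat/lon to a fixed global grid and packing column/row into one compound number. The roughly 1km cell size, grid origin and packing scheme are my own illustrative assumptions, not the actual USGS schema:

```python
# Sketch of a compound column/row key for a fixed global grid (illustrative
# assumptions throughout; not the actual USGS referencing schema).
CELL_DEG = 1000 / 111_320            # ~1 km expressed in degrees

def grid_key(lat, lon):
    # rows count up from the South Pole, columns east from the antimeridian
    row = int((lat + 90.0) / CELL_DEG)
    col = int((lon + 180.0) / CELL_DEG)
    return row * 100_000 + col       # one compound row/column integer

def key_to_corner(key):
    # recover the cell's southwest corner from the compound key
    row, col = divmod(key, 100_000)
    return (row * CELL_DEG - 90.0, col * CELL_DEG - 180.0)

key = grid_key(40.585, -105.084)     # a point in Fort Collins, Colorado
print(key, key_to_corner(key))       # same key for every record in that cell
```

Any record stamped with such a key would immediately co-register with every other record carrying the same bin number, which is exactly the kind of interoperability the database vendors are waiting for.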