Beyond Mapping III
Introduction – The GIS Evolution (Further Reading)

(Software Design)
GIS Software's Changing Roles
— discusses the evolution of GIS software and identifies
important trends (September 1998)
What Is Object-Oriented Technology
Anyway? — establishes the basic concepts in
object-oriented technology (September 1996)
Spatial Objects—Parse and Parcel of
GIS? — discusses database objects and their map expressions (October 1996)
Does Anyone Object? — discusses
some concerns of object-oriented GIS (November 1996)
(Virtual Reality and GIS)
Behind the Scenes of Virtual Reality
— discusses the basic considerations and concepts in 3D-object
rendering (June
2000)
How to Rapidly Construct a Virtual
Scene — describes the procedures in generating a
virtual scene from landscape inventory data (July 2000)
How to Represent Changes in a Virtual
Forest — discusses how simulations and
"fly-bys" are used to visualize landscape changes and characteristics
(August
2000)
(Multimedia Mapping)
Capture "Where and When" on
Video-based GIS — describes how
Video Mapping Brings Maps to Life
— describes how video maps are generated and discusses some
applications of video mapping (October 2000)
______________________________
GIS
Software's Changing Roles
(GeoWorld,
September 1998)
Although GIS technology has been around for decades, for a myriad of reasons this early effort focused on the tool itself more than on the practical applications it promised.
This point was struck home during a recent visit to Disneyland. The newest ride subjects you to a seemingly
endless harangue about the future of travel while you wait in line for over an
hour. The curious part is that the
departed Walt Disney himself is outlining the future through video clips from
the 1950s. The dream of futuristic
travel (application) hasn’t changed much and the 1990s practical reality
(tool), as embodied in the herky-jerky ride, is a long way from fulfilling the
vision.
What impedes the realization of a technological dream is rarely a lack of
vision, but the nuts and bolts needed in its construction. In the case of GIS, the vision arrived long before the nuts and bolts.
With the 1980s came the renaissance of modern computers and with it the
hardware and software environments needed by GIS. Working within a modern computing environment, GIS software matured rapidly.
From an application developer's perspective the floodgates had opened. From an end user's perspective, however, a
key element still was missing—the gigabytes of data demanded by practical
applications. Once again, the dream outpaced the practical reality of the day.
But another less obvious impediment hindered progress. As the comic strip character Pogo might say, “…we
have found the enemy and it’s us.”
By their very nature, the large GIS databases demanded years of encoding before practical applications could flourish.
The 1990s saw both the data logjam burst and GIS software move into the mainstream.
So where are we now? Has the role of GIS software changed? MapInfo's MapX and ESRI's MapObjects
are tangible examples of GIS functionality repackaged as software components.
Like using a Lego set, application developers can apply the "building blocks"
to construct specific solutions, such as a real estate application that
integrates a multiple listing geo-query with a pinch of spatial analysis, a dab
of spreadsheet simulation, a splash of chart plotting and a sprinkle of report
generation. In this instance, GIS becomes one ingredient of the application rather than a stand-alone system.
In its early stages, GIS was the province of computer specialists.
The distinction between computer and application specialist isn't so much their
roles as it is the characteristics of the combined product. From a user's perspective, the entire
character of a GIS application changes when the software is tailored to a specific problem.
As the future of GIS unfolds, its software will serve less as an end in itself and more as building blocks for larger solutions.
What Is Object-Oriented Technology Anyway?
(GeoWorld,
September 1996)
Object-oriented technology is revolutionizing GIS software.
Like most over-used and under-grasped terms, object-oriented (OO, pronounced
like a cow's moo, but without the "m") has a variety of near-religious interpretations. In reality, however, there are just three
basic forms: 1) Object-Oriented User Interfaces (OOUI), 2) Object-Oriented
Programming Systems (OOPS) and 3) Object-Oriented Data Base Management (OODBM). Yep, you got it… oo-eee,
oo-la-la, oops 'n ooD'BUM, ra'ma, ra'ma ding-dong, whoopee…
sounds more like lyrics from a '60s hit than serious computer technology.
With the advent of Windows 95, however, OOUIs have moved from
"state-of-the-art" to commonplace. Their
objects include all those things on the screen that you drag, drop, slide,
toggle, push and click. On the "desktop"
you have cute little sketches including file folders, program shortcuts, a
recycle bin and task bar. In a “dialog
box” the basic objects include the check box, command button, radio (or option)
button, slider bar and spinner button, in addition to the drop-down, edit and
list boxes. When activated, these
objects allow you to communicate with computers by point ‘n click,
instead of type ‘n retype. Keep
in mind that OOUI defines “a friendly user interface (FUI) as a graphical user
interface (GUI)”… yep you got it again, a phooey-gooey.
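To make the idea concrete, below is a minimal sketch using Python's standard tkinter toolkit (my illustration, not anything from the column): each screen control is a programming object that bundles its own state and behavior.

    import tkinter as tk

    # Illustrative OOUI sketch: each widget is an object coupling state and behavior.
    root = tk.Tk()
    root.title("OOUI demo")

    show_layer = tk.BooleanVar()          # state carried by the check-box object
    check = tk.Checkbutton(root, text="Show layer", variable=show_layer)

    def run_query():                      # behavior attached to the button object
        print("Layer displayed?", show_layer.get())

    button = tk.Button(root, text="Query", command=run_query)
    check.pack()
    button.pack()
    root.mainloop()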
In addition to icons, OOUIs use "glyphs." These are graphical objects
controlled by programs to perform specific functions. The best examples of glyphs are in screen
savers. The little dog, man or monster
roaming across your screen and "destroying" it (sending it to black) piece by
piece is actually a "screen object" expression of a computer program. In a GIS, glyphs can animate map displays in the same fashion.
Just about every GIS package now sports an OOUI; an object-oriented programming system (OOPS) takes the concept a step further by linking commands into a processing flowchart.
Whereas an OOUI dialog box is independent, an OOPS dialog box is one in
a linked set of commands depicted in the flowchart. Most modern GIS packages provide some form of graphical "model builder" along these lines.
The OOPS flowchart captures a succinct diagram of an application's logic, as
well as actual code. As the flowchart is constructed, structured computer code is generated behind the scenes.
This topic was discussed in detail several issues ago (see Author's
Note) within the contexts of a "humane GIS."
The full incorporation of a robust OOPS makes the transition from flowcharting to working programs nearly seamless: if you can flowchart, you can program.
__________________________________
Author's Note: See GeoWorld, November and December 1993.
Spatial Objects—Parse and Parcel of GIS?
(GeoWorld, October
1996)
The previous section suggested that there are three basic types of
“object-oriented-ness.” The distinctions between an OOUI (object-oriented
user interface) and an OOPS (object-oriented programming system)
were tackled. That leaves the third type
of “OO-ness,” object-oriented database management (OODBM,
pronounced ooD’BUM) for scrutiny this time. The OODBM concept provides database
procedures for establishing and relating spatial objects, more generally
referred to as “data objects.” Early DBM
systems used “flat files,” which were nothing more than long lists of
data. The records, or data items, were
fairly independent (usually just alpha-numerically sorted) and items in
separate files were totally independent.
With the advent of “relational databases,” procedures were developed
for robust internal referencing of data items, and more importantly, indexing
among separate data sets. OODBM, the next
evolutionary (or quite possibly, revolutionary) step, extends direct data
indexing by establishing procedural rules that relate data. The rules are used to develop a new database
structure that interconnects data entries and greatly facilitates data queries. They are a natural extension of our current
concerns for data standards and contemporary concepts of data dictionaries and
metadata (their discussion reserved for another article).
The OODBM rules fall into four somewhat semantically challenging categories: encapsulation,
polymorphism, structure, and inheritance. Before we tackle the details, keep in mind
that “object-oriented” is actually a set of ideas about data organization, not
software. It involves coupling of data
and the processing procedures that act on it into a single package, or data
object. In an OODBM system, this data/procedure
bundle is handled as a single unit; whereas the data/procedure link in a
traditional database system is handled by separate programs.
It can be argued (easily) that traditional external programs are
cumbersome, static and difficult to maintain— they incrementally satisfy the
current needs of an organization in a patchwork fashion. We can all relate to horror stories about
corporate databases that have as many versions as the organization has
departments (with 10 programmers struggling to keep each version afloat). The very real needs to minimize data
redundancy and establish data consistency are the driving forces behind
OODBM. Its application to GIS, however, deserves closer scrutiny.
Encapsulation is OODBM’s cornerstone concept. It refers to the bundling of the data
elements and their processing for specific functions. For example, a map feature has a set of data
elements (coordinates, unique identifier, and thematic attributes). Also it can have a set of basic procedures
(formally termed “methods”) it recognizes and responds to, such as “calculate
your dimensions.” Whenever the
data/procedure bundle is accessed, “its method is executed on its data
elements.” When the “calculate your
dimensions” method is coupled with a polygonal feature, perimeter and area are
returned; with a linear feature, length is returned; and, with a point feature,
nothing is returned.
If the database were extended to include 3-dimensional features, the
same data/procedure bundle would work, and surface area and volume would be
returned… an overhaul of the entire application code isn’t needed, just an
update to the data object procedure. If
the data object is used a hundred times in various database applications, one
OODBM fix fixes them all.
The related concept of polymorphism refers to the fact that the same
data/procedure bundle can have different results depending on the methods that
are executed. In the “calculate your
dimensions” example, different lines of code are activated depending on the
type of map feature. The “hidden code”
in the data/procedure bundle first checks the type of feature, then automatically
calls the appropriate routine to calculate the results… the user simply applies
the object to any feature and it sorts out the details.
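A minimal Python sketch (mine, with hypothetical class names) makes the encapsulation and polymorphism ideas concrete: the coordinates and the "calculate your dimensions" method travel together, and the same request produces different results for different feature types.

    import math

    class MapFeature:                          # hypothetical point-feature bundle
        def __init__(self, feature_id, coords):
            self.feature_id = feature_id       # unique identifier
            self.coords = coords               # list of (x, y) vertices

        def calculate_dimensions(self):        # the shared "method"
            return {}                          # a point feature returns nothing

    class LinearFeature(MapFeature):
        def calculate_dimensions(self):        # same request, different code runs
            length = sum(math.dist(a, b)
                         for a, b in zip(self.coords, self.coords[1:]))
            return {"length": length}

    class PolygonFeature(MapFeature):
        def calculate_dimensions(self):
            pts = self.coords + [self.coords[0]]          # close the ring
            perimeter = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
            area = abs(sum(a[0] * b[1] - b[0] * a[1]
                           for a, b in zip(pts, pts[1:]))) / 2
            return {"perimeter": perimeter, "area": area}

    # The caller applies the same bundle to any feature; the object sorts out the details.
    for f in (MapFeature(1, [(0, 0)]),
              LinearFeature(2, [(0, 0), (3, 4)]),
              PolygonFeature(3, [(0, 0), (4, 0), (4, 3)])):
        print(f.feature_id, f.calculate_dimensions())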
The third category involves class structure as a means of organizing
data objects. A “class” refers to a set
of objects that are identical in form, but have different specific expressions,
formally termed “instances.” For example, a linear feature
class might include such diverse instances as roads, streams, and power lines.
Each instance in a class can occur several times in the database (viz. a set of
roads) and a single database can contain several different instances of the
same class. All of the instances,
however, must share the common characteristics of the class and all must
execute the set of procedures associated with the class.
Subclasses include all of the data characteristics and procedural methods of
the parent class, augmented by some specifications of their own. For example, the linear feature class might
include the subclass “vehicle transportation lines” with the extended data
characteristic of “number of potholes.”
Whereas the general class procedure “calculate your dimensions” is valid
for all linear feature instances, the subclass procedure of “calculate your
potholes per mile” is invalid for streams and power lines.
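Extending the sketch, a subclass inherits the parent's data elements and procedures and then adds its own; "calculate your potholes per mile" exists only for the vehicle-transportation subclass (again, the names are hypothetical).

    class LinearFeature:                      # the parent class
        def __init__(self, name, length_miles):
            self.name = name
            self.length_miles = length_miles

        def calculate_dimensions(self):
            return {"length": self.length_miles}

    class VehicleTransportationLine(LinearFeature):      # subclass
        def __init__(self, name, length_miles, potholes):
            super().__init__(name, length_miles)         # inherited characteristics
            self.potholes = potholes                     # extended data element

        def potholes_per_mile(self):                     # subclass-only procedure
            return self.potholes / self.length_miles

    road = VehicleTransportationLine("Route 9", 12.0, 300)
    stream = LinearFeature("Clear Creek", 8.5)
    print(road.calculate_dimensions(), road.potholes_per_mile())
    print(stream.calculate_dimensions())     # a stream has no potholes_per_mile()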
The hierarchical ordering embedded in class structure leads to the
fourth rule of OODBM— subclass inheritance. It describes the condition that each subclass
adheres to all the characteristics and procedures of the classes above
it. Inheritance from parent classes
can occur along a single path, resulting in an inheritance pattern similar to a
tree branch. Or, it can occur along
multiple paths involving any number of parent classes, resulting in a complex
pattern similar to a neural network.
In general, complex structure/inheritance patterns embed more
information and commonality in data objects which translates into tremendous
gains in database performance (happy computer).
However, designing and implementing complex OODBM systems are extremely
difficult and technical tasks (unhappy user).
Next month we'll check out the reality, potential and pitfalls of
object-oriented databases as specifically applied to GIS.
Does Anyone Object?
(GeoWorld,
November 1996)
The last several sections dealt with concepts and procedures
surrounding the three forms of object-oriented (OO) technology—OOUI, OOPS
and OODBM. An object-oriented
user interface (OOUI) uses point-and-click icons in place of command
lines or menu selections to launch repetitive procedures. It is the basis of the modern graphical user
interface. An object-oriented
programming system (OOPS) uses flowcharting widgets to graphically
portray programming steps which, in turn, generate structured computer
code. It is revolutionizing computer
programming—if you can flowchart, you can program. Both OOUI and OOPS are natural extensions to existing GIS software.
Object-oriented database management (OODBM), on the other hand,
represents a radical change in traditional GIS database design.
The OODBM concept provides database procedures for establishing and relating
data objects. Last issue discussed
the four important elements of this new approach (encapsulation, polymorphism,
class structure, and inheritance). As a quick overview, consider the classical
OODBM example. When one thinks of a
house, a discrete entity comes to mind.
It’s an object placed in space, whose color, design and materials might
differ, but it’s there and you can touch it (spatial object). It has some walls and a roof that separates
it from the air and dirt surrounding it. Also, it shares some characteristics
with all other houses—rooms with common purposes. It likely has a kitchen, which in turn has a
sink, which in turn has a faucet.
These items form discrete objects which share both spatial and
functional relationships. For example,
if you encounter a faucet in space you are likely going to find it connected to
a sink. These relational objects can be
found not only in a kitchen, but a bathroom as well. In an OODBM, the data (e.g., faucet) and
relationships (e.g., rooms with sinks therefore faucets) are handled as a single
unit, or data/procedure bundle.
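A toy Python sketch (my construction, not an OODBM product) shows the house example as discrete objects carrying both containment and functional relationships, so a query can follow the rule "rooms with sinks therefore faucets".

    class Faucet:
        pass

    class Sink:
        def __init__(self):
            self.faucet = Faucet()            # a sink implies a faucet

    class Room:
        def __init__(self, name, has_sink=False):
            self.name = name
            self.sink = Sink() if has_sink else None

    class House:
        def __init__(self, rooms):
            self.rooms = rooms

    home = House([Room("kitchen", has_sink=True),
                  Room("bathroom", has_sink=True),
                  Room("bedroom")])

    # Follow the relationship: wherever a sink occurs, expect a faucet.
    for room in home.rooms:
        if room.sink:
            print(room.name, "has a faucet:", isinstance(room.sink.faucet, Faucet))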
Within a GIS, the same thinking applies to map features.
Such a system (if and when it is implemented) is well suited for organizing the
spatial objects comprising a GIS database.
For years, vector GIS packages have offered a related capability, the "vector composite," which pre-relates a set of map layers.
The main problem of both OODBM and its "vector composite" cousin comes from
their static and discrete nature. With
OODBM, the specification of "all-encompassing rules" is monumental. If only a few data items are used, this task
is doable. However, within the sphere of
a "corporate" GIS serving scores of data sets and departments, it is daunting. The vector composite, in contrast, simply pre-computes the relationships among a fixed set of layers.
However, each time a layer changes, a new composite must be
generated. This can be a real hassle if
you have a lot of layers and they change a lot.
New data base “patching” procedures (map feature-based) hold promise for
generating the updates for just the areas affected, rather than overlaying the
entire set of layers.
Even if the "mechanics" of applying OODBM to GIS are worked out, many analytic applications resist fixed data objects.
Most often, these applications involve interactive processing as users
run several "what if…" scenarios. From
this perspective, habitat and sales maps aren't composed of spatial objects as
much as they are iterative expressions of differing opinions and experience
(spatial spreadsheet). For OODBM to work
and garner data-accessing efficiencies there must be universal agreement on
the elements and paradigms used in establishing fixed data objects. In one sense, the logic of a GIS model may simply be too fluid for that sort of fixed agreement.
Behind the Scenes of Virtual Reality
(GeoWorld, June
2000)
Over the past three decades, cutting-edge GIS has evolved from computer mapping to spatial database management and, more recently, to map analysis and modeling.
The transition of GIS from abstract maps to realistic renderings is the next step in that evolution.
This evolution is most apparent in multimedia GIS.
While these visualizations are dramatic, none of the multimedia approaches is as engaging as a fully rendered virtual landscape.
Since the discovery of herbal dyes, the color palette has been a dominant part of
mapping. A traditional map of forest
types, for example, associates various colors with different tree species—red
for ponderosa pine, blue for Douglas fir, etc.
Cross-hatching or other cartographic techniques can be used to indicate
the relative density of trees within each forest polygon. A map's legend relates the abstract colors
and symbols to a set of labels identifying the inventoried conditions. Both the map and the text description are
designed to conjure up a vision of actual conditions and the resulting spatial
patterns.
The map has long served as an abstract summary while the landscape artist's
canvas has served as a more realistic rendering of a scene. With the advent of computer maps and virtual
reality techniques the two perspectives are merging. In short, color palettes are being replaced by
rendering palettes.
Like the artist's painting, complete objects are grouped into patterns rather
than a homogenous color applied to large areas.
Object types, size and density reflect actual conditions. A sense of depth is induced by plotting the
objects in perspective. In effect, a
virtual reality scene is painted with realistic objects rather than abstract colors.
Fundamental to the process is the ability to design realistic objects. An effective approach, termed geometric
modeling, utilizes an interface (figure 1) similar to a 3-D computer-aided
drafting system to construct individual scene elements. A series of sliders and buttons are used to
set the size, shape, orientation and color of each element comprising an
object. For example, a tree is built by
specifying a series of levels representing the trunk, branches, and leaves. Level one forms the trunk that is
interactively sized until the designer is satisfied with the
representation. Level two establishes
the pattern of the major branches.
Subsequent levels identify secondary branching and eventually the leaves
themselves.
Figure 1.
Designing tree objects.
The basic factors that define each level include 1) linear positioning,
2) angular positioning, 3) orientation, 4) sizing and 5) representation. Linear positioning determines how often and
where branches occur. In figure 1, the
major branching occurs part way up the trunk and is fairly evenly spaced.
The angular positioning sets how often branches occur around the trunk or
branch to which they are attached. The
branches at the third level in the figure form a fan instead of being equally
distributed around the branch.
Orientation refers to how the branches are tilting. Note that the lower branches droop down from
the trunk, while the top branches are more skyward looking. The third-order branches tend to show a similar
drooping effect in the lower branches.
Sizing defines the length and taper of a particular
branch. In the figure, the lower branches
are considerably smaller than the mid-level branches. Representation covers a lot of factors
identifying how a branch will appear when it is displayed, such as its
composition (a stick, leaf or textured crown), degree of randomness, and 24-bit color.
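Taken together, the five factors amount to a small data structure per branching level. The Python sketch below is illustrative only; the field names and sample values are my assumptions, not the Virtual Forest package's actual parameters.

    from dataclasses import dataclass

    @dataclass
    class BranchLevel:                 # hypothetical per-level parameter bundle
        start_fraction: float          # where along the parent branching begins
        spacing: float                 # how often branches occur (linear positioning)
        fan_angle: float               # degrees between branches (angular positioning)
        tilt: float                    # droop or skyward lean (orientation)
        length: float                  # base length of a branch (sizing)
        taper: float                   # tip-to-base narrowing (sizing)
        representation: str            # "stick", "leaf" or "textured crown"

    tree_design = [
        BranchLevel(0.40, 0.15, 72.0, -20.0, 2.0, 0.6, "stick"),   # major branches
        BranchLevel(0.20, 0.10, 45.0, -10.0, 0.8, 0.5, "stick"),   # secondary branches
        BranchLevel(0.10, 0.05, 30.0,  15.0, 0.2, 0.3, "leaf"),    # leaves
    ]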
Figure 2.
The inset on the left shows various branching patterns. The inset on the right depicts the sequencing
of four branching levels.
Figure 2 illustrates some branching patterns and levels used to
construct tree-objects. The tree
designer interface at first might seem like overkill—sort of a glorified
“painting by the numbers.” While it’s
useful for the artistically-challenged, it is critical for effective 3-D
rendering of virtual landscapes.
The mathematical expression of an object allows the computer to generate a
series of “digital photographs” of a representative tree under a variety of
look-angles and sun-lighting conditions.
The effect is similar to flying around the tree in a helicopter and
taking pictures from different perspectives as the sun moves across the
sky. The background of each bitmap is
made transparent and the set is added to the library of trees. The result is a bunch of snapshots that are
used to display a host of trees, bushes and shrubs under different viewing
conditions.
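Conceptually, building the library is just a parameter sweep over look-angles and sun positions. In this sketch, render() stands in for whatever 3D engine actually produces each transparent-background bitmap; the angle steps are illustrative.

    import itertools

    def render(tree, look_azimuth, look_elevation, sun_azimuth):
        """Placeholder for the 3D engine: returns a transparent-background bitmap."""
        ...

    def build_library(tree):
        library = {}
        for look_az, look_el, sun_az in itertools.product(
                range(0, 360, 45),        # helicopter positions around the tree
                (10, 30, 60),             # camera heights above the horizon
                range(0, 360, 90)):       # the sun moving across the sky
            library[(look_az, look_el, sun_az)] = render(tree, look_az, look_el, sun_az)
        return library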
The object-rendering process results in a "palette" of objects analogous to the
color palette used in conventional GIS mapping.
_______________________
Author's Note: the Tree designer module of the Virtual Forest
software package by Pacific Meridian Resources was used for the figures in this
column. See http://www.innovativegis.com/products/vforest/
for more examples and discussion.
How to Rapidly Construct a Virtual Scene
(GeoWorld, July
2000)
The previous section described how 3-dimensional objects, such as trees,
are built for use in generating realistic landscape renderings. The drafting process uses an interface that
enables a user to interactively adjust the trunk’s size and shape then add
branches and leaves with various angles and positioning. The graphic result is similar to an artist’s
rendering of an individual tree.
The digital representation, however, is radically different. Because it is a mathematically defined
object, the computer can generate a series of “digital photographs” of the tree
under a variety of look-angles and sun-lighting conditions. The effect is similar to flying around the
tree in a helicopter and taking pictures from different perspectives as the sun
moves across the sky.
The background of each of these snapshots is made transparent and the set is
added to a vast library of tree symbols.
The result is a set of pictures that are used to display a host of
trees, bushes and shrubs under different viewing conditions. A virtual reality scene of a landscape is
constructed by pasting thousands of these objects in accordance with forest
inventory data stored in a GIS.
Figure 1.
Basic steps in constructing a virtual reality scene.
There are six steps in constructing a fully rendered scene (see Figure
1). A digital terrain surface provides
the lay of the landscape. The GIS identifies the forest polygons draped over that terrain and the inventory data describing each one.
The link between the GIS and the tree library is the forest inventory itself: records identifying the tree types, stocking density and maturity within each polygon.
Finally, information on percent maturity establishes the baseline
height of the tree. In a detailed tree
library several different tree objects are generated to represent the continuum
from immature, mature and old growth forms.
Figure 2 shows the tree exam files for two polygons identified in the
adjacent graphic. The first column of
values identifies the tree symbol (library ID#). Polygon 1573 has 21 distinct tree types
including snags (dead trees). Polygon
1658 is much smaller and only contains 16 different types. The second column indicates the percent
maturity while the third defines the number of trees. These data are for an extremely detailed
U.S. Forest Service research area.
Figure 2.
Forest inventory data establishes tree types, stocking density and
maturity.
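A short Python sketch of reading such a record set (symbol ID, percent maturity, tree count per row); the whitespace-delimited layout is an assumption based on the description above, not the actual tree exam file format.

    from dataclasses import dataclass

    @dataclass
    class TreeRecord:
        symbol_id: int        # tree symbol in the library (column 1)
        pct_maturity: float   # percent maturity (column 2)
        count: int            # number of trees (column 3)

    def read_tree_exam(lines):
        return [TreeRecord(int(a), float(b), int(c))
                for a, b, c in (line.split() for line in lines if line.strip())]

    # Hypothetical records for one polygon.
    polygon_1658 = read_tree_exam(["12 85 140", "3 60 55", "7 100 12"])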
Once the appropriate tree symbol and number of trees are identified, the
computer can begin "planting" them. This
step involves determining a specific location within the polygon and sizing the
snapshot based on the tree's distance from the viewpoint. Most often trees are randomly placed; however,
clumping and compaction factors can be used to create clustered patterns if
appropriate.
Tree sizing is similar to pasting and resizing an image in a word-processing
document. The base of the tree symbol is
positioned at the specific location then enlarged or reduced depending on how
far the tree is from the viewing position.
Figure 3 shows a series of resized tree symbols "planted" along a
slope—big trees in front and progressively smaller ones in the distance.
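Below is a sketch of the planting step under simple assumptions: a rejection sampler picks a random spot inside the polygon, and the pasted snapshot is scaled inversely with its distance from the viewer. The reference_distance constant and the uniform placement are my simplifications (the column notes that clumping factors can replace pure randomness).

    import math, random

    def point_in_polygon(p, poly):                  # standard ray-casting test
        x, y = p
        inside = False
        for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
            if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
        return inside

    def random_point_in(polygon):                   # rejection sampling in the bounding box
        xs, ys = zip(*polygon)
        while True:
            p = (random.uniform(min(xs), max(xs)), random.uniform(min(ys), max(ys)))
            if point_in_polygon(p, polygon):
                return p

    def symbol_scale(tree_xy, viewer_xy, reference_distance=100.0):
        """Enlarge near trees, shrink distant ones (illustrative constant)."""
        return reference_distance / max(math.dist(tree_xy, viewer_xy), 1.0)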
The process of rendering a scene is surprisingly similar to that of a landscape
artist. The terrain is painted and
landscape features added. In the
artist's world it can take hours or days to paint a scene. In virtual reality the process is completed
in a minute or two as hundreds of trees are selected, positioned and resized
each second.
Since each tree is embedded on a transparent canvas, it obscures whatever
is behind it—textured terrain and/or other trees, depending on forest stand
and viewing conditions. Terrain
locations that are outside of the viewing window or hidden behind ridges are
simply ignored. The multitude of issues
and extended considerations surrounding virtual reality's expression of GIS data is taken up in the next section.
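The assembly order can be summarized as cull, sort, paint: ignore trees outside the view, then paste snapshots far to near so closer trees naturally cover more distant ones (a painter's algorithm). The in_view and paste callables below are placeholders for the real windowing and bitmap operations.

    import math

    def assemble(trees, viewer_xy, in_view, paste):
        """trees: dicts with 'xy' and 'snapshot'; in_view and paste supplied by caller."""
        visible = [t for t in trees if in_view(t["xy"])]          # culling
        visible.sort(key=lambda t: math.dist(t["xy"], viewer_xy),
                     reverse=True)                                # far-to-near order
        for t in visible:
            scale = 100.0 / max(math.dist(t["xy"], viewer_xy), 1.0)
            paste(t["snapshot"], t["xy"], scale)                  # near trees overpaint far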
Figure 3.
Tree symbols are “planted” then sized depending on their distance from
the viewpoint.
_______________________
Author's Note: the
Landscape Viewer module of the Virtual Forest software package by Pacific
Meridian Resources was used for the figures in this column. See http://www.innovativegis.com/products/vforest/
for more examples and discussion.
How to Represent Changes in a Virtual Forest
(GeoWorld, August
2000)
The previous section described the steps in rendering a virtual
landscape. The process begins with a 3D drafting
program used to construct mathematical representations of individual scene
elements, similar to a painter's sketches of the different tree types that occur
within an area. The tree library is
linked to the GIS forest inventory data used to select, position and size each scene element.
The result is a strikingly lifelike rendering of the landscape instead of a
traditional map. While maps use colors
and abstract symbols to represent forest conditions, the virtual forest uses
realistic scene elements to reproduce the composition and structure of the
forest inventory data. This lifelike 3D
characterization of spatial conditions extends the boundaries of mapping from
dry and often confusing drawings to more familiar graphical perspectives.
The baseline rendering for a data set can be modified to reflect
changes on the landscape. For example,
the top two insets in figure 1 depict natural thinning and succession after
a severe insect infestation. The winter
effects were introduced by rendering with a snow texture and an atmospheric
haze.
Figure 1.
Changes in the landscape can be visualized by modifying the forest
inventory data.
The lower pair of insets shows the "before and after" views of a
proposed harvest block. Note the linear
texture features in the clearcut that identify the
logging road. Alternative harvest plans
can be rendered and their relative visual impact assessed. In addition, a temporal sequence can be
generated that tracks the "green-up" through forest growth models as a
replanted parcel grows. In a sense, the
baseline rendering becomes a living document of landscape change.
Admittedly, real-time "fly-bys" of a fully rendered landscape are beyond most desktop computers. A "fly-by" movie, however, can be generated frame by frame and then played back in rapid succession.
Figure 2. A "fly-by" movie is constructed by generating a sequence of renderings
then viewing them in rapid succession.
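In outline, a fly-by is camera interpolation plus repeated scene assembly. In this sketch render_scene() stands in for the full rendering pass described in the previous section.

    def flyby(path, frames_per_leg, render_scene):
        """path: list of (x, y, z) camera positions; returns the movie's frames."""
        frames = []
        for (x0, y0, z0), (x1, y1, z1) in zip(path, path[1:]):
            for i in range(frames_per_leg):            # interpolate along each leg
                t = i / frames_per_leg
                camera = (x0 + t * (x1 - x0),
                          y0 + t * (y1 - y0),
                          z0 + t * (z1 - z0))
                frames.append(render_scene(camera))
        return frames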
While trees off in the distance form a modeled texture, placement
differences of a couple of feet for trees in the foreground can significantly
alter a scene. For key viewpoints, the positions of individual foreground trees can be field-checked and entered directly.
Realistic trees, proper placement, appropriate under-story textures and
shaded relief combine to produce strikingly real snapshots of a landscape. With robust forest inventory data, the
rendering closely matches reality.
However, equally striking results can be generated from limited
data. For example, the "green" portions
on topographic maps indicate forested areas, but offer no information about
species mixture, age/maturity or stocking.
Within a GIS, however, even such broad conditions can be rendered as a plausible forest scene.
Figure 3. Strikingly real snapshots of forest data can be generated from either limited
or robust GIS databases.
That brings up an important point as map display evolves to virtual
reality—how accurate is the portrayal?
Our cartographic legacy has wrestled with spatial and thematic accuracy,
but "rendering fidelity" is an entirely new concept (figure 3). Since you can't tell by looking, standards
must be developed and integrated with the metadata accompanying a
rendering. Interactive links between the
underlying data and the snapshot are needed.
Without these safeguards, it's "viewer beware," and virtual rendering opens a whole new
avenue for lying with maps.
While Michael Crichton's immersion into a virtual representation of a database
(the novel Disclosure) might be decades off, virtual renderings of GIS data are already at hand.
_______________________
Author's Note: the Virtual Forest software package by Pacific
Meridian Resources was used for the figures in this
column. Go to http://www.innovativegis.com/products/vforest/ and
select "Flybys" to access the simulated helicopter flight described, as well as
numerous other examples of 3D rendering.
Capture "Where and When" on Video-based
GIS
(GeoWorld,
September 2000)
The past three columns described procedures for translating GIS data into realistic 3D renderings.
An alternative is to populate a GIS with multimedia, the actual sights and sounds recorded in the field.
Video mapping is an exciting part of the revolution in visualization of mapped
data. It records GPS positions directly on the videotape as imagery and sound are captured.
With video mapping, the construction of a multimedia GIS involves three steps: recording, indexing and review.
During the Recording Step, video mapping encodes GPS position and time fixes as acoustic signals.
In turn, the acoustic signals are sent to one of the audio channels
through the microphone input connector on the video camera. The result is recording the "where and when" of each second of video directly on the tape.
The direct recording of "where and when" on the tape greatly facilitates field
data collection—as long as there is GPS signal reception.
Figure 1.
Video Mapping in the Field. As video is recorded, the precise location, time,
and date are recorded every second to one of the videotape’s audio tracks. The other track records pertinent information
as you speak.
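The column doesn't spell out the encoding itself, but GPS receivers of the era reported standard NMEA sentences. The sketch below decodes a $GPGGA fix into the "where and when" that ends up on the audio track; the VMS unit's actual over-audio encoding is not described here.

    def parse_gga(sentence):
        """Decode a NMEA $GPGGA sentence into time and decimal-degree position."""
        f = sentence.split(",")

        def to_degrees(value, hemisphere, deg_width):
            degrees = float(value[:deg_width])
            minutes = float(value[deg_width:])
            d = degrees + minutes / 60.0
            return -d if hemisphere in ("S", "W") else d

        return {"time_utc": f[1],                    # hhmmss.ss
                "lat": to_degrees(f[2], f[3], 2),    # ddmm.mmmm
                "lon": to_degrees(f[4], f[5], 3)}    # dddmm.mmmm

    fix = parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
    print(fix)    # {'time_utc': '123519', 'lat': 48.1173, 'lon': 11.5166...}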
Most contemporary video cameras have a switch between photo and movie
mode. In movie mode, streaming video is
recorded at 30 frames per second. In
photo mode, the camera acts like a still camera and "freezes" the
frame for several seconds as it records the image to videotape. In this mode, a one-hour videotape can record
over 500 digital pictures. In both photo
and movie modes the one-second GPS fixes are continuously recorded on the audio track.
The Indexing Step involves connecting the video mapping unit to a computer and
playing the video (see figure 2). In
this configuration, the audio cord is switched to the headphone output
connector and a second cable is connected to the LANC connector on the camera. The connector
provides information on tape position (footage on older cameras and time code
on newer ones) used in indexing and playback control of the tape, similar to
the controls on a standard VCR.
As the videotape is played, the audio-encoded X,Y and
time code information is sent to the video mapping unit, where it is converted
to digital data and sent to the serial port on the computer. If a headset was used in the field, the voice
recording on the second audio channel is transferred as well.
Figure 2.
Indexing the Videotape. The video, audio notes and GPS data are transferred to the computer to build an indexed database.
For indexing there are five types of information available—streaming
video (movie mode), still images (photo mode), voice audio (headset), tape time
code (tape position), and GPS fixes (audio track).
Video mapping software records the GPS fixes and their corresponding tape positions in an indexed database that links map position to tape position.
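Below is a sketch of the indexed database's core behavior (my data structure, not Red Hen's actual design): parallel lists pair each one-second fix with its tape position, so the software can cue the tape from a map click, or place a tape position on the map.

    import bisect

    class VideoIndex:
        def __init__(self):
            self.timecodes, self.fixes = [], []      # parallel lists, sorted by timecode

        def add(self, timecode_s, lat, lon):         # called once per one-second fix
            self.timecodes.append(timecode_s)
            self.fixes.append((lat, lon))

        def fix_at(self, timecode_s):
            """Nearest recorded position for a given tape position."""
            i = min(bisect.bisect_left(self.timecodes, timecode_s),
                    len(self.timecodes) - 1)
            return self.fixes[i]

        def timecode_near(self, lat, lon):
            """Tape position of the fix closest to a clicked map location."""
            return min(zip(self.timecodes, self.fixes),
                       key=lambda tf: (tf[1][0] - lat) ** 2 + (tf[1][1] - lon) ** 2)[0]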
The Review Step uses the indexed database to access audio and video information
on the tape. The hardware configuration
is the same as for indexing (audio, LANC and serial cables). Clicking on any
indexed location retrieves the imagery and sound recorded there.
Map features can start applications, open files and display images. The software works with video capture cards
to create still images and video clips you can link to map features, giving
maximum flexibility in choosing a data review method. In many applications the completed multimedia
map is exported as an HTML file for viewing with any browser or over the
Internet. The map features can contain
any or all five of the basic information types:
Text — interpreted from audio
Data — interpreted from audio as .DAT, .XLS or .DBF file
Audio — captured as .WAV file (about 100KB per 5 seconds)
Image — captured as .JPG file (about 50KB per image)
Video — captured as a video-clip file
The next section explores the procedures for constructing finished maps
and describes several applications of video mapping. In the interim, you might check out the links
to some online examples (see author's notes).
_______________________
Author's Note: the information contained in this column is
intended to describe the conceptual approach, considerations and practical
applications of video mapping. Several
online demonstrations of this emerging technology in a variety of applications
are available at http://www.redhensystems.com. For information on the VMS
200™ Video Mapping System contact Red Hen Systems, Inc., 2310 East
Prospect Road, Suite A, Fort Collins, CO, USA 80525,
Phone: (800) 237-4182, Email: info@redhensystems.com,
Web site: http://www.redhensystems.com/.
Video Mapping Brings Maps to Life
(GeoWorld, October
2000)
As detailed in last month's column, video mapping enables anyone with a
computer and video camera to easily create their own interactive video
maps. The integration of computers,
video cameras, and GPS opens the door to a wide range of applications.
For example, corridor mapping of oil and gas pipelines, transmission towers,
rights-of-way and the like provides images of actual conditions not normally
part of traditional maps. In law
enforcement, video mapping can be used from reconnaissance to traffic safety to
forensics. Agriculture applications
include crop scouting, weed/pest management, verification of yield maps, and
“as-applied” mapping. Geo-business uses
range from conveying neighborhood character, to insurance reporting to web page
development.
By coupling audio/visual information to other GIS data, these applications extend the map far beyond traditional lines and symbols.
Data is collected without a computer in the field or cumbersome additional
equipment. Figure 1 shows the VMS™
unit by Red Hen Systems (see author's note) that weighs less than a pound and
is connected to the video camera via a small microphone cable. The unit's GPS receiver supplies the position fixes that are encoded on the tape as the video is shot.
Figure 1.
Video Mapping Hardware. The VMS 200 unit enables recording and
processing of GPS data on standard videotape.
The office configuration consists of a video camera, VMS unit, notebook
or desktop computer, and mapping software.
The software generates a map automatically from the data recorded on the
videotape. Once a map is created, it can
be personalized by placing special feature points that relate to specific
locations. These points are
automatically or manually linked to still images, video clips, sound files,
documents, data sets, or other actions that are recalled at the touch of a
button. A voice recognition package is
under development that will create free-form text and data-form entry. The mapping software also is compatible with
emerging GIS data formats.
While a map is being created, or at a later time, a user can mark special
locations with a mouse-click to "capture" still images, streaming
video or audio files. The FireWire
port on many of the newer computers makes capturing multimedia files a
snap. Once captured, a simple click on
the map feature accesses the images, associated files, or video playback
beginning at that location.
Sharing or incorporating information is easy because the video maps are compatible
with most popular GIS packages.
Differential post-processing is another important software addition. The post-processing software (EZdiff™) takes base station correction data
from the Internet and applies a correction to the video-mapped GPS positions, significantly improving their accuracy.
Figure 2 shows an example of the video mapping software. The dark blue line on the map identifies the
route of an ultralight (a hang-glider with an
engine). Actually the line is composed
of a series of dots—one for each second the video camera was recording. Clicking anywhere on the line cues the
camera, or a captured video file, to begin playback at that point.
Figure 2.
Video Mapping Software. Specialized software builds a linked database
and provides numerous features for accessing the data, customizing map layout
and exporting to a variety of formats.
The light blue and red dots in the figure are feature locations where
still images, audio tracks and video clips were captured to the hard disk. The larger inset is a view of the lake and
city from the summit of a hiking trail.
The adjacent red dots are a series of similar images taken along the
trail. When a video camera is set in
photo mode, a one-hour videotape contains nearly 600 exposures—no film,
processing or printing required. In
addition, the automatic assignment of GPS coordinates to each image eliminates manual georeferencing.
The top captured image on the right side of the figure shows a photo
taken from an ultralight inventory of bridges along a
major highway. The middle image is a
field photo of cabbage looper damage in a farmer's
field. The bottom image is of a dummy in
a training course for police officers.
The web pages for these and other applications are online for better
understanding of video mapping capabilities (see author’s notes).
For centuries, maps have been abstractions of reality that use inked lines,
symbols and shadings to depict the location of physical features and landscape
conditions. Multimedia GIS brings those abstractions to life.
_______________________
Author's Note: the information contained in this column is
intended to describe the conceptual approach, considerations and practical
applications of video mapping. Several
online demonstrations of this emerging technology in a variety of applications
are available at http://www.redhensystems.com. For information on the VMS
200™ Video Mapping System contact Red Hen Systems, Inc., 2310 East
Prospect Road, Suite A, Fort Collins, CO, USA 80525,
Phone: (800) 237-4182, Email: info@redhensystems.com,
Web site: http://www.redhensystems.com/.