Price of non-interoperability

Mark Reichardt
President
Open Geospatial Consortium (OGC)

Abstract
Non-interoperability impedes the sharing of data and the sharing of computing resources, causing organizations to spend much more than necessary on data, software, and hardware. Because organizations today operate under tight economic constraints, non-interoperability is an issue that clearly needs to be resolved quickly.

Our world is going through a communications revolution on top of a computing revolution, and the many technology issues this raises frequently cause confusion in corporate technology decision-making. The technology has been immature as well as overwhelming in volume, hype, and rate of new product appearance. Thus, in hindsight, we often see that resources have been applied less effectively than they might have been.

This sense of confusion and disorder has been amplified by the latest phase in the communications revolution in which almost all computers have been attached to a vast network. The Net is potentially a wonderful thing, but besides unleashing evils like viruses and spam, it has shown that our applications often don’t work very well together. That is, they are often non-interoperable.

Organizations seek to avoid unnecessary risks. Non-interoperability increases technology risks, which are a function of 1) the probability that a technology will not deliver its expected benefit and 2) the consequence to the system (and its users) if that benefit is not delivered. Risk assessment must take into account evolving requirements and support costs. Some technology risks derive from being locked into one vendor, others from choosing a standard that the market later abandons. The most dire risks associated with non-interoperability are real-world risks.
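The two-factor view of technology risk above can be sketched as a simple expected-loss calculation. This is only an illustration of the idea; the function name and the dollar figures are invented, not part of any formal OGC risk model:

```python
# Hypothetical sketch: technology risk as expected loss,
# i.e. probability of failure multiplied by its consequence (cost).
def technology_risk(p_failure: float, consequence_cost: float) -> float:
    """Expected loss from a technology not delivering its benefit."""
    return p_failure * consequence_cost

# Comparing a vendor lock-in option against an open-standard option
# on the same potential consequence (all figures invented).
lockin_risk = technology_risk(0.30, 1_000_000)    # higher chance of dead-end
standard_risk = technology_risk(0.10, 1_000_000)  # lower chance of abandonment

assert lockin_risk > standard_risk
```

The point of the comparison is that even when the consequence is identical, reducing the probability of non-delivery (for example, through open standards) lowers the expected loss proportionally.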

Today, lives and property depend on digital information flowing smoothly from one information system to another. No single organization produces all the data (so the data are inconsistent), and no single vendor provides all the systems (so the systems use different architectures, usually based on different proprietary interfaces). Thus, there is the potential for real-world disorder.

Sources of Geoprocessing Non-Interoperability
Few kinds of information are more complex than information about the location and shape of, and the relationships among, geographic features and phenomena. One reason is that there are many fundamentally different kinds of geoprocessing systems, that is, systems for creating, storing, retrieving, processing, and displaying geospatial data.

These include vector and raster geographic information systems (GIS) and systems for Earth imaging (imaging devices on satellites and airplanes), computer-aided design (CAD) (for roads, sewers, bridges, etc.), navigation, surveying, cartography, location based services (delivered, for example, via cell phones that can give directions and report about what’s nearby), facilities management, etc. Numerous vendors work within each of these technology domains, and they did not, until they joined OGC, consult with their competitors to form agreements on how the data should be structured and how the systems might communicate.

This lack of communication coupled with the many different ways of measuring and mathematically representing the Earth produced a complex and non-interoperable geoprocessing environment. Added to that confusion are the user-side semantic issues: Without coordination, no two highway departments, for example, will use the same attribute schemas, measurement types, and data types in describing a road. Their “metadata” (data describing their data sets) will also use different schemas, making automated data discovery and data sharing difficult.
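The highway-department example above can be made concrete. The sketch below invents two small road records with mismatched attribute schemas (all field names, codes, and values are hypothetical) to show why automated data sharing fails without a manually built mapping between schemas:

```python
# Hypothetical sketch: two highway departments describe the same road
# using different attribute names, units, and data types.
dept_a_road = {"RD_NAME": "Route 9", "LEN_MI": 12.4, "SURF": "asphalt"}
dept_b_road = {"name": "Route 9", "length_km": 19.96, "surface_type": 1}

# Naive automated matching finds no common attributes at all.
shared_keys = set(dept_a_road) & set(dept_b_road)
assert shared_keys == set()  # nothing lines up without coordination

# Sharing requires a hand-built "crosswalk" between the schemas --
# and even then, units (miles vs. kilometers) and value encodings
# ("asphalt" vs. the code 1) still have to be reconciled.
crosswalk = {"RD_NAME": "name", "LEN_MI": "length_km", "SURF": "surface_type"}
```

Multiply this small exercise across every attribute, every data type, and every pair of organizations, and the cost of non-interoperability becomes clear: each connection demands its own custom translation work.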