“Where” is the new “who.” The “where” dimension is one of the most natural, powerful and intuitive ways to explore the rapidly growing world of data and services. Consumers and enterprises are overwhelmed by the amount of raw data available and are yearning for valuable insights into their work and personal lives. The ability to pivot on real-world dimensions like people, places, calendars and things can provide better answers for individuals’ everyday tasks as well as deep understanding for businesses, municipalities and governments. The Internet is rapidly evolving into a real-time, read/write medium of data and services. Device proliferation is ensuring that this real-time, read/write capability is part of our lives through wearable computing devices and sensors, smartphones, tablets and a host of other Internet-enabled devices. Technologies like augmented reality and real-time sensors will connect the digital world with the real world. The “Internet of things” will breathe digital life into the world of physical things. Connectivity, searchability, relevance and usefulness all depend on a contextual understanding of location and its relevant intersections.
The last decade changed the computing landscape with the “who” dimension through personalisation and social connection. This created powerful context, relevance and relationships for people at home and at work, with companies like Amazon, Facebook, LinkedIn and Twitter leading the way. The current decade is poised to do the same for the “where” dimension across devices, data and services. Just as the “who” dimension filtered our digital world and made it more relevant, the “where” dimension will do the same as our digital connectivity reaches out to touch the entire physical world and reasons over volumes of user-generated data and real-time sensor information. This creates opportunity, but also unparalleled potential noise, raising the need for a powerful reasoning principle like the “where” dimension.
Traditionally, consumers performed simple mapping tasks while GIS professionals provided deeper understanding to corporations by wielding GIS analytical tools against private business data. Vast insight remains locked in tags, posts, tweets, reviews, micro-blogs and websites. Today, some of the larger companies in the Internet ecosystem have begun to move from traditional Web indexing to higher-level information organisation. They are using algorithmic extraction and big data graphs to create and relate entities on the Web, organising them through a semantic taxonomy and enabling natural access to this knowledge via conversational understanding and other natural user interfaces.
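As a minimal sketch of the kind of algorithmic extraction and entity graph described above (all names, the gazetteer and the attribute scheme here are illustrative inventions, not any company's actual pipeline), place mentions in user-generated text can be matched against a curated gazetteer and then related to one another through shared attributes:

```python
# Illustrative sketch: gazetteer-based place extraction plus a tiny
# semantic graph linking extracted entities. Real systems use learned
# extractors and web-scale knowledge graphs; this only shows the idea.
# Gazetteer keys (and cross-references) are normalised to lowercase.
GAZETTEER = {
    "eiffel tower": {"type": "Landmark", "city": "paris"},
    "paris": {"type": "City", "country": "France"},
}

def extract_entities(text):
    """Return gazetteer entries whose names appear in the text."""
    lowered = text.lower()
    return [{"name": name, **attrs}
            for name, attrs in GAZETTEER.items() if name in lowered]

def link_entities(entities):
    """Relate entities via shared attributes, e.g. landmark -> city."""
    by_name = {e["name"]: e for e in entities}
    edges = []
    for e in entities:
        city = e.get("city")
        if city in by_name:
            edges.append((e["name"], "located_in", city))
    return edges

tweet = "Amazing sunset photos from the Eiffel Tower in Paris tonight!"
entities = extract_entities(tweet)
graph = link_entities(entities)
```

Running this over the sample tweet yields two entities and a `located_in` edge between them; the point is that once mentions become typed entities, relations can be computed rather than merely displayed.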
This allows people to find answers to many business questions, in addition to addressing consumer needs. A great deal of value will be created at the intersections of individuals’ personal data, private business data and the read/write Web. GIS derivative analysis and statistical data mining have long been encumbered by the inability to reach tail data, the absence of collaborative filtering at scale and the lack of ample “oxygen” for statistical reasoning, all of which the Web could supply if only there were a stronger relational notion of entities and a semantic model. On the reasoning side, traditional GIS relies on visual layering of data and mash-ups, leaving the burden of analysis and understanding on the user. This, too, will change, owing to the increasing power of computational analysis and reasoning engines and the growing volume of semantically indexed data.
Extracting the digital identity of entities and all of their digital relations, attributions, properties and characteristics requires massive-scale computing resources and human curation. These entities need to be lashed to their physical existence or trajectory in a computable “living” 3D map canvas. The future “living” 3D map will encompass contributions from user-generated data, real-time broadcasting sensors and massively scalable machine vision algorithms that digitise and make sense of the physical world through imagery. Imagery will range from high-quality professional satellite, aerial and streetside capture to consumer-grade camera phone pictures and video for much of the outdoors and interior spaces. GPS traces, check-ins, metadata, map edits, commercial data feeds and municipal public data sources will be integral to the comprehensiveness and freshness of the map. The future map will evolve at the world’s course and speed. The map will have many views — your personal view decorated with your pertinent information, your enterprise view enhanced with your business data and public views based on your authorisations and task purpose. Privacy and firewall controls will put the user, enterprise and municipality in control of private data. Task and situational tolerances will determine the appropriate level of verification of crowdsourced data.
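One way to picture task-dependent verification of crowdsourced data is to let every attribute of a map entity carry its provenance and a confidence score, with each task supplying its own tolerance. The class names, sources and thresholds below are hypothetical, chosen only to make the idea concrete:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a "living" map entity whose observations carry
# provenance and confidence, so a task's tolerance decides whether
# crowdsourced data counts as verified enough to use.
@dataclass
class Observation:
    value: str
    source: str        # e.g. "user_checkin", "user_tweet", "municipal_feed"
    confidence: float  # 0.0 (unverified) to 1.0 (fully verified)

@dataclass
class MapEntity:
    name: str
    observations: list = field(default_factory=list)

    def best_value(self, min_confidence):
        """Return the highest-confidence observation meeting the task's
        tolerance, or None if no source is verified enough."""
        eligible = [o for o in self.observations
                    if o.confidence >= min_confidence]
        return max(eligible, key=lambda o: o.confidence) if eligible else None

cafe = MapEntity("Corner Cafe")
cafe.observations.append(Observation("open", "user_checkin", 0.6))
cafe.observations.append(Observation("open late", "user_tweet", 0.4))

casual = cafe.best_value(min_confidence=0.5)  # browsing tolerates crowd data
strict = cafe.best_value(min_confidence=0.8)  # enterprise task demands more
```

Under the relaxed tolerance the check-in is accepted; under the strict one, no crowdsourced observation qualifies and the attribute stays unverified, which is exactly the situational behaviour the paragraph above describes.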
One of the most essential aspects of this future is developer innovation at all levels. Today, most developers are operating at the presentation level, and there are not enough handles, levers, APIs and interfaces to contribute and consume services at the database, algorithmic or foundational building-block levels. The future “living” 3D map canvas must be extensible to developer innovations and services at every tier by every developer, not just GIS specialists.
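The multi-tier extensibility argued for above can be sketched as a platform with registration points at each tier rather than only at the presentation layer. Every interface below is an invented illustration, not an existing product API:

```python
# Hypothetical sketch of a map platform open at every tier: developers
# can plug in data providers (database tier), analysis functions
# (algorithmic tier) and renderers (presentation tier).
class MapPlatform:
    def __init__(self):
        self.data_providers = []  # database tier
        self.algorithms = {}      # algorithmic tier
        self.renderers = {}       # presentation tier

    def register_provider(self, provider):
        self.data_providers.append(provider)

    def register_algorithm(self, name, fn):
        self.algorithms[name] = fn

    def register_renderer(self, name, fn):
        self.renderers[name] = fn

    def run(self, algorithm, renderer):
        records = [r for p in self.data_providers for r in p()]
        result = self.algorithms[algorithm](records)
        return self.renderers[renderer](result)

platform = MapPlatform()
platform.register_provider(lambda: [{"place": "cafe", "visits": 12},
                                    {"place": "park", "visits": 30}])
platform.register_algorithm("busiest",
                            lambda rs: max(rs, key=lambda r: r["visits"]))
platform.register_renderer("label",
                           lambda r: f"Busiest: {r['place']}")

output = platform.run("busiest", "label")
```

Here a developer contributes at all three tiers in a few lines; the contrast with a presentation-only API, where the data and the analysis are sealed, is the point of the paragraph above.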
This real-world trellis and its underlying data ontology provide the visual and data framework for augmenting your real-world view, framing your spatio-temporal exploration, powering your intelligent agents and analysing the world you live, work and play in. Democratisation of entity and attribution creation by users, algorithmic advancement towards digitisation and capture of the physical world, connectivity of the temporal state, a conversational understanding of user intent, a rich semantic organisation of the data beyond keyword matching, natural user interfaces and new augmented device experiences powered by the “where” dimension promise to change the future of your digital and physical landscape.