"We are seeing a growing demand for accurate location data. This is driven by a number of factors, but in particular the consequences of poor-quality data are becoming more severe as bigger decisions are made based on data. However, we are also seeing more ROI from companies using location data, and this will continue, especially as the quality of location data continues to improve," says Mike Davie, CEO and Founder of Quadrant, in an exclusive interview with Geospatial World.
What could be done to make the data supply chain more transparent and trustworthy?
The Data Economy, simply put, is the production, analysis, selling and use of data, and the Data Supply Chain is essentially the link between the end data consumer and the original source. However, very little is known about it as much of the conversation around data has centered on issues of privacy, specifically when it comes to social media platforms such as Facebook. Data producers (such as ride-sharing apps which emit data on the movements of passengers and drivers) create data, which is then stored and often purchased by a third party. The third parties who purchase this data then use it for their own, separate business purposes.
More and more organizations, from governments to businesses, are using data to make decisions and are becoming much more data-driven. However, the method by which these transactions are made is chaotic and opaque. For example, if a company – such as a large FMCG firm that operates in multiple markets – wanted to know how many consumers bought its products around the world, it would have to go through a series of middlemen to gather this data. Many of these middlemen, though, try to obscure their sources. They do not want to show where this data comes from, for either malicious reasons (the data could be false or replicated) or non-malicious ones (they either do not know the source or want to protect it).
Without knowing the original source of the data, it is hard for companies to judge its quality and accuracy. Additionally, we are in a world where data can be faked very easily, leading to wrong decisions that cost millions of dollars or worse. This lack of transparency can have real-world consequences: companies will make multi-million-dollar decisions based on data that could well be wrong.
This is why we need to make the Data Economy more transparent. Data authentication technology, which tracks data from its source and uses blockchain to stamp an indelible signature onto it, is a good example of a solution to the murky data economy. This guarantees that, from the time of stamping, any change in the data will result in a misalignment with the unique signature, signaling to the buyer that the data has been changed. The technology is helping to introduce transparency and trust into the data supply chain.
Where do you think the global data economy is heading?
The global data economy, especially location data, is booming and heading on a clear trajectory: onwards and upwards. This economy is huge, with 2.5 quintillion bytes of data created each day at our current pace. It is increasingly valuable too, with revenues for big data and business analytics solutions forecast to reach $260 billion by 2022. We are also becoming increasingly reliant on data, and firms themselves are becoming more data-driven – 93% of organizations in Singapore, for example, use data for critical and automated decision-making.
We are seeing more companies emerge whose business models rely solely on data and on the innovations they create using this data. A good example is the CityMapper app, which uses transport and location data to help users find the most efficient way to get from A to B. This is one example, but there are many more. Lastly, the financial services industry is beginning to invest heavily in data – specifically alternative data. As alternative data becomes more widely available – and more accurate and trustworthy – we will see even greater usage of data.
What are some of the major discrepancies and loopholes in the data economy that need to be identified and resolved?
Today, the big data space is chaotic and uncharted, full of unstructured data whose source is hard to verify. This disorder means that organizations are not able to benefit from the full value of data and are unable to identify trends and patterns that could help them. Furthermore, it restricts innovation, as companies cannot make sense of the amount of data out there. One area that Quadrant is heavily involved in is data mapping. We are able to do this thanks to our blockchain-enabled data authentication technology, which allows us to link and organize different data sets. In this respect, we are enabling AI and microservices on top of the Quadrant platform, allowing entrepreneurs to build innovative solutions using data.
Data authenticity is also a problem. It is often hard for companies to understand the quality and accuracy of the data they use because sources are often hidden. This lack of transparency can have real-world consequences. When using a ride-sharing app, occasionally your destination address will not appear in the search box or will be inaccurate. This is not a software glitch; rather, it is the result of poor-quality data – data that the ride-sharing company will have purchased from a third party – being fed into the app.
While this is nothing more than an irritant for users and drivers, poor data quality can have much more serious consequences. Multinational companies will pay millions (often hundreds of millions) of dollars for anonymous consumer data that they use to decide where to locate their next restaurant or advertise their newest brand of shampoo. Unbeknown to the firm, some of that data could have come from 'click farms' that operate solely to profit from the sale of false data, which makes its way to legitimate businesses through a Data Supply Chain that lacks transparency.
A city government may purchase location data on the movement of its citizens and use that data to plan new bus routes, train stations, hospitals or emergency services. If this data is wrong, then the result could be anything from wasted spending (and higher taxes) to worse crime or slower economic growth. The problem is that data accuracy and quality are rarely questioned; rather, it is the end analysis and decision-making that usually bears the brunt of criticism when things go wrong.
Location technology is ubiquitous and certainly plays a big role in multiple domains. How can the authenticity of location data be ensured?
One of the main ways to ensure authenticity is through blockchain technology. Quadrant authenticates data via the Quadrant Protocol. We use blockchain-enabled data authentication technology to stamp data with a unique signature (also known as a 'hash') as it is created, placing it on Quadrant's blockchain. This guarantees that, from the time of stamping, any change in or corruption of the data will result in a misalignment with the unique signature, signaling to the buyer that the data has been changed.
By using Quadrant's data authentication technology and blockchain, companies can create an immutable ledger that references the data by hashing it and putting the hash on the blockchain. So, whenever the data is consumed and analyzed, buyers can trace it back to its source, knowing that it is unchanged since the moment it was created and stamped.
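At its core, the stamping-and-verification step described above amounts to hashing each record at the source and re-hashing it later to detect tampering. A minimal sketch of that idea follows; the record format, field names and SHA-256 choice here are our own illustrative assumptions, not Quadrant's actual implementation:

```python
import hashlib
import json

def stamp(record: dict) -> str:
    """Compute a deterministic signature ('hash') for a data record."""
    # Serialize with sorted keys so the same record always yields the same bytes.
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def verify(record: dict, signature: str) -> bool:
    """Re-hash the record and compare it against the stamped signature."""
    return stamp(record) == signature

# A hypothetical location record, stamped at creation time.
original = {"device_id": "abc123", "lat": 3.1390, "lon": 101.6869, "ts": 1554000000}
sig = stamp(original)

print(verify(original, sig))                  # True: untouched data matches
print(verify(dict(original, lat=3.2), sig))   # False: any change breaks the match
```

In a blockchain setting, only the signature (not the record itself) would be written to the ledger, so a buyer can later confirm the data is unchanged without the seller being able to alter the stamp.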
Quadrant also enables bad data to be traced back to the bad actor whenever data is delivered in a corrupted state. This allows for trust and transparency in the data economy, as companies are able to verify the source of their data. Importantly, it also allows companies to map disparate sources of data together, organizing the data and making it easier to innovate and develop creative solutions and products.
What are the current major trends in the location-based services market?
On the location-based services side of the market, private companies are using data at a much greater pace – especially location data, with the global market estimated to exceed US$13 billion by 2025. Organizations are relying on analysis of this data to make serious business decisions on expansion and strategy, or to make purchases worth many millions of dollars. Location data-emitting apps are becoming more accurate and usable, from ride-hailing to augmented reality, and are being sold to consumers, governments, and enterprises alike.
We are seeing a growing demand for accurate location data, too. This is driven by a number of factors, but in particular the consequences of poor-quality data are becoming more severe as bigger decisions are made based on data. However, we are also seeing more ROI from companies using location data, and this will continue, especially as the quality of location data continues to improve. Lastly, as the quality of location data gets better, so does the quality of Machine Learning and Artificial Intelligence (AI), both of which rely on data to make decisions.
How do you foresee the future of location intelligence?
I believe that we will start to see some amazing innovation and growth in this space. Location data usage has grown significantly and, as I mentioned earlier, more companies are seeing real results. As more companies (from start-ups to more established firms) become data-driven we will, in turn, see more firms produce data. Once this is mapped and organized then we will start seeing innovations and solutions that are perhaps unimaginable today.
Of course, we do need to address the issue of data provenance and transparency within the Data Economy as a whole – which we are seeking to tackle through solutions such as those enabled with Blockchain. Lastly, we will see more regulation emerge, the effects of which are hard to gauge. However, if we develop regulations that protect the individual yet at the same time make it easy for firms to access and use (anonymous) data in order to innovate and create solutions, then our lives will be improved thanks to location data.
What is the vision of Quadrant and are you developing any new technology?
Quadrant maps and authenticates data, making it easier to buy and sell quality, authentic data and spurring innovation and solutions for organizations. Quadrant consists of two parts, the Quadrant Platform and the Quadrant Protocol, which combined allow businesses and governments to solve their data challenges and optimize data to fit their needs.
The Quadrant platform processes over 50 billion records a month, enabling organizations in every industry to purchase data which they can then use to make business and policy decisions. It is powered by a protocol that uses blockchain technology to authenticate and map data. The Quadrant Protocol “stamps” data at its source with a unique signature, so any data point that has been stamped can be traced and verified back to its origin at any time, ensuring all stamped data is authentic. The technology also allows for the mapping of data into usable, targeted data sets that de-clutter the field of information for professionals and organizations, allowing them to use inputs relevant to their needs and to analyze them effectively and efficiently.
By creating an ecosystem that enables access to data that is authentic, traceable and mapped, Quadrant solves data challenges and spurs innovation. We are developing new technology, making it easier for organizations to find solutions to their data problems. We launched the Quadrant Protocol late last year, and earlier this year we made it possible for organizations to purchase credits for our services using fiat currency. By allowing companies to pay in fiat currency, we ensure that they remain in compliance with traditional bookkeeping and accounting structures.
What is your take on privacy concerns arising with the abundance of data?
Privacy concerns need to be addressed through smart regulations. We should protect individuals from harm, yet at the same time make data available for organizations to use and innovate with. By mapping and authenticating data, we can help create order to reduce abuses of privacy. Still, saying individuals should be the sole owners of all the data they produce is as far-fetched as saying they should control none of it. Data privacy is not the only problem in the ecosystem, and ultimately, while there are many conversations around privacy and security, few are talking about data quality. That is what we are focused on.
Can you throw some insights on the interconnection between location-based advertising and consumer behavior?
Quadrant analyzed anonymous location data for two large malls in Kuala Lumpur, Malaysia, a great example of how retailers can use location data to find their customers and increase competitiveness. The data enabled retailers to understand their catchment areas.
We examined how Pavilion Kuala Lumpur, a shopping mall located in the Bukit Bintang area of Kuala Lumpur, can use location data to find its catchment area and increase its footfall. Pavilion Kuala Lumpur and Suria KLCC are two competing shopping malls located two kilometers from each other.
For Suria KLCC, it was observed that 10% of the people approaching the mall came from within a 5km radius (the innermost concentric circle in Figure 1). The second concentric circle (10km radius) captured a cumulative 44% of probable customers (the innermost 10% plus a further 34%). Similarly, the third concentric circle (15km radius) covered a cumulative 78% (10% + 34% + 34%). Finally, the remaining 22% of people came from the outer region.
For Pavilion Kuala Lumpur, 15% of the people approaching the mall came from within the 5km radius (the innermost concentric circle in Figure 2). The second concentric circle (10km radius) captured a cumulative 59% of probable customers (15% + 44%). The third concentric circle (15km radius) covered a cumulative 87% (15% + 44% + 28%). Finally, the remaining 13% of people came from the outer region.
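The cumulative coverage figures above are simply running sums of the per-ring percentages. A minimal sketch of that arithmetic (the ring shares are taken from the figures described; the function and variable names are ours):

```python
from itertools import accumulate

def cumulative_coverage(ring_shares):
    """Running totals of customers captured within each successive radius."""
    return list(accumulate(ring_shares))

# Per-ring shares (5km, 10km, 15km, beyond 15km), in percent.
suria_klcc = [10, 34, 34, 22]
pavilion   = [15, 44, 28, 13]

print(cumulative_coverage(suria_klcc))  # [10, 44, 78, 100]
print(cumulative_coverage(pavilion))    # [15, 59, 87, 100]
```

Each list of per-ring shares sums to 100%, so the last cumulative value is always the full population of observed visitors.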
From these images, we know that most of Pavilion Kuala Lumpur's customers were concentrated within a shorter radius, unlike the customers of Suria KLCC, who were more spread out. For Pavilion Kuala Lumpur, only 13% of total customers came from the outer region, compared to Suria KLCC's 22%. From this data, Pavilion can conclude that if it wants a bigger share of the market, it needs to grow its customer base in the outer region.
Based on this information, Pavilion Kuala Lumpur can target more customers and compete better against Suria KLCC. Using location data, Pavilion Kuala Lumpur can improve its prospects of finding potential customers and increase its footfall.
For retailers, understanding their catchment areas is very important in running their business. A catchment is the area from which a retailer is expected to draw its customers and thereby increase footfall. Not focusing on the right catchment area can cost a retailer most of its potential customers, and hence its competitiveness. Furthermore, once general catchment areas are identified, retailers can go beyond this and identify individual neighborhoods to target specifically, based on travel patterns and trends. Ultimately, high-quality, mapped and anonymous location data such as this can improve business outcomes and make lives easier for consumers.