With the recent news that the UK government will invest £300m in AI research as part of its commitment to ‘deep tech’, we thought this was a fitting time to share our own exciting developments in this area.
Currently, our machine learning team is in overdrive. In a recent collaboration with Microsoft, the team developed a model that identifies roof types purely from aerial imagery. In the time it takes a human to classify the roof types in a single image, this technique can process thousands of images. And if that doesn’t excite you (you’re likely on the wrong blog!), we speculate that the machine could classify the roof types of all 35,700,655 properties in Great Britain in less than a day!
Essentially, the machine is taught to recognise features in a landscape and then categorise them. It is trained to know when it sees something that meets the learned criteria for an object: to identify a gabled roof, for instance, it has learned a certain structure, colour and context. You could say it is learning to call a spade a spade.
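As a toy illustration of that learn-then-categorise idea (not the team’s actual model), each roof type can be summarised by the average of its labelled training examples, and a new roof assigned to whichever average it sits nearest. The feature names and numbers below are invented stand-ins for structure, colour and context cues.

```python
# Toy sketch of "learn features, then categorise": a nearest-centroid
# classifier. Feature values are invented stand-ins for structure,
# colour and context -- NOT a real roof-detection pipeline.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labelled):
    """labelled: {roof_type: [feature_vector, ...]} -> one centroid per type."""
    return {label: centroid(vecs) for label, vecs in labelled.items()}

def classify(model, features):
    """Assign the roof type whose centroid is nearest (squared distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sqdist(model[label], features))

# Hypothetical training data: [ridge_strength, slope_symmetry, edge_count]
training = {
    "gabled": [[0.9, 0.8, 2.0], [0.8, 0.9, 2.0]],
    "hipped": [[0.7, 0.9, 4.0], [0.6, 0.8, 4.0]],
    "flat":   [[0.1, 0.1, 0.0], [0.0, 0.2, 0.0]],
}

model = train(training)
print(classify(model, [0.85, 0.85, 2.0]))  # → gabled
```

A real deep learning model learns its own features from pixels rather than being handed them, but the training/classification split is the same shape.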
Work is ongoing to have the machine recognise other geographic features, including bodies of water, road markings and further building attributes. This brings us to a question: how many bodies of water can you see in the image below?
If you identified seven, you’d be correct (and are clearly more observant than me!). But however many you spotted, you must admit that some of them aren’t obvious. This is essentially what we are asking the model to do, which is why our team are so busy!
Aside from stupefying me, having this remarkable level of detail at our fingertips offers many benefits. For example, it could help insurance companies and the emergency services produce more accurate risk assessments by identifying thatched roofs and the era in which houses were built. And to help the United Arab Emirates (UAE) plan and manage its natural resources and infrastructure, OS collaborated with Deimos Space UK and the Mohammed bin Rashid Space Centre in Dubai to create a deep learning algorithm that classifies and counts palm trees!
ImageLearn is a deep learning programme in which we are training a model to understand a highly detailed labelling scheme for the landscape. Similarly, our upcoming façade hack project will use deep learning techniques not only to label mobile mapping imagery, but to segment it, extracting building façade features such as windows and doors. Among its many potential uses, this could help in the development of 5G.
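The difference between labelling an image and segmenting it can be sketched on a toy grid: labelling returns one tag for the whole image, while segmentation returns a decision per pixel. The brightness threshold below is an invented stand-in for a trained network.

```python
# Toy contrast between whole-image labelling and per-pixel segmentation.
# The threshold rule is an invented stand-in for a trained deep network.

WINDOW_THRESHOLD = 0.5  # hypothetical: bright pixels ~ glass in a facade photo

def label_image(image):
    """Whole-image labelling: one tag for the entire image."""
    flat = [px for row in image for px in row]
    share = sum(px > WINDOW_THRESHOLD for px in flat) / len(flat)
    return "has_windows" if share > 0.2 else "blank_wall"

def segment_image(image):
    """Per-pixel segmentation: a mask marking which pixels are 'window'."""
    return [[1 if px > WINDOW_THRESHOLD else 0 for px in row] for row in image]

facade = [
    [0.1, 0.9, 0.1],
    [0.1, 0.9, 0.1],
    [0.1, 0.1, 0.1],
]
print(label_image(facade))    # one label for the whole image
print(segment_image(facade))  # a mask the same shape as the image
```

The mask is what makes feature extraction possible: from it you can read off where the windows are, not just that windows exist.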
For a successful 5G network, you need to know what materials surround an antenna and what obstructions sit in its direct field, as these can interfere with the signal. Here, we could use the façade hack or ImageLearn to establish what buildings are made from and which obstructions to avoid when siting an antenna.
Another exciting project is the continuing development of a rules-based computer vision classification system, which classifies imagery and height models to assess the land cover within them. Previously used to assess the hedge cover of rural land for the Rural Payments Agency, the method is now being applied to analyse change over time, such as new housing developments.
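One plausible shape of such a rules-based system (all thresholds below are invented for illustration) is a set of fixed rules combining an imagery-derived “greenness” value with a height model, plus a comparison step that flags cells whose class changed between two dates.

```python
# Sketch of a rules-based land-cover classifier combining an imagery-derived
# "greenness" value with a height model. All thresholds are invented; a real
# system would tune its rules per survey and sensor.

def classify_cell(greenness, height_m):
    """Apply fixed rules to one grid cell; returns a land-cover class."""
    if greenness > 0.6 and 1.0 <= height_m <= 4.0:
        return "hedge"
    if greenness > 0.6 and height_m > 4.0:
        return "tree"
    if greenness <= 0.6 and height_m > 4.0:
        return "building"
    if greenness > 0.6:
        return "grass"
    return "bare_ground"

def detect_change(before, after):
    """List cells whose class changed between two classified grids --
    e.g. grass -> building flags a possible new housing development."""
    changes = []
    for r, (row_b, row_a) in enumerate(zip(before, after)):
        for c, (b, a) in enumerate(zip(row_b, row_a)):
            if b != a:
                changes.append((r, c, b, a))
    return changes

t1 = [[classify_cell(0.8, 0.2), classify_cell(0.7, 2.5)]]  # grass, hedge
t2 = [[classify_cell(0.2, 6.0), classify_cell(0.7, 2.5)]]  # building, hedge
print(detect_change(t1, t2))  # → [(0, 0, 'grass', 'building')]
```

Unlike a deep learning model, every decision here is an explicit, auditable rule, which is part of the appeal for assessments like hedge cover.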
The team are also exploring how computer vision techniques can support 5G and autonomous vehicles. Believe it or not, they are testing a road marking detection algorithm to deduce the number of lanes on roads. This research is in its early stages and is still being refined and tested, but the fact that it is being worked on at all is fascinating.
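The lane-counting step can be sketched as follows, assuming a hypothetical upstream detector has already reported the lateral offsets (metres across the carriageway) of painted lines. Noisy detections of the same line are merged, and the lane count is simply the number of gaps between neighbouring lines.

```python
# Sketch of lane counting from detected road markings. Assumes an upstream
# detector has reported lateral offsets (metres across the road) of painted
# lines; the detections below are invented for illustration.

def group_lines(offsets, tolerance=0.3):
    """Merge detections within `tolerance` metres into single line positions."""
    groups = []
    for x in sorted(offsets):
        if groups and x - groups[-1][-1] <= tolerance:
            groups[-1].append(x)
        else:
            groups.append([x])
    return [sum(g) / len(g) for g in groups]

def count_lanes(offsets):
    """Number of lanes bounded by the detected line markings."""
    lines = group_lines(offsets)
    return max(len(lines) - 1, 0)

# Noisy detections of three painted lines (edge, centre, edge) -> two lanes.
detections = [0.0, 0.1, 3.5, 3.6, 7.0]
print(count_lanes(detections))  # → 2
```

The hard part in practice is the detector itself, not this arithmetic, which is why the research is still being refined and tested.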
We regularly collaborate with the University of Southampton and Lancaster University, actively supporting and sponsoring PhD students working in these areas. Our PhD programmes are diverse and often inspire ideas for our future. Jon Slade, a PhD student at the time, was key in the development of our upcoming façade hack. One candidate is currently investigating the use of deep learning techniques to extract previously undiscovered archaeological sites in Great Britain from aerial imagery, lidar and height models, while another is applying real-time language processing to Twitter posts to analyse the land use of geographical areas.
This is just a summary of what our machine learning team are getting up to. If you’d like to stay updated on our progress in this area, simply follow us on Twitter, where we will always post updates!