Another stupendous innovation from Google: by combining AI with Google Street View imagery, Google has taken digital photography to a new level.
In May 2017, Google integrated Street View imagery with Google Maps, allowing users to follow turn-by-turn navigation to their destination, and now it has taken the use of Street View imagery to the next level. It has successfully used machine learning to tackle a subjective concept like aesthetics by developing Creatism, a deep-learning system for artistic content creation.
To achieve this milestone, Google took about 40,000 Street View panoramas from areas such as the Alps, the Banff and Jasper National Parks in Canada, Big Sur in California, and Yellowstone National Park, and turned them into highly appealing images using software that mimicked the workflow of a professional photographer. This successful experiment in advanced photography is the culmination of Google’s earlier attempts in the field. Google first worked with an AI “photo editor” that attempted to fix tampered-with professional shots using an automated system that modified lighting and applied filters. It then came up with another model that tried to distinguish an original shot from a professionally edited one. All of these attempts helped Google add this accomplishment to its repertoire.
Breaking aesthetics into multiple aspects
According to Hui Fang, a software engineer on Google Research’s Machine Perception team: “Our approach relies only on a collection of professional quality photos, without before/after image pairs, or any additional labels. It breaks down aesthetics into multiple aspects automatically, each of which is learned individually with negative examples generated by a coupled image operation. By keeping these image operations semi-“orthogonal”, we can enhance a photo on its composition, saturation/HDR level and dramatic lighting with fast and separable optimizations.”
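The payoff of keeping the image operations semi-orthogonal is that each aspect can be tuned on its own, rather than via a joint search over all parameters at once. The following toy NumPy sketch illustrates that idea only; the per-aspect scorers, their peak values, and the grid search are all hypothetical stand-ins for Creatism’s learned models:

```python
import numpy as np

# Hypothetical per-aspect scorers: each rates ONE aspect of an edit
# (crop, saturation, lighting) independently of the others. In Creatism
# these would be learned models; here they are simple toy functions
# peaking at made-up "ideal" values.
def composition_score(crop_frac):
    return -(crop_frac - 0.6) ** 2

def saturation_score(sat):
    return -(sat - 1.3) ** 2

def lighting_score(light):
    return -(light - 0.9) ** 2

def optimize_separately(scorers, grids):
    # Because the aspects are semi-"orthogonal", each parameter is
    # optimized on its own 1-D grid instead of one joint 3-D search.
    return [g[np.argmax([s(v) for v in g])] for s, g in zip(scorers, grids)]

grid = np.linspace(0.0, 2.0, 201)  # step 0.01
best = optimize_separately(
    [composition_score, saturation_score, lighting_score],
    [grid, grid, grid],
)
# best is close to [0.6, 1.3, 0.9]: each aspect found its own optimum.
```

Three separate 201-point searches replace a 201³-point joint search, which is what makes the optimization “fast and separable”.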
For training, Google used a generative adversarial network (GAN): a generative model creates a mask that adjusts lighting to produce negative examples, while a discriminative model tries to distinguish the enhanced results from real professional photos. Unlike shape-fixed filters such as a vignette, the dramatic mask applies content-aware brightness adjustment to a photo. The competitive nature of GAN training leads to good variations in these suggestions.
The success of this experiment corroborates our belief in the power of spatial data, AI and deep learning. The beautiful landscapes Google’s experiment has produced reinforce the fact that these technologies are not just ‘technical stuff’; they have the power to make our lives happier. The geospatial community is known for working dedicatedly towards creating a better world, and this experiment is surely another feather in its cap. The images are immensely soothing to the eyes and transport us to a tranquil environment.