If the perception machinery of a self-driving car could be built into a pair of spectacles that tells the wearer which way to go, it could help many visually impaired people too; that idea is how deepWay came into being. An innovation with the potential to bring real change, deepWay — an aid in the form of spectacles — uses deep learning, particularly convolutional neural networks, to help visually impaired users navigate the streets. The aid was created by Satinder Singh, a young college enthusiast and student at Army Institute of Technology, College of Engineering, Pune, India.
Excited to use his technical know-how for a social cause, Singh shared that he took about 10,000 images around his college, covering all types of roads (including off-road paths), and trained a convolutional neural network classifier on this data. “I used an Arduino to interface two servos that could press against one side of my head, indicating to me to move in that direction. I connected a camera, earphones and the Arduino to my laptop. The camera was placed on my chest. The camera feed was processed on the laptop, which predicted which side of the road the user was walking on,” explains Singh. The system also tells the user about the people around him and stop signs through the earphones. For face detection and stop sign detection, Haar cascades in OpenCV have been used.
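The laptop-side logic Singh describes — turn a CNN prediction into a tap from one of the two head-mounted servos — can be sketched in a few lines. The class names and the one-byte serial protocol below are assumptions for illustration, not Singh's actual code; on the hardware, a library such as pySerial would carry the byte to the Arduino, whose sketch would move the matching servo.

```python
# Hypothetical mapping from the road-position classes a CNN might predict
# to single-byte commands an Arduino could read over serial and turn into
# servo taps. Class names and protocol are assumptions for illustration.
COMMANDS = {
    "left":   b"L",  # tap the left servo: drift back toward the left edge
    "center": b"C",  # no tap needed: keep walking straight
    "right":  b"R",  # tap the right servo: move right
}

def command_for(predicted_class):
    """Translate a CNN prediction into the byte sent to the Arduino."""
    try:
        return COMMANDS[predicted_class]
    except KeyError:
        raise ValueError(f"unknown class: {predicted_class!r}")

# On the real device the main loop would look roughly like (hypothetical):
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 9600)   # assumed port and baud rate
#   port.write(command_for(predict(frame)))      # predict() = the CNN classifier
```

Keeping the vision model on the laptop and sending only one byte per decision keeps the Arduino sketch trivial and the feedback latency low.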
Location-based tracking needs to be included
Reiterating that location is a vital component of the project, Singh says that right now the aid localizes visually impaired users only coarsely, directing them to keep to the left while walking. It suffers certain inadequacies and does not give their exact location, so location-based tracking could boost the efficiency of the aid. To do so, Singh plans to use LiDAR data to capture the surroundings and GPS to get the location on maps. By combining both of these technologies, the exact location of the user can be found.
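The GPS half of that plan is the more standard piece: consumer GPS modules report position as NMEA sentences, and extracting latitude and longitude from a GGA sentence is a short exercise. The sketch below is a minimal illustration of that step, not part of deepWay itself.

```python
# Minimal parser for an NMEA GGA sentence, the kind of output a consumer GPS
# module would feed the system. Illustrative sketch; not deepWay's code.
def parse_gpgga(sentence):
    """Return (latitude, longitude) in decimal degrees from a $GPGGA sentence."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    raw_lat, ns, raw_lon, ew = fields[2], fields[3], fields[4], fields[5]
    # NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm.
    lat = float(raw_lat[:2]) + float(raw_lat[2:]) / 60.0
    lon = float(raw_lon[:3]) + float(raw_lon[3:]) / 60.0
    if ns == "S":
        lat = -lat
    if ew == "W":
        lon = -lon
    return lat, lon
```

With a position fix like this, LiDAR scans of the immediate surroundings could then refine where on the mapped street the user actually stands.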
Not everything was smooth, there were rough patches too
While conceptualizing this project was easy, making it work was difficult. The main challenge was collecting labelled data for the convolutional neural network: CNNs are hungry for training data, and the more data they get, the better they work. “I used to go around my college for a walk to gather training data. I did this for different types of roads and in different lighting conditions so that my model could better generalize and predict the correct classes. After all this, the rest was the code, which was not a big problem for me,” adds Singh.
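One standard way to check that a model trained on such hand-collected images actually generalizes is to hold out a validation set that contains every road and lighting condition, not a random slice that might miss one. The stratified split below is a generic sketch of that practice (the file names and labels are made up), not Singh's pipeline.

```python
# Stratified train/validation split: every label (e.g. road position class,
# or road-type/lighting condition) appears in both splits, so validation
# accuracy reflects all the conditions the data was collected under.
# Generic sketch with invented labels; not deepWay's actual pipeline.
import random
from collections import defaultdict

def stratified_split(samples, val_fraction=0.2, seed=0):
    """samples: list of (path, label) pairs. Returns (train, val) lists,
    holding out val_fraction of each label (at least one item per label)."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for path, label in samples:
        by_label[label].append((path, label))
    train, val = [], []
    for label, items in by_label.items():
        rng.shuffle(items)
        n_val = max(1, int(len(items) * val_fraction))
        val.extend(items[:n_val])
        train.extend(items[n_val:])
    return train, val
```

Splitting per label rather than over the whole pool is what guards against, say, all the low-light images landing in the training set and none in validation.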
How is deepWay different from Microsoft’s app Seeing AI?
When asked how deepWay differs from Microsoft’s app Seeing AI, which is designed for the low-vision community and uses AI to describe people, text and objects, Singh says, “I saw Microsoft’s ‘Seeing AI’ before implementing my project. The main thing that was missing in that project was navigation, and I added navigation to my project. Apart from this, connecting two servos for navigation is also a new thing I implemented. There are a lot more things that I am going to implement so that the visually impaired can use it as a complete package for their day-to-day lives.”
On a positive note, Singh concludes by saying that he is already working on deepWay version 2, which is going to be much better and more effective.