Mapbox’s Vision SDK transforms connected cameras into a second set of eyes for your car. By processing live imagery directly on the device, it brings visual context to Mapbox’s live location platform and rethinks how machines and humans alike interact with the road.
For users, the Vision SDK unlocks augmented reality navigation, detection and segmentation of various road features, customizable safety alerts, and more. On the backend, the SDK feeds valuable road metadata back into the living map. Highly efficient neural networks run solely on the device, so network bandwidth needs are low.
• Takes advantage of ARM’s Project Trillium AI technology to provide recognition of vehicles, pedestrians, signs and crosswalks out of the box.
• Runs neural networks directly on mobile devices, which allows for recognition to occur in real time.
The Vision SDK identifies salient road features (such as signs, traffic lights, and lane information) and processes the data directly on the device. Changes to the driving environment are detected on the spot and uploaded as low-latency, low-bandwidth updates to the living map.
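The detect-and-diff idea behind those low-bandwidth updates can be sketched roughly as follows. This is a hypothetical illustration only, not the Vision SDK's actual API; every name here (`RoadFeature`, `diff_against_map`, and so on) is invented for the example:

```python
# Hypothetical sketch of the on-device "detect, diff, upload" loop.
# None of these names belong to the real Vision SDK API.
from dataclasses import dataclass

@dataclass(frozen=True)
class RoadFeature:
    kind: str        # e.g. "sign", "traffic_light", "lane"
    location: tuple  # (lat, lon), snapped to a coarse grid

def diff_against_map(observed: set, cached: set) -> dict:
    """Compare on-device detections with the locally cached map state.

    Only the differences would be uploaded, which keeps bandwidth low:
    the camera frames themselves never leave the device.
    """
    return {
        "added": observed - cached,    # features newly seen on the road
        "removed": cached - observed,  # features no longer present
    }

# Cached map state vs. what the neural network just detected.
cached = {RoadFeature("sign", (37.78, -122.41)),
          RoadFeature("traffic_light", (37.79, -122.40))}
observed = {RoadFeature("sign", (37.78, -122.41)),
            RoadFeature("sign", (37.80, -122.39))}  # a new sign appeared

delta = diff_against_map(observed, cached)
# Only the changed features are in `delta`; the unchanged sign is not.
```

The point of the pattern is that the heavy work (inference over imagery) stays on the device, and only small structured deltas travel over the network.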
The Vision SDK is not just for mobile apps; it unlocks new functionality for embedded automotive systems as well.