Apple has published a new paper on arXiv, Cornell's open directory of scientific research, describing a method that uses machine learning to translate the raw point cloud data gathered by LiDAR arrays into results that include detection of 3D objects, with no additional sensor data required.
The paper offers the clearest look yet at Apple's work on self-driving technology. Apple's involvement is already public knowledge: the company had to acknowledge it to secure a self-driving test permit from the California Department of Motor Vehicles, and its test car has been spotted many times.
At the same time, Apple has been opening up about its machine learning efforts, publishing papers highlighting its research and sharing information with the larger research community.
This specific paper describes how Apple researchers, including authors Yin Zhou and Oncel Tuzel, created VoxelNet, a network that can infer and detect objects from a collection of points captured by a LiDAR array. Essentially, LiDAR works by emitting lasers at its surroundings and registering the reflected results, creating a high-resolution map of individual points.
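To give a sense of what "a collection of points" means in practice, the sketch below partitions a synthetic LiDAR-style point cloud into a 3D voxel grid, the kind of preprocessing step a voxel-based detector builds on. This is a minimal illustration, not Apple's implementation: the point cloud is randomly generated, and the grid bounds and 0.4 m voxel size are assumed values chosen for the example.

```python
import numpy as np
from collections import defaultdict

# Hypothetical point cloud: 1000 points with (x, y, z) coordinates in meters,
# spread over an illustrative region in front of the sensor.
rng = np.random.default_rng(0)
points = rng.uniform(low=[0.0, -40.0, -3.0], high=[70.0, 40.0, 1.0], size=(1000, 3))

# Assumed voxel size and grid origin (illustrative values only).
voxel_size = np.array([0.4, 0.4, 0.4])
grid_origin = np.array([0.0, -40.0, -3.0])

# Assign each point to a voxel by integer division of its offset from the origin.
voxel_indices = np.floor((points - grid_origin) / voxel_size).astype(np.int64)

# Group point indices by voxel coordinate; occupied voxels become the
# sparse input a voxel-based network would featurize.
voxels = defaultdict(list)
for i, idx in enumerate(map(tuple, voxel_indices)):
    voxels[idx].append(i)

print(f"{len(points)} points fell into {len(voxels)} occupied voxels")
```

The key property this illustrates is sparsity: most voxels in the grid are empty, so a detector only needs to process the occupied ones.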
The research is interesting because it could allow LiDAR to work much more effectively on its own in autonomous vehicles. Typically, LiDAR sensor data is paired or 'fused' with data from optical cameras, radar, and other sensors to create a complete picture and perform object detection; using LiDAR alone with a high degree of confidence could lead to production and computing efficiencies in actual self-driving cars on the road.