To achieve large-scale commercialization of autonomous vehicles, a new generation of high-precision 3D environment sensing solid-state LiDAR products will be required to fulfill the industry’s strict requirements, including the need for automotive grade, mass production, high resolution, stability, and low cost, underscores Dr. Leilei Shinohara, Vice President of R&D at RoboSense, in an exclusive interview with Geospatial World.
Since you were actively involved in the world’s first automotive-grade LiDAR product, could you tell us about the growth of the LiDAR industry since then and the scope for new innovations?
Since the first automotive LiDAR appeared on L3 AD passenger vehicles, that product has proven the feasibility and necessity of automotive-grade LiDAR. In the next five years, the mature mass-produced automotive-grade solid-state LiDAR market will grow very quickly. We expect that MEMS LiDAR will be the first generation of solid-state LiDAR for autonomous driving vehicles, including RoboSense’s RS-LiDAR-M1 MEMS solid-state LiDAR. Level 3 autonomous passenger vehicles using LiDAR will gradually become an industry standard. As mentioned above, LiDAR is a very important sensor to guarantee safety and to compensate for the weaknesses of conventional sensors. Recently, both Audi’s A8 (a Level 3 mass-produced autonomous vehicle) and Waymo One (an autonomous ride-hailing service) have used LiDAR, which is an important industry signal.
Furthermore, smart perception functionality with LiDAR point cloud and data fusion with other sensors are also very important. In the end, what customers want is a smart sensor, a combination of hardware and software functionality. The RS-LiDAR-M1 MEMS solid-state LiDAR is such a sensor with AI perception algorithms included within the sensor to provide the detection, tracking, and classification of objects.
What do you think should be done to mainstream LiDAR sensors and make them easily available? And how can LiDARs be made more precise?
Currently, traditional LiDAR systems are limited by their large size and high cost, making it difficult to meet the industry’s needs. Therefore, to achieve large-scale commercialization of autonomous vehicles, a new generation of high-precision 3D environment sensing solid-state LiDAR products will be required to fulfill the industry’s strict requirements, including the need for automotive grade, mass production, high resolution, stability, and low cost.
How viable is mass production of LiDARs? And how could it be made cost-effective?
To mass-produce LiDAR in cars, companies must follow strict automotive standards and requirements. RoboSense’s MEMS LiDAR is a product designed for L2 ADAS and L3 AD, targeted to support standard mass-production vehicles starting in 2020/21. L2 ADAS vehicles are already available on the market, and L3 AD vehicles equipped with mass-production LiDAR sensors are technically ready. Technically, the supply chains for automotive LiDAR products are ready. Driven by the demands the AD market places on LiDAR, supply chain development is also advancing extremely fast, and through massive reliability testing, AD vehicle safety will be proven.
Could you please elaborate on MEMS LiDAR technology and what advantages it has over existing LiDARs?
LiDAR is an indispensable sensor for autonomous driving, enabling safe driving in autonomous vehicles. LiDAR can be categorized into different types: MEMS micromirror scanning (MEMS LiDAR), mechanical LiDAR, optical phased array (OPA LiDAR), and Flash LiDAR. MEMS LiDAR uses a MEMS micromirror to steer the laser beam for scanning. The pros are that the device can feasibly be made small and at a lower cost.
Mechanical LiDAR uses a motor to spin the laser reflection mirror or the entire laser-and-detector unit to scan the environment. Due to the complex laser transmitting and receiving design, it is difficult to achieve high-volume production, and the cost is higher. Flash LiDAR (a “3D ToF camera”) features a flashing laser that illuminates the entire FOV without using a beam-steering device. The pros are that there are no moving components; it is truly solid-state, stable, low cost, and able to be made small. The cons are that it is difficult to achieve a wide FOV and long range, and there are issues with operating under strong ambient light conditions. OPA LiDAR uses an optical phased array to deflect the beam for scanning. However, there isn’t an established supply chain, and it is difficult to achieve a wide FOV and long range.
What are some of the major challenges before the automotive LiDAR industry?
LiDAR is currently limited by large size and high cost, which makes it difficult to achieve automotive grade, mass production, high resolution, stability, and low cost.
Intel-owned Mobileye has developed a new autonomous driving system that makes use of high-definition cameras and not LiDARs. Would it have any impact on the LiDAR industry and also on the autonomous car sector?
Mobileye is an excellent company. They made significant improvements to front automotive cameras and accelerated the ADAS and AD industry. However, cameras have inherent hardware weaknesses and physical limitations. In ADAS systems, it is feasible to achieve that type of use with a single camera. However, for L3+ ASIL-D autonomous driving systems, neither cameras, radar, nor LiDAR alone is feasible. Therefore, a sensor fusion system combining cameras, radar, LiDAR, and other sensors is needed to create the redundancy required to achieve the vehicle-level safety of an ASIL-D system.
How do you foresee the future of autonomous vehicles?
There will be step-by-step growth in autonomous vehicles. The biggest concerns are always safety and public acceptance. The SAE has defined AD vehicles in five category levels (L1–L5). L2 (partial automation, or advanced ADAS systems) and L3 (conditional AD) passenger vehicles will start growing significantly in 2020/21. Meanwhile, L4 (highly automated) vehicles for special uses, such as parking, robo-taxis, and robo-trucks, will enter the commercial stage at the same time. Fully automated vehicles (L5), I think, will still take a long time to be reached. Unless fully automated vehicles can be proven safer than human drivers, they will have difficulty gaining acceptance. But the industry is moving in this direction step by step.
As per surveys, public confidence in self-driving cars is shaky and most consumers won’t trust an autonomous vehicle. What do you think the industry needs to do to allay skepticism and dispel prevailing myths and suspicion about self-driving cars?
The biggest challenge for autonomous driving is safety. The system has to be able to reduce accidents more than human drivers, and it has to prove to the public that its accident rate is lower. To achieve this, perception of the surrounding environment is very important. This means the system must “see” farther and wider, and understand the environment better than a human, so that it can make safer and quicker decisions.
According to ASIL-D safety regulations, the single-point failure rate needs to be less than 1%. Conventional sensors, including cameras and radars, all have their limitations. For example, cameras don’t work well under bad ambient light conditions, and radars have difficulty detecting stationary non-metallic obstacles. Therefore, radar and camera sensors alone cannot guarantee ASIL-D compliance. These weaknesses can be compensated for by LiDAR, but LiDAR cannot replace them on its own, since LiDAR also has limitations. Therefore, a good perception software system (like RoboSense’s) is needed to fuse LiDAR, radar, and camera data for redundancy.
With autonomous vehicles, we would need to revamp the rules, regulations, and laws concerning driving and automobiles entirely. Are the regulatory associations and government organizations prepared to tackle this challenge?
Most governments are working on it. As mentioned above, the biggest challenge is proving the safety of autonomous cars. To allow an AV on the road, not only are rules, regulations, and laws needed, but insurance and public acceptance must also be considered. For example, in China, local and state governments are working very hard together with specialists in the industry to establish regulations and to provide an AV-friendly infrastructure.
Could you please elucidate on the major autonomous driving trends?
First-generation AD passenger car systems will include L3 functionality such as TJP (Traffic Jam Pilot) and HWP (Highway Pilot). Those cars will focus first on easy highway driving situations. L4 functions will emerge in constrained situations, such as valet parking (where the operating environment is limited by low parking speeds) and robo-taxis and robo-trucks operating within limited, defined areas. In bad weather conditions, operators can also restrict operation to reduce safety risks. These kinds of limitations are all based on the same reasoning: achieving a safe environment for AD. Therefore, to guarantee the smooth deployment of autonomous driving, an AV-friendly environment and infrastructure is very important.
On the vehicle side, the widespread use of LiDAR and smart perception, along with fusing other sensors, such as radar and camera, will absolutely become the standard. On the road, creating an AV-friendly infrastructure, the IVICS (Intelligent Vehicle Infrastructure Cooperative Systems) along with V2X will become important, as well.
This year, RoboSense has put considerable effort into developing LiDAR for IVICS, placing LiDAR at the roadside to create a dynamic HD map of the entire environment, including stationary and dynamic obstacles. This dynamic map is delivered to the vehicle via the V2X platform at a sub-second rate for further fusion with the vehicle’s onboard sensors. Together, such a system will accelerate the deployment of AD vehicles.
To ensure safety, fusion of many different sensors is needed. A highly powerful embedded computation platform with a reliable communications system, such as 5G V2X, is essential. Furthermore, an AD-friendly infrastructure is important. The system also needs to be able to take over all driving, even in the case of a system failure. Therefore, LiDAR systems will require operation in all weather conditions, which is still a challenge for LiDAR. However, RoboSense is using algorithm software to solve this issue and has just announced an all-weather LiDAR system for snowy and low-temperature conditions with Finland-based Sensible 4 for its GACHA shuttle bus, the world’s first autonomous robo-taxi for all weather conditions.
What are the future plans of RoboSense and what makes it distinct from its competitors?
RoboSense is Asia’s market leader with an over 50% market share of all LiDAR sold. We are currently developing different types of LiDAR to meet different customers’ needs, so most of our customers will be served by our LiDAR products. We also have joint development projects with OEMs, technical companies, universities, and research institutes to provide full system solutions. RoboSense’s partners include Alibaba Group’s Cainiao Network to provide LiDAR for 100,000 unmanned delivery vehicles in the next three years; Sensible 4 to provide all-weather LiDAR for the GACHA self-driving all-weather shuttle bus in 2019; and many others.
Unlike most other companies, RoboSense focuses on stable technologies, such as RoboSense’s mechanical spinning and MEMS mirror LiDAR. RoboSense’s mechanical LiDAR uses a very special combination of transmission and detection components, as well as an optical system design that reduces the cost of conventional LiDAR. For RoboSense’s MEMS LiDAR, we have a unique optical module design that improves manufacturability and has been awarded a number of patents, delivering best-in-class performance at the lowest cost.
Through co-development projects with our OEM and industrial partners, RoboSense’s CES Innovation Award-winning MEMS LiDAR product, the RS-LiDAR-M1, will be ready for delivery to automotive customers. The RS-LiDAR-M1, with patented MEMS technology, offers ground-breaking vehicle intelligence awareness to fully support Level 3/4 automated driving as well as Level 2 ADAS applications. Breaking through the measurement range limit of 905 nm LiDAR with a detection distance of up to 200 meters, the upgraded optical system and signal processing technology deliver a remarkable final point cloud output that can now clearly resolve even small objects.
In addition, and most importantly, RoboSense believes that LiDAR hardware also needs superior software to function fully. Thus, we are focusing on developing first-class AI perception algorithms and systematic LiDAR solutions with a full set of features to support customers and get the most out of our LiDAR sensors.