Digital image sensor combines 2D and 3D

ON Semiconductor announces the AR0430 CMOS digital image sensor, available from Framos. It combines 2D video imaging and 3D image recognition on a single sensor.

The AR0430 CMOS sensor, in a small 1/3.1-inch optical format, delivers high-quality images using advanced 2.0 micron stacked BSI pixel technology and a 4 Mpixel resolution at 120 frames per second. A depth mode enables concurrent depth mapping while shooting video at 30 frames per second. The sensor can be used in industrial and consumer end products, adding 3D capability to cameras for IoT, wearables, security, and augmented, virtual and mixed reality (AR/VR/MR).

For example, a user can participate in a video conference while replacing the background for security purposes. It is also possible to scan objects and create simple 3D models for use in virtual reality worlds, or even to interpret hand gestures to control smart devices.

The AR0430 has an active pixel array of 2312 x 1746, giving a 4:3 aspect ratio. The device provides low-power performance, drawing a mere 125 mW when streaming 4 Mpixel data at 30 frames per second. A low-power monitoring mode, drawing only 8.0 mW in standby, is especially valuable in battery-powered security applications.
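A quick check of the quoted figures can be sketched as follows. The frame dimensions and frame rate are taken from the article; the 10-bit raw pixel depth used for the bandwidth estimate is an assumption for illustration only.

```python
# Sanity check of the AR0430's quoted resolution, plus an estimated raw
# data rate. Dimensions and frame rate come from the article; the 10-bit
# raw pixel depth is an assumption, not an article figure.

WIDTH, HEIGHT = 2312, 1746    # active pixel array (from the article)
FPS = 30                      # video frame rate (from the article)
BITS_PER_PIXEL = 10           # assumed raw ADC depth

pixels = WIDTH * HEIGHT
print(f"{pixels:,} pixels = {pixels / 1e6:.2f} Mpixel")   # ~4.04 Mpixel

raw_rate_gbit = pixels * FPS * BITS_PER_PIXEL / 1e9
print(f"raw stream ~ {raw_rate_gbit:.2f} Gbit/s at {FPS} fps")

aspect = WIDTH / HEIGHT
print(f"aspect ratio ~ {aspect:.3f} (4:3 = {4 / 3:.3f})")
```

The pixel count works out to just over 4 million, matching the quoted 4 Mpixel resolution, and the ratio 2312:1746 is within about 1% of exact 4:3.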

In standard imaging mode, the AR0430 can provide high-quality images in both day and night lighting conditions, enhancing its suitability for security cameras. With a large linear full-well capacity and high dynamic range, the imager succeeds in challenging light conditions with leading colour performance, claims Framos.

In slow-motion mode, the sensor can record video at 120 frames per second and use the zoom feature while retaining resolution quality, making it well suited to wearable devices. Depth data at 30 frames per second enables object recognition, virtual replacement, or downstream artificial intelligence (AI) that can interpret the data for autonomous decisions or touchless device control.

The compact, industry-standard sensor size for embedded vision applications allows multiple cameras to be synchronised for 360-degree cameras or longer-range depth solutions.

Simultaneous video and depth mapping is enabled by ON Semiconductor's Super Depth technology, built on stacked die technology. Super Depth technology, together with the colour filter array (CFA) and on-pixel micro-lenses, creates a data stream containing both image and depth data. This data is combined off the sensor by an algorithm to deliver a 30 frame per second video stream and a depth map of any object within one metre of the camera.
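The off-sensor separation of that combined stream can be pictured with a minimal, entirely hypothetical sketch. The AR0430's actual pixel layout and reconstruction algorithm are proprietary and not described in the article; here every fourth pixel in each direction is arbitrarily treated as a depth sample, with a simple neighbour average filling the image site.

```python
# Hypothetical sketch of splitting a combined image/depth frame.
# The real AR0430 layout and algorithm are not public; we arbitrarily
# assume every 4th pixel in each direction carries a depth sample.

def split_frame(raw, step=4):
    """Separate a 2D raw frame (list of lists of ints) into a full image
    frame and a sparse depth map, under the assumed layout above."""
    height, width = len(raw), len(raw[0])
    image = [row[:] for row in raw]        # copy; depth sites patched below
    depth = []
    for y in range(0, height, step):
        depth_row = []
        for x in range(0, width, step):
            depth_row.append(raw[y][x])    # collect the depth sample
            # patch the image site with an average of horizontal neighbours
            left = raw[y][x - 1] if x > 0 else raw[y][x + 1]
            right = raw[y][x + 1] if x + 1 < width else left
            image[y][x] = (left + right) // 2
        depth.append(depth_row)
    return image, depth

# Usage: a tiny 8 x 8 frame of dummy values
frame = [[(y * 8 + x) % 256 for x in range(8)] for y in range(8)]
img, dep = split_frame(frame)
print(len(dep), "x", len(dep[0]), "depth samples")  # 2 x 2
```

A real pipeline would of course interpolate far more carefully and convert the depth samples to physical distance, but the demultiplex-then-reconstruct structure is the point of the sketch.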

Imaging engineers and system designers will benefit from the AR0430's significant configuration flexibility, including programmable gain, horizontal and vertical blanking, frame size and rate, exposure, image reversal, window size, and panning. Engineering samples are currently available in bare die format, and full production will start later in Q1 2018.