Kinect camera to enhance indoor navigation

US: Researchers have launched a website that lets users record their environments in 3D with a Kinect camera. The site, Kinect@Home (www.kinectathome.com), offers a plug-in with which users can capture 3D images with their Kinect in return for sharing those images with the researchers.

According to an article published by Wired, researchers may be on the cusp of an unprecedented way to amass 3D data to improve the navigation and object-recognition algorithms that allow robots to move through and manipulate indoor environments.

“For robots to work in everyday space and homes, we need lots of 3D data. Big data is where it’s at, as Google understands with its efforts,” said roboticist Alper Aydemir of the Royal Institute of Technology in Sweden. “But no one has been able to do this efficiently yet [with 3D data].”

With the advent of Microsoft’s low-cost yet highly effective 3D camera system, called Kinect, and sanctioned ways to hack the device, computer vision research is experiencing a revolution.

“I think we’ve developed a win-win situation,” said Aydemir, who leads the Kinect@Home effort. “Users get access to 3D models they can embed anywhere on the internet, and we use this data to create better computer vision algorithms.”

Populations are growing older, health insurance costs are rising and care systems are increasingly stretched, so autonomous robots offer a dreamy vision of the future for many people.

The trouble is that most automatons can only bumble through crowded human environments. Incorporating building blueprints into navigation algorithms pushes them only so far because such plans lack couches, tables, dogs and other oddities that people cram into indoor spaces.

What’s more, helper robots are only useful if they can recognise and interact with a dizzying variety of objects. Some crowdsourced schemes use Amazon Mechanical Turk to categorise objects in 2D images acquired by robots, but these images reveal nothing about an item’s 3D shape or behaviour.

Helper robots must be able to distinguish a refrigerator from an oven, for example, and open these labyrinthine 3D objects to cook a casserole or deliver a cold beer to beckoning human owners.

“If you can get real-world 3D data for 5,000 refrigerators, you can develop an algorithm to generalise a refrigerator and then test a robot’s ability to generalise them,” Aydemir said.

Source: Wired