Objectively Mapping the Future

Kejie Li, AIML PhD Student

We tend to take for granted how we navigate the three-dimensional world we inhabit, because our brains rapidly process the messages sent by our senses and just as quickly tell our bodies what to do in response.

Equipping robots with similar navigational skills has been a research challenge for some time. It starts with a camera transmitting visual images to enable Simultaneous Localisation and Mapping (SLAM): building a map of an unfamiliar environment while simultaneously tracking the camera's own position within it.

Building robots with consistently reliable SLAM skills would be a great leap forward in robotic vision.

PhD student Kejie Li has been working towards this leap through object-oriented SLAM, which enables robots to recognise objects in context.

Traditional SLAM can only reconstruct an environment using low-level, geometry-based representations, such as a set of points scattered through 3D space (a 3D point cloud).

Kejie says that although such representations can indicate which parts of 3D space are occupied, they fail to provide the descriptive information people use to recognise and interact with a specific object. For example: this is a chair; I could sit on it, walk around it, perhaps move it. That kind of description helps create a more accurate map.
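The difference is easy to see in code. Below is a minimal sketch, with hypothetical types and field names rather than anything taken from the research itself, contrasting the two map representations: traditional SLAM stores bare geometry, while an object-level map stores recognisable objects, each with a label, a pose, and a reconstructed shape.

```python
# Illustrative sketch only -- hypothetical structures, not the authors' code.
from dataclasses import dataclass
import numpy as np

# Traditional SLAM: the map is just geometry, an (N, 3) array of 3D points.
# It says which space is occupied, and nothing more.
point_cloud_map = np.random.rand(10000, 3)

@dataclass
class MapObject:
    """One entry in an object-level map: geometry plus semantic meaning."""
    label: str         # e.g. "chair" -- what the object is
    pose: np.ndarray   # 4x4 homogeneous transform: where it sits in the world
    shape: np.ndarray  # reconstructed 3D shape, e.g. a voxel occupancy grid

object_map = [
    MapObject(label="chair",
              pose=np.eye(4),
              shape=np.zeros((32, 32, 32))),
]
```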

“Unlike traditional SLAM systems, the mapping of our object-level SLAM is based on objects with semantic meaning,” Kejie says.

“In particular, in order to reconstruct 3D object shapes from partial observations, we train a deep neural network to ‘hallucinate’ the full 3D shape of an object given only a few images, or even just one image.”
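As a rough illustration of that idea, the sketch below shows an encoder-decoder network in PyTorch that maps a single RGB image to a full 3D voxel occupancy grid, including the surfaces the camera never saw. The architecture and layer sizes here are illustrative assumptions, not the published model.

```python
# Illustrative sketch of single-image 3D shape prediction; sizes are assumptions.
import torch
import torch.nn as nn

class ShapeFromImage(nn.Module):
    def __init__(self, latent_dim=256, voxel_res=32):
        super().__init__()
        self.voxel_res = voxel_res
        # 2D encoder: compress the image into a latent shape code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent_dim),
        )
        # 3D decoder: 'hallucinate' the full shape from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, voxel_res ** 3),
            nn.Sigmoid(),  # per-voxel occupancy probability in [0, 1]
        )

    def forward(self, image):
        code = self.encoder(image)
        occupancy = self.decoder(code)
        r = self.voxel_res
        return occupancy.view(-1, r, r, r)

# One 128x128 RGB view in, a 32^3 occupancy grid out.
net = ShapeFromImage()
pred = net(torch.rand(1, 3, 128, 128))
print(pred.shape)  # torch.Size([1, 32, 32, 32])
```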

Placing the reconstructed objects in the 3D space would enable a robot to ‘see’ what and where the objects are.
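Concretely, once a shape has been reconstructed and the object's pose estimated, placing it in the map is a rigid-body transform. The sketch below (again illustrative, not the authors' code) moves a reconstructed shape from its own coordinate frame into the world frame, so the robot knows both what the object is and where it sits.

```python
# Illustrative sketch: placing a reconstructed object into the world map.
import numpy as np

def place_in_world(object_points, pose):
    """Transform points from the object frame into the world frame.

    object_points: (N, 3) surface points of the reconstructed shape
    pose:          4x4 homogeneous transform (rotation + translation)
    """
    homogeneous = np.hstack([object_points, np.ones((len(object_points), 1))])
    return (pose @ homogeneous.T).T[:, :3]

# Example: a chair reconstructed at the origin, placed 2 m ahead of the robot.
pose = np.eye(4)
pose[:3, 3] = [2.0, 0.0, 0.0]
chair_points = np.random.rand(500, 3) - 0.5  # stand-in for a predicted shape
world_points = place_in_world(chair_points, pose)
```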

Kejie says we are likely to see many virtual reality devices entering our daily lives, and that representing the environment through object-level mapping would help those devices understand and navigate their surroundings in the same way people do.

The research has a long way to go, but his team is helping to map the way forward.

Story provided by Kejie Li, PhD student with Professor Ian Reid

Funding: ARC Centre of Excellence for Robotic Vision
