Simultaneous localisation and mapping in autonomous vehicles

Professor Ian Reid

Robotic and autonomous vehicles are valuable defence assets as they can undertake operations which present increased risks to personnel, such as entering contested or contaminated areas where they can gather information, or act as relay points for communication systems.

Key to their use is trust. Operators of these vehicles need to be sure that the information both they and the vehicle rely on is accurate, and that it can be interpreted quickly and effectively. Information about the environment around a vehicle can be gathered using a range of sensors, including radar, lidar and the Global Positioning System (GPS). In autonomous vehicles these sensors provide information about where the vehicle is and, to some extent, about the dynamic, uncertain environment in which it is operating. But current methods stop short of making this picture more complete: determining what objects are, whether they are fixed or moving, threats or benign; whether regions are navigable or traversable; and how to make effective decisions using this more complete, but still uncertain, picture of the world.

Professor Ian Reid’s team are combining data from these sensors with images and video from vehicle-mounted cameras to create a much more detailed picture of the environment. This not only allows an autonomous vehicle to make much better decisions itself, but also gives its operator a much clearer picture of what is around the vehicle, so they can understand the rationale for any decisions the vehicle makes and provide additional guidance based on this more comprehensive data.

Ian explains: "We’re looking at gathering information from a broader range of sensors on an autonomous vehicle, whether the vehicle is land-, sea- or air-based, and creating software which takes that information and builds a geometrically and semantically meaningful, dynamic map of the world which can be understood by both the vehicle and the human operator. Using machine learning, we can sift through huge quantities of data (too much for a human to process) and distil it into a detailed and easily recognisable map of the surrounding environment. This can then be used for path planning, for example identifying whether the flat area in front of the vehicle is a road or a river, or for determining whether the area is safe from other potential threats which may damage the equipment."
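To make the road-or-river point concrete, here is a toy sketch (an illustration only, not the project's software) of how a semantic map supports path planning: each grid cell carries a class label, and a simple breadth-first planner only expands cells whose label is traversable. The grid, the labels and the `plan` function are all invented for this example.

```python
from collections import deque

# Hypothetical semantic occupancy grid: "road" and "river" may be
# equally flat geometrically, but only one of them is drivable.
TRAVERSABLE = {"road", "grass"}

grid = [
    ["road", "road",  "river"],
    ["rock", "road",  "river"],
    ["rock", "grass", "road"],
]

def plan(grid, start, goal):
    """Shortest path over traversable cells (BFS), or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen
                    and grid[nr][nc] in TRAVERSABLE):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None

# The planner routes around the river, even though it is "flat".
route = plan(grid, (0, 0), (2, 2))
```

A purely geometric map would happily drive through the river cells; the semantic labels are what make the planner's choice sensible.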

"This technology can also be useful in areas where GPS is unavailable, by using simultaneous localisation and mapping. The software uses the sensor data to create a detailed map of the surrounding environment and, from that, work out where the vehicle is."
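As a toy illustration of the localisation half of this idea (again, not the project's code), the sketch below recovers a vehicle's 2D position from range measurements to landmarks in an already-built map, using a few Gauss-Newton least-squares iterations. The landmark positions, the pose and the noise-free ranges are all invented for the example.

```python
import numpy as np

# Known map: landmark positions (as SLAM would have estimated them).
landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pose = np.array([3.0, 4.0])
# Simulated noise-free range measurements from the vehicle to each landmark.
ranges = np.linalg.norm(landmarks - true_pose, axis=1)

def localise(landmarks, ranges, guess, iters=20):
    """Estimate 2D position from ranges via Gauss-Newton least squares."""
    x = np.array(guess, dtype=float)
    for _ in range(iters):
        diffs = x - landmarks                  # (N, 2) offsets to landmarks
        preds = np.linalg.norm(diffs, axis=1)  # predicted ranges
        J = diffs / preds[:, None]             # Jacobian of range w.r.t. pose
        r = ranges - preds                     # measurement residuals
        x += np.linalg.lstsq(J, r, rcond=None)[0]
    return x

est = localise(landmarks, ranges, guess=[1.0, 1.0])
```

Full SLAM solves this jointly with estimating the landmark positions themselves, and with noisy measurements; this sketch shows only the "from the map, work out where the vehicle is" step.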

"A significant element of trust in such a sensor suite is proven robustness, and a major focus of our current work is how the software performs when it loses an input. If a camera gets mud on the lens and can no longer provide suitable images, how should the other inputs respond, and what changes does the software need to make to adapt to the new information, so that outputs can still be trusted? We know that, theoretically, the mathematics of probability provides us with ways to fuse disparate sensor data robustly, but new methods in machine learning are needed to build models which will enable us to do so. We are currently investigating machine learning models, such as those at the heart of ChatGPT, as a powerful new way to fuse all sensor data."
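The probabilistic fusion rule Ian alludes to can be sketched in a few lines. This is my illustration, not the project's code: two sensors estimate the same quantity, each estimate is weighted by its inverse variance, and a failed sensor (the muddy lens) is modelled as infinite variance, so the fused output degrades gracefully to the remaining input. The sensor names and numbers are invented.

```python
import math

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two estimates of one quantity."""
    if math.isinf(var_a):           # sensor A has failed: trust B alone
        return est_b, var_b
    if math.isinf(var_b):           # sensor B has failed: trust A alone
        return est_a, var_a
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    var = 1.0 / (w_a + w_b)         # fused variance is smaller than either
    return (w_a * est_a + w_b * est_b) * var, var

# Camera and lidar both estimate distance to an obstacle (metres);
# the lower-variance lidar dominates the fused estimate.
d, v = fuse(10.2, 0.5, 9.8, 0.1)

# Mud on the camera lens: model it as infinite variance and the
# fused output falls back to the lidar reading.
d2, v2 = fuse(10.2, math.inf, 9.8, 0.1)
```

The research question is how to get this kind of principled, graceful degradation when the "sensors" are high-dimensional inputs like images, which is where the learned fusion models come in.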
