Award-winning space AI
Story by: Kurtis Eichler and Eddie Major
Sofia McLeod spends a great deal of time thinking about space exploration.
The Australian Institute for Machine Learning (AIML) PhD candidate is researching ways to build an AI system that can safely land an autonomous spacecraft on a distant planetary or asteroid surface guided by visual input from a single event camera.
Inspired by the workings of the human eye, an event camera is a dynamic vision sensor in which each pixel works independently, reporting changes in brightness as they occur. A typical camera sensor, by contrast, like the one in your smartphone, records numerous whole image frames every second, even if there's nothing new happening in view.
Because it only sends new data when conditions change, an event camera system is more data efficient, lightweight, and may use less power — all things that are vital in successful space missions, where resources are precious and limited.
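The data-efficiency difference is easy to see in a toy simulation. The sketch below is purely illustrative, not based on any real sensor's specification: the function names, the pixel layout, and the contrast threshold are all assumptions chosen for clarity.

```python
def frame_camera_output(frames):
    """A frame sensor reports every pixel of every frame."""
    return sum(len(f) for f in frames)  # total brightness values sent

def event_camera_output(frames, threshold=10):
    """An event sensor reports (time, pixel, polarity) only when a
    pixel's brightness changes by more than `threshold`."""
    events = []
    last = list(frames[0])  # reference brightness level per pixel
    for t, frame in enumerate(frames[1:], start=1):
        for i, value in enumerate(frame):
            delta = value - last[i]
            if abs(delta) > threshold:
                # Polarity: +1 for brighter, -1 for darker
                events.append((t, i, 1 if delta > 0 else -1))
                last[i] = value  # update the reference level
    return events

# A mostly static scene: 4 pixels over 5 frames, one pixel brightens.
frames = [
    [100, 100, 100, 100],
    [100, 100, 100, 100],
    [100, 150, 100, 100],  # pixel 1 brightens at t=2
    [100, 150, 100, 100],
    [100, 150, 100, 100],
]
print(frame_camera_output(frames))  # 20 values transmitted
print(event_camera_output(frames))  # [(2, 1, 1)] -- a single event
```

In a largely static view, the frame camera keeps sending full frames while the event sensor stays nearly silent, which is where the power and bandwidth savings come from.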
For her work researching the AI navigation tool, McLeod was last month named as one of three students to win $10,000 each as part of the Andy Thomas Space Foundation's EOS Space Systems Research awards, also known as the Jupiter program.
The Jupiter program was launched last year with the aim of fostering a new generation of space scientists and engineers. As part of their entry, finalists submitted a short research project outlining how their project would benefit Australia’s space network.
Space exploration is an area seeing rapid development in autonomous technology. In April 2021, NASA achieved the first powered, controlled flight of an aircraft on another planet as part of its Mars 2020 rover mission. The vast distance from Earth meant the craft had to operate with a high degree of autonomy.
“If you're looking at Ingenuity, which is the drone that’s now on Mars, the delay is approximately 20 minutes for human interaction,” McLeod says, referring to the round-trip radio signal delay caused by the 50 to 200 million kilometre distance between Earth and Mars.
"We need both the rover and Ingenuity to be able to do this navigation by themselves, so they can avoid obstacles on their own. You have to remember that if a robot gets stuck or breaks down, we can’t go to Mars to repair it.”
Event-based computer vision technology in space isn’t limited to autonomous landing; it has a range of potential applications.
“Ideally we want to design unmanned spacecraft to refuel satellites when they run out of power. To do this you’ll need computer vision to know that you’re aligned perfectly with the object you're trying to dock with,” she says.
And like a future autonomous spacecraft’s camera-guided journey, Sofia McLeod’s pathway to computer science and machine learning at AIML was a visual one; she initially considered a career in design or visual effects for the entertainment industry.
“I've always been a visual person,” she says. “I was thinking of doing VFX or graphic design… but I was definitely better at algorithms.”
“I was just really fascinated by the concept of getting computers to see,” she says. “It’s just so intuitive.”
Andy Thomas Space Foundation chief executive, Nicola Sasanelli, says that developing students into future space leaders is a priority for the foundation.
“This opportunity not only enables students to become immersed in real-world industry experience but provides innovative perspectives and new ideas.”