The realistic interaction of real and synthetic content is an important part of creating a believable augmented reality. This augmentation may be as simple as rendering markers as if they were actually attached to physical objects, or as complex as adding 3D special effects to live video, but in either case the synthetic geometry must appear fixed to the same frame as the objects around it in order to be believable. The key to rendering geometry in a fixed frame is maintaining an accurate estimate of the camera position relative to the environment. Purely IMU-based solutions suffer from inaccuracy, and particularly lag, in the position estimate, which causes the synthetic geometry to appear to float over the real geometry. More fundamentally, because no analysis is carried out on the captured image stream, such a system cannot know what is actually within the camera's field of view. It therefore cannot take line of sight into account, and must render synthetic content on the basis of viewing direction alone rather than the immediately visible environment. We have undertaken a large body of fundamental research in this field and have several active projects looking at many different applications of the technology.
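As an illustrative sketch (not drawn from any specific project above), the fixed-frame rendering described here rests on the standard pinhole projection: given the current estimate of the camera pose, a synthetic point anchored in the world frame is re-projected into each video frame. All numeric values below are hypothetical.

```python
import numpy as np

def project(point_world, K, R, t):
    """Project a 3D world point to pixel coordinates via the
    pinhole model x = K [R | t] X."""
    p_cam = R @ point_world + t        # world -> camera frame
    if p_cam[2] <= 0:
        return None                    # behind the camera: not visible
    p_img = K @ p_cam
    return p_img[:2] / p_img[2]        # perspective divide

# Illustrative intrinsics and pose (hypothetical values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera aligned with world axes
t = np.array([0.0, 0.0, 5.0])          # world origin 5 units ahead

anchor = np.array([0.0, 0.0, 0.0])     # synthetic object fixed in the world
print(project(anchor, K, R, t))        # -> [320. 240.], the image centre
```

Because `R` and `t` are re-estimated from each captured frame, the anchored point lands on the same physical location in the image as the camera moves, which is what makes the synthetic geometry appear attached to the scene rather than floating over it.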
Real-time special effects in live video
The goal of this work is to allow 3D visual effects to be composited into real video as the video is captured. This allows real and synthetic elements to be seen together in the live video and for the recording process to be informed by their interaction, allowing a level of interaction between the compositing and filming process that would otherwise be impossible. This will be achieved with software that analyses the video as it is captured; the only hardware involved is a laptop which is connected to the camera. The interactions required are simple and can be carried out on a touch-screen attached to the camera itself.
Professor Anton van den Hengel; Associate Professor Anthony Dick.
Interactive 3D modelling from video
Many industry sectors are becoming increasingly reliant on the production of 3D models from imagery. These models are used for purposes such as video post-production, virtual reality, reverse engineering, 3D medical imaging, planning and simulation. By developing technologies and expertise in these areas within Australia, particularly in model generation, the research team aims to provide a capability of significant value to Australian industry and a valuable export opportunity.
Professor Anton van den Hengel; Professor Philip Torr.
Added depth: automated high level image interpretation
Automated image interpretation has been one of the landmark goals of Artificial Intelligence since its inception. It is only recently, however, that significant progress has been made towards this goal. Current approaches succeed because they reference the enormous volume of imagery and related meta-data available on the Internet. They fail because this information is fundamentally 2D in nature. By extending current 2D image interpretation methods with previously unavailable 3D information, we aim to develop technologies capable of interpreting the world through imagery. The outcome will help computers understand images, robots navigate their world, surveillance systems understand what they see and cars avoid pedestrians.
Professor Anton van den Hengel; Professor Philip Torr; Associate Professor Simon Lucey.
Learning to see in 3D
The aim of this project is to collect and analyse a large collection of digital images and their associated depth maps, in order to formulate a method for predicting depth from a single image. This has only recently become feasible due to two factors: the availability of sensors for simultaneously collecting appearance and depth data, and advances in data-driven machine learning techniques. The ability to infer depth from appearance has widespread significance for applications of computer vision and image-based measurement, including medical imaging, mining, architecture and entertainment, in which the current bottleneck is the difficulty of obtaining 3D information. We will produce a publicly available web service to demonstrate the system.
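The data-driven idea behind learning depth from appearance can be sketched in miniature: given training pairs of per-pixel appearance features and measured depths, fit a mapping and then predict depth for unseen pixels from appearance alone. The features, linear model and data below are entirely synthetic stand-ins; real systems learn far richer, non-linear mappings from large image/depth-map collections.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for (appearance, depth) training pairs: each "pixel"
# is described by a small feature vector (e.g. local brightness and
# texture statistics), and depth is assumed, for illustration only,
# to vary linearly with those features plus sensor noise.
n_train, n_feat = 500, 8
X_train = rng.normal(size=(n_train, n_feat))
true_w = rng.normal(size=n_feat)
y_depth = X_train @ true_w + 0.01 * rng.normal(size=n_train)

# Fit the appearance -> depth mapping by least squares.
w, *_ = np.linalg.lstsq(X_train, y_depth, rcond=None)

# Predict depth for unseen "pixels" from appearance alone.
X_test = rng.normal(size=(5, n_feat))
pred = X_test @ w
```

With enough training pairs the fitted `w` recovers the underlying mapping closely, so `pred` tracks the true depths; the research challenge lies in finding features and models for which this holds on real imagery.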
Professor Anton van den Hengel; Associate Professor Anthony Dick.
Multi-model predictions of ecosystem flux under climate change based on novel genetic and image analysis methods
The loss of biodiversity caused by human modification of ecosystems (including climate change) is accelerating, to human society's own detriment. We will use new genetic and historical photo-point methods to assess rapidly and cost-effectively the diversity and uniqueness of South Australian biota. In support of the Terrestrial Ecosystem Research Network, these expansive biodiversity assessment databases will be combined with state-of-the-art projection techniques to ascertain the most realistic future of Australia's unique and highly threatened biodiversity. The tools we will develop will help determine the best approaches to managing landscapes for biodiversity maintenance while continuing productive human land uses.
Professor Andrew Lowe; Professor Corey Bradshaw; Professor Anton van den Hengel; Professor Barry Brook; Professor Alan Cooper.
Improving yield through image-based structural analysis of cereals
Meeting the food requirements of a growing world population given the constraints on land and water availability requires the development of crops capable of delivering higher yield in more marginal conditions. The projected impact of climate change compounds the problem by potentially rendering current agricultural practices unsustainable in the long term. This project will develop technologies capable of expediting the development of improved food crops by improving the accuracy and cycle time of plant breeding processes. This will be achieved by improving the process of yield estimation through image-based analysis of the structural properties of new plant varieties.
Professor Anton van den Hengel; Professor Mark Testor; Associate Professor Anthony Dick; Bayer CropScience.
Accessing Australia's photographic history
Australian museums have vast collections of photographs documenting our national history. This project proposes an image-based method that addresses the challenge of extracting 3D measurements and models from these photographs. The method is designed to work with non-standard cameras, noisy images and sparse data. If successful, it will allow in-house exhibits and interactive web-based displays to be created based on the reconstructed scenes and objects, thereby presenting images from our past in an engaging and widely accessible manner. Beyond historical images, it also has application for 3D reconstruction from limited or poor-quality image data.
Professor Anton van den Hengel; Associate Professor Anthony Dick; SA Maritime Museum.