Machine Learning

Machine learning is a form of artificial intelligence that enables computers and machines to learn how to perform complex tasks autonomously, without being explicitly programmed.

Machine learning involves teaching computers what to do by feeding them examples of data and information. The computer looks for patterns in those examples and uses them to make better decisions in future. The aim is for the computer to learn, and adjust its actions, automatically and without human assistance.
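
As a loose illustration of learning from examples (a generic sketch, not any specific AIML method), the following Python snippet "trains" a nearest-neighbour classifier on a handful of labelled points and then labels a new one; the data and labels are invented for the example.

    import math

    # Invented example data: (hours studied, hours slept) -> exam outcome
    examples = [
        ((1.0, 4.0), "fail"),
        ((2.0, 5.0), "fail"),
        ((6.0, 7.0), "pass"),
        ((8.0, 6.5), "pass"),
    ]

    def predict(point):
        """Label a new point with the label of its closest known example."""
        nearest = min(examples, key=lambda ex: math.dist(ex[0], point))
        return nearest[1]

    print(predict((7.0, 7.0)))  # "pass": the new point sits among the "pass" examples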

  • Deep Learning

    Deep learning, sometimes referred to as deep structured learning or hierarchical learning, is a subset of machine learning, and therefore of artificial intelligence, whereby computers are trained to perform human-like tasks such as making predictions, understanding and identifying images, or recognising speech.

    Deep learning involves building and training neural networks using large data sets. By performing the set task repeatedly, the machine finds patterns and learns from experience; a small training sketch appears after this list.

    Our researchers have made major contributions to advancing the mathematical tools that underpin deep learning theory, and we use this world-class expertise to help organisations better understand their data.

  • Systems Optimisation

    Traditional optimisation techniques find the best possible solution for a fixed environment.

    Applying this in the real world can be challenging, because ever-changing factors such as electricity prices, the weather, taxes and the share market affect the result: the best solution now may not be appropriate, or even feasible, in the future. At AIML we develop the theory, algorithms and tools that can anticipate when these factors will be in flux, so that solutions meet today's needs and remain prepared for what comes next; a simple sketch of this idea follows the list.

    Systems optimisation is the process of updating and modifying software systems so that they work more efficiently and autonomously.

  • Robust Statistics

    This area of research looks at developing procedures for analysing data so that the results remain informative and efficient even when some observations are atypical; analysis with non-robust methods can otherwise produce biased answers and conclusions. Robust statistics uses methods that identify patterns in the data by focusing on its homogeneous majority, without being unduly influenced by smaller subgroups. A brief illustration follows the list.

  • Probabilistic Graphical Models

    Probabilistic graphical models are very effective at modelling complex relationships among variables. These might be the relationships between symptoms and diseases, or the relationships between a set of sensor inputs and the state of the system being modelled, or the relationships between cellular metabolic reactions and the genes that encode them, or the relationships between users in a social network about whom we wish to draw inferences. 

    Probabilistic graphical models use nodes to represent random variables and graphs to represent joint distributions over those variables. By exploiting conditional independence, a gigantic joint distribution (over potentially thousands or millions of variables) can be decomposed into local distributions over small subsets of variables, which facilitates efficient inference and learning; a tiny worked example follows the list.
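
To make the deep learning description concrete, here is a minimal sketch (illustrative only, not AIML code) of training a tiny two-layer neural network with NumPy on the XOR problem; the architecture, data and learning rate are assumptions chosen for brevity.

    import numpy as np

    # Toy data set: XOR, a pattern no single linear model can capture
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer parameters
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer parameters

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):                # repeat the task many times
        h = sigmoid(X @ W1 + b1)             # hidden activations
        out = sigmoid(h @ W2 + b2)           # network predictions
        err = out - y                        # prediction error
        # Backpropagate the error and nudge the weights (gradient descent)
        grad_out = err * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ grad_out
        b2 -= 0.5 * grad_out.sum(axis=0)
        W1 -= 0.5 * X.T @ grad_h
        b1 -= 0.5 * grad_h.sum(axis=0)

    print(out.round(2).ravel())  # should move towards [0, 1, 1, 0] as training progresses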
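
The systems optimisation point, that an optimum found for one environment may not survive a change in that environment, can be shown with a deliberately simple example (all prices invented): choosing the cheapest hours to run a flexible electrical load as prices change.

    # Invented example: schedule a 10 kWh flexible load into the cheapest hours.
    def cheapest_schedule(prices_per_hour, kwh_needed, kw_per_hour=5):
        """Greedy optimum for a fixed set of hourly prices."""
        hours = sorted(range(len(prices_per_hour)), key=lambda h: prices_per_hour[h])
        schedule, remaining = {}, kwh_needed
        for h in hours:
            if remaining <= 0:
                break
            schedule[h] = min(kw_per_hour, remaining)
            remaining -= schedule[h]
        return schedule

    morning_prices = [0.30, 0.10, 0.12, 0.45]   # optimum: run in hours 1 and 2
    print(cheapest_schedule(morning_prices, 10))

    evening_prices = [0.08, 0.50, 0.55, 0.09]   # prices have changed: hours 0 and 3 are now best
    print(cheapest_schedule(evening_prices, 10))
    # The best plan under the first prices is no longer best under the second,
    # which is why optimisation that anticipates changing conditions matters.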
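
As a toy illustration of robust statistics (numbers invented), compare the mean, which a single atypical reading can drag away from the bulk of the data, with the median, which stays with the homogeneous majority.

    from statistics import mean, median

    # Nine typical measurements plus one atypical reading
    readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 95.0]

    print(mean(readings))    # about 18.5, pulled far from the typical values by one reading
    print(median(readings))  # 10.05, still describes the homogeneous majority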
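
Finally, a tiny worked probabilistic graphical model, using the textbook rain/sprinkler/wet-grass network (all probabilities are illustrative, not drawn from AIML work): the joint distribution over three variables is written as a product of small local distributions, and inference is performed by summing out the unobserved variables.

    from itertools import product

    # Local distributions: P(Rain), P(Sprinkler | Rain), P(Wet | Sprinkler, Rain)
    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler = {True: {True: 0.01, False: 0.99},   # P_sprinkler[rain][sprinkler]
                   False: {True: 0.4, False: 0.6}}
    P_wet = {(True, True): 0.99, (True, False): 0.9,  # P_wet[(sprinkler, rain)] = P(Wet=True | ...)
             (False, True): 0.8, (False, False): 0.0}

    def joint(rain, sprinkler, wet):
        """Joint probability factorised into the three local distributions."""
        p_wet_true = P_wet[(sprinkler, rain)]
        return (P_rain[rain]
                * P_sprinkler[rain][sprinkler]
                * (p_wet_true if wet else 1 - p_wet_true))

    # Inference by enumeration: P(Rain = True | Wet = True)
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    print(num / den)  # roughly 0.36 with these illustrative numbers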