High Performance Computing Options
With several High Performance Computing (HPC) options available to researchers at the University, it can be difficult to understand which option best fits a particular scenario.
The following sections outline key criteria to help you decide which compute option is best suited to your research needs.
Please note, research data is considered a University of Adelaide information asset and should be stored in university approved storage. For further information please refer to the Data Retention and Preservation web page and the Information Management Policy. If you have questions, please contact Records Services.
Phoenix - University owned HPC
For high-volume compute, Phoenix is the preferred option within the University. Use of the existing Phoenix infrastructure is free for University-related research.
Phoenix is suitable when jobs:
- Will benefit from parallelization and have consistent CPU usage over time. Phoenix provides 32 to 40 cores per node, with approximately 10,800 cores available in total.
- Are GPU-accelerated or machine-learning workloads. Phoenix provides 288 Nvidia K80 and 16 Volta V100 GPUs; the Australian Institute of Machine Learning (AIML) can also benefit from the 48 Nvidia V100 GPUs in the Volta logical cluster.
- Need to process large datasets, as Phoenix uses a 2PB high-performance Lustre file system shared across all nodes.
- Are small to mid-sized I/O-intensive jobs, as each Phoenix node offers 350GB to 500GB of local disk.
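Work on Phoenix is submitted as batch jobs rather than run interactively. As a rough sketch of what a parallel batch submission looks like (this assumes a Slurm-style scheduler, which is common on university clusters; the job name, module name and program name below are illustrative, not Phoenix-specific values):

```shell
#!/bin/bash
# Illustrative batch script; all names and limits below are assumptions,
# not Phoenix-specific values -- check the Phoenix documentation.
#SBATCH --job-name=example_job
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32   # a full 32-40 core node
#SBATCH --time=24:00:00        # well under the 7-day long-job threshold
#SBATCH --mem=64G

# Load the software stack the job needs (module name is hypothetical).
module load openmpi

# Run the parallel workload across all allocated cores.
srun ./my_parallel_program
```

A script like this would typically be submitted with `sbatch job.sh`, with `squeue` showing its place in the queue.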
Phoenix is not suitable when jobs:
- Require a graphical user interface.
- Need interaction while running.
- Are large I/O-intensive jobs dealing with a very large number of small files (e.g. over 100,000 files of around 1MB each). These jobs cause congestion on the shared Lustre file system, significantly degrading performance for all jobs on Phoenix.
- Are non-parallelizable and run for a very long time (i.e. more than 7 days).
- Exceed the researcher's allocated fraction of the cluster, as determined by the co-investment provided by their group or faculty. Such jobs have difficulty reserving the required resources, resulting in long wait times before execution.
- Are urgent, as queue wait times vary.
As Phoenix is a shared University of Adelaide resource, job priorities can be reset and lowered, increasing wait times. Phoenix is not recommended for small jobs that do not need HPC-level performance, or for projects whose demands are so high that usage would be inconsistent or spiky over several months.
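One common way to avoid the small-file problem described above is to bundle the many small files into a single archive before staging them on the shared Lustre file system, then unpack the archive on a node's local disk. A minimal sketch (directory and file names are illustrative):

```shell
# Create a directory of many tiny files to stand in for a real dataset.
mkdir -p small_files
for i in $(seq 1 100); do
    echo "data $i" > "small_files/part_${i}.txt"
done

# Pack them into one archive: Lustre then sees a single large file
# instead of 100 small ones, avoiding metadata congestion.
tar -cf dataset.tar small_files

# On the compute node, unpack onto fast node-local disk and work there
# (the target path depends on the cluster; $TMPDIR is a common choice).
# tar -xf dataset.tar -C "$TMPDIR"
```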
Visit the Phoenix HPC page for more information.
RONIN – Configure and access Amazon Web Services (AWS)
RONIN is a user-friendly, web-based management dashboard that allows researchers to leverage the full power of AWS cloud computing without facing a steep learning curve. While RONIN provides additional flexibility for different research scenarios, our onsite Phoenix HPC environment remains the most cost-effective option and is the preferred HPC option where possible.
RONIN and AWS are suitable for tasks:
- When you have grant or faculty funding to pay for your research compute. Please note, you may be eligible for a quota of up to US$330 per project per year; if your compute requirements exceed that amount, you will need to pay the difference.
- That require a specific operating system (e.g. Windows)
- Requiring hosting on web servers
- Requiring collaborative research. A researcher can package a specific machine to work with a collaborator outside the University who is also using AWS, and/or share important datasets using Amazon’s Simple Storage Service (S3)
- Requiring a graphical user interface
- That have heterogeneous workflows (e.g. a mix of computationally intensive and non-intensive tasks), as you can easily create virtual machines with specific architectures and move data across these VMs
- That are sporadic, computationally short-term and/or I/O-intensive jobs requiring a large amount of compute resources. The AWS "spot market" feature can deliver enormous cost savings.
- That would benefit from an auto-scale cluster. RONIN makes the creation of auto-scale clusters very easy, with project users having exclusive access to the cluster.
- That involve many users, e.g. national workshops, university courses or the provision of consultancy services.
- That focus on optimising and reviewing compute jobs prior to submitting them for processing on Phoenix
- Meeting tight deadlines where Phoenix queue wait times could be an issue. RONIN allows researchers to access compute services on demand, for example when creating a temporary high-performance environment to be used for up to 12 months at a time in workshops or teaching scenarios.
RONIN and AWS are not suitable for tasks:
- That are computationally intensive with consistent usage requirements. In the long term, the compute resources for these tasks are much more expensive in the cloud than on an on-premises solution.
- Requiring constant and extensive use of VMs with GPUs as these AWS VMs are expensive.
Visit the Ronin page for further information.
ADAPT – Remote access to research applications and desktops
ADAPT is the University’s Citrix environment, which provides remote access to a range of published research applications and desktops.
ADAPT is suitable for:
- Accessing applications without the need to install these on the local machine
- Small Windows jobs (up to 40 cores) that require a graphical user interface and a GPU, via the HPC GPU desktop.
ADAPT is not suitable for:
- Tasks that require intensive computation, as ADAPT is not true HPC due to its limited resources.
Visit the ADAPT page for more information.
There are other, non-University-supported options that researchers can access through merit-based allocation schemes supporting nationally significant research projects. These options can be suitable when there is a consistent need for very large-scale jobs (approximately 1024+ cores).
Nectar Research Cloud
The Australian Research Data Commons (ARDC) Nectar Research Cloud is Australia’s first federated research cloud. This service provides Australia’s research community with fast, interactive, self-service access to computing infrastructure, software, and data, and is a powerful platform for collaboration.
Pawsey Supercomputing Centre
The Pawsey Supercomputing Centre is an unincorporated joint venture between CSIRO, Curtin University, Edith Cowan University, Murdoch University and The University of Western Australia. It is supported by the Western Australian and Federal Governments.
NCI Australia
NCI Australia is the nation’s most highly integrated high-performance research computing environment, providing world-class services to government, industry, and researchers. NCI’s newest supercomputer is Gadi, Australia's peak research supercomputer for 2020 and beyond. A 3,200-node supercomputer comprising the latest generation Intel Cascade Lake and Nvidia V100 processors, Gadi supports diverse workloads with over 9 petaflops of peak performance.