
Technical Information

For technical or research-specific information, please refer to the following online resources:

  • Hardware

    The Phoenix supercomputer is a heterogeneous hardware system that includes a mix of CPU-only and CPU/GPU-accelerated nodes.

    The current primary supercomputer hardware is a Lenovo NeXtScale system consisting of 168 nodes.

    Cluster           Node Type        Nodes  Node Specs                        Total Specs                        Performance                     GoLive
    Lenovo NeXtScale  CPU              96     32 cores, 128GB memory            3072 CPU cores, 12TB memory        350 Teraflops, ranked 15th on   February 2016, upgraded 2017
                                                                                                                  the global Green500
                      CPU/GPU          72     32 cores, 2x Nvidia K80 dual      2304 CPU cores, 9TB memory,
                                              GPU accelerators, 128GB memory   288 GPUs
    HP                CPU High Memory  4      32 cores, 2x 512GB memory,        128 cores, 1.5TB memory            50 Teraflops                    March 2015
                                              2x 256GB memory
                      GPU              6      20 cores, 8x Nvidia K40 GPU       120 cores, 1.5TB memory,
                                              accelerators, 256GB memory       48 GPUs
    System Total                       178                                      5624 cores, 336 GPU accelerators,  400 Teraflops
                                                                               24TB memory
    • Software
      Application Software

      Here is a (possibly incomplete) list of the currently available software on Phoenix. For the most up-to-date list, simply type module avail at the command line. Click on a software package name for more information about its installation and usage.

    • Allocations

      Compute usage on Phoenix is drawn from an allocation of Service Units (SU) applied to a user's account. Allocations are managed in quarters. When a job runs, the corresponding compute usage is charged to the user's account.

      There are two types of allocation on Phoenix.

      - In-kind

      - Priority Subscription

      In-kind Allocations

      The in-kind allocation service is provided to researchers who are not within a school, faculty or research group with a subscription. In-kind allocations are drawn from a central pool of SU, funded by financial contributions from the DVCR and Technology Services. A school account with an in-kind allocation currently receives a default allocation of 20000 service units per quarter. In-kind jobs run at a lower priority than subscribers' jobs, and will continue to run once the in-kind allocation is exceeded, but at a lower priority again. To receive an in-kind allocation, a researcher simply requests a Phoenix account; there is no merit process.

      Priority Subscription Allocation

      The subscription service provides a compute allocation in service units (SU) on Phoenix that is valid for a minimum of 3 years, aligned to the typical infrastructure life span. Allocations on Phoenix are partitioned on a quarterly basis. A subscription gives a research group, school or faculty higher priority than in-kind users and helps ensure the group can use its allocation within each quarter. Users who deplete their allocation can continue to submit and run jobs on Phoenix; those jobs will run at a lower priority (though still higher than in-kind jobs) until usage resets at the start of the next quarter. Subscriptions are charged at a nominal rate, and the funds are directed into infrastructure purchases for growth and cycling. Subscriptions are also incentivised by the DVCR and Technology Services, who will match a portion of the financial contribution.

      Common questions about Allocations

      What kind of allocation do I have?

      If you are part of a priority-subscribed research group, you will typically have discussed this with your supervisor before requesting access to Phoenix and asked for your account to be made part of the group's allocation.

      You can confirm this by running the rcquota command in a Phoenix terminal session:

      The Grant_SU value is the allocation attached to your account. An in-kind allocation will typically have a value of 20000, while a priority subscription will have a much larger value. Your total usage will also be displayed under the heading Usage_SU, as well as your usage as a percentage of your grant.

      [a1234567@l01 hpc]$ rcquota

      Compute Usage: System = phoenix, Period = 2016.q3 [ 76.9% lapsed]
      System         Project               Grant_SU       Usage_SU     Grant Use%
      phoenix        rc123                  1454000         181378         12.47%
                       a1234567                   1            330          0.02%

      Can I use more than my allocation?

      Yes, you can. The impact is that your FairShare priority will decrease, so your jobs will take longer to begin running. This priority value is compared against those of other jobs in the queue, along with other variables, to decide when your job(s) will run.

      How much allocation is available on Phoenix in total?

      As of January 2017, approximately 17 million Service Units are available per quarter, made up of CPU and GPU hours.

      Resource   Core/Accelerator Hours   Cost of Hours   Service Units
                 (per quarter)            (SU)            (per quarter)
      CPU        12 million               1               12 million
      GPU        625,000                  8               5 million
      TOTAL                                               17 million
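
      The quarterly totals above can be checked with simple arithmetic, converting accelerator hours to SU at the stated rates:

```shell
# Verify the quarterly SU totals from the table above.
# Rates: 1 CPU-hour = 1 SU, 1 GPU-hour = 8 SU.
cpu_su=$(( 12000000 * 1 ))
gpu_su=$((   625000 * 8 ))
total_su=$(( cpu_su + gpu_su ))
echo "$total_su"   # 17000000 SU per quarter
```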
    • Job Scheduler


      Phoenix uses the SLURM scheduler to manage the compute workload. After a job is submitted to a queue (also referred to as a partition), a number of factors are used to determine the priority of the job and when it will run. The factors are:

        • Job age
        • Job partition
        • Fairshare factor

      After the job is assigned a priority, the scheduler determines when the job runs based on the availability of the requested resources (cores, GPUs, memory), the job's requested walltime and its priority. Higher-priority jobs will in general run before lower-priority jobs, all other things being equal.
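
      As a sketch, a job is submitted to a partition with a batch script like the one below. The partition name, resource values and program name here are illustrative assumptions, not Phoenix defaults:

```shell
# Write a minimal SLURM batch script (all values are illustrative).
cat > submit.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=batch      # queue/partition to submit to (assumed name)
#SBATCH --ntasks=1             # one task
#SBATCH --cpus-per-task=4      # four CPU cores
#SBATCH --mem=8G               # memory for the job
#SBATCH --time=02:00:00        # requested walltime (affects scheduling)
srun ./my_program              # hypothetical executable
EOF
# Submit it with:  sbatch submit.sh
```

      A shorter requested walltime generally makes it easier for the scheduler to fit the job into a gap in the machine's schedule.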

      What is the Fairshare Factor?

      The Fairshare factor is a number used to ensure the system resources are distributed equitably.  Each user belongs to an association, and each association has a certain number of shares that reflect their corresponding allocation on Phoenix.  As the user consumes compute resources, their usage increases and their corresponding Fairshare priority factor will decrease.  Hence, jobs from users who have under-utilised their allocation will have a higher priority than those who have already used a significant fraction of their allocation.  In this way the scheduler ensures that all users have the opportunity to utilise their allocated fraction of the system.
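
      As a rough sketch, SLURM's classic fairshare formula halves a user's factor each time their effective usage doubles relative to their entitled share (the exact configuration on Phoenix may differ):

```shell
# F = 2 ^ (-effective_usage / normalized_shares) -- SLURM's classic
# fairshare formula. Example: a user entitled to 25% of the system
# who has consumed 50% of recent usage (values are hypothetical).
fairshare=$(awk 'BEGIN { printf "%.4f", 2 ^ (-(0.50 / 0.25)) }')
echo "$fairshare"   # 0.2500 -- over-use pushes the factor well below 1
```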

      There are a number of custom reporting tools for Phoenix users.

      rcquota shows project and user allocation and usage statistics for the current quarter. e.g.

      $ rcquota
      Compute Usage: System = phoenix, Period = 2016.q4 [ 63.7% lapsed]
      System         Project               Grant_SU       Usage_SU     Grant Use%
      phoenix        rc007                  1261875         489275         38.77%
                       a1234567                   1         212731         16.86%

      Allocations and usage are measured in SU (Service Units). 1 SU = 1 CPU-hour, and 1 GPU-hour = 8 SU.
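
      For example, a job's SU charge under these rates can be computed as follows (the job sizes are hypothetical):

```shell
# Charge for a hypothetical 10-hour job using 4 CPU cores and 2 GPUs.
# Rates from this page: 1 CPU-hour = 1 SU, 1 GPU-hour = 8 SU.
hours=10; cpus=4; gpus=2
job_su=$(( hours * (cpus * 1 + gpus * 8) ))
echo "$job_su SU"   # (4*1 + 2*8) * 10 = 200 SU
```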

      rceff reports on job efficiency with respect to memory and CPU utilisation.
      rcshare shows a summary of allocation and usage statistics by project for the current accounting period.

    • Acknowledging in Publications

      As part of the conditions of use for the Phoenix HPC facility, users are required to acknowledge Phoenix in their publication outputs, including journal publications and conference proceedings. This is done by including an appropriate reference to Phoenix and the University of Adelaide in the Acknowledgements section. An example of an appropriate acknowledgement is:

      This work was supported with supercomputing resources provided by the Phoenix HPC service at the University of Adelaide.

    • Writing Phoenix into your Grant


      DVCR Operations has confirmed compliance with FT14 Funding Agreement conditions as per section 7.

      7.2 The Administering Organisation must ensure that expenditure on each Project described in Schedule A is in accordance with the ‘Project Description’ contained in the Proposal and within the broad structure of the proposed ‘Project Cost’ detailed in the Proposal or any revised budget, aims and research plan submitted by the Administering Organisation which has been approved by the ARC.

      Grant Blurb

      Use the blurb below in your grant, editing it as required.

      The University of Adelaide has invested $3.5 million in the Phoenix Computational Research Service, providing its researchers with access to three high-performance compute clusters comprising CPU, GPU and machine learning platforms, with in excess of 68 million hours of compute time available annually. The clusters support interactive applications in Windows and Linux environments such as Ansys, Matlab Simulink and R, together with custom codes and open-source applications. A bespoke scheduling and queuing system ensures equitable allocation of resources to groups and individual researchers. Infrastructure support is provided locally by a team of server and storage professionals and computational research application support specialists with discipline-specific experience. The University of Adelaide Phoenix service ensures its researchers achieve timely research outcomes.

      The latest update added 1536 additional CPU cores and is expected to achieve an HP Linpack performance figure of 350 Teraflops. The supercomputer is one of the largest available to a single university in the country. It comprises 174 nodes of dual Intel Xeon 16-core CPUs, with Nvidia K80 graphics accelerators and Nvidia K40 accelerators: 5568 CPU cores, 336 GPUs and 21TB of memory in total. Further, it is extremely energy efficient, ranking 17th in the June 2016 Green500 (the Top500 list sorted by performance per watt).

      Budget Estimations

      Refer to information on this page on:

      - Allocations

      - Supercomputer vs Workstation

    • Supercomputer vs Workstation

      Comparing the cost of different types of infrastructure and support required for computational research is not easy.

      We find the simplest way is to compare the compute performance of the research dollar across the typical types of infrastructure.

      A useful exercise is a price/performance comparison of two example workstation configurations, the Dell PowerEdge T630 (CPU-only and with two GPUs), against an equivalent Phoenix allocation.

