Learnings from LTAG - Professor Edward Palmer
The Learning and Teaching Advancement Grant (LTAG) scheme supports innovation and leadership in learning and teaching. This month, Adelaide Education Academy member Professor Edward Palmer (School of Education) shares how he used his grant to examine the impact of AI on assessments.

The LTAG scheme partially funded the development and dissemination of a project that examined the impact of AI on assessments and helped identify which tasks are most affected, using a rubric based on the qualities of good assessment practice. This enables staff to prioritise their efforts in revising courses, making the best use of finite resources.
What was the impact of your project?
The results and their impact are ongoing. Our findings to date show that most assessment tasks (including my own) are potentially heavily compromised by AI. Because the rubric is grounded in assessment frameworks and theory, rather than in a perspective of academic integrity or cheating, this suggests our current tasks may contain flaws that could negatively impact student learning. A culture of learning needs to be developed to partially address some of these problems, along with a review of assessment tasks.
One interesting aspect of undertaking this activity was…?
The project helped attach data to a known issue: that assessments can be compromised by AI.
The rubric we developed to evaluate the robustness of assessment in an AI environment provided evidence that we could quantify the strengths and weaknesses of tasks, and that those identified as significantly compromised could be given resources to remedy any issues.

Edward Palmer
What is your key learning from this activity?
It is possible to develop a rubric to assess the risk that assessment tasks carry when AI is available. It is also possible, by drawing on well-known assessment frameworks, to minimise that risk, but it cannot be eliminated. Things will become more challenging as AI technology evolves.
How could colleagues use your learnings in their practice?
It is worthwhile looking at all assessment tasks and identifying where AI may potentially compromise learning. From there, devote effort to those identified as most affected. This may mean a task is eliminated, but more likely aspects of it will change, e.g. moving the task into class to be completed, or ensuring that good scaffolding leads into the task so students feel less inclined to use AI. This aligns with the need to define and support a learning culture where AI is a known and welcome partner.
How do you plan on building upon the results of your project?
We will continue the project, refining the rubric and examining more assessment tasks to build further recommendations for academics on what a good assessment task looks like. It would also be useful to look at Adelaide University assessments to gauge the impact that a structured approach to design might have on a wide scale.
How can people learn more about your project?
The presented paper can be found here.