

An Introduction to Assessment

About Assessment: Why, What, How


Why Assess?

Why do we assess learners and their learning outcomes in universities? This may seem like a redundant question, but it is important for all teaching staff to reflect on the purpose of assessment. Reasons generally proposed for assessment include (Broadfoot and Black, 2004):

  • it encourages learning
  • it provides feedback on learning and teaching to both the learner and the teacher
  • it documents competency and skill development
  • it allows learners to be graded or ranked
  • it validates certification and licence procedures for professional practice
  • it allows benchmarks to be established for standards.

Assessment tasks determine to a significant extent what learners will learn, and the methods they will employ to retain, reproduce, reconstruct and engage with learnt material (Biggs, 2002). Learner responses to perceived, or actual, assessment tasks will often dominate other extrinsic or intrinsic motivators that initially drive learner behaviour.

References:

Biggs J.B. (2002). Aligning teaching and assessment to curriculum objectives. LTSN Imaginative Curriculum Guide IC022. http://www.ucl.ac.uk/teaching-learning/global_uni/internationalisation/downloads/Aligning_teaching

Broadfoot P. and Black P. (2004). Redefining assessment? The first ten years of assessment in education. Assessment in Education: Principles, Policy and Practice, vol. 11, issue 1, pp. 7-26.


What to Assess?

What are the fundamental principles in determining what should be assessed?

Should everything that is covered in a course be assessed?

What is essential to assess and what is not so important?

See Purposes and principles of assessment (website), Oxford Centre for Staff and Learning Development, Oxford Brookes University.

Assessments should be directly related to the stated learning outcomes.

Bloom described the cognitive (what you know), affective (how you feel) and psychomotor (how you do something) domains of learning (http://w3.unisa.edu.au/gradquals/staff/program/blooms.asp, http://cft.vanderbilt.edu/teaching-guides/blooms-taxonomy/). To these we can also add a fourth domain, communication.

Skills and capabilities assessed in the different domains of learning

1. Cognitive Skills and Capabilities
In relation to the assessment of discipline content:

  • understanding and using
  • making meaning
  • making decisions
  • reflecting on meaning

2. Affective Skills and Capabilities
In relation to the assessment of discipline content and activities:

  • making judgements
  • valuing and characterising
  • emotional responses
  • managing time and resources

3. Psychomotor Skills and Capabilities
In relation to the assessment of discipline content and activities:

  • physically manipulating objects and tools
  • performing creative or physical activities
  • using digital and communication equipment

4. Communication Skills and Capabilities
In relation to the assessment of discipline content:

  • constructing a meaningful argument
  • cogently presenting to others

Will you use criterion- or norm-referenced assessment?

If you wish to know how one student has performed in relation to another student, then norm-referenced assessment is appropriate. If you wish to know whether a particular student has developed certain skills or capabilities, irrespective of the rest of the class, then criterion-referenced assessment is appropriate. (http://www.cshe.unimelb.edu.au/assessinglearning/06/normvcrit6.html)

Features of criterion-referenced and norm-referenced assessment adapted from: Outcomes Based Education and Assessment at UWA. Centre for the Advancement of Teaching and Learning, University of Western Australia. Updated 2005.

Judgements about performance
  • Criterion-referenced: judgements about performance or competence are made against specified criteria.
  • Norm-referenced: judgements about performance or competence are made in relation to how a particular group of students performs.
  • Mixed mode: marks are based on explicit marking criteria given to students.

Relationship to learning outcomes
  • Criterion-referenced: there is a direct relationship between the stated learning outcomes and the assessment tasks.
  • Norm-referenced: the relationship between the stated learning outcomes and the assessment tasks may be direct or indirect; students may not see the correlation between what they have learnt and the assessment tasks.
  • Mixed mode: staff explicitly state how the assessment tasks will relate to the learning activities and content.

Scaling and pass marks
  • Criterion-referenced: no scaling of marks; the predetermined pass mark is independent of the number of students above or below it. Everyone may achieve a High Distinction, Pass or Fail depending on how well they matched the criteria.
  • Norm-referenced: marks may be scaled, and the pass mark may vary depending on the distribution of marks for a particular group of students. An 'acceptable' fail rate is often used to determine the pass mark or the number of High Distinctions.
  • Mixed mode: grade descriptors are used so that students understand what is expected in order to obtain a particular grade or mark. The distribution of marks may still follow an historical pattern acceptable to the discipline.

Development time
  • Criterion-referenced: usually takes more time to produce an authentic assessment task, as the criteria must be developed and aligned with learning activities.
  • Norm-referenced: developing appropriate assessment tasks still takes time, but less time is usually allocated to aligning learning activities to the assessment.
  • Mixed mode: assessment tasks and learning activities can be aligned so that grade descriptors can be used by students to benchmark their own performance.

Transparency of standards
  • Criterion-referenced: students are explicitly aware of the standard required to obtain a Pass before undertaking the assessment.
  • Norm-referenced: students are usually only aware of the standard required after the assessment is marked.
  • Mixed mode: grade descriptors are used so that students understand what is expected in order to obtain a particular grade or mark.

Feedback
  • Criterion-referenced: detailed feedback is part of the assessment process and assists in future learning.
  • Norm-referenced: feedback is usually in the form of model answers.
  • Mixed mode: rubrics are used to assist in providing feedback to students. See Assessment Design and Rubrics.

How to Assess?

The Centre for the Study of Higher Education in the AUTC project Assessing Learning in Australian Universities (http://www.cshe.unimelb.edu.au/assessinglearning/) developed the following twelve principles for assessment activity:

  1. Assessment should help students to learn.
  2. Assessment must be consistent with the objectives of the course and what is taught and learnt.
  3. Variety in types of assessment allows a range of different learning outcomes to be assessed. It also keeps students interested.
  4. Students need to understand clearly what is expected of them in assessed tasks.
  5. Criteria for assessment should be detailed, transparent and justifiable.
  6. Students need specific and timely feedback on their work - not just a grade.
  7. Too much assessment is unnecessary and may be counter-productive.
  8. Assessment should be undertaken with an awareness that an assessor may be called upon to justify a student's result.
  9. The best starting point for countering plagiarism is in the design of the assessment tasks.
  10. Group assessment needs to be carefully planned and structured.
  11. When planning and wording assignments or questions, it is vital to mentally check their appropriateness to all students in the class, whatever their cultural differences.
  12. Systematic analysis of students' performance on assessed tasks can help identify areas of the curriculum which need improvement.

Assessing Learning in Australian Universities contains the following sections:

  • Assessing Group Work
  • Quality and Standards
  • Academic Honesty
  • Online Assessment
  • Assessing Large Classes
  • Assisting International Students

Resources

See the University of Adelaide document Assessment Types (PDF): a summary of different assessment types, why you might consider using them, and what additional issues you may need to consider.

Assessing learning in Australian universities: Ideas, strategies and resources for quality in student assessment http://www.cshe.unimelb.edu.au/assessinglearning/

Oxford Brookes University, Oxford Centre for Staff and Learning Development has some suggested methods for assessing learning in different contexts: http://www.brookes.ac.uk/services/ocsld/resources/methods.html

 

Assessment Design and Rubrics

Methods

Diagnostic, formative and summative assessment tasks can be linked with learning activities as shown below. An integrated learning-assessment model allows for both intrinsic and extrinsic reward factors, and the provision of appropriate feedback to learners becomes the critical component that links the assessment to the learning.

The format for the assessment will need to take into account whether the assessment is low, medium or high stakes.

Summary of decision-making issues for assessment formats (Crisp 2005)

Purpose of assessment
  • Low stakes: improve learning; identify teaching gaps
  • Medium stakes: improve learning; progression to new concepts
  • High stakes: credentials, gatekeeping, progression, certification

Consequences if problems arise
  • Low stakes: few, with low impact
  • Medium stakes: some, with modest impact
  • High stakes: significant, with high impact

Resources required
  • Low stakes: often minimal; can use low-threshold software
  • Medium stakes: modest investment in a large-scale system
  • High stakes: significant investment in an enterprise system

Consequences of cheating
  • Low stakes: few
  • Medium stakes: some
  • High stakes: significant

Authentication of learner
  • Low stakes: not important
  • Medium stakes: may be important
  • High stakes: very important

Invigilation required
  • Low stakes: not usual
  • Medium stakes: sometimes
  • High stakes: always

Development effort
  • Low stakes: minor
  • Medium stakes: medium
  • High stakes: major

Evaluation of reliability and validity
  • Low stakes: not usual; anecdotal feedback sought from colleagues and learners
  • Medium stakes: subject matter experts provide feedback
  • High stakes: requires professional psychometric analysis

Approaches

SOLO Taxonomy

SOLO stands for Structure of the Observed Learning Outcome. The taxonomy is a useful way to characterise different levels of questions in assessments and the corresponding responses expected from students. It originates from Biggs, J.B. and Collis, K.F. (1982). Evaluating the Quality of Learning: The SOLO Taxonomy (1st ed.). New York: Academic Press.

The five levels of the SOLO taxonomy are:

Pre-structural:
  • students are acquiring pieces of unconnected information
  • no overall sense
  • no organisation
Unistructural:
  • students make simple and obvious connections
  • the significance of the connections is not demonstrated
Multistructural:
  • students make a number of connections
  • the significance of the relationship between connections is not demonstrated
Relational:
  • students demonstrate the relationship between connections
  • students demonstrate the relationship between connections and the whole
Extended abstract:
  • students make connections beyond the immediate subject area
  • students generalise and transfer the principles from the specific to the abstract

Examples

Examples of how to use the SOLO taxonomy include:
Assessment and Learning Outcomes: The Evaluation of Deep Learning in an On-line Course, Journal of Information Technology Education Volume 2, 2003. http://jite.org/documents/Vol2/v2p305-317-29.pdf

Example of using SOLO taxonomy:
http://w3.unisa.edu.au/gradquals/staff/program/solo.asp

Rubrics

A rubric is a scoring guide, checklist or set of rules that identifies the criteria and the expected standards for a given assessment. Rubrics can be designed for all forms of assessment. Developing a marking rubric assists both the teaching staff and the student by explicitly detailing what is expected, the relative weightings for different components, and the standard required for different grades. Examples of rubrics can be found in the following section.

 

Assessment: Grading and Feedback

Why do we assign marks or grades to an assessment?

The mark or grade awarded is a measure of how closely the actual student response matched the intended or expected response. The weighting of a mark should be directly related to the relative importance of the task and the level of skills and capabilities developed in order to accomplish the task.

Feedback

Adapted from 'Designing Assessment to Improve Physical Sciences Learning' by Phil Race

Feedback should be targeted to enhance learning:
Feedback should be part of the learning design for the course. Students will read feedback if it can be related to a learning or assessment activity that is to take place soon.

Feedback should be timely:
In order for feedback to be relevant to students it should be received within 2 weeks of completing the task.

Think about how students will feel when they get marked work back:
Staff should think about the impact their comments will have on students. A good way of thinking about this is to remember how you felt when you received feedback on a draft conference paper or a grant application.

Try to do more than put ticks:
Ticks do not inform students about why something gained marks, or why it was deemed significant by the assessor. If something is particularly notable, either because it was a good point or because it did not address the question, you could provide a short commentary.

Avoid putting crosses, if possible:
'Please review', or 'Consult reference xx', is more conducive to a student's learning than simply being told that their response is incorrect or inappropriate.

Try to make your writing legible:
Electronic annotation is quite easy to do and always ensures that your feedback will be legible (use 'Track Changes' in Microsoft Word). If paper copies of assessments are used, then printing your comments often assists students.

Try to give some feedback before you start assessing:
Once a class has completed an assessment, you could make model answers available within a day or two. Students will still be interested in the expected responses and will very likely discuss their own responses with each other.

Don't forget to give positive feedback:
Feedback should always commence with what was good about the student response, then proceed to suggested improvements.

Give feedback to groups of students sometimes:
It may be more efficient to give general feedback to the class on some common aspects of assessments, or to use tutorial groups for providing oral feedback on general points.

Let students argue:
It is often useful to let a student work through a point out loud so that you, or they, may clarify a point. Students will often arrive at an acceptable answer by verbally discussing a question.

Feedback should be realistic:
If feedback is to have an impact on student learning, then it must be achievable within the resources and time available.

Feedback should be fair:
Feedback should address the issues within the assessment, and not unrelated issues. The feedback should be directly related to the learning outcomes indicated in the course objectives.

Feedback should be motivating:
Feedback should allow students to improve. For low-achieving students this may mean how to achieve a Pass, whereas for High Distinction students it may mean how to stretch themselves beyond their current abilities.

Feedback should be honest:
If a student is not meeting the required standard, then they should be told this in a direct, but positive, manner. Make sure that the comments are about the work, and not the student. It is the student response that is being assessed, not the student.

Think about audio recordings for feedback:
It is relatively straightforward to make digital audio files and attach these to emails, or student digital assessment files. This may be a more efficient use of time compared to writing feedback, either by hand, or using 'Track Changes' in Microsoft Word.

Consider giving feedback by email:
It is possible to automate email lists and this allows students to view feedback at a time and place convenient to them. It also is an efficient use of staff time as appointments may be difficult to organise.

 

Assessment: Validity, Reliability and Fairness


See the 'Enhancement Themes Initiative' website for: Reflections on Assessment: Volume II, Assessment workshop series no. 6: Issues of validity, reliability and fairness. Gloucester, UK: Quality Assurance Agency for Higher Education. pp.54-103.
http://dera.ioe.ac.uk/11618/

Samuel A. Livingston. (2004). Equating Test Scores (without IRT). Princeton, NJ: Educational Testing Services.
http://www.ets.org/Media/Research/pdf/LIVINGSTON.pdf

See the University of Adelaide site, Academic Honesty & Plagiarism Information for Staff.

 

Division of Academic and Student Engagement
The University of Adelaide, South Australia 5005, Australia