Anatomy of an Assessment

“A map is not the territory it represents,” Polish-American scientist and philosopher Alfred Korzybski famously remarked (Korzybski, 1958, p. 58). Korzybski proceeds to caution against confusing a word with the object it represents, warning that failing to dive deeper into the meanings of words can sometimes result in unintended consequences.

In the learning and teaching realm, this fallacy sometimes surfaces as a tendency to confuse an assessment with its method of assessment. Specifically, when educators discuss assessments, they are generally referring to a quiz, an essay or a presentation. Sufficient attention is seldom paid to the other components that make an assessment an Assessment. Referring to an assessment by its method may facilitate discussions in the way that referring to an item on the menu allows one to order a meal. Yet a lack of careful consideration of the other components of an assessment can lead to unexpected problems.

Take, for example, quizzes. In an effort to solve the long-standing problem of academic integrity, a tutor might throw out a quiz on the assumption that quizzes comprise multiple-choice questions and are therefore vulnerable to collusion between students. However, a quick review of quizzes on popular learning management systems will show that many different open-ended and close-ended question types are supported. If questions are written well, quizzes can be great platforms to evaluate learning and collect data on student progress. Therefore, in conjunction with deciding on a method of assessment, there needs to be a larger conversation that includes the other components of the assessment, lest we be guilty of throwing the baby out with the bathwater.

Let’s look at another example. An assessment requiring students to “Create a YouTube video Presenting the Characteristics of Social Media” seems an interesting project. However, how motivated would students be to complete the task if a weighting were not assigned? If a weighting is then assigned, how should it be divided between the content, i.e. “The Characteristics of Social Media”, and the process, i.e. the steps to create the video? Relatedly, a milestone scheduled along the timeline requiring students to share their ideas will help to scaffold students’ completion of the assessment. This approach will help to manage student anxiety as the deadline approaches and also encourage knowledge sharing in the class, a valuable strategy for fostering community.

The following interactive has been created to help you design and evaluate assessments. This interactive presents a table with five headings representing the five sections found in most assessments. Each section in turn includes prompts that you can use to evaluate the contribution of each element to the assessment as a whole.

Reference

Korzybski, A. (1958). Science and sanity: An introduction to non-Aristotelian systems and general semantics. Institute of General Semantics.

This section introduces students to the assessment.

What are the course learning outcomes this assessment evaluates?

What are the graduate attributes this assessment helps to nurture?

How relevant is this assessment to the overarching purpose or intent of the course?

How relevant is this assessment to the personal lives, interests & aspirations of students?

Where is this assessment situated within the 'grand scheme' of the course and programme?

What is the connection between this assessment and the discipline? How will it help students in their journey to become better professionals or practitioners of the discipline?

How similar are the activities to those performed by professionals of the discipline in the real-world?

What is the weighting of this assessment?

Is the weighting of this assessment fair? In other words, is the percentage assigned for this assessment commensurate with the expected effort, the duration of tasks, the number of words and/or the cognitive complexity involved?

A three-minute thesis, for example, may be relatively short in terms of duration of delivery, but there is often a substantial amount of preparation required to ensure a good delivery. This needs to be taken into consideration.

The section that specifies the work that needs to be undertaken.

What is the method of assessment (e.g. case study analysis, poster presentation, job interview)?

To what extent can the task and sub-tasks be planned, completed, submitted and evaluated fully online?

How do the timelines and duration of tasks correspond to real-world / discipline-based expectations?

For example, is there a discipline-based justification for the task to be timed?

In completing tasks, is there a discipline-based justification for the assessment to be ‘closed-book’?

What sort of resources and tools do professionals in the discipline have access to when attempting similar tasks in the real-world? 

This section states the steps involved in planning for, attempting and submitting the assessment.

Are the date and duration of the task stated upfront?

Does the deadline factor in public holidays and weekends?

Is the duration of the assessment sufficient to allow students a fair go at the assessment?

Has the number of questions or sections on quizzes been communicated to students?

Are larger assessments scaffolded, and processes made visible, through the periodic showcasing of deliverables (drafts, pseudocode, mind-maps, meeting minutes, presentations of ideas, etc.)?

What supporting resources will students be required to access to help them in the assessment?

What digital tools will students leverage to plan for, attempt and submit the assessment? Has students’ efficacy in using these tools been considered?

Are the instructions to submit or present the assessment logical and easy to follow?

Has the submission process been tested?

This section focuses on the question: the specific prompts and scaffolds that guide student responses and influence successful completion of the assessment.

Is there sufficient context for students to understand where the question is coming from and its connection to the broader assessment?

Is there signposting between questions or sets of questions for the assessment to come across as a coherent item?

Has a question really been asked? Grammatically speaking, a question is an interrogative sentence, not a declarative one.

Open-ended questions ask for divergent responses. Close-ended questions ask for convergent responses.

Is the question phrased in a manner that specifically, clearly and succinctly communicates the expected response?

The section that presents the agreed standards of judgment for the assessment.

To what extent do the criteria on the rubric map back to course learning outcomes / graduate attributes?

Were the rubrics released to students together with the assessment?

Have students been socialised to the rubrics, for example through a 'trial' of the rubrics with draft essays or mock presentations?

A marking scheme states the range of responses for a closed question and the allocation of marks for each possible response.

Does the marking scheme specify the complete list of possible responses expected from students?

An answer key states the correct responses to questions.

Does the answer key communicate the correct answers?

For group-based assessments, to what extent do the criteria for teamwork align with graduate attributes / course learning outcomes?

Are the expectations of group work specified in the rubrics reflective of teamwork in the discipline / profession?

How will feedback be provided to students?

What kinds of automated and teacher-provided feedback will be offered to students?

When can students expect this feedback?

How will the feedback provided help students?

To what extent is the feedback in alignment with the assessment criteria?