
Efficient feedback

A marking scheme in which several people may deliver feedback on one assignment, to diminish costs and to utilize expertise appropriately

A scheme in which students collate the best feedback they receive, ultimately to help teaching staff deliver feedback more efficiently

A tool that can automate the grading of many assessments while also enhancing the skills of students

A scheme that optimizes the accuracy of peer marking—by motivating and helping peers mark as accurately as possible


Tiered feedback

A marking scheme in which several people may deliver feedback on one assignment, to diminish costs and to utilize expertise appropriately

Overview of the problem

  • Most teaching staff complain about their huge marking load—a key source of burnout

  • Senior academics often grade many assignments, at a high cost, although staff at a lower salary could deliver most of this feedback

  • Many students do not even read the feedback they receive—and hence some of the efforts of these senior academics are squandered. Marking is thus inefficient

 

Overview of a solution

  • Tertiary institutions should introduce a marking scheme that comprises two features. 

  • First, the extent to which students receive feedback should depend on the likelihood they will utilize this feedback.

  • Second, more than one person should be able to assess each assignment.  For example, junior staff might be able to assess the attributes in which they have developed expertise.  Senior staff might be able to assess the attributes in which greater expertise is necessary. 

 

Example of these features

  • A casual staff member might utilize a tool or templates to deliver feedback about writing, structure, and format on many assignments.  A more senior academic might then deliver feedback about the degree to which the arguments are valid and comprehensive.

  • However, if students are unlikely to consider feedback about writing, structure, and format, no feedback on this attribute would be sought.

  • Various sources of information could be utilized to predict this likelihood—such as the time that students dedicated to reading this feedback, as measured by some learning management systems.
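The prediction described above could be sketched with a simple heuristic. This is a minimal illustration, assuming the learning management system can export the seconds each student spent viewing past feedback; the threshold and cutoff values are illustrative assumptions, not features of any real system.

```python
# Minimal sketch: estimate how likely a student is to read feedback,
# assuming the LMS exports the seconds spent viewing past feedback.
# The threshold and cutoff are illustrative assumptions.

def reading_likelihood(seconds_per_item, engaged_threshold=30):
    """Estimate the probability a student reads feedback as the fraction
    of past feedback items viewed for at least `engaged_threshold` seconds."""
    if not seconds_per_item:
        return 0.5  # no history: assume an even chance
    engaged = sum(1 for s in seconds_per_item if s >= engaged_threshold)
    return engaged / len(seconds_per_item)

def should_deliver_feedback(seconds_per_item, cutoff=0.25):
    """Deliver detailed feedback on an attribute only when the estimated
    likelihood of reading it exceeds the cutoff."""
    return reading_likelihood(seconds_per_item) > cutoff
```

For example, a student who engaged with two of three past feedback items would still receive detailed feedback, whereas a student who skimmed every item for a few seconds would not.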

Ratings: Equality medium, Efficiency medium, Wellbeing medium, Cost medium

The feedback accumulator

A scheme in which students collate the best feedback they receive, ultimately to help teaching staff deliver feedback more efficiently

Outline of the problem

  • The degree to which students feel satisfied with their development and with the institution significantly depends on the quality of feedback they receive

  • Yet, teaching staff often feel too busy to deliver helpful and detailed feedback

  • And, even when these staff do deliver useful advice or informative comments, many students will disregard, rather than utilize, this feedback.

 

Outline of a solution

  • Teaching staff could design an assignment in which students need to collate 10 or so of the most useful feedback comments they received during a course. 

  • The students then need to specify the circumstances in which they might apply this feedback in the future

  • According to the literature on implementation intentions, this simple exercise might increase the likelihood that students apply this feedback in the future

  • More importantly, if many teaching staff encourage students to collate useful feedback, tertiary institutions can develop a database of useful feedback in response to various assignments.

  • Natural language processing and machine learning algorithms can then be applied to predict which feedback could be relevant to each student.

  • To illustrate, teaching staff could submit an assignment to this algorithm, which would then generate feedback that is likely to be relevant to this assignment. Teaching staff could thus deliver exemplary feedback efficiently
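The retrieval step above could be sketched with bag-of-words similarity. This is a minimal illustration, not a production natural language processing pipeline; the stored comments and the similarity measure are illustrative assumptions.

```python
# Minimal sketch: rank stored feedback comments by how similar they
# are to the text of a new assignment, using word-count vectors and
# cosine similarity. A real system would use proper NLP tooling.
import math
from collections import Counter

def _vector(text):
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def relevant_feedback(assignment_text, stored_comments, top_n=3):
    """Return the stored comments most similar to the assignment."""
    vec = _vector(assignment_text)
    ranked = sorted(stored_comments,
                    key=lambda c: _cosine(vec, _vector(c)),
                    reverse=True)
    return ranked[:top_n]
```

For instance, an assignment that omits its sample size would surface a stored comment about missing sample sizes ahead of comments about figures or terminology.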

Ratings: Equality medium, Efficiency high, Wellbeing medium, Cost medium

Automated, customized assessments

A tool that can automate the grading of many assessments while also enhancing the skills of students

Overview of the problem

  • To diminish the expense of marking assignments—and to circumvent the limitations of multiple-choice tests—many tertiary institutions want to invest in AI tools or other methods to grade assignments efficiently

  • However, these automated tools tend to apply standard criteria—criteria that often deviate from the preferences of a teacher or coordinator. 

  • Hence, these automated tools may be too rigid, preventing institutions from prioritizing criteria or qualities that match their strategies and values

  • In addition, these automated tools seldom impart enough advice to facilitate the development of students

 

Overview of a solution

  • Tertiary institutions could instead design a tool that is sensitive to the preferences of each staff member or institution and also promotes development

  • To illustrate this tool, consider an assignment in which students need to write a research report.  To generate the material this tool needs, one to three students receive a template, such as the main headings, and are invited to write the report.  The teacher of this class also writes this report as proficiently as possible. 

  • The tool first utilizes this information to generate multiple-choice questions—such as “which of the following titles is best”—in which the response of this teacher is the correct answer

  • Therefore, to teach students the key features of a research report, these individuals might first complete only multiple-choice questions

  • Next, the tool utilizes the same data to generate short-answer questions—such as “please write the title of this report”.   The tool then applies pattern matching—such as n-grams—to determine whether the answer is more similar to the response of teachers than to the response of uninformed students.

  • Hence, to help students learn to write these reports, these individuals might next complete short-answer questions that are graded automatically

  • Whenever the answers are inadequate, the better response is displayed to students as feedback

  • Over time, as the institution collates more examples, the tool can utilize similar methods to match and to assess increasingly large sections of a report

  • In short, this tool gradually imparts the skill the teacher wants to convey and seamlessly assesses this development.  
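The n-gram pattern matching described above could be sketched as follows. This is a minimal illustration, assuming word bigrams and a small set of sample responses; a real tool would use richer features and more examples.

```python
# Minimal sketch: accept a short answer when its word bigrams overlap
# more with the teacher's response than with responses from uninformed
# students; otherwise return the teacher's response as feedback.
# The sample responses below are illustrative assumptions.

def ngrams(text, n=2):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(answer, reference, n=2):
    """Jaccard similarity between the n-gram sets of two texts."""
    a, b = ngrams(answer, n), ngrams(reference, n)
    return len(a & b) / len(a | b) if a | b else 0.0

def grade_short_answer(answer, teacher_response, uninformed_responses):
    """Return (accepted, feedback): feedback is the teacher's response
    whenever the answer is judged inadequate."""
    teacher_score = overlap(answer, teacher_response)
    naive_score = max(overlap(answer, r) for r in uninformed_responses)
    if teacher_score > naive_score:
        return True, None
    return False, teacher_response
```

An answer that closely paraphrases the teacher's title is accepted; an answer resembling the uninformed responses is rejected, and the teacher's response is displayed as feedback.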

Ratings: Equality medium, Efficiency high, Wellbeing medium, Cost medium

A peer assessment without peer

A scheme that optimizes the accuracy of peer marking—by motivating and helping peers mark as accurately as possible

Outline of the problem

  • Some tertiary institutions implement peer assessments—in which students at the same level or a higher level grade the assignments of their peers.

  • Peer assessments not only diminish the marking load of teaching staff, and thus reduce costs, but also benefit students.

  • For example, as research indicates, when students assume the role of teachers, their learning improves.  These students also gain a skill that may boost their CV.

  • Yet, peer assessments tend to be underused, because many individuals are concerned the grades that peers assign may be lenient or inaccurate.

 

Outline of a solution

  • Tertiary institutions can introduce an approach that enhances the accuracy of peer marks. To illustrate this approach, consider a circumstance in which third-year students are instructed to grade five research papers that were submitted by second-year students

  • The third-year students receive a mark that reflects how accurately they grade these research papers.  Specifically, this mark will equal 100 minus the discrepancy between the marks they assign these research papers and the average marks that other students assign the same papers. 

  • This mark will thus motivate these students to grade the assignments as accurately as possible.

  • These students will also receive materials to enhance their capacity to grade these submissions accurately—materials that both enhance their marking ability and reinforce their knowledge of the topic. 

  • For example, they might receive some comments they can copy and paste to deliver suitable feedback. 

  • To implement this approach, teaching staff could utilize tools that were designed to facilitate peer assessments. 

  • Alternatively, teaching staff could distribute a list that specifies which assignments each student must grade, download these assignments to a location these students can access, construct a form that enables students to enter the grades into a spreadsheet, and utilize a macro to analyze this spreadsheet. 

  • Tertiary institutions could develop these resources, such as this form and macro, to assist teaching staff.
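The accuracy mark described above could be computed as follows. This is a minimal sketch: the source specifies only "100 minus the discrepancy", so averaging the absolute discrepancies across papers, and flooring the result at zero, are illustrative assumptions.

```python
# Minimal sketch: a peer marker's mark equals 100 minus the mean
# absolute discrepancy between the marks they assigned and the
# average marks that other students assigned to the same papers.
# The averaging and the floor at zero are illustrative assumptions.

def accuracy_mark(own_marks, peer_average_marks):
    """Return 100 minus the mean absolute discrepancy, floored at 0."""
    if len(own_marks) != len(peer_average_marks):
        raise ValueError("each paper needs one own mark and one peer average")
    discrepancy = (sum(abs(o - p) for o, p in zip(own_marks, peer_average_marks))
                   / len(own_marks))
    return max(0.0, 100.0 - discrepancy)
```

A marker whose five marks deviate from the peer averages by 2, 2, 5, 5, and 5 points would thus receive 100 − 3.8 = 96.2, so smaller deviations yield higher marks.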


Ratings: Equality medium, Efficiency high, Wellbeing medium, Cost medium

Contributors

To seek advice or engage specialists on these initiatives, contact the contributors of this page

  • Tiered feedback

  • The feedback accumulator

  • Automated, customized assessments

  • Collective class development

  • A peer assessment without peer

The model university 2040: An encyclopedia of research and ideas to improve tertiary education

©2022 by The model university. Proudly created with Wix.com
