Corbin, Thomas; Dawson, Phillip; Nicola-Richmond, Kelli; Partridge, Helen
‘Where’s the line? It’s an absurd line’: towards a framework for acceptable uses of AI in assessment (Article)
In: Assessment & Evaluation in Higher Education, pp. 1–13, 2025.
@article{Corbin2025,
title = {‘Where’s the line? It’s an absurd line’: towards a framework for acceptable uses of AI in assessment},
author = {Thomas Corbin and Phillip Dawson and Kelli Nicola-Richmond and Helen Partridge},
url = {https://doi.org/10.1080/02602938.2025.2456207},
doi = {10.1080/02602938.2025.2456207},
year = {2025},
date = {2025-01-24},
journal = {Assessment \& Evaluation in Higher Education},
pages = {1--13},
abstract = {As higher education grapples with ensuring assessment validity in an increasingly AI-populated time, institutions and educators are working to establish appropriate boundaries for AI use. However, little is known about how students and teachers conceptualize and experience these boundaries in practice. This study investigates how students and teachers navigate the line between acceptable and unacceptable AI use in assessment, drawing on a thematic analysis of qualitative interviews with 19 students and 12 staff at a large Australian university informed by social boundary theory. The titular metaphor of ‘drawing a line’ emerged organically from both students and staff in our interviews, revealing ongoing struggles to understand and articulate what counts as appropriate. We found that students frequently construct their own individually unique and often complex ethical frameworks for AI use. Teachers, meanwhile, report significant emotional burden and professional uncertainty as they attempt to understand and communicate what is appropriate to their students. Our analysis suggests that assessment policies for AI ought to move beyond simple prohibitions or permissions and begin to address three critical dimensions: the feasibility of enforcement, the preservation of authentic learning, and the emotional wellbeing of teachers and students.},
keywords = {academic integrity, artificial intelligence, assessment design, higher education, O},
pubstate = {published},
tppubtype = {article}
}