Muehlhoff, Rainer; Henningsen, Marte
Chatbots im Schulunterricht: Wir testen das Fobizz-Tool zur automatischen Bewertung von Hausaufgaben (Unpublished)
Preprint on arXiv:2412.06651, 2024.
Keywords: AI, artificial intelligence, chatbots, correction, feedback, O
@unpublished{Muehlhoff2024,
title = {Chatbots im Schulunterricht: Wir testen das Fobizz-Tool zur automatischen Bewertung von Hausaufgaben},
author = {Rainer Muehlhoff and Marte Henningsen},
url = {https://doi.org/10.48550/arXiv.2412.06651
https://media.ccc.de/v/38c3-chatbots-im-schulunterricht},
doi = {10.48550/arXiv.2412.06651},
year = {2024},
date = {2024-12-09},
urldate = {2024-12-09},
issue = {arXiv:2412.06651},
abstract = {This study examines the AI-powered grading tool "AI Grading Assistant" by the German company Fobizz, designed to support teachers in evaluating and providing feedback on student assignments. Against the societal backdrop of an overburdened education system and rising expectations for artificial intelligence as a solution to these challenges, the investigation evaluates the tool's functional suitability through two test series. The results reveal significant shortcomings: The tool's numerical grades and qualitative feedback are often random and do not improve even when its suggestions are incorporated. The highest ratings are achievable only with texts generated by ChatGPT. False claims and nonsensical submissions frequently go undetected, while the implementation of some grading criteria is unreliable and opaque. Since these deficiencies stem from the inherent limitations of large language models (LLMs), fundamental improvements to this or similar tools are not immediately foreseeable. The study critiques the broader trend of adopting AI as a quick fix for systemic problems in education, concluding that Fobizz's marketing of the tool as an objective and time-saving solution is misleading and irresponsible. Finally, the study calls for systematic evaluation and subject-specific pedagogical scrutiny of the use of AI tools in educational contexts.},
howpublished = {Preprint on arXiv:2412.06651},
keywords = {AI, artificial intelligence, chatbots, correction, feedback, O},
pubstate = {published},
tppubtype = {unpublished}
}
Truscott, John
The effect of error correction on learners’ ability to write accurately (Journal Article)
In: Journal of Second Language Writing, Vol. 16, Issue 4, pp. 255–272, 2007, ISSN: 1873-1422.
Keywords: A, correction, feedback, Korrektur, writing
@article{Truscott2007,
title = {The effect of error correction on learners’ ability to write accurately},
author = {John Truscott},
url = {https://doi.org/10.1016/j.jslw.2007.06.003},
doi = {10.1016/j.jslw.2007.06.003},
issn = {1873-1422},
year = {2007},
date = {2007-12-01},
journal = {Journal of Second Language Writing},
volume = {16},
issue = {4},
pages = {255–272},
abstract = {The paper evaluates and synthesizes research on the question of how error correction affects learners’ ability to write accurately, combining qualitative analysis of the relevant studies with quantitative meta-analysis of their findings. The conclusions are that, based on existing research: (a) the best estimate is that correction has a small negative effect on learners’ ability to write accurately, and (b) we can be 95% confident that if it has any actual benefits, they are very small. This analysis is followed by discussion of factors that have probably biased the findings in favor of correction groups, the implication being that the conclusions of the meta-analysis probably underestimate the failure of correction.},
keywords = {A, correction, feedback, Korrektur, writing},
pubstate = {published},
tppubtype = {article}
}