Solstad, Hanna Eide
A comparison of manual and automated quality assessment of Open Educational Resources and their reliability Master's Thesis
Norwegian University of Science and Technology, 2022.
Keywords: O, OER, open educational resources, Qualitätssicherung, quality
@mastersthesis{Solstad2022,
title = {A comparison of manual and automated quality assessment of Open Educational Resources and their reliability},
author = {Hanna Eide Solstad},
url = {https://ntnuopen.ntnu.no/ntnu-xmlui/bitstream/handle/11250/3024682/no.ntnu%3Ainspera%3A112046434%3A23371129.pdf},
year = {2022},
date = {2022-06-01},
urldate = {2023-03-03},
institution = {Faculty of Information Technology and Electrical Engineering, Department of Computer Science},
school = {Norwegian University of Science and Technology},
abstract = {The fourth Sustainable Development Goal is to "Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all". UNESCO considers Open Educational Resources (OER) vital in achieving this. OERs are educational materials shared under an open license permitting free access, use, adaptation, and redistribution with few restrictions. The OER movement has regained attention, with COVID-19 forcing millions to study from home. However, there are many challenges to continued growth. One of the most crucial challenges is quality control. Current approaches are mainly built on manual reviews, which are time-consuming and expensive.
This thesis proposes a white-box algorithm that combines theoretical quality knowledge with measurable metrics to give a quality score. The algorithm was developed for the educational resource type Interactive Videos created with the framework H5P. I performed a comparative study of the algorithm and the most widely adopted approach: manual reviews. 23 H5P users were recruited to perform 107 manual reviews of 57 OERs. Each manual review scored several quality factors and two overall scores, and reviewers could add a comment for each resource. The data were then used to find the degree of agreement between the two methods and their reliability.
The result was a low to moderate degree of agreement between the manual reviews and the algorithm scores. This means that the algorithm can be a suitable approach in certain cases, but mostly as an addition to other methods. However, the most crucial finding was the low reliability of the manual reviews. The reviews were highly subjective, and this has significant implications for this study and all research using reviews as a data source. Future studies need to continue to work on automated approaches but consider how they can be evaluated correctly.},
howpublished = {Master’s thesis in Master of Technology in Computer Science},
keywords = {O, OER, open educational resources, Qualitätssicherung, quality},
pubstate = {published},
tppubtype = {mastersthesis}
}
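The abstract above describes a white-box algorithm that combines measurable metrics into a single quality score. The thesis does not list its metrics or weights here, so the following is a minimal hypothetical sketch of that general idea: normalized metric values aggregated by a fixed weighted average. All metric names and weights are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch of a weighted quality score. Metric names and
# weights below are illustrative only, not the thesis's actual model.
def quality_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Return the weighted average of normalized metric values in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(weights[name] * metrics[name] for name in weights) / total_weight

# Example: three made-up metrics for an interactive video.
score = quality_score(
    metrics={"interactivity": 0.8, "pacing": 0.6, "metadata_completeness": 1.0},
    weights={"interactivity": 0.5, "pacing": 0.3, "metadata_completeness": 0.2},
)
```

Because every weight and metric is explicit, such a score is fully inspectable ("white-box"), in contrast to opaque learned models.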
Renz, Jan; Rohloff, Tobias; Meinel, Christoph
Automatisierte Qualitätssicherung in MOOCs durch Learning Analytics Proceedings
Proceedings of DeLFI and GMW Workshops 2017, Chemnitz, Germany, September 5, 2017, ISSN: 1613-0073.
Keywords: A, learning analytics, massive open online courses (MOOCs), Qualitätssicherung, quality
@proceedings{Renz2017,
title = {Automatisierte Qualitätssicherung in MOOCs durch Learning Analytics},
author = {Jan Renz and Tobias Rohloff and Christoph Meinel},
editor = {Carsten Ullrich and Martin Wessner},
url = {http://ceur-ws.org/Vol-2092/paper24.pdf
https://www.researchgate.net/publication/325226167_Automatisierte_Qualitatssicherung_in_MOOCs_durch_Learning_Analytics
https://www.researchgate.net/publication/321105881_Automatisierte_Qualitatssicherung_in_MOOCs_durch_Learning_Analytics},
issn = {1613-0073},
year = {2017},
date = {2017-09-05},
urldate = {2018-12-20},
series = {CEUR Workshop Proceedings},
abstract = {This paper describes how automated quality assurance in MOOCs can be performed using learning analytics data. The results are also applicable to other scaling e-learning systems. It first describes how learning analytics tools are implemented in the systems under study (which are implemented as distributed services in a microservice architecture). Building on this, the concept and implementation of automated quality assurance are described. In a first evaluation, the use of the feature is examined on an instance of the MOOC platform developed at HPI. Finally, an outlook on extensions and future research questions is given.},
howpublished = {Proceedings of DeLFI and GMW Workshops 2017, Chemnitz, Germany, September 5, 2017},
keywords = {A, learning analytics, massive open online courses (MOOCs), Qualitätssicherung, quality},
pubstate = {published},
tppubtype = {proceedings}
}
Kaynardağ, Aynur Yürekli
Pedagogy in HE: does it matter? Article
In: Studies in Higher Education, vol. 44, no. 1, pp. 111–119, 2017, ISSN: 1470-174X.
Keywords: competence, higher education, O, pedagogy, quality, training
@article{Yuerekli17,
title = {Pedagogy in HE: does it matter?},
author = {Aynur Yürekli Kaynardağ},
url = {https://doi.org/10.1080/03075079.2017.1340444},
doi = {10.1080/03075079.2017.1340444},
issn = {1470-174X},
year = {2017},
date = {2017-06-19},
urldate = {2019-02-16},
journal = {Studies in Higher Education},
volume = {44},
number = {1},
pages = {111–119},
publisher = {Routledge},
abstract = {Pedagogical competencies of instructors play a crucial role in improving the quality of teaching and learning in higher education institutions. However, in many countries worldwide, pedagogical training is not a requirement for being an instructor at a university [Postareff, L., S. Lindblom-Ylänne, and A. Nevgi. 2007. “The Effect of Pedagogical Training on Teaching in Higher Education.” Teaching and Teacher Education 23: 557–71; Badley, G. 2000. “Developing Globally-Competent University Teachers.” Innovations in Education and Training International 37 (3): 244–53]. This study explores how pedagogical competencies of instructors affect the perceptions of students by focusing on three key dimensions of classroom pedagogy, namely delivery (provision of content and facilitation), communication, and assessment. The results of the scale administered to a total of 1083 university students suggest that there are meaningful differences in students’ perceptions of their instructors’ pedagogical competencies. The greatest difference is reflected in the ratings of items related to the communication dimension.},
keywords = {competence, higher education, O, pedagogy, quality, training},
pubstate = {published},
tppubtype = {article}
}