Show simple item record

dc.contributor.author: Alexandron, Giora
dc.contributor.author: Lee, Sunbok
dc.contributor.author: Ruiperez Valiente, Jose Antonio
dc.contributor.author: Pritchard, David E.
dc.date.accessioned: 2018-06-21T20:27:56Z
dc.date.available: 2018-06-21T20:27:56Z
dc.date.issued: 2018-09
dc.identifier.uri: http://hdl.handle.net/1721.1/116511
dc.description.abstract: Massive Open Online Courses (MOOCs) collect large amounts of rich data. A primary objective of Learning Analytics (LA) research is studying these data in order to improve the pedagogy of interactive learning environments. Most studies make the underlying assumption that the data represent truthful and honest learning activity. However, previous studies showed that MOOCs can have large cohorts of users who break this assumption and achieve high performance through behaviors such as Cheating Using Multiple Accounts or unauthorized collaboration; we therefore denote them fake learners. Because of their aberrant behavior, fake learners can bias the results of LA models. The goal of this study is to evaluate the robustness of LA results when the data contain a considerable number of fake learners. Our methodology follows the rationale of ‘replication research’: we challenge the results reported in a well-known, and one of the first, LA/pedagogic-efficacy MOOC papers by replicating its results with and without the fake learners (identified using machine learning algorithms). The results show that fake learners exhibit very different behavior compared to true learners. However, even though they are a significant portion of the student population (∼15%), their effect on the results is not dramatic (it does not change trends). We conclude that the LA study that we challenged was robust against fake learners. While these results carry an optimistic message on the trustworthiness of LA research, they rely on data from one MOOC. We believe that this issue should receive more attention within the LA research community, and it can explain some ‘surprising’ research results in MOOCs. Keywords: Learning Analytics, Educational Data Mining, MOOCs, Fake Learners, Reliability, IRT [en_US]
dc.language.iso: en_US
dc.publisher: HTTC e.V. [en_US]
dc.relation.isversionof: http://www.ec-tel.eu/index.php?id=791 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: Ruipérez-Valiente [en_US]
dc.title: Evaluating the Robustness of Learning Analytics Results Against Fake Learners [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Alexandron, Giora et al. "Evaluating the Robustness of Learning Analytics Results Against Fake Learners." EC-TEL 2018, Thirteenth European Conference on Technology Enhanced Learning, 3-6 September, 2018, Leeds, United Kingdom, HTTC e.V., 2018. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Physics [en_US]
dc.contributor.approver: Ruipérez-Valiente, Jose A. [en_US]
dc.contributor.mitauthor: Ruiperez Valiente, Jose Antonio
dc.contributor.mitauthor: Pritchard, David E.
dc.relation.journal: EC-TEL 2018, Thirteenth European Conference on Technology Enhanced Learning [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dspace.orderedauthors: Alexandron, Giora; Ruipérez-Valiente, José A.; Lee, Sunbok; Pritchard, David E. [en_US]
dspace.embargo.terms: N [en_US]
dc.identifier.orcid: https://orcid.org/0000-0002-2304-6365
dc.identifier.orcid: https://orcid.org/0000-0001-5697-1496
mit.license: OPEN_ACCESS_POLICY [en_US]

