DSpace@MIT

Are MOOC Learning Analytics Results Trustworthy? With Fake Learners, They Might Not Be!

Author(s)
Alexandron, Giora; Yoo, Lisa Y.; Ruiperez Valiente, Jose Antonio; Lee, Sunbok; Pritchard, David E.
Download: ijaied_are_moocs_learning_analytics_trustworthy_preprint.pdf (538.0 KB)
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike http://creativecommons.org/licenses/by-nc-sa/4.0/
Abstract
The rich data that Massive Open Online Course (MOOC) platforms collect on the behavior of millions of users provide a unique opportunity to study human learning and to develop data-driven methods that address the needs of individual learners. This type of research falls into the emerging field of learning analytics. However, learning analytics research tends to ignore the reliability of results based on MOOC data, which is typically noisy and generated by a largely anonymous crowd of learners. This paper provides evidence that learning analytics in MOOCs can be significantly biased by users who abuse the anonymity and open nature of MOOCs, for example by setting up multiple accounts; because of their number and aberrant behavior, such users can distort the results. We identify these users, denoted fake learners, using dedicated algorithms. The methodology for measuring the bias caused by the fake learners' activity combines the ideas of replication research and sensitivity analysis: we replicate two highly cited learning analytics studies with and without the fake learners' data and compare the results. While in one study the results were relatively stable against fake learners, in the other, removing the fake learners' data significantly changed the results. These findings raise concerns regarding the reliability of learning analytics in MOOCs and highlight the need to develop more robust, generalizable, and verifiable research methods.
Keywords: Learning Analytics; MOOCs; Replication research; Sensitivity analysis; Fake learners
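The sensitivity-analysis step described in the abstract (recomputing a learning-analytics result with and without the flagged accounts and comparing the outcomes) can be sketched in a few lines of code. The sketch below is a minimal illustration under stated assumptions, not the authors' actual pipeline: the column names, the example metric (a Pearson correlation between activity volume and final grade), and the fake_learner_ids set are hypothetical.

# Minimal sketch of the "with vs. without fake learners" sensitivity check.
# Assumptions (not from the paper): a per-learner table with columns
# 'user_id', 'n_events', and 'final_grade', plus a precomputed set of flagged IDs.
import pandas as pd
from scipy.stats import pearsonr

def activity_grade_correlation(learners: pd.DataFrame) -> float:
    # Example learning-analytics result: correlation between activity and grade.
    r, _ = pearsonr(learners["n_events"], learners["final_grade"])
    return r

def sensitivity_to_fake_learners(learners: pd.DataFrame, fake_learner_ids: set) -> dict:
    # Recompute the same result with and without the flagged accounts and compare.
    with_all = activity_grade_correlation(learners)
    genuine_only = activity_grade_correlation(
        learners[~learners["user_id"].isin(fake_learner_ids)]
    )
    return {
        "with_fake_learners": with_all,
        "without_fake_learners": genuine_only,
        "absolute_change": abs(with_all - genuine_only),
    }

Reporting the result both ways, as in the returned dictionary, makes the size of the fake-learner effect explicit for each replicated analysis.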
Date issued
2019-07
URI
https://hdl.handle.net/1721.1/124002
Department
Massachusetts Institute of Technology. Department of Physics; Massachusetts Institute of Technology. Program in Comparative Media Studies/Writing
Journal
International Journal of Artificial Intelligence in Education
Publisher
Springer Science and Business Media LLC
Citation
Alexandron, Giora et al. "Are MOOC Learning Analytics Results Trustworthy? With Fake Learners, They Might Not Be!" International Journal of Artificial Intelligence in Education 29, 4 (July 2019): 484–506
Version: Author's final manuscript
ISSN
1560-4292
1560-4306

Collections
  • MIT Open Access Articles
