Machine learning models have often been found to be unfair, for example, when they produce more errors for certain groups. To ensure that unfair models are not deployed in Moodle Learning Analytics (LA) and to guarantee the trustworthiness of the deployed models, it is crucial to audit their fairness before deployment. However, Moodle currently lacks the necessary auditability features; specifically, it does not store and make available evidence that could be used to prove or disprove fairness claims. To address this lack of evidence, we developed a plugin that allows developers and administrators to train and test an LA model configuration while storing the intermediate results and providing these data sets as downloads. In this presentation, we will share our insights on enabling auditability in AI-integrating systems, present the Moodle plugin we developed, and demonstrate how it can be used to conduct an audit of Moodle LA. By enabling fairer LA models and increasing trust in their predictions, we hope to reach more learners and maximize the potential benefits of these models.
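As a minimal sketch of the kind of fairness check such exported data sets enable, one could compare per-group error rates of a model's predictions. The record structure and column names (`group`, `label`, `prediction`) below are illustrative assumptions, not the plugin's actual export schema:

```python
# Hypothetical fairness check on exported prediction data.
# Field names ("group", "label", "prediction") are assumptions,
# not the plugin's actual export format.

def group_error_rates(records):
    """Return the fraction of wrong predictions per learner group."""
    totals, errors = {}, {}
    for rec in records:
        g = rec["group"]
        totals[g] = totals.get(g, 0) + 1
        if rec["prediction"] != rec["label"]:
            errors[g] = errors.get(g, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Toy example: predictions for two learner groups.
records = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 0},
    {"group": "A", "label": 1, "prediction": 0},  # one error for group A
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 0, "prediction": 1},  # two errors for group B
]
rates = group_error_rates(records)
# A large gap between groups' error rates would flag a potential fairness issue.
```

A real audit would use an established group-fairness criterion (e.g. comparing false-positive and false-negative rates separately), but the core step of grouping stored predictions by a protected attribute is the same.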
Day 3 Presentation