Enhancing Transparency in Educational Data Mining: Applying Explainable AI to Analyze Student Behavior and Learning Patterns

Authors

  • Felix Chinedu Ugwu

    Federal College of Education Okene, Kogi State, Nigeria.
  • Aimola, Amos Ayodele

    Federal College of Education Okene, Kogi State, Nigeria.
  • Rita, Mizilafe Uwumagbe

    Federal College of Education Okene, Kogi State, Nigeria.
  • Badams Sanni Latifat

    Federal College of Education Okene, Kogi State, Nigeria.

Abstract

This study investigates the application of Explainable Artificial Intelligence (XAI) in Educational Data Mining (EDM) to analyze student behavior and learning patterns in a transparent and accountable manner. The primary objective is to demonstrate how XAI improves the interpretability of machine learning models while preserving predictive performance, thereby supporting informed and equitable decision-making in education. A mixed-methods approach was adopted using two large-scale educational datasets: the Open University Learning Analytics Dataset (OULAD), comprising 32,593 students, and the EdNet dataset, containing over 784,000 learners and more than 131 million interaction records. These datasets include student demographics, assessment outcomes, and detailed virtual learning environment (VLE) interaction logs. Both interpretable models (Decision Trees and Logistic Regression) and black-box models (Random Forest and XGBoost) were developed to predict student performance and engagement. Black-box models achieved the highest predictive performance, with XGBoost reaching an accuracy of 0.85 on OULAD and 0.88 on EdNet, compared to 0.74–0.76 for interpretable models. To address the interpretability gap, SHAP and LIME were applied to generate global and local explanations of model predictions. SHAP analysis identified VLE access frequency, cumulative assessment scores, and early engagement indicators as the most influential predictors of academic success. LIME provided case-level explanations that highlighted factors such as low session time and poor early assessment performance in high-risk student predictions. A pilot evaluation involving 10 educators indicated that 80% found SHAP visualizations highly informative for understanding global learning patterns, while 70% rated LIME explanations as helpful for individual student diagnosis. 
The findings demonstrate that XAI enhances model transparency without sacrificing performance and enables educators to make data-driven, fair, and personalized instructional decisions. The study concludes that integrating XAI into EDM systems strengthens trust, accountability, and educational effectiveness, and recommends broader adoption of explainable learning analytics in educational institutions.
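The local explanations described above can be illustrated with a minimal LIME-style sketch: a black-box classifier is queried on perturbations around a single student's feature vector, and a proximity-weighted linear surrogate is fitted whose coefficients approximate each feature's local contribution. This is an illustrative stand-in built with scikit-learn only (not the authors' pipeline or the `lime`/`shap` packages); the feature names and synthetic data are hypothetical placeholders for the OULAD-style predictors named in the abstract.

```python
# LIME-style local surrogate explanation, sketched with NumPy and scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the predictors discussed in the study.
feature_names = ["vle_accesses", "mean_score", "early_engagement", "session_time"]
X = rng.normal(size=(500, 4))
# Synthetic pass/fail label driven mainly by the first two features.
y = (X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def lime_explain(model, x, n_samples=1000, kernel_width=0.75):
    """Fit a proximity-weighted linear surrogate around instance x; its
    coefficients approximate each feature's local effect on P(pass)."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturb around x
    p = model.predict_proba(Z)[:, 1]                         # query black box
    d = np.linalg.norm(Z - x, axis=1)                        # distance to x
    w = np.exp(-(d ** 2) / kernel_width ** 2)                # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return dict(zip(feature_names, surrogate.coef_))

# Explain one student's prediction, most influential feature first.
weights = lime_explain(black_box, X[0])
for name, coef in sorted(weights.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>18}: {coef:+.3f}")
```

Because the synthetic label depends mostly on `vle_accesses` and `mean_score`, those features should dominate the local explanation, mirroring how LIME surfaced low session time and poor early assessment scores for high-risk students in the study.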

Author Biographies

  • Aimola, Amos Ayodele, Federal College of Education Okene, Kogi State, Nigeria.

    Department of Computer Science, Federal College of Education, Okene, Kogi State, Nigeria.

  • Rita, Mizilafe Uwumagbe, Federal College of Education Okene, Kogi State, Nigeria.

    Department of Computer Science, Federal College of Education, Okene, Kogi State, Nigeria.

  • Badams Sanni Latifat, Federal College of Education Okene, Kogi State, Nigeria.

    Department of Computer Science, Federal College of Education, Okene, Kogi State, Nigeria.

Published

2026-03-20