
Automated detection of learning stages and interaction difficulty from eye-tracking data within a mixed reality learning environment

Omobolanle Ruth Ogunseiju (School of Building Construction, Georgia Tech, Atlanta, Georgia, USA)
Nihar Gonsalves (Myers Lawson School of Construction, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, USA)
Abiola Abosede Akanmu (Myers Lawson School of Construction, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, USA)
Yewande Abraham (Department of Civil Engineering Technology, Environmental Management and Safety, Rochester Institute of Technology, Rochester, New York, USA)
Chukwuma Nnaji (Department of Construction Science, Texas A&M University, College Station, Texas, USA)

Smart and Sustainable Built Environment

ISSN: 2046-6099

Article publication date: 9 January 2023


Abstract

Purpose

Construction companies are increasingly adopting sensing technologies like laser scanners, making it necessary to upskill the future workforce in this area. However, limited jobsite access hinders experiential learning of laser scanning, necessitating an alternative learning environment. The authors previously explored mixed reality (MR) as an alternative learning environment for laser scanning, but to promote seamless learning, such learning environments must be proactive and intelligent. Toward this, this study investigated the potential of classification models for detecting user difficulties and learning stages in the MR environment.

Design/methodology/approach

The study applied machine learning classifiers to eye-tracking data and think-aloud data to detect learning stages and interaction difficulties during a usability study of laser scanning in the MR environment.
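
For readers unfamiliar with this kind of pipeline, the sketch below illustrates one plausible way to turn raw eye-tracking samples into a per-window feature matrix for classification. The column names (timestamp in seconds, gaze_x, gaze_y, pupil_diameter), the 2-second window, and the displacement-based saccade proxy are illustrative assumptions; the abstract does not describe the authors' actual feature set or labeling procedure.

    # Hypothetical sketch: windowed feature extraction from raw eye-tracking
    # samples. Column names and the 2-second window are assumptions, not the
    # authors' actual pipeline.
    import numpy as np
    import pandas as pd

    def extract_features(gaze: pd.DataFrame, window_s: float = 2.0) -> pd.DataFrame:
        """Aggregate raw gaze samples into per-window summary statistics."""
        gaze = gaze.sort_values("timestamp").copy()
        # Frame-to-frame gaze displacement as a rough proxy for saccade amplitude.
        gaze["displacement"] = np.hypot(gaze["gaze_x"].diff(), gaze["gaze_y"].diff())
        # Assign each sample to a fixed-length window (timestamp assumed in seconds).
        window_id = (gaze["timestamp"] // window_s).astype(int)
        features = gaze.groupby(window_id).agg(
            mean_pupil=("pupil_diameter", "mean"),
            std_pupil=("pupil_diameter", "std"),
            mean_displacement=("displacement", "mean"),
            max_displacement=("displacement", "max"),
            n_samples=("timestamp", "size"),
        )
        return features.fillna(0.0)

In a setup like this, each window would be labeled with a learning stage or a difficulty flag (e.g. derived from the think-aloud protocol) before training a classifier.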

Findings

The classification models demonstrated high performance: a neural network classifier performed best for detecting learning stages (99.9% accuracy), and an ensemble classifier achieved the highest accuracy (84.6%) for detecting interaction difficulty during laser scanning.
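
As a rough illustration of the reported comparison, the sketch below trains a small neural network and an ensemble classifier on a windowed feature matrix and scores both on a held-out split. The architectures and hyperparameters here are placeholders; the abstract does not specify the configurations behind the 99.9% and 84.6% figures.

    # Illustrative model comparison, assuming a feature matrix X and labels y
    # produced by a pipeline like extract_features() above. Architectures and
    # hyperparameters are placeholders, not the study's actual configurations.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def compare_models(X, y, seed: int = 0) -> dict:
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=seed
        )
        models = {
            # Feed-forward neural network; feature scaling aids MLP convergence.
            "neural_network": make_pipeline(
                StandardScaler(),
                MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                              random_state=seed),
            ),
            # Bagged decision trees as one common ensemble choice.
            "ensemble": RandomForestClassifier(n_estimators=200, random_state=seed),
        }
        return {
            name: accuracy_score(y_test, model.fit(X_train, y_train).predict(X_test))
            for name, model in models.items()
        }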

Research limitations/implications

The findings of this study revealed that eye movement data contain significant information about learning stages and interaction difficulties, providing evidence of the potential of smart MR environments to improve learning experiences in construction education. The research implication further lies in the potential of an intelligent learning environment to provide personalized learning experiences, which often culminate in improved learning outcomes. This study further highlights the potential of such an intelligent learning environment to promote inclusive learning, whereby students with different cognitive capabilities can experience learning tailored to their specific needs irrespective of their individual differences.

Originality/value

The classification models will help detect learners who require additional support to acquire the technical skills needed to deploy laser scanners in the construction industry, and will inform users' specific training needs to enhance seamless interaction with the learning environment.

Acknowledgements

This paper is an enhanced version of the conference paper presented at the 21st International Conference on Construction Applications of Virtual Reality (CONVR 2021). The authors would like to acknowledge the editorial contributions of Professor Nashwan Dawood and Dr. Farzad Rahimian of Teesside University for making this publication possible.

Funding: The authors would like to acknowledge the National Science Foundation for their support (Grant No. IUSE – 1916521). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Citation

Ogunseiju, O.R., Gonsalves, N., Akanmu, A.A., Abraham, Y. and Nnaji, C. (2023), "Automated detection of learning stages and interaction difficulty from eye-tracking data within a mixed reality learning environment", Smart and Sustainable Built Environment, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/SASBE-07-2022-0129

Publisher

Emerald Publishing Limited

Copyright © 2022, Emerald Publishing Limited
