Measurement model of readiness for online testing of undergraduate students in Thailand’s distance education programs

Thanyasinee Laosum (Office of Registration, Records and Evaluation, Sukhothai Thammathirat Open University, Nonthaburi, Thailand)

Asian Association of Open Universities Journal

ISSN: 2414-6994

Article publication date: 2 September 2024

Issue publication date: 26 September 2024

Abstract

Purpose

This study aims to develop a model for readiness measurement and to study readiness levels for online testing of undergraduate students in Thailand’s distance education programs.

Design/methodology/approach

In total, 870 undergraduate students enrolled in the 2022 academic year of a Thai university were sampled for the study. The samples were divided into two groups: Group 1 comprised 432 students who underwent exploratory factor analysis (EFA) and Group 2 comprised 438 students who underwent second-order confirmatory factor analysis (CFA). Both were multi-stage random samples. Descriptive statistics, item-total correlations (ITCs), coefficient correlations, EFA and second-order CFA were used.

Findings

The readiness for the online testing model comprised 5 factors and 33 indicators. These included self-efficacy (SE) in utilizing technology (nine indicators), self-directed learning (SL) for readiness testing (six indicators), adequacy of technology (AT) for testing (five indicators), acceptance of online testing (AC) (seven indicators) and readiness training for testing (six indicators). The model was congruent with empirical data, and the survey results indicated that students were highly prepared at the “high” level.

Practical implications

This study disclosed several factors and indicators involved in the readiness for online testing. The university may use these findings in preparing its students for online testing for better achievement.

Originality/value

These findings may serve as a framework for the analysis of the readiness issues for online testing of undergraduate students and also offer guidance to the universities preparing to offer online testing.

Citation

Laosum, T. (2024), "Measurement model of readiness for online testing of undergraduate students in Thailand’s distance education programs", Asian Association of Open Universities Journal, Vol. 19 No. 2, pp. 186-201. https://doi.org/10.1108/AAOUJ-01-2024-0007

Publisher

Emerald Publishing Limited

Copyright © 2024, Thanyasinee Laosum

License

Published in the Asian Association of Open Universities Journal. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) license. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this license may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

The outbreak of coronavirus disease (COVID-19) had a significant impact on various aspects of Thai life, including the economy, society, tourism and education (McKibbin and Fernando, 2020). Lauret and Bayram-Jacobs (2021) found that the pandemic profoundly affected learning environments, and Atchanpanya (2020) observed that Thai educational institutions were no exception. Guangul et al. (2020) noted that, in an era of borderless online connectivity and limitless communication supported by diverse technological tools, educational institutions sought ways to manage teaching and learning through various forms of distance learning in place of face-to-face instruction. Institutions have integrated online testing to support teaching and assessment, and readiness for online testing has become a matter of debate among Thai educational institutions.

Pinyosinwat (2020) proposed that Thailand's challenge was not merely addressing immediate COVID-19 issues but transforming the crisis into an opportunity to enhance teaching and learning quality. Redesigning learning units and teaching methods so that they fit the assessment process was recommended, and assessment should emphasize learning opportunities over examination scores. This aligns with Almossa (2021), who found that the COVID-19 situation required educators to revise and adapt their instructional structures.

Viktoria and Aida (2020) found that the COVID-19 pandemic accelerated the transition to distance learning, and distance education has become a feasible alternative for most institutions. Simonson et al. (2011) define distance education as a form of education in which learners are separated from instructors but engage through interactive telecommunications systems that connect learners, learning resources and instructors.

In Thailand, Sukhothai Thammathirat Open University (STOU) is the only open university delivering education services using distance education. With the spread of COVID-19, STOU shifted its teaching and assessment approach to an online format (STOU, 2020a).

STOU introduced an online testing system from the second semester of the 2019 academic year through 2021. In total, 5,902, 17,634, 5,022, 18,740 and 29,004 students participated in the five rounds of examinations, indicating the growing popularity of online testing.

Students nevertheless faced a number of problems, including difficulties with student ID verification, system access, signal failure, timely logins, power disruptions, inability to locate the subject being tested and late submission of answer sheets. These problems led to stress, fatigue and failure to participate in online testing. The findings correspond with Masalimova et al. (2022), who found that distance learning led to physical and psychological health problems such as fear, anxiety, stress and loss of concentration. Online learning and testing have been widely debated and promoted among educators, which has induced many educational institutions to switch to online teaching with only poor infrastructure in place.

The rapid rise of online testing has had a considerable impact on students' adaptability because of its novelty, since some students are unfamiliar with the technology. Bakhov et al. (2021) found that a disadvantage of distance learning was limited access to digital resources and quality internet, which created a digital divide and educational inequality that restricted students' equal access to online testing. According to STOU (2020b), the university should develop an easy-to-use online testing system, provide support and establish channels to assist students with the problems they encounter. Additionally, STOU should prepare students for online learning and testing, publicize the system among its students and staff and develop the necessary online service skills.

In this situation, STOU needed to develop a model for measuring readiness for online testing and to measure the readiness level of its students. The framework of Khairuddin et al. (2020) for measuring undergraduate students' readiness was adopted. The framework describes students' readiness for online learning across six dimensions: technology availability, technology utilization, self-confidence, acceptance, self-directed learning (SL) and training. It offers higher education institutions that deliver distance learning an avenue for applying the findings to promote readiness for online testing and thereby ensure equal access to online testing for their students. The objectives of the study were therefore: (1) to develop a model for measuring student readiness for online testing and (2) to study the level of readiness of undergraduate students for online testing.

Historical development

Skinner (1965) emphasized the crucial role of readiness in determining success or failure in various tasks: readiness enables individuals to succeed, while a lack of readiness can lead to difficulties. Downing and Thackrey (1971) identified four factors of readiness: physical, intellectual, environmental, and motivational and personality factors. The Texas Education Agency (2008) recommended that institutions provide technology training to enhance learners' readiness and reduce anxiety about online testing. Well-prepared students are more likely to succeed than inadequately prepared students.

Existing literature reveals a number of factors associated with readiness for online testing: technology readiness, ability to use technology, self-confidence in online testing, acceptance of online testing (AC), SL and training readiness (TR).

  • (1)

    Technology readiness refers to students' preparedness to use digital tools and platforms required for online testing. Tang et al. (2021) found that technology readiness is crucial for addressing learning challenges. Maryani et al. (2023) found that technology readiness enables students to effectively navigate digital learning resources, platforms and devices. Enhancing technology readiness is paramount for students' success in online testing environments.

  • (2)

    Ability to use technology refers to students' capacity to use the technology needed to take online tests. Lee et al. (2016) found that students' computer skills, attitudes toward technology, learning styles and support from instructors and peers all influence the use of technology. Abduvakhidov et al. (2021) emphasized the importance of digital skills for students to succeed in a digital learning environment. Therefore, it is crucial to provide students with the necessary skills and tools for effective test preparation in digital environments.

  • (3)

    Self-confidence is the characteristic of students that demonstrates their thoughts, decision-making abilities, courage and confidence to accomplish various tasks related to online testing, even in the face of obstacles. Bandura (1977) states that self-confidence is considered one of the most important psychological factors affecting success. Therefore, students who possess these characteristics are more likely to succeed in online testing.

  • (4)

    AC is the perception of students towards the ease, convenience, speed and efficiency of using online testing, along with recognizing the value and benefits of online testing. Mohd et al. (2015) found that perceiving online testing as being easy encourages students to take online testing frequently. Al-Qdah and Ababneh (2017) discovered that online testing was perceived to offer automated results and instant feedback, thereby fostering a more rapid learning process.

  • (5)

    SL refers to students' intrinsic desire for knowledge, curiosity, exploration from various sources and the motivation to learn about online testing independently. Long (1994) states that there are psychological mechanisms through which learners intentionally guide themselves to acquire knowledge and comprehend how to resolve problems. SL empowers students to take ownership of their learning process, facilitating adaptability and resilience in digital contexts.

  • (6)

    TR is about enhancing knowledge and skills related to online testing so that students can participate in online testing. Cabero-Almenara et al. (2021) found that higher education should support the development of digital competencies by offering training and resources for students, integrating technology into the curriculum and promoting it as a core competency for all learners.

Currently, educators are interested in various dimensions of online testing, including the following three:

  • (1)

    Learners’ perceptions of online testing. Fageeh (2015) found that students did not feel anxious about online testing but enjoyed using e-learning environments. They felt that online assessment benefited them and intended to use the grade center for practicing both core and elective subjects. Khan et al. (2021) found that significant latent variables of students’ perceptions of online testing included pedagogy, validity and reliability, affective factors, practicality and security.

  • (2)

    Participation in online testing. Sugilar (2015) revealed that factors that contributed to students' decision to engage in online testing were self-confidence in computer usage, perception of the ease of online testing, understanding the significance of online testing, recognizing the value of online testing and consideration of the associated costs.

  • (3)

    Assessment of readiness for online testing. Mohd et al. (2015) found that students perceived online testing as easy to use and regarded it as a learning tool.

The literature review covers surveys on perceptions of, participation in and readiness assessment for online testing, but studies on measurement models of online testing readiness (REA) are rare. The researcher therefore sought to develop a model to measure undergraduate students' readiness for online testing. How many factors contribute to an effective model? What specific indicators should each factor entail? Further, what is the readiness level of undergraduate students in Thai distance education?

Methodology

Research informants and samples

In total, 12 focus group informants for the verification of the indicators and the appropriateness of the model were purposively sampled and divided into three groups: (1) six senior experts with a minimum of three years of experience in online testing and/or development of online testing, (2) three senior experts with a minimum of three years of experience in measurement and/or assessment and (3) three lecturers with a minimum of three years of experience in online testing.

Eight undergraduate students enrolled in the 2022 academic year at STOU who were willing to participate were chosen to verify the language clarity and comprehensibility of the questionnaire prior to content validation and the pilot test.

In total, 870 undergraduate students from 11 academic disciplines who enrolled in the 2022 academic year at STOU were sampled for developing the online testing readiness model. They were divided into two groups: Group 1 comprised 432 participants and was used for exploratory factor analysis (EFA), and Group 2 comprised 438 participants and was used for second-order confirmatory factor analysis (CFA). Lorenzo-Seva (2022) argued against using the same sample for both EFA and CFA and advocated dividing participants into two groups; this approach aligns with del Rey et al. (2021), who likewise split their participants into two groups. The sample size allocation was considered appropriate and accords with Tabachnick and Fidell's (2013) recommendation that a sample of 300 is suitable for factor analysis. The two groups were obtained through multi-stage random sampling, and Google Forms was used for data collection.
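
A minimal sketch of the two-group division is shown below. It uses a simple random split in pandas rather than the SOLOMON subsample-equivalence procedure of Lorenzo-Seva (2022), and the data frame and file name are hypothetical stand-ins for the survey data:

```python
import pandas as pd

# Hypothetical data frame of questionnaire responses: one row per student,
# one column per item (V1-V33) plus background variables.
responses = pd.read_csv("responses_2022.csv")  # assumed file name

# Simple random split of the 870 cases into an EFA group (n = 432)
# and a CFA group (n = 438); the study itself drew both groups through
# multi-stage random sampling.
group1_efa = responses.sample(n=432, random_state=2022)
group2_cfa = responses.drop(group1_efa.index)

print(len(group1_efa), len(group2_cfa))  # 432 438
```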

Materials and tools

The questionnaire, developed by the researcher through a comprehensive literature review and validated by experts, was used. It was divided into two parts: general information and a four-point rating scale covering six factors of students' readiness; the four-point format was chosen to prevent neutral, middle-of-the-road answers (Kerlinger, 1964).

Interviews with the eight students to validate the questionnaire found that most questions were clear and understandable; only a small number needed further revision.

The item-objective congruence (IOC) technique revealed that all questions were suitable, with IOC values ranging from 0.67 to 1.00. This meets Rovinelli and Hambleton's (1977) criterion that questions with an IOC value of 0.50 or higher are acceptable.
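
For readers unfamiliar with the index, a minimal sketch of the IOC computation is given below. The ratings matrix is hypothetical: each expert scores an item +1 (congruent with its objective), 0 (unsure) or −1 (incongruent), and the IOC of an item is the mean rating across experts:

```python
import numpy as np

def ioc(ratings: np.ndarray) -> np.ndarray:
    """Item-objective congruence: mean expert rating per item.

    ratings: array of shape (n_items, n_experts) with entries in {-1, 0, +1}.
    Items with IOC >= 0.50 are conventionally retained (Rovinelli and Hambleton, 1977).
    """
    return ratings.mean(axis=1)

# Hypothetical ratings from three experts for four items.
ratings = np.array([
    [1, 1, 1],   # IOC = 1.00
    [1, 1, 0],   # IOC = 0.67
    [1, 0, 0],   # IOC = 0.33 -> would be revised or dropped
    [1, 1, -1],  # IOC = 0.33
])
print(ioc(ratings).round(2))
```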

Reliability analysis using Cronbach's alpha yielded a coefficient of 0.90 for the entire questionnaire, with coefficients for the individual factors ranging from 0.86 to 0.96, indicating excellent reliability (George and Mallery, 2010).
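
As a sketch of how such alpha coefficients can be reproduced from raw item scores, the standard formula α = k/(k − 1) · (1 − Σ item variances / total-score variance) can be coded directly; the item data frame below is a synthetic stand-in for the questionnaire responses:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (one row per respondent)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Synthetic stand-in: 870 respondents x 33 items scored 1-4.
rng = np.random.default_rng(7)
responses = pd.DataFrame(rng.integers(1, 5, size=(870, 33)),
                         columns=[f"V{i}" for i in range(1, 34)])
print(round(cronbach_alpha(responses), 2))                                   # whole questionnaire
print(round(cronbach_alpha(responses[["V1", "V2", "V3", "V4", "V5"]]), 2))   # one factor
```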

Data analysis

The analysis was divided into four parts: (1) analysis of the variables using descriptive statistics and item-total correlations (ITCs), (2) verification of the correlations among observed variables using the Pearson correlation coefficient (r), (3) factor analysis of the model and (4) analysis of the level of REA using descriptive statistics.
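
A minimal sketch of analysis parts (1) and (2), using a synthetic stand-in for the item data, is shown below. The item-total correlation here correlates each item with the total score, one common convention (a corrected variant would exclude the item from its own total):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the survey data: 870 students x 33 items scored 1-4.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 5, size=(870, 33)),
                     columns=[f"V{i}" for i in range(1, 34)])

# Part 1: descriptive statistics per item (mean, SD, skewness, kurtosis)
# plus the item-total correlation (ITC).
total_score = items.sum(axis=1)
descriptives = pd.DataFrame({
    "mean": items.mean(),
    "sd": items.std(ddof=1),
    "skew": items.skew(),
    "kurtosis": items.kurtosis(),
    "itc": items.apply(lambda col: col.corr(total_score)),
})

# Part 2: Pearson correlations among all observed variables.
correlations = items.corr(method="pearson")
print(descriptives.round(2).head())
```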

Lorenzo-Seva (2022) stated that EFA and CFA are two crucial steps in the factor analysis process. In this study, EFA was used as the first step to determine the appropriate number of factors and their observed variables, and CFA was used as the second step, following Brown (2015). An oblique rotation method was used. The criteria for the EFA were as follows: (1) each factor should have an eigenvalue >1 (Kaiser, 1960), (2) the factor loading of each variable within a factor should exceed 0.30 and (3) each factor should consist of at least three variables (Kim and Mueller, 1978).
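
A sketch of this EFA workflow in Python is shown below, assuming the factor_analyzer package; the paper does not state which software or which oblique rotation was used, so "oblimin" here is only an illustrative choice:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def run_efa(items: pd.DataFrame, n_factors: int = 5) -> pd.DataFrame:
    """items: data frame of the 33 observed variables for the EFA group (n = 432)."""
    # Sampling adequacy and sphericity checks reported in the Findings section.
    chi_square, p_value = calculate_bartlett_sphericity(items)
    _, kmo_overall = calculate_kmo(items)
    print(f"Bartlett chi2 = {chi_square:.3f}, p = {p_value:.4f}, KMO = {kmo_overall:.2f}")

    # Oblique rotation; retention criteria: eigenvalue > 1, loading > 0.30,
    # at least three variables per factor.
    fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
    fa.fit(items)

    eigenvalues, _ = fa.get_eigenvalues()
    print("Eigenvalues > 1:", eigenvalues[eigenvalues > 1].round(2))

    loadings = pd.DataFrame(fa.loadings_, index=items.columns)
    loadings["h2"] = fa.get_communalities()  # communalities (threshold 0.25)
    return loadings.round(2)
```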

Second-order CFA was conducted to verify the structural validity of the model. The omega (ω) index (McDonald, 1970), Cronbach's alpha (α) and the construct reliability (CR) index were used to verify the reliability of the model. The average variance extracted (AVE) index was used to verify convergent validity, and the maximum shared variance (MSV) and average shared variance (ASV) indices were used to verify discriminant validity.
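
The second-order measurement model can be written in lavaan-style syntax; the sketch below assumes the Python semopy package (the paper does not name its SEM software) and uses the factor and indicator labels from Tables 2 and 4:

```python
import semopy

# Five first-order factors and one second-order readiness factor (REA).
MODEL_DESC = """
SE =~ SE1 + SE2 + SE3 + SE4 + SE5 + SE6 + SE7 + SE8 + SE9
SL =~ SL1 + SL2 + SL3 + SL4 + SL5 + SL6
AT =~ AT1 + AT2 + AT3 + AT4 + AT5
AC =~ AC1 + AC2 + AC3 + AC4 + AC5 + AC6 + AC7
TR =~ TR1 + TR2 + TR3 + TR4 + TR5 + TR6
REA =~ SE + SL + AT + AC + TR
"""

def fit_second_order_cfa(data):
    """data: CFA-group data frame (n = 438) with columns named as in the model."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)
    fit_stats = semopy.calc_stats(model)  # chi2/df, CFI, TLI, RMSEA, among others
    estimates = model.inspect()           # loadings and other parameter estimates
    return fit_stats, estimates
```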

Findings

The focus group verification revealed that the model was appropriate; however, adjustments to the names of factors and indicators were suggested for the development of the model. The 33 indicators were categorized into six groups: technology readiness (five indicators), ability to use technology (five indicators), self-confidence (four indicators), AC (seven indicators), SL (six indicators) and TR (six indicators).

The analysis of the model was divided into two parts:

  • (1)

    The EFA.

    • It was found that the means (x̄) ranged from 2.75 to 3.37 and the standard deviations (SD) from 0.64 to 1.01. Most variables exhibited negative skewness (Sk) and kurtosis (Ku) values, indicating scores higher than the sample mean and considerable data dispersion. The skewness and kurtosis values deviated from zero but did not exceed ±2, indicating a near-normal distribution (George and Mallery, 2010).

The ITC values of all variables showed positive correlations between 0.57 and 0.85, demonstrating strong and acceptable discriminating power according to Ebel's (1972) criterion that an ITC value of 0.40 or above signifies excellent discriminative ability, as illustrated in Table 1.

  • Correlation analysis among pairs of variables revealed that every pair exhibited statistically significant correlations at the 0.01 level. The highest correlations were between V1 and V2 (r = 0.908), followed by V2 and V4 (r = 0.888); the lowest correlation was between V5 and V32 (r = 0.346).

  • Bartlett's test of sphericity indicated a statistically significant difference between the correlation matrix of the variables and the identity matrix (χ2 = 15,996.893, df = 528, p < 0.001). The Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy was above 0.90, which is deemed excellent (Kaiser, 1974).

  • The communality (h2) analysis revealed that all variables had communalities ranging from 0.35 to 0.90, exceeding Beavers et al.'s (2013) threshold of 0.25. Every variable was therefore suitable for measuring the common factors, as illustrated in Table 2.

  • The EFA results were as follows:

Factor 1 – Self-efficacy (SE) in utilizing technology was described by nine variables, with loadings ranging from 0.71 to 0.90. Ordered from highest to lowest loading, the variables were SE1, SE2, SE3, SE4, SE5, SE6, SE7, SE8 and SE9. The factor had an eigenvalue of 19.28 and accounted for 14.83% of the variance.

Factor 2 – SL was described by six variables, with loadings ranging from 0.75 to 0.93. Ordered from highest to lowest loading, the variables were SL1, SL2, SL3, SL4, SL5 and SL6. The factor had an eigenvalue of 2.44 and accounted for 13.36% of the variance.

Factor 3 – Adequacy of technology (AT) was described by five variables, with loadings ranging from 0.72 to 0.95. Ordered from highest to lowest loading, the variables were AT1, AT2, AT3, AT4 and AT5. The factor had an eigenvalue of 1.62 and accounted for 12.52% of the variance.

Factor 4 – AC was described by seven variables, with loadings ranging from 0.54 to 0.87. Ordered from highest to lowest loading, the variables were AC1, AC2, AC3, AC4, AC5, AC6 and AC7. The factor had an eigenvalue of 1.24 and accounted for 13.26% of the variance.

Factor 5 – TR was described by six variables, with loadings ranging from 0.78 to 0.93. Ordered from highest to lowest loading, the variables were TR1, TR2, TR3, TR4, TR5 and TR6. The factor had an eigenvalue of 1.13 and accounted for 11.45% of the variance, as illustrated in Table 2.

  • (2)

    The second-order CFA.

    • It revealed that the means ranged from 2.74 to 3.28 and the standard deviations from 0.74 to 0.98. Most variables exhibited negative skewness and kurtosis values, indicating scores higher than the sample mean and considerable data dispersion. The skewness and kurtosis values deviated from zero but did not exceed ±2, indicating that the variables followed a near-normal distribution (George and Mallery, 2010).

The ITC values of all variables showed positive correlations between 0.62 and 0.85, demonstrating strong and acceptable discriminating power per Ebel's (1972) criterion.

  • Correlation analysis among pairs of variables revealed that every pair exhibited statistically significant correlations at the 0.01 level. The highest correlations were between AT3 and AT4 (r = 0.799), followed by AT2 and AT4 (r = 0.794); the lowest correlation was between SE9 and TR2 (r = 0.338).

  • Bartlett's test of sphericity indicated a statistically significant difference between the correlation matrix of the variables and the identity matrix (χ2 = 14,695.480, df = 528, p < 0.001). The KMO was above 0.90, which is deemed excellent (Kaiser, 1974).

  • The second-order CFA revealed that the model was congruent with empirical data, as illustrated in Table 3.

The first-order analysis revealed that the factor loadings (β) of the 33 variables were statistically significant at the 0.01 level. The variables with the highest loadings were AT2 (β = 0.89), AT4 (β = 0.89) and TR4 (β = 0.89), and the lowest was SE9 (β = 0.68). The 33 variables shared between 47 and 80% of their variance with the model.

SE: The β values of all variables were statistically significant at the 0.01 level. The variable with the highest loading was SE6 (β = 0.88) and the lowest was SE9 (β = 0.68); these two variables shared 78 and 47% of their variance with the factor, respectively.

SL: The β values of all variables were statistically significant at the 0.01 level. The variables sharing the highest loading were SL1 (β = 0.87) and SL4 (β = 0.87), and those sharing the lowest were SL5 (β = 0.81) and SL6 (β = 0.81); the two pairs shared 76 and 66% of their variance with the factor, respectively.

AT: The β values of all variables were statistically significant at the 0.01 level. The variables sharing the highest loading were AT2 (β = 0.89) and AT4 (β = 0.89), and the lowest was AT5 (β = 0.73); the shared variance was 79 and 54%, respectively.

AC: The β values of all variables were statistically significant at the 0.01 level. The variable with the highest loading was AC3 (β = 0.88) and the lowest was AC4 (β = 0.70); these two variables shared 78 and 49% of their variance with the factor, respectively.

TR: The β values of all variables were statistically significant at the 0.01 level. The variable with the highest loading was TR4 (β = 0.89) and the lowest was TR2 (β = 0.78); these two variables shared 80 and 60% of their variance with the factor, respectively.

The second-order analysis revealed that the β values of the five REA factors ranged from 0.81 to 0.96 and were statistically significant at the 0.01 level. In descending order, the factors were AC, SE, SL, TR and AT, with β values of 0.96, 0.91, 0.87, 0.82 and 0.81, respectively.

Verification of the model's reliability using the ω and α estimation methods indicated that the five factors had ω values ranging from 0.919 to 0.942, meeting Rodriguez et al.'s (2016) criterion of ω ≥ 0.80, and α values ranging from 0.932 to 0.949, meeting the Hair et al. (2010) and Kline (2011) criterion of α ≥ 0.70. Further, the CR values of the five factors ranged from 0.916 to 0.943, meeting the Hair et al. (2010) and Kline (2011) criterion of CR ≥ 0.70.

Verification of the construct validity of the model through convergent validity analysis using the AVE index revealed that the five factors had AVE values ranging from 0.668 to 0.735. These values meet the Hair et al. (2010) and Kline (2011) criteria of CR > AVE and AVE ≥ 0.50, indicating that the model exhibited convergent validity.

For discriminant validity, the ASV index revealed that the five factors had ASV values ranging from 0.520 to 0.667, meeting the Hair et al. (2010) criterion (ASV < AVE). Furthermore, the MSV values of three factors (SL, AT and TR), which ranged from 0.619 to 0.689, met the Hair et al. (2010) criterion (MSV < AVE); the SE and AC factors did not. This suggests that all factors demonstrate discriminant validity except SE and AC, as illustrated in Table 4.

  • (3)

    The analysis showed that the 870 students had a high level of readiness for online testing (x¯ = 2.99, SD = 0.66).

For each factor, SE was at a high level (x¯ = 2.98, SD = 0.75). The items with the highest readiness level were SE4 (x¯ = 3.02, SD = 0.91) and SE7 (x¯ = 3.02, SD = 0.87), followed by SE6 (x¯ = 3.00, SD = 0.85).

SL was at the high level (x¯ = 3.05, SD = 0.73). The item with the highest readiness level was SL1 (x¯ = 3.13, SD = 0.81), followed by SL3 (x¯ = 3.08, SD = 0.80).

AT was at the high level (x¯ = 2.94, SD = 0.79). The item with the highest readiness level was AT5 (x¯ = 2.99, SD = 0.88), followed by AT4 (x¯ = 2.97, SD = 0.91).

AC was at the high level (x¯ = 3.09, SD = 0.69). The item with the highest readiness level was AC7 (x¯ = 3.24, SD = 0.79), followed by AC1 (x¯ = 3.14, SD = 0.80).

TR was at the high level (x¯ = 2.88, SD = 0.81). The item with the highest readiness level was TR6 (x¯ = 2.92, SD = 0.91), followed by TR3 (x¯ = 2.91, SD = 0.92) and TR5 (x¯ = 2.91, SD = 0.92).

Discussion

The research revealed that the developed model consisted of five factors: SE, SL, AT, AC and TR. Four factors corresponded to Khairuddin et al.'s (2020) findings, while two of their factors, ability to use technology and self-confidence in online testing, were structurally and statistically merged into SE, which formed the fifth factor of the drafted model. The second-order CFA confirmed the construct validity of the drafted model. This suggests that students aiming to take online tests need to be well prepared in terms of SE, SL, AT, AC and TR. In particular, SE showed a tendency to enhance students' readiness, which corresponds with Bandura's (1977) self-efficacy theory.

The most important factor of the model was SE. The indicator with the highest factor loading was the ability to connect to the internet via mobile phones, iPads and/or tablets. This indicator holds significant importance for open universities, and the university may consider implementing activities that promote students' technology skills. This corresponds to Shraim (2019), who found that learners' readiness played a crucial role in online testing. Rafique et al. (2021) observed that students in Pakistan demonstrated considerable confidence in using computers and the internet during the COVID-19 outbreak. This confidence in specific technological skills underscores the importance of comprehensive skill development in fostering an effective online learning environment. These studies collectively suggest that a holistic approach, addressing both technological skills and broader learning competencies, is essential for enhancing student readiness in online contexts.

SL was the second most important factor. The indicator with the highest factor loading was the impetus to promptly seek a way to learn something essential. Self-directed students sought knowledge independently, experimented, practiced, improved and developed themselves until they became proficient and capable of applying the knowledge for their own benefit. This corresponds with Long (1994), who held that SL is a psychological process through which learners manage and guide themselves to create knowledge, understand problem-solving and aim to overcome various obstacles independently. Rafique et al. (2021) found that SL was a critical component of students' readiness for online learning. Students should seek support in dealing with learning challenges and set goals for SL, which is a crucial characteristic in further developing students' knowledge and essential skills.

AT was the third most important factor. The indicator with the highest factor loading was the presence of a computer with a working audio system and/or microphone. Students who had the necessary equipment for online testing were more likely to be prepared for it. This corresponds with Wagiran et al. (2022), who found that students with digital technology proficiency, adequate equipment, user satisfaction and motivation were ready for e-learning. Similarly, students with adequate equipment were more inclined to be prepared for and excel in online testing.

AC was the fourth most important factor. The indicator with the highest factor loading was the belief that the online testing system could be used continuously once logged in. This corresponds to Mohd et al. (2015), who found that online testing enhanced students' learning efficiency and performance in various activities. Davis et al. (1989) found that perceived usefulness and perceived ease of use were, respectively, the primary and secondary factors determining users' intention to use computer technology. Joo et al. (2011) discovered that usability and learnability significantly influence users' technology acceptance, indicating that the usability and learnability of technology affect its acceptance.

TR was the fifth most important factor. The indicator with the highest factor loading was practice in using essential equipment for online testing. Students who had undergone training on online testing equipment were ready for online tests and tended to succeed. This corresponds to Nisperos (2014), who found that it was crucial for universities to provide technology training before students engaged in online activities, and to Budiman and Syafrony (2023), who found that digital literacy training optimized the lecture process and improved students' digital literacy.

Through the second-order CFA, the model was congruent with the empirical data and constructively valid. The internal consistency of the model's five factors, as measured by the ω and α values, met the criteria (ω ≥ 0.80, Hair et al., 2010; α ≥ 0.70, Kline, 2011), and the model's indicators were aligned in the same direction at a high level (Viladrich et al., 2017). Verification of the CR index revealed that all factors met the criterion (CR ≥ 0.70; Hair et al., 2010; Kline, 2011), indicating that all factors exhibited high reliability.

Verification of the model's convergent validity using the AVE index indicated that all five factors met the criteria (CR > AVE, AVE ≥ 0.50; Hair et al., 2010; Kline, 2011). This suggests that the model exhibits a high level of convergent validity and that the individual items serve as good indicators of their factors.

The ASV and MSV indices of SL, AT and TR were discriminantly valid (ASV < AVE; Hair et al., 2010), but those of SE and AC were not, suggesting that SE and AC may be measured with some overlap; further investigation of these variables is therefore recommended. Rahim and Magner (1995) and Goh and Blake (2021) stated that when the factor loadings of each indicator are high and the CR and AVE values for all factors exceed the criteria, the results are acceptable. Because the factor loadings and the CR and AVE values in this study were higher than the criteria, discriminant validity can, on the whole, be accepted for this model.
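
For reference, these indices follow directly from the standardized loadings and the latent factor correlations; a minimal sketch is shown below, using the AT loadings from Table 4 but an invented factor correlation matrix purely for illustration:

```python
import numpy as np

def reliability_validity(loadings, factor_corr, factor_index):
    """CR, AVE, MSV and ASV for one factor.

    loadings:     standardized loadings of the factor's indicators
    factor_corr:  correlation matrix of all latent factors
    factor_index: row/column of the factor of interest in factor_corr
    """
    lam = np.asarray(loadings, dtype=float)
    errors = 1 - lam ** 2                                   # standardized error variances
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())   # construct reliability
    ave = (lam ** 2).mean()                                 # average variance extracted
    squared_r = np.delete(np.asarray(factor_corr)[factor_index] ** 2, factor_index)
    msv, asv = squared_r.max(), squared_r.mean()            # discriminant validity needs MSV, ASV < AVE
    return cr, ave, msv, asv

# AT loadings from Table 4; the 3 x 3 factor correlation matrix is invented.
at_loadings = [0.87, 0.89, 0.88, 0.89, 0.73]
phi = np.array([[1.00, 0.75, 0.70],
                [0.75, 1.00, 0.68],
                [0.70, 0.68, 1.00]])
print(reliability_validity(at_loadings, phi, factor_index=2))
```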

The study indicated a high level of readiness among students for online testing, and the analysis of the factors (SE, SL, AT, AC and TR) showed strong preparation in all areas. This aligns with the comprehensive integration of technology and innovation by STOU (2023), which has enhanced lifelong learning opportunities since 2022. Widespread technology use in distance education, from enrollment to exam administration, has helped students become more accustomed to and proficient with these tools. The use of Google Forms for data collection may have influenced the study's findings, and the age of the sample is another factor worth consideration. The majority of participants were aged 25–30 years, followed by those aged 31–35 years. Andrea et al. (2016) grouped individuals aged 25–40 into Generation Y, a generation that grew up during rapid technological advancement, matured in the presence of computers and the internet and can therefore swiftly access the online world; consequently, it is the generation with the highest internet usage. Data collection via Google Forms primarily engaged students already familiar with online tools, reflecting van Dijk's (2005) discussion of the digital divide and Holsapple and Lee-Post's (2006) concept of e-readiness. The findings therefore indicate a high level of readiness among the questionnaire respondents but might not fully represent the perspectives of less technologically prepared students, highlighting the need for more inclusive data collection covering the full spectrum of students.

Conclusion

The model for measuring the online testing readiness of undergraduate students in Thailand's distance education consists of five factors: SE, SL, AT, AC and TR. The model was congruent with the empirical data. Reliability verification using the ω, α and CR indices revealed high reliability, and convergent validity verification confirmed that all factors were convergently valid. Likewise, the ASV index showed that all factors were discriminantly valid, while the MSV index showed discriminant validity only for SL, AT and TR; SE and AC did not exhibit discriminant validity, indicating that these two factors might be measured with some overlap. Further investigation of these variables is recommended. The survey of readiness levels indicated that undergraduate students in Thailand's distance education had a high level of REA, both overall and for the individual factors.

Recommendations

The open university that offers distance learning programs in Thailand should apply the model to measure the readiness of its undergraduate distance education students for online testing. The findings can be used to plan, improve and develop the REA of its students.

Based on the analysis of overall readiness, the study found that students were highly prepared for online testing across all five factors. Among the individual factors, TR had the lowest level of readiness: most students require training to prepare for online testing. Higher education institutions offering distance learning should therefore prioritize this factor and may design activities that promote the TR of their students. Initiatives could include developing digital skills and competencies, creating video clips and conducting surveys to gauge student satisfaction with the online testing process.

Table 1. Descriptive statistics and ITC

| Variable | Item | x̄ | SD | Sk | Ku | ITC |
|---|---|---|---|---|---|---|
| Technology readiness | | | | | | |
| V1 | I have a ready-to-use webcam computer | | | | | |
| V2 | I have a computer with ready-to-use audio and microphone capabilities | 2.91 | 0.95 | −0.58 | −0.56 | 0.76 |
| V3 | I have a computer with internet/Wi-Fi connectivity | 2.98 | 0.93 | −0.69 | −0.31 | 0.69 |
| V4 | I have a computer with a pre-installed web browser for online testing | 2.96 | 0.95 | −0.67 | −0.41 | 0.76 |
| V5 | I have a stable internet/Wi-Fi signal for online testing | 2.92 | 0.88 | −0.52 | −0.37 | 0.67 |
| Ability to use technology | | | | | | |
| V6 | I can use the computer for online testing | 2.98 | 0.88 | −0.60 | −0.30 | 0.81 |
| V7 | I can connect to the internet with mobile phone/iPad/tablet | 3.08 | 0.86 | −0.73 | −0.06 | 0.77 |
| V8 | I can use a mobile phone/iPad/tablet to take/answer sheets | 3.12 | 0.85 | −0.87 | 0.34 | 0.75 |
| V9 | I can upload images from a mobile phone/iPad/tablet to the computer | 3.03 | 0.90 | −0.66 | −0.32 | 0.78 |
| V10 | I can efficiently use technology to communicate with proctors | 3.02 | 0.86 | −0.54 | −0.42 | 0.81 |
| Self-confidence | | | | | | |
| V11 | I am confident that I can follow all the online testing guidelines of the university | 2.94 | 0.89 | −0.62 | −0.25 | 0.82 |
| V12 | I am confident that I can use tools to communicate with the exam proctor | 3.02 | 0.81 | −0.63 | 0.08 | 0.85 |
| V13 | Encountering technical issues, I am confident that I can resolve them by following the recommendations of technical support | 2.75 | 0.86 | −0.24 | −0.58 | 0.72 |
| V14 | If any issues arise during the online testing, I can decide how to handle the situation | 2.82 | 0.85 | −0.27 | −0.57 | 0.79 |
| Acceptance of online testing | | | | | | |
| V15 | I believe that accessing online testing systems can be done easily, conveniently, and quickly | 3.01 | 0.84 | −0.53 | −0.34 | 0.83 |
| V16 | I believe the online testing system can be used continuously | 2.94 | 0.85 | −0.50 | −0.33 | 0.81 |
| V17 | I believe that the online testing system can accommodate a large number of concurrent test takers | 2.86 | 0.84 | −0.39 | −0.41 | 0.67 |
| V18 | I believe that the online testing is as effective as the on-site testing | 2.92 | 0.85 | −0.56 | −0.18 | 0.70 |
| V19 | I believe that online testing reduces errors in responses | 2.97 | 0.86 | −0.59 | −0.23 | 0.77 |
| V20 | I believe that online testing is suitable for assessing learning in distance education | 3.09 | 0.84 | −0.75 | 0.08 | 0.80 |
| V21 | I believe that online testing saves student’s time and travel expenses | 3.37 | 0.64 | −0.62 | −0.07 | 0.57 |
| Self-directed learning | | | | | | |
| V22 | I am intrinsically motivated to learn through online testing | 3.05 | 0.87 | −0.57 | −0.47 | 0.77 |
| V23 | I enjoy searching for new knowledge myself | 3.22 | 0.73 | −0.77 | 0.54 | 0.71 |
| V24 | I promptly seek a way to learn something essential on online testing | 3.25 | 0.74 | −0.85 | 0.69 | 0.73 |
| V25 | I solve the problems myself if there are problems arisen during online testing | 3.16 | 0.78 | −0.79 | 0.41 | 0.74 |
| V26 | I knew what I had to learn if I were to take an online testing | 3.14 | 0.82 | −0.74 | 0.02 | 0.76 |
| V27 | I immediately sought assistance if I encountered online testing problems | 3.20 | 0.83 | −0.86 | 0.16 | 0.69 |
| Training readiness | | | | | | |
| V28 | I viewed video clips and listened to advice regarding online testing | 3.00 | 0.81 | −0.49 | −0.23 | 0.75 |
| V29 | I underwent software training for online testing | 2.80 | 0.93 | −0.44 | −0.61 | 0.73 |
| V30 | I practiced using essential technology for online testing | 2.97 | 0.88 | −0.62 | −0.24 | 0.75 |
| V31 | I practiced using essential equipment for online testing | 2.78 | 0.98 | −0.45 | −0.77 | 0.70 |
| V32 | I experimented using online testing systems | 2.81 | 1.01 | −0.50 | −0.79 | 0.66 |
| V33 | I practiced using communication tools for online conversations | 2.83 | 0.98 | −0.48 | −0.75 | 0.71 |

Source(s): Table created by author

Table 2. EFA results

| Factor | Variable | h² | Factor loading | Eigenvalue | % of variance | Cumulative variance (%) |
|---|---|---|---|---|---|---|
| SE | SE1 (V7) | 0.81 | 0.90 | 19.28 | 14.83 | 14.83 |
| | SE2 (V9) | 0.82 | 0.90 | | | |
| | SE3 (V6) | 0.78 | 0.87 | | | |
| | SE4 (V8) | 0.75 | 0.87 | | | |
| | SE5 (V10) | 0.78 | 0.86 | | | |
| | SE6 (V12) | 0.79 | 0.84 | | | |
| | SE7 (V11) | 0.73 | 0.81 | | | |
| | SE8 (V14) | 0.70 | 0.76 | | | |
| | SE9 (V13) | 0.59 | 0.71 | | | |
| SL | SL1 (V24) | 0.86 | 0.93 | 2.44 | 13.36 | 28.19 |
| | SL2 (V23) | 0.78 | 0.88 | | | |
| | SL3 (V25) | 0.76 | 0.86 | | | |
| | SL4 (V26) | 0.76 | 0.86 | | | |
| | SL5 (V27) | 0.62 | 0.78 | | | |
| | SL6 (V22) | 0.67 | 0.75 | | | |
| AT | AT1 (V2) | 0.90 | 0.95 | 1.62 | 12.52 | 40.71 |
| | AT2 (V4) | 0.87 | 0.93 | | | |
| | AT3 (V1) | 0.84 | 0.92 | | | |
| | AT4 (V3) | 0.81 | 0.90 | | | |
| | AT5 (V5) | 0.57 | 0.72 | | | |
| AC | AC1 (V16) | 0.79 | 0.87 | 1.24 | 13.26 | 53.97 |
| | AC2 (V18) | 0.69 | 0.83 | | | |
| | AC3 (V15) | 0.74 | 0.81 | | | |
| | AC4 (V17) | 0.65 | 0.80 | | | |
| | AC5 (V20) | 0.71 | 0.79 | | | |
| | AC6 (V19) | 0.68 | 0.78 | | | |
| | AC7 (V21) | 0.35 | 0.54 | | | |
| TR | TR1 (V31) | 0.87 | 0.93 | 1.13 | 11.45 | 65.42 |
| | TR2 (V32) | 0.84 | 0.91 | | | |
| | TR3 (V33) | 0.81 | 0.90 | | | |
| | TR4 (V29) | 0.80 | 0.89 | | | |
| | TR5 (V28) | 0.73 | 0.78 | | | |
| | TR6 (V30) | 0.72 | 0.78 | | | |

Source(s): Table created by author

Table 3. Fit construct indices

| Index | Threshold | Estimate | Interpretation |
|---|---|---|---|
| χ²/df | <5* | 3.294 | pass |
| Comparative fit index (CFI) | >0.90** | 0.924 | pass |
| Tucker–Lewis index (TLI) | >0.90** | 0.917 | pass |
| Root mean square error of approximation (RMSEA) | ≤0.08*** | 0.072 | pass |
| Standardized root mean square residual (SRMR) | <0.08*** | 0.054 | pass |

Source(s): Table created by author

Table 4. Second-order CFA results

| Factor | Variable | β | R² | ω | α | CR | AVE | MSV | ASV |
|---|---|---|---|---|---|---|---|---|---|
| First-order analysis | | | | | | | | | |
| SE | SE1 | 0.86** | 0.74 | 0.942 | 0.949 | 0.943 | 0.674 | 0.762 | 0.627 |
| | SE2 | 0.83** | 0.68 | | | | | | |
| | SE3 | 0.83** | 0.69 | | | | | | |
| | SE4 | 0.80** | 0.65 | | | | | | |
| | SE5 | 0.86** | 0.73 | | | | | | |
| | SE6 | 0.88** | 0.78 | | | | | | |
| | SE7 | 0.83** | 0.69 | | | | | | |
| | SE8 | 0.81** | 0.65 | | | | | | |
| | SE9 | 0.68** | 0.47 | | | | | | |
| SL | SL1 | 0.87** | 0.76 | 0.936 | 0.934 | 0.939 | 0.708 | 0.689 | 0.582 |
| | SL2 | 0.85** | 0.72 | | | | | | |
| | SL3 | 0.85** | 0.72 | | | | | | |
| | SL4 | 0.87** | 0.76 | | | | | | |
| | SL5 | 0.81** | 0.66 | | | | | | |
| | SL6 | 0.81** | 0.66 | | | | | | |
| AT | AT1 | 0.87** | 0.77 | 0.923 | 0.932 | 0.927 | 0.735 | 0.619 | 0.520 |
| | AT2 | 0.89** | 0.79 | | | | | | |
| | AT3 | 0.88** | 0.78 | | | | | | |
| | AT4 | 0.89** | 0.79 | | | | | | |
| | AT5 | 0.73** | 0.54 | | | | | | |
| AC | AC1 | 0.85** | 0.72 | 0.919 | 0.934 | 0.918 | 0.668 | 0.762 | 0.667 |
| | AC2 | 0.81** | 0.66 | | | | | | |
| | AC3 | 0.88** | 0.78 | | | | | | |
| | AC4 | 0.70** | 0.49 | | | | | | |
| | AC5 | 0.86** | 0.74 | | | | | | |
| | AC6 | 0.84** | 0.71 | | | | | | |
| | AC7 | 0.73** | 0.54 | | | | | | |
| TR | TR1 | 0.80** | 0.64 | 0.920 | 0.938 | 0.916 | 0.700 | 0.619 | 0.536 |
| | TR2 | 0.78** | 0.60 | | | | | | |
| | TR3 | 0.84** | 0.71 | | | | | | |
| | TR4 | 0.89** | 0.80 | | | | | | |
| | TR5 | 0.87** | 0.77 | | | | | | |
| | TR6 | 0.85** | 0.72 | | | | | | |
| Second-order analysis | | | | | | | | | |
| REA | SE | 0.91** | 0.84 | | | 0.994 | 0.693 | | |
| | SL | 0.87** | 0.76 | | | | | | |
| | AT | 0.81** | 0.65 | | | | | | |
| | AC | 0.96** | 0.91 | | | | | | |
| | TR | 0.82** | 0.68 | | | | | | |

Note(s): **p < 0.01

Source(s): Table created by author

References

Abduvakhidov, A.M., Mannapova, E.T. and Akhmetshin, E.M. (2021), “Digital development of education and universities: global challenges of the digital economy”, International Journal of Instruction, Vol. 14 No. 1, pp. 743-760, doi: 10.29333/iji.2021.14145a.

Al-Qdah, M. and Ababneh, I. (2017), “Comparing online and paper exams: performances and perceptions of Saudi students”, International Journal of Information and Education Technology, Vol. 7 No. 2, pp. 106-113, doi: 10.18178/ijiet.2017.7.2.850.

Almossa, S.Y. (2021), “University students' perspectives toward learning and assessment during COVID-19”, Education and Information Technologies, Vol. 26 No. 6, pp. 7163-7181, doi: 10.1007/s10639-021-10554-8.

Andrea, B., Gabriella, H.C. and Timea, J. (2016), “Y and Z generations at workplace”, Journal of Competitiveness, Vol. 8 No. 3, pp. 90-106.

Atchanpanya, N. (2020), “The COVID-19 virus is reshaping the global education system”, The Technology Source, available at: https://www.eef.or.th/30577-2/ (accessed 15 December 2023).

Bakhov, I., Opolska, N., Bogus, M., Anishchenko, V. and Biryukova, Y. (2021), “Emergency distance education in the conditions of COVID-19 pandemic: experience of Ukrainian universities”, Education Sciences, Vol. 11 No. 7, p. 364, doi: 10.3390/educsci11070364.

Bandura, A. (1977), “Self-efficacy: toward a unifying theory of behavioral change”, Psychological Review, Vol. 84 No. 2, pp. 191-215, doi: 10.1037/0033-295x.84.2.191.

Beavers, A.S., Lounsbury, J.W., Richards, J.K., Huck, S.W., Skolits, G.J. and Esquivel, S.L. (2013), “Practical considerations for using exploratory factor analysis in educational research”, Practical Assessment, Research and Evaluation, Vol. 18 No. 1, pp. 1-13.

Brown, T.A. (2015), Confirmatory Factor Analysis for Applied Research, 2nd ed., The Guilford Press, NY.

Budiman, R. and Syafrony, A.I. (2023), “The digital literacy of first-year students and its function in an online method of delivery”, Asian Association of Open Universities Journal, Vol. 18 No. 2, pp. 176-186, doi: 10.1108/aaouj-01-2023-0017.

Cabero-Almenara, J., Guillén-Gámez, F.D., Ruiz-Palmero, J. and Palacios-Rodríguez, A. (2021), “Digital competence of higher education professor according to DigCompEdu, Statistical research methods with ANOVA between fields of knowledge in different age ranges”, Education and Information Technologies, Vol. 26 No. 4, pp. 4691-4708, doi: 10.1007/s10639-021-10476-5.

Davis, F.D., Bagozzi, R.P. and Warshaw, P.R. (1989), “User acceptance of computer technology: a comparison of two theoretical models”, Management Science, Vol. 35 No. 8, pp. 982-1003, doi: 10.1287/mnsc.35.8.982.

del Rey, R., Ojeda, M. and Casas, J.A. (2021), “Validation of the sexting behavior and motives questionnaire (SBM-Q)”, Psicothema, Vol. 33 No. 2, pp. 287-295, doi: 10.7334/psicothema2020.207.

Downing, J. and Thackrey, D. (1971), Reading Readiness, University of London Press, NY.

Ebel, R.L. (1972), Essentials of Educational Measurement, Prentice-Hall, NJ.

Fageeh, A.I. (2015), “EFL student and faculty perceptions of and attitudes towards online testing in the medium of blackboard: promises and challenges”, JALT CALL Journal, Vol. 11 No. 1, pp. 41-62, doi: 10.29140/jaltcall.v11n1.183.

George, D. and Mallery, P. (2010), SPSS for Windows Step by Step: A Simple Guide and Reference, 10th ed., Allyn & Bacon, Boston, MA, 17.0 update.

Goh, P.S.C. and Blake, D. (2021), “E-readiness measurement tool: scale development and validation in a Malaysian higher educational context”, Cogent Education, Vol. 8 No. 1, pp. 1-24, doi: 10.1080/2331186x.2021.1883829.

Guangul, F.M., Suhail, A.H., Khalit, M.I. and Khidhir, B.A. (2020), “Challenges of remote assessment in higher education in the context of COVID-19: a case study of Middle East college”, Educational Assessment, Evaluation and Accountability, Vol. 32 No. 4, pp. 519-535, doi: 10.1007/s11092-020-09340-w.

Hair, J.F., Black, W.C., Babin, B.J. and Anderson, R.E. (2010), Multivariate Data Analysis, 7th ed., Prentice Hall, NJ.

Holsapple, C.W. and Lee-Post, A. (2006), “Defining, assessing, and promoting e-learning success: an information systems perspective”, Decision Sciences Journal of Innovative Education, Vol. 4 No. 1, pp. 67-85, doi: 10.1111/j.1540-4609.2006.00102.x.

Hu, L.-t. and Bentler, P.M. (1999), “Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives”, Structural Equation Modeling, Vol. 6 No. 1, pp. 1-55, doi: 10.1080/10705519909540118.

Joo, S., Lin, S. and Lu, K. (2011), “A usability evaluation model for academic library websites: efficiency, effectiveness and learnability”, Journal of Library and Information Studies, Vol. 9 No. 2, pp. 11-26.

Kaiser, H.F. (1960), “The application of electronic computers to factor analysis”, Educational and Psychological Measurement, Vol. 20 No. 1, pp. 141-151, doi: 10.1177/001316446002000116.

Kaiser, H.F. (1974), “An index of factorial simplicity”, Psychometrika, Vol. 39 No. 1, pp. 31-36, doi: 10.1007/bf02291575.

Kerlinger, F.H. (1964), Foundations of Behavioural Research: Educational and Psychological Inquiry, Holt, Rinehart & Winston, NY.

Khairuddin, Z., Arif, N.N. and Khairuddin, Z. (2020), “Students' readiness on online distance learning (ODL)”, Universal Journal of Educational Research, Vol. 8 No. 12, pp. 7141-7150, doi: 10.13189/ujer.2020.081281.

Khan, M.A., Vivek, V., Khojah, M., Nabi, M.K., Paul, M. and Minhaj, S.M. (2021), “Learners' perspective towards E-Exams during COVID-19 outbreak: evidence from higher educational institutions of India and Saudi Arabia”, International Journal of Environmental Research and Public Health, Vol. 18 No. 12, p. 6534, doi: 10.3390/ijerph18126534.

Kim, J.O. and Mueller, C.W. (1978), Introduction to Factor Analysis: what it Is and How to Do it, Sage, Beverly Hills, CA.

Kline, R.B. (2011), Principles and Practice of Structural Equation Modeling, 3rd ed., Guilford Press, NY.

Lauret, D. and Bayram-Jacobs, D. (2021), “COVID-19 lockdown education: the importance of structure in a suddenly changed learning environment”, Education Sciences, Vol. 11 No. 5, p. 221, doi: 10.3390/educsci11050221.

Lee, C., Yeung, A.S. and Ip, T. (2016), “Use of computer technology for English language learning: do learning styles, gender, and age matter?”, Computer Assisted Language Learning, Vol. 29 No. 5, pp. 1035-1051, doi: 10.1080/09588221.2016.1140655.

Long, H.B. (1994), “Resources related to overcoming resistance to self-direction in learning”, New Directions for Adult and Continuing Education, Vol. 64, pp. 13-21, doi: 10.1002/ace.36719946404.

Lorenzo-Seva, U. (2022), “SOLOMON: a method for splitting a sample into equivalent subsamples in factor analysis”, Behavior Research Methods, Vol. 54 No. 6, pp. 2665-2677, doi: 10.3758/s13428-021-01750-y.

Maryani, I., Latifah, S., Fatmawati, L., Erviana, V.Y. and Mahmudah, F.N. (2023), “Technology readiness and learning outcomes of elementary school students during online learning in the new normal era”, Pegem Journal of Education and Instruction, Vol. 13 No. 2, pp. 45-49.

Masalimova, A.R., Khvatova, M.A., Chikileva, L.S., Zvyagintseva, E.P., Stepanova, V.V. and Melnik, M.V. (2022), “Distance learning in higher education during COVID-19”, Frontiers in Education, Vol. 7, p. 120, doi: 10.3389/feduc.2022.822958.

McDonald, R.P. (1970), “The theoretical foundations of common factor analysis, principal factor analysis and alpha factor analysis”, British Journal of Mathematical and Statistical Psychology, Vol. 23 No. 1, pp. 1-21, doi: 10.1111/j.2044-8317.1970.tb00432.x.

McKibbin, W. and Fernando, R. (2020), “The economic impact of COVID-19”, in Baldwin, R. and Weder di Mauro, B. (Eds), Economics in the Time of COVID-19, CEPR Press, The Technology Source, available at: https://www.sensiblepolicy.com/download/2020/2020_CEPR_McKibbin_Fernando_COVD-19.pdf (accessed 30 September 2023).

Mohd, F., Che Daud, E.H. and Elzibair, I. (2015), “The usage of online assessment towards self-efficacy readiness in learning [Paper presentation]”, Proceedings of the 9th International Conference on Ubiquitous Information Management and Communication (IMCOM '15), United States, NY.

Nisperos, L.S. (2014), “Assessing the e-learning readiness of selected Sudanese Universities”, Asian Journal of Management Sciences and Education, Vol. 3 No. 4, pp. 45-59.

Pinyosinwat, P. (2020), “How to manage teaching and learning in the COVID-19 situation: lessons from international experiences”, The Technology Source, available at: https://shorturl.asia/DqjMi (accessed 22 September 2023).

Rafique, G.M., Mahmood, K., Warraich, N.F. and Ur Rehman, S. (2021), “Readiness for online learning during COVID-19 pandemic: a survey of Pakistani LIS students”, The Journal of Academic Librarianship, Vol. 47 No. 3, 102346, doi: 10.1016/j.acalib.2021.102346.

Rahim, M.A. and Magner, N.R. (1995), “Confirmatory factor analysis of the styles of handling interpersonal conflict: first-order factor model and its invariance across groups”, Journal of Applied Psychology, Vol. 80 No. 1, pp. 122-132, doi: 10.1037/0021-9010.80.1.122.

Rodriguez, A., Reise, S.P. and Haviland, M.G. (2016), “Applying bifactor statistical indices in the evaluation of psychological measures”, Journal of Personality Assessment, Vol. 98 No. 3, pp. 223-237, doi: 10.1080/00223891.2015.1089249.

Rovinelli, R.J. and Hambleton, R.K. (1977), “On the use of content specialists in the assessment of criterion-referenced test item validity”, Tijdschrift Voor Onderwijsresearch, Vol. 2, pp. 49-60.

Shraim, K. (2019), “Online examination practices in higher education institutions: learners' perspectives”, The Turkish Online Journal of Distance Education, Vol. 20 No. 4, pp. 41-62, doi: 10.17718/tojde.640588.

Simonson, M., Schlosser, C. and Orellana, A. (2011), “Distance education research: a review of the literature”, Journal of Computing in Higher Education, Vol. 23 Nos 2-3, pp. 124-142, doi: 10.1007/s12528-011-9045-8.

Skinner, C.E. (1965), Educational Psychology, Prentice-Hall, New York.

STOU (2020a), “Resolution of the 5th University Council Meeting of STOU, 2021, on the (Draft) policy guidelines for the development of STOU and the budget framework for income and expenditure for the 2021 fiscal year”.

STOU (2020b), “Resolution of the 9th University Council Meeting, STOU, 2020, on the report of online testing implementation”.

STOU (2023), Regulations for Student Admission and Registration for the 2023 Academic Year, STOU Press, Nonthaburi.

Sugilar (2015), “Determinants of students participating in online examination”, Journal of Education and Learning, Vol. 10 No. 2, pp. 119-126, doi: 10.11591/edulearn.v10i2.3256.

Tabachnick, B.G. and Fidell, L.S. (2013), Using Multivariate Statistics, 6th ed., Pearson Education, Boston.

Tang, Y.M., Chen, P.C., Law, K.M.Y., Wu, C.H., Lau, Y., Guan, J., He, D. and Ho, G.T.S. (2021), “Comparative analysis of student's live online learning readiness during the coronavirus (COVID-19) pandemic in the higher education sector”, Computers and Education, Vol. 168, 104211, doi: 10.1016/j.compedu.2021.104211.

Texas Education Agency (2008), “An evaluation of districts' readiness for online testing”, The Technology Source, available at: https://shorturl.asia/fLYTZ (accessed 30 September 2023).

van Dijk, J.A. (2005), The Deepening Divide: Inequality in the Information Society, Sage Publications, London.

Viktoria, V. and Aida, M. (2020), “Comparative analysis on the impact of distance learning between Russian and Japanese university students, during the pandemic of COVID-19”, Educational Research Review, Vol. 3 No. 4, pp. 438-446, doi: 10.31014/aior.1993.03.04.151.

Viladrich, C., Angulo-Brunet, A. and Doval, E. (2017), “A journey around alpha and omega to estimate internal consistency reliability”, Anales de Psicología, Vol. 33 No. 3, pp. 755-782, doi: 10.6018/analesps.33.3.268401.

Wagiran, W., Suharjana, S., Nurtanto, M. and Mutohhari, F. (2022), “Determining the e-learning readiness of higher education students: a study during the COVID-19 pandemic”, Heliyon, Vol. 8 No. 10, e11160, doi: 10.1016/j.heliyon.2022.e11160.

Acknowledgements

The research was funded by the Distance Education Division of Sukhothai Thammathirat Open University (STOU). Many thanks are extended to the Fund administration.

Corresponding author

Thanyasinee Laosum can be contacted at: thanyasinee.lao@stou.ac.th
