Structural variable validation of an Online Learning Response Behavior (OLRB) instrument: A comparison analysis of three extraction methods of Exploratory Factor Analysis

Mohd Hanafi Azman Ong (Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA – Cawangan Johor Kampus Segamat, Segamat, Malaysia)
Norazlina Mohd Yasin (Digital Learning Department, UTMSPACE, Skudai, Malaysia)
Nur Syafikah Ibrahim (Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA – Cawangan Johor Kampus Segamat, Segamat, Malaysia)

Asian Association of Open Universities Journal

ISSN: 2414-6994

Article publication date: 4 August 2022

Issue publication date: 5 October 2022


Abstract

Purpose

Measuring internal response of online learning is seen as fundamental to absorptive capacity which stimulates knowledge assimilation. However, the evaluation of practice and research of validated instruments that could effectively measure online learning response behavior is limited. Thus, in this study, a new instrument was designed based on literature to determine the structural variables that exist in the online learning response behavior.

Design/methodology/approach

A structured survey was designed and distributed to 410 Malaysian students enrolled in higher-education institutions. The questionnaire has 38 items, all of which were scored on a seven-point Likert scale. First, exploratory factor analysis (EFA) with three types of extraction methods (i.e. principal component, principal axis factoring and maximum likelihood), each paired with a varimax rotation, was used to compare the grouping of variables produced by each extraction method. In the second phase, reliability analysis was performed to determine the reliability level of the grouping variables, and finally, correlation analysis was performed to determine the discriminant and nomological validity of the grouping variables.

Findings

The findings revealed that nine grouping variables were retrieved, with all items having good factor loading and communality values, as well as an adequate degree of reliability. These extracted variables have good discriminant and nomological validity, as evidenced by correlation analysis, which confirmed that the directions of the relationships among the extracted dimensions follow the expected theory (i.e. a positive direction) and that all correlation coefficients are less than 0.70.

Research limitations/implications

This study proposes a comprehensive set of questionnaires that measure students' online learning response behavior. These questionnaires were developed on the basis of an extensive literature review and have undergone a rigorous validity and reliability process so that they can be used to assess students' online learning response behavior.

Originality/value

This study's findings will aid academic practitioners in assessing the online learning response behavior of students, as well as in strengthening the questionnaire's usefulness when administered in an online learning environment.


Citation

Azman Ong, M.H., Mohd Yasin, N. and Ibrahim, N.S. (2022), "Structural variable validation of an Online Learning Response Behavior (OLRB) instrument: A comparison analysis of three extraction methods of Exploratory Factor Analysis", Asian Association of Open Universities Journal, Vol. 17 No. 2, pp. 134-146. https://doi.org/10.1108/AAOUJ-04-2022-0054

Publisher: Emerald Publishing Limited

Copyright © 2022, Mohd Hanafi Azman Ong, Norazlina Mohd Yasin and Nur Syafikah Ibrahim

License

Published in the Asian Association of Open Universities Journal. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) license. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this license may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

In a typical setting, universities and colleges are urged to incorporate blended learning for all academic programs. Blended learning combines traditional face-to-face instruction with digital learning. In blended learning classes, the learning materials are primarily delivered through face-to-face interaction, but they are also accessible through a robust learning management system (LMS) to support and increase instructor–student and student–student connection after class. However, as of March 2020, all higher-education institutions in Malaysia were shuttered owing to the COVID-19 pandemic. When this measure was implemented, students were still at the beginning of the semester and were required to transition to an online learning mode, which resulted in a rapid shift to online instructional delivery. To help limit the spread of the virus that causes COVID-19, some schools elected to cancel all face-to-face sessions, including laboratories and other learning opportunities, and required teachers to transfer their courses online. Globally, the majority of colleges and universities outside Malaysia adopted the same approach as one of the measures to minimize COVID-19 transmission.

Due to this, digitalization has contributed to a surge in interest among educational scholars worldwide, particularly in developing nations and the Asian region. Responses to technology adoption are characterized by extremely disparate attitudes, digital skills and degrees of readiness (Yasin et al., 2020). The phenomenon has also led to an emphasis on behavioral intention in the context of individual components such as cognition and motivation (Costley and Lange, 2017; Oluwajana et al., 2019), in addition to learning tools (Wammes et al., 2019). In the context of this study, behavioral engagement is demonstrated by students' participation, such as idea generation, involvement in classroom and extracurricular activities, concentration, assignment submission and adherence to the instructor's directions (Reschly and Christenson, 2012). All these variables are associated with a greater number of behavioral interventions and educational tools that increase students' involvement in online classrooms for successful learning (Hughes et al., 2020). Consequently, it is vital to research how novice digital learners respond to learning.

Although there are similarities in the research issues on behavioral response, the approaches used to investigate students' perspectives on learning behavior have not been consistently verified. Therefore, educational researchers utilizing digital platforms must have access to instruments or measures that have been cross-validated across varied groups (Guo and Liu, 2018; Hsiao et al., 2019). This would increase the reliability, generalizability and practicality of online learning studies.

Still, validated instruments are used rather infrequently in research on students' response behavior to online learning (Yasin et al., 2020). As a result, the purpose of this study is to investigate the procedures that may be used to determine the validity and reliability of an instrument that evaluates students' abilities to comprehend, produce and evaluate information within the context of a Malaysian online educational setting. The same approach is advantageous for designing evaluation tools to measure other aspects of a student's readiness, such as their perspectives and preferred learning style. Consequently, we established two aims for this study: (1) to identify significant factors for assessing online learning response behavior in higher-education institutions and (2) to assess the structural variable validity and reliability of the online learning response behavior questionnaire.

2. Methodology

2.1 Study design

This study utilized a cross-sectional applied survey combined with quantitative methodologies. These two research approaches were chosen because the purpose of this study is to validate and investigate the structural aspects of the targeted constructs (Creswell, 2014). Moreover, the phrasing of the operational Likert-scale items adopted from previous studies was amended in accordance with the study context. It is therefore vital for the researchers to revalidate this instrument using a suitable quantitative technique (Hair et al., 2010).

This study is particularly interested in Malaysian public university students with at least one year of experience using an online learning platform during the Movement Control Order (MCO). In the absence of a sampling frame for selecting respondents, this study used a combination of convenience and judgmental sampling methods, both of which are regarded as appropriate (Creswell, 2014).

A total of 410 Malaysian public university students volunteered to participate in this study by responding to a questionnaire distributed via social media. The sample size for this study is sufficient since the minimal sample size for a large population (1 million or more) is 384 (Krejcie and Morgan, 1970). In addition, Hair et al. (2010) and Tabachnick and Fidell (2007) recommend a sample of at least 350 for exploratory factor analysis (EFA). Consequently, a sample size of 410 respondents may be considered adequate for this study.
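To make the 384-respondent benchmark concrete, the following is a minimal Python sketch of the Krejcie and Morgan (1970) sample-size formula; the function name and defaults are illustrative, not taken from the paper.

```python
# Illustrative sketch of the Krejcie and Morgan (1970) sample-size formula;
# not part of the authors' workflow. chi_sq = 3.841 is the chi-square value
# for 1 df at 95% confidence, p = 0.5 maximises the estimate, and e = 0.05
# is the margin of error.
from math import ceil

def krejcie_morgan(population: int, chi_sq: float = 3.841,
                   p: float = 0.5, e: float = 0.05) -> int:
    numerator = chi_sq * population * p * (1 - p)
    denominator = e ** 2 * (population - 1) + chi_sq * p * (1 - p)
    return ceil(numerator / denominator)

print(krejcie_morgan(1_000_000))  # -> 384, the minimum cited above
```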

2.2 Process of selecting the sources of a questionnaire using systematic review

The process of constructing the questionnaire begins with the collection of accessible definitions, concepts and practicalities of the key notion of students' learning response behavior from previous academic literature. This procedure starts with a search of the most cited sources pertaining to the assessment of learning response behavior in the Scopus database, which contains only peer-reviewed scientific publications, thereby helping to exclude duplicate sources.

Using the Boolean search string TITLE-ABS-KEY("learning response behavior" AND (survey OR scale OR questionnaire OR model)), a total of 167 relevant sources were retrieved. From these, 108 relevant sources remained because they met the following selection criteria for the inclusion of studies:

  1. The research is published in a scientific publication.

  2. The study consists of empirical research with survey-based learning response behavior measures.

These 108 relevant sources were retrieved via the digital library service provided by the Universiti Teknologi MARA or via the open access option, with the material regarding survey-based assessment of learning response behavior being the only topic of interest. From a total of 108 suitable sources, 101 were eliminated from this procedure for the following reasons:

  1. Title and abstract irrelevance (15 papers);

  2. Conceptual papers (23 papers);

  3. Papers that employ a qualitative technique (43 papers);

  4. The articles' content did not address the optimal usage of questionnaire methodology (13 papers);

  5. The papers were not authored in English (7 papers).

Following the screening procedure, seven papers that met the aforementioned selection criteria were included in the systematic review. Figure 1 depicts the flow diagram of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-based search procedure used in this research. We integrated two major types of methodological criteria in the process of framework or item assessment: evaluation of a survey-based operationalization measurement scale (Table 1) and evaluation of measurement quality (Table 2). These stages, according to Carmines and Zeller (1979), are necessary but not sufficient for obtaining a trustworthy and valid structural variable instrument. Based on these two methodological criteria, all seven of the selected studies fit the requirements, with 100% of the criteria met (Table 3).

2.3 Questionnaire items

To evaluate the learning response behavior of students, the researchers developed the instrument based on a systematic review analysis that matched the PRISMA criteria and the two major methodological criteria, as shown in Table 3. A 38-item survey was adapted using a seven-point Likert scale, where 1 corresponds to strongly disagree and 7 corresponds to strongly agree, with the items arranged in random order in the instrument. Three items measuring the student's learning readiness (i.e. Q1, Q5 and Q6) were adapted from Yasin et al. (2020). Based on Gao et al. (2020), student's learning engagement is defined as the reflection of cognitive and emotional feelings representing students' behavior in response to an online learning environment; it consists of seven modified items across two dimensions: cognitive engagement (i.e. Q2, Q7 and Q9) and emotional engagement (i.e. Q3, Q4, Q8 and Q10). The instrument contains eight items to assess the learning motivation construct, referred to as the beliefs, intentions and feelings that motivate students to learn, modified from the study by Liu (2020): four items for intrinsic motivation (i.e. Q12, Q13, Q16 and Q19) and four items for extrinsic motivation (i.e. Q11, Q14, Q15 and Q17).

Alternative assessment, on the other hand, is the variable used to evaluate students' learning and support their cognitive development by regularly examining and providing feedback on their work; thus, eight items (i.e. Q21, Q24, Q25, Q27, Q30, Q31, Q33 and Q35) were adopted from the study by Leeuwenkamp et al. (2019). This study also modified the Awidi and Paynter (2019) student's learning experience construct, defined as the process of knowledge construction and converging ideas developed through social interactions, with four items (i.e. Q22, Q23, Q29 and Q32), whereas the remaining four indicators were adapted from the study by Utriainen et al. (2018) to evaluate the critical thinking learning experience (i.e. Q20, Q26, Q28 and Q37). The instrument also incorporated students' response behavior toward online learning, adapted from the research of Bui et al. (2020), with four items (i.e. Q34, Q36, Q38 and Q18). The questionnaire is supplied in the Appendix. Importantly, the premise of utilizing this questionnaire as an elicitation tool is that the unit of analysis must have online learning experience, as the purpose of the questionnaire is to measure the respondent's online learning response behavior. In addition, the respondent must be an adult learner in order to provide an accurate response to each questionnaire item, as the items tend to be psychological measures that require exact evaluation.

2.4 Statistical analysis

The IBM SPSS version 21.0 statistical software was used for data entry and analysis. The EFA and correlation analysis were used because the primary purpose of this study was to validate and investigate the structural variable validity and reliability of the instrument after the wording and Likert scaling of the original authors' items had been modified (Hair et al., 2010; Tabachnick and Fidell, 2007). As recommended by Hair et al. (2010) and Williams et al. (2010), this form of study enables the researcher to validate and refine the employed indicators. In this work, three EFA extraction methods were utilized to compare the instrument's valid structure: the first was the principal component (PC) extraction method with a varimax rotation (PC + varimax), the second was the principal axis factoring (PAF) extraction method with a varimax rotation (PAF + varimax) and the third was the maximum likelihood extraction method with a varimax rotation (ML + varimax).
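For readers who want to reproduce this comparison outside SPSS, the sketch below uses the open-source Python package factor_analyzer as a stand-in; the file name olrb_responses.csv and the Q1–Q38 column names are assumptions for illustration, and factor_analyzer's 'principal' option only approximates SPSS's PC/PAF extraction choices.

```python
# A minimal sketch (not the authors' SPSS workflow) of running varimax-rotated
# EFA with different extraction methods on the 38 OLRB items.
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical data file: 410 respondents x 38 items named Q1..Q38
df = pd.read_csv("olrb_responses.csv")

results = {}
for label, method in [("principal + varimax", "principal"),
                      ("ML + varimax", "ml")]:
    fa = FactorAnalyzer(n_factors=9, method=method, rotation="varimax")
    fa.fit(df)
    results[label] = {
        "loadings": pd.DataFrame(fa.loadings_, index=df.columns),
        "communalities": pd.Series(fa.get_communalities(), index=df.columns),
    }
```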

In the initial phase of the study, a Kaiser–Meyer–Olkin (KMO) index was used to determine that the correlation matrix among the indicators is not an identity matrix and that the covariance matrix is sufficient. As proposed by Field (2009) and Pallant (2010), this index value must exceed 0.60. To support this conclusion, Bartlett's test of sphericity must be significant (Pallant, 2010); this confirms that the matrix is not an identity matrix and that the covariance matrix among the indicators is adequate for the EFA. Thompson and Daniel (1996) suggested that, in the second stage, a researcher should use several criterion techniques to identify, check and validate the number of dimensions that exist in the constructs. Therefore, only dimensions with eigenvalues greater than 1.00 are retrieved from the analysis, and the total percentage of variance extracted should be greater than 60%, as recommended by Hair et al. (2010), Field (2009), Tabachnick and Fidell (2007) and Thompson and Daniel (1996). In addition, Watkins (2000) recommends using Monte Carlo PCA parallel analysis simulation eigenvalues to determine the number of extracted variables. In this study, only variables whose eigenvalue is larger than the corresponding eigenvalue of the Monte Carlo simulation are considered extracted.
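A rough Python equivalent of these checks, again assuming the hypothetical df of item responses from the previous sketch, might look as follows; the 100 simulated datasets and the use of the mean simulated eigenvalue are illustrative choices, not the settings of Watkins' (2000) program.

```python
# Sketch of the factorability checks and Monte Carlo parallel analysis.
import numpy as np
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

chi_sq, p_value = calculate_bartlett_sphericity(df)  # must be significant
_, kmo_total = calculate_kmo(df)                     # should exceed 0.60

# Parallel analysis: eigenvalues of random data with the same shape.
rng = np.random.default_rng(42)
n_obs, n_items = df.shape
sim_eigs = np.empty((100, n_items))
for i in range(100):
    random_corr = np.corrcoef(rng.standard_normal((n_obs, n_items)),
                              rowvar=False)
    sim_eigs[i] = np.sort(np.linalg.eigvalsh(random_corr))[::-1]
simulated = sim_eigs.mean(axis=0)

observed = np.sort(np.linalg.eigvalsh(df.corr().to_numpy()))[::-1]
# Keep a factor only if it beats both Kaiser's rule and the simulation.
n_factors = int(np.sum((observed > 1.0) & (observed > simulated)))
```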

Examining the factor loading and communality values from each extraction technique (i.e. PC + varimax, PAF + varimax and ML + varimax) is the next step in validating the indicators under each extracted dimension. According to Field (2009), for an exploratory study on refining and validating indicators using the EFA, the factor loading and communality values must be greater than 0.40 and the sample size must exceed 350. This is a crucial step, as it ensures that the retrieved dimensions are distinct, meaningful and valid with a sufficient degree of confidence. In the last step of the EFA procedure, a Cronbach's alpha reliability test was used to evaluate the internal consistency of each group of indicators. Nunnally and Bernstein (1994) indicated that a threshold of 0.70 or above can be used to demonstrate the reliability of grouped elements.
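The item-validation and reliability steps can be sketched as below, reusing the hypothetical `df` and `results` objects from the earlier sketches; the Q1/Q5/Q6 grouping shown is the learning-readiness scale reported later in Table 5.

```python
# Flag items below the 0.40 communality threshold (Field, 2009), then
# compute Cronbach's alpha for one extracted grouping.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classic Cronbach's alpha for a k-item scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

low = results["ML + varimax"]["communalities"] < 0.40
print("Items below 0.40:", list(low[low].index))

# Nunnally and Bernstein (1994) threshold: alpha >= 0.70
print(cronbach_alpha(df[["Q1", "Q5", "Q6"]]))
```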

As proposed by Creswell (2014), Hair et al. (2010) and Kaptein (2008), discriminant validity and nomological validity were assessed following the EFA and reliability analysis. These authors describe nomological validity as examining, via correlation analysis, whether the links between dimensions are logical and meaningful and follow the predicted direction of association (i.e. a positive or negative relationship). In addition, Hair et al. (2010) and Kaptein (2008) used the correlation coefficient to assess discriminant validity: when the correlation coefficient is less than 0.70, the extracted dimensions are considered to have discriminant validity.
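A hedged sketch of this final check follows; scale scores are taken as the item means per extracted dimension, with only three of the nine groupings shown for brevity (`df` is the hypothetical data frame from the earlier sketches).

```python
# Discriminant and nomological validity via the correlation matrix of
# scale scores (item groupings follow Table 5).
import numpy as np
import pandas as pd

scales = {
    "readiness": ["Q1", "Q5", "Q6"],
    "cognitive_engagement": ["Q2", "Q7", "Q9"],
    "emotional_engagement": ["Q3", "Q4", "Q8", "Q10"],
}
scores = pd.DataFrame({name: df[cols].mean(axis=1)
                       for name, cols in scales.items()})

corr = scores.corr()
off_diag = corr.where(~np.eye(len(corr), dtype=bool)).stack()
print("nomological (all positive):", bool((off_diag > 0).all()))
print("discriminant (all < 0.70):", bool((off_diag < 0.70).all()))
```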

3. Data analysis and results

3.1 Convergence validity from the EFA and reliability analysis

The findings of the multiple criteria for determining the number of variables to be retrieved from this instrument's 38 items are summarized in Table 4. The analysis shown in Table 4 revealed that the EFA for these three extraction techniques should extract nine variables from the 38 items, since the first nine eigenvalues under Kaiser's criterion (range: 1.059 to 5.762) were greater than 1.00. Moreover, comparing these Kaiser eigenvalues with the simulation eigenvalues from the Monte Carlo analysis yields the same result: only the first nine Kaiser eigenvalues are greater than the first nine Monte Carlo simulation eigenvalues, confirming that nine variables should be extracted from the 38 items. The cumulative percentage of variance explained by these nine extracted variables is 83.83%, indicating that the extraction of nine variables from the 38 indicators is legitimate. Bartlett's test of sphericity for the items was likewise highly significant (χ2 = 20419.75, p < 0.01), and the KMO index for these three extraction techniques was 0.974. Therefore, the covariance matrix for the 38 items was not an identity matrix. Consequently, all items in the instrument could be used in the EFA under all three extraction techniques.

Table 5 displays the factor loading and communality value for each item based on the three extraction methods used in the EFA. For the first EFA extraction approach (i.e. PC + varimax), the analysis revealed that all items had loading (range: 0.504 to 0.903) and communality (range: 0.709 to 0.963) values greater than the minimal threshold value of 0.40. The second extraction technique (i.e. PAF + varimax) yields the same result, with all 38 items having loading (range: 0.439 to 0.874) and communality (range: 0.508 to 0.915) values greater than 0.40. For the third extraction technique (i.e. ML + varimax), a few loading (range: 0.387 to 0.948) and communality (range: 0.373 to 0.919) values fell just below 0.40; since the sample size for this research (n = 410) is regarded as substantial, the ML + varimax findings can still be considered reliable. Therefore, it is possible to infer that all 38 items were valid and could be utilized to measure the intended components in this study.

These three EFA extraction techniques exhibit a similar pattern in the grouping of indicators. The student's learning readiness, student's perception of the alternative assessment provided and student's response behavior toward online learning factors were found to have maintained their one-dimensional structure. All three extraction analyses likewise demonstrate that the dimensions of the student's learning engagement, student's learning motivation and student's learning experience variables, each of which has a two-dimensional structure, were preserved. Cronbach's alpha scores for the grouped indicators ranged from 0.902 to 0.965, indicating that all nine grouped indicators exceed the minimal threshold value of 0.70. Therefore, it can be stated that all grouped indicators were legitimate and measured the variables consistently.

3.2 Discriminant and nomological validity from the correlation analysis

The nomological and discriminant validities of the extracted variables were examined using correlation analysis, and the results are presented in Table 6. At the 5% level of significance, the analysis revealed significant positive associations between the extracted variables. All extracted variables may thus be confirmed to have good nomological validity, since all bivariate correlations correspond to the predicted bivariate connection, which is a positive relationship. In addition, the correlation coefficients ranged from 0.410 to 0.667. Since all correlation coefficients are less than 0.70, it can be concluded that the extracted variables provide a satisfactory level of discrimination between the variables.

4. Conclusion

This study illustrated the statistical processes for verifying, investigating and comparing the variable structures of the online learning response behavior survey instrument utilizing three extraction techniques of the EFA and correlation analysis. Based on these three extraction techniques, the validity of all items utilized to measure the targeted variables is satisfactory: all three extraction methods (i.e. PC + varimax, PAF + varimax and ML + varimax) show a similar pattern of item grouping, even after the items' wording and Likert scaling were modified at the start of instrument development. In addition, the examination of reliability revealed that all extracted variables had sufficient and satisfactory internal consistency, as all Cronbach's alpha values were greater than 0.70. The correlation analysis also confirmed that all variables extracted from the EFA have good nomological and discriminant validity, as the bivariate relationships between the extracted variables were significant and followed the expected direction, and all correlation coefficients were less than 0.70.

On the other hand, based on the grouped elements from the EFA, student's learning readiness may be viewed as a continuous process that shows the students' mental preparation to accept the diverse methods of technology-based learning (Yasin et al., 2020). According to Gao et al. (2020), student's learning engagement may be characterized as a technique that guides students in cultivating the instructor-acquired information and can be classified as cognitive engagement and emotional engagement. The fourth and fifth grouped factors, intrinsic and extrinsic motivation, respectively, can together be considered the total learning motivation of a learner. In this study, extrinsic motivation is defined as motivating students to attain their objective by providing an external incentive throughout the learning process on an online platform, in the hope that students will acquire a strong interest in learning via the online platform (Liu, 2020). In contrast, the intrinsic motivation variable was measured by the internal needs of the students during the process of learning on an online platform, where internal incentives such as creating an enjoyable environment during the learning process are anticipated to bolster intrinsic motivation among the students (Liu, 2020).

On the basis of the quality of alternative assessments offered by instructors, students' perceptions of alternative assessments may be interpreted as their perceptions of their learning progress and of information about their desired learning goals (Leeuwenkamp et al., 2019). Competency experience and critical thinking experience are two criteria that may be used to assess a student's learning experience; this variable represents the entire learning experience encountered by the learner during the learning process. According to Utriainen et al. (2018) and Awidi and Paynter (2019), a student's learning experience can be viewed either as the construction of knowledge by the student through interaction with others or as the development of competencies and the capacity to apply knowledge to other contexts, which is the critical thinking experience. Finally, the student's response behavior toward online learning variable may be described as the behavior of students continuing to use the online learning platform as a learning medium in the future and introducing the online learning platform to their social circle as an optimal means of learning.

5. Discussion

Based on the findings of the EFA and correlation analysis, the procedure used to build this online learning response behavior instrument is suitable for measuring the intended constructs. Given the present COVID-19 pandemic, this study's procedure may also be considered significant, as the instrument can serve as an alternative, beneficial tool for monitoring the entire online learning response behavior of students. This research also stressed the usefulness of the EFA and correlation analysis for verifying survey instruments after the items have undergone a process of alteration, from the perspectives of convergent validity, discriminant validity and nomological validity.

In addition, the outcomes of this study suggest that this questionnaire has the potential to be utilized internationally, not just in the Malaysian higher-education setting, because most higher-education institutions worldwide have shifted to an online learning platform. All the factors used to quantify online learning response behavior occur in the majority of higher-education institutions. The questionnaire is also simple to administer, because its 38 items constitute a relatively short instrument; consequently, the likelihood of administration error is likewise minimal. Since the structural themes of all variables revealed in this study lean toward the positive, the interpretation of the variables should likewise be positive: using the average score approach, the score of each variable's structure must be high for a positive interpretation.
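As a hedged illustration of the average score approach, reusing the hypothetical `df` and `scales` objects from the earlier sketches:

```python
# Each extracted variable is summarised as the mean of its items on the
# 1-7 scale; a higher average reads as a more positive response.
variable_scores = {name: df[cols].mean(axis=1).mean()
                   for name, cols in scales.items()}
for name, avg in variable_scores.items():
    print(f"{name}: {avg:.2f} out of 7")
```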

6. Limitations

One of the drawbacks of this study is the lack of a causal analysis to assess the questionnaire's global fit indices. This study thus recommends a causal analysis to further explore the global fit indices of the instrument's validity by employing covariance-based confirmatory factor analysis or consistent partial least squares factor analysis. In addition, for the purpose of calculating an overall global fit index, this instrument may be administered to secondary students and private higher-education students in both urban and rural regions. Moreover, our systematic review approach was restricted to terms that assess only learning response behavior, and only seven papers were available for review. Therefore, it is advised that, for future study, a broader search term, such as students' response behavior, and a greater number of publications be chosen for examination.

Figures

Figure 1. Flow diagram of the search procedure using the PRISMA procedure

Table 1. Evaluation of a survey-based operationalization assessment scale

| Category | Criteria | Code |
| --- | --- | --- |
| Conceptual background of item scale | Report the definition of concepts | C1 |
| | Report the definition of dimensions | C2 |
| | Report the source of items | C3 |
| Sample information | Sample size of the study is higher than 300 samples | C4 |
| | Report the sample characteristics | C5 |
| Statistical analysis | Report the descriptive statistics of the items (mean, standard deviation, etc.) | C6 |
| | Report the correlation/covariance matrix or coefficients | C7 |
| Measurement model fit | Report the result of exploratory factor analysis or confirmatory factor analysis | C8 |

Table 2. Evaluation of measurement quality

| Category | Criteria | Code |
| --- | --- | --- |
| Construct validity | Report the convergent validity of the measurement criteria (loading > 0.7 or weight > 0.4 and AVE > 0.5) | C9 |
| | At least one discriminant validity criterion was reported (Fornell–Larcker, HTMT or cross-loading discriminant tests) | C10 |
| | Report the criteria validity by indicating that the correlation between interconstructs is greater than 0.5 | C11 |
| Internal consistency | The stated Cronbach's alpha must be greater than 0.7 | C12 |

Note(s): AVE = average variance extracted; HTMT = heterotrait–monotrait ratio of correlations

Table 3. Evaluation of the seven selected studies

| Study | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 | C11 | C12 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (1) | / | / | / | / | / | / | / | / | / | / | / | / |
| (2) | / | / | / | / | / | / | / | / | / | / | / | / |
| (3) | / | / | / | / | / | / | / | / | / | / | / | / |
| (4) | / | / | / | / | / | / | / | / | / | / | / | / |
| (5) | / | / | / | / | / | / | / | / | / | / | / | / |
| (6) | / | / | / | / | / | / | / | / | / | / | / | / |
| (7) | / | / | / | / | / | / | / | / | / | / | / | / |

Note(s): Criteria codes refer to Tables 1 and 2. (1) = Yasin et al. (2020), (2) = Gao et al. (2020), (3) = Liu (2020), (4) = Leeuwenkamp et al. (2019), (5) = Awidi and Paynter (2019), (6) = Utriainen et al. (2018), (7) = Bui et al. (2020)

Table 4. Multiple criteria for factors to be extracted

| Component number | Initial eigenvalue | Monte Carlo simulation eigenvalue | Cumulative % variance explained | Decision |
| --- | --- | --- | --- | --- |
| 1 | 5.762 | 1.554 | 43.78 | Accept to extract |
| 2 | 2.879 | 1.407 | 50.26 | Accept to extract |
| 3 | 1.802 | 1.331 | 55.68 | Accept to extract |
| 4 | 1.315 | 1.293 | 63.52 | Accept to extract |
| 5 | 1.269 | 1.202 | 68.52 | Accept to extract |
| 6 | 1.187 | 1.157 | 72.13 | Accept to extract |
| 7 | 1.124 | 1.097 | 76.97 | Accept to extract |
| 8 | 1.099 | 1.071 | 80.81 | Accept to extract |
| 9 | 1.059 | 1.043 | 83.83 | Accept to extract |
| 10 | 0.991 | 1.036 | – | Reject to extract |
| 11 | 0.788 | 1.011 | – | Reject to extract |

Note(s): Only the first 11 of the 38 components are reported

Table 5. Comparative results of three extraction methods of the EFA

| Components and items included | λ (PC) | δ (PC) | λ (PAF) | δ (PAF) | λ (ML) | δ (ML) |
| --- | --- | --- | --- | --- | --- | --- |
| Student's learning readiness | | | | | | |
| Q1 | 0.889 | 0.875 | 0.653 | 0.527 | 0.718 | 0.503 |
| Q5 | 0.817 | 0.853 | 0.613 | 0.508 | 0.602 | 0.434 |
| Q6 | 0.903 | 0.963 | 0.874 | 0.717 | 0.948 | 0.634 |
| Eigenvalue = 2.879; % variance explained = 18.24%; Cronbach's alpha = 0.902 | | | | | | |
| Cognitive engagement | | | | | | |
| Q2 | 0.688 | 0.850 | 0.559 | 0.803 | 0.466 | 0.790 |
| Q7 | 0.685 | 0.896 | 0.578 | 0.891 | 0.520 | 0.919 |
| Q9 | 0.757 | 0.842 | 0.667 | 0.796 | 0.618 | 0.799 |
| Eigenvalue = 1.802; % variance explained = 16.72%; Cronbach's alpha = 0.929 | | | | | | |
| Emotional engagement | | | | | | |
| Q3 | 0.734 | 0.883 | 0.727 | 0.889 | 0.713 | 0.889 |
| Q4 | 0.700 | 0.851 | 0.705 | 0.865 | 0.719 | 0.888 |
| Q8 | 0.697 | 0.709 | 0.612 | 0.609 | 0.599 | 0.602 |
| Q10 | 0.690 | 0.855 | 0.688 | 0.861 | 0.680 | 0.863 |
| Eigenvalue = 1.315; % variance explained = 7.21%; Cronbach's alpha = 0.938 | | | | | | |
| Intrinsic motivation | | | | | | |
| Q12 | 0.594 | 0.890 | 0.628 | 0.915 | 0.474 | 0.601 |
| Q13 | 0.574 | 0.877 | 0.613 | 0.899 | 0.446 | 0.568 |
| Q16 | 0.531 | 0.837 | 0.468 | 0.810 | 0.391 | 0.406 |
| Q19 | 0.504 | 0.799 | 0.473 | 0.701 | 0.387 | 0.373 |
| Eigenvalue = 1.099; % variance explained = 3.24%; Cronbach's alpha = 0.918 | | | | | | |
| Extrinsic motivation | | | | | | |
| Q11 | 0.508 | 0.836 | −0.633 | 0.825 | 0.519 | 0.617 |
| Q14 | 0.519 | 0.819 | −0.588 | 0.822 | 0.396 | 0.413 |
| Q15 | 0.557 | 0.838 | −0.659 | 0.845 | 0.487 | 0.509 |
| Q17 | 0.728 | 0.822 | −0.538 | 0.602 | 0.606 | 0.698 |
| Eigenvalue = 1.059; % variance explained = 2.56%; Cronbach's alpha = 0.925 | | | | | | |
| Student's perception toward alternative assessment | | | | | | |
| Q21 | 0.666 | 0.819 | 0.621 | 0.773 | 0.619 | 0.765 |
| Q24 | 0.673 | 0.804 | 0.648 | 0.771 | 0.649 | 0.769 |
| Q25 | 0.749 | 0.854 | 0.733 | 0.839 | 0.734 | 0.837 |
| Q27 | 0.700 | 0.863 | 0.701 | 0.868 | 0.701 | 0.866 |
| Q30 | 0.677 | 0.812 | 0.629 | 0.750 | 0.629 | 0.744 |
| Q31 | 0.722 | 0.826 | 0.678 | 0.787 | 0.676 | 0.785 |
| Q33 | 0.668 | 0.821 | 0.649 | 0.801 | 0.651 | 0.798 |
| Q35 | 0.638 | 0.838 | 0.622 | 0.815 | 0.619 | 0.816 |
| Eigenvalue = 5.762; % variance explained = 19.34%; Cronbach's alpha = 0.965 | | | | | | |
| Competency learning experience | | | | | | |
| Q22 | 0.694 | 0.775 | 0.555 | 0.589 | 0.546 | 0.569 |
| Q23 | 0.692 | 0.851 | 0.673 | 0.869 | 0.627 | 0.829 |
| Q29 | 0.683 | 0.852 | 0.674 | 0.858 | 0.668 | 0.869 |
| Q32 | 0.665 | 0.850 | 0.658 | 0.872 | 0.649 | 0.922 |
| Eigenvalue = 1.187; % variance explained = 5.30%; Cronbach's alpha = 0.934 | | | | | | |
| Critical thinking learning experience | | | | | | |
| Q20 | 0.703 | 0.829 | −0.688 | 0.806 | 0.702 | 0.823 |
| Q26 | 0.645 | 0.841 | −0.665 | 0.840 | 0.659 | 0.831 |
| Q28 | 0.620 | 0.776 | −0.639 | 0.792 | 0.640 | 0.755 |
| Q37 | 0.677 | 0.792 | −0.663 | 0.772 | 0.689 | 0.787 |
| Eigenvalue = 1.269; % variance explained = 5.95%; Cronbach's alpha = 0.914 | | | | | | |
| Student's response behavior toward online learning | | | | | | |
| Q34 | 0.613 | 0.866 | 0.587 | 0.846 | 0.684 | 0.871 |
| Q36 | 0.592 | 0.854 | 0.469 | 0.819 | 0.537 | 0.744 |
| Q38 | 0.509 | 0.821 | 0.439 | 0.759 | 0.519 | 0.718 |
| Q18 | 0.545 | 0.813 | 0.506 | 0.739 | 0.619 | 0.842 |
| Eigenvalue = 1.124; % variance explained = 5.27%; Cronbach's alpha = 0.929 | | | | | | |

Note(s): Negative signs in the loading values indicate that these items were located on the negative axis of the rotation; all three extraction methods of the EFA produce the same eigenvalues and hence the same total variance explained from the extracted components (83.83%); PC = principal component; PAF = principal axis factoring; ML = maximum likelihood; λ = loading value; δ = communality value; refer to the Appendix for item code definitions

Table 6. Correlation analysis among extracted dimensions from the EFA

| | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (1) | 1.000 | | | | | | | | |
| (2) | 0.523* | 1.000 | | | | | | | |
| (3) | 0.591* | 0.657* | 1.000 | | | | | | |
| (4) | 0.554* | 0.609* | 0.626* | 1.000 | | | | | |
| (5) | 0.567* | 0.613* | 0.603* | 0.667* | 1.000 | | | | |
| (6) | 0.621* | 0.628* | 0.631* | 0.619* | 0.598* | 1.000 | | | |
| (7) | 0.410* | 0.507* | 0.483* | 0.492* | 0.489* | 0.623* | 1.000 | | |
| (8) | 0.422* | 0.531* | 0.507* | 0.476* | 0.463* | 0.619* | 0.661* | 1.000 | |
| (9) | 0.532* | 0.498* | 0.452* | 0.468* | 0.509* | 0.637* | 0.610* | 0.598* | 1.000 |

Note(s): (1) = student's learning readiness; (2) = cognitive engagement; (3) = emotional engagement; (4) = intrinsic motivation; (5) = extrinsic motivation; (6) = student's perception toward alternative assessment given; (7) = competency experience; (8) = critical thinking experience; (9) = student's response behavior toward online learning; n = 410; *p < 0.05

Appendix

Table A1. Question wording and code

| Code | Item description |
| --- | --- |
| Q1 | I have a private place in my home or at work that I can use for extended periods |
| Q2 | Using the online platform prompts me to actively devote myself to study |
| Q3 | I enjoy the learning because of using the online platform |
| Q4 | The course is interesting because of using the online platform |
| Q5 | I have adequate uninterrupted time in which I can work on my online course |
| Q6 | I value the flexibility with which I can work on my online course |
| Q7 | Using the online platform keeps me active in learning |
| Q8 | Using the online platform reduces the amount of time I spend doing something else in the class |
| Q9 | Using the online platform can keep my attention focused on learning |
| Q10 | I feel excited when learning by using this online platform |
| Q11 | I feel that learning via online platform is helpful for my academic future |
| Q12 | I enjoy the fun of learning interactively via online platform |
| Q13 | I feel that engaging with and learning interactively via online platform is fun |
| Q14 | I feel that learning via online platform attracts me to participate in the learning system |
| Q15 | I think the experience of learning via online platform will be helpful for studying at the next level |
| Q16 | I feel that learning via online platform is interesting |
| Q17 | I think it will be beneficial to my academic future if I achieve a good result when I learn using online platform |
| Q18 | I will use online platform on a regular basis in the future |
| Q19 | I feel learning via online platform can challenge me to learn extensively |
| Q20 | My learning experience in this online platform has made me able to analyze and organize information for my subject |
| Q21 | In general, at this moment I perceive that testing and assessment in online platform have a positive effect on my learning |
| Q22 | As a learner I see myself as highly competent in using online platform technologies |
| Q23 | My learning experience in this online platform has made me feel confident as a learner |
| Q24 | In general, at this moment I perceive that testing and assessment in online platform add value to the time I have spent on the work done |
| Q25 | In general, at this moment I perceive that testing and assessment in online platform are valuable instances of learning in their own right |
| Q26 | My learning experience in this online platform has made me able to evaluate issues critically for my subject |
| Q27 | In general, at this moment I perceive that testing and assessment in online platform stimulate me to continue learning |
| Q28 | My learning experience in this online platform has made me able to apply theoretical knowledge to practice |
| Q29 | My learning experience in this online platform has motivated me to learn more |
| Q30 | In general, at this moment I perceive that testing and assessment in online platform help me to navigate my own learning process |
| Q31 | In general, at this moment I perceive that testing and assessment in online platform are geared towards the retention of my competencies in the longer period |
| Q32 | My learning experience in this online platform has fully engaged me as a learner |
| Q33 | In general, at this moment I perceive that testing and assessment in online platform prepare me well for future learning activities |
| Q34 | I will make use of the online platform regularly in the forthcoming time |
| Q35 | In general, at this moment I perceive that testing and assessment in online platform give me confidence to continue learning |
| Q36 | I intend to make use of the functions of online platform for providing assistance to my academic activities |
| Q37 | My learning experience in this online platform has made me able to develop new ideas |
| Q38 | I will give out my recommendation to others to use the online platform |

References

Awidi, I.T. and Paynter, M. (2019), “The Impact of a flipped classroom approach on student learning experience”, Computers and Education, Vol. 128, pp. 269-283.

Bui, T.-H., Luong, D.-H., Nguyen, X.-A., Nguyen, H.-L. and Ngo, T.-T. (2020), “Impact of female students' perceptions on behavioral intention to use video conferencing tools in COVID-19: data of Vietnam”, Data in Brief, Vol. 32, 106142, pp. 1-6.

Carmines, E.G. and Zeller, R.A. (1979), Reliability and Validity Assessment, SAGE, Beverly Hills.

Costley, J. and Lange, C. (2017), “The mediating effects of germane cognitive load on the relationship between instructional design and students' future behavioural intention”, Electronic Journal of E-Learning, Vol. 15 No. 2, pp. 174-187.

Creswell, J.W. (2014), Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research, 4th ed., Pearson New International Edition, London.

Field, A. (2009), Discovering Statistics Using SPSS, 3rd ed., SAGE Publications, London.

Gao, B.W., Jiang, J. and Tang, Y. (2020), “The effect of blended learning platform and engagement on students' satisfaction – the case from the tourism management teaching”, Journal of Hospitality, Leisure, Sport and Tourism Education, Vol. 27, 100272, pp. 1-11.

Guo, F.X. and Liu, Q. (2018), “A study on the correlation between online learning behaviour and learning effect – based on the teaching practice of the flipped classroom of blackboard”, Higher Education of Sciences, Vol. 1, pp. 8-13.

Hair, J.F., Black, W.C., Babin, B.J. and Anderson, R.E. (2010), Multivariate Data Analysis, 7th ed., Prentice-Hall, Upper Saddle River, New Jersey.

Hsiao, C.C., Huang, J.C., Huang, A.Y., Lu, O.H., Yin, C.J. and Yang, S.J. (2019), “Exploring the effects of online learning behaviours on short-term and long-term learning outcomes in flipped classrooms”, Interactive Learning Environments, Vol. 27 No. 8, pp. 1160-1177.

Hughes, M., Salamonson, Y. and Metcalfe, L. (2020), “Student engagement using multiple-attempt ‘weekly participation task’ quizzes with undergraduate nursing students”, Nurse Education in Practice, Vol. 46, 102803.

Kaptein, M. (2008), “Developing a measure of unethical behavior in the workplace: a stakeholder perspective”, Journal of Management, Vol. 34 No. 5, pp. 978-1008.

Krejcie, R.V. and Morgan, D.W. (1970), “Determining sample size for research activities”, Educational and Psychological Measurement, Vol. 30 No. 3, pp. 607-610.

Leeuwenkamp, K.J.G.V., Brinke, D.J. and Kester, L. (2019), “Students’ perceptions of assessment quality related to their learning approaches and learning outcomes”, Studies in Educational Evaluation, Vol. 63, pp. 72-82.

Liu, I.-F. (2020), “The impact of extrinsic motivation, intrinsic motivation, and social self-efficacy on English competition participation intentions of pre-college learners: differences between high school and vocational students in Taiwan”, Learning and Motivation, Vol. 72, 101675, pp. 1-11.

Nunnally, J.C. and Bernstein, I.H. (1994), Psychometric Theory, McGraw-Hill, New York.

Oluwajana, D., Idowu, A., Nat, M., Vanduhe, V. and Fadiya, S. (2019), “The adoption of students' hedonic motivation system model to gamified learning environment”, Journal of Theoretical and Applied Electronic Commerce Research, Vol. 14 No. 3, pp. 156-167.

Pallant, J. (2010), SPSS Survival Manual, 4th ed., McGraw-Hill Publications, New York.

Reschly, A.L. and Christenson, S.L. (2012), “Jingle, jangle, and conceptual haziness: evolution and future directions of the engagement construct”, Handbook of Research on Student Engagement, Springer, Boston, Massachusetts, pp. 3-19.

Tabachnick, B.G. and Fidell, L.S. (2007), Using Multivariate Statistics, 5th ed., Allyn and Bacon, Boston, Massachusetts.

Thompson, B. and Daniel, L.G. (1996), “Factor analytic evidence for the construct validity of scores: a historical overview and some guidelines”, Educational and Psychological Measurement, Vol. 56 No. 2, pp. 197-208.

Utriainen, J., Tynjälä, P., Kallio, E. and Marttunen, M. (2018), “Validation of modified version of the experiences of teaching and learning questionnaire”, Studies in Educational Evaluation, Vol. 56, pp. 133-143.

Wammes, J.D., Ralph, B.C., Mills, C., Bosch, N., Duncan, T.L. and Smilek, D. (2019), “Disengagement during lectures: media multitasking and mind wandering in university classrooms”, Computers and Education, Vol. 132, pp. 76-89.

Watkins, M.W. (2000), Monte Carlo PCA for Parallel Analysis (Computer Software), Ed & Psych Associates, State College, PA, pp. 432-442.

Williams, B., Brown, T. and Onsman, A. (2010), “Exploratory factor analysis: a five-step guide for novices”, Journal of Emergency Primary Health Care, Vol. 8 No. 3, pp. 1-13, doi: 10.33151/ajp.8.3.93.

Yasin, N.M., Ong, M.H.A. and Aziz, N.N.A. (2020), “Assessment of the blended learning implementation in higher education: students' readiness perspective”, International Journal of Advanced Science and Technology, Vol. 29 No. 6, pp. 702-712.

Corresponding author

Mohd Hanafi Azman Ong can be contacted at: napieong@uitm.edu.my
