Search results
Abstract
Purpose
This study explores how insufficient effort responders (IERs) are distributed across levels of students' evaluation of teaching (SET) effectiveness in higher education, according to different methods of classifying those levels. Five such classification methods were identified in the literature.
Design/methodology/approach
A quantitative research methodology was used to achieve the goals of this study. Data from a major public university were collected using 20 five-point items designed to measure students' evaluation of teaching effectiveness, yielding a dataset of 26,679 surveys. Insufficient effort responding (IER) was detected using item response theory (IRT) procedures.
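The abstract names item response theory procedures but does not spell out the detection step, so the sketch below shows one common IRT-based way to flag insufficient effort responding in R, using a person-fit statistic from the mirt package. The file name, item names and cutoff are illustrative assumptions, not the authors' actual setup.

```r
# Hypothetical sketch: flagging insufficient effort responding (IER) with an
# IRT person-fit statistic. Column names and the cutoff are illustrative only.
library(mirt)

set_data <- read.csv("set_surveys.csv")          # 20 five-point SET items: item1..item20
items <- set_data[, paste0("item", 1:20)]

# Fit a graded response model to the polytomous Likert items
mod <- mirt(items, model = 1, itemtype = "graded")

# Zh is a standardized person-fit statistic; strongly negative values indicate
# response patterns that fit the model poorly, a common IER signal
pf <- personfit(mod)
set_data$flagged_ier <- pf$Zh < -2               # illustrative cutoff

table(set_data$flagged_ier)
```

Other detectors (e.g. long-string or response-time indices) are often combined with person fit; the paper itself only states that detection was IRT-based.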
Findings
The results show that insufficient effort responders are distributed differently across levels of students' evaluation of teaching effectiveness in higher education, depending on the method used to classify those levels. The results suggest classifying SET levels by the percentage of students who rate each item 4 or 5 (agree or strongly agree) and deleting IERs before interpreting SET results.
Research limitations/implications
According to the results of this study, further research is recommended on the relationships between IER, SET scores and students' motivation to participate in evaluating teaching effectiveness.
Practical implications
According to the results of this study, it is recommended to: (1) exclude the IERs from the dataset before generating SET reports; and (2) use the percentage of 4 (agree) and 5 (strongly agree) responses to SET items to classify and interpret SET results.
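As a minimal illustration of recommendation 2, the percentage of 4/5 responses per item can be computed as below; the file and column names are hypothetical, and the dataset is assumed to have IERs already excluded.

```r
# Minimal sketch: per-item percentage of students answering 4 (agree) or
# 5 (strongly agree), computed on a dataset with IER surveys already removed.
clean <- read.csv("set_surveys_clean.csv")       # hypothetical cleaned dataset
items <- clean[, paste0("item", 1:20)]

pct_agree <- sapply(items, function(x) mean(x >= 4, na.rm = TRUE) * 100)
round(sort(pct_agree, decreasing = TRUE), 1)     # percent agreement per item
```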
Originality/value
Reviewing the literature shows the absence of studies that explore the distribution of insufficient effort responders according to methods of classifying students’ evaluation of teaching effectiveness in higher education. The results suggest using a percentage of students’ agreement of 4 or 5 for each item to classify SET levels and deleting IERs before interpreting SET results.
Abstract
Purpose
This study aims to assess item fairness in students' evaluation of teaching (SET) based on students' academic college, using measurement invariance (MI) analysis.
Design/methodology/approach
The sample of this study consists of 17,270 undergraduate students from 12 different academic colleges. A SET survey consisting of 20 Likert-type items distributed across four factors (planning, instruction, management and assessment) was used to collect the data. The lavaan R package with confirmatory factor analysis (CFA) was used to evaluate measurement invariance (MI). Four CFA models were investigated and assessed: the configural model, the metric model, the scalar model and the residual invariance model. ANOVA was used to test the differences in SET according to academic colleges.
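As a rough illustration of this workflow (a sketch, not the authors' code), the four invariance models can be fitted in lavaan as nested multi-group CFAs with increasingly strict equality constraints, followed by the ANOVA comparison; the file name, item names and grouping variable are hypothetical.

```r
# Hedged sketch of the measurement invariance workflow with the lavaan R package.
library(lavaan)

dat <- read.csv("set_by_college.csv")     # 20 items (i1..i20) plus a "college" variable

model <- '
  planning    =~ i1 + i2 + i3 + i4 + i5
  instruction =~ i6 + i7 + i8 + i9 + i10
  management  =~ i11 + i12 + i13 + i14 + i15
  assessment  =~ i16 + i17 + i18 + i19 + i20
'

configural <- cfa(model, data = dat, group = "college")
metric     <- cfa(model, data = dat, group = "college",
                  group.equal = "loadings")
scalar     <- cfa(model, data = dat, group = "college",
                  group.equal = c("loadings", "intercepts"))
residual   <- cfa(model, data = dat, group = "college",
                  group.equal = c("loadings", "intercepts", "residuals"))

# Compare the nested models; small changes in fit support invariance
lavTestLRT(configural, metric, scalar, residual)

# Once invariance holds, compare colleges on the total SET score with ANOVA
dat$total <- rowSums(dat[, paste0("i", 1:20)])
summary(aov(total ~ college, data = dat))
TukeyHSD(aov(total ~ college, data = dat))
```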
Findings
MI analysis showed that the four levels of MI models are supported. The ANOVA test showed that the means of SET total scores differ statistically according to students' academic colleges. The College of Education has the highest SET mean (88.64 out of 100), and all differences between the College of Education's SET mean and the other colleges' SET means are statistically significant.
Practical implications
The study recommends that higher education institutions test the MI of SET according to academic colleges and then use colleges with the highest SET at the university level as internal benchmarking to develop and enhance their teaching practices.
Originality/value
This study is probably the only study that tested MI according to students' colleges before testing the differences between colleges in SET. If MI is not supported, then the comparisons between academic colleges are not applicable.
Abstract
Purpose
The purpose of this paper is to investigate the effect of insufficient effort responding (IER) on construct validity of student evaluations of teaching (SET) in higher education.
Design/methodology/approach
A total of 13,340 SET surveys collected by a major Jordanian university to assess teaching effectiveness were analyzed in this study. An item response theory (IRT)-based detection method was used to flag IER, and construct (factorial) validity was assessed using confirmatory factor analysis (CFA) and principal component analysis (PCA) before and after removing the detected IER.
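A hedged R sketch of the before-and-after comparison described here: the same CFA is fitted to the full dataset and to the dataset with flagged surveys removed, and fit indices are compared. The four-factor structure, item names and IER flag are placeholders rather than the survey's actual specification.

```r
# Illustrative before/after check of construct (factorial) validity with lavaan.
library(lavaan)

dat <- read.csv("set_with_ier_flags.csv")   # 20 items plus a logical "flagged_ier" column

model <- '
  planning    =~ i1 + i2 + i3 + i4 + i5
  instruction =~ i6 + i7 + i8 + i9 + i10
  management  =~ i11 + i12 + i13 + i14 + i15
  assessment  =~ i16 + i17 + i18 + i19 + i20
'

fit_all   <- cfa(model, data = dat)
fit_clean <- cfa(model, data = dat[!dat$flagged_ier, ])

# Construct validity should improve (e.g. higher CFI, lower RMSEA) after removal
sapply(list(all = fit_all, clean = fit_clean),
       fitMeasures, fit.measures = c("cfi", "tli", "rmsea", "srmr"))

# Parallel check with principal component analysis on the cleaned data
summary(prcomp(dat[!dat$flagged_ier, paste0("i", 1:20)], scale. = TRUE))
```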
Findings
The results of this study show that 2,160 SET surveys were flagged as insufficient effort responses out of 13,340 surveys. This figure represents 16.2 percent of the sample. Moreover, the results of CFA and PCA show that removing detected IER statistically enhanced the construct (factorial) validity of the SET survey.
Research limitations/implications
Since IER responses are often ignored by researchers and practitioners in industrial and organizational psychology (Liu et al., 2013), the results of this study strongly suggest that higher education administrations should give the necessary attention to IER responses, as SET results are used in making critical decisions.
Practical implications
The results of the current study recommend that universities carefully design online SET surveys and provide students with clear instructions in order to minimize students' engagement in IER. Moreover, since SET results are used in making critical decisions, higher education administrations should give the necessary attention to IER by examining the IER rate in their datasets and its consequences for data quality.
Originality/value
Reviewing the related literature shows that this is the first study that investigates the effect of IER on construct validity of SET in higher education using an IRT-based detection method.
Mahmoud Fisal Alquraan, Sulaf Alazzam and Dima Farhat
Abstract
Purpose
The objective of this study is to explore the structural relationships among the context (C), input (I), process (P) and product (P) (CIPP) components of teacher preparation programs, based on students' perceptions.
Design/methodology/approach
In this study, data were collected using a 17-item scale. The study sample consisted of 213 pre-service teachers enrolled in the Postgraduate Professional Diploma in Teaching (PPDT). Quantitative research methodology with multivariate structural equation modeling (SEM) was utilized to examine the two suggested models.
Findings
The results of this study show that the CIPP model can be projected onto preservice teachers' perceptions of the CIPP components. Preservice teachers' perceptions of the first three components (CIP) of the preparation program can predict their perceptions of the program's products or outcomes (i.e. the fourth CIPP component). This result indicates that the relationships between the CIPP components in pre-service teachers' perceptions of the Diploma in Teaching program are direct.
Originality/value
Two suggested models were tested to explore the structural relationships between CIPP components. The first model represents the original CIPP model with indirect relationships between the four components: CIPP. The second model suggests direct relationships between the first three components (CIP) and the objectives or products (P).
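A speculative lavaan sketch of the two competing structures just described. The item-to-factor assignments, file name and exact regression paths are illustrative assumptions; the paper itself only states that the first model uses indirect relationships among the four CIPP components and the second uses direct paths from C, I and P to the product component.

```r
# Hedged sketch: comparing an indirect (chained) CIPP structure with a direct one.
library(lavaan)

dat <- read.csv("cipp_ppdt.csv")            # 17 items, hypothetical names c1..pd5

measurement <- '
  context =~ c1 + c2 + c3 + c4
  input   =~ i1 + i2 + i3 + i4
  process =~ pr1 + pr2 + pr3 + pr4
  product =~ pd1 + pd2 + pd3 + pd4 + pd5
'

# Model 1: indirect (chained) relationships among the CIPP components
model_indirect <- paste(measurement, '
  input   ~ context
  process ~ input
  product ~ process
')

# Model 2: direct paths from context, input and process to product
model_direct <- paste(measurement, '
  product ~ context + input + process
')

fit1 <- sem(model_indirect, data = dat)
fit2 <- sem(model_direct,  data = dat)

# Compare the global fit of the two competing structures
sapply(list(indirect = fit1, direct = fit2),
       fitMeasures, fit.measures = c("cfi", "tli", "rmsea", "srmr"))
```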
Abstract
Purpose
This study aims to utilize the item response theory (IRT) rating scale model to analyze students' perceptions of assessment practices at two universities: one in Jordan and the other in the USA. The sample of the study consisted of 506 university students selected from both universities. Results show that the two universities still focus on paper-pencil testing to assess students' learning outcomes. The study recommends that higher education institutions encourage their teachers to use different assessment methods to assess students' learning outcomes.
Design/methodology/approach
The convenience sample consisted of 506 university students from the USA and Jordan, distributed according to their educational levels as follows: 83 freshmen, 139 sophomores, 157 juniors and 59 seniors (some students from both universities did not report their gender and/or educational level). The US university sample consisted of 219 students, of whom 43 were males and 173 were females, from three colleges (arts and sciences, education, and commerce and business) at a major university in the southeast of the USA. The study used the Students Perception of Assessment Practices Inventory developed by Alquraan (2007), and the RUMM2020 program's rating scale model was used for the analysis.
Findings
Both universities, in Jordan and the USA, still focus more on the developmental (construction of assessment tasks), organizational and planning aspects of assessment processes than they do on assessments of learning and assessment methods (traditional and new assessment methods). The assessment practices that are used frequently in both universities based on the teachers sampled are: “(I27) I know what to study for the test in this class”, “(I6) Teacher provides a good environment during test administration” and “(I21) My teacher avoids interrupting students as they are taking tests”. This indicates that teachers in the selected universities have a tendency to focus on the administrative and communicative aspects of assessment (e.g. providing a good environment during test administration) more than on using different assessment methods (e.g. portfolios, new technology, computers, peer and self-assessment) or even using assessment practices that help students learn in different ways (e.g. assessing students’ prior knowledge and providing written feedback on the graded tests).
Originality/value
This is a cross-cultural study focusing on the assessment of student learning in higher education.
Abstract
Purpose
The purpose of this paper is to explore the assessment methods used in higher education to assess students' learning, and to investigate the effects of college and grading system on the assessment methods used.
Design/methodology/approach
This descriptive study investigates the assessment methods used by teachers in higher education to assess their students' learning outcomes. An instrument consisting of 15 items (each item is an assessment method) was distributed to 736 undergraduate students from four public universities in Jordan.
Findings
Findings show that the traditional paper-pencil test is the most common method used to assess learning in higher education. Results also show that teachers in colleges of science and engineering and colleges of nursing use different assessment methods, besides traditional testing, to assess learning, such as real-life tasks (authentic assessment), papers and projects. The results also show that teachers use the same assessment methods to assess learning regardless of the grading system (letters or numbers) used at their institutes.
Research limitations/implications
The sample of the study was limited to undergraduate students, and teachers' points of view about the frequent use of assessment methods were not studied.
Practical implications
Higher education institutes should encourage teachers to use new and modern assessment methods as well as traditional paper‐pencil testing, and study the reasons for not using these new methods.
Originality/value
The paper should alert higher education institutes to the importance of developing the assessment process by learning their students' points of view about the assessment methods used. This will help get students involved in the learning process.
Mahmoud Alquraan and Abed Alnaser Aljarah
Abstract
Purpose
The purpose of this paper is to investigate the psychometric properties of a Jordanian version of the Metamemory in Adulthood (MIA) questionnaire of Dixon, Hultsch and Hertzog.
Design/methodology/approach
The sample for this study consisted of 656 students randomly selected from Yarmouk University‐Jordan. Translation‐back‐translation, classical test theory, IRT Rasch model, and confirmatory factor analysis procedures were used to evaluate the psychometric properties of a Jordanian version of the MIA (MIA‐Jo).
Findings
The results of these analyses show that 76 items (out of 108 original MIA items) provide sufficient evidence in support of the reliability and validity of the MIA‐Jo. The results also show that the MIA‐Jo has the same structure or subscales as the original MIA.
Research limitations/implications
The sample for this study consisted of 656 students randomly selected from Yarmouk University‐Jordan. Therefore, the study recommends the necessity to conduct more research on the MIA‐Jo using samples that have a wider range of age (up to 80 years) and other strata of Jordanian society.
Originality/value
This study is expected to provide researchers and educators in Jordan with a valid and reliable instrument to do more research on metamemory and its relationship with other cognitive variables.