Search results

1 – 10 of 20
Article
Publication date: 5 February 2018

Madhabi Chatterji and Meiko Lin

Abstract

Purpose

The purpose of this study was to design and iteratively improve the quality of survey-based measures of three non-cognitive constructs for Grade 5-6 students, keeping in mind information needs of users in education reform contexts. The constructs are: Mathematics-related Self-Efficacy, Self-Concept, and Anxiety (M-SE, M-SC, and M-ANX).

Design/methodology/approach

The authors applied a multi-stage, iterative and user-centered approach to design and validate the measures, using several psychometric techniques and three data samples. They evaluated the utility of student-level scores and aggregated, classroom-level means.

Findings

At both student and classroom levels, replicated evidence supported theoretically grounded validity arguments on information produced by four of five scales tapping M-SC, M-ANX and M-SE. The evidence confirmed a second-order, two-factor structure for M-SC, representing positive math affect and perceived competence, and a one-factor structure for M-ANX representing negative math affect. Consistent with the literature, these served as precursors to a perceived confidence factor of M-SE which, in turn, positively influenced mathematics achievement scores, offsetting negative effects of M-ANX. Research is continuing on a self-regulatory efficacy factor of M-SE, which yielded mixed results.

Practical implications

The survey scales are in line with current reform policies in the United States calling for schools to monitor changes in cognitive and non-cognitive domains of student development. Validated scales could be useful in serving information needs of teachers, decision-makers and researchers in similar school-based contexts.

Originality/value

This study demonstrates a comprehensive, user-centered methodology for designing and validating construct measures, departing from purely psychometric traditions of scale development.

Details

Quality Assurance in Education, vol. 26 no. 1
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 26 August 2014

Edmund W. Gordon, Michael V. McGill, Deanna Iceman Sands, Kelley M. Kalinich, James W. Pellegrino and Madhabi Chatterji

Abstract

Purpose

The purpose of this article is to present alternative views on the theory and practice of formative assessment (FA), or assessment to support teaching and learning in classrooms, so as to highlight its value in education and inform discussions on educational assessment policy.

Methodology/approach

The method used is a “moderated policy discussion”. The six invited commentaries on the theme represent perspectives of leading scholars and measurement experts juxtaposed against voices of prominent school district leaders from two education systems in the USA. The discussion is moderated with introductory and concluding remarks from the guest editor and is excerpted from a recent blog published by Education Week. References and author biographies are presented at the end of the article.

Findings

While current assessment policies in the USA push for greater accountability in schools by increasing large-scale testing of students, the authors underscore the importance of FA integrated with classroom teaching and learning. They define what formative classroom assessment means in theory and in practice, consider barriers to more widespread use of FA practices and address what educational policy makers could do to facilitate an FA “work culture” in schools.

Originality/value

The commentators, representing scholar and practitioner perspectives, examine the problem in a multi-faceted manner and offer research-based, practical and policy solutions to the observed issues in FA. Dialogue among stakeholders, as presented here, is a key first step in enacting assessment reforms in the directions discussed.

Details

Quality Assurance in Education, vol. 22 no. 4
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 26 August 2014

W. James Popham, David C. Berliner, Neal M. Kingston, Susan H. Fuhrman, Steven M. Ladd, Jeffrey Charbonneau and Madhabi Chatterji

Abstract

Purpose

Against a backdrop of high-stakes assessment policies in the USA, this paper explores the challenges, promises and the “state of the art” with regard to designing standardized achievement tests and educational assessment systems that are instructionally useful. Authors deliberate on the consequences of using inappropriately designed tests, and in particular tests that are insensitive to instruction, for teacher and/or school evaluation purposes.

Methodology/approach

The method used is a “moderated policy discussion”. The six invited commentaries represent voices of leading education scholars and measurement experts, juxtaposed against views of a prominent leader and nationally recognized teacher from two American education systems. The discussion is moderated with introductory and concluding remarks from the guest editor, and is excerpted from a recent blog published by Education Week. References and author biographies are presented at the end of the article.

Findings

In the education assessment profession, there is a promising movement toward more research and development on standardized assessment systems that are instructionally sensitive and useful for classroom teaching. However, the distinctions among different types of tests vis-à-vis their purposes are often unclear to policymakers, educators and other test users, leading to test misuses. The authors underscore issues related to validity, ethics and consequences when inappropriately designed tests are used in high-stakes policy contexts, offering recommendations for the design of instructionally sensitive tests and more comprehensive assessment systems that can serve a broader set of educational evaluation needs. As instructionally informative tests are developed and formalized, their psychometric quality and utility in school and teacher evaluation models must also be evaluated.

Originality/value

Featuring perspectives of scholars, measurement experts and educators “on the ground”, this article presents an open and balanced exchange of technical, applied and policy issues surrounding “instructionally sensitive” test design and use, along with other types of assessments needed to create comprehensive educational evaluation systems.

Article
Publication date: 28 January 2014

Beatrice L. Bridglall, Jade Caines and Madhabi Chatterji

Abstract

Purpose

This policy brief, the second AERI-NEPC eBrief in the series “Understanding validity issues around the world”, focuses on validity as it applies to test-based models of evaluation employed for schools, instructional programs, and teachers around the world. It discusses validity issues that could arise when data from student achievement test administrations and other sources are used for conducting personnel appraisals, program evaluations, or for external accountability purposes, suggesting solutions and recommendations for improving validity in such applications of test-based information.

Design/methodology/approach

This policy brief is based on a synthesis of conference proceedings and review of selected pieces of extant literature. It begins by summarizing perspectives of an invited expert panel on the topic. To that synthesis, the authors add their own analysis of key issues. They conclude by offering recommendations for test developers and test users.

Findings

The authors conclude that systematic improvement and transformation of schools depends on thoughtfully conceptualizing, implementing, and using data from testing and broad-based evaluation systems that incorporate multiple kinds of evidence. Evaluation systems that are valid and fair to students, teachers and education leaders need all three of the following: assessment resources and training for all participants and evaluation users; knowledgeable staff to continuously monitor processes and use assessment results appropriately to improve teaching and learning activities; and a strengths-based approach to make improvements to the education system based on relevant data and reports (as opposed to a deficits-based one in which blame or punishment is leveled at individuals or groups of workers when gaps in performance are observed).

Originality/value

To improve validity in interpretations of results from test-based teacher and school evaluation models, the authors provide recommendations for measurement and evaluation specialists as well as for educators, policy makers, and public users of data. Standardized test use in formative and more “high stakes” educational accountability contexts is rapidly spreading to various regions of the world, yet this eBrief shows that understandings of validity are still uneven among key stakeholders. By translating complex information pertinent to current validity issues, this policy brief attempts to address that need and to bridge knowledge and communication gaps among different constituencies.

Details

Quality Assurance in Education, vol. 22 no. 1
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 26 August 2014

Oren Pizmony-Levy, James Harvey, William H. Schmidt, Richard Noonan, Laura Engel, Michael J. Feuer, Henry Braun, Carla Santorno, Iris C. Rotberg, Paul Ash, Madhabi Chatterji and Judith Torney-Purta

Abstract

Purpose

This paper presents a moderated discussion on popular misconceptions, benefits and limitations of International Large-Scale Assessment (ILSA) programs, clarifying how ILSA results could be more appropriately interpreted and used in public policy contexts in the USA and elsewhere in the world.

Design/methodology/approach

To bring key issues, points-of-view and recommendations on the theme to light, the method used is a “moderated policy discussion”. Nine commentaries were invited to represent voices of leading ILSA scholars/researchers and measurement experts, juxtaposed against views of prominent leaders of education systems in the USA that participate in ILSA programs. The discussion is excerpted from a recent blog published by Education Week. It is moderated with introductory remarks from the guest editor and concluding recommendations from an ILSA researcher who did not participate in the original blog. References and author biographies are presented at the end of the article.

Findings

Together, the commentaries address historical, methodological, socio-political and policy issues surrounding ILSA programs vis-à-vis the major goals of education and larger societal concerns. Authors offer recommendations for improving the international studies themselves and for making reports more transparent for educators and the public to facilitate greater understanding of their purposes, meanings and policy implications.

Originality/value

When assessment policies are implemented from the top down, as is often the case with ILSA program participation, educators and leaders in school systems tend to be left out of the conversation. This article is intended to foster a productive two-way dialogue among key ILSA actors that can serve as a stepping-stone to more concerted policy actions within and across national education systems.

Details

Quality Assurance in Education, vol. 22 no. 4
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 28 January 2014

Meiko Lin, Erin Bumgarner and Madhabi Chatterji

Abstract

Purpose

This policy brief, the third in the AERI-NEPC eBrief series “Understanding validity issues around the world”, discusses validity issues surrounding International Large-Scale Assessment (ILSA) programs. ILSA programs, such as the well-known Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS), are rapidly expanding around the world today. In this eBrief, the authors examine what “validity” means when applied to published results and reports of programs like the PISA.

Design/methodology/approach

This policy brief is based on a synthesis of conference proceedings and review of selected pieces of extant literature. It begins by summarizing perspectives of an invited expert panel on the topic. To that synthesis, the authors add their own analysis of key issues. They conclude by offering recommendations for test developers and test users.

Findings

ILSA programs and tests, while offering valuable information, should be read and used cautiously and in context. All parties need to be on the same page to maximize valid use of ILSA results, to obtain the greatest educational and social benefits, and to minimize negative consequences. The authors propose several recommendations for test makers and ILSA program leaders, and for users of ILSA results. To ILSA leaders and researchers: provide more cautionary information about how to correctly interpret ILSA results, particularly country rankings, given contextual differences among nations; provide continuing psychometric and research resources to address or reduce various sources of error in reports; encourage policy makers in different nations to share the responsibility for ensuring more contextualized (and valid) interpretations of ILSA reports and subsequent policy development; and raise awareness among policy makers to look beyond simple rankings and pay more attention to inter-country differences. For consumers of ILSA results and reports: read the fine print, not just the country rankings, to interpret ILSA results correctly in particular regions/nations; when looking to high-ranking countries as role models, be sure to consider the “whole picture”; and use ILSA data as a complement to other national- and state-level educational assessments to better gauge the status of a country's education system and subsequent policy directions.

Originality/value

By translating complex information on validity issues with all concerned ILSA stakeholders in mind, this policy brief will improve uses and applications of ILSA information in national and regional policy contexts.

Details

Quality Assurance in Education, vol. 22 no. 1
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 28 January 2014

Jade Caines, Beatrice L. Bridglall and Madhabi Chatterji

Abstract

Purpose

This policy brief discusses validity and fairness issues that could arise when test-based information is used for making “high stakes” decisions at an individual level, such as certifying teachers or other professionals, admitting students into higher education programs and colleges, or making immigration-related decisions for prospective immigrants. To help test developers, affiliated researchers and test users enhance levels of validity and fairness with these particular types of test score interpretations and uses, this policy brief summarizes an “argument-based approach” to validation given by Kane.

Design/methodology/approach

This policy brief is based on a synthesis of conference proceedings and review of selected pieces of extant literature. To that synthesis, the authors add practitioner-friendly examples with their own analysis of key issues. They conclude by offering recommendations for test developers and test users.

Findings

The authors conclude that validity is a complex and evolving construct, especially when considering issues of fairness in individual testing contexts. Kane's argument-based approach offers an accessible framework through which test makers can accumulate evidence to evaluate inferences and arguments related to decisions to be made with test scores. Perspectives of test makers, researchers, test takers and decision-makers must all be incorporated into constructing coherent “validity arguments” to guide the test development and validation processes.

Originality/value

Standardized test use for individual-level decisions is gradually spreading to various regions of the world, but understandings of validity are still uneven among key stakeholders of such testing programs. By translating complex information on test validation, validity and fairness issues with all concerned stakeholders in mind, this policy brief attempts to address the communication gaps that Kane notes exist among these groups.

Details

Quality Assurance in Education, vol. 22 no. 1
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 28 January 2014

Nancy Koh, Vikash Reddy and Madhabi Chatterji

Abstract

Purpose

This AERI-NEPC eBrief, the fourth in a series titled “Understanding validity issues around the world”, looks closely at issues surrounding the validity of test-based actions in educational accountability and school improvement contexts. The specific discussions here focus on testing issues in the US. However, the general principles underlying appropriate and inappropriate test use in school reform and high-stakes public accountability settings are applicable in both domestic and international settings. This paper aims to present the issues.

Design/methodology/approach

This policy brief is based on a synthesis of conference proceedings and review of selected pieces of extant literature. It begins by summarizing perspectives of an invited expert panel on the topic. To that synthesis, the authors add their own analysis of key issues. They conclude by offering recommendations for test developers and test users.

Findings

The authors conclude that recurring validity issues arise with tests used in school reform and public accountability contexts, because the external tests tend to be employed as policy instruments to drive reforms in schools, with unrealistic timelines and inadequate resources. To reconcile the validity issues with respect to educational assessment and forge a coherent understanding of validity among multiple public users with different agendas, the authors offer several recommendations, such as: adopt an integrated approach to develop content and standards of proficiency that represent a range of cognitive processes; support studies to examine validity of assessments and the effects of decisions taken with assessment data before results are fed into high-stakes accountability-related actions that affect teachers, leaders or schools; align standards, curricula, instruction, assessment, and professional development efforts in schools to maximize success; and increase capacity-building efforts to help teachers, administrators, policy makers, and other groups of test users learn more about assessments, particularly about appropriate interpretation and use of assessment data and reports.

Originality/value

Baker points out that in response to growing demands of reformers and policy-makers for more frequent and rigorous testing programs in US public education, results from a single test tend to get used to meet a variety of public education needs today (e.g. school accountability, school improvement, teacher evaluation, and measurement of student performance). While this may simply be a way to make things more cost-efficient and reduce the extent of student testing in schools, a consequence is inappropriate test use that threatens validity in practice settings. This policy brief confronts this recurring validity challenge and offers recommendations to address the issues.

Details

Quality Assurance in Education, vol. 22 no. 1
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 26 August 2014

John Dalrymple and Madhabi Chatterji

Details

Quality Assurance in Education, vol. 22 no. 4
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 26 August 2014

Graham Badley

Abstract

Purpose

The purpose of this paper is to present pragmatism as a useful way for supervisors to help their research students become effective research writers.

Design/methodology/approach

I first provide a brief overview of pragmatism, paying special attention to key figures such as John Dewey and Richard Rorty. Second, I suggest ways in which pragmatist supervisors might help research students improve as research writers by focusing on a set of issues including developing an andragogical relationship, adopting a pragmatist approach to ethics and discussing writing styles.

Findings

Pragmatism is not offered as an approach which must necessarily be adopted by supervisors but, rather, as a useful set of resources for them to use as they try to help doctoral students develop as thesis/research writers.

Originality/value

Pragmatism is rarely, if ever, discussed as a potentially fruitful and valuable way of helping students develop as doctoral writers.

Details

Quality Assurance in Education, vol. 22 no. 4
Type: Research Article
ISSN: 0968-4883
