Alexis P. Benson, D. Michelle Hinn and Claire Lloyd
Abstract
Evaluation comprises diverse, oftentimes conflicting, theories and practices that reflect the philosophies, ideologies, and assumptions of the time and place in which they were constructed. Underlying this diversity is the search for program quality. This search for understanding is the fundamental, and most challenging, task confronting the evaluator. This volume is divided into four broad sections, each of which conveys a different vision of quality, including postmodern and normative perspectives, practice-driven concerns, and applied examples in the field of education.
Lisa Claire Lloyd, Claire Hemming and Derek K. Tracy
Abstract
Purpose
Service user involvement in evaluating provided services is a core NHS concept. However, individuals with intellectual disabilities have traditionally had their voices ignored. There have been attempts to redress this, though much of the work has been quantitative, and qualitative studies have more often explored populations transitioning to more mainstream care and those with milder disabilities. The authors set out to explore the views of individuals with more severe intellectual disabilities who were resident inpatients on what helped or hindered their care.
Design/methodology/approach
The paper uses qualitative analysis of semi‐structured interviews with eight resident service users (three male, five female, mean age 33) with severe intellectual disabilities.
Findings
Sub‐categories of staff personality, helpful relationships, and the concept of balanced care emerged under a core category of needing a secure base. Clients were very clearly able to identify and delineate: personal attributes of staff; clinical means of working; and the need to balance support with affording independence and growth. They further noted factors that could help or hinder all of these, and gave nuanced answers on how different personality factors could be utilized in different settings.
Originality/value
Little work has qualitatively explored the needs of residential clients with severe intellectual disabilities. The authors’ data show that exploring the views of more profoundly disabled and vulnerable individuals is both viable and of significant clinical value. It should aid staff in contemplating the needs of their clients; in seeking their opinions and feedback; and in considering that most “styles” of personality and work have attributes that clients can value and appreciate.
Abstract
Throughout the world, both in government and in the not-for-profit sector, policymakers and managers are grappling with closely-related problems that include highly politicized environments, demanding constituencies, public expectations for high quality services, aggressive media scrutiny, and tight resource constraints. One potential solution that is getting increasing attention is performance-based management or managing for results: the purposeful use of resources and information to achieve and demonstrate measurable progress toward agency and program goals, especially goals related to service quality and outcomes (see Hatry, 1990; S. Rep. No. 103-58, 1993; Organization for Economic Cooperation and Development, 1996; United Way of America, 1996).
Abstract
This paper presents a performance-based view of assessment design with a focus on why we need to build tests that are mindful of standards and content outlines and on what such standards and outlines require. Designing assessment for meaningful educational feedback is a difficult task. The assessment designer must meet the requirements of content standards, the standards for evaluation instrument design, and the societal and institutional expectations of schooling. At the same time, the designer must create challenges that are intellectually interesting and educationally valuable. To improve student assessment, we need to design standards that are not only clearer, but also backed by a more explicit review system. To meet the whole range of student needs and to begin fulfilling the educational purposes of assessment, we need to rethink not only the way we design, but also the way we supervise the process, usage, and reporting of assessments. This paper outlines how assessment design is parallel to student performance and illustrates how this is accomplished through intelligent trial and error, using feedback to make incremental progress toward design standards.
Abstract
When I hear the term “report” or “representation” applied to the concept of expressing quality, I feel as though I am expected to believe that an understanding of quality can be delivered in a nice, neat bundle. Granted, the delivery of information — numbers, dimensions, effects — can be an important part of such an expression, but it seems to me that the quality resides in and among these descriptors. By its very nature, therefore, quality is difficult to “report.” The only way to express this quality is through a concerted and careful effort of communication. It is for this reason that I prefer to limit my use of the term “reporting” to expressions of quantity, and my colleagues will hear me referring to the “communication” of quality.

As I have noted, I see the communication of quality as an interactive process, whether this interaction takes the form of two friends talking about the quality of a backpack, an evaluator discussing the quality of a classroom teacher, or a critic's review speaking to its readers. In any case, the effectiveness of the process depends on the interaction that takes place in the mind of the person who is accepting a representation (a re-presentation) of quality. The communicator's careful use of familiarity or some common language encourages this interaction and therefore enhances the communication of quality.

I have also used this forum to suggest that the complexities and responsibilities of social programs bring great importance to the effort of communicating quality. Given this importance, I recommend that program evaluators use descriptive and prescriptive methods, as well as subjectivity and objectivity, as tools to extend the capability of their work to communicate the quality that has been experienced. Again, their ability to communicate this quality rests upon the interaction that takes place between evaluator and audience.
As I see it, the job of every evaluator, reviewer, and critic is to attend carefully to what has been described here as the communication of quality.