Karen Claxton and Nicola Marie Campbell-Allen
Abstract
Purpose
For any improvement tool to be successfully integrated into an organization's quality improvement or risk management programme, it needs to be relatively easy to use and proven to provide benefits to the customer and the organization. Many healthcare organizations are facing fiscal constraints and increasingly complex tests, which put strain on resources, particularly for those on "the shop floor" who are "hands on" in the design, delivery and improvement of products or services. Within a laboratory setting, there is often limited time for formal, extensive process reviews, given the pressure to meet "turn-around times" for often "clinically urgent" results. Preventative and corrective actions are often identified through audits or root-cause analysis, in some cases only after an event has occurred. The paper aims to discuss these issues.
Design/methodology/approach
Failure mode and effects analysis (FMEA) is a risk management tool used to identify prospective failures within processes or products before they occur. Within laboratory healthcare, risk management for the prevention of failure (particularly an inaccurate result) is imperative and underpins the design of every step of sample handling. FMEA was used to review a laboratory process for a "gene mutation test" initially considered to have few opportunities for improvement. Despite this perception, a previous review of the process and the time restrictions on the review, new improvements were identified, with implications for patient management.
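As a minimal illustration of the kind of prioritisation an FMEA produces (a generic sketch with invented failure modes, not the study's actual worksheet), the fragment below ranks hypothetical sample-handling risks by the conventional risk priority number, RPN = severity × occurrence × detection:

```python
# Minimal, illustrative FMEA scoring sketch (hypothetical failure modes,
# not data from the study). Each mode is rated 1-10 for severity (S),
# occurrence (O) and detectability (D); RPN = S * O * D ranks the risks.

failure_modes = [
    # (description,                      S,  O,  D)
    ("Sample mislabelled at reception",  9,  3,  4),
    ("DNA extraction yield too low",     6,  4,  3),
    ("Result transcribed incorrectly",  10,  2,  5),
]

ranked = sorted(
    ((desc, s * o * d) for desc, s, o, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

for desc, rpn in ranked:
    print(f"RPN {rpn:4d}  {desc}")
```

In practice the ratings come from the laboratory team, and the highest-RPN failure modes become the candidates for preventative action.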
Findings
This study shows that FMEA can yield benefits for prospective risk management and general process improvement within a laboratory setting where time and team input are restricted, and within a process that was considered to have few "problems".
Originality/value
The study was undertaken in a large metropolitan public health system laboratory – one of the largest in the country. This laboratory contributes significantly to the health outcomes of patients in the local region, and more widely through its contribution to national laboratory testing and reporting. This was the first use of FMEA in this laboratory setting.
Zafar Iqbal, Nigel P. Grigg, K. Govindaraju and Nicola Campbell-Allen
Abstract
Purpose
Quality function deployment (QFD) is a methodology for translating the "voice of the customer" into engineering/technical specifications (HOWs) to be followed in the design of products or services. For the method to be effective, QFD practitioners need to be able to accurately differentiate between the final weights (FWs) assigned to the HOWs in the house of quality matrix. The paper aims to introduce a statistical testing procedure to determine whether the FWs of HOWs are significantly different, and to investigate the robustness of different rating scales used in QFD practice in contributing to these differences.
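As background to the testing procedure, the sketch below shows how FWs are conventionally obtained in the house of quality: each HOW's weight is the importance-weighted sum of its relationship scores with the customer requirements (WHATs). The importance ratings and relationship matrix here are invented placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical house-of-quality inputs (not from the paper):
# customer importance ratings for 4 WHATs, and a 4x3 relationship
# matrix scored on the common 9-3-1 scale (blank = 0).
importance = np.array([5, 3, 4, 2])
relationship = np.array([
    [9, 3, 0],
    [1, 9, 3],
    [0, 3, 9],
    [3, 1, 1],
])

# Final weight of each HOW = sum over WHATs of importance x relationship.
final_weights = importance @ relationship
print(final_weights)                         # absolute FWs
print(final_weights / final_weights.sum())   # normalised FWs
```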
Design/methodology/approach
Using a range of published QFD examples, the paper applies a parametric bootstrap testing procedure to test the significance of the differences between the FWs, generating simulated random samples from a theoretical probability model. The paper then determines the significance, or otherwise, of the differences between the two most extreme FWs and between all pairs of FWs. Finally, the paper checks the robustness of different attribute rating scales (linear vs non-linear) in the context of these testing procedures.
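A rough sketch of such a test is given below, with invented ratings and a simple resampling of each requirement's empirical rating distribution standing in for the paper's fitted probability model; it examines whether the difference between the two largest FWs could plausibly be zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins, not the paper's data: ratings given by 30
# hypothetical respondents to 4 WHATs on a 1-5 importance scale, and a
# fixed 4x3 relationship matrix on the 9-3-1 scale.
ratings = rng.integers(1, 6, size=(30, 4))
relationship = np.array([[9, 3, 0],
                         [1, 9, 3],
                         [0, 3, 9],
                         [3, 0, 1]])

def final_weights(r):
    # FW of each HOW = mean importance of each WHAT times its relationship scores.
    return r.mean(axis=0) @ relationship

observed = final_weights(ratings)
second, top = np.argsort(observed)[-2:]    # indices of the two largest FWs

# Resample each WHAT's ratings from its empirical distribution and
# recompute the difference between the two largest FWs each time.
diffs = []
for _ in range(5000):
    resampled = np.column_stack([
        rng.choice(ratings[:, k], size=len(ratings), replace=True)
        for k in range(ratings.shape[1])
    ])
    fw = final_weights(resampled)
    diffs.append(fw[top] - fw[second])

diffs = np.array(diffs)
# Two-sided check: does the simulated distribution of the difference
# sit clearly away from zero?
p_value = min(1.0, 2 * min((diffs <= 0).mean(), (diffs >= 0).mean()))
print("observed FWs:", observed)
print("approx. two-sided p-value for the top-two difference:", p_value)
```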
Findings
The paper demonstrates that not all differences between the FWs of HOW attributes are in fact significant. In the absence of such a procedure, QFD practitioners have no reliable analytical basis for determining whether FWs are significantly different, and may wrongly prioritise one engineering attribute over another.
Originality/value
This is the first article to test the significance of the differences between the FWs of HOWs and to determine the robustness of the different strengths of rating scale used in the relationship matrix.
Zafar Iqbal, Nigel Peter Grigg, K. Govindaraju and Nicola Marie Campbell-Allen
Abstract
Purpose
Quality function deployment (QFD) is a planning methodology for improving products, services and their associated processes by ensuring that the voice of the customer has been effectively deployed through specified and prioritised technical attributes (TAs). The purpose of this paper is to enhance the prioritisation of TAs in two ways: through a computer-simulation significance test and a computer-simulation confidence interval. Both are based on permutation sampling, bootstrap sampling and parametric bootstrap sampling of the given empirical data.
Design/methodology/approach
The authors present a theoretical case for the use of permutation sampling, bootstrap sampling and parametric bootstrap sampling. Using a published case study, the authors demonstrate how these can be applied to given empirical data to generate a theoretical population. From this, the authors describe a procedure for deciding which TAs have significantly different priorities, and for estimating confidence intervals from the simulated theoretical populations.
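The sketch below illustrates the confidence-interval side of the idea, again with invented data and a plain nonparametric bootstrap of respondents standing in for the three resampling schemes the authors compare; percentile intervals for each TA's FW are read off the simulated population.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs (not the case-study data): respondent importance
# ratings for 4 WHATs and a fixed relationship matrix for 3 TAs.
ratings = rng.integers(1, 6, size=(25, 4))
relationship = np.array([[9, 3, 0],
                         [1, 9, 3],
                         [0, 3, 9],
                         [3, 1, 1]])

def final_weights(r):
    return r.mean(axis=0) @ relationship

# Bootstrap the respondents to build a simulated population of FWs,
# then read off percentile confidence intervals for each TA.
boot = np.array([
    final_weights(ratings[rng.integers(0, len(ratings), len(ratings))])
    for _ in range(5000)
])
lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)

for ta, (lo, hi) in enumerate(zip(lower, upper), start=1):
    print(f"TA{ta}: 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```

Intervals that clearly separate one TA's FW from another's give practitioners grounds for treating their priorities as genuinely different; the paper itself compares how permutation, bootstrap and parametric bootstrap sampling behave in this role.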
Findings
First, the authors demonstrate that the parametric bootstrap is not the only way to simulate theoretical populations: permutation sampling and bootstrap sampling can also be employed to generate them. The authors then obtain results from all three approaches and describe why the results of permutation sampling, bootstrap sampling and parametric bootstrap sampling differ. Practitioners can employ any of the approaches; the choice depends on how much variation in the FWs is required by the quality assurance division.
Originality/value
These methods provide QFD practitioners with a robust and reliable basis for determining which TAs should be selected for attention in product and service design. The explicit selection of TAs will help to achieve maximum customer satisfaction and save time and money, which are the ultimate objectives of QFD.