The implementation of Lean Six Sigma for the optimization of robotic process automation systems in financial service operations

Bart Lameijer (Business Analytics, University of Amsterdam, Amsterdam, The Netherlands)
Elizabeth S.L. de Vries (Business Consultant, Utrecht, The Netherlands)
Jiju Antony (Marketing, Operations and Systems, Newcastle Business School, Northumbria University, Newcastle upon Tyne, UK)
Jose Arturo Garza-Reyes (Centre for Supply Chain Improvement, University of Derby, Derby, UK)
Michael Sony (Analytics, Information Systems and Operations, Oxford Brookes Business School, Oxford Brookes University, Oxford, UK)

Business Process Management Journal

ISSN: 1463-7154

Article publication date: 4 July 2024

Abstract

Purpose

Many organizations are currently transitioning towards digitalized process design, execution, control, assurance and improvement. The purpose of this research is to empirically demonstrate how data-based operational excellence techniques are useful in digitalized environments, by means of the optimization of a robotic process automation deployment.

Design/methodology/approach

An interpretive mixed-method case study approach is applied, comprising secondary Lean Six Sigma (LSS) project data together with participant-as-observer archival observations. A case report is presented, comprising per DMAIC phase (1) the objectives, (2) the main deliverables, (3) the results and (4) the key actions leading to the presented results.

Findings

Key findings comprise (1) the importance of understanding how to acquire and prepare large system-generated data and (2) the need for better validation mechanisms for large system-generated databases. Furthermore, (3) the importance of the LSS project lead's contextual understanding of the process is emphasized, together with (4) the need for developments in the LSS foundational curriculum in order to be effective in digitalized environments.

Originality/value

This study provides a rich prescriptive demonstration of LSS methodology implementation for RPA deployment improvement, and is one of the few empirical demonstrations of LSS-based problem-solving methodology in Industry 4.0 contexts.

Citation

Lameijer, B., de Vries, E.S.L., Antony, J., Garza-Reyes, J.A. and Sony, M. (2024), "The implementation of Lean Six Sigma for the optimization of robotic process automation systems in financial service operations", Business Process Management Journal, Vol. 30 No. 8, pp. 232-259. https://doi.org/10.1108/BPMJ-08-2023-0640

Publisher: Emerald Publishing Limited

Copyright © 2024, Bart Lameijer, Elizabeth S.L. de Vries, Jiju Antony, Jose Arturo Garza-Reyes and Michael Sony

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Under the umbrella of Industry 4.0 (I4.0) various digital information technology (IT) based solutions are rapidly being adopted (Choi et al., 2022), primarily by manufacturing and (financial) services companies (McKinsey, 2021). Both in academia- and practitioner-based communities of practice the expectations of these developments are high: “Operational costs will dramatically reduce due to hyper automation” (Gartner, 2018), “Pioneers in I4.0/AI have up to 15% higher profit margins compared to their competitors” (McKinsey, 2021), and Davenport and Ronanki (2018, p. 110) show that three-quarters of the 250 surveyed executives “believe that I4.0/AI will substantially transform their companies within three years”. When looking into the collection of available I4.0/AI based technologies (see Choi et al., 2022 for an overview), robotic process automation (RPA) platform integrations are considered to make up a substantial portion of the growth in I4.0 based software implementations (Flechsig et al., 2022; Gartner, 2018). RPA is defined here as “the concept of using a software platform of virtual robots to manipulate existing application software in the same way that a human does to a process or transaction” (Suri et al., 2017). These virtual software robots are, despite the name, the equivalent of a software license (Lacity et al., 2016). To interact with the multiple workflow systems for which such virtual robots are deployed, they access the graphical user interfaces, just as humans would (Cewe et al., 2018).

To date numerous examples of case studies and empirical research have explored and confirmed the effectiveness of Lean Six Sigma (LSS) as data-based process improvement methodology in various contexts (De Mast et al., 2022). However, we are currently witnessing a rapid transition towards the digitalization of process design, execution, control, assurance and improvement in organizations (Lameijer et al., 2021), creating different contextual conditions wherein the effectiveness of LSS data-based problem solving needs to be explored.

Research to date has debated the integration and enhancement of LSS techniques with/by I4.0 technologies and the integration of I4.0 techniques in LSS frameworks (e.g. Chiarini and Kumar, 2021). Thereby the potential value of LSS for I4.0 or similar advanced information technology implementations became apparent (e.g. Bhat et al., 2021). However, academic efforts on advanced I4.0 technologies, RPA included, are predominantly devoted to technological developments instead of “examining the impacts of these emerging technological innovations within production and operations” (Heim and Peng, 2022). Moreover, the available implementation science predominantly focuses on manufacturing operations, with fewer examples in service operations (Spring et al., 2022). Hence, given the apparent potential of LSS data-based problem-solving methodology in I4.0 contexts and the absence of empirical research exploring feasible ways to apply it (Santos et al., 2020), we ask: “How can Lean Six Sigma be applied for the optimization of Robotic Process Automation software deployments?”

This paper contributes to the literature by empirically demonstrating how a new type of problem (i.e. optimization of RPA-based digitalized processes) can be overcome with existing data-based operational excellence (LSS) techniques (Lameijer et al., 2023b). Existing research has started to address the potential value of LSS for I4.0 or similar advanced information technology implementations, predominantly by process optimization and standardization before process automation (Rossini et al., 2019). Arguably, in complex, unstructured and dynamic business environments subject to I4.0 technology implementation, process standardization is not the only benefit LSS’s structured approach to problem solving might bring. Therefore, this study provides a rich prescriptive demonstration of LSS methodology implementation for RPA deployment improvement, thereby providing relevant lessons for practitioners and scholars alike who are active in process improvement in digitalized environments.

2. Literature review

Under the umbrella of Industry 4.0 (I4.0), (operations) management scholars have started to research use cases for digital technologies (e.g. Choi et al., 2022). Robotic process automation is one such knowledge-based technology, with the objective to automate processes, that has become available for use in operations management contexts.

2.1 Robotic process automation technology

Robotic process automation is a branch of process automation designed to improve process efficiency, effectiveness and consistency by reducing the manual, repetitive processing time typically spent while working with information systems (Cewe et al., 2018). Typically, manual and structured tasks are performed faster and with fewer errors by software robots. Moreover, such software robots can perform high-volume, low-variety, repetitive tasks based on the core information system’s graphical user interface (GUI), instead of having to have access via application programming interfaces (APIs) (Cewe et al., 2018). Therefore, the core workflow-supporting information technology (IT) infrastructure does not need to be changed: the software robot performs the tasks that used to be done by humans via the same interface, faster and typically more cost-efficiently.
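
To illustrate the GUI-level interaction described above, the following minimal sketch drives an application front-end the way a human clerk would. It uses the open-source pyautogui library as a stand-in for a commercial RPA platform; all coordinates and field values are hypothetical.

```python
# Illustrative sketch only: a "software robot" working through the GUI,
# not via an API. pyautogui stands in for a commercial RPA platform;
# all coordinates and values are hypothetical.
import pyautogui

pyautogui.click(420, 310)            # focus the client-ID field of the legacy app
pyautogui.typewrite("C-104392")      # enter a (fictitious) client identifier
pyautogui.press("tab")               # move to the next form field
pyautogui.typewrite("2021-03-01")    # enter the review date
pyautogui.click(640, 520)            # click the submit button
```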

Reports on the adoption of RPA applications are found in, among others, financial administrative (Lacity and Willcocks, 2017) and human resources management (Hallikainen et al., 2018) business functions. Typical RPA tasks are filling forms, logging, monitoring events, performing checks, sending e-mails and extracting data. The business objective of RPA is to automate existing processes that are defined and operational with human workers. Thereby RPA is considered “lightweight” IT, as it interacts via application front-ends. RPA is typically owned by business owners and is suitable for process automation that requires business and process expertise, as RPA software configuration requires (almost) no programming skills. Moreover, RPA-based application interactions run via the workflow systems’ user interface, therefore needing little to no integration or IT infrastructure changes, leading to lower development costs and faster development times (Lacity et al., 2016; Suri et al., 2017).

Reported benefits of RPA implementation comprise, among others (Santos et al., 2020), (1) the ability to operate 24 h a day, 7 days a week, (2) allowing human employees to engage with higher-order cognitive tasks involving problem solving and exception handling, (3) leading to new human occupations (RPA management and consulting, etc.), (4) reducing the dependency on outsourced (offshore) FTEs, (5) leading to faster and more consistent task execution (productivity), (6) in almost any workflow system, (7) with higher security (i.e. no back-end modifications needed), (8) faster deployment than traditional IT solutions and (9) greater scalability. Disadvantages comprise (1) RPA’s suitability for rule-based processes only and (2) its sensitivity to processing exceptions (i.e. it needs intensive human supervision in case of increasing process complexity) (Santos et al., 2020).

Hence, despite the reported benefits of RPA implementation, process selection or readiness criteria comprise predominantly static process and process-context prescribing factors (i.e. high volumes, low variety, high degree of process standardization, stable IT workflow environment, limited exception handling, high quality of data to be processed) (Santos et al., 2020). Consequently, reported industry implementation success rates vary (Flechsig et al., 2022). To date, a series of teaching cases exists explicating the dimensions on which RPA deployment leaders must make decisions (i.e. the scale of implementation, the degree of existing staff retraining needed, the risk of “process knowledge loss” in RPA-based employee-replacing implementations, etc.) (Barbosa et al., 2023; Mirispelakotuwa et al., 2023; Willcocks et al., 2017). Moreover, explorative empirical research on RPA implementation has emerged, addressing for instance the importance of continuous improvement after RPA deployment in high-variability logistics business settings (Krakau et al., 2021) and the value of RPA for lean management based process waste elimination (Gradim and Teixeira, 2022; Martins et al., 2023). More systematic enquiries into RPA implementation research have revealed an absence of industry-specific guidance or implementation models, and a general absence of practical validation of the procedural models presented to date (da Silva Costa et al., 2022; Krakau et al., 2021). Therefore, the various academic calls for empirical research on RPA implementation, covering among others implementation barriers, performance measurement and improvement (Da Silva Costa et al., 2022; Ylä-Kujala et al., 2023), and socio-technical implications (Danilova, 2019; Hartley and Sawaya, 2019; Syed et al., 2020), provide the rationale for this case study.

2.2 Lean Six Sigma process improvement methodology in digitalized environments

To assure efficient, effective and consistent operations, companies need to invest in process improvement. Depending on the degree to which operations are digitalized and supported by automated systems (e.g. automated workflow systems), process improvement methodologies can rely more on available process data (e.g. process mining and other artificial intelligence based algorithmic analytical techniques) or less (e.g. probabilistic statistics and other more anecdotal-data based techniques) (De Mast et al., 2022). A widely applied, globally standardized methodology for process improvement, adopted by many organizations, among others in the financial services industry (e.g. Heckl et al., 2010), is LSS, a combination of the Lean management and Six Sigma methodologies (Näslund, 2008; Shah et al., 2008).

Research on LSS in the context of process digitalization and automation is commonly referred to as the integration of Lean, Six Sigma or LSS and Industry 4.0 (I4.0) (Pongboonchai-Empl et al., 2023; Tissir et al., 2022; Skalli et al., 2022). Essentially past research explored (1) the integration/correlation of LSS techniques with I4.0 technologies (e.g. machine learning, neural networks, etc.) (e.g. Chiarini and Kumar, 2021), (2) the enhancement of LSS with I4.0 technologies (i.e. I4.0 techniques typically deployed in LSS DMAIC phases) and (3) the integration of I4.0 techniques in LSS frameworks. Within the first research category, research has specifically started to address the potential value of LSS for I4.0 or alike advanced information technology implementations, predominantly by means of process optimization and standardization before process automation (Rossini et al., 2019). Hence, we aim to contribute by identifying a new type of problem (i.e. RPA deployment optimization) for which existing operational excellence (LSS) techniques arguably are useful (Lameijer et al., 2023b).

Core to our argumentation is the theoretical notion of organizational knowledge creation processes, for which LSS is typically recognized as an effective vehicle (Lameijer et al., 2023a; Linderman et al., 2004). Original theories of organizational knowledge creation (Nonaka, 2009) stipulate the difference between tacit knowledge (i.e. not easily accessible knowledge nested in the heads and hands of employees or in the algorithms of machines) and explicit knowledge (i.e. codified knowledge in knowledge management systems), and the interaction of the two is found to typically result in the development of inter-organizational, accessible knowledge. The problem-solving nature of LSS methodology has been found to facilitate such processes of tacit versus explicit knowledge confrontation (Anand et al., 2010). By means of structured approaches and data-driven enquiry, presumptions and uncertainties are falsified and clarified, thereby enhancing situational understanding and hence effective solution deployment (Sin et al., 2015). Arguably, also in digitalized contexts, structured data-driven problem-solving approaches (i.e. LSS) could be feasible for identifying digitalized systems’ malfunctioning and complexities, and facilitate a process of root cause identification.

3. Research methods

The objective of this research is to empirically demonstrate how LSS DMAIC methodology, at the process level of analysis, can contribute to an increased understanding and improvement of digitalized service operations business processes in the context of an RPA implementation. An interpretive mixed-method case study approach (Meredith, 1998), comprising secondary LSS project data together with participant-as-observer archival observations, is applied. Case-study research is applied here because of the exploratory nature of our research question, and is deemed a powerful approach for the exploratory end of the spectrum of empirical research: identifying key issues; identifying relevant concepts, variables and factors; and identifying essential themes to be taken into account in more quantitative studies (Ketokivi and Choi, 2014).

3.1 Case description

The context of this case study is the organization-wide service operations business unit of a large financial services provider (referred to as FSP-NL for reasons of confidentiality). For the project of study, the LSS DMAIC methodology is used to improve the efficiency and effectiveness of the robotic software automated handling of customer due diligence (CDD) analyses in so-called know-your-customer (KYC) client-review processes. The topic of KYC in the financial services sector is currently a globally recognized top priority, driven by supra-national regulation on the prevention of money laundering and terrorism financing (European Commission, 2023). As a result, KYC operations are, apart from the investments in digital transformations, the primary source of operational cost growth for financial services providers as of 2022, with an estimated 15% of all personnel working on KYC-related matters (KPMG, 2022).

The project aimed to improve the efficiency and the quality of a partly RPA automated KYC process. In essence a KYC process is executed by a trained KYC officer with the objective to verify the authenticity of a customer and its finances (Figures 1 and 2).

During the implementation of the RPA solution, several issues arose, such as data quality issues and system-related errors. The client files to be reviewed were assigned to a KYC team, but as the volumes handled by the robots increased, the manual work increased as well. An LSS project was proposed to identify where the process could be improved to run the automated parts more effectively and speed up the handling of the client files.

The RPA-based automated workflow was based on proprietary, industry-standard software and was initially deployed after various iterations with the objective to assess the system’s effectiveness. It is important to note that the sense of urgency felt for the LSS project was not necessarily perceived as resulting from poor automation processes (i.e. requirement analyses, technical functionality determination, etc.), but was generally rooted in a sequence of unexpected and difficult-to-explain surprises about the RPA system’s functionality after deployment to production.

3.2 Data collection procedure

The project naturally followed the five DMAIC phases: Define, Measure, Analyze, Improve and Control. In the LSS project implementation process, project-progress records were kept to codify the lessons learned and hurdles encountered. Moreover, the principal authors were actively involved as participant-as-observer in the initiative of study within FSP-NL. The authors’ FSP-NL contextual familiarity provided detailed first-hand knowledge about FSP-NL as a company and the employees involved with both the KYC processes and the LSS implementation (Gill and Johnson, 2002), contributing to the interpretation of the results and implementation challenges (Delbridge and Kirkpatrick, 1994). To ensure objectivity and mitigate participant-as-observer bias, archival data consisting of (1) meeting minutes from weekly project-progress steering-committee calls and (2) digital e-mail correspondence with key stakeholders in the implementation was searched for information that either confirmed or contradicted our emerging insights and findings.

4. Results

Next, for each phase a detailed description is provided, comprising (1) the objectives, (2) the main deliverables, (3) the results and (4) the key actions leading to achieving these results (emerging insights and lessons learned).

4.1 Define phase

Initially, general consensus revolved around (1) finding the root causes for excessive manual process handling times and (2) identifying opportunities for improving the software-robotic automated flow of the process. Solving these issues would facilitate the scalability of the process, thereby reducing manual processing needs and mitigating the risk of non-compliance.

4.1.1 Objective of the project

The project focused on the critical-to-quality (CtQ) indicators: the number of files done per day manually (CtQ1) and automatically by robots (CtQ2), and the first-time-right percentage (FTR%) per day for the manual work (CtQ3) and for the automatic robot work (CtQ4). The goal for the number of files per day was 300 in total (combining manual and robot numbers). For the FTR% the goal was set at more than 90% for manual work and at more than 60% for the robots. The 90% FTR for manual work was based on historical insights on FTR% and feasibility within the teams. The 60% FTR for robots was based on the outcomes of the measure phase of this project. In earlier stages the assumption was made that RPA would automate almost everything (approaching 100%) and would have minimal failures: the absence of human interventions and the programmed process steps would suggest a low number of mistakes. However, this had not been the outcome so far.

4.1.2 Anticipated benefits

The biggest risk for this financial institution was that financial regulators would withdraw its license to operate due to non-compliance with the law. In addition, operational costs could be reduced dramatically. Due to unforeseen fall out of the automated robotic solution and the higher complexity of the reviews, more analysts had been hired to execute the reviews. Based on these factors, the anticipated financial benefits were set at EUR 711,000 per year, calculated by anticipating a reduction of 11 FTEs, as broken down below and in Table 1. Moreover, preventing a fine from the regulator, which had fined other financial institutions facing the same challenges, was a top priority. However, the main goal was to meet the deadline for processing the backlog of clients – and so prevent losing the operating license – without adding any extra FTEs (Table 1).
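
Consistent with the hard benefits in Table 1, the anticipated amount decomposes as follows:

\[
8 \times \mathrm{EUR}\,51{,}000 \;+\; 3 \times \mathrm{EUR}\,101{,}000 \;=\; \mathrm{EUR}\,408{,}000 + \mathrm{EUR}\,303{,}000 \;=\; \mathrm{EUR}\,711{,}000
\]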

4.1.3 SIPOC and scope

To create a clear scope and an overview of the relevant parties that play a role in the process, a SIPOC was created. The process starts the moment business clients are identified as fit for the process, meaning they comply with the rule set determining that only the basic requirements set by law are to be analysed and no enhanced analyses are needed. The process ends the moment the files are reviewed as such, manually or by robot (Figure 3).

4.1.4 Stakeholder analysis

As this is a high-value process with many risks at stake, many stakeholders were involved. Sessions had to be held one-on-one and in bigger workgroups to determine their needs, concerns and cooperation. As the deadline was set within a year, all of this alignment had to take place with high priority, and higher management had to be involved for prioritization and steering. Furthermore, an analysis of the process could give employees the feeling of an upcoming reorganization, which could lead to employee turnover or uncertainties about their jobs, which had to be prevented where possible.

4.1.5 Project organization

The project organization consisted of a project lead, an expert on robotic solutioning for business purposes, the manager of the analysis department and two business analysts who were familiar with the way of working of the processes and the work instructions. As the team combined knowledge about processes, projects, content, IT and management, this multidisciplinary team was able to look at the issues from multiple perspectives, which led to fruitful discussions and efficient and effective analyses and decision-making.

4.1.6 Emerging insights and lessons learned in the define phase

Realistic and accepted target setting: Determining the right goals for the CtQs turned out to be most difficult, as setting goals for robots instead of humans was something new. It seems easy to set goals for automatic solutions, thinking that as long as it is programmed, you are in control of the outcome. However, unforeseen problems with the robotic solutions taught us that even automatic solutions have human- and data-related wastes, which influence the feasibility of the goals that were set. Only after the analysis phase could realistic goals be determined. For example, quality was said to be particularly important when starting the project; however, after analysis it appeared that quality was already remarkably high for the manual work (source: define-phase participant-as-observer correspondence). Estimates of how many files could realistically be finished and how much benefit could be achieved changed constantly during the study, as measurements and analyses provided additional insight, leading to redefinitions of the project charter (source: define-phase steering committee documentation).

4.2 Measure phase

To measure the critical-to-quality (CtQ) indicators (Figures 4 and 5), a measurement plan was set up, which was validated by the data owners. The data was then collected based on this plan, and data wrangling took place to obtain a complete and correct data set for the analysis phase.

4.2.1 Measurement plan and data validation

To measure the number of files and the FTR%, a first draft of the data collection form was developed and discussed with the data analysts who own the data in the workflow system. Data was accessible through the workflow system for all client files. After validation of twenty samples (Figure 6), the data was good enough to extract information about the number of files per day, the quality per file and the executor (robot or human). The ultimate selection of 20 samples was based upon the principle of saturation, meaning that samples were drawn until the identified potential data validity risks were no longer complemented, contradicted or nuanced by the newly sampled records, signalling the emergence of information saturation (Saunders et al., 2018).

4.2.2 Data collection

Data was collected from the workflow systems, in which statuses and their time stamps were kept automatically when an analyst proceeded to the next process step. For this project, the status “Completed” was the one to focus on to measure the number of files done per day manually (CtQ1). To measure the FTR% per day for the manual work (CtQ3), the FTR files in Microsoft Excel were used, in which analysts manually saved the time stamps for their files. RPA system output files were used to measure the number of files done per day automatically by robots (CtQ2) and the FTR% per day for the automatic robot work (CtQ4).
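
As an illustration of how such a CtQ can be derived from system data, the sketch below counts daily “Completed” statuses; the export file, column names and human/robot flag are assumptions, not the case system’s actual schema.

```python
# Hypothetical sketch: deriving CtQ1 (# files completed manually per day)
# from workflow-system status time stamps (column names assumed).
import pandas as pd

wf = pd.read_csv("workflow_export.csv", parse_dates=["status_timestamp"])
completed = wf[(wf["status"] == "Completed") & (wf["executor"] == "human")]

# Count completed files per calendar day to obtain the daily CtQ1 series.
ctq1 = completed.groupby(completed["status_timestamp"].dt.date).size()
print(ctq1.describe())
```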

4.2.3 Data wrangling

The different files that were used were exported to CSV (comma-separated values) files and merged with Power Query (a data processing application) because of the file sizes. The data covered the period November 2019 to March 2021, because the first data collection for the CtQ FTR% per day for the manual work (CtQ3) started in November 2019 and the measurement phase of this study started in March 2021. Data per client file was only valuable when it covered the full process including all in-between steps; otherwise manual or technical interventions, like skipping steps in the process, would influence the outcome. For the robot FTR%, only days with runs were taken into account; otherwise days without runs would register an FTR% of 0% and distort the outcome.
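
A minimal pandas sketch of these wrangling steps, under assumed file and column names (the project itself used Power Query and Microsoft Excel):

```python
# Sketch of the data wrangling described above; pandas stands in for
# Power Query, and all file and column names are hypothetical.
import glob
import pandas as pd

N_STEPS = 12  # assumed number of in-between process steps per client file

# Merge the separate CSV exports into one data set.
frames = [pd.read_csv(p, parse_dates=["date"]) for p in glob.glob("exports/*.csv")]
data = pd.concat(frames, ignore_index=True)

# Restrict to the measurement window (November 2019 - March 2021).
data = data[(data["date"] >= "2019-11-01") & (data["date"] <= "2021-03-31")]

# Keep only client files that passed through the full process.
complete = data.groupby("file_id").filter(lambda g: g["step"].nunique() == N_STEPS)

# Daily robot FTR%: days without runs are simply absent from the groupby,
# so they cannot register as 0% and distort the outcome.
robot = complete[complete["executor"] == "robot"]
ftr_daily = robot.groupby(robot["date"].dt.date)["first_time_right"].mean() * 100
```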

4.2.4 Emerging insights of the measure phase

Complexity in data preparation: The formats of the data were already determined by the system and by earlier decisions, which made it harder to fit them into the required templates. Although the data was validated in the beginning, the templates did not fit at once, as the exports of the files sometimes shifted the fields and the data in them. Many files had to be compared and merged. Additionally, there was substantial missing data within the files, which had to be removed or filled in based on information from other files (source: measure-phase participant-as-observer correspondence). In theory, Minitab was to be used for the measurement and analysis phases; however, Power Query and Microsoft Excel were more practical and easier to use for merging the vast number of files.

Redundancy of measurement system analysis: Theory also required a Gauge R&R or Kappa study for the measurement plan. This appeared not feasible, as files could not be handled twice due to system entries. The CtQ data was generated by predefined rules in the workflow tool and was not influenced by opinions, only by facts; therefore extra controls on the measurement system seemed of little use (source: measure-phase participant-as-observer correspondence).

4.3 Analysis phase

In the analysis phase, the current state of the CtQs as well as the influence factors were determined. Data analyses in Minitab were performed. Additionally, a brainstorming session, value stream mapping (VSM) and a failure mode and effect analysis (FMEA) were executed.

4.3.1 Current state CtQ1-2: number of files

Minitab was used to measure the current state of the CtQs. The number of files done per day manually was on average (x̄) 33.43 files with a standard deviation (s) of 20.45 files. On 25.41% of the days the number of files was below the lower specification limit (LSL) (Figure 7), as corroborated by the lower specification limit process capability (PPL = Ppk). Assessment of normality revealed that none of the tested distributions (normal, lognormal, Weibull and the 3-parameter versions of the latter two) fitted adequately. Therefore, we chose not to engage in parametric PCA-based predictions, but merely to focus on diagnostic analysis of the sample data.

For the robot, an atypical data set was used with many zero values due to days when no input was delivered to the robots. The number of files done per day automatically by the robot was on average 308.4 when the robot was running, with a standard deviation of 252 files, and on 47.62% of the days the number of files done by the robot was below the LSL. When including the days with no runs in the sample, the average number of files per day done by the robot was 232.9 files, with a standard deviation of 255.1 files, and on 60.71% of the days the robot performed below the LSL, again as corroborated by the lower specification limit process capability (PPL = Ppk). Here too, assessment of normality revealed that none of the tested distributions (normal, lognormal, Weibull and the 3-parameter versions of the latter two) fitted adequately; hence we chose not to engage in parametric PCA-based predictions, but merely focussed on diagnostic analysis of the sample data.

When including the days when the robot was not running, the norm was not met on average. Variation was large, with large differences per day in input, which apparently influenced reaching the goal (Figure 8).
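
As an illustration of these diagnostics, the sketch below computes the share of days below the LSL, the corresponding PPL (= Ppk) index and simple distribution-fit checks. Python’s scipy stands in for Minitab; the data file, column name and exact checks are assumptions.

```python
# Diagnostic capability summary for the robot's daily file counts
# (hypothetical export; scipy stands in for Minitab).
import pandas as pd
from scipy import stats

LSL = 300  # lower specification limit: the 300-files-per-day goal

daily = pd.read_csv("robot_daily_counts.csv")
runs = daily.loc[daily["files_done"] > 0, "files_done"]  # days with runs only

mean, sd = runs.mean(), runs.std(ddof=1)
ppl = (mean - LSL) / (3 * sd)  # lower capability index (PPL = Ppk here)
print(f"mean={mean:.1f}, sd={sd:.1f}, below LSL={(runs < LSL).mean():.2%}, PPL={ppl:.2f}")

# Distribution checks: in the case none of the candidate distributions
# fitted adequately, so only diagnostic summaries were used.
for dist in (stats.norm, stats.lognorm, stats.weibull_min):
    params = dist.fit(runs)
    print(dist.name, f"KS p-value={stats.kstest(runs, dist.name, args=params).pvalue:.3f}")
```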

4.3.2 Current state CtQ 3–4: FTR%

The FTR% per day for the manual handling of files was on average 90%, which was already on the norm of 90%. On 38.54% of the days the FTR% was below the LSL. However, as the average was already on the norm, this CtQ was not analysed further for improvement.

The FTR% per day of the robot was on average 33.29%. On 86.67% of the days, the robot performed below the norm of 60%. Here too, assessment of normality revealed that none of the tested distributions fitted adequately; hence we chose not to engage in parametric predictions and limited ourselves to diagnostic analyses.

This outcome of the FTR% for the robot was surprisingly bad and drew the direct attention of the stakeholders, as expectations of automatic solutions’ effectiveness were high and this outcome was far from the norm of 60% (Figure 9).

4.3.3 Updated project objectives

The conclusions based on descriptive statistics and diagnostics of the process data were:

  1. CtQ # files done per day

    • Number of files done manually is low (33 on average per day).

    • Robot can process many more files than required per day; however, the standard deviation is large, so predictability is low.

    • The robot has many days where there is zero input. This seems to affect the outcomes.

  2. CtQ FTR% per day

    • FTR% of robot is far from the norm.

    • FTR% manual is already on the norm on average and gets better over time; it seems less important to improve in this project.

  3. Adjustments to project objectives and benefits

    • FTR% manual already conforms to the norm; no focus on this CtQ for improvement.

    • The initial estimation of FTR% automatic was too high; the objective for robot FTR% was changed to 60%, based on the current performance measured.

    • The benefits themselves do not change, as costs and purpose stay the same.

4.3.4 Diverging search for influence factors: data analysis, VSM and FMEA sessions

Apart from the influence factors that appeared from the exploratory data analysis (Figure 10), eight disturbances were identified from the FMEA (Figure 11). From the VSM, several process inefficiencies with possible improvements were identified. However, to convince the stakeholders, this study revealed the importance of showing factual data. Stakeholders were already aware of the workflow system generated information and were looking for the “big fish” to improve. The session outcomes (FMEA, VSM) were mostly used to explain what was found in the exploratory data analysis.

4.3.5 Converging establishment of vital few influence factors and established effects

From the long list of trivial many potential influence factors, four vital few influence factors were found in the data, and their effects were established (Figure 12).

  1. Client legal entity had an effect on the CtQ manually handled files. The Mann–Whitney test (used because of non-normal residuals) showed a significant difference between the medians of the number of files handled per day manually for the two client types (n = 122 and n = 64) (P < 0.05). To estimate the effect of the legal entity more precisely, a general linear model (GLM) was fitted to determine the effect of the legal entity on the number of files handled. Explained variance was high (99.98%) and a difference of twenty-eight files per day manually handled was signalled due to the client type (i.e. client type 2) (Figure 13). A sketch of these tests is given after this list.

  2. Robotic process fall out influenced the number of files done automatically. The main reasons for process fall out, based on a Pareto analysis, were missing information from the client or clients that had exited their company (43%). If all process fall out were resolved, on average 304 more files could be finished per day (# 2 in Figure 14).

  3. The daily offered inflow of client files influenced the number of files done automatically. On average 233 files were done per day including days when the robot was not running, versus 308 files per day when the robot was running. On average seventy-five more files could be finished if the robot were to run every day with a predictable input (# 3 in Figure 14), which would also severely diminish the variability in the daily files to be handled by the robot.

  4. Technical issues influenced the FTR% automatic. 23% of all fall out was due to technical issues, so FTR% could be improved by 23% if these were to be prevented (# 2 in Figure 15).

  5. Finally, the client type also influenced the CtQ FTR% automatic. The Mann–Whitney test showed a significant difference between the medians of client type 2 (n = 254) and client type 1 (n = 0) (P < 0.05). Here too, GLM estimation was used to determine the effect of the legal entity more closely. Client type 2 had a significant effect (P < 0.05) on the FTR% automatic; client type 1 did not (P = 0.081). Overall, 41.46% of the FTR% automatic seemed to be explained by the client type 2 company; this was a small effect. The conclusion was that if the client type 2 robot was used, the FTR% was likely to be higher. The current mean FTR% automatic was 33%, while the improved mean FTR% automatic would be 44% (in case the client type 2 and client type 1 robots performed equally well). The effect was therefore estimated at 44% minus 33% = 11% per day (# 3 in Figure 15).
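
A minimal sketch of the two tests named above (the Mann–Whitney comparison and the GLM effect estimation), with Python’s scipy and statsmodels standing in for Minitab and with assumed file and column names:

```python
# Hypothetical sketch of the tests described above; scipy and statsmodels
# stand in for Minitab, and all file/column names are assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu
import statsmodels.formula.api as smf

df = pd.read_csv("daily_files.csv")  # assumed columns: files_manual, client_type

type1 = df.loc[df["client_type"] == 1, "files_manual"]
type2 = df.loc[df["client_type"] == 2, "files_manual"]

# Mann-Whitney U: non-parametric comparison of the two medians,
# chosen because the residuals were not normally distributed.
u_stat, p_value = mannwhitneyu(type1, type2)
print(f"Mann-Whitney U = {u_stat:.0f}, P = {p_value:.4f}")

# General linear model: estimate the size of the client-type effect
# on the number of files handled per day.
glm = smf.ols("files_manual ~ C(client_type)", data=df).fit()
print(glm.summary())
```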

4.3.6 Emerging insights of the analysis phase

Process contextual understanding for correct data interpretation: Many zero values were observed, which influenced the outcomes. Had we not been aware that these values were caused by manual actions, incorrect assumptions would have been made and different conclusions would have been drawn (source: analyse-phase participant-as-observer correspondence). Many times, Microsoft Excel was used again to redefine the data and make new data subsets.

Dominance of automated process workflow induced inefficiencies: People tend to think that waste is mostly caused in human processes and not in technical processes. However, this study showed that the first-time-right percentage for the robotic process was extremely low and the FTR% of the manual process remarkably high, the opposite of what stakeholders would expect (source: analyse-phase steering committee documentation). This caused relevant discussions among them on what the effectiveness of the process was and how to improve it. It underlined the importance of this study even more, but also took time to manage and inform these stakeholders on the next steps.

4.4 Improve phase

Client type proved to be a nuisance variable: it could not be prevented, only compensated for. The organization had to take into account the differences in handling client types and make sure this was part of the planning. Additionally, they had to consider handling client types with a higher risk of fall out first, as these would take more time.

Robotic process fall out was a controllable variable; the organization had to improve the business rules and look into new possibilities for automating manual work. Moreover, the organization had to raise technical issues with the IT teams to have them resolved as soon as possible.

Daily planning and the daily work offered to the robot were both a controllable variable and a disturbance. The organization had to create a data flow script and, in parallel, work on automation of in- and output creation by the robot itself, so that they were less dependent on manual steps in the process.

The process flow was redesigned with the following changes (Figure 16):

  1. Removed double steps in the process;

  2. Made the robot’s output the input for the next step, handled automatically by the robot, creating fewer handovers, less work for data analysts and constant input;

  3. Created a pull effect within the analyst department: no more assigning by the team lead, but picking up by analysts;

  4. Only started the process based on planning, so that there was a constant flow of input and, within the different steps, the parties had enough inflow and were able to handle the amount of work in time.

4.4.1 Emerging insights of the improve phase

Involvement of operations-, IT and managerial stakeholders: The stakeholders from IT, operations and the responsible management functions had been actively involved in the analysis phase and could therefore efficiently think along in the improve phase. Some actions could be taken up immediately, while others took more time to resolve (source: improve-phase steering committee documentation).

Fact-based decision making due to elaborate data-based problem analysis: Decisions could be made based on the effects that improvements were estimated to have in the analysis phase. Therefore, not much time was needed in the improve phase to convince the stakeholders. However, getting the prioritization right took time and effort, which could lead to delays in delivery. Management meetings helped in aligning those priorities (source: improve-phase steering committee documentation).

4.5 Control phase

In the control phase, several process documents and standard operating procedures (SOPs) were developed to ensure that the process was correctly executed. Additionally, a control plan was set up for the four CtQs, including roles and responsibilities (Figure 17).

Dashboards were created to monitor and act upon process performance results. To control the process and have everyone aligned, a weekly “Chain Meeting” was organized, where all important stakeholders were present, so a quick feedback loop was integrated in the process. The outcomes of these chain meetings were directly discussed the day after in the daily “Automated Execution” meeting, where the tasks and responsibilities for improvements were determined.
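
As an illustration of the dashboard-based monitoring described above, the sketch below flags days on which a CtQ breaches simple three-sigma limits; it is a rough stand-in for a proper I-MR control chart, and the file and column names are assumptions.

```python
# Illustrative dashboard logic: flag days on which a CtQ breaches simple
# three-sigma limits (a rough stand-in for a proper I-MR control chart;
# file and column names are hypothetical).
import pandas as pd

ctq = pd.read_csv("ctq_daily.csv")["files_automatic"]
center = ctq.mean()
sigma = ctq.std(ddof=1)
ucl = center + 3 * sigma
lcl = max(center - 3 * sigma, 0)  # daily counts cannot be negative

signals = ctq[(ctq > ucl) | (ctq < lcl)]
print(f"center={center:.0f}, LCL={lcl:.0f}, UCL={ucl:.0f}, signals={len(signals)}")
```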

The following improvement actions would help to reduce errors and prevent their recurrence:

  1. Create a data flow script and, in parallel, work on automation of in- and output creation by the robot itself. New robots would take this directly into their scripts. This would make sure that files were not left “hanging” in the process (WIP) and that input volumes were always constant;

  2. Use robots in handling files and improve the business rules, so that more would be done automatically and less had to be done manually;

  3. Minimize the number of issue names and establish clear routing: this would provide more clarity and less variability in what could be done wrong;

  4. Give more people access to the IT environments, so that there were no days without input when the only person able to provide it was not working.

4.5.1 Benefit realization and tracking

Direct material benefits resulting from this project comprise the reduction of the number of process managers from five to three, saving €202,000 p.a. After the implementation of each subsequent improvement, dashboards were monitored to verify that the foreseen effect indeed followed from the improvement.

5. Discussion and future research

This section covers the predominant insights and theoretical contributions of our mixed-methods case study, for which the collection of emerging insights is summarized and discussed next (Table 2).

First, employee and management commitment is a long-known enabling factor in LSS implementations (Schroeder et al., 2008), and was reaffirmed as equally important for the optimization of a digitally deployed vis-à-vis a solely manually deployed workforce (Quaadgras et al., 2014). Furthermore, three themes emerged from our analysis.

5.1 Effective problem solving approach for RPA process automation optimization

Emerging insights 2 and 6 revolve around the unforeseen problems with the robotic solutions, which appeared to have human- and data-related root causes and proved the initial objectives to be unrealistic. It appeared the involved workers and managers thought that waste is mostly caused in human processes and not in technical processes, while the contrary turned out to be the case.

The information management literature has long acknowledged the ambiguous relation between investment in information technology (IT) and performance effects (Brynjolfsson and Hitt, 1996). Typically, the empirical studies on the business value of IT consider IT to be a uniform aggregate asset, and only little empirical work has analysed the economic impact of specific types of IT investments (such as RPA) (Enholm et al., 2022). Explanations for the unclarity about the performance effects of IT investment revolve, among others, around assumed lagged effects due to the learning that must take place for optimal utilization, and around mismanagement of the IT implementation and maintenance processes (Stratopoulos and Dehning, 2000). In the presented case such lagged performance effects were arguably apparent. Factual LSS-based analysis revealed that initial RPA operations were not performing as expected, and an iterative approach focusing on one problem at a time was engaged in. Management was made aware of and learned about the specific amendments needed, and the team overseeing RPA operations better understood how to maximize RPA deployment benefits. Thereby LSS project-based learning was the vehicle for identifying root causes, testing solutions’ effects and implementing improvements. The organizational learning that LSS-based problem solving facilitates was thereby corroborated for I4.0/RPA contexts, extending earlier organizational learning-theory based research (Sin et al., 2015).

Moreover, prior research has explored the feasibility of LSS and DMAIC-based process analysis and improvement (1) prior to introducing RPA-based solutions (Chiarini and Kumar, 2021) and (2) in traditional IT infrastructural settings (i.e. ERP) (Su et al., 2006). Apart from an education programme proposal that calls for integration (Money and Mew, 2023), the implementation of LSS-based problem solving for RPA process automation optimization has not been demonstratively reported before.

5.2 Need for big data preparation and validity assessment procedures

The insights that emerged under 3 and 4 comprised the complexity in data retrieval due to data availability and quality issues. It appeared that querying the RPA workflow system data resulted in several initial errors, leading to extensive manual data collection, interpretation and integration exercises. Moreover, it appeared unfeasible to assess the workflow system’s data validity by means of techniques that assess the probability of measurement system agreement.

The integration and use of large unstructured datasets (Big Data Analytics) in LSS-based projects is commonly acknowledged for (1) the selection of feasible areas of improvement (Koppel and Chang, 2021) and (2) the ramifications for secondary historical data collection and pre-processing procedures (also known as data wrangling) (Lameijer et al., 2021; Laux et al., 2017; Zwetsloot et al., 2018). The concept of measurement validity of system-generated data, however, has received less attention to date. In controlled observations or data collection procedures, LSS project leaders are responsible for safeguarding measurement validity before, during and after the data collection. By deciding to use secondary historical data, there is typically a gain in representativeness of the data (i.e. more sampled observations, capturing a larger share of the variety in the population), but a loss in validity of the data (i.e. little to no control over the design and execution of the automated measurement system). Then, typically, only after-the-data-collection procedures to assure data validity remain to be applied (De Mast et al., 2022). The detailing of such procedures for application in LSS initiatives is absent to date, despite a growing need (i.e. ever-ongoing digitalization and system data generation) and the acknowledgement of the need to understand and assess system functionality (and hence valid data generation) in adjacent fields (e.g. black-, grey- and white-box testing in system development research) (Runsha et al., 2021).

For instance, Qiu et al. (2018) present how the use of big data introduces all sorts of adverse effects, such as biases due to noise data, measurement errors introduced by the software tools used to process the data, or the selection of inaccurate proxies for variables of interest. Moreover, inherent ethical risks imposed by the use of big data comprise, among others, a lack of transparency (i.e. openness about how data is collected, processed, compiled and disseminated), the need for an informed use of information (i.e. by providing meta-data capturing quality frameworks’ adherence in collection procedures) and selection biases (i.e. a lack of understanding of the governing dimensions that ultimately led to the compiled dataset) (Tam and Kim, 2018). Therefore, we call for future research on, and the operationalization of, a “validity first” (Saracci, 2018) approach for the use of existing large historical datasets in LSS initiatives, in which structured approaches to assess system data validity are explored and developed.

5.3 Prerequisite idiosyncratic contextual understanding of automated processes

Finally, emerging insights 5 and 7–9 all related to the importance of LSS project leaders’ factual understanding of the automated process and the context it is operated in. The importance of project managerial ownership and commitment has been acknowledged (Lameijer et al., 2022), and in our case specifically, understanding the biased data that the RPA system generated and factually estimating the designed solutions’ effects proved pivotal for correctly analysing the data and selecting the appropriate improvements.

Familiarity with and understanding of digitally operated processes is thereby stipulated as a prerequisite for LSS project leaders to be effective in digital working environments. Industry-standard LSS methodology curriculum-prescribing bodies (i.e. among others the American Society for Quality (ASQ, 2023)) typically do not yet address this growing need. On the other hand, new definitions of project leaders with fact-based problem-solving abilities who do have a general information technology fluency emerge (e.g. the “Analytics Translator”) (Henke et al., 2018). Hence, future research on the integration of I4.0 and similar process digitalization developments, and how these affect and could or should be integrated in the foundational curriculum for LSS project leaders, is called for.

6. Conclusion

This mixed-methods case study into the LSS-based improvement of an RPA deployment in a service operations setting provided a confirmatory demonstration of the DMAIC-phased LSS approach. In the process of implementation, emerging insights have been captured, summarized and discussed. Apart from the theoretical contributions and future research opportunities identified in the discussion section, the practical implications that resulted from this study comprise the awareness and knowledge of the applicability of, and key learnings on, LSS methodology application specifically relevant in the context of an RPA deployment.

Practically, the implications for professionals resulting from this research are severalfold. First, the importance of employee and managerial involvement, information and education was corroborated for the ultimate success of effective LSS-based RPA workflow optimization. LSS project-based learning is demonstrated to be an effective vehicle for identifying root causes, testing solutions’ effects and implementing improvements, and the stakeholder-inclusive structured approach is demonstrated to help in managing expectations and facilitating contributions. Second, the trade-offs in selecting data for LSS project-based problem solving are made concrete. Apart from the call for more concrete guidance to assess historical data validity, practical advice for professionals is given, comprising awareness of noise data, measurement errors introduced by the software tools used to process the data, the selection of inaccurate proxies for variables of interest, a lack of transparency (i.e. unclarity about how data is collected, processed, compiled and disseminated), the need for an informed use of information (i.e. by seeking meta-data about quality frameworks’ adherence in collection procedures) and selection biases (i.e. a lack of understanding of the governing dimensions that ultimately led to the compiled dataset). Third, familiarity with and understanding of digitally operated processes is put forward as a prerequisite for LSS project leaders to be effective in digital working environments. Developing an understanding of and familiarity with the design principles and actual workings of RPA is thereby advised for professionals active in the context of LSS-based problem solving and I4.0/RPA.

Theoretically, this study thereby provides a demonstration of practically feasible measures to mitigate, for instance, the risk of “process knowledge loss” in RPA-based employee-replacing implementations (Mirispelakotuwa et al., 2023). Moreover, existing research advocating the importance of continuous improvement after RPA deployment in supply-chain logistics (Krakau et al., 2021) is complemented by also demonstrating the importance of continuous improvement methodology for RPA implementations in financial service operations. Finally, prior research showcasing the value of RPA for lean management based process waste elimination (Gradim and Teixeira, 2022; Martins et al., 2023) is complemented by demonstratively providing evidence for the bi-directional synergetic effect of LSS-based problem solving in the context of RPA implementation.

The main limitation of this study is its scope: the financial services sector. This case study demonstrated a single implementation in a financial services operations context. Typical process characterizations comprise differences in volume and variety, visibility and variability (Johnston et al., 2021). Financial service operations processes are typically characterized by relatively high volumes combined with relatively high variety (many exceptions in client-case handling) due to the complex nature of financial services (i.e. an intersection of plain retail operations with high regulatory, legal and risk-oriented standards). Moreover, process visibility is typically relatively high (i.e. many customer interactions whilst applications are in process) and processing variability (i.e. the pace of inflow and throughput in processes) is also typically substantial due to the relatively complex nature of financial services. This limits the presented case study to the delineation presented; implementation processes in other organizations may be idiosyncratic and differ (i.e. be more or less complex) in several aspects.

Figures

Figure 1: Manually executed KYC process

Figure 2: RPA automated KYC process

Figure 3: SIPOC

Figure 4: CtQ flowdown

Figure 5: CtQ operational definitions

Figure 6: Measurement validation

Figure 7: Histogram, time series and process capability analysis (PCA) for # files manual

Figure 8: Histogram, time series and process capability analysis (PCA) for # files automatic

Figure 9: Histogram, time series and process capability analysis (PCA) for FTR% per day automatic

Figure 10: Influence factors identified from the exploratory data analysis

Figure 11: Overview of influence factors identified in the FMEA and VSM sessions

Figure 12: Overview of vital few influence factors, their effects and improvement actions

Figure 13: Step chart effect estimation of client legal entity (client type 2 – client type 1)

Figure 14: Step chart effect estimation of fall out reduction (2) and improved daily planning (3)

Figure 15: Step chart effect estimation of fall out reduction (2) and estimation of client legal entity (client type 2 – client type 1) (3)

Figure 16: Overview of vital few influence factors, their effects and improvement actions

Figure 17: Control plan

Business case calculation

Characteristics to be improved (CTQs) and current performance:

# Files done per day
- # Review files per day – manual: 33 files per day on average
- # Review files per day – automatic: 233 files per day on average

FTR%
- Automatic: 33%
- Manual: 90%

Benefits of the project for the customer:
- No unnecessary risk-related questions or restrictions for the customer due to a wrong assessment

Benefits for the business:
1. Lower operational costs, due to a reduction of FTE
2. Lower implementation costs, due to less human involvement
3. No fines or bank license restrictions for extending the deadline or for wrong assessments

Anticipated investments:
1. Analysts: 46.8 * €51,000 per year = €2,386,800 per year
2. Team lead: 3 * €101,000 per year = €303,000 per year
3. Process managers: 2 * €101,000 per year = €202,000 per year
4. Data analysts: 2 * €101,000 per year = €202,000 per year
5. Robotics: 1 team (6 FTE) = €500,000 to build up the team, then annual costs based on SLA

Hard benefits (= direct bottom-line monetary savings):
1. Analysts: 110,000 reviews to complete until the end of 2022 = 5,000 per month. The robot is expected to handle 60% = 3,000 per month, leaving 2,000 per month for the manual workflow. Current manual output = 1,733 reviews per month, a shortfall of 267 reviews per month, for which 8 additional FTE would otherwise be required: 8 * €51,000 per year = €408,000 per year
2. Process managers (now 5, goal is to reduce to 2): 3 * €101,000 per year = €303,000 per year

Soft benefits (= risk avoidance and nonmonetary benefits):
1. Quality conformance with the Anti-Money Laundering and Anti-Terrorist Financing Act
2. Better control of the steps taken in the process
3. Prevention of losing the bank license

Strategic benefits (= the project is an enabler):
- Lower operational costs

Source(s): Table by authors
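
For readers who wish to retrace the hard-benefits figure, the following short Python sketch reproduces the arithmetic from the table above. It is our illustration rather than part of the original project; all salary and workload figures come from the table, and the 22-month horizon is an inference from the stated 110,000 reviews at 5,000 per month.

    # Minimal sketch (ours, not the authors') reproducing the hard-benefits
    # arithmetic reported in the business case table.
    ANALYST_SALARY = 51_000            # EUR per analyst FTE per year (table)
    PROCESS_MANAGER_SALARY = 101_000   # EUR per process manager FTE per year (table)

    reviews_to_do = 110_000            # KYC reviews to complete until end of 2022
    reviews_per_month = 5_000          # as stated in the table
    months_horizon = reviews_to_do / reviews_per_month  # 22 months (inferred)

    robot_share = 0.60                                   # robot handles 60% of reviews
    manual_load = reviews_per_month * (1 - robot_share)  # 2,000 reviews/month remain manual
    manual_capacity = 1_733                              # current manual output per month
    shortfall = manual_load - manual_capacity            # 267 reviews/month uncovered

    extra_analyst_fte = 8              # FTE needed to absorb the shortfall without RPA
    analyst_savings = extra_analyst_fte * ANALYST_SALARY    # EUR 408,000 per year avoided
    pm_savings = (5 - 2) * PROCESS_MANAGER_SALARY           # EUR 303,000 per year avoided

    print(f"Horizon: {months_horizon:.0f} months")
    print(f"Shortfall without RPA: {shortfall:.0f} reviews per month")
    print(f"Hard benefits: EUR {analyst_savings + pm_savings:,} per year")

Running the sketch yields a shortfall of 267 reviews per month and combined hard benefits of €711,000 per year (€408,000 + €303,000), consistent with the table.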

Summary of emerging insights

Define
1. Importance of employee and management commitment
2. Realistic and accepted target setting for automation solution deployments

Measure
3. Complexity in data preparation
4. Redundancy of measurement system analysis

Analyze
5. Process contextual understanding for correct data interpretation
6. Dominance of automated process workflow-induced inefficiencies

Improve
7. Involvement of operations, IT and managerial stakeholders
8. Fact-based decision making due to elaborate data-based problem analysis

Control
9. Shared responsibility promoted in weekly meetings, so that everyone works on the solutions best fitting their responsibility

Data availability statement: The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to participant privacy restrictions.

References

American Society for Quality (2023), “Lean Six Sigma black belt course overview”, available at: https://asq.org/training/lean-six-sigma-black-belt-training-ssb (accessed 21 February 2023).

Anand, G., Ward, P.T. and Tatikonda, M.V. (2010), “Role of explicit and tacit knowledge in Six Sigma projects: an empirical examination of differential project success”, Journal of Operations Management, Vol. 28 No. 4, pp. 303-315, doi: 10.1016/j.jom.2009.10.003.

Barbosa, D., Cardoso, J., Alves, A., Mota, R. and Marques, C. (2023), “Robotization of training enrolment process in a continuous improvement department of a retail company”, Proceedings of the International Conference on Flexible Automation and Intelligent Manufacturing, Cham, Springer Nature Switzerland, pp. 433-440.

Bhat, V.S., Bhat, S. and Gijo, E.V. (2021), “Simulation-based lean six sigma for Industry 4.0: an action research in the process industry”, International Journal of Quality and Reliability Management, Vol. 38 No. 5, pp. 1215-1245, doi: 10.1108/ijqrm-05-2020-0167.

Brynjolfsson, E. and Hitt, L. (1996), “Paradox lost? Firm-level evidence on the returns to information systems spending”, Management Science, Vol. 42 No. 4, pp. 541-558, doi: 10.1287/mnsc.42.4.541.

Cewe, C., Koch, D. and Mertens, R. (2018), “Minimal effort requirements engineering for robotic process automation with test driven development and screen recording”, Proceedings of the BPM 2017 International Workshops, Springer International, pp. 642-648.

Chiarini, A. and Kumar, M. (2021), “Lean Six Sigma and Industry 4.0 integration for operational excellence: evidence from Italian manufacturing companies”, Production Planning and Control, Vol. 32 No. 13, pp. 1084-1101, doi: 10.1080/09537287.2020.1784485.

Choi, T.M., Kumar, S., Yue, X. and Chan, H.L. (2022), “Disruptive technologies and operations management in the Industry 4.0 era and beyond”, Production and Operations Management, Vol. 31 No. 1, pp. 9-31, doi: 10.1111/poms.13622.

Da Silva Costa, D.A., São Mamede, H. and da Silva, M.M. (2022), “Robotic process automation (RPA) adoption: a systematic literature review”, Engineering Management in Production and Services, Vol. 14 No. 2, pp. 1-12, doi: 10.2478/emj-2022-0012.

Danilova, K.B. (2019), “Process owners in business process management: a systematic literature review”, Business Process Management Journal, Vol. 25 No. 6, pp. 1377-1412, doi: 10.1108/bpmj-05-2017-0123.

Davenport, T.H. and Ronanki, R. (2018), “Artificial intelligence for the real world”, Harvard Business Review, Vol. 96 No. 1, pp. 108-116.

De Mast, J., Does, R.J., de Koning, H., Lameijer, B.A. and Lokkerbol, J. (2022), Operational Excellence with Lean Six Sigma: Handbook for Implementing Process Improvement with Lean Six Sigma, Van Haren, Hertogenbosch.

Delbridge, R. and Kirkpatrick, I. (1994), “Theory and practice of participant observation”, Principles and Practice in Business and Management Research, Vol. 1, pp. 35-62.

Enholm, I.M., Papagiannidis, E., Mikalef, P. and Krogstie, J. (2022), “Artificial intelligence and business value: a literature review”, Information Systems Frontiers, Vol. 24 No. 5, pp. 1709-1734, doi: 10.1007/s10796-021-10186-w.

European Commission (2023), “EU context of anti-money laundering and countering the financing of terrorism”, available at: https://finance.ec.europa.eu/financial-crime/eu-context-anti-money-laundering-and-countering-financing-terrorism_en (accessed 24 April 2023).

Flechsig, C., Anslinger, F. and Lasch, R. (2022), “Robotic process automation in purchasing and supply management: a multiple case study on potentials, barriers, and implementation”, Journal of Purchasing and Supply Management, Vol. 28 No. 1, 100718, doi: 10.1016/j.pursup.2021.100718.

Gartner (2018), “Robotic process automation software reviews and ratings: beyond tactical RPA”, Gartner, available at: https://www.gartner.com/reviews/market/robotic-process-automation-software

Gill, J. and Johnson, P. (2002), Research Methods for Managers, 3rd ed., Sage, London.

Gradim, B. and Teixeira, L. (2022), “Robotic process automation as an enabler of Industry 4.0 to eliminate the eighth waste: a study on better usage of human talent”, Procedia Computer Science, Vol. 204, pp. 643-651, doi: 10.1016/j.procs.2022.08.078.

Hallikainen, P., Bekkhus, R. and Pan, S.L. (2018), “How OpusCapita used internal RPA capabilities to offer services to clients”, MIS Quarterly Executive, Vol. 17 No. 1, pp. 41-52.

Hartley, J.L. and Sawaya, W.J. (2019), “Tortoise, not the hare: digital transformation of supply chain business processes”, Business Horizons, Vol. 62 No. 6, pp. 707-715, doi: 10.1016/j.bushor.2019.07.006.

Heckl, D., Moormann, J. and Rosemann, M. (2010), “Uptake and success factors of Six Sigma in the financial services industry”, Business Process Management Journal, Vol. 16 No. 3, pp. 436-472, doi: 10.1108/14637151011049449.

Heim, G.R. and Peng, X. (2022), “Introduction to the special issue on ‘technology management in a global context: from enterprise systems to technology disrupting operations and supply chains’”, Journal of Operations Management, Vol. 68 Nos 6-7, pp. 536-559, doi: 10.1002/joom.1216.

Henke, N., Levine, J. and McInerney, P. (2018), “You don’t have to be a data scientist to fill this must-have analytics role”, Harvard Business Review, Vol. 5.

Johnston, R., Shulver, M., Slack, N. and Clark, G. (2021), Service Operations Management, Pearson Education, Harlow.

Ketokivi, M. and Choi, T. (2014), “Renaissance of case research as a scientific method”, Journal of Operations Management, Vol. 32 No. 5, pp. 232-240, doi: 10.1016/j.jom.2014.03.004.

Koppel, S. and Chang, S. (2021), “MDAIC–a Six Sigma implementation strategy in big data environments”, International Journal of Lean Six Sigma, Vol. 12 No. 2, pp. 432-449, doi: 10.1108/ijlss-12-2019-0123.

KPMG The Netherlands (2022), “State of the banks: the road to post-pandemic recovery”, available at: https://kpmg.com/nl/en/home/insights/2022/04/state-of-banks.html (accessed 24 April 2023).

Krakau, J., Feldmann, C. and Kaupe, V. (2021), “Robotic process automation in logistics: implementation model and factors of success”, in Adapting to the Future: Maritime and City Logistics in the Context of Digitalization and Sustainability, Proceedings of the Hamburg International Conference of Logistics (HICL), Vol. 32, Epubli GmbH, Berlin, pp. 219-256.

Lacity, M., Willcocks, L.P. and Craig, A. (2016), “Robotic process automation at Telefonica O2”, MIS Quarterly Executive, Vol. 15 No. 1, pp. 21-35.

Lacity, M.C. and Willcocks, L.P. (2017), “A new approach to automating services”, MIT Sloan Management Review, Vol. 58 No. 1, pp. 41-51.

Lameijer, B.A., Pereira, W. and Antony, J. (2021), “The implementation of Lean Six Sigma for operational excellence in digital emerging technology companies”, Journal of Manufacturing Technology Management, Vol. 32 No. 9, pp. 260-284, doi: 10.5703/1288284317330.

Lameijer, B.A., Antony, J., Borgman, H.P. and Linderman, K. (2022), “Process improvement project failure: a systematic literature review and future research agenda”, International Journal of Lean Six Sigma, Vol. 13 No. 1, pp. 8-32, doi: 10.1108/ijlss-02-2020-0022.

Lameijer, B.A., Boer, H., Antony, J. and Does, R.J.M.M. (2023a), “Continuous improvement implementation models: a reconciliation and holistic metamodel”, Production Planning and Control, Vol. 34 No. 11, pp. 1062-1081, doi: 10.1080/09537287.2021.1974114.

Lameijer, B.A., De Mast, J. and Antony, J. (2023b), “How to publish operational excellence case studies in the IJLSS: a viewpoint article”, International Journal of Lean Six Sigma, Vol. 15 No. 2, pp. 469-478, doi: 10.1108/ijlss-02-2024-255.

Laux, C., Li, N., Seliger, C. and Springer, J. (2017), “Impacting big data analytics in higher education through Six Sigma techniques”, International Journal of Productivity and Performance Management, Vol. 66 No. 5, pp. 662-679, doi: 10.1108/ijppm-09-2016-0194.

Linderman, K., Schroeder, R.G., Zaheer, S., Liedtke, C. and Choo, A.S. (2004), “Integrating quality management practices with knowledge creation processes”, Journal of Operations Management, Vol. 22 No. 6, pp. 589-607, doi: 10.1016/j.jom.2004.07.001.

Martins, C.M.G., São Mamede, H. and Mira da Silva, M. (2023), “A lean approach to robotic process automation”, SSRN 4391981.

McKinsey (2021), “The state of AI in 2021”, available at: https://www.mckinsey.com/capabilities/quantumblack/our-insights/global-survey-the-state-of-ai-in-2021 (accessed 1 November 2022).

Meredith, J. (1998), “Building operations management theory through case and field research”, Journal of Operations Management, Vol. 16 No. 4, pp. 441-454, doi: 10.1016/s0272-6963(98)00023-0.

Mirispelakotuwa, I., Syed, R. and Wynn, M.T. (2023), “Is RPA causing process knowledge loss? Insights from RPA experts”, Proceedings of the International Conference on Business Process Management, Cham, Springer Nature Switzerland, pp. 73-88.

Money, W.H. and Mew, L.Q. (2023), “A proposal for combining project based learning and Lean Six Sigma to teach Robotic Process Automation development and enhance systems integration”, Information Systems Education Journal, Vol. 21 No. 2, p. 2.

Näslund, D. (2008), “Lean, six sigma and lean sigma: fads or real process improvement methods?”, Business Process Management Journal, Vol. 14 No. 3, pp. 269-287, doi: 10.1108/14637150810876634.

Nonaka, I. (2009), “The knowledge-creating company”, in The Economic Impact of Knowledge, Routledge, pp. 175-187.

Pongboonchai-Empl, T., Antony, J., Garza-Reyes, J.A., Komkowski, T. and Tortorella, G.L. (2023), “Integration of industry 4.0 technologies into lean six sigma DMAIC: a systematic review”, Production Planning and Control, pp. 1-26, doi: 10.1080/09537287.2023.2188496.

Qiu, L., Chan, S.H.M. and Chan, D. (2018), “Big data in social and psychological science: theoretical and methodological issues”, Journal of Computational Social Science, Vol. 1, pp. 59-66, doi: 10.1007/s42001-017-0013-6.

Quaadgras, A., Weill, P. and Ross, J.W. (2014), “Management commitments that maximize business impact from IT”, Journal of Information Technology, Vol. 29 No. 2, pp. 114-127, doi: 10.1057/jit.2014.7.

Rossini, M., Costa, F., Tortorella, G.L. and Portioli-Staudacher, A. (2019), “The interrelation between Industry 4.0 and lean production: an empirical study on European manufacturers”, The International Journal of Advanced Manufacturing Technology, Vol. 102 No. 9, pp. 3963-3976, doi: 10.1007/s00170-019-03441-7.

Runsha, D., Ya-Nan, Z., Yang, W., Xinzhou, C. and Lexi, X. (2021), “Proof of concept testing analysis of big data products”, Proceedings of the 7th International Conference on Signal and Information Processing, Networking and Computers (ICSINC), Singapore, Springer, pp. 1027-1034.

Santos, F., Pereira, R. and Vasconcelos, J.B. (2020), “Toward robotic process automation implementation: an end-to-end perspective”, Business Process Management Journal, Vol. 26 No. 2, pp. 405-420, doi: 10.1108/bpmj-12-2018-0380.

Saracci, R. (2018), “Epidemiology in wonderland: big Data and precision medicine”, European Journal of Epidemiology, Vol. 33 No. 3, pp. 245-257, doi: 10.1007/s10654-018-0385-9.

Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B. and Jinks, C. (2018), “Saturation in qualitative research: exploring its conceptualization and operationalization”, Quality and Quantity, Vol. 52 No. 4, pp. 1893-1907, doi: 10.1007/s11135-017-0574-8.

Schroeder, R.G., Linderman, K., Liedtke, C. and Choo, A.S. (2008), “Six sigma: definition and underlying theory”, Journal of Operations Management, Vol. 26 No. 4, pp. 536-554, doi: 10.1016/j.jom.2007.06.007.

Shah, R., Chandrasekaran, A. and Linderman, K. (2008), “In pursuit of implementation patterns: the context of Lean and Six Sigma”, International Journal of Production Research, Vol. 46 No. 23, pp. 6679-6699, doi: 10.1080/00207540802230504.

Sin, A.B., Zailani, S., Iranmanesh, M. and Ramayah, T. (2015), “Structural equation modelling on knowledge creation in Six Sigma DMAIC project and its impact on organizational performance”, International Journal of Production Economics, Vol. 168, pp. 105-117, doi: 10.1016/j.ijpe.2015.06.007.

Skalli, D., Charkaoui, A., Cherrafi, A., Garza-Reyes, J.A., Antony, J. and Shokri, A. (2022), “Industry 4.0 and Lean Six Sigma integration in manufacturing: a literature review, an integrated framework and proposed research perspectives”, Quality Management Journal, Vol. 30 No. 1, pp. 16-40, doi: 10.1080/10686967.2022.2144784.

Spring, M., Faulconbridge, J. and Sarwar, A. (2022), “How information technology automates and augments processes: insights from Artificial‐Intelligence‐based systems in professional service operations”, Journal of Operations Management, Vol. 68 Nos 6-7, pp. 592-618, doi: 10.1002/joom.1215.

Stratopoulos, T. and Dehning, B. (2000), “Does successful investment in information technology solve the productivity paradox?”, Information and Management, Vol. 38 No. 2, pp. 103-117, doi: 10.1016/s0378-7206(00)00058-6.

Su, C.T., Chiang, T.L. and Chang, C.M. (2006), “Improving service quality by capitalizing on an integrated Lean Six Sigma methodology”, International Journal of Six Sigma and Competitive Advantage, Vol. 2 No. 1, pp. 1-22, doi: 10.1504/ijssca.2006.009367.

Suri, V.K., Elia, M. and van Hillegersberg, J. (2017), “Software bots - the next frontier for shared services and functional excellence”, Proceedings of the Global Sourcing of Digital Services Workshop, Springer International, pp. 81-94.

Syed, R., Suriadi, S., Adams, M., Bandara, W., Leemans, S.J., Ouyang, C., Reijers, H.A., van de Weerd, I. and Wynn, M.T. (2020), “Robotic process automation: contemporary themes and challenges”, Computers in Industry, Vol. 115, 103162, doi: 10.1016/j.compind.2019.103162.

Tam, S.M. and Kim, J.K. (2018), “Big data ethics and selection-bias: an official statistician's perspective”, Statistical Journal of the IAOS, Vol. 34 No. 4, pp. 577-588, doi: 10.3233/sji-170395.

Tissir, S., Cherrafi, A., Chiarini, A., Elfezazi, S. and Bag, S. (2022), “Lean six sigma and industry 4.0 combination: scoping review and perspectives”, Total Quality Management and Business Excellence, Vol. 34 No. 3, pp. 261-290, doi: 10.1080/14783363.2022.2043740.

Willcocks, L., Lacity, M. and Craig, A. (2017), “Robotic process automation: strategic transformation lever for global business services?”, Journal of Information Technology Teaching Cases, Vol. 7 No. 1, pp. 17-28, doi: 10.1057/s41266-016-0016-9.

Ylä-Kujala, A., Kedziora, D., Metso, L., Kärri, T., Happonen, A. and Piotrowicz, W. (2023), “Robotic process automation deployments: a step-by-step method to investment appraisal”, Business Process Management Journal, Vol. 29 No. 8, pp. 163-187, doi: 10.1108/bpmj-08-2022-0418.

Zwetsloot, I.M., Kuiper, A., Akkerhuis, T.S. and de Koning, H. (2018), “Lean Six Sigma meets data science: integrating two approaches based on three case studies”, Quality Engineering, Vol. 30 No. 3, pp. 419-431, doi: 10.1080/08982112.2018.1434892.

Corresponding author

Bart Lameijer can be contacted at: b.a.lameijer@uva.nl
