Abstract
Purpose
This project aims to optimise a secondary agricultural company’s reporting and data lifecycle by providing self-service business intelligence at an optimal price point for all business users.
Design/methodology/approach
A Design for Lean Six Sigma approach, following the define, measure, analyse, design and verify (DMADV) methodology, was used to design a new reporting and data product lifecycle.
Findings
The study found that this approach allowed a very structured delivery of a complex program. The various tools used assisted greatly in delivering results while balancing the needs of the team.
Practical implications
This study demonstrates how improved data analysis and enhanced intelligence reporting in agribusinesses enable better decision-making and thus improve efficiency, allowing the agribusiness to leverage the learnings.
Social implications
Improving data analysis increases efficiency and reduces agrifood wastage, thus improving sustainability and environmental impacts.
Originality/value
This paper proposes creating a standardised approach to deploying Six Sigma methodology to correct both the data provisioning lifecycle and the subsequent business intelligence reporting lifecycle. It is the first study to look at process optimisation across the agricultural industry’s entire data and business intelligence lifecycle.
Citation
Trubetskaya, A., McDermott, O., Durand, P. and Powell, D.J. (2024), "Improving value chain data lifecycle management utilising design for Lean Six Sigma methods", The TQM Journal, Vol. 36 No. 9, pp. 136-154. https://doi.org/10.1108/TQM-01-2024-0020
Publisher
Emerald Publishing Limited
Copyright © 2024, Anna Trubetskaya, Olivia McDermott, Pierre Durand and Daryl John Powell
License
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
The agricultural industry in South Africa plays a vital role in the nation’s economy, contributing approximately 2.5% to the gross domestic product (GDP) and providing employment to over 800,000 individuals (Department of Agriculture, 2021). This sector grows diverse crops, encompassing maize, wheat, citrus fruits, grapes, sugarcane and various vegetables (Geldenhuys et al., 2020). Despite facing many challenges, such as climate change, water scarcity and land reform issues (Department of Agriculture, 2021), it demonstrates remarkable resilience. The industry is steadily gaining global competitiveness, particularly in the exports of citrus fruits and wines (IBRD, 2021).
The COVID-19 pandemic profoundly affected South Africa’s agricultural industry, presenting a spectrum of challenges spanning production and supply chain disruptions, labour shortages and market instability. Measures imposed by the government to curb virus transmission, including lockdowns, disrupted operations across the entire agricultural value chain, from on-farm production to processing and distribution (Boshoff and Kleynhans, 2020). The pandemic-induced changes also extended to the demand for agricultural commodities. Due to the closure of hospitality venues such as hotels, restaurants and other food service establishments, demand for fresh produce declined. In contrast, supermarkets saw an upswing in demand for staple foods such as maize meal and rice, spurred by panic buying (Abdulai et al., 2020). In the face of these challenges, the pandemic underscored the need to fortify the industry’s resilience, which is achievable through strategic investments in technology, infrastructure and human resources (Kruger and Bekker, 2020). The business world has increasingly recognised the importance of data collection, management and analysis in making informed decisions and gaining a competitive advantage (Kim and Hwang, 2020). Ensuring data quality and integrity, crucial to any data-driven decision-making process, is facilitated by adherence to best practices, leading to accurate and reliable insights and enabling well-informed business decisions (Chen et al., 2012). Efficient data management and business intelligence (BI) practices can enhance operational efficiency by minimising errors, streamlining processes and eliminating redundant tasks, leading to substantial time and cost savings (Negash and Gray, 2008).
The agricultural industry has increasingly relied on data and BI to optimise production processes, reduce waste and increase efficiency (Smith, 2017). Lean Six Sigma (LSS) is a widely used problem-solving approach in the agribusiness that combines Lean principles and Six Sigma methodologies (Folinas et al., 2014; Trubetskaya et al., 2023a, b, c). While Six Sigma methods reduce variability in processes, Lean tools aid in waste reduction and combined as LSS deliver synergistic benefits (George, 2002a, b). LSS has been applied in data architecture and BI practices (Eaton et al., 2023). However, these examples typically concentrate on each of these areas in isolation. Recent studies have discussed Lean 4.0 and LSS 4.0 and how LSS integrated with digitalisation have aided non-value add waste reduction (Nelson et al., 2022; Slattery et al., 2022).
Design for LSS provides a holistic approach to designing a new process that identifies and remediates process issues from various perspectives in an entirely new layout (Thomas and Singh, 2006). Design for Lean Six Sigma (DFLSS) can be utilised to optimise agricultural operations, streamline processes and identify and mitigate variations. DFLSS offers a robust framework particularly fitting for the intricate and dynamic nature of BI and data capabilities within the agricultural industry, where optimising resource utilisation, minimising waste and ensuring consistent quality are vital to success. To the authors’ knowledge, this is the first study considering the deployment of LSS across the entire data and BI lifecycle.
This research article aims to answer the following question:
Can the DFLSS methodology be used to manage the centralisation and optimisation of an agricultural organisation’s data and business intelligence lifecycle?
Section 2 outlines the literature review, Section 3 describes the methodology, whilst Sections 4, 5 and 6 present the study’s results, discussion and conclusions.
2. Literature review
2.1 The importance of big data and data collection
The business world has increasingly recognised the importance of data collection, management and analysis in making informed decisions and gaining a competitive advantage (Kim and Hwang, 2020). The advent of big data, BI and analytics has enabled organisations to capture, store and analyse massive amounts of data from various sources, such as social media, customer interactions and transactional data (Hwang and Kim, 2018). Despite the abundance of available data, many companies struggle to extract valuable insights from it: only 23.4% of executives considered their organisations data-driven, despite 97.2% investing in big data and artificial intelligence (AI) (Partners, 2017). To address this challenge, various frameworks, such as the technology acceptance model (TAM) and the technology-organisation-environment (TOE) framework, have been proposed to standardise reporting practices and help businesses make data-driven decisions (Ghobakhloo and Tang, 2015; Davis, 1989).
Efficient data pipelines and architectural practices are crucial for successfully implementing data and BI. The data pipeline involves collecting, processing and delivering data, and it is important to ensure its effectiveness (Lee et al., 2018). Architectural practices refer to designing and implementing the information technology (IT) infrastructure that supports the data pipeline and is crucial to ensure their scalability, security and flexibility (Kim and Hwang, 2020).
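To make the collect-process-deliver pattern concrete, the minimal Python sketch below outlines such a pipeline. It is illustrative only: the source names, quality rule and delivery step are placeholders, not a description of any particular organisation’s pipeline.

```python
# Minimal sketch of the collect -> process -> deliver pipeline pattern.
# Source names, the quality rule and the delivery step are placeholders.
from typing import Iterable


def collect(sources: Iterable[str]) -> list[dict]:
    """Gather raw records from each named source (stubbed here)."""
    raw = []
    for source in sources:
        # In practice this would query a silo system, an API or a file drop.
        raw.append({"source": source, "value": 1.0})
    return raw


def process(records: list[dict]) -> list[dict]:
    """Validate and standardise records; drop any failing quality checks."""
    return [r for r in records if r["value"] is not None]


def deliver(records: list[dict]) -> None:
    """Publish cleaned records to the reporting layer (stubbed as print)."""
    for record in records:
        print(record)


if __name__ == "__main__":
    deliver(process(collect(["erp", "crm", "silo_operations"])))
```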
In addition to outright failures, up to 50% of IT projects require significant rework to achieve their objectives, often resulting in cost overruns and delays that negatively impact overall project success. An International Data Corporation (IDC) report highlighted an essential aspect: 20%–25% of IT projects do not generate a return on investment (ROI) (IDC, 2022). This suggests that the value derived from these undertakings does not sufficiently offset their costs; such projects fail to fulfil their designated business objectives or deliver the projected benefits to the organisation.
Many complex and varied reasons lie behind the significant failure rate of IT projects. One common issue is inadequate project management, typified by deficient planning, ill-defined scope and poor execution. A lack of thorough planning is also an issue, marked by the failure to establish clear objectives, the inability to identify all stakeholders and the absence of a realistic timeline and budget (Trubetskaya et al., 2023a, b, c).
The lack of executive sponsorship is another contributing factor: without senior management’s endorsement, a project suffers resource scarcity and reduced prioritisation. In addition, evolving business requirements, the risk of technology becoming obsolete and fluctuating market conditions can also lead to project failure (PMI, 2022).
2.2 Lean Six Sigma
LSS tools can improve BI processes, emphasising the importance of data quality and accuracy (Mishra, 2022). Mishra recommends practical steps such as developing a project charter, identifying key performance indicators (KPIs) and implementing statistical process control tools to measure and monitor data quality, as illustrated in the supplemental material (Figure S-1). Data collection and measurement system analysis (MSA) were highlighted in other process improvement studies (Xie et al., 2018; Datta and Vardhan, 2017), which recommended tools such as process mapping, root cause analysis and statistical analysis to identify and address data collection and analysis issues.
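As an illustration of the statistical process control tools recommended above, the following Python sketch monitors a data-quality KPI (daily percentage of complete records) against 3-sigma control limits estimated from a baseline period. The figures are invented and the approach is a simplified individuals chart, not the tooling used in any of the cited studies.

```python
# Hedged sketch: flag days where a data-quality KPI falls outside
# 3-sigma control limits derived from a stable baseline period.
import statistics

baseline = [97.1, 96.8, 97.4, 97.0, 96.9]   # stable reference days (% complete)
monitored = [97.2, 92.3, 97.1]               # new days to check

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # upper/lower control limits

for day, value in enumerate(monitored, start=len(baseline) + 1):
    status = "ok" if lcl <= value <= ucl else "OUT OF CONTROL"
    print(f"day {day}: {value:.1f}% completeness -> {status}")
```

In this invented series, the dip to 92.3% is flagged for investigation while ordinary day-to-day variation passes unnoticed, which is the essence of using SPC to monitor data quality.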
Define, measure, analyse, improve, control (DMAIC) is the problem-solving framework or methodology employed in LSS (Nelson et al., 2022). DMAIC is typically used for process improvement, identifying root causes, analysing data and implementing solutions to improve existing processes (Pyzdek, 2003). There are examples of practical implementation of LSS methodologies in data-oriented projects within agriculture (Lee et al., 2018). The authors elucidate how many organisations operating in this sector lack the necessary expertise or resources for effective data analysis and interpretation. Offering a case study of an agricultural company that applied LSS to a data-driven project, they highlighted a 15% yield increase and a 20% waste reduction. Thomas and Chindarkar (2019) delved into how LSS can be leveraged to modernise BI systems, substantiating their findings through numerous exemplars of LSS techniques such as process mapping, value stream mapping and root cause analysis applied to BI system modernisation and underscoring the value of employing LSS methodologies. The primary merit of utilising LSS techniques for BI modernisation lies in the structured and data-driven approach to identifying and rectifying inefficiencies within BI systems. This can improve data quality, process efficiency and alignment with organisational goals. Furthermore, LSS methodologies can help organisations comprehend their data better, leading to more informed decision-making. However, the authors also remark on the challenging nature of the amalgamation of LSS and BI, highlighting the need for substantial expertise and resources. They caution that organisations may struggle to align their processes and data infrastructure and may experience resistance from employees accustomed to traditional BI systems. Furthermore, applying LSS techniques to BI modernisation may not be universally appropriate; organisations should therefore meticulously weigh the costs and benefits before adopting this approach (Thomas and Chindarkar, 2019).
2.3 Design for Lean Six Sigma
Design for LSS has been deployed both as an engineering design methodology and as a business problem-solving method. DFLSS has been used for the design of products and processes rather than for existing process improvement (Trubetskaya et al., 2023b; Ó Longaigh et al., 2023). Yang and Cai (2009) highlighted the complementary nature of design for Six Sigma (DFSS), lean product development and knowledge management in improving product value and quality. There are many different definitions of DFLSS. The literature originally referred to DFSS, which has evolved to incorporate Lean principles and become DFLSS. The DFSS process produces data that can show the way to achieve Six Sigma levels of quality (Johnson et al., 2006). Several acronyms exist for the methods and processes of DFSS and DFLSS, including define, characterise, optimise and verify (DCOV); define, measure, explore, develop and implement (DMEDI); and identify, define, develop, optimise and verify (IDODV). DFLSS emphasises the use of the voice of the customer as an important element of quality (Javier Lloréns-Montes and Molina, 2006; Cronemyr, 2007). The DFLSS methodology employs different statistical tools to analyse certain critical parameters. Tools such as Kano analysis and voice of the customer (VOC) can support the identification of data lifecycle management requirements, emphasising the importance of stakeholder engagement and process ownership in achieving sustainable process improvements (García-Alcaraz et al., 2019). Traditional data management approaches use systems to distribute data across functions without a systematic, holistic view, forcing companies to install custom cloud and archive solutions to accommodate their needs (Karpoff et al., 2016).
While DMAIC focuses on existing processes and their improvement, define, measure, analyse, design, verify (DMADV), conversely, is used for process design or the creation of new products or services, emphasising the definition of customer requirements, the design of the process or product and the verification of its performance. In agribusiness, where innovation and customisation are critical, DMADV can be advantageous in designing new agricultural technologies, products or services that meet specific customer needs (Chakravorty et al., 2014). By utilising DMADV, agribusiness practitioners can proactively design and verify novel solutions, mitigating risks and ensuring that the final product or process meets customer requirements and performs optimally.
Eaton et al. (2023) utilised DMADV to redesign a data management system and remove data deemed non-value add that did not contribute to satisfying customer requirements. While DFLSS focuses on product/process design and improvement, the cross-industry standard process for data mining (CRISP-DM) has been utilised with DFLSS to provide a structured framework for data science projects. Probabilistic design is an established practice used to address risk and uncertainty in design; it overlaps with DFSS and DFLSS in focusing on designing for quality (Koch, 2002). DFLSS for software has been utilised in software engineering processes to map and integrate the Software Engineering Institute (SEI) Capability Maturity Model (CMM) with the DMADV process steps in the software development lifecycle (Shenvi, 2008). The framework of requirements, architecture, design, implementation, integration, optimisation, verification and validation (RADIOV) has been integrated with software development to improve software quality design (Maass and McNair, 2009).
3. Methodology
3.1 Case study organisation background
This project is based on a South African secondary agricultural organisation that is nearly 100 years old. With almost a century of experience, its more than 5,000 employees help farmers overcome the challenges of running and growing modern Agri-grain businesses. The organisation is vital in handling and storing maize, accounting for nearly a third of all South African maize produced in a typical season (>5 million tonnes). As the closest silo operator to the major Gauteng market, its role in this commodity is crucial. The company also has operations and a footprint in rural towns, providing essential services to farmers. These services include storage and post-harvest solutions, credit and financial products, training, sales and service of mechanised equipment, commodity marketing and a full retail offering with everything from ammunition to fuel.
The company experienced challenges during and post-COVID-19. It established that although there were multiple companies within the group, they were not correctly integrated, and more parenting advantages could be gained in the current group structure. These benefits include access to resources such as financial capital, intellectual property and management expertise across business units, thereby leveraging economies of scale (Bartlett and Ghoshal, 1989).
The organisation decided to centralise any function that could create parenting advantage, improve efficiency, streamline communication and ensure better coordination between different business units or divisions. Part of the centralisation drive was to understand the full extent of data and reporting capabilities and ensure the needs of the business would be met in the future. The centralised IT team had to be able to support all business needs across the group. Within the organisation, the adoption of LSS or any structured continuous improvement methodology had typically been left to each business management team to decide on and implement, leading to a general lack of standards and standardisation.
3.2 Methodology utilised – DFLSS
DMADV is a data-driven methodology that focuses on designing new processes or products with robust quality from the outset rather than improving existing methods, as in DMAIC (Trubetskaya and Muellers, 2021). In the context of data lifecycle management, DMADV can be particularly useful when organisations want to implement new data management processes or systems or when significant changes are required to the existing methods (Eaton et al., 2023).
One of the key advantages of DMADV is that it emphasises early identification and mitigation of risks and uncertainties in the design phase, which can help prevent potential issues from arising later in the process (George et al., 2005). By proactively designing data management processes with quality and efficiency in mind, organisations can reduce the need for costly rework or corrections in the later stages of the data lifecycle.
Another benefit of DMADV is that it encourages a more customer-centric approach to process design. In the define phase, organisations can thoroughly understand customer requirements and expectations related to data lifecycle management and then use this information to guide the design and verification stages (Brophy et al., 2023). This customer-focused approach ensures that the resulting data management processes are aligned with the needs of the end-users and stakeholders, leading to higher customer satisfaction and improved process performance.
During the “Define” stage, tools such as project charters, the critical to quality (CTQ) tree and suppliers, inputs, processes, outputs, customers (SIPOC) diagrams were deployed. These tools assist in delineating the project’s scope and pinpointing customer requirements (George, 2002a, b). Project charters offer a comprehensive picture of the project, encapsulating its purpose, content and expected outcomes, while CTQ trees and SIPOC diagrams aid in mapping customer expectations to operational and process requisites.
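For illustration, a SIPOC diagram of a reporting process can be captured as simple structured data, as in the hedged Python sketch below; the entries are hypothetical, not the case organisation’s actual SIPOC.

```python
# Hedged sketch: a SIPOC diagram for a reporting process captured as data.
# All entries are hypothetical placeholders.
sipoc = {
    "Suppliers": ["ERP system", "CRM system", "silo operations"],
    "Inputs": ["transaction records", "client master data"],
    "Process": ["extract", "integrate", "validate", "publish"],
    "Outputs": ["client 360 dashboard", "financial reports"],
    "Customers": ["business units", "executives", "account managers"],
}

for stage, items in sipoc.items():
    print(f"{stage}: {', '.join(items)}")
```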
In the “Measure” stage, tools like data collection plans and MSA were used, as they ensure a structured approach to data collection and validate that the measurement systems are accurate and reliable. For the “Analyse” stage, resources like cause-and-effect diagrams, failure mode and effects analysis (FMEA) and hypothesis testing were generally used. These tools enable a profound comprehension of the correlations between process variables and the output (Antony, 2014).
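As a minimal example of the hypothesis testing used in the “Analyse” stage, the sketch below compares report-preparation times before and after data integration with a two-sample t-test; the data are invented purely for illustration.

```python
# Hedged illustration of "Analyse"-stage hypothesis testing: two-sample
# t-test on report-preparation times (hours). All values are invented.
from scipy import stats

before = [6.5, 7.2, 6.8, 7.5, 6.9, 7.1]   # manual collation
after = [2.1, 1.8, 2.4, 2.0, 2.2, 1.9]    # integrated platform

t_stat, p_value = stats.ttest_ind(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: preparation times differ significantly.")
```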
In the “Design” phase, tools such as design of experiments (DOE), simulation models and quality function deployment (QFD) were applied. DOE aids in testing and defining the impact and interactions of various process variables on the output, while simulation models help predict system behaviour (Montgomery, 2013). QFD ensures that customer requirements are systematically integrated into the design process (Hauser and Clausing, 1988).
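A small, hedged illustration of DOE: the sketch below enumerates the runs of a two-level full-factorial design for three hypothetical dashboard design factors. In practice each run would be executed and its response measured before analysing factor effects.

```python
# Hedged sketch: enumerate runs for a 2-level full-factorial design over
# three hypothetical dashboard design factors.
from itertools import product

factors = {
    "refresh_rate": ["hourly", "daily"],
    "aggregation": ["client", "region"],
    "layout": ["table", "chart"],
}

runs = list(product(*factors.values()))
print(f"{len(runs)} runs for a full 2^{len(factors)} factorial design:")
for i, combo in enumerate(runs, start=1):
    print(i, dict(zip(factors, combo)))
```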
Finally, the “Verify” stage often involves tools like pilot runs, implementation plans and control plans. These facilitate testing the newly designed process under controlled conditions before a full-scale implementation, planning the new process’s roll-out and maintaining control over the process to ensure its continued performance (Schroeder et al., 2008).
Selecting suitable tools within the DMADV process depends on project characteristics and requirements. For instance, a manufacturing organisation aiming to reduce defects might find FMEA more valuable in the “Analyse” phase, whereas a service-focused organisation emphasising customer satisfaction might heavily rely on CTQ Trees during the “Define” phase (Pande et al., 2000).
In summary, successful navigation through the DMADV cycle necessitates a profound understanding of these tools, the flexibility to employ them per the project’s needs and the ability to select the most suitable ones to achieve optimal outcomes.
Based on this understanding of DMADV and the fact that there were multiple processes and tools but no existing reporting for the client phase of the project, the DMADV approach of creating and designing a new process, rather than improving an existing one, fitted the organisational needs better. DMADV is utilised when a new process needs to be designed and there is no existing process to improve (Ryan et al., 2023).
4. Results
4.1 Define
In this initial phase, the project team defined the project goals, objectives, scope and deliverables. They also identified the customer requirements and determined the outcome required to solve the data and reporting issues. The current reporting and data environment had multiple platforms and non-integrated data sets. For an employee, reporting on or understanding their clients, reporting on financials or making decisions requiring data demanded many hours of collation and manual work. This is non-value-add waste: laborious and time-consuming, it leads to overprocessing waste where data is only trusted after copious fact-checking and validation, taking further time and causing frustration.
The team identified key customer preferences through an initial VOC analysis. The customers expressed a clear need for empowerment through self-service BI to eliminate unnecessary wait times. Furthermore, they emphasised the importance of making swift and accurate decisions facilitated by seamless data integration. Gaining a deeper comprehension of their customer base emerged as another priority, enabling them to effectively cater to their client’s needs and expectations.
As shown in Figure 1, the organisation’s data and reporting domains are many and contain non-value-added waste through the over-processing of data from multiple platforms, as well as inventory waste through the storage of large data banks and files. These platforms have largely grown in reaction to the business’ short-term needs rather than through a long-term, planned and controlled design. Figure 1 captures the top-level electronic data flow as part of a high-level process mapping and value stream map (VSM) exercise.
This reactive approach to reporting has led to multiple direct links between different data sources and a selection of reporting tools, known as stovepipe architecture. Stovepipe architecture is an organisational or system architecture where components or departments function independently, without integration (Huda, 2018). In such a scenario, different silos of information and functionality exist within an organisation or system, with no coordination or communication, making it difficult to make informed decisions or respond quickly to changing market conditions or customer needs.
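The scaling problem behind stovepipe architecture can be illustrated with a short calculation: with n systems, point-to-point integration requires on the order of n(n-1)/2 direct links, whereas a central data platform needs only one link per system. The sketch below is illustrative only.

```python
# Hedged illustration: interface counts for point-to-point (stovepipe)
# integration versus a central hub (data platform).
def point_to_point_links(n: int) -> int:
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    return n

for n in (5, 10, 20):
    print(f"{n} systems: {point_to_point_links(n)} direct links "
          f"vs {hub_links(n)} via a central platform")
```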
A “simplistic” VSM (Figure 2), following on from the mapping of data flow in Figure 1, was created as part of a larger VSM. It demonstrates the waste in the current state of month-end financial reporting at the outset of this project. During the current-state VSM analysis, the team identified numerous types of reporting required; these were then grouped based on the data source type and the user group needing that reporting functionality. This analysis showed that all types of non-value-added waste were present: data over-processing, data inventory, waiting for data, overproduction of data, defects in data reporting and huge process variability in the extract, download, work and upload of data. Designing client and financial reporting would form a foundation for all other reporting types. As the finance team in the organisation had already been allocated to several large strategic initiatives, with this capacity constraint in mind and the complete lack of client reporting in the organisation, the team decided to address the client reporting component first. As can be seen in Figure 3, the total data platform project was broken down into seven models, with the client prioritised first; the client 360° view was then further broken down into seven distinct pieces of work, called epics by the team, based on a combination of architectural dependency and VOC prioritisation.
4.2 Measure
The team constructed a comprehensive data collection plan aligned with the deliverables outlined in the VOC analysis. As shown in Figure 4, the first component of this plan was an outline of required data sources linked with the corresponding epic, allowing the team to understand and prioritise only the necessary data for ingestion into the planned platform.
In addition to this, the plan included an initial rough timeline of when each of the seven pieces of work would be addressed. Figure 5 shows the team’s stakeholder roles. Their influence on the design and business rules of the solution is pivotal, as is their role in approving the testing phase (Mumford, 2003). Subsequently, end users would be incorporated into the project as it matured to a stage where the work undertaken bore relevance to them.
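As an illustration of how the data collection plan’s mapping of sources to epics (Figure 4) might be represented programmatically, the sketch below uses hypothetical epic and source names; it is not the organisation’s actual plan.

```python
# Hedged sketch: a data collection plan mapping data sources to epics,
# with hypothetical names and priorities.
from dataclasses import dataclass, field


@dataclass
class Epic:
    name: str
    data_sources: list[str] = field(default_factory=list)
    priority: int = 0


plan = [
    Epic("Client 360 profile", ["crm", "erp_master_data"], priority=1),
    Epic("Client transactions", ["erp_sales", "silo_intake"], priority=2),
]

# Only sources needed for prioritised epics are queued for ingestion.
for epic in sorted(plan, key=lambda e: e.priority):
    print(epic.name, "->", epic.data_sources)
```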
4.3 Analyse
Amid an increasing focus on data privacy regulations and cybersecurity threats, compliance with data management and BI best practices helps maintain data security and regulatory compliance (Ranjan, 2009). Figure 6 shows the architectural future state as defined by best practice.
An array of emerging trends, such as escalating data volumes, real-time data requirements, novel data sources and types, new deployment or hybrid models in the cloud and the need for advanced analytics and machine learning are exerting unprecedented pressure on the conventional data warehouse paradigm (Haleem et al., 2021). IT professionals are therefore exploring novel means of modernising data warehousing to fulfil the shifting business needs.
In addition to this technical analysis, the team created a business requirements definition (BRD) document to capture the full requirements per epic and stakeholder in phase 1. The BRD contained several key items, including project objectives, scope for both data sources and required functionality, the business needs, the questions the business wanted answered, a proposed best-practice solution design specification, functional data domains and a comprehensive collection of business rules and KPIs that would need to be reported on or adhered to.
4.4 Design
As shown in Figure 7, the design and development process was enhanced through symbiotic design reviews and prototyping. Design reviews, encompassing cross-functional evaluations, were employed to gauge the progress and efficacy of the design. Prototyping, conversely, involved the creation of tangible or virtual models to verify and affirm the design’s functionality and user experience. Prototypes facilitated the validation of design concepts, promoting iterative refinement based on feedback and fostering effective communication amongst stakeholders.
An initial prototype was developed based on the BRD’s defined requirements and design concepts. It served as a starting point to convey the intended functionality and design of the data visualisation. User feedback and evaluation were gathered during this stage. Iterative enhancements were made based on this analysis – changes involved adjusting the visual design, refining data representation, improving interactivity or addressing usability issues. The objective was to align the prototype more closely with user needs, preferences, and the desired goals. Usability testing was conducted using the refined prototype. Users interacted with the prototype in a controlled environment, helping identify any remaining issues, such as confusing navigation, unclear labels or suboptimal user interactions. The feedback from usability testing informed further refinements. The refined prototype was validated against the initial objectives and requirements to ensure it met the intended goals. This process continued until the prototype achieved the desired usability, effectiveness, and user satisfaction. The final design was documented, including specifications, interaction guidelines and design principles for developing the actual data visualisation solution.
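The prototype-evaluate-refine cycle described above can be summarised as a simple loop. In the illustrative Python sketch below, a placeholder scoring function stands in for real user feedback, and the target threshold is invented.

```python
# Hedged sketch of the iterative prototype -> evaluate -> refine loop.
# usability_score is a stand-in for gathering real user feedback.
def usability_score(prototype_version: int) -> float:
    # Placeholder: assume each iteration improves the score somewhat.
    return min(1.0, 0.55 + 0.12 * prototype_version)

version, target = 1, 0.9
while usability_score(version) < target:
    print(f"v{version}: score {usability_score(version):.2f} -> refine")
    version += 1
print(f"v{version}: score {usability_score(version):.2f} -> validated")
```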
Figure 8 illustrates one of the initial prototype client reporting dashboard designs; it was created with fake data to validate and test the types of visualisations the business users had in mind and establish what would work best for their specific needs. The team created a total of sixty-eight variations of these visualisations throughout the design phase of this project. This work showed that using fake data to test the visualisations made the process much easier. As shown in Figure 9, each iterative step allowed the team to add detail and refine the underlying data model to enable the BI needs defined through the VOC process.
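The following sketch illustrates generating the kind of fake client data used to test the dashboard prototypes; the fields, value ranges and region names are invented for illustration.

```python
# Hedged sketch: generate fake client records for dashboard prototype
# testing. Fields and ranges are invented placeholders.
import random

random.seed(42)  # reproducible test data


def fake_clients(n: int) -> list[dict]:
    regions = ["Gauteng", "Free State", "North West"]
    return [
        {
            "client_id": f"C{i:04d}",
            "region": random.choice(regions),
            "maize_tonnes": round(random.uniform(50, 5000), 1),
            "credit_utilisation": round(random.random(), 2),
        }
        for i in range(n)
    ]


for row in fake_clients(3):
    print(row)
```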
4.5 Verify
The researcher and team now moved into a pilot phase for the project; clear objectives were defined for the pilot, along with success criteria. Specific aspects of the data visualisation and reporting solution to be tested and evaluated were determined, guiding the pilot and ensuring relevant feedback was gathered. Various scenarios, including different data types, volumes and complexity, were covered to test the solution’s capabilities thoroughly. End-users using the data visualisation and reporting solution were involved throughout the pilot. Access to the pilot environment was limited, and sensitive data was anonymised as required for customer confidentiality purposes.
A realistic environment was created for the pilot, closely resembling the production environment. Factors such as network connectivity, hardware infrastructure, software configurations and system integration were considered to identify any technical or compatibility issues early on. As can be seen in Figure 10, the solution was tested with realistic data volumes and user loads, ensuring it could handle expected usage patterns without significant performance degradation.
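A hedged sketch of the kind of volume test run during the pilot is shown below: timing a simple aggregation at increasing row counts to watch for performance degradation. The volumes and the aggregation are illustrative, not the project’s actual test cases.

```python
# Hedged sketch: time an aggregation at increasing data volumes to watch
# for performance degradation. Volumes and workload are illustrative.
import random
import time


def aggregate(rows: list[float]) -> float:
    return sum(rows) / len(rows)


for volume in (10_000, 100_000, 1_000_000):
    data = [random.random() for _ in range(volume)]
    start = time.perf_counter()
    aggregate(data)
    elapsed = time.perf_counter() - start
    print(f"{volume:>9,} rows: {elapsed * 1000:.1f} ms")
```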
Adequate training and support were provided to users participating in the pilot. The pilot process was documented, including any changes, issues encountered and lessons learnt.
As seen in Figure 11, an extensive change management and communication plan was implemented to ensure communication was maintained with relevant stakeholders throughout the pilot. This ensured that expectations were managed, that any resistance or concerns regarding the new data visualisation and reporting solution were addressed and that stakeholders were engaged in the pilot process, obtaining their buy-in for wider adoption.
Significant achievements marked the outcome of the project. The team developed twelve bespoke dashboards for client reporting, as illustrated in the supplemental material (Figures S-2 to S-13). Furthermore, customisable views were established for all identified client stakeholders and business units, offering flexibility and personalisation. In terms of infrastructure, a data lake was created alongside a modernised architecture, promising sustainable reporting and efficient data management over the long term. Lastly, a market opportunity exceeding fifty million euros within the existing client base was identified as a result of the new process.
5. Discussion
Consistent with the literature review, the project outcomes aligned with LSS principles indicate the potential for augmenting process efficiency, enhancing data quality and fortifying the alignment of decision-making processes with organisational goals (Snee, 2010). For instance, within this project, the newly introduced dashboards and updated architecture were outcomes of the DFLSS methodology, in which the team set the objectives, assessed the prevailing processes, scrutinised the data and consequently designed new ones.
In executing this project, the DMADV approach was a systematic framework for resolving problems and enhancing process performance (Saunders et al., 2016). It provided clear stages of operation, each accompanied by associated tools, such as project charters, SIPOC analysis, flowcharts, cause and effect diagrams, design reviews and prototyping. This project underscored the necessity of these tools within each phase of the DMADV methodology for successfully improving the full value chain of data lifecycle management in agri-business.
The design phase tools in particular aided the project. The iterative nature of prototyping and design reviews did not fit into the organisation’s traditional project management framework, whereas it is integrated in the DMADV model; classic upfront detailed design had to be replaced by frequent iterative improvements.
It is worth noting that while these tools and frameworks offer structure and direction, adhering to them too rigidly can present challenges. A vital part of LSS’s success is its flexibility and adaptability. Processes may differ substantially across different sectors or even between various projects within the same organisation. Consequently, the choice and application of Lean tools should be appropriately adapted to suit specific project requirements. Being overly strict in their application can lead to misalignment with actual project needs, potentially inhibiting the successful implementation of Lean strategies (Emiliani, 2008).
The integration of LSS and BI calls for considerable expertise and resources. Organisations must harmonise their processes and data infrastructure and might encounter resistance from personnel accustomed to traditional BI systems (Smith et al., 2020). This was evident in the rollout of the client reporting dashboards to the business users and necessitated the implementation of much stronger change management practices. Furthermore, LSS methods may not universally apply to BI modernisation initiatives (Pande et al., 2000).
This project’s first phase identified a market potential exceeding 50 million euros within the organisation’s existing client network. This discovery underscores the potential of LSS methods to unveil concealed opportunities in an organisation’s data assets (George, 2002a, b). Within an agricultural context, these insights can boost yield, mitigate waste and deliver substantial cost savings, exemplifying the tangible benefits of applying LSS methodologies in this sector (Kennedy et al., 2013).
Incorporating LSS methodologies in the agricultural industry and managing the data and BI lifecycle exhibits considerable potential (Folaron, 2003). Through diligent planning, utilising DMAIC or DMADV methodologies and commitment to overcoming obstacles, it is possible to improve operational efficiency and BI significantly (Antony et al., 2012). Nevertheless, applying these methodologies is full of complexities and organisations must weigh the costs and benefits before undertaking such a transformative endeavour (De Mast and Lokkerbol, 2012).
Through its application, the data platform and BI capability were greatly improved within the organisation in question; this approach could add similar value to any organisation seeking similar platform modernisation. BI, with its data-driven decision-making approach, complements LSS’s focus on continuous process improvement and waste reduction, creating a synergistic effect that has the potential to drive superior project outcomes (Ravichandran and Rai, 2000).
Going forward, exploring the compatibility of the BI toolkit in a lean manufacturing environment would be worthwhile, emphasising enhancing operational efficiency. By integrating BI and LSS in lean manufacturing, organisations can benefit from streamlined processes and gain deep insights into trends, facilitating more informed and effective decision-making (Sunder, 2016).
Leveraging the methodologies of LSS, this project successfully navigated the organisation from a fragmented and inefficient data management system to a systematic and streamlined approach. Initially, the organisation was plagued with a “stovepipe” architecture, marked by a lack of integration and increased inefficiencies due to reactive responses to business needs. This resulted in a complex, non-integrated data environment and inefficient manual work. The model presented leverages LSS methodologies, primarily the DMADV approach, and integrates BI tools to drive significant improvements in process efficiency and effectiveness; however, it also comes with potential limitations.
These primarily include the high degree of expertise and resources required to integrate LSS and BI harmoniously. Implementing such a transformative model might encounter resistance from personnel accustomed to traditional systems, making strong change management practices necessary. Further, the model may not be universally applicable to all BI modernisation initiatives due to contextual variations across industries or between projects within the same organisation. Lastly, despite the substantial benefits derived from this approach, organisations must also consider the costs associated with such a transformative endeavour, including financial outlay and the commitment of time and human resources.
This LSS recipe derived through phase 1 and the delivery of the Client 360° view will now be repeated for financial reporting as the next prioritised phase. In addition to this delivery model, the new data platform has opened several possibilities for the future. The team has already started using additional sales data as an overlay to provide insight and predict stock movement, pricing, and promotions sensitivity.
The organisation is well underway with its digitisation journey, which includes augmented reality (AR), AI and Internet of things (IoT) capabilities. Modern technologies like AI, IoT and AR can certainly operate independently of the type of data platform described. However, these technologies’ efficiency, scalability and overall effectiveness can be greatly enhanced by a structured, streamlined and well-managed data platform. AI, for instance, relies heavily on large volumes of high-quality data to train models and generate accurate predictions. With a robust data management system, the quality and availability of this data can be maintained, positively impacting the performance of AI algorithms. IoT, which generates a massive amount of data from various devices and sensors, also requires efficient data management. Without a well-structured data platform, it is harder to store, process and analyse the high volume and velocity of data produced by IoT devices, leading to potential inefficiencies and missed opportunities for insight.
While these technologies can function without such a data platform, their effectiveness, scalability and potential for delivering valuable insights are amplified when complemented by robust data management systems (Eaton et al., 2023).
6. Conclusion
The novelty of this project lies in the utilisation of DFLSS and its DMADV model in the reporting and data lifecycle of an agricultural business. The study has practical implications in demonstrating how BI can enhance continuous improvement and provide an opportunity to refine processes, enhance data capabilities and optimise BI practices. It has further implications for management in demonstrating how LSS can aid in moving large organisations into the modern data era. The academic implications of the study lie in adding to the state-of-the-art literature combining BI and LSS methods and to the literature on the application of LSS 4.0.
A limitation of this study is that it is a case study in a single organisation; however, the learnings and the DMADV model application have applicability to other agribusinesses and organisational sectors.
Future research opportunities include deploying the programme from this study more widely: the initial deployment will serve as a blueprint to be repeatedly applied to subsequent phases, becoming increasingly better and more effective. A further opportunity for future research is to investigate how the BI toolset enhances and assists the DMADV and DMAIC lifecycles.
The supplementary material for this article can be found online.
References
Abdulai, A., Owusu, V., Abdul-Rahman, S. and Kabanunye, M. (2020), “Impact of COVID-19 pandemic on the agri-food system and livelihoods in Africa: case study of Ghana”, Environmental Challenges, Vol. 4, 100052.
Antony, J. (2014), “Readiness factors for the Lean Six Sigma journey in the higher education sector”, International Journal of Productivity and Performance Management, Vol. 63 No. 2, pp. 257-264, doi: 10.1108/ijppm-04-2013-0077.
Antony, J., Kumar, M. and Labib, A. (2012), “Gearing Six Sigma into UK manufacturing SMEs: results from a pilot study”, Journal of the Operational Research Society, Vol. 63 No. 4, pp. 543-553.
Bartlett, C.A. and Ghoshal, S. (1989), Managing across Borders: The Transnational Solution, Harvard Business School Press, Boston, MA.
Boshoff, N. and Kleynhans, T. (2020), “Covid-19 and agriculture in South Africa: a discussion of the impact and possible mitigation strategies”, Development Southern Africa, Vol. 37 No. 5, pp. 665-681.
Brophy, P., Trubetskaya, A. and McDermott, O. (2023), “Implementing a customised Lean Six Sigma methodology at a compound animal feed manufacturer in Ireland”, International Journal of Lean Six Sigma, Vol. 14 No. 5, pp. 1075-1095, doi: 10.1108/ijlss-08-2022-0169.
Chakravorty, U., Pelli, M. and Ura, B. (2014), “Does the quality of electricity matter? Evidence from rural India”, Journal of Economic Behavior and Organization, Vol. 107 No. Part A, pp. 228-247, doi: 10.1016/j.jebo.2014.04.011.
Chen, H., Chiang, R. and Storey, V. (2012), “Business intelligence and analytics: from big data to big impact”, MIS Quarterly, Vol. 36 No. 4, pp. 1165-1188, doi: 10.2307/41703503.
Cronemyr, P. (2007), “DMAIC and DMADV - differences, similarities and synergies”, International Journal of Six Sigma and Competitive Advantage, Vol. 3 No. 3, pp. 193-209, doi: 10.1504/IJSSCA.2007.015065.
Datta, K. and Vardhan, J. (2017), “Framework for assessing quality of international branch campuses in UAE: a management students' perspective”, SAGE Open, Vol. 7 No. 1, pp. 1-9.
Davis, F. (1989), “Perceived usefulness, perceived ease of use, and user acceptance of information technology”, MIS Quarterly, Vol. 13 No. 3, pp. 319-340, doi: 10.2307/249008.
De Mast, J. and Lokkerbol, J. (2012), “An analysis of the Six Sigma DMAIC method from the perspective of problem solving”, International Journal of Production Economics, Vol. 139 No. 2, pp. 604-614, doi: 10.1016/j.ijpe.2012.05.035.
Department of Agriculture (2021), Land Reform and Rural Development, Republic of South Africa, Pretoria, available at: http://www.dalrrd.gov.za/
Eaton, R., Noonan, J. and McDermott, O. (2023), “Enhancement of a data management system using design for lean six sigma”, in McDermott, O. et al. (Eds), Lean, Green and Sustainability, Springer International Publishing, Cham, pp. 297-306.
Emiliani, B. (2008), “Standardized work for executive leadership”, Leadership and Organization Development Journal, Vol. 29 No. 1, pp. 24-46, doi: 10.1108/01437730810845289.
Folaron, J. (2003), “The evolution of six sigma”, Six Sigma Forum Magazine, Vol. 2 No. 4, pp. 38-44.
Folinas, D., Aidonis, D., Malindretos, G., Voulgarakis, N. and Triantafillou, D. (2014), “Greening the agrifood supply chain with lean thinking practices”, International Journal of Agricultural Resources, Vol. 10 No. 2, pp. 129-145, doi: 10.1504/ijarge.2014.063580.
García-Alcaraz, J., Jamil, G., Avelar-Sosa, L. and Briones Peñalver, A. (2019), Handbook of Research on Industrial Applications for Improved Supply Chain Performance, 1st ed., IGI Global, Hershey, PA.
Geldenhuys, C., Nel, W. and Kirsten, J. (2020), “Agricultural production in South Africa”, in Agricultural Production, Springer, Cham, pp. 217-234.
George, M.L. (2002a), Lean Six Sigma: Combining Six Sigma Quality with Lean Production Speed, McGraw-Hill, NY.
George, M. (2002b), Lean Six Sigma: Combining Six Sigma with Lean Speed, 1st ed., McGraw-Hill Education, London.
George, M., Maxey, J. and Rowlands, D. (2005), The Lean Six Sigma Pocket Toolbook: A Quick Reference Guide to Nearly 100 Tools for Improving Quality and Speed, 1st ed., McGraw-Hill Education, London.
Ghobakhloo, M. and Tang, S. (2015), “Information system success among manufacturing SMEs: case of developing countries”, Information Technology for Development, Vol. 21 No. 4, pp. 573-600, doi: 10.1080/02681102.2014.996201.
Haleem, A., Javaid, M. and Khan, I. (2021), “Big data analytics: from traditional data warehouse to modern data environment”, Journal of Big Data, Vol. 8 No. 1, pp. 1-21.
Hauser, J. and Clausing, D. (1988), “The house of quality”, Harvard Business Review, pp. 63-73.
Huda, J. (2018), “An examination of policy narratives in agricultural biotechnology policy in India”, World Affairs, Vol. 181 No. 1, pp. 42-68, doi: 10.1177/0043820018783046.
Hwang, J. and Kim, J. (2018), “Big data analytics in business intelligence: a review and research agenda”, International Journal of Business Intelligence Research, Vol. 9 No. 1, pp. 1-30.
IBRD (2021), World Development Report 2021: Data for Better Lives, World Bank, Washington, DC, available at: https://www.worldbank.org/en/publication/wdr2021
IDC (2022), The Global State of IT Project Management, IDC, Worcester, MA.
Javier Lloréns-Montes, F. and Molina, L.M. (2006), “Six Sigma and management theory: processes, content and effectiveness”, Total Quality Management and Business Excellence, Vol. 17 No. 4, pp. 485-506, doi: 10.1080/14783360500528270.
Johnson, J.A., Gitlow, H., Widener, S. and Popovich, E. (2006), “Designing new housing at the university of Miami: a ‘six sigma’ DMADV/DFSS case study”, Quality Engineering, Vol. 18 No. 3, pp. 299-323, doi: 10.1080/08982110600719399.
Karpoff, J., Fang, V. and Huang, A. (2016), “Short selling and earnings management: a controlled experiment”, The Journal of Finance, Vol. 71 No. 3, pp. 1251-1294, doi: 10.1111/jofi.12369.
Kennedy, F., Widener, S. and Fullerton, R. (2013), “Lean manufacturing and firm performance: the incremental contribution of lean management accounting practices”, Journal of Operations Management, Vol. 31 Nos 7-8, pp. 455-467.
Kim, J. and Hwang, J. (2020), “The impact of big data analytics on business intelligence: a systematic review and research agenda”, International Journal of Business Intelligence Research, Vol. 11 No. 1, pp. 1-27.
Koch, P. (2002), “Probabilistic design: optimizing for six sigma quality”, 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, p. 1471, doi: 10.2514/6.2002-1471.
Kruger, J. and Bekker, S. (2020), “Covid-19 and agriculture in South Africa: food security and the implications of changing consumer behaviour”, Development Southern Africa, Vol. 37 No. 5, pp. 682-694.
Lee, E., Kim, M., Lee, K. and Hong, H. (2018), “Applying Lean Six Sigma to data-driven projects in the agricultural sector”, Journal of Agribusiness in Developing and Emerging Economies, Vol. 8 No. 3, pp. 297-311.
Maass, E. and McNair, P.D. (2009), Applying Design for Six Sigma to Software and Hardware Systems, Pearson Education, London.
Mishra, M. (2022), “Identify critical success factors to implement integrated green and Lean Six Sigma”, International Journal of Lean Six Sigma, Vol. 13 No. 4, pp. 765-777, doi: 10.1108/ijlss-07-2017-0076.
Montgomery, D. (2013), Introduction to Statistical Quality Control, 7th ed., John Wiley & Sons, Hoboken, NJ.
Mumford, E. (2003), Redesigning Human Systems, 1st ed., Idea Group, Hershey, PA.
Negash, S. and Gray, P. (2008), “Business intelligence”, in Handbook on Decision Support Systems 2, 2nd ed., Springer, Berlin.
Nelson, S., McDermott, O., Woods, B. and Trubetskaya, A. (2022), “An evaluation of Lean deployment in Irish micro-enterprises”, Total Quality Management and Business Excellence, Vol. 34 Nos 7-8, pp. 1-20, doi: 10.1080/14783363.2022.2140651.
Ó Longaigh, B., Noonan, J., Trubetskaya, A. and McDermott, O. (2023), “Strategic facility and space planning utilising design for lean six sigma”, International Journal of Sustainable Engineering, Vol. 16 No. 1, pp. 1-13, doi: 10.1080/19397038.2023.2268639.
Pande, P., Neuman, R. and Cavanagh, R. (2000), The Six Sigma Way: How GE, Motorola, and Other Top Companies Are Honing Their Performance, 1st ed., McGraw-Hill, London.
Partners, N.V. (2017), Big Data and AI Executive Survey, New York, available at: https://www.newvantage.com
PMI (2022), Pulse of the Profession, 4th ed., Project Management Institute, Newtown Square, PA.
Pyzdek, T. (2003), The Six Sigma Handbook: A Complete Guide for Green Belts, Black Belts and Managers at All Levels, McGraw-Hill Book Company, New York.
Ranjan, J. (2009), “Business intelligence: concepts, components, techniques and benefits”, Journal of Theoretical and Applied Information Technology, Vol. 9 No. 1, pp. 60-70.
Ravichandran, T. and Rai, A. (2000), “Total quality management in information systems development: key constructs and relationships”, Journal of Management Information Systems, Vol. 16 No. 3, pp. 119-156, doi: 10.1080/07421222.1999.11518259.
Ryan, A., McDermott, O. and Trubetskaya, A. (2023), “Application of design for lean six sigma to strategic space management”, The TQM Journal, Vol. 35 No. 9, pp. 42-58, doi: 10.1108/tqm-11-2022-0328.
Saunders, M., Lewis, P. and Thornhill, A. (2016), Research Methods for Business Students, 7th ed., Pearson Education, Edinburgh.
Schroeder, R., Linderman, K., Liedt, C. and Choo, A. (2008), “Six sigma: definition and underlying theory”, Journal of Operations Management, Vol. 26 No. 4, pp. 536-554, doi: 10.1016/j.jom.2007.06.007.
Shenvi, A.A. (2008), “Design for six sigma: software product quality”, Proceedings of the 1st India software engineering conference, Association for Computing Machinery (ISEC ’08), New York, NY, pp. 97-106, doi: 10.1145/1342211.1342231.
Slattery, O., Trubetskaya, A., Moore, S. and McDermott, O. (2022), “A review of lean methodology application and its integration in medical device new product introduction processes”, Processes, Vol. 10 No. 10, p. 2005, doi: 10.3390/pr10102005.
Smith, B. (2017), “BI, data management changing, converging”, Information Management, Vol. 27 No. 3, pp. 30-34.
Smith, I., McGovern, T. and Hicks, C. (2020), “Adapting Lean methods to facilitate stakeholder engagement and co-design in healthcare”, BMJ, Vol. 368, pp. 1-4, doi: 10.1136/bmj.m35.
Snee, R. (2010), “Lean Six Sigma–getting better all the time”, International Journal of Lean Six Sigma, Vol. 1 No. 1, pp. 9-29, doi: 10.1108/20401461011033130.
Sunder, M. (2016), “Lean six sigma project management - a stakeholder management perspective”, The TQM Journal, Vol. 28 No. 1, pp. 132-150, doi: 10.1108/tqm-09-2014-0070.
Thomas, V. and Chindarkar, N. (2019), “The picture from cost-benefit analysis”, in Economic Evaluation of Sustainable Development, 1st ed., Palgrave Macmillan, Singapore.
Thomas, M. and Singh, N. (2006), “Design for lean six sigma (DFLSS): philosophy, tools, potential and deployment challenges in automotive product development”, SAE Technical Paper 2006-01-0503.
Trubetskaya, A. and Muellers, H. (2021), “Transforming a global human resource service delivery operating model using Lean Six Sigma”, International Journal of Engineering Business Management, Vol. 13, pp. 1-16.
Trubetskaya, A., McDermott, O. and McGovern, S. (2023a), “Implementation of an ISO 50001 energy management system using Lean Six Sigma in an Irish dairy: a case study”, The TQM Journal, Vol. 35 No. 9, pp. 1-24, doi: 10.1108/tqm-08-2022-0252.
Trubetskaya, A., McDermott, O. and Ryan, A. (2023b), “Application of design for lean six sigma to strategic space management”, The TQM Journal, Vol. 35 No. 9, pp. 42-58, doi: 10.1108/TQM-11-2022-0328.
Trubetskaya, A., Ryan, A. and Murphy, F. (2023c), “An implementation model for digitisation of visual management to develop a smart manufacturing process”, International Journal of Lean Six Sigma, Vol. 15 No. 8, pp. 32-49, doi: 10.1108/ijlss-07-2022-0156.
Xie, X., Shen, Y., Zhang, H. and Guo, F. (2018), “Can digital finance promote entrepreneurship? Evidence from China”, China Economics Quarterly, Vol. 17, pp. 1557-1580.
Yang, K. and Cai, X. (2009), “The integration of DFSS, lean product development and lean knowledge management”, International Journal of Six Sigma and Competitive Advantage, Vol. 5 No. 1, pp. 75-99.