Abstract
Purpose
This study demonstrates how artificial intelligence (AI) shapes the strategic planning process in volatile, uncertain, complex and ambiguous (VUCA) business environments. Drawing on the domains of the Cynefin framework, the research explores AI's transformative potential and provides insights into how organisations can harness AI-driven solutions for strategic planning.
Design/methodology/approach
This conceptual paper theorises the role of AI in the strategic planning process in a VUCA world by integrating extant knowledge across multiple literature streams. The “model paper” approach was adopted to provide a theoretical framework predicting relationships among the considered concepts.
Findings
The paper highlights the potential application of the Cynefin framework to managing complexities in the strategic decision-making process, the transformative impact of AI at different stages of strategic planning, the strategic planning characteristics required within VUCA environments that AI should support, and the attendant challenges posed by AI integration in an uncertain business landscape.
Originality/value
This study pioneers a theoretical exploration of AI's role in strategic planning within the VUCA business landscape, guided by the Cynefin framework. Thus, it enriches scholarly discourse and expands knowledge frontiers.
Citation
Biloslavo, R., Edgar, D., Aydin, E. and Bulut, C. (2024), "Artificial intelligence (AI) and strategic planning process within VUCA environments: a research agenda and guidelines", Management Decision, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/MD-10-2023-1944
Publisher
Emerald Publishing Limited
Copyright © 2024, Roberto Biloslavo, David Edgar, Erhan Aydin and Cagri Bulut
License
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
Recent advances in artificial intelligence (AI) technology, defined as “an assemblage of technological components that collect, process and act on data in ways that simulate human intelligence” [1] (Canhoto and Clear, 2020, p. 184), have required a transformation of businesses across technological, organisational and managerial dimensions. The latter is particularly significant due to the disruptive nature of AI technologies and the role data play in the business context. The new wave of AI systems, particularly generative AI such as ChatGPT, has significantly increased the ability of organisations to leverage their own internal and available external data to improve their business practices and achieve competitive advantage (Agrawal et al., 2018; Bassano et al., 2018).
Although some have warned of the potential dangers of AI (see Anderson et al., 2018; Floridi et al., 2018; Hoffman et al., 2022; Statement on AI Risk [2]), the view prevails that it will be the main technological trend of the coming years and that it can contribute to addressing challenges such as delivering the Sustainable Development Goals (Ferilli et al., 2021). For example, AI can speed up workflows and processes within the organisation (e.g. market segmentation campaigns automated by AI enable companies to respond quickly to changing consumer needs and expectations (Haleem et al., 2022)), increase organisational agility (Wang et al., 2022) and reduce risk (e.g. AI applied to financial data analysis and risk prediction (Aziz and Dowling, 2019; Cao, 2021)).
In this conceptual paper, the aim is to provide a comprehensive approach to understanding the role of AI in the process of strategic planning by engaging in a discourse that evaluates both its beneficial and adverse dimensions. Based on this balanced analysis and considering the rapid advances in AI, the aim is to define the characteristics of strategic planning in VUCA (Volatility, Uncertainty, Complexity, Ambiguity) environments, as these characterise modern global markets (Baran and Woznyj, 2020). The defined characteristics are contextualised within the domains outlined by the Cynefin framework (simple, complicated, complex, chaotic), aiming to provide a more detailed understanding of how AI applications could be adapted to navigate the multifaceted challenges posed by different strategic decision-making contexts. Through this approach, a nuanced perspective can be brought to the ongoing discourse on AI, offering a framework that is both reflective of AI's potential benefits and cognizant of the complexities and uncertainties that accompany its advancement and application.
Considering existing examples of AI supporting the operational level of the organisation, the real challenge organisations face today seems to be moving AI from the operational level, where the main goal is to perform similar activities better than rivals, to the strategic level (Hesel et al., 2022; Jarrahi, 2018), where the goal is to deliver a unique mix of value to customers (Porter, 1996) and thus gain a competitive advantage. According to some authors, the growth of processing power, the development of machine learning algorithms and the proliferation of Big data make that possible (Kiron and Schrage, 2019; Lee et al., 2018; Valter et al., 2018), especially as AI can help with (1) gathering data from internal and external sources, (2) analysing and interpreting that data through pattern identification and (3) supporting decision-making through predictive analysis (Perifanis and Kitsios, 2023). The successful use of AI at the strategic level is, however, largely dependent on understanding and mastering the complexities of the strategic decision-making process itself (Courtney, 2001; Papadakis, 2006; Spitz, 2020). Its complexity arises from a combination of many factors that need to be taken into account, not only the external VUCA environment, which is the main focus of this paper, but also the characteristics of the decision maker (cognitive style, decision-making styles, training, level of knowledge and experience), the parties involved (individuals or groups), the type of problem to be solved, the degree of criticality (routine, critical or urgent decisions), the time horizon of the decision (short, medium or long term) and the effects that the decision may have on other actors (Bromiley and Rau, 2016; Elbanna et al., 2020).
In a business environment where today's successful market positions and strategic capabilities cannot be sustained over time (McGrath, 2013), the ability to make rational strategic decisions is crucial, but the combination of the factors mentioned above prevents decision makers from being able to make them (Simon, 1991). The ability of AI to process large data sets at incredible speed and to identify complex patterns in them can be the key to finding a solution to this problem (Keding, 2021). The literature on human decision-making supported by AI defines the latter's advantages in terms of speed, quality and originality (Jarrahi, 2018). Speed refers to the reduced time for executing a choice among multiple options. For individuals, data analysis and information generation take time, whereas AI creates the opportunity to provide reliable analysis and choice in a short time (Moore, 2016). The quality aspect indicates that AI has a more objective interpretation of the real world, since human factors, especially emotions, do not influence it (Mahroof, 2019), and thus it removes the stress that affects human decision-making (Danziger et al., 2011). For this reason, AI can transcend emotionally charged situations that decrease the quality of decision-making. Lastly, the originality aspect shows that AI can analyse a huge amount of data and information useful for processing decision-making output (Moore, 2016). This could lead managers to make entirely new and original choices compared to the past (Kahneman et al., 2021) and identify inconsistencies and anomalies in previous decisions. Therefore, AI can enhance the analytical capabilities of management and act as a vehicle to increase creativity (Daugherty and Wilson, 2018; Bouschery et al., 2023).
Despite all this, organisations need to evaluate the effectiveness of AI both in terms of the multiple opportunities it offers (Rahman et al., 2022) and in terms of its challenges (Aydin et al., 2023). This is especially important in the VUCA environment, where managers may be inclined to rely solely on AI analyses and results, seeing them as a risk-averse option (Ferrer et al., 2021; Kyriazanos et al., 2019). In contrast, research shows that AI can discriminate against certain social groups (Aydin et al., 2023), genders or some political views as its rationality is limited by the programmed biases of the dataset from which it learns (Renieris, 2022).
This observation underscores the pressing need for scholarly attention, particularly as the forefront of AI application steadily shifts from operational to strategic management domains, and the tools become less information-driven and more geared towards taking actual action. Taking these factors into account, the research explores how AI could effectively help organisations improve their strategic decision-making processes in such a competitive landscape (Bennett and Lemoine, 2014b). To address this question, the Cynefin framework proposed by Snowden (Snowden and Boone, 2007) is adopted as a system that can help practitioners properly sense the decision context and avoid the pitfalls of applying reductionist approaches to complex situations (Van Beurden et al., 2013).
The paper consists of five sections. In the next section, the strategic planning process is conceptualised in a business context characterised by VUCA; here, the strategic challenges of the VUCA environment and how AI itself contributes to increasing VUCA are explored, before the application of the Cynefin framework is presented. In the following section, the adopted methodological approach is clarified, explaining the conceptual nature of the paper. The fourth section explores how AI can be applied within the strategic planning process adapted to the VUCA context, highlighting the expected benefits and challenges. The last section summarises the main findings and presents the theoretical and managerial implications, together with limitations and suggestions for further research.
2. Literature review
2.1 Conceptualization of strategic planning process
To understand how AI may be incorporated into strategic decision-making, it is useful to understand what strategic planning is and how it can be conceptualised. Strategic planning is a formal and systematic procedure, which constitutes the instrument for defining, implementing and controlling a strategy (Wolf and Floyd, 2017). So, whilst strategy is result-oriented and concerns basic choices for achieving business objectives, strategic planning is process-oriented and concerns the organisational logic through which strategy is determined or revised.
Strategic planning should be understood as an operational mechanism applied in business practice in either a formal or a more ad hoc way. The larger the organisation is, the more formal the planning is likely to be, and vice versa for smaller organisations. However, the absence of formalised planning does not necessarily mean strategy is absent. Often, the outlines of strategy can still be found in the leader's mind. Indeed, there are even those who argue that the absence of prescriptive planning is a virtue, a sign of not wanting to waste resources and time on the useless, ceremonial, construction of formalised plans (Mintzberg, 1994). According to them, the enacted strategy is not a result of a formalised planning process but the final result (i.e. a pattern) of several decisions made in time by many individual decision-makers (Mason, 2007).
By its very nature, planning constitutes a useful tool for analytically representing and analysing an organisation's context. Planning plays a crucial role in gathering the needed data and, based on it, logically articulating the key challenges (i.e. opportunities and threats) that the organisation is confronting, as well as stakeholders' expectations, and, consequently, outlining the goals that need to be achieved. The complexity and uncertainty of the current business environment are, however, too great (Taleb, 2007) to be handled by rationality alone, and therefore perception and intuition are required (Calabretta et al., 2017; Khatri and Ng, 2000) to synthesise different aspects of reality. Once the decision has been made, the hierarchical decomposition of strategic consequences into sub-strategies and ad hoc programmes, and finally into operational plans detailing specific sequential interventions, timescales and actors, is performed. In this sense, planning also constitutes a tool for communicating and legitimising the strategy both inside and outside the company (Spee and Jarzabkowski, 2011).
While the strategic planning approach offers numerous benefits, there are common pitfalls that planners should be aware of, like ill-defined or unrealistic assumptions about the market or customer behaviour, usage of limited data instead of conducting rigorous experiments or gathering sufficient feedback, and resistance to change that can prevent timely adjustment (Wolf and Floyd, 2017).
The pitfalls are particularly evident in the VUCA environment, where organisations often strive for a rational approach to strategy development but are unable to implement it (Das and Ara, 2014). The scarcity of information, high search costs and stringent time requirements lead the decision-maker to use intuition supported by individual experience and available information (Miller and Ireland, 2005). The point is that organisations cannot respond to the challenges of VUCA environments by using the same strategies as in the past (Giones et al., 2019; Mankins and Gottfredson, 2022), i.e. their tried-and-tested methods or best practices simply do not work in complex and dynamic environments. In a world where management can neither identify cause-and-effect relationships nor base a decision on past experience, even the term strategic planning itself seems to be an oxymoron. According to Bennett and Lemoine (2014a), VUCA conditions render any efforts to understand the future and to plan responses useless. On the other hand, without a clear vision of the future and a strategy to achieve it, an organisation's survival is put at risk (Thorén and Vendel, 2019). Moreover, strategic planning is still one of the most widely used management tools in modern organisations (Rigby and Bilodeau, 2018). Different situations call for different decision-making strategies and, consequently, different approaches to strategic planning are needed. It is not that strategic planning does not work, but rather a recognition that there needs to be a shift from an approach that insists on a prescriptive list of objectives and plans based on an assumed “most likely” future, to a more agile approach that emphasises perception, understanding and experimentation (Birkinshaw and Ridderstråle, 2017). The VUCA-responsive strategic planning process needs to be agile, less reliant on traditional tools and linear models, able to quickly adjust the company's strategic direction if required (Giones et al., 2019; Sloan, 2019), and allow a synergy of analytical and judgement skills (Ahammad et al., 2020).
2.2 An emerging context of VUCA environment and the AI
In the contemporary business landscape, the term VUCA succinctly encapsulates organisations' multifaceted challenges relating to: (V)olatility, which signifies instability and the rapid and often unpredictable changes brought about by technological advancements and market fluctuations (Codreanu, 2016; Kail, 2010c); (U)ncertainty, where the future trajectory of the industry and consumer preferences remains shrouded in ambiguity, making long-term planning problematic and stimulating a shift toward more flexible and adaptive planning processes (Kail, 2010b); (C)omplexity, which acknowledges the intricate web of relationships that organisations must navigate, including competitors, suppliers, customers and various regulations and standards, all contributing to the intricacy of the modern business environment (Kail, 2010a); and (A)mbiguity, reflected in the multiple and often conflicting interpretations of market trends, technological developments and customer preferences, resulting in decision-making challenges, hasty judgments based on incomplete information and the potential for costly mistakes (Kail, 2011).
According to Baran and Woznyj (2020), five trends will contribute to VUCA in the coming years: (1) technological advances and innovations; (2) economic and financial issues; (3) environmental and societal concerns; (4) geopolitics, regulations and security issues and (5) workforce dynamics. However, this work predates ChatGPT's launch in 2022 and the explosion of accessible AI-enabled technology reaching the market. There have therefore been significant developments that Baran and Woznyj (2020) did not foresee and that are not covered by their work, but which are worthy of consideration as drivers and shapers of VUCA into the future, alongside the impact of AI and associated systems (e.g. the Internet of Things (IoT), robotics). These crucial elements are: regulatory changes, cybersecurity threats and ethical dilemmas.
Firstly, AI advancement brings the need for regulatory changes. The threats posed by AI are undeniably real, necessitating the implementation of effective governance through economic and social regulation (Den Hertog, 1999). Economic regulation is necessary to correct information asymmetries between regulators and operators and thus align the interests of operators with those of the government, ensuring transparency and fairness in economic activities, which is particularly vital in the context of AI (Wang and Siau, 2018). Social regulations refer to a set of government policies, rules and regulations designed to oversee and influence various aspects of society beyond the economic sphere by emphasizing safety, transparency and respect for fundamental rights as evident in the AI act proposed by the European Commission (2023).
Secondly, the rapid evolution of AI technology has started a new era of cybersecurity challenges. Ansari et al. (2022) demonstrate the escalating threats in this context, with a particular focus on spear phishing, malware and Domain Name System (DNS) attacks, which continuously evolve to bypass conventional security measures, and necessitate the development of adaptive defences. In particular, DNS attacks disrupt the Internet's fundamental infrastructure which is a growing concern as organisations increasingly rely on AI for DNS management. Therefore, in this dynamic technological landscape, it is crucial to remain vigilant and utilize AI's capabilities for proactive cybersecurity defence.
Finally, the lack of transparency in AI decision-making processes poses a significant ethical challenge. Many AI models, especially deep learning models, are often considered “black boxes” because it is challenging to understand how they arrive at their conclusions. This opacity can hinder accountability and responsibility (Lepri et al., 2018), meaning that if AI makes a wrong decision, it can be challenging to determine who is responsible. This is a particular concern in healthcare, autonomous vehicles and finance (Janssen et al., 2020). Addressing this issue involves improving AI model interpretability and establishing clear lines of accountability and ethical guidelines for AI developers and users. Additionally, bias in AI systems can lead to discrimination and unfair treatment of individuals or groups, caused by AI learning from data sets that reflect the historical prejudices of users (Hagendorff et al., 2023).
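As a minimal illustrative sketch of what improving interpretability can involve (synthetic data; the feature names and model choice are hypothetical and not drawn from the cited studies), the example below uses permutation importance to surface which inputs actually drive a model's predictions. Such techniques help open the “black box”, although they do not by themselves resolve questions of accountability.

```python
# Illustrative sketch only: inspecting which inputs drive a "black box" model's
# predictions via permutation importance; the data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # e.g. four hypothetical applicant attributes
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)      # outcome actually driven by features 0 and 2

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(["feature_0", "feature_1", "feature_2", "feature_3"],
                            result.importances_mean):
    print(f"{name}: {importance:.3f}")             # larger values indicate more influence
```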
Having discussed the aspects of AI that co-create a new wave of VUCA for organisations, the interaction between the VUCA environment and the Cynefin framework is explored to determine how AI can contribute to strategic planning within such an environment.
2.3 The Cynefin framework and VUCA environment
A holistic understanding of the strategic decision-making process in VUCA environments is of critical importance if appropriate AI solutions are to be used to support it. Issues raised by VUCA have multiple interacting causes that can only be made meaningful by applying a sufficiently complex sense-making tool. In that sense, an analytical approach that favours reductionism does not help, as the whole system's behaviour cannot be understood by reducing the system to its parts (Grewatsch et al., 2023). The Cynefin framework, however, provides a lens to view the VUCA environment from multiple perspectives and to understand the nature of differences in the contextual conditions that volatility, uncertainty, complexity and ambiguity define. This is done by considering five domains, which differ mainly according to the complexity and ambiguity of the cause-effect relationships involved (Snowden and Boone, 2007). According to Kurtz and Snowden (2003, pp. 6–7), the main value of the Cynefin framework is to help decision-makers “consider the dynamics of situations, decisions, perspectives, conflicts, and changes in order to come to a consensus for decision-making under uncertainty”. The framework identifies five domains to categorise problems or situations: simple/clear, complicated, complex, chaotic and disorder/confusion. The idea is to identify which domain applies and then choose the appropriate action based on the characteristics of that domain.
In the Simple/clear domain the cause-effect relationships are clear and the approach to solving the problem is easy to identify and pursue. Consequently, the recommended course of action is: perceive – classify – respond. The appropriate approach in this domain is to understand and categorise the situation, then apply a previously known and used solution, i.e. the best practices. Complicated domains also have a clear cause-and-effect relationship. However, expert knowledge is required to make decisions, because the decisional context involves non-linear processes and several variables. An example is problems related to supply chain constraints or distributed transactions. This requires a thorough analysis of the situation using appropriate analytical methods with action being: perceive – analyse – respond.
The Complex domain is characterised by too many variables to identify the cause-effect relationship in advance. This is a common situation in a globalised world, where what happens results from endless interactions, concurrent events and mutual conditioning. In this domain it is only possible to find a correlation between cause and effect a posteriori, after the space for experimentation has been created, so action tends to be probe – sense – respond, with solutions emerging over time. Therefore, by applying an empirical approach and learning, results can be reached that later enable decisions appropriate to the type of situation that emerges. Such empiricism allows the discovery of important information that can lead to new emerging solutions, in a similar fashion to critical uncertainties in the domain of scenario planning. Therefore, as information is gathered through experimentation and analysis of results, decisions on the next steps towards a solution can be made, with the aim of moving the problem into the “Complicated” domain rather than leaving it complex.
The Chaotic domain requires a rapid reaction. In a state of crisis, action must be almost immediate to prevent further damage and restore the situation to normal. An example is a cyber malware or ransomware attack on a corporation. In this domain, no one has a clear idea of the right solution, and multiple and often conflicting interests of different stakeholders may be involved. One finds oneself making many decisions, often with very little time for reflection. Information gathering and analysis take second place. What counts is the ability to act immediately and decisively to correct the problem, in essence: act – perceive – respond. The initial solution may not be the right one, but the important thing is that it helps to contain the problem and bring it back into the “Complex” domain.
Finally, the domain of Disorder represents situations where it is unclear which of the other four domains is applicable. In these cases, leaders must quickly assess the situation, break down the components to determine the most appropriate domain fit and act, while waiting for the “proper” domain to emerge.
The Cynefin framework seeks to correct the inadequacy of the approach by which management seeks to sense and interpret decision-making situations that exhibit one or more of the characteristics of the VUCA environment. Management, in fact, given its limited cognitive capabilities, in most cases simplifies complex and/or complicated strategic situations (Simon, 1987) to the point of distorting them, leading to “causal blind spots” (Bettis and Prahalad, 1995; Eisenhardt and Sull, 2001), or makes decisions based on extrapolating the past with linear predictions (Spitz, 2020). In the end, management jeopardises the success of the organisation itself by failing to recognise the real nature of a situation and often gets trapped in ineffective cycles of collecting and analysing data, thus losing valuable time (Snowden, 2002, 2005). In this sense, the framework's value lies in its ability to help recognise the nature of the context at hand and craft a decision-making approach that best fits it.
AI has an obvious role to play here. The fact that large amounts of data can be collected from an abundance of sources, and insights provided almost instantaneously that are not immediately apparent to human decision-makers, means that domains where management does not know what it does not know (i.e. the complex and chaotic domains) become manageable. Using machine learning algorithms and natural language processing (NLP) techniques, managers can grasp the various Cynefin domains more efficiently and accurately. In complicated decision-making contexts and under accelerated change, AI provides leaders with the necessary knowledge, helping them to sense, analyse and respond appropriately. The application of such AI tools is explored in the findings after the methodological approach has been explained.
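To give a concrete, if deliberately simplified, sense of such machine-assisted “domain sensing”, the sketch below trains a toy text classifier to route short situation descriptions to a Cynefin domain. It is an illustration, not a tool proposed in the cited literature: the example descriptions, labels and model choice are hypothetical, and any real system would require substantially richer, validated data.

```python
# Illustrative sketch: classifying situation reports into Cynefin domains with NLP.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical situation descriptions hand-labelled by strategists
train_texts = [
    "routine reorder of stock following standard procedure",
    "supply chain constraint requiring expert capacity analysis",
    "unpredictable shifts in consumer sentiment across several markets",
    "ransomware attack has taken core systems offline",
]
train_labels = ["simple", "complicated", "complex", "chaotic"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

# A new signal from the monitoring system is routed to the most likely domain
print(model.predict(["regulator announces sudden change with unclear market reaction"]))
```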
3. Methodological approach
The methodological approach adopted is the “model paper” approach, which is designed to construct a theoretical framework for predicting the relationships between concepts as proposed by Jaakkola (2020). A model paper serves multiple functions: it identifies and clarifies connections between constructs, introduces new constructs and elucidates the causal processes leading to specific outcomes (Jaakkola, 2020).
The process starts by reviewing the literature and, based on it, defining and describing in detail the key constructs: artificial intelligence (AI), strategic planning, VUCA environments and Cynefin domains (see Figure 1). The constructs are defined according to two main assumptions. The first is that strategic planning makes sense in VUCA environments but needs to be adapted accordingly. The second is that VUCA environments can be meaningfully transformed into the individual domains of the Cynefin framework. Therefore, after defining the constructs, the Cynefin framework is introduced to enhance the understanding of decision making at different levels of complexity (Snowden and Boone, 2007). Next, the potential transformative impact of AI and its implications for organisation and governance are demonstrated by synthesising insights from the strategic management and AI literature. A transformation of VUCA environments into the domains of the Cynefin framework is then undertaken and the activities that need to be implemented in the strategic planning process are identified. This allows for the management of the domain-specific challenges. In doing so, the Cynefin guidelines on the proper pattern of actions to be followed in each domain, which include sensing, understanding and responding, are profiled. Lastly, the paper methodically integrates strategic planning activities into the Cynefin framework and, through a review of the literature on the use of AI in business, provides practical examples of its use to support the activities identified. To further illustrate the inherent complexity of using AI in the strategic domain, the challenges that can arise are defined and analysed.
4. Findings
In this section, the paper explores how current AI is co-shaping the modern strategic planning process by supporting strategic decision-making at various stages of planning. Then, using the Cynefin framework, the characteristics required of the strategic planning process are identified and aligned to the specificities of the different Cynefin domains and the VUCA environments. After that, an outline of the possibilities of using AI in the different domains of the Cynefin framework to support strategic decision-making is provided. Finally, the challenges of AI in the context of strategic planning are discussed.
4.1 AI and the stages of strategic planning process
As a first step, the role of AI in the strategic planning process is illustrated. The process starts with gathering and analysing information to understand the macro-environment, the industry, the company, competitors and customers, and then proceeds to defining strategic objectives, developing a strategy with individual activities and allocating resources. Table 1 indicates these stages in the planning process with examples of applied AI solutions.
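As a simplified illustration of the objective-setting stage in Table 1, the sketch below fits a basic regression to historical sales and derives a forecast-based target. The figures are synthetic and the linear model merely stands in for whichever forecasting approach a firm would actually use; it is not a prescription drawn from the cited sources.

```python
# Minimal sketch (synthetic data): forecasting next-period sales to inform an
# achievable, data-grounded objective.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(24).reshape(-1, 1)                                   # two years of history
sales = 100 + 3 * months.ravel() + np.random.default_rng(1).normal(0, 5, 24)

model = LinearRegression().fit(months, sales)
forecast = model.predict(np.array([[24], [25], [26]]))                  # next quarter
print("Forecast for next three months:", np.round(forecast, 1))

# An objective could then be set relative to the forecast, e.g. with a stretch margin
target = forecast.sum() * 1.05
print("Quarterly sales objective (5% stretch):", round(target, 1))
```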
4.2 Cynefin's domains, strategic planning and AI
In VUCA conditions, management strives to attain comprehensive insights. However, in organisations, data often turn out to be incomplete or overwhelming and/or subject to manual editing, limiting the ability to process them into manageable information and viable alternatives (Citroen, 2011). This predicament compels executives to make decisions based on either partial information or their intuitive judgments, thereby augmenting the risk of bias in strategic decision-making, a concern underscored by numerous scholars and practitioners (Wu et al., 2023; Atsmon, 2023; Hesel et al., 2022; Barnea, 2020). Additionally, the monitoring of inputs frequently lacks systematicity or lags, potentially leading to the oversight of critical “weak signals” or engendering prediction errors. The delayed monitoring of input data may impede timely responsiveness by businesses, which is critical within VUCA. This issue can be addressed through the use of AI. Nevertheless, an indiscriminate application of AI solutions is inadequate, given that strategic planning is contingent upon the distinctive characteristics of VUCA in a given context (Bennett and Lemoine, 2014a).
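A minimal sketch of how AI-based monitoring can surface such “weak signals” is shown below. It is an illustration on synthetic KPI data using an off-the-shelf anomaly detector, not a technique drawn from the cited studies; the KPI, contamination rate and threshold are all assumptions.

```python
# Minimal sketch (synthetic data): flagging "weak signals" as anomalies in a
# continuously monitored KPI stream, so deviations are surfaced without delay.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
kpi = rng.normal(loc=100, scale=3, size=200)       # e.g. daily order volume
kpi[150] = 70                                      # a hidden disruption in the data

detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(kpi.reshape(-1, 1))   # -1 marks anomalous observations

anomalous_days = np.where(flags == -1)[0]
print("Days flagged for management attention:", anomalous_days)
```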
Applying the Cynefin framework can help identify the required strategic actions in different VUCA environments (Vasilescu, 2011). The relationship between the VUCA factors and the Cynefin domains is based on the proposition that environments characterised by volatility, uncertainty, complexity and ambiguity are intertwined and not separated by a clear line (Loyd, 2015). According to Taskan et al. (2022), uncertainty can be associated with volatility, complexity can be a cause of uncertainty, and uncertainty a cause of ambiguity. Moreover, not all VUCA characteristics are equally present at the same time, and one is often dominant. The dominant characteristic, in combination with the others, forms the decision context that can be associated with each domain of the Cynefin framework.
In general, decision-makers are more likely to use AI when they have low situational awareness or, in other words, when they have a poor understanding of the situation at hand (Schneider and Leyer, 2019), which is mostly in the complex or chaotic domain. These are two domains that pose a challenge for designers of AI solutions in the future, but simply knowing in which domains the desire for AI support is greatest does not reveal much about what kind of AI solutions are needed to best support strategic decision makers in the different domains of the Cynefin framework. Hence, a more discerning comprehension of the strategic planning process within each domain is needed. For example, according to Snowden and Boone (2007), the simple and complicated domains of the framework assume an ordered context in which cause-and-effect relationships are perceptible and the correct answers can be determined from the facts. On the other hand, in complex and chaotic domains there is no clear relationship between cause and effect. Such a context is disordered, and appropriate strategies can only be determined based on emerging patterns that need to be first “created” and then understood. Failure to achieve this nuanced understanding may result in the inappropriate application of AI, thereby diminishing the effectiveness of the strategic decision-making process. This, in turn, can lead to organisational resistance towards AI adoption (Booyse and Scheepers, 2023). Besides that, there is a need to consider that predictive machine learning (ML) models are trained on qualified historical data. Without such data, AI may not provide a proper solution, but when models are designed and implemented correctly, they can certainly detect repeat occurrences of problems that are too complex or chaotic for human brains to comprehend in real time.
Building upon the insights derived from the Cynefin framework and its application to strategic planning in different domains, the specific attributes required for effective strategic planning across these domains have been delineated, each primarily influenced by a singular VUCA factor. This comprehensive approach allows organizations to bolster their preparedness and responsiveness to the dynamic VUCA landscape, as shown in Figure 2.
From Figure 2, the simple domain is dominated by volatility. Organisations should establish clear, straightforward strategic goals in scenarios characterized by high volatility yet inherent simplicity. The foundation of consistent planning execution lies in established procedures and the adherence to best practices. Crucially, continuous monitoring of the external environment and the organisation's internal performance metrics is imperative to ensure swift responses to deviations from established plans. This approach aligns with the recommendations by Mankins and Steele (2006), who emphasize the significance of solidifying strategic goals and procedural clarity in volatile contexts.
The complicated domain is dominated by uncertainty. Facing uncertainty demands specialized knowledge, where organisations gain significantly from adopting a data-driven approach to decision-making. Leveraging expertise and insights from diverse sources is instrumental in navigating the intricacies of complicated challenges. Using scenario planning and comprehensive risk assessments, as Courtney et al. (1997) suggested, becomes indispensable in dissecting and understanding the multifaceted nature of uncertain environments.
The complex domain is dominated by complexity. The inherent complexity of this domain necessitates a culture of cross-functional collaboration and the integration of diverse perspectives into the strategic planning process. The goal is not to lose the available collective intelligence. Effective strategy in this context is characterized by its iterative nature, allowing organisations to adapt and evolve their strategies as new insights emerge. Experimentation and a flexible approach to strategy pivoting are key elements for navigating complexity, echoing the findings of Basten and Haamann (2018) regarding the need for adaptive strategic planning methods in complex settings.
Finally, the chaotic domain is dominated by ambiguity. In the face of chaotic scenarios marked by profound ambiguity, organisations must prioritize rapid response mechanisms and crisis management protocols. The focus on mitigating potential negative impacts becomes crucial. According to Sargut and McGrath (2011), and further emphasized by Weick et al. (2005) and Weick and Sutcliffe (2006), the capacity for swift communication and decision-making is paramount. Delays in these processes can significantly impede an organisation's agility and adaptability, leading to compounded issues as the environment evolves rapidly.
AI's ability to process large volumes of data, derive insights and automate tasks, as described before, aligns well with the necessary characteristics of the strategic planning process. How this can happen is described in Table 2 together with practical examples.
The application of the Cynefin framework to the VUCA environment made it possible to explore the complex nature of the strategic decision-making process according to the specificity of the external business context and to propose the type of AI application that best relates to the needs of the planning process itself. While it is clear that AI has much to contribute to enhancing the speed, quality and robustness of strategic decisions, it is not without its own challenges.
4.3 Challenges of AI in strategic decision making
This paper has highlighted that using AI for strategic decision-making in a VUCA context offers many benefits, from automating processes to deriving data-driven insights. Even so, a real evaluation of AI benefits can only be made after comprehensively considering the potential consequences of the adopted process (Trunk et al., 2020). Two types of challenges can be considered: data-based challenges and user-based challenges.
4.3.1 Data-based challenges
AI models rely on the data they receive. If the data are incomplete, outdated or biased, the AI's recommendations may be inaccurate or misleading (Vincent, 2021). This places some emphasis on the quality and availability of data to provide meaningful information, and perhaps even knowledge, that is free from bias or ethical dilemmas. AI models can inadvertently perpetuate or amplify biases present in the data, the type of data collected and the representation of the data (in various forms). According to Silva and Kenney (2018), the possibilities for bias can even increase when using AI for decisions. This is because any algorithm is only as good as the input data and the mining process, both of which are developed by people driven by their own biases. Possible results are discriminatory strategic decisions that can have significant societal impacts and harm an organisation's reputation. This issue can make it hard to validate and accept any AI-driven strategic recommendations. To ensure the accuracy, reliability and compliance of the data used for the training and operation of artificial intelligence systems, it is essential to invest in data management and governance infrastructure (Perifanis and Kitsios, 2023).
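As a simplified illustration of the kind of check this implies, the sketch below compares how often an AI recommendation favours different groups before the output is allowed to inform strategy. The data, group labels and threshold for concern are hypothetical; a large gap would simply prompt closer scrutiny of the training data and model.

```python
# Minimal sketch (hypothetical data): a simple fairness check on an AI
# recommendation, comparing approval rates across groups.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],   # model's recommendation
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)                                               # approval rate per group
print("Demographic parity gap:", abs(rates["A"] - rates["B"]))
# A large gap signals that the data or model may need review before use in strategy
```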
4.3.2 User-based challenges
There are many user-based challenges to using AI in strategic decision making. Most relate to the need to strike a balance between AI-driven recommendations and human judgment. This often proves difficult due to the complexity of implementing AI and the skills gaps of its users. Integrating AI into existing processes and systems can be technically challenging and requires expertise, resources and often a cultural shift within the organisation (Shrestha et al., 2019). Not all organisations have access to experts who can develop, implement and maintain AI systems (Fountaine et al., 2019) or the funds required to support such systems. Skills gaps can appear, systems can be misused, or overfitting may occur, i.e. the AI is overtrained on its historical data and fails to generalise, reproducing noise rather than providing insights. This would be problematic in a VUCA environment, where the rapid pace of advancement in AI technologies means that what is cutting-edge today might become obsolete in a few years, introducing even more cost and further levels of required expertise. On the other hand, the study by Dennis et al. (2023) shows that while AI team members are perceived to have higher ability and integrity, the presence of AI results in lower satisfaction with the decision-making process. The limited transparency and explainability of AI can result in a lack of trust, as the AI is often seen as a “black box” which is not fully accessible or understood (Tambe et al., 2019). The latter raises the question of how human-artificial intelligence interactions should be organised in radical situations such as those brought about by the chaotic or complex domain.
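The overfitting risk mentioned above can be illustrated with a minimal sketch on synthetic data: a model that fits its training data almost perfectly but performs markedly worse on unseen cases has memorised noise rather than learned patterns that generalise. The data, model and parameters below are purely illustrative.

```python
# Minimal sketch (synthetic data): overfitting shows up as a large gap between
# performance on the training data and performance on new, unseen data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)            # unconstrained
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("Unconstrained tree train/test accuracy:",
      round(deep.score(X_train, y_train), 2), round(deep.score(X_test, y_test), 2))
print("Constrained tree   train/test accuracy:",
      round(shallow.score(X_train, y_train), 2), round(shallow.score(X_test, y_test), 2))
# A wide train/test gap is the classic warning sign that the model will mislead users
```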
It can be seen that many of the challenges are user-based, leading to further questions about the profiles and skills of today's and tomorrow's managers and strategic decision-makers (Brynjolfsson and Mcafee, 2017). Although this is not the subject of this paper, it highlights the need to ensure that sufficient time and effort is given to the thoughtful and critical integration of AI, so that every decision can be based on a combination of human expertise, contextual understanding and AI.
5. Conclusion
Scholarly discourse on the use of AI in facilitating strategic decision-making processes has not reached consensus. It is widely acknowledged that the efficacy of AI applications is contingent upon various factors, prominently the available data and the specific objectives for which AI is deployed – a point underscored in this paper.
The study represents a concerted effort to delve into the profound influence of AI on the strategic planning paradigm within volatile, uncertain, complex and ambiguous (VUCA) business environments. These environments, characterised by their inherent cognitive demands, necessitate sophisticated reasoning, problem-solving capabilities and a propensity for continuous learning. By meticulously dissecting the transformative potential of AI across the various domains of the Cynefin framework, the paper offers nuanced insights into how organisations can effectively harness AI-driven solutions to enhance strategic planning processes.
The impact of AI on strategic planning processes within VUCA environments is both profound and multifaceted, fundamentally transforming how organisations approach decision-making and strategy development. AI enhances decision-making efficiency by processing vast amounts of data much faster than humans, identifying patterns, trends and potential outcomes that may not be immediately apparent. This capability is crucial in VUCA environments where speed and accuracy in decision-making can significantly influence organisational success. Furthermore, AI's predictive analytics can forecast trends based on historical and real-time data, allowing organisations to anticipate changes and prepare proactive strategies. This foresight is invaluable in unpredictable environments, offering a competitive edge by enabling organisations to be one step ahead. AI also plays a crucial role in risk management by analysing diverse data sources to identify and assess potential risks, thus helping organisations devise resilient strategies against potential threats and uncertainties.
At the same time, the integration of AI into strategic planning presents significant challenges. It raises ethical and social considerations, such as data privacy, algorithmic bias and new job competences needed, that organisations must address to ensure sustainable and responsible business practices. Strategic application of AI requires a balanced approach that carefully considers technological capabilities and ethical implications. In particular, managers need to be aware of the potential biases of AI when using it in strategic planning and have a deeper understanding of how AI algorithms learn and evolve to avoid these biases.
It is recognised that there is a need for a wider conversation around the use of AI in strategic planning in VUCA environments. While the research has contributed to questioning the deployment of AI in the strategic planning process and has identified some practices of organisations trying to leverage AI for strategic decisions in VUCA contexts, further conversations are needed, supported by more empirical work.
5.1 Theoretical contribution
This study makes significant theoretical contributions to the discourse on artificial intelligence (AI) within the rapidly evolving technological and societal context.
The integration of AI into the Cynefin framework provides a structured approach to understanding and managing the complexities of AI deployment in different strategic decision-making contexts. It was highlighted that the Cynefin framework categorises problems into five domains: Simple/Clear, Complicated, Complex, Chaotic and Disorder. By exploring AI applications within these domains, an understanding of the contextual factors that influence different types of decision-making can be gained. This then allows for tailored AI strategies that align with the specific characteristics of each domain, which represents an under-researched area of the strategic management and management science literature. This is a significant contribution which can be further tested in different environmental contexts.
The Cynefin framework also provided a valuable lens for examining the ethical implications of AI and extending the drivers and influences on the VUCA environment. In domains such as Chaotic, where rapid decision-making is necessary, the ethical challenges of AI become particularly evident. So, using the framework applied to VUCA and AI there are contributions to be made around ethics, governance and potentially risk and resilience. Some observations have been made in these areas, but it is argued that the approach adopted in this research opens up new ways for fellow researchers to explore this important field. So, academics can use the framework to explore how different contexts impact ethical considerations of the use of AI as well as the nature of human-AI collaboration in strategic decision making.
Finally, it is believed that integrating AI with the Cynefin framework and VUCA environments will encourage interdisciplinary research, combining insights from AI, decision science, complexity theory and organizational behaviour. This approach can enhance research methodologies by considering both technical and contextual factors, and it opens up a range of interesting methodological insights and approaches to analysing AI's impact on decision-making and associated case studies. Through this exploration, the operational boundaries of AI technology can be delineated, critical reflections on its practical use offered, and insights provided into the use of AI in strategic decision making. Collectively, these contributions enrich the scholarly debate on AI and provide practical guidelines for its incorporation into diverse stages of the strategic planning process.
5.2 Managerial contribution
The study presents two key practical contributions. Firstly, it offers organisations a framework that can be utilized to develop their strategies and strategic plans. This framework is particularly valuable for preparing for challenging times perpetuated by AI innovation, enhancing organisational efficiency and effectiveness. By leveraging the proposed framework, organisations can proactively address potential adversities and optimize their strategic planning process for better outcomes. Given the rapid advancement of AI technology, there has been a noticeable gap in academic research and industrial knowledge regarding the ethical domain and human-AI interaction. The study addresses this gap by offering a comprehensive conceptualization of AI in relation to the strategic planning process executed in the VUCA environment. This includes the provision of guidelines for the effective application of AI not as a replacement for human decision-making, but as a key attribute of a quality strategic decision-making system.
Secondly, a more adaptive and flexible way of considering AI in strategic decision making is offered. Many of the examples given capture the current use of AI and the impact it is having on business operations. However, this is the tip of the iceberg for the future. Emerging and future AI technologies hold immense potential to transform business strategic decision-making by providing deeper insights, enabling real-time adjustments, enhancing operational efficiency and fostering innovation. These elements have been covered in some depth, while highlighting the challenges managers will face around bias, privacy, job displacement, ethics and security. However, the future of AI promises transformative benefits, provided it is deployed responsibly and equitably. As AI moves from the current stages of data collection, analysis and processing and insight generation to the future potential of decision making (with or without human interaction) and action implementation (autonomously, through self-learning, advanced scenario modelling or in collaboration with humans), practitioners will need to be equipped with the insights needed to navigate the complexities of such AI usage, ensuring informed decision-making that adheres to legal and ethical standards and meets the values and goals of the organisation. So, it is not yet possible to predict the future or the impact that AI will have, but it is useful to have a way of looking at the role of AI and to equip managers with an approach to better understand and use AI in their strategic decision-making. The paper provides this.
5.3 Limitations and suggestions for further research
This study introduces a comprehensive framework, laying the groundwork for an expanded exploration of AI within various VUCA contexts. However, it also highlights the need for further research to delve deeper into the impacts of AI across different levels – micro, meso, macro and meta. Future studies should aim to understand AI not just as a technological innovation, but as a multifaceted context that influences various industries and processes. To this end, the conceptual study serves as a foundational research guideline. It encourages subsequent research to explore the specific reflections of AI on certain industries and processes, thereby enriching the collective understanding of AI's broader implications, as well as exploring the emerging forms of AI, e.g. Advanced natural language processing (NLP), enhanced predictive analytics, autonomous systems and robotics, emotional AI and hyper-personalisation through AI. This approach will enable a more holistic comprehension of AI's role and its potential to shape future developments across multiple sectors.
Table 1. Strategic planning stages and AI
Stages | Purpose | Examples of AI |
---|---|---|
Analysis – external environment | Understand the external factors that might affect the organisation's performance. Identify opportunities and threats in the external environment that can be leveraged or mitigated | Machine learning algorithms can scan and analyse vast amounts of data from a wide range of relevant sources to identify trends and insights related to the external environment. Natural Language Processing (NLP) can be used to extract sentiments and emerging themes from news articles, customer feedback, or market discussions E.g. Google Cloud Natural Language or Brandwatch |
Analysis – internal environment | Evaluating the organisation's strengths and weaknesses by examining internal processes, resources, capabilities, culture, financial health and other relevant aspects | AI-powered analytics platforms can analyse internal data sources, such as sales data, operational metrics and employee feedback. Predictive analytics can then help anticipate potential internal challenges, while recommendation systems can suggest areas of improvement or highlight strengths to capitalise on E.g. Tableau or IBM Watson Analytics |
Identify objectives | Setting clear, measurable and achievable goals for the organisation which give direction to the organisation and help to prioritise efforts | AI can analyse historical data to predict achievable targets. For instance, sales forecasting tools can predict future sales trends based on past performance and external factors, helping set sales objectives. Additionally, AI-driven simulations can test various scenarios and outcomes, aiding in more informed goal setting E.g. DataRobot or ThoughtSpot |
Propose actions | Proposing strategies and tactics to help the organisation achieve its objectives. Setting up strategic choices, considering various options and selecting the most effective and efficient course of action | Optimisation algorithms can suggest the most efficient strategies or actions to achieve set objectives. For instance in marketing, AI can recommend the best channels or campaigns for reaching a particular audience segment or goal. At the same time decision support systems can evaluate the pros and cons of various strategic options E.g. Optuna or H2O.ai |
Determine resources required | Ensuring that the organisation has the necessary resources to implement the proposed actions through effective budgeting, resource allocation and resource mobilisation | Machine learning models can forecast staffing needs, capital requirements, or other resources based on historical data and the proposed actions as well as optimise resource allocation efficiency E.g. AnyLogic or Jedox |
Source(s): Authors' own elaboration
Table 2. Application of AI to Cynefin domains
Cynefin domain/dominant VUCA element | Strategic planning characteristics | Contribution of AI | Examples of AI solutions |
---|---|---|---|
Simple domain/volatility | Setting clear and simple strategic goals | By providing insights about organisational results, competitor movements, employee performance, industry developments and regulatory changes, AI provides a comprehensive overview essential for crafting competitive strategic goals (Von Krogh et al., 2021; Chowdhury et al., 2022). | Waste reduction and recycling goals: by analysing waste generation patterns, AI can help a waste management company to set realistic waste reduction goals for businesses. AI-powered systems monitor waste processing in real-time, optimizing recycling processes and reducing landfill waste. Sales and marketing performance goals: AI algorithms analyse historical sales data, market trends and customer behaviour to set achievable sales targets. |
Application of established procedures and best practices | AI-driven process automation ensures routine tasks and decisions consistently follow established procedures and thus reduces errors and improves efficiency (Braganza et al., 2017). | Supply chain optimization: use AI to automate inventory and logistics, minimizing stock issues. By processing large volumes of real-time data from various sources, AI can identify patterns and inefficiencies in the supply chain, suggesting further optimizations. Automated Customer Service Systems: Implement AI chatbots that provide 24/7 service, with natural language processing to understand and respond to customer inquiries accurately ensuring consistent quality of service and effectively reducing response times. | |
Continuous monitoring for deviations | AI-powered analytics and monitoring tools can track key performance indicators (KPIs) in real-time and send alerts or notification if deviations occur or are likely (Overgoor et al., 2019; Seyedan and Mafakheri, 2020). Moreover, by analysing emerging trends, customer feedback and current product performance, AI uncovers market opportunities for future development (Huang and Rust, 2021; Ledro et al., 2022). | Analytics dashboard: a company creates a dashboard that uses machine learning to analyse market data and customer behaviour in real-time, helping it adjust strategies quickly. Retail chain optimization: implement AI analytics to monitor real-time sales and inventory, enabling swift adjustments to stock and promotions. Operational efficiency optimization: a manufacturing company uses machine learning predictions to adjust in real time production schedules and manage inventory, optimizing operational efficiency. Investment strategy adjustment: financial institutions use machine learning in real time to predict market fluctuations, adjusting investment strategies to maximize short-term gains. | |
Complicated domain/uncertainty | Meaningful data collection and predictive analytics | AI leverages big data analytics to process vast amounts of information and provide actionable insights. Machine learning algorithms analyse complex data sets, helping inform decisions (Selz, 2020; Seyedan and Mafakheri, 2020). | Market analysis and forecasting: businesses can use AI to analyse market trends, customer preferences and economic indicators to forecast future market conditions and plan product launches or expansions accordingly. Customer behaviour analysis: companies can leverage machine learning algorithms to analyse customer data, identifying patterns in purchasing behaviour to tailor marketing strategies, improve customer engagement and enhance product offerings. Risk management: financial institutions can use AI for risk assessment, analysing vast datasets to identify potential risks and vulnerabilities in their investment portfolios and adjust their strategies to mitigate them. M&A investment strategies: AI techniques can be used to create predictive quantitative models on M&A targets that assist decision-maker in estimating potential synergies and evaluating deal value. |
Complicated domain/uncertainty | Access to expertise and insights | AI can act as a knowledge repository, providing access to vast amounts of expert knowledge through natural language processing (NLP) and chatbots. This supports decision-makers in navigating complicated issues (Patel and Trivedi, 2020). | Healthcare decision support: implementing AI to analyse medical data and literature, aiding in the diagnosis and treatment planning for complex diseases. Market data repository and analysis: developing AI chatbots that systematically categorize and analyse available internal and public market data. Innovation and research development: applying AI to sift through extensive research materials and patents to identify new product development and innovation opportunities.
Complicated domain/uncertainty | Scenario planning and risk assessment | AI-powered predictive analytics can construct different scenarios and assess risks associated with them. This allows organisations to prepare for multiple potential outcomes (Noriega et al., 2023; Spaniol and Rowland, 2023). | Retail: utilize AI to forecast consumer behaviour and economic impacts, aiding in inventory management, store placement and targeted marketing. This helps in adapting to consumer preferences and supply chain challenges. Healthcare: apply AI to simulate patient demand under pandemics or policy changes, ensuring resource efficiency and emergency preparedness through better capacity planning and contingency strategies. Financial services: use AI to model market fluctuations and economic conditions to assess risks like loan defaults or investment losses, guiding strategies for risk mitigation through diversified investments and credit policies. Manufacturing: leverage AI to predict supply chain disruptions or demand shifts, facilitating risk assessment for production and inventory management and creating adaptive strategies for business continuity.
Complex domain/complexity | Real-time data collection and analysis | AI can analyse large amounts of data from a variety of sources, including customer feedback, market research and social media in real time, to identify new trends and opportunities (Davenport, 2018). AI predictive analytics can forecast immediate consequences (Shancang et al., 2018). | Agriculture: by monitoring satellite and soil data, AI optimizes crop management and sustainability, suggesting precise farming techniques and crop rotation strategies. Finance: utilizing market trends and investor behaviour analysis, AI predicts stock movements, aiding in personalized investment advice and risk management. Manufacturing: by analysing production and market demand data, AI optimizes processes and implements predictive maintenance, enhancing supply chain efficiency and reducing costs.
Complex domain/complexity | Cross-functional collaboration and diverse perspectives | AI can facilitate collaboration by providing data-sharing platforms and collaborative tools using natural language processing and sentiment analysis, which allows alternative stakeholder perspectives to be captured and considered (Tan et al., 2023). | Global project management: implement an AI-driven platform integrating Slack for real-time translation and summary of discussions across languages. Use sentiment analysis to identify concerns, improving project management across global teams. Product development insights: utilize an AI tool with sentiment analysis to gather consumer feedback from social media on product prototypes, guiding feature prioritization based on user preferences for a technology startup. (See the third illustrative sketch following the table.)
Complex domain/complexity | Emergent and adaptive strategies | AI can provide real-time analysis of the situation and simulate various response scenarios, enabling decision-makers to understand the potential impact of different strategies (Aldoseri et al., 2023). | Customer service: AI chatbots powered by machine learning algorithms and natural language processing analyse customer requests in real time and suggest products customers are most likely to need or want and therefore buy. E-commerce: by applying sentiment analysis and NLP techniques, AI can identify emerging issues, concerns and sentiments towards the retailer's brand or products. This real-time feedback allows the company to adapt its strategies rapidly and address customer needs and preferences effectively. Banking: AI predictive analysis is used to understand the relationship between equity capital markets deals and investors based on the equity offering details, historical deal participation, trading and client touch point information, and market data, allowing the bank to make very targeted investor pitches.
Chaotic domain/ambiguity | Rapid response and crisis management protocols | AI can trigger alerts when specific thresholds are breached, facilitating immediate action. Machine learning algorithms can help in the automatic detection of anomalies and escalate them to the right stakeholders for immediate response, thereby increasing the speed of crisis management (Baryannis et al., 2019). | Cybersecurity: AI uses machine learning to spot abnormal activities or breaches and sends immediate alerts, enabling companies to take prompt action against potential cyber threats. Mental health: AI analyses text conversations and identifies patterns. The learnings are then used to evaluate different interventions and enhance crisis response for future interactions. Manufacturing: AI creates "cognitive supply chains", which predict potential shortages or disruptions, automate inventory management and recommend alternative suppliers or delivery routes, ensuring minimal disruption in times of crisis.
Chaotic domain/ambiguity | Trial-and-error learning | AI can provide decision support by generating virtual models or simulations, automating the analysis of a large variety of data and proposing multiple courses of action to support experimentation in real time (Phillips-Wren, 2012). | Predictive toxicology: AI models predict the potential toxicity of compounds early in the drug development process. By analysing historical data on molecular structures and their effects, AI can forecast adverse reactions, reducing the risk of late-stage failures in drug development. Business model innovation: NLP can process and analyse customer feedback, expert opinions and market commentary, providing qualitative insights about alignment between a company's value proposition and customer expectations, prompting changes to value creation and delivery components.
Chaotic domain/ambiguity | Real-time communication and collaboration | Machine learning algorithms classify and sort emails in real time based on their content and importance, decluttering inboxes and ensuring that crucial communication gets timely attention, while NLP can suggest responses to messages or emails by understanding the context, allowing swift and efficient communication (Mca, 2020). | Remote working: AI-powered analytics tools use machine learning and natural language processing to analyse meeting data and provide insights into how to optimise team productivity. Video conference providers: AI-powered video conferencing tools can automatically divide cloud recordings into smart chapters for easy review, highlight important information, create next steps for attendees to take action or write a summary of the meeting. Real-time chat: through chat sentiment analysis, teams can gauge the emotional tone of conversations, while automated response suggestions streamline communication processes. Project management: AI real-time progress tracking can monitor each team member's contributions, track project timelines, identify bottlenecks and ensure timely interventions when necessary.
Source(s): Authors' own elaboration
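As a concrete reading of the continuous-monitoring row in the simple domain, the following minimal Python sketch flags key performance indicator (KPI) observations that deviate from their recent trend using a rolling z-score. The function name, window size, alert threshold and sample data are illustrative assumptions, not a prescription drawn from the studies cited in the table.

# Minimal sketch of KPI deviation monitoring; all parameters and data are illustrative.
import pandas as pd

def flag_deviations(kpi: pd.Series, window: int = 7, z_threshold: float = 2.5) -> pd.Series:
    # Compare each observation with the mean and standard deviation of the
    # preceding `window` observations and flag large standardised deviations.
    history_mean = kpi.shift(1).rolling(window).mean()
    history_std = kpi.shift(1).rolling(window).std()
    z_scores = (kpi - history_mean) / history_std
    return z_scores.abs() > z_threshold

# Hypothetical daily sales figures; the final value simulates an unexpected drop.
daily_sales = pd.Series([102, 98, 105, 101, 99, 103, 100, 97, 104, 55], dtype=float)
for day, is_alert in flag_deviations(daily_sales).items():
    if is_alert:
        print(f"Deviation alert on day {day}: value {daily_sales[day]}")

In practice, such alerts would feed the dashboards or notification systems described in the examples column.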
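For the predictive-analytics row in the complicated domain, the sketch below fits a simple trend model to hypothetical quarterly sales and projects the next quarter. The choice of ordinary least squares (scikit-learn's LinearRegression) and the sample figures are assumptions made purely for illustration; real forecasting pipelines would combine far richer data sources and models.

# Minimal forecasting sketch; the data and the single-feature trend model are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical quarterly sales (thousands of units) over three years.
quarters = np.arange(12).reshape(-1, 1)          # feature: quarter index 0..11
sales = np.array([120, 125, 130, 128, 135, 140, 143, 141, 150, 155, 158, 160], dtype=float)

model = LinearRegression().fit(quarters, sales)  # fit a linear trend to the history
forecast = model.predict(np.array([[12]]))[0]    # project the next quarter
print(f"Projected sales for the next quarter: {forecast:.1f} thousand units")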
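For the sentiment-analysis examples in the complex domain, the sketch below scores hypothetical customer feedback with NLTK's off-the-shelf VADER analyser so that negative signals can be surfaced for strategic adjustment. The feedback texts and the cut-off values are illustrative assumptions.

# Minimal sentiment-analysis sketch; feedback texts and thresholds are illustrative.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off download of the VADER lexicon
analyser = SentimentIntensityAnalyzer()

feedback = [
    "The new checkout flow is fast and effortless.",
    "Delivery took two weeks and support never replied.",
    "Decent product, but the subscription price keeps creeping up.",
]

for text in feedback:
    score = analyser.polarity_scores(text)["compound"]  # ranges from -1 (negative) to +1 (positive)
    label = "negative" if score < -0.05 else "positive" if score > 0.05 else "neutral"
    print(f"{label:>8}  {score:+.2f}  {text}")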
Notes
1. There are many different definitions of AI in the literature (see Russell and Norvig, 2016; Huang and Rust, 2018), but for the purpose of this paper, this definition is used, as it is sufficiently generic and comparable to others.
2. Statement on AI Risk, available at: https://www.safe.ai/statement-on-ai-risk (accessed 20 June 2023).
References
Agrawal, A., Gans, J. and Goldfarb, A. (2018), Prediction Machines: the Simple Economics of Artificial Intelligence, Harvard Business Press, Boston.
Ahammad, M.F., Glaister, K.W. and Gomes, E. (2020), “Strategic agility and human resource management”, Human Resource Management Review, Vol. 30 No. 1, 100700, doi: 10.1016/j.hrmr.2019.100700.
Aldoseri, A., Al-Khalifa, K.N. and Hamouda, A.M. (2023), “Re-thinking data strategy and integration for artificial intelligence: concepts, opportunities, and challenges”, Applied Sciences, Vol. 13 No. 12, p. 7082, doi: 10.3390/app13127082.
Anderson, J., Rainie, L. and Luchsinger, A. (2018), Artificial Intelligence and the Future of Humans, Pew Research Center.
Ansari, M.F., Dash, B., Sharma, P. and Yathiraju, N. (2022), “The impact and limitations of artificial intelligence in cybersecurity: a literature review”, International Journal of Advanced Research in Computer and Communication Engineering, Vol. 11 No. 9, pp. 81-90, doi: 10.17148/ijarcce.2022.11912.
Atsmon, Y. (2023), Artificial Intelligence in Strategy, McKinsey & Company, available at: https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/artificial-intelligence-in-strategy#/ (accessed 22 August 2023).
Aydin, E., Rahman, M. and Ozeren, E. (2023), “Does industry 5.0 reproduce gender (in)equalities at organisations? Understanding the interaction of human resources and software development teams in supplying human capitals”, Information Systems Frontiers, pp. 1-15, doi: 10.1007/s10796-023-10450-1.
Aziz, S. and Dowling, M. (2019), “Machine learning and AI for risk management”, in Lynn, T., Mooney, J., Rosati, P. and Cummins, M. (Eds), Disrupting Finance. Palgrave Studies in Digital Business & Enabling Technologies, Palgrave Pivot, Cham, doi: 10.1007/978-3-030-02330-0_3.
Baran, B.E. and Woznyj, H.M. (2020), “Managing VUCA: the human dynamics of agility”, Organizational Dynamics, Vol. 50 No. 2, 100787, doi: 10.1016/j.orgdyn.2020.100787.
Barnea, A. (2020), “How will AI change intelligence and decision-making?”, Journal of Intelligence Studies in Business, Vol. 10 No. 1, pp. 75-80, doi: 10.37380/jisib.v1i1.564.
Baryannis, G., Validi, S., Samir, D. and Antoniou, G. (2019), “Supply chain risk management and artificial intelligence: state of the art and future research directions”, International Journal of Production Research, Vol. 57 No. 7, pp. 2179-2202, doi: 10.1080/00207543.2018.1530476.
Bassano, C., Piciocchi, P. and Pietronudo, M.C. (2018), “Managing value co-creation in consumer service systems within smart retail settings”, Journal of Retailing and Consumer Services, Vol. 45, pp. 190-197, doi: 10.1016/j.jretconser.2018.09.008.
Basten, D. and Haamann, T. (2018), “Approaches for organizational learning: a literature review”, SAGE Open, Vol. 8 No. 3, p. 215824401879422, doi: 10.1177/2158244018794224.
Bennett, N. and Lemoine, G.J. (2014a), “What a difference a word makes: understanding threats to performance in a VUCA world”, Business Horizons, Vol. 57 No. 3, pp. 311-317, doi: 10.1016/j.bushor.2014.01.001.
Bennett, N. and Lemoine, G.J. (2014b), “What VUCA really means for you”, Harvard Business Review, Vol. 92 Nos 1/2, p. 27.
Bettis, R.A. and Prahalad, C.K. (1995), “The dominant logic: retrospective and extension”, Strategic Management Journal, Vol. 16 No. 1, pp. 5-14, doi: 10.1002/smj.4250160104.
Birkinshaw, J. and Ridderstråle, J. (2017), Fast/Forward: Make Your Company Fit for the Future, Stanford Business Book, Stanford, CA.
Booyse, D. and Scheepers, C.B. (2023), “Barriers to adopting automated organisational decision-making through the use of artificial intelligence”, Management Research Review, Vol. 47 No. 1, pp. 64-85, doi: 10.1108/MRR-09-2021-0701.
Bouschery, S.G., Blazevic, V. and Piller, F.T. (2023), “Augmenting human innovation teams with artificial intelligence: exploring transformer-based language models”, Journal of Product Innovation Management, Vol. 40 No. 2, pp. 139-153, doi: 10.1111/jpim.12656.
Braganza, A., Brooks, L., Nepelski, D., Ali, M. and Moro, R. (2017), “Resource management in big data initiatives: processes and dynamic capabilities”, Journal of Business Research, Vol. 70, pp. 328-337, doi: 10.1016/j.jbusres.2016.08.006.
Bromiley, P. and Rau, D. (2016), “Social, behavioral, and cognitive influences on upper echelons during strategy process: a literature review”, Journal of Management, Vol. 42 No. 1, pp. 174-202, doi: 10.1177/0149206315617240.
Brynjolfsson, E. and Mcafee, A. (2017), “The business of artificial intelligence”, Harvard Business Review, Vol. 7, pp. 3-11.
Calabretta, G., Gemser, G. and Wijnberg, N.M. (2017), “The interplay between intuition and rationality in strategic decision making: a paradox perspective”, Organization Studies, Vol. 38 Nos 3-4, pp. 365-401, doi: 10.1177/0170840616655483.
Canhoto, A.I. and Clear, F. (2020), “Artificial intelligence and machine learning as business tools: a framework for diagnosing value destruction potential”, Business Horizons, Vol. 63 No. 2, pp. 183-193, doi: 10.1016/j.bushor.2019.11.003.
Cao, L. (2021), “AI in finance: challenges, techniques, and opportunities”, ACM Computing Survey (CSUR), Vol. 55 No. 3, pp. 1-38, doi: 10.1145/3502289.
Chowdhury, S., Dey, P., Joel-Edgar, S., Bhattacharya, S., Rodriguez-Espindola, O., Abadie, A. and Truong, L. (2022), “Unlocking the value of artificial intelligence in human resource management through AI capability framework”, Human Resource Management Review, Vol. 33 No. 1, 100899, doi: 10.1016/j.hrmr.2022.100899.
Citroen, C.L. (2011), “The role of information in strategic decision-making”, International Journal of Information Management, Vol. 31 No. 6, pp. 493-501, doi: 10.1016/j.ijinfomgt.2011.02.005.
Codreanu, A. (2016), “A VUCA action framework for a VUCA environment. Leadership challenges and solutions”, Journal of Defense Resources Management, Vol. 7 No. 2, pp. 31-38.
Courtney, J.F. (2001), “Decision making and knowledge management in inquiring organizations: toward a new decision-making paradigm for DSS”, Decision Support Systems, Vol. 31 No. 1, pp. 17-38, doi: 10.1016/s0167-9236(00)00117-2.
Courtney, H., Kirkland, J. and Viguerie, P. (1997), “Strategy under uncertainty”, Harvard Business Review, Vol. 75 No. 6, pp. 66-79.
Danziger, S., Levav, J. and Avnaim-Pesso, L. (2011), “Extraneous factors in judicial decisions”, Proceedings of the National Academy of Sciences, Vol. 108 No. 17, pp. 6889-6892, doi: 10.1073/pnas.1018033108.
Das, K.K. and Ara, A. (2014), “Leadership in VUCA world: a case of Lenovo”, International Journal of Current Research, Vol. 6 No. 4, pp. 6410-6419.
Daugherty, P.R. and Wilson, H.J. (2018), Human + Machine: Reimagining Work in the Age of AI, Harvard Business Press, Boston, MA.
Davenport, T.H. (2018), The AI Advantage: How to Put the Artificial Intelligence Revolution to Work, MIT Press, Cambridge, MA.
Den Hertog, J.A. (1999), “General theories of regulation”, in Encyclopaedia of Law and Economics, pp. 223-270.
Dennis, A.R., Lakhiwal, A. and Sachdeva, A. (2023), “AI agents as team members: effects on satisfaction, conflict, trustworthiness, and willingness to work with”, Journal of Management Information Systems, Vol. 40 No. 2, pp. 307-337, doi: 10.1080/07421222.2023.2196773.
Eisenhardt, K.M. and Sull, D. (2001), “Strategy as simple rules”, Harvard Business Review, Vol. 79 No. 1, pp. 107-116.
Elbanna, S., Thanos, I. and Jansen, R. (2020), “A literature review of the strategic decision-making context: a synthesis of previous mixed findings and an agenda for the way forward”, M@n@gement, Vol. 23, pp. 42-60, doi: 10.3917/mana.232.0042.
European Commission (2023), “Commission welcomes political agreement on artificial intelligence act”, available at: https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473 (accessed 15 December 2023).
Ferilli, S., Girardi, E., Musto, C., Paolini, M., Poccianti, P., Pochettino, S. and Semeraro, G. (2021), L’Intelligenza Artificiale Per Lo Sviluppo Sostenibile, CNR Edizioni, Roma.
Ferrer, X., Van Nuenen, T., Such, J.M., Coté, M. and Criado, N. (2021), “Bias and discrimination in AI: a cross-disciplinary perspective”, IEEE Technology and Society Magazine, Vol. 40 No. 2, pp. 72-80, doi: 10.1109/mts.2021.3056293.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P. and Vayena, E. (2018), “AI4People — an ethical framework for a good AI society: opportunities, risks, principles, and recommendations”, Minds and Machines, Vol. 28 No. 4, pp. 689-707, doi: 10.1007/s11023-018-9482-5.
Fountaine, T., McCarthy, B. and Saleh, T. (2019), “Building the AI-powered organization”, Harvard Business Review, Vol. 97 No. 4, pp. 62-73.
Giones, F., Brem, A. and Berger, A. (2019), “Strategic decisions in turbulent times: lessons from the energy industry”, Business Horizons, Vol. 62 No. 2, pp. 215-225, doi: 10.1016/j.bushor.2018.11.003.
Grewatsch, S., Kennedy, S. and Bansal, P. (2023), “Tackling wicked problems in strategic management with systems thinking”, Strategic Organization, Vol. 21 No. 3, pp. 721-732, doi: 10.1177/14761270211038635.
Hagendorff, T., Bossert, L.N., Tse, Y.F. and Singer, P. (2023), “Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals”, AI and Ethics, Vol. 3 No. 3, pp. 717-734, doi: 10.1007/s43681-022-00199-9.
Haleem, A., Javaid, M., Qadri, A.M., Singh, R.P. and Suman, R. (2022), “Artificial intelligence (AI) applications for marketing: a literature-based study”, International Journal of Intelligent Networks, Vol. 3, pp. 119-132, doi: 10.1016/j.ijin.2022.08.005.
Hesel, N., Buder, F. and Unfried, M. (2022), “The next frontier in intelligent augmentation: human-machine collaboration in strategic marketing decision-making”, NIM Marketing Intelligence Review, Vol. 14 No. 2, pp. 49-53, doi: 10.2478/nimmir-2022-0017.
Hoffman, S.G., Joyce, K., Alegria, S., Bell, S.E., Cruz, T.M., Noble, S.U., Shestakofsky, B. and Smith-Doerr, L. (2022), “Five big ideas about AI”, Contexts, Vol. 21 No. 3, pp. 8-15, doi: 10.1177/15365042221114975.
Huang, M.-H. and Rust, R.T. (2018), “Artificial intelligence in service”, Journal of Service Research, Vol. 21 No. 2, pp. 155-172, doi: 10.1177/1094670517752459.
Huang, M.H. and Rust, R.T. (2021), “A strategic framework for artificial intelligence in marketing”, Journal of the Academy of Marketing Science, Vol. 49 No. 1, pp. 30-50, doi: 10.1007/s11747-020-00749-9.
Jaakkola, E. (2020), “Designing conceptual articles: four approaches”, AMS Review, Vol. 10 Nos 1-2, pp. 18-26, doi: 10.1007/s13162-020-00161-0.
Janssen, M., Brous, P., Estevez, E., Barbosa, L.S. and Janowski, T. (2020), “Data governance: organizing data for trustworthy artificial intelligence”, Government Information Quarterly, Vol. 37 No. 3, 101493, doi: 10.1016/j.giq.2020.101493.
Jarrahi, M.H. (2018), “Artificial intelligence and the future of work: human-AI symbiosis in organizational decision making”, Business Horizons, Vol. 61 No. 4, pp. 577-586, doi: 10.1016/j.bushor.2018.03.007.
Kahneman, D., Sibony, O. and Sunstein, C.R. (2021), Noise: A Flaw in Human Judgment, 1st ed., Little, Brown Spark, New York.
Kail, E.G. (2010a), “Leading effectively in a VUCA environment: C is for complexity”, Harvard Business Review, December, available at: https://hbr.org/2010/12/leading-effectively-in-a-vuca (accessed August 2023).
Kail, E.G. (2010b), “Leading in a VUCA environment: U is for uncertainty”, Harvard Business Review, November, available at: https://hbr.org/2010/11/leading-in-a-vuca-environment-1 (accessed August 2023).
Kail, E.G. (2010c), “Leading in a VUCA environment: V is for volatility”, Harvard Business Review, November, available at: https://hbr.org/2010/11/leading-in-a-vuca-environment (accessed August 2023).
Kail, E.G. (2011), “Leading effectively in a VUCA environment: a is for ambiguity”, Harvard Business Review, January, available at: https://hbr.org/2011/01/leading-effectively-in-a-vuca-1 (accessed August 2023).
Keding, C. (2021), “Understanding the interplay of artificial intelligence and strategic management: four decades of research in review”, Management Review Quarterly, Vol. 71 No. 1, pp. 91-134, doi: 10.1007/s11301-020-00181-x.
Khatri, N. and Ng, H.A. (2000), “The role of intuition in strategic decision making”, Human Relations, Vol. 53 No. 1, pp. 57-86, doi: 10.1177/0018726700531004.
Kiron, D. and Schrage, M. (2019), “Strategy for and with AI”, MIT Sloan Management Review Magazine, available at: https://sloanreview.mit.edu/article/strategy-for-and-with-ai/ (accessed 24 June 2023).
Kurtz, C.F. and Snowden, D.J. (2003), “The new dynamics of strategy: sense-making in a complex-complicated world”, IBM Systems Journal, Vol. 42 No. 3, pp. 462-483, doi: 10.1147/sj.423.0462.
Kyriazanos, D.M., Thanos, K.G. and Thomopoulos, S.C. (2019), “Automated decision making in airport checkpoints: bias detection toward smarter security and fairness”, IEEE Security and Privacy, Vol. 17 No. 2, pp. 8-16, doi: 10.1109/msec.2018.2888777.
Ledro, C., Nosella, A. and Vinelli, A. (2022), “Artificial intelligence in customer relationship management: literature review and future research directions”, Journal of Business and Industrial Marketing, Vol. 37 No. 13, pp. 48-63, doi: 10.1108/JBIM-07-2021-0332.
Lee, J., Davari, H., Singh, J. and Pandhare, V. (2018), “Industrial Artificial Intelligence for industry 4.0-based manufacturing systems”, Manufacturing Letters, Vol. 18, pp. 20-23, doi: 10.1016/j.mfglet.2018.09.002.
Lepri, B., Oliver, N., Letouzé, E., Pentland, A. and Vinck, P. (2018), “Fair, transparent, and accountable algorithmic decision-making processes: the premise, the proposed solutions, and the open challenges”, Philosophy and Technology, Vol. 31 No. 4, pp. 611-627, doi: 10.1007/s13347-017-0279-x.
Loyd, T. (2015), “Stories and strategies from the VUCA world”, available at: http://vucabook.com/stories-and-strategies-from-the-vuca-world/ (accessed 24 June 2023).
Mahroof, K. (2019), “A human-centric perspective exploring the readiness towards smart warehousing: the case of a large retail distribution warehouse”, International Journal of Information Management, Vol. 45, pp. 176-190, doi: 10.1016/j.ijinfomgt.2018.11.008.
Mankins, M. and Gottfredson, M. (2022), “Strategy-making in turbulent times: a dynamic new model”, Harvard Business Review, available at: https://hbr.org/2022/09/strategy-making-in-turbulent-times (accessed 8 August 2023).
Mankins, M.C. and Steele, R. (2006), “Stop making plans; start making decisions”, Harvard Business Review, Vol. 84 No. 1, pp. 76-84.
Mason, R.B. (2007), “The external environment's effect on management and strategy: a complexity theory approach”, Management Decision, Vol. 45 No. 1, pp. 10-28, doi: 10.1108/00251740710718935.
Mca, S. (2020), “AI and ML techniques to analyze communication emails and text patterns to secure from attacks”, SSRN Electronic Journal, Vol. 8, pp. 2084-2087.
McGrath, R.G. (2013), The End of Competitive Advantage: How to Keep Your Strategy Moving as Fast as Your Business, Harvard Business Review Press, Boston.
Miller, C.C. and Ireland, R.D. (2005), “Intuition in strategic decision making: friend or foe in the fast-paced 21st century?”, Academy of Management Perspective, Vol. 19 No. 1, pp. 19-30, doi: 10.5465/ame.2005.15841948.
Mintzberg, H. (1994), The Rise and Fall of Strategic Planning: Reconceiving Roles for Planning, Plans, Planners, Free Press, Maxwell Macmillan Canada, New York, Toronto.
Moore, A.W. (2016), “Predicting a future where the future is routinely predicted”, MIT Sloan Management Review, Vol. 58 No. 1, pp. 12-13.
Noriega, J.P., Rivera, L.A. and Herrera, J.A. (2023), “Machine learning for credit risk prediction: a systematic literature review”, Data, Vol. 8 No. 11, p. 169, doi: 10.3390/data8110169.
Overgoor, G., Chica, M., Rand, W. and Weishampel, A. (2019), “Letting the computers take over: using AI to solve marketing problems”, California Management Review, Vol. 61 No. 4, pp. 156-185, doi: 10.1177/0008125619859318.
Papadakis, V.M. (2006), “Do CEOs shape the process of making strategic decisions? Evidence from Greece”, Management Decision, Vol. 44 No. 3, pp. 367-394, doi: 10.1108/00251740610656269.
Patel, N. and Trivedi, S. (2020), “Leveraging predictive modelling, machine learning personalization, NLP customer support, and AI chatbots to increase customer loyalty”, Empirical Quests for Management Essences, Vol. 3 No. 3, pp. 1-24, available at: https://researchberg.com/index.php/eqme/article/view/46 (accessed 24 June 2023).
Perifanis, N.-A. and Kitsios, F. (2023), “Investigating the influence of artificial intelligence on business value in the digital era of strategy: a literature review”, Information, Vol. 14 No. 2, p. 85, doi: 10.3390/info14020085.
Phillips-Wren, G. (2012), “AI tools in decision making support systems: a review”, International Journal on Artificial Intelligence Tools, Vol. 21 No. 2, p. 1240005, doi: 10.1142/S0218213012400052.
Porter, M.E. (1996), “What is strategy?”, Harvard Business Review, Vol. 74, pp. 61-78.
Rahman, M., Kamal, M.M., Aydin, E. and Haque, A.U. (2022), “Impact of Industry 4.0 drivers on the performance of the service sector: comparative study of cargo logistic firms in developed and developing regions”, Production Planning and Control, Vol. 33 Nos 2-3, pp. 228-243, doi: 10.1080/09537287.2020.1810758.
Renieris, M.E. (2022), Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse, MIT Press, Boston.
Rigby, D. and Bilodeau, B. (2018), Management Tools and Trends 2018, Bain & Company, Boston.
Russell, S.J. and Norvig, P. (2016), Artificial Intelligence: A Modern Approach, 3rd ed., Pearson, Essex.
Sargut, G. and McGrath, R.G. (2011), “Learning to live with complexity”, Harvard Business Review, Vol. 89 No. 9, pp. 68-76.
Schneider, S. and Leyer, M. (2019), “Me or information technology? Adoption of artificial intelligence in the delegation of personal strategic decisions”, Managerial and Decision Economics, Vol. 40 No. 3, pp. 223-231, doi: 10.1002/mde.2982.
Selz, D. (2020), “From electronic markets to data driven insights”, Electronic Markets, Vol. 30 No. 1, pp. 57-59, doi: 10.1007/s12525-019-00393-4.
Seyedan, M. and Mafakheri, F. (2020), “Predictive big data analytics for supply chain demand forecasting: methods, applications, and research opportunities”, Journal of Big Data, Vol. 7 No. 1, 53, doi: 10.1186/s40537-020-00329-2.
Shancang, L., Li, D.X. and Shanshan, Z. (2018), “5G internet of things: a survey”, Journal of Industrial Information Integration, Vol. 10, pp. 1-9, doi: 10.1016/j.jii.2018.01.005.
Shrestha, Y.R., Ben-Menahem, S.M. and von Krogh, G. (2019), “Organizational decision-making structures in the age of artificial intelligence”, California Management Review, Vol. 61 No. 4, pp. 66-83, doi: 10.1177/0008125619862257.
Silva, S. and Kenney, M. (2018), “Algorithms, platforms, and ethnic bias: an integrative essay”, Phylon, Vol. 55 Nos 1 and 2, pp. 9-37, available at: https://www.jstor.org/stable/26545017
Simon, H.A. (1987), “Making management decisions: the role of intuition and emotion”, The Academy of Management Executive (1987-1989), Vol. 1 No. 1, pp. 57-64, available at: http://www.jstor.org/stable/4164720
Simon, H.A. (1991), “Bounded rationality and organizational learning”, Organization Science, Vol. 2 No. 1, pp. 125-134, doi: 10.1287/orsc.2.1.125.
Sloan, J. (2019), Learning to Think Strategically, 4th ed., Routledge, doi: 10.4324/9780429030529.
Snowden, D. (2002), “The new simplicity: context, narrative and content”, Journal of Knowledge Management, Vol. 5 No. 10, pp. 11-15.
Snowden, D. (2005), “Strategy in the context of uncertainty”, in Handbook of Business Strategy, Vol. 6 No. 1, pp. 47-54, doi: 10.1108/08944310510556955.
Snowden, D.J. and Boone, M.E. (2007), “A leader's framework for decision making”, Harvard Business Review, Vol. 85 No. 11, pp. 68-76, 149.
Spaniol, M.J. and Rowland, N.J. (2023), “AI-assisted scenario generation for strategic planning”, Futures and Foresight Science, Vol. 5 No. 2, p. e148, doi: 10.1002/ffo2.148.
Spee, A.P. and Jarzabkowski, P. (2011), “Strategic planning as communicative process”, Organization Studies, Vol. 32 No. 9, pp. 1217-1245, doi: 10.1177/0170840611411387.
Spitz, R. (2020), “The future of strategic decision-making”, Journal of Futures Studies, available at: https://jfsdigital.org/2020/07/26/the-future-of-strategic-decision-making/ (accessed 23 February 2024).
Statement on AI Risk, available at: https://www.safe.ai/statement-on-ai-risk (accessed 20 June 2023).
Taleb, N.N. (2007), The Black Swan: The Impact of the Highly Improbable, Random House, New York.
Tambe, P., Cappelli, P. and Yakubovich, V. (2019), “Artificial intelligence in human resources management: challenges and a path forward”, California Management Review, Vol. 61 No. 4, pp. 15-42, doi: 10.1177/0008125619867910.
Tan, K.L., Lee, C.P. and Lim, K.M. (2023), “A survey of sentiment analysis: approaches, datasets, and future research”, Applied Sciences, Vol. 13 No. 7, p. 4550, doi: 10.3390/app13074550.
Taskan, B., Junça-Silva, A. and Caetano, A. (2022), “Clarifying the conceptual map of VUCA: a systematic review”, International Journal of Organizational Analysis, Vol. 30 No. 7, pp. 196-217, doi: 10.1108/IJOA-02-2022-3136.
Thorén, K. and Vendel, M. (2019), “Backcasting as a strategic management tool for meeting VUCA challenges”, Journal of Strategy and Management, Vol. 12 No. 2, pp. 298-312, doi: 10.1108/jsma-10-2017-0072.
Trunk, A., Birkel, H. and Hartmann, E. (2020), “On the current state of combining human and artificial intelligence for strategic organizational decision making”, Business Research, Vol. 13 No. 3, pp. 875-919, doi: 10.1007/s40685-020-00133-x.
Valter, P., Lindgren, P. and Prasad, R. (2018), “Advanced business model innovation supported by artificial intelligence and deep learning”, Wireless Personal Communication, Vol. 100 No. 1, pp. 97-111, doi: 10.1007/s11277-018-5612-x.
Van Beurden, E.K., Kia, A.M., Zask, A., Dietrich, U. and Rose, L. (2013), “Making sense in a complex landscape: how the Cynefin framework from complex adaptive systems theory can inform health promotion practice”, Health Promotion International, Vol. 28 No. 1, pp. 73-83, doi: 10.1093/heapro/dar089, available at: http://www.jstor.org/stable/45153410
Vasilescu, C. (2011), Strategic Decision Making Using Sense-making Models: The Cynefin Framework, Romanian National Defense University, Regional Department of Defense Resources Management Studies, Brasov, available at: https://www.proquest.com/conference-papers-proceedings/strategic-decision-making-using-sense-models/docview/1127283106/se-2 (accessed August 2023).
Vincent, V.U. (2021), “Integrating intuition and artificial intelligence in organizational decision-making”, Business Horizons, Vol. 64 No. 4, pp. 425-438, doi: 10.1016/j.bushor.2021.02.008.
Von Krogh, G., Ben-Menahem, S.M. and Shrestha, Y.R. (2021), “Artificial intelligence in strategizing: prospects and challenges”, in Duhaime, I.M., Hitt, M.A. and Lyles, M.A. (Eds), Future of Strategic Management, Oxford University Press, Oxford, UK, pp. 625-646.
Wang, W. and Siau, K. (2018), “Artificial intelligence: a study on governance, policies, and regulations”, MWAIS 2018 Proceedings.
Wang, X., Lin, X. and Shao, B. (2022), “How does artificial intelligence create business agility? Evidence from chatbots”, International Journal of Information Management, Vol. 66, 102535, doi: 10.1016/j.ijinfomgt.2022.102535.
Weick, K.E. and Sutcliffe, K.M. (2006), Managing the Unexpected: Assuring High Performance in an Age of Complexity, Wiley India Pvt., New Delhi.
Weick, K.E., Sutcliffe, K.M. and Obstfeld, D. (2005), “Organizing and the process of sensemaking”, Organization Science, Vol. 16 No. 4, pp. 409-421, doi: 10.1287/orsc.1050.0133.
Wolf, C. and Floyd, S.W. (2017), “Strategic planning research: toward a theory-driven agenda”, Journal of Management, Vol. 43 No. 6, pp. 1754-1788, doi: 10.1177/0149206313478185.
Wu, C., Ramamohanarao, K., Zhang, R. and Bouvry, P. (2023), “Strategic decisions: survey, taxonomy, and future directions from artificial intelligence perspective”, ACM Computing Surveys, Vol. 55 No. 12, pp. 1-30, doi: 10.1145/3571807.
About the authors
Roberto Biloslavo is Full Professor of Management and Vice-President of the Euro-Mediterranean University EMUNI. His research work is focused on strategic management, sustainable development and wisdom. He is a former Vice-rector for academic affairs and Vice-dean for education and research. Besides teaching and researching, he consults for various domestic and international companies on strategic planning, sustainable business models and leadership development. He has a wide range of academic experience, from programme development at all levels to international collaboration and academic management and leadership. He is on the editorial board of a range of journals and regularly reviews for journals and conferences.
David Edgar is Professor of Strategy and Business Transformation and member of the Department of Business Management at Glasgow School for Business and Society. His main areas of research and teaching are in the field of strategic management, specifically dynamic capabilities, responsible management, business uncertainty and complexity, and innovation. He has worked with a range of organizations on Business Transformation projects in particular relating to e-business strategies, innovation, ethical sustainability and knowledge or talent management. David's interest in innovation relates to innovation as an element of dynamic capabilities and the design of business models.
Erhan Aydin is a Senior Lecturer at Liverpool Business School, Liverpool John Moores University, and an Affiliate Research Fellow at IPAG Business School Paris (France). He obtained his PhD from Brunel Business School, Brunel University London. He served as a Recognized Visiting Researcher at Said Business School, University of Oxford, from October 2016 to April 2017. His research focuses on key areas, including diversity, equality and inclusion in organizations, HRM, e-HRM and entrepreneurship. He is an active member of the Academy of Management, the European Academy of Management and Turkish Academy of Management. Dr Aydin has held prestigious editorial roles, including Editor-in-Chief of the Gender Issues Journal, Regional Editor of the Journal of Organizational Change Management and Associate Editor of Management Decision. His work on Digital Leadership received the Outstanding Paper Award from Management Research Review as part of the Emerald Literati Award.
Cagri Bulut is Prof. Dr of Business Management at the Business Management Department of Yasar University, Turkey. Prior to joining Yasar University, Prof. Dr Bulut served as a postdoctoral economist on the CountrySTAT Project of the FAO of the United Nations at FAO headquarters in Rome, Italy. Cagri Bulut has a range of publications on the strategic management of innovation and corporate entrepreneurship. His research particularly focuses on competitiveness, culture-based strategic orientations of organizations and firm performance, social and technological innovations, intrapreneurship and intellectual capital management. Currently, Prof. Dr Cagri Bulut continues his research on intrapreneurship alongside his roles as the acting Dean of the Faculty of Economics and Administrative Sciences and the Dean of the Business Faculty.