Citation
Simon, J.P. (2019), "Guest editorial", Digital Policy, Regulation and Governance, Vol. 21 No. 3, pp. 197-207. https://doi.org/10.1108/DPRG-05-2019-080
Publisher: Emerald Publishing Limited
Copyright © 2019, Emerald Publishing Limited
Winters and summers: the continuing story of artificial intelligence
There is a consensus that computers have not yet passed a valid Turing Test but there is growing controversy at this point. Ray Kurzweil, The Age of Spiritual Machines.
Introduction
Over the past few years, the issue of artificial intelligence (AI) has come to the fore among technological trends. Some years ago, big data and the Internet of Things (IoT) were among the leading technological trends regularly analysed by specialist consultancies. A December 2017 IDATE survey of experts’ opinions ranked AI first as the key technology for the 2025 horizon, ahead of IoT [big data was ranked fifth (Seval, 2018)]. A February 2018 GSMA report states: “Many regard 2017 as the year AI sprang out of fiction and fully entered the mainstream consciousness for the first time” (GSMA, 2018a, p. 4).
AI is now expected to be one of the most pervasive disruptive technologies. AI was one of the big attractions at the 2018 Consumer Electronics Show (CES 2018[1]), supposedly “rocking” the CES floors according to the trade press. Magazines have been covering the topic quite extensively. A recent United Nations report stresses that AI has the “ability to transform vast amounts of complex, ambiguous information into real insights [and] has the potential to help solve some of the world’s most enduring problems and to undertake tasks with greater efficiency and scale than a human could” (Project Breakthrough [UN], 2017a). However, what AI is supposed to disrupt, and how, is not very clear. As noted by Atkinson (2018), the techno-utopians, as he calls them, base their predictions on over-optimistic assumptions. Indeed, Bringsjord and Govindarajulu (2018) stress that, at the 1956 kick-off conference, the famous DARPA-sponsored summer conference at Dartmouth College, in Hanover, “Herb Simon predicted that thinking machines able to match the human mind were ‘just around the corner’”. However, as Aggarwal (2018) stresses: “The year 2000 had come and gone but Alan Turing’s prediction of humans creating an AI computer remained unfulfilled” (see illustration below, Chas Addams’s “Computer Repairman”). Kurzweil even noted that Simon and Newell’s over-optimistic 1958 paper became “an embarrassment”, making researchers careful about circulating their prognostications (Kurzweil, 2000, p. 69).
Jordan (2018) notes that “AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering”. Obviously, this is reminiscent of the kind of hype that big data was triggering a few years ago (De Prato and Simon, 2015). Typically, big data was hailed as the “new oil” of this century; now AI is praised as the “new electricity”[2]: “AI is the new electricity, and is transforming multiple industries” (Ng[3], Coursera, Stanford) (Artificial Intelligence Index, 2017), or as the glue of the next industrial revolution. The 2018 EC communication on AI notes as well that “Like the steam engine or electricity in the past, AI is transforming our world, our society and our industry”. A softer way to describe the potential changes and disruptions is to predict a change similar to that caused by the introduction of the personal computer in the 1980s. One can remain sceptical or critical of what Filloux (2017) qualifies as “the upcoming miracles of A.I., all promoted by great silicon snake oil storytellers”. For the internet guru J. Lanier, “AI is just a propagandist metaphor”[4] (quoted by Rendueles, 2018). There are indeed some good reasons to be cautious: as noted by Ezratty (2017a), because of the current hype, a lot of companies are simply doing what he calls “AI washing”, that is, dressing up old products and services with fancy AI features or rebranding them with the AI tag. For instance, the German VC company Asgard, when creating its database of AI companies in Europe, found that only 60 per cent of companies claiming to be AI firms were genuine AI firms[5] (Westerheide, 2017). James (2017) noted that “Marketers start attaching the buzzword to their projects to give them a patina of holier-than-thou tech”.
Even McKinsey Global Institute (2017a), one of the first consulting firms to deal with the big data phenomenon[6], notes that AI investment is growing fast but that AI adoption outside the tech sector is at an early, often experimental, stage. The report stresses that not only has the scope of AI deployment been limited so far but there is also “only tepid demand for artificial intelligence applications for businesses”. Their survey[7] found that many business leaders are uncertain about what exactly AI can do. The consulting firm deems that although AI technologies have advanced significantly in recent years, “adoption, however, remains in its infancy” (p. 21). A 2018 report from the same company states that “AI can seem an elusive business case for now” (McKinsey Global Institute, 2018a, p. 28). Another consulting firm, PwC[8] (2017a), found similar results in its 2017 survey. A survey of the French market also stresses that AI is still not a priority for the companies interviewed (BCG/Malakoff Médéric, 2018, p. 27): only 20 per cent consider AI a priority. Up to now, with 2016 revenues estimated by Tractica (2016) at just US$644m, the AI market indeed looks modest. Ironically, a few years ago, most of the same companies were also noting that the development of the then new hype, big data, had been characterized by uneven deployment between companies, sectors and regions.
On the consumer side, part of the notoriety of AI derives from the fact that it has been embedding itself in daily life through AI-powered virtual assistants (voice user interfaces such as Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, Google’s Google Assistant and Samsung’s Bixby). AI has become almost omnipresent in smartphones, through not only increasingly sophisticated photo[9] and video functions but also login with facial recognition (e.g. iPhone X’s Face ID). GSMA (2018b, p. 48) even speaks of a “resurrection” of voice as a user interface. However, even for these popular applications, little is known about their use, as companies do not disclose usage data. As stressed by Gassée (2018), numbers are absent and some basic questions about patterns of use (when, where, by whom, how often?) remain unanswered. This remark is in line with a comment in the AI Index 2017: “Without the relevant data for reasoning about the state of AI technology, we are essentially ‘flying blind’ in our conversations and decision-making related to AI”. A recent report from Asgard and the consultancy Roland Berger deems that “globally there are too many chatbots and too few real problem-solving applied AI solutions” (Asgard and Roland Berger, 2018). The three techniques for implementing AI used today (rule-based machine learning, neural networks and pattern recognition) were invented decades ago, according to James (2017), but despite refinements and gains in accuracy from the use of big data, he holds that “the results aren't particularly spectacular, because there have really been no breakthroughs”.
To make things worse, there is no real consensus on any definition, as the definition fluctuates over time (see the paper by J.P. Simon in this issue), focussing on some aspects and then moving on to others (Russell and Norvig, 2009). Given this fuzzy background, it is not surprising that an AI researcher could state, “There is, clearly, an AI bubble at present” (Wooldridge,[10] AI Index, 2017, p. 66). Or, to put it more mildly, there may be a discrepancy between breakthroughs in AI in the academic research community and some hyped commercial ventures, as stressed by Zia (2017). Wooldridge writes: “The question that this report[11] raises for me is whether this bubble will burst, [like] the dot-com boom of 1996-2001, or gently deflate, and when this happens, what will be left behind?”[12]
The 60-year history of AI reveals some ups and downs, or “AI winters”[13]: in the 1980s, for instance, a similar hype took place around what were then called “expert systems”. Stone et al. (2016)[14], although noting that “the rate of progress in AI has been patchy and unpredictable”, consider that there have been significant advances over the past 60 years. Recent advances across the fields of robotics and AI are fuelled by the rise in computer processing power, the profusion of data and the development of AI techniques such as “deep learning”.
R&D programmes and strategic plans
Indeed, after a rather long “winter”, AI resurfaced in the 1980s around the development of “expert systems” (software programs that assess a set of facts using a database of expert knowledge and then offer solutions to problems). Governments launched new research programmes and an array of new research entities was created. For instance, in the wake of this “summer”, the German Research Centre for Artificial Intelligence (DFKI), now one of the world’s largest AI labs, was founded in 1988. In the same period, France launched a national research programme called PRC-IA (Programme de Recherche Concertée en Intelligence Artificielle), run by the French national research organization, CNRS. The programme dealt with what was identified as the “heart of AI” at that time: knowledge representation and reasoning processes, learning and analogy, control aspects of automated reasoning and expert-system methodology for applications. The main centres were located in Grenoble, Nancy, Paris, Marseille, Rennes and Toulouse and were mostly run by public entities with some private-company involvement. Most national AI scientific organizations were created during that period.
Since 2016, a growing number of governments have been crafting national AI development plans (Dutton, 2018[15]). France (France Intelligence Artificielle, 2017a, 2017b; Mission Villani, 2018a, 2018b, 2018c), China (Fabre, 2017; Feijoo, 2019; McKinsey Global Institute, 2017b), Japan (Mext, 2016), South Korea (Government of the Republic of Korea Interdepartmental Exercise, 2016; Benaich and Hogarth, 2018), the USA (NSTC, 2016a) and the UK (Hall and Pesenti, 2017), later joined by Canada (DG Trésor, 2018), Taiwan (Fulco, 2017) and Sweden (Lauterbach, 2018), have all issued national strategic plans with significant AI dimensions, in some cases backed by billions of dollars of AI-specific funding initiatives (McKinsey Global Institute, 2017a, 2017b, 2017c, p. 36). India is also now stepping onto the AI market ladder [National Association of Software & Service Companies (Nasscom), 2018], and Germany adopted its €3bn plan in November 2018 (Strategie Deutschland, 2018).
In October 2016, the Obama administration published three reports that concluded that AI technology will be the driving force behind transformations across both the economy and national security. One of the reports (White House, NSTC, 2016c, p. 5) proposed a national AI R&D strategy that establishes a set of objectives for federally funded AI research, both research occurring within the government and federally funded research occurring outside of government, such as in academia. The ultimate goal of this research is to produce new AI knowledge and technologies that provide a range of positive benefits to society, while minimizing the negative impacts.
As a follow-up to the previous 2015 “Internet Plus” plan, AI became a major priority in China, with a new plan foreshadowing massive increases in funding for AI R&D. With its July 2017 “Next Generation Artificial Intelligence Development Plan”, China announced that it sees AI as the transformative technology that will underpin future economic and military power. The plan calls for exceeding all other nations in AI by 2030 and aims to create a US$150bn industry by that date (Fabre, 2017, p. 15).
Europe does not have a uniform AI strategy, but EU member states have numerous differing measures in place (Simon, 2018). France and the UK have adopted strategic plans. With the 2017 France IA (“Intelligence Artificielle”) initiative, France’s objective was to clarify discussion and debate on AI and to boost the activity of the French AI community domestically and internationally. In September of that same year, Cédric Villani, a well-known French mathematician, was appointed as the head of a “mission sur l’intelligence artificielle”. One of the goals was to flesh out the mapping and inventory offered by the previous reports. On 29 March 2018, the final report, “For a meaningful artificial intelligence. Towards a French and European strategy”, was released at a conference entitled “AI for humanity: French strategy for artificial intelligence”.
In 2013, the UK Government identified “robotics and autonomous systems” (RAS) as one of its “Eight Great Technologies” (Eight Great Technologies infographic, 2013). In 2015, the Alan Turing Institute was created as the national institute for data science. In 2016, the House of Commons Science and Technology Committee released a report on “robotics and artificial intelligence” (House of Commons, 2016). In March 2017, the UK Government announced an industry-led review of how industry and government can create the conditions for the AI industry to continue to thrive and grow in the UK. As a contribution to the debate and to the government’s industrial strategy, Hall and Pesenti (2017) released a report, “Growing the Artificial Intelligence Industry in the UK”.
The European Commission (EC) also took various initiatives aiming to harmonize practices and legislation and to support the development of AI and digital business. AI is highlighted in the Horizon 2020 Work Programme through various thematic fields (Craglia, 2018). In April 2018, 24 member states and Norway committed to joining forces on AI and entered into a strategic dialogue with the Commission. That same month, the EC released a communication on AI, paving the way for increased coordination of policies (EC, 2018a). Later that same year, the Commission appointed 52 experts to a new High-Level Expert Group on Artificial Intelligence (comprising representatives from academia, civil society and industry) (EC, 2018b) to serve as the steering group for the European AI Alliance’s work (EC, 2018c). In January 2019, the EC launched its AI4EU project, which aims to drive adoption of AI across a wide range of industries (EC, 2018d).
The aim of this issue is to better document the topic, looking for a set of possible answers to the “What is AI?” question, which, as noted by Bringsjord and Govindarajulu (2018), has considerable currency in the field itself. It offers some analysis of its various dimensions (What kinds of applications exist now or are envisaged?) and of the challenges it brings (What issues are arising or might arise, e.g. ethical, legal, socio-economic, employment?).
The impact of AI has triggered heated debates with polarized opinions and a strong dividing line between techno-optimists and “heralds of gloom”. J. Lanier stresses: “If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems […]” (Lanier, in Brockman, 2018). This issue is an opportunity to explore some of these myths (Figure 1).
The contributions gathered for this issue review the definition of AI, explore some of the main current applications (agriculture, automotive, health, retail and the industries involved), assess the specific markets, deal with some of the major players and identify some of the issues (bias in data and algorithms […]) and unintended consequences raised as well as the main challenges ahead (data governance, security, transparency […]). The issue opens with two papers introducing the AI landscape (Jean Paul Simon and Anastassia Lauterbach), and then it deals with vertical applications in retail (Felix Weber and Reinhard Schütte) and health (Jenifer Winter and Elisabeth Davidson) followed by three papers dealing with societal issues (Antonio Vetro et al. on AI as “socially responsible agent”, Martha Garcia-Murillo on the historical impact of previous revolutionary computing technologies and Pierre Jean Benghozi and Hughes Chevalier on the HAL syndrome).
Jean Paul Simon follows some of the ups and downs, or “AI winters”, of the past 60 years. Techno-optimists and techno-pessimists may share the same belief about the potential disruptions and about an unprecedented coming pace of change, but evidence to support such strong conclusions is hard to find. Jean Paul Simon aims at marshalling more elements about the phenomenon. He reviews the definitions of this umbrella term for the science of making machines smart and gives an overview of the main current applications, thereby fleshing out the scope of the phenomenon. According to most experts, the past 15 years have seen considerable AI advances, especially in machine learning. However, whatever the current optimistic predictions, these have so far not translated into major moves in the markets, which remain rather modest. The pace of change is uncertain and unlikely to be as drastic as often claimed. If supply is not well defined, demand on both business and consumer markets is even more difficult to anticipate.
Although the impact is difficult to predict, there are some good reasons to anticipate leading applications and growth in various areas. Some applications are nevertheless often seen as promising, for instance in IT-intensive domains such as health care or education, or the usual variations on “smart” applications and devices (cars, buildings, cities […]), often linked to IoT and big data applications. Advances are uneven across domains. This holds true for the industrial sectors that already seem most involved in AI, as it does for regions: some are more “advanced” than others. Early movers tend to be large businesses that are digitally mature and more focussed on AI-enabled growth. Among the sectors leading in AI, one finds telecom and tech companies, financial institutions and automakers. Indeed, most IT companies, digital natives, are already leaders in the field, as they are in the field of big data. GAFAM[16] and BATX companies are investing in the technologies, creating labs, hiring talent and acquiring pioneering companies.
EU countries are generally seen as somewhat lagging behind the USA and China in terms of the scale of AI investment and of leading companies. However, there are some reasons not to be overly pessimistic, as Europe is, nevertheless, home to a world-leading AI research community, as well as to innovative entrepreneurs and deep-tech start-ups. France and the UK, the latter generally considered a leader or co-leader in the field of AI, have come up with comprehensive AI strategy plans, as noted above. The EC is following suit and coordinating initiatives. Jean Paul Simon concludes with some of the challenges that policymakers are likely to face.
Anastassia Lauterbach takes a closer look at the various AI concepts as shaped by social sciences and reviews the progress made around statistical approaches in AI, represented in the “different tribes” of machine learning. She then considers the main challenges in today’s machine learning technologies, stressing that the implementation challenges centre on three major areas: bias in data and algorithms, transparency of AI models and security questions around AI.
The following section is devoted to the presentation of three case studies: agriculture, automotive and health care. In agriculture, there are five broad categories of AI applications, namely, satellite imagery, in-field monitoring, crop and soil health assessment, agricultural robots and predictive analytics. Health care is a field where a lot of collaboration between humans and machines is to be expected. AI will not make human doctors obsolete but will allow them to focus more on patients instead of doing paperwork or image analyses. Patients should get better advice and improved diagnoses.
A number of national AI strategies are discussed: Canada, China, Finland, France, the EU, India, South Korea, Sweden, the UK and the USA. The author then suggests a social governance framework for AI which might provide a base for future discussions between governments and other actors in society. She notes that creating an ethical framework around goal alignment between humans and machines, getting reward mechanisms right and addressing human emotions and instincts in potential collaboration scenarios with robots are not easy tasks. A failure to demystify AI technologies and to promote a dialogue around them across different actors in society may further lead to an increase in inequality, discrimination and marginalization. She calls for policymakers and business leaders to step in to prevent this scenario.
Felix Weber and Reinhard Schütte stress that the relatively heavy use of personnel and the associated costs, on the one hand, and the low operating margins, on the other, make the retail domain appear the ideal industry for the application of AI and related technologies. The article concentrates on the main value-added core tasks of retailing, i.e. managing goods, ordering goods, serving customers, handing out goods, transporting goods, making goods available and carrying out financial accounting activities (combining billing for goods, accounts payable and receivable and auditing).
The paper is based on an analysis of a total of 6,590 articles from relevant scientific journals that have dealt with the application of AI in wholesaling and retailing. To give an overview of market adoption across the worldwide retail industry, the ten largest retail companies (ranked by their 2016 turnover) were analysed, along with their publicly announced AI initiatives and the applications already in use.
The paper finds that there is a vast number of possible applications of AI in all areas of retail and wholesale; however, within the different value-added core tasks, the numbers of conceivable (scientifically described) and practical applications vary greatly. For instance, a particularly small number of AI methods can be seen in the areas that interact directly with the customer. Looking at the market adoption of AI among the biggest trade companies, the same pattern is observable: there are some pioneers (Amazon, Walmart) and some challengers (Walgreens or The Home Depot), but many other trade companies show neither active use of AI nor any efforts to invest in AI applications in the future. The paper concludes by asking whether the financial and competitive success of the examined companies will, in the future, be related to early or late investment in AI technologies.
Jenifer Winter and Elisabeth Davidson note that the combination of AI, deep learning and digitized PHI (protected health information or personal health information) data has been heralded as transformational for health care. IT firms are aggressively pursuing AI ventures in health care; however, while the projected benefits from health-care AI are stressed, much less consideration has been given to the possible unintended or undesirable consequences of these developments for individuals and for society, and many questions remain unanswered.
The authors highlight two attributes of AI deep learning that, while not unique to the health-care setting, pose significant and novel challenges to PHI data governance: the scale and scope of data consumed by deep learning algorithms, and the opacity of algorithms with regard to how data are used and how new data or results are produced. On the one hand, the scale and scope of personally identifiable health-related data available for AI and deep learning have increased dramatically in the past decade and will continue to grow in the near future. On the other hand, in the health-care domain, opacity is problematic not only for monitoring what PHI data are used but also for understanding the purposes and outcomes of data use, for instance the possibility of discriminatory profiling.
Monitoring PHI data governance will therefore be particularly problematic. They argue that existing data governance structures will not be sufficient to address the radical uses and reuse of PHI data brought about by deep learning technologies, focussing on a review of two common governance approaches: pre-emptive privacy regulation and informed consent. There is a need to enhance PHI data governance structures even in the case of the GDPR, presented as currently the strongest data protection regime in the world. The paper concludes by considering the new approaches to data governance required not only to enable accessibility of PHI for AI innovations but also to preserve the autonomy and rights of individuals.
Antonio Vetrò analyses the limitations of the mainstream definition of AI as what he describes as “a rational agent”, which currently drives the development of most AI systems. He investigates the kind of rationality that characterizes AI, revealing some of the biases of the forms of knowledge governing AI, such as disproportionate and unbalanced data sets (unbalanced data sets can overestimate or underestimate the weight of certain variables) and collinearity due to data collection techniques, which leads to the frequent mistake of confusing correlation with causation.
Although a deterministic approach to the design and building of AI agents can give the impression of providing “scientific and objective” bases for their decisions, such logic does not avoid the risk of discriminatory decisions. His analysis is based on the study of three cases of collinearity, applying the metrics he defines to three data sets (credit card default, the COMPAS recidivism racial bias data set and student alcohol consumption). It shows how too narrow a focus on rationality in terms of efficiency and optimization could lead to excessive risks of discrimination against specific, often disadvantaged, population groups.
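As a purely hypothetical illustration of the collinearity point (a minimal sketch on synthetic data; it does not use Vetrò’s metrics or the data sets above), the Python snippet below generates an outcome that depends on income alone. Because income is collinear with a protected group attribute, a model trained on the group attribute alone still beats a naive baseline, which is how correlation gets mistaken for causation and how discriminatory decisions can creep in.

```python
# Hypothetical illustration only: synthetic data, not the paper's data sets or metrics.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)                # protected attribute (0/1)
income = rng.normal(40 + 20 * group, 10, size=n)  # collinear with group membership
p_repay = 1 / (1 + np.exp(-(income - 50) / 10))   # outcome depends on income only
repaid = rng.binomial(1, p_repay)

baseline = max(repaid.mean(), 1 - repaid.mean())  # accuracy of always predicting the majority class
model = LogisticRegression().fit(group.reshape(-1, 1), repaid)  # group is the ONLY feature
acc = accuracy_score(repaid, model.predict(group.reshape(-1, 1)))
print(f"majority-class baseline: {baseline:.2f}, group-only model: {acc:.2f}")
# The group-only model beats the baseline even though group has no causal effect
# on repayment: it merely proxies income, so decisions would differ by group.
```

Under these synthetic settings, the group-only model typically scores well above the roughly 50 per cent majority baseline, which is precisely the pattern that makes proxy discrimination easy to miss when models are judged on accuracy alone.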
He gives an overview of other open issues connected to the use of AI agents for decision-making purposes – liability, transparency, explainability, privacy – followed by a few general principles for designing AI agents that are more respectful of human principles. According to his analysis, a more comprehensive approach is needed, one which should take into account, alongside the definition of AI as a “rational agent”, another one in parallel: that of a “socially responsible agent”.
Martha Garcia-Murillo looks retrospectively at the impact of previous revolutionary computing technologies and at the institutional values that shaped the way workers were affected. The paper is a historical recollection of the experiences that US society has had with technological innovation, based on an institutional economics approach.
The instrumental nature of the private sector, coupled with the ceremonial values of society and the mixed values of government, plunged many of the communities affected, not only by automation but also by other external forces, into endemic cycles of poverty, continued urban decay and crime. Later, in the 1980s, the introduction of personal computers by companies produced disappointments reminiscent of those of the earlier mainframe period. In both cases, there was some evidence of deskilling. These technologies have had negative effects on certain communities that were unable to transition successfully.
Nevertheless, the introduction of sophisticated computing technologies has increased productivity, reduced hazards, improved workers’ conditions and society’s quality of life and expanded the range of goods and services at our disposal. It has also generated an explosion of new knowledge available for both education and research. AI will bring positive changes to society in similar ways to other computing technologies in the past. However, the historical account suggests that the transition is likely to follow the same patterns as well. It is possible that many AI applications will bring with them the same types of installation problems, system bugs and troubleshooting frustrations that have plagued previous generations of computing. As AI is not entirely ready for deployment, the transition is likely to start with the pains of implementation.
To deal with these issues, the author notes that one of the major challenges for policymakers and society at large is the fact that, even at times of rapid technological development, organizational change happens at a relatively slower pace. The author concludes that our ability to enjoy the wonders of AI while minimizing the negative effects will depend on our ability to recognize its effects on work and then experiment with ways to alleviate these negative effects.
Pierre Jean Benghozi and Hughes Chevalier use the famous computer HAL, the hero of Kubrick’s “2001: A Space Odyssey”, as a metaphor for the perceptions and misperceptions of the almightiness of computers: the HAL syndrome. Fifty years after the release of the film, the HAL syndrome continues to influence our minds and has become the basis of the questioning, concerns and enthusiasm triggered by AI.
What is at stake with the HAL syndrome revolves around the vision of the limitless possibilities of AI and of its necessary control by human beings. According to the authors, HAL helps us to understand that AI is very often just an umbrella term covering very different configurations and systems, made out of a tinkering of heterogeneous applications and services relying as much on raw computing power as on self-learning and connectivity. This dimension of “bricolage” in AI is without doubt one of the answers to the need to master and monitor it. Indeed, the technological diversity of the various configurations of AI components, as well as the unavoidable limits of their interoperability, surely offers the best protection against the almightiness of the HAL syndrome.
Notes
1. For a report on the CES 2018, see Ezratty (2018).
2. Or combined with the “new oil”, as in: “Big Data is the Oil, Artificial Intelligence the Drill. Artificial Intelligence is the enabler behind several technologies & concepts” (ECommerce Foundation, 2018).
3. An expert in the field nevertheless: former head of Baidu Research and founder of the Google Brain deep learning project.
4. My translation from Spanish.
5. They found indeed a lot of “AI washing”, with companies using AI as a marketing tool and adopting buzzwords such as “customer intelligence”, “marketing automation” and “big data optimization”.
6. With a widely read 2011 report on big data also dealing with a “next frontier”!
7. Of more than 3,000 businesses around the world.
8. PwC’s 2017 Digital IQ survey of senior executives worldwide.
9. For instance, Huawei’s latest premium smartphone, the P20, features 4D predictive focus, calculating moving objects and focussing on them to capture detail. The device also features AI-assisted composition, providing intelligent suggestions to frame group shots and landscapes.
10. An AI researcher who leads the computer science department at the University of Oxford.
11. The AI Index.
12. Adding: “My great fear is that we will see another AI winter, prompted by disillusion following the massive speculative investment that we are witnessing right now. There are plenty of charlatans and snake oil salesmen out there, who are quite happy to sell whatever they happen to be doing as AI”. LeCun, one of the founders of the field of deep learning, even speaks of “Cargo Cult AI” or “Potemkin AI”.
13. The term “AI winters” is currently used to describe periods when progress slows and funding decreases.
14. The Stanford report is part of a long-term investigation of the field of artificial intelligence, the “One Hundred Year Study on Artificial Intelligence”, launched in the fall of 2014.
15. The article summarizes the key policies and goals of each strategy, as well as related policies and initiatives that have been announced since the release of the initial strategies. The author updates it on a regular basis.
16. GAFAM stands for Google, Apple, Facebook, Amazon and Microsoft. FANG (Facebook, Amazon, Netflix and Google) is now also frequently used on the content side, adding Netflix as a leading digital native company (see: https://www.investopedia.com/terms/f/fang-stocks-fb-amzn.asp). In India, Flipkart, the major competitor of Amazon, just acquired Liv.ai, a Bengaluru-based startup involved in the development of speech recognition platforms using artificial intelligence. BATX is the Chinese equivalent, standing for Baidu, Alibaba, Tencent and Xiaomi.
References
Aggarwal, A. (2018), “Resurgence of AI during 1983-2010”, available at: www.kdnuggets.com/2018/02/resurgence-ai-1983-2010.html
Artificial Intelligence Index (2017), “Annual report 2017”, available at: https://aiindex.org/2017-report.pdf
Asgard and Roland Berger (2018), “The global artificial intelligence landscape”, available at: https://asgard.vc/global-ai/
Benaich, N. and Hogarth, I. (2018), “The state of artificial intelligence in 2018 report”, available at: www.slideshare.net/nb410/the-state-of-artificial-intelligence-in-2018-a-good-old-fashioned-report-103568798?ref=www.stateof.ai/?utm_source=Asgard&utm_source=Asgard+Singularity+Fund+2018&utm_campaign=e9ceba39d2-Asgard-August-Update-2018&utm_medium=email&utm_term=0_f2a91683dd-e9ceba39d2-129201137
Bringsjord, S. and Govindarajulu, N.S. (2018), “Artificial intelligence”, in Fall 2018 Edition, Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy, available at: https://plato.stanford.edu/entries/artificial-intelligence/
Brockman, J. (2018), “‘The myth of AI’, a conversation with Jaron Lanier”, Edge, October 2018, available at: www.edge.org/conversation/jaron_lanier-the-myth-of-ai
Craglia, M. (Ed.) (2018), “Artificial intelligence: a European perspective”, EUR 29425 EN, Publications Office, Luxembourg, 2018, ISBN 978-92-79-97217-1, doi: 10.2760/11251, JRC113826.
DG Trésor (French Ministry of Finance) (2018), “Étude comparative internationale sur les stratégies nationales en matière d'intelligence artificielle”, available at: www.tresor.economie.gouv.fr/Articles/2018/03/28/etude-comparative-internationale-sur-les-strategie-nationales-en-matiere-d-intelligence-artificielle
Dutton, T. (2018), “AI strategies”, available at: https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd
European Commission (EC) (2018a), “Communication from the commission to the European parliament, the European council, the council, the European economic and social committee and the committee of the regions on artificial intelligence for Europe”, available at: https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe
European Commission (EC) (2018b), “High-level expert group on artificial intelligence”, available at: https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence?utm_source=Asgard+Singularity+Fund+2018&utm_campaign=15b68c19a1-Asgard-2018-Review&utm_medium=email&utm_term=0_f2a91683dd-15b68c19a1-129201137
European Commission (EC) (2018c), “European AI alliance”, available at: https://ec.europa.eu/digital-single-market/en/european-ai-alliance
European Commission (EC) (2018d), “Artificial intelligence: the AI4EU project launches on 1 January 2019”, available at: https://ec.europa.eu/digital-single-market/en/news/artificial-intelligence-ai4eu-project-launches-1-january-2019
Fabre, G. (2017), China's Digital Transformation: Why Is Artificial Intelligence a Priority for Chinese R&D?
Feijoo, C. (2019), The Industrial Innovation Ecosystem of Artificial Intelligence, China, Current status and prospects, European Commission, Joint Research Centre, Seville.
Filloux, F. (2017), “Let’s welcome experts to our journalism schools”, Monday Note, #479, 11 December.
France Intelligence Artificielle (2017a), “Rapport de synthèse”, available at: www.economie.gouv.fr/files/files/PDF/2017/Rapport_synthese_France_IA_.pdf
France Intelligence Artificielle (2017b), “Rapport de synthèse: groupe de travail”, available at: www.economie.gouv.fr/files/files/PDF/2017/Conclusions_Groupes_Travail_France_IA.pdf
Fulco, M. (2017), “Taiwan's artificial intelligence adventure”, available at: https://international.thenewslens.com/article/81606
Government of the Republic of Korea Interdepartmental Exercise (2016), “Mid- to long-term master plan in preparation for the intelligent information society”, available at: http://msip.go.kr/cms/english/pl/policies2/__icsFiles/afieldfile/2017/07/20/Master%20Plan%20for%20the%20intelligent%20information%20society.pdf
GSMA (2018a), “The future of tech: how UK companies are driving a connected future”, Whitepaper, available at: www.mobileworldlive.com/wp-content/uploads/2018/01/DIT-WP.pdf
GSMA (2018b), “The mobile economy 2018”, available at: www.gsmaintelligence.com/research/?file=061ad2d2417d6ed1ab002da0dbc9ce22&download
Hall, W. and Pesenti, J. (2017), “Growing the artificial intelligence industry in the UK”, available at: www.gov.uk/government/uploads/system/uploads/attachment_data/file/652097/Growing_the_artificial_intelligence_industry_in_the_UK.pdf
House of Commons (2016), “Robotics and artificial intelligence”, House of Commons Science and Technology Committee, Fifth Report of Session 2016-17, available at: www.publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/145.pdf
James, G. (2017), “Is artificial intelligence just marketing hype? Why are the big breakthroughs always five to 20 years in the future?”
Jordan, M. (2018), “Artificial intelligence – the revolution hasn’t happened yet”, available at: https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7
Kurzweil, R. (2000), The Age of Spiritual Machines, Penguin, London.
McKinsey Global Institute (MGI) (2017a), “Artificial intelligence: the next digital frontier?”, available at: www.mckinsey.com/∼/media/McKinsey/Industries/Advanced%20Electronics/Our%20Insights/How%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/MGI-Artificial-Intelligence-Discussion-paper.ashx
McKinsey Global Institute (MGI) (2017b), “Artificial intelligence: implications for China”, available at: www.mckinsey.com/∼/media/McKinsey/Global%20Themes/China/Artificial%20intelligence%20Implications%20for%20China/MGI-Artificial-intelligence-implications-for-China.ashx
McKinsey Global Institute (2018a), “Notes from the AI frontier: applications and value of deep learning”, discussion paper, available at: www.mckinsey.com/∼/media/McKinsey/Featured%20Insights/Artificial%20Intelligence/Notes%20from%20the%20AI%20frontier%20Applications%20and%20value%20of%20deep%20learning/Notes-from-the-AI-frontier-Insights-from-hundreds-of-use-cases-Discussion-paper.ashx
Mext (2016), “Challenges in realizing a super smart society supported by the IoT, big data, and artificial intelligence - Japan as a global frontrunner”, available at: www.mext.go.jp/en/publication/whitepaper/title03/detail03/1384513.htm
Mission Villani (2018a), “Mission Villani sur l'intelligence artificielle (executive summary) (MVES)”, available at: www.aiforhumanity.fr/pdfs/MissionVillani_Summary_ENG.pdf
Mission Villani (2018b), “Mission Villani sur l'intelligence artificielle (final report) (MVFR), for a meaningful artificial intelligence: towards a French and European strategy”, available at: www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf
Mission Villani (2018c), “Mission Villani sur l'intelligence artificielle (booklet) (MVB), what is artificial intelligence?”, available at: www.aiforhumanity.fr/pdfs/MissionVillani_WhatisAI_ENG(1)VF.pdf
National Association of Software & Service Companies (Nasscom) (2018), “Artificial intelligence primer 2018”, available at: www.nasscom.in/knowledge-center/publications/nasscom-artificial-intelligence-primer-2018
National Science and Technology Council (NSTC) (2016a), “Preparing for the future of artificial intelligence”, Executive Office of the President, available at: https://info.publicintelligence.net/WhiteHouse-ArtificialIntelligencePreparations.pdf
National Science and Technology Council (NSTC) (2016c), “National artificial intelligence research and development strategic plan”, available at: https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/national_ai_rd_strategic_plan.pdf
Rendueles, C. (2018), “Enredados y desencantados”, Babelia, El País, 15 September, pp. 2-3.
Russell, S. and Norvig, P. (2009), Artificial Intelligence: A Modern Approach, 3rd ed., Prentice Hall, Upper Saddle River, NJ.
Simon, J.P. (2018), “Review of public programmes, research funding, and similar initiatives on AI in Europe at a national, subnational or super-national level”, EC Joint Research Centre, Digital Economy Unit (B6).
Strategie Deutschland (2018), “Artificial intelligence (AI) made in Germany”, available at: www.ki-strategie-deutschland.de/home.html
Tractica (2016), “Artificial intelligence market forecasts”, available at: www.tractica.com/newsroom/press-releases/artificial-intelligence-revenue-to-reach-36-8-billion-worldwide-by-2025/
Westerheide, F. (2017), “The European artificial intelligence landscape: more than 400 AI companies built in Europe”, Asgard, available at: https://medium.com/cityai/the-european-artificial-intelligence-landscape-more-than-400-ai-companies-build-in-europe-bd17a3d499b
Further reading
HM Government (2017), “UK industrial strategy a leading destination to invest and grow”, available at: www.gov.uk/government/uploads/system/uploads/attachment_data/file/672468/uk-industrial-strategy-international-brochure-single-pages.pdf
HM Government (2019), “Eight great technologies infographic, October 2013”, available at: www.gov.uk/government/uploads/system/uploads/attachment_data/file/249255/eight_great_technologies_overall_infographic.pdf
Marquart, S. (2016), “South Korean government announces nearly $1 billion in AI funding”, available at: https://futurism.com/south-korean-government-announces-nearly-1-billion-ai-funding/
National Science and Technology Council (NSTC) (2016b), “Artificial intelligence, automation and the economy”, available at: www.whitehouse.gov/sites/whitehouse.gov/files/images/EMBARGOED%20AI%20Economy%20Report.pdf
Simon, H.A. and Newell, A. (1958), “Heuristic problem solving: the next advance in operations research”, Operations Research, Vol. 6 No. 1.
About the author
Jean Paul Simon is based at JpsMultiMedia, Seville, Spain.