Citation
Isabella, G., Almeida, M.I.S.d. and Mazzon, J.A. (2023), "ThinkBox: One-way road: the impact of artificial intelligence on the development of knowledge in management", RAUSP Management Journal, Vol. 58 No. 3, pp. 249-255. https://doi.org/10.1108/RAUSP-07-2023-273
Publisher
Emerald Publishing Limited
Copyright © 2023, Giuliana Isabella, Marcos Inácio Severo de Almeida and Jose Afonso Mazzon.
License
Published in RAUSP Management Journal. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Introduction
This article aims to discuss, and to provoke researchers in the Applied Social Sciences about, the impact of artificial intelligence (AI) on the production and dissemination of scientific content. This concern has grown among researchers in various fields since tools and applications such as ChatGPT (generative pre-trained transformer) gained popularity, including within the scientific community. Studies show that even scientists have been unable to identify whether scientific texts were produced by humans or by AI (Else, 2023). In view of these concerns, a good-practice guide for scientific manuscript production was published in editorial format, highlighting the need for researchers to disclose how such tools are used in the preparation of manuscripts (Buriak et al., 2023). We believe it is time to broaden this discussion within the Brazilian academic community. For example, can AI-based software build and deconstruct theses, empirically test research hypotheses, analyze data and contribute to science? Furthermore, can it automate the article evaluation process in scientific journals? What are the impacts and limits of these actions on knowledge production?
Artificial intelligence
AI is based on the development of intelligent computer systems (McCarthy, 2004) capable of simulating human reactions, performing routine tasks based on pattern recognition and assisting in decision-making. Its diverse applications include robotics, process automation (Cui & van Esch, 2022), data analysis and business intelligence (Luo, Tong, Fang, & Qu, 2019). In addition, machine learning allows these systems to learn from their "mistakes" and refine their outputs through feedback (Jordan & Mitchell, 2015). Undoubtedly, AI represents a disruptive innovation in all areas of knowledge, hence its undeniable impact on universities.
There are various applications of AI in business, both in B2C contexts, such as aiding checkout systems (Cui & van Esch, 2022) and supporting structured sales (Luo et al., 2019), and in B2B contexts (Saura, Ribeiro-Soriano, & Palacios-Marqués, 2021).
Applications developed by large corporations, such as IBM Watson and Microsoft Azure Cognitive Services, denote continuous AI development. In July 2019, OpenAI launched the OpenAI API, which allows developers to create their own applications. In 2022, the platform was made available for general users to test conversational applications. The goal was to allow people to converse naturally with computers, which seek to interpret the intention of messages through Natural Language Processing techniques and provide the most appropriate responses to users' demands. Google Bard is one of the latest tools of this kind. In April 2023, Google DeepMind was created to accelerate the company's AI efforts in tasks such as speech recognition, computer vision, content translation, robotics, image processing and encryption systems.
According to the Stanford University report, private investment in AI in 2021 “totaled around $93.5 billion – more than double the total private investment in 2020 […].” Additionally, “the United States and China had the highest number of cross-border collaborations in AI publications from 2010 to 2021, increasing fivefold since 2010.” (Clark & Perrault, 2022, p. 3). The global AI market size was valued at US$136.55bn in 2022 and is projected to expand at a compound annual growth rate of 37.3% from 2023 to 2030, according to Grand View Research (2022).
Although there are positive and negative aspects, it is undeniable that AI is here to stay. Many articles on the use of AI in Administration have been published, and there is still much to be explored in this field. However, its impact is not limited to Administration and its subareas: it also extends to the scientific-academic process itself, including the content, analysis and development of scientific communication in the form of articles and research reports. With AI's progressively deeper entry into the academic world, universities have begun to question their teaching methods (Huang, 2023), while researchers reconsider their academic conduct (Thorp, 2023) and the possibilities of incorporating technologies such as ChatGPT into the knowledge production process (Buriak et al., 2023). It is in this field of discussion that this essay is located: its aim is to discuss, and to provoke researchers in the Applied Social Sciences about, the impact of AI on scientific knowledge production.
Presence of AI in academic publishing in administration and its use in the development of scientific papers
According to the most recent data released on the Scopus database website (February 2023), 6,527 and 3,916 scientific articles containing the keyword AI in their abstract were published in the fields of Social Sciences and Administration, respectively, in the period from 2010 to 2022. This growth is accompanied by researchers’ concern about systematizing knowledge in various subareas of Administration, such as Operations (Toorajipour, Sohrabpour, Nazarpour, Oghazi, & Fischl, 2021), Marketing (Chintalapati & Pandey, 2022; Mustak, Salminen, Plé, & Wirtz, 2021), Finance (Ahmed, Alshater, Ammari, & Hammami, 2022), People Management (Pereira, Hadjielias, Christofi, & Vrontis, 2023) or Public Administration (Medaglia, Gil-Garcia, & Pardo, 2023).
The great leap in knowledge production seems to come from generative conversational AI for text and images, that is, models capable of generating content similar to what humans write and draw. Generative AI learns by identifying patterns and groupings within the available data and is, therefore, capable of creating new information, such as text, images and even music, based on pre-existing knowledge. Through a procedural reference framework involving perceptual intelligence, cognitive intelligence and decision-making intelligence (Xu et al., 2021), it can assist in translating texts into different languages and automatically review and correct grammatical and spelling errors. Some researchers use it to format references according to patterns defined by journals. Other AI-based systems suggest research topics or existing references on a particular theme: there are article-search systems connected to specific research problems, in which researchers enter their questions and the system suggests related academic articles and texts on the topic.
AI also creates automatic summaries of authors' texts, following journal styles. The summaries created are so convincing that, in an experimental survey, about 33% of a sample of scientists could not differentiate scientific abstracts produced by an AI from those produced by humans (Else, 2023). These technologies can also suggest articles for the theoretical framework by listing texts related to the research. Going deeper, they can even summarize academic articles, organizing a list of summaries of works that might be incorporated into a research project and helping the researcher decide what to read. Chatbots can also be used in the data collection stage of a survey, for example, even clarifying respondents' doubts.
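To make the idea concrete, the sketch below shows how such an automatic summary could be requested from a large language model. It is a minimal illustration, not a procedure described in this article: it assumes the openai Python package (v1.x), an API key available in the environment, and an arbitrary model name and word limit.

```python
# Minimal sketch: asking a large language model for an automatic summary.
# Assumptions (not from the article): the `openai` Python client v1.x is
# installed and OPENAI_API_KEY is set; model name and word limit are arbitrary.
from openai import OpenAI

client = OpenAI()

abstract = (
    "Artificial intelligence is increasingly present in the stages of "
    "scientific knowledge production, from literature search to data analysis "
    "and the writing of research reports."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice
    messages=[
        {"role": "system",
         "content": "Summarize academic abstracts in at most 40 words."},
        {"role": "user", "content": abstract},
    ],
)

print(response.choices[0].message.content)
```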
Data analyses can be carried out through scripts developed by these systems. For example, AI can generate automated routines in Python or R and provide them to researchers, who then run the analysis on their own data. There are concrete examples of data scientists who asked ChatGPT to create a dataset and an analysis routine capable of processing the data it had just created (R-bloggers, 2022). Soon, researchers may also be able to hand their data directly to these technologies, which could analyze them according to a pre-defined technique or even return more efficient analytical options.
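As an illustration of the kind of routine described above, the sketch below imitates what an AI assistant might return when asked to create a dataset and analyze it. The variable names, coefficients and choice of ordinary least squares are our own assumptions; a researcher would replace the synthetic data with real observations.

```python
# Illustrative sketch of an AI-generated analysis routine: it fabricates a
# small synthetic dataset and fits an ordinary least squares regression.
# Variable names and effect sizes are arbitrary assumptions for demonstration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "ad_spend": rng.normal(100, 20, n),    # hypothetical predictor
    "store_size": rng.normal(50, 10, n),   # hypothetical control variable
})
# Synthetic outcome generated with known coefficients plus noise
df["sales"] = 5 + 0.8 * df["ad_spend"] + 0.3 * df["store_size"] + rng.normal(0, 5, n)

X = sm.add_constant(df[["ad_spend", "store_size"]])
model = sm.OLS(df["sales"], X).fit()
print(model.summary())  # a researcher would run this on their own data instead
```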
In the conclusion section of a scientific paper, these systems can assist by suggesting positive and negative aspects of the work and by aiding the writing of the limitations and future research sections. New systems can also help identify gaps in the literature on a particular topic or area, bringing ideas for future research that may interest the academic community. Table 1 describes some popular applications that use AI for academic-scientific purposes. This exploratory survey points to AI's potential to act in the different phases of the academic production process, from data collection to analysis and presentation of results. As the technology and its applications evolve, new possibilities arise, and the trend is for AI to become increasingly present in scientific research.
Use of AI in the evaluation of scientific articles and concerns of the participants in the scientific publishing process
Participants in the editorial process could also use the technology to their advantage. Supported by AI, editors could easily identify authors of articles similar to the one submitted and invite them to evaluate it, avoiding time-consuming information searches and increasing the likelihood that the chosen reviewer is well matched to the research topic or method. In addition, AI could assist reviewers by locating articles relevant to the research topic and by assessing the quantity and quality of the scientific articles used in the manuscript under review. In other words, AI can identify and analyze citations in scientific articles, verifying the relevance and accuracy of the sources used.
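A minimal sketch of this kind of reviewer matching is shown below, using TF-IDF vectors and cosine similarity as one simple way to compare a submission with candidate reviewers' previous abstracts. The abstracts are invented placeholders, and a production system would rely on a bibliographic database and richer text representations.

```python
# Sketch: ranking candidate reviewers by textual similarity to a submission.
# The abstracts are invented placeholders; TF-IDF + cosine similarity is one
# simple choice, not a method prescribed in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submission = "Artificial intelligence chatbots and their effect on customer purchases."
reviewer_abstracts = {
    "Reviewer A": "Machine learning applications in supply chain management.",
    "Reviewer B": "Chatbot disclosure and consumer purchase behaviour in online retail.",
    "Reviewer C": "Corporate governance and board diversity in family firms.",
}

corpus = [submission] + list(reviewer_abstracts.values())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).flatten()

# Highest-scoring reviewers are the most topically adherent to the submission
for reviewer, score in sorted(zip(reviewer_abstracts, scores), key=lambda x: -x[1]):
    print(f"{reviewer}: similarity {score:.2f}")
```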
AI can also verify the quality of the language used in scientific production, flagging grammatical, structural, spelling and punctuation errors, and the formatting of articles according to journal standards can be checked and adjusted automatically. Another concern of editors is the development of scientific articles based on fraudulent data. Although AI cannot identify 100% of these cases, it could be used to verify the consistency of the data used in the research, pointing out indications of possible fraud. Despite these points of convergence that signal the benefits of AI, authors are asking governments to regulate the use of these technologies and language models in science, given the possible public mistrust of this type of software (Okerlund et al., 2022). An inherent risk is the ability of language-based applications to produce and distribute misinformation by creating fake scientific articles (Van Noorden, 2022). An essential question in the evaluation of scientific articles is whether, for example, ChatGPT should be considered a co-author, be described only in the methods section or be cited in the acknowledgments. Even more complex is the decision of whether to use it when analyzing a submitted article, or only to assist reviewers in the process.
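One way to make the idea of data-consistency screening concrete, purely as an illustration and not a method mentioned in this article, is a first-digit (Benford's law) check, a classic heuristic for flagging, never proving, possibly fabricated figures:

```python
# Illustration only: Benford's-law first-digit check as a fraud *indicator*.
# This heuristic is our example, not a technique named in the article, and the
# "reported figures" below are synthetic.
import math
from collections import Counter

import numpy as np

rng = np.random.default_rng(0)
values = rng.lognormal(mean=3, sigma=1, size=500)  # stand-in for reported figures

first_digits = [int(str(v).lstrip("0.")[0]) for v in values]
observed = Counter(first_digits)

n = len(first_digits)
chi_square = 0.0
for d in range(1, 10):
    expected = n * math.log10(1 + 1 / d)  # Benford's expected frequency of digit d
    chi_square += (observed.get(d, 0) - expected) ** 2 / expected

# A statistic far above the chi-square critical value (8 degrees of freedom)
# would merely justify a closer look at the data, not a conclusion of fraud.
print(f"Chi-square statistic vs Benford distribution: {chi_square:.1f}")
```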
Final remarks
As AI advances into scientific production, universities question and discuss this new reality. The Organization for Economic Co-operation and Development highlights that AI is capable of proposing new hypotheses and of identifying research subjects for experimental tests in areas such as Medicine, but it also points to the need for improvements in machine learning technologies and for non-governmental organizations (NGOs) and civil society organizations capable of monitoring this development (Nolan, 2021). International academic journals have been taking a position on the subject, such as Nature with the article "ChatGPT: five priorities for research" (van Dis et al., 2023). Educational institutions, such as the University of Chicago, have organized interdisciplinary events to understand this new field of study (Keith, 2023). In Brazil, funding institutions such as FAPESP have expressed their views (Andrade, 2023), and several universities have already held events to discuss the use and implications of AI in teaching and research [e.g. USP, in Prado's (2023) report, or Insper's round tables, reported by Steiw (2023)].
In the field of Administration, this discussion is still in its early stages, although AI applications have already penetrated various business and government spheres, from customer service to organizational processes. Discussions on the impact of AI on knowledge construction in Administration are even more incipient. Several questions can be raised. For example: how can we ensure that AI applications in knowledge construction are less biased and more precise, equitable and effective? How can we control the recursion of AI learning and the reinforcement of biases if we use AI to produce content on which AI itself will then be trained? How will peer review be affected by the advancement of AI? Will it still have a place? What limits will be imposed on the use of AI-based tools in academia? How will the replacement of human decision-making processes by machine learning processes affect the richness of state-of-the-art learning? What government and NGO policies must be implemented to ensure that AI is developed ethically and responsibly in knowledge construction? In other words, researchers, including the authors of this text, are concerned with ensuring that AI applications benefit teaching, research and services to the community.
It is an irreversible path. Although AI can be more precise, faster and apparently free from the cognitive limitations and biases inherent in people, it replicates the biases present in the texts on which it was trained, hence the importance of curating the content used for AI training. Finally, it should be noted that replacing the human decision-making process with machine learning can potentially alter the richness of learning (Balasubramanian, Ye, & Xu, 2022). Therefore, let us put our brains to work together with AI!
Table 1. Some AI tools used in scientific knowledge production

| AI tools | Description |
| --- | --- |
| Elicit; EndNote Click | Supports researchers in finding relevant research topics based on keywords or themes |
| Grammarly; LanguageTool | Analyzes the grammar and spelling of a text, suggesting corrections for errors |
| AI-Insights; Cloud Natural Language; Einstein Discovery; Google Analytics 4; IBM Watson Analytics; MidJourney; Vertex AI | Analyzes data and extracts patterns, allowing researchers to identify relevant insights and trends |
| TiInside | Photo-editing app that allows users to create realistic avatars and photos |
| Bing Chat; Dall-E; GPT-4; Jasper; NotionAI; Quillbot; Resoomer; SummarizeBot; WordAI.com | Automatically generates summaries of published articles, assisting researchers in identifying key works on a topic |
| GitHub Copilot; SciSpace Copilot | Explains research contexts and mathematical equations in detail and suggests program code |
| Galactica | Produces texts and summaries, solves mathematical formulas and even generates Wikipedia-style articles |
| Google Translate; GPT-4 | Automatically translates texts from one language to another |
| Lumen5.com; Synthesia.io | Automatically transforms an article from a website into a YouTube video |

Note: Survey of the tools conducted in April 2023
References
Ahmed, S., Alshater, M. M., Ammari, A. E., & Hammami, H. (2022). Artificial intelligence and machine learning in finance: A bibliometric review. Research in International Business and Finance, 61, 101646, doi: 10.1016/j.ribaf.2022.101646.
Andrade, R. O. (2023). O Salto da inteligência artificial. Revista FAPESP, Edição 24(235), 17–22.
Balasubramanian, N., Ye, Y., & Xu, M. (2022). Substituting human decision-making with machine learning: Implications for organizational learning. Academy of Management Review, 47(3), 448–465, doi: 10.5465/amr.2019.0470.
Buriak, J. M., Akinwande, D., Artzi, N., Brinker, C. J., Burrows, C., Chan, W. C., Chen, C., Chen, X., Chhowalla, M., Chi, L., & Chueh, W. (2023). Best practices for using AI when writing scientific manuscripts. ACS Nano, 17(5), 4091–4093, doi: 10.1021/acsnano.3c01544.
Chintalapati, S., & Pandey, S. K. (2022). Artificial intelligence in marketing: A systematic literature review. International Journal of Market Research, 64(1), 38–68, doi: 10.1177/14707853211018428.
Clark, J., & Perrault, R. (2022). Artificial intelligence index report. Stanford University – Human-Centered Artificial Intelligence. Retrieved from https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf (accessed 24 April 2023).
Cui, Y., & van Esch, P. (2022). Autonomy and control: How political ideology shapes the use of artificial intelligence. Psychology & Marketing, 39(6), 1218–1229, doi: 10.1002/mar.21649.
Else, H. (2023). Researchers cannot always differentiate between AI-generated and original abstracts. Nature, 613(7944), 423–423, doi: 10.1038/d41586-023-00056-7.
Grand View Research. (2022). Market analysis report. Artificial Intelligence. Retrieved from www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market (accessed 14 March 2023).
Huang, K. (2023). Alarmed by AI chatbots, universities start revamping how they teach. The New York Times. Retrieved from www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html (accessed 14 March 2023).
Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255–260. doi: 10.1126/science.aaa8415.
Keith, T. (2023). Combating academic dishonesty, part 6: ChatGPT, AI, and academic integrity, The University of Chicago. Retrieved from https://academictech.uchicago.edu/2023/01/23/combating-academic-dishonesty-part-6-chatgpt-ai-and-academic-integrity/ (accessed 14 March 2023).
Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 937–947, doi: 10.1287/mksc.2019.1192.
McCarthy, J. (2004). What is artificial intelligence? Engineering Materials and Design, 32(3), 1–14, doi: 10.55248/gengpi.2022.31261.
Medaglia, R., Gil-Garcia, R., & Pardo, T. A. (2023). Artificial intelligence in government: Taking stock and moving forward. Social Science Computer Review, 41(1), 123–140, doi: 10.1177/08944393211034087.
Mustak, M., Salminen, J., Plé, L., & Wirtz, J. (2021). Artificial intelligence in marketing: Topic modeling, scientometric analysis, and research agenda. Journal of Business Research, 124, 389–404, doi: 10.1016/j.jbusres.2020.10.044.
Nolan, A. (2021). Artificial intelligence and the future of science. OECD.AI Policy Observatory. Retrieved from https://oecd.ai/en/wonk/ai-future-of-science (accessed 19 March 2023).
Okerlund, J., Klasky, E., Middha, A., Kim, S., Rosenfeld, H., Kleinman, M., & Parthasarathy, S. (2022). What’s in the chatterbox? Large language models, why they matter, and what we should do about them. Technology Assessment Project Report, University of Michigan. Retrieved from https://stpp.fordschool.umich.edu/sites/stpp/files/2022-04/UM%20TAP%20Large%20Language%20Models%20Full%20Report%202022.pdf (accessed 14 March 2023).
Pereira, V., Hadjielias, E., Christofi, M., & Vrontis, D. (2023). A systematic literature review on the impact of artificial intelligence on workplace outcomes: A multi-process perspective. Human Resource Management Review, 33(1), 100857, doi: 10.1016/j.hrmr.2021.100857.
Prado, L. (2023). ChatGPT: entre o fascínio e o temor pela tecnologia, Jornal da USP. Retrieved from https://jornal.usp.br/cultura/chatgpt-entre-o-fascinio-e-o-temor-pela-tecnologia (accessed 13 April 2023).
R-bloggers. (2022). ChatGPT can create datasets, program in R… and when it makes an error it can fix that too! Retrieved from www.r-bloggers.com/2022/12/chatgpt-can-create-datasets-program-in-r-and-when-it-makes-an-error-it-can-fix-that-too (accessed 16 March 2023).
Saura, J. R., Ribeiro-Soriano, D., & Palacios-Marqués, D. (2021). Setting B2B digital marketing in artificial intelligence-based CRMs: A review and directions for future research. Industrial Marketing Management, 98, 161–178. doi: 10.1016/j.indmarman.2021.08.006.
Steiw, P. (2023). Como os professores podem conviver com o ChatGPT. Insper. Retrieved from www.insper.edu.br/noticias/como-os-professores-podem-conviver-com-o-chatgpt/; round table on YouTube: www.youtube.com/watch?v=bVltLiuT8Ms&ab_channel=Insper (accessed 8 April 2023).
Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. doi: 10.1126/science.adg7879.
Toorajipour, R., Sohrabpour, V., Nazarpour, A., Oghazi, P., & Fischl, M. (2021). Artificial intelligence in supply chain management: A systematic literature review. Journal of Business Research, 122, 502–517, doi: 10.1016/j.jbusres.2020.09.009.
Van Noorden, R. (2022). How language-generation AIs could transform science. Nature, 605(7908), 21, doi: 10.1038/d41586-022-01191-3.
Xu, Y., Liu, X., Cao, X., Huang, C., Liu, E., Qian, S., Wu, Y., Dong, F., Qiu, C.W., & Qiu, J. (2021). Artificial intelligence: A powerful paradigm for scientific research. The Innovation, 2(4), 100179. doi: 10.1016/j.xinn.2021.100179.
Further reading
Brill, T. M., Munoz, L., & Miller, R. J. (2019). Siri, Alexa, and other digital assistants: A study of customer satisfaction with artificial intelligence applications. Journal of Marketing Management, 35(15-16), 1401–1436, doi: 10.1080/0267257X.2019.1687571.
Mitchum, R., & Lerner, L. (2019). How AI could change science. The University of Chicago News. Retrieved from https://news.uchicago.edu/story/how-ai-could-change-science (accessed 19 March 2023).
Salvagno, M., Taccone, F. S., & Gerli, A. G. (2023). Can artificial intelligence help for scientific writing? Crit Care, 27(1), 1–5, doi: 10.1186/s13054-023-04380-2.
Erratum
Erratum: It has come to the attention of the publisher that the article “Editorial: One-way road: the impact of artificial intelligence on the development of knowledge in management” by Giuliana Isabella, Marcos Inácio Severo de Almeida and Jose Afonso Mazzon, published in RAUSP Management Journal, Vol. 58 No. 3, https://doi.org/10.1108/RAUSP-07-2023-273, contained the wrong copyright line. This error was introduced during the production process and has now been corrected online. The publisher sincerely apologises for this error and for any confusion caused.
Erratum: It has come to the attention of the publisher that the article “Editorial: One-way road: the impact of artificial intelligence on the development of knowledge in management” by Giuliana Isabella, Marcos Inácio Severo de Almeida and Jose Afonso Mazzon, published in RAUSP Management Journal, Vol. 58 No. 3, https://doi.org/10.1108/RAUSP-07-2023-273, was wrongly labelled as an ‘Editorial’. This error was introduced during the production process and it has now been corrected online to ‘ThinkBox’. The publisher sincerely apologises for this error and for any confusion caused.