ChatGPT and imaginaries of the future of education: insights of Finnish teacher educators

Henriikka Vartiainen (School of Applied Educational Science and Teacher Education, University of Eastern Finland, Joensuu, Finland)
Teemu Valtonen (School of Applied Educational Science and Teacher Education, University of Eastern Finland, Joensuu, Finland)
Juho Kahila (School of Applied Educational Science and Teacher Education, University of Eastern Finland, Joensuu, Finland)
Matti Tedre (School of Computing, University of Eastern Finland, Joensuu, Finland)

Information and Learning Sciences

ISSN: 2398-5348

Article publication date: 13 August 2024


Abstract

Purpose

In 2022 generative AI took the Internet world by storm. Free access to tools that can generate text and images that pass for human creations triggered fiery debates about the potential uses and misuses of generative AI in education. A need has arisen to check the popular utopian and dystopian narratives about AI against the diversity of hopes, concerns and future imaginaries that educators themselves associate with generative AI. The purpose of this study is to investigate the perspectives of Finnish teacher educators on the use of AI in education.

Design/methodology/approach

This article reports findings from a hands-on workshop in teacher training, where participants learned about how generative AI works, collaboratively explored generative AI and then reflected on its potential and challenges.

Findings

The results reveal nuanced, calm and thoughtful imaginaries rooted in a deep understanding of educational policy, evaluation and the sociocultural context of education. They cover teachers’ views on the impact of AI on learners’ agency, metacognition, self-regulation and more.

Originality/value

This article offers a unique exploration into the perceptions and imaginaries of educators regarding generative AI specifically (rather than a “monolithic AI”), moving beyond dystopian views and instead focusing on the potential of AI to align with existing pedagogical practices. The educators’ views contrasted with common techno-deterministic narratives: they perceived AI as an avenue to support formative assessment practices and the development of metacognition, self-regulation, responsibility and well-being. The novel insights also include the need for AI education that critically incorporates social and ethical viewpoints and fosters visions for a future with culturally, socially and environmentally sustainable AI.

Citation

Vartiainen, H., Valtonen, T., Kahila, J. and Tedre, M. (2024), "ChatGPT and imaginaries of the future of education: insights of Finnish teacher educators", Information and Learning Sciences, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/ILS-10-2023-0146

Publisher

Emerald Publishing Limited

Copyright © 2024, Henriikka Vartiainen, Teemu Valtonen, Juho Kahila and Matti Tedre.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

The revolution in deep learning during the 2010s has profoundly impacted the information and communication technology landscape. Given the availability of massive amounts of data, growth in computational power and more sophisticated statistical techniques (Darwiche, 2018), artificial intelligence (AI), and especially machine learning (ML), has become ubiquitous and deeply embedded in our everyday life. But AI itself is currently undergoing a paradigm shift due to the emergence of foundation models (Bommasani et al., 2021). Foundation models, which are trained on massive amounts of data and can be adapted to a wide variety of tasks, have expanded our understanding and imagination of what is possible for AI, as well as raised various kinds of concerns. Risks and opportunities of foundation models have become a topic of intensive discussion worldwide, ranging from their capabilities, technical properties and applications in various domains to their societal impact, including complex economic, environmental, legal and ethical questions (Bommasani et al., 2021). In the midst of these significant changes, the hype around AI has given rise to various kinds of visions and imaginaries of how value is extracted, social order and relations re-organized, the future of work transformed and power distributed, to name a few (e.g. Brynjolfsson and McAfee, 2014; Crawford, 2021; Zuboff, 2019; Eubanks, 2018).

Lupton and Watson (2022) have argued that the future of the AI-human relationship is typically portrayed with either techno-utopian or techno-dystopian visions. For example, imaginaries of commercial actors often propagate overhyped and unrealistic expectations by presenting AI as a much-needed solution to our social, environmental and economic problems. In contrast, alternative narratives are put forth by news reports, op-eds and science fiction, which often present dystopian visions of AI that exploit people and strip them of their free will, agency, jobs and much more (Lupton, 2021; Lupton and Watson, 2022). AI is also present in national strategies, which often portray AI as an inevitable and highly disruptive technological development by using rhetorical devices such as a grand legacy and international competition (Bareis and Katzenbach, 2022).

According to Jasanoff (2015), these kinds of “sociotechnical imaginaries” are visions of the desired future, put forth by a variety of actors (Jasanoff, 2015, p. 27). They create a backdrop of assumptions and expectations that can influence not only public acceptance and uptake of new technologies, but also the ways that these systems are designed and regulated (Cave and Dihal, 2019; Fisher, 2006). Imaginaries are not just a description of a desired future; they also drive action and change in the present (van Lente, 2012). As Borup et al. (2006) have pointed out, imaginaries are fundamentally generative, as they also give guidance, provide structure and legitimation, attract interest and foster investment. From this perspective, it is important to proactively challenge the futures these technologies promise to bring about, rather than just uncritically follow their potentially widespread adoption and normalization in society and in education (Egliston and Carter, 2022).

While there always are multiple imaginaries of the desired future of education (Jasanoff, 2015), dominant imaginaries have tended to neglect the everyday realities, values, and concerns of people whose work and practices are impacted by these emerging technologies (Pink, 2022b). Yet, learning and teaching are always context-bound, and cannot be divorced from the values, rules, and social structures that shape our practices and tool-mediated actions (Vygotsky, 1978; Cole and Engeström, 1993). In advancing our understanding of AI in education, it is therefore important to investigate the imaginaries of teachers, as they play a key role in educating future generations in the age of AI.

However, little is known about the hopes, concerns and future imaginaries that teacher educators associate with AI, particularly with regard to conversational language models such as ChatGPT, which has garnered significant attention worldwide. The present paper contributes to filling this gap in knowledge by exploring the imaginaries of Finnish teacher educators (N = 9) regarding the use of generative AI (ChatGPT) in education. To achieve this, we adopted a research-creation approach (Lupton and Watson, 2022) and organized a hands-on workshop in teacher education, where participants learned about how generative AI works, collaboratively explored generative AI, and then reflected on its potential and challenges from various perspectives. The focus on a specific branch of AI (generative AI) and a hands-on approach were adopted in order to avoid an overly generic and all-encompassing view of AI, which often complicates “teachers’ perceptions of AI” studies. The paper addresses the following research question:

RQ1.

What kinds of imaginaries of learning and education do teachers co-create when working with generative AI?

ChatGPT: an overview

Of the many branches and applications of AI, generative AI has seen especially significant advancements in recent years (Cao et al., 2023). Those advancements have been sped up by ever larger foundation models, that is, large neural networks trained using vast amounts of unlabeled data and adaptable to a broad variety of tasks (Bommasani et al., 2021). Many recent breakthroughs in AI covered in the media have relied on foundation models. Examples include language models like BERT (Devlin et al., 2019), GPT-3 (Brown et al., 2020) and LaMDA (Thoppilan et al., 2022), which can generate text passages that are almost indistinguishable from those written by humans. They have demonstrated impressive flexibility, universality, expressivity and sense of context and genre.
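To make the underlying mechanism concrete, the following minimal sketch (our illustration, not material from the study) generates text with a small, openly available predecessor of such models via the Hugging Face transformers library; the model name, prompt and sampling settings are illustrative assumptions.

```python
# A minimal sketch (illustrative, not from the study): text generation with a
# small, openly available language model through the Hugging Face
# `transformers` library. The model name ("gpt2"), the prompt and the sampling
# settings are our assumptions; production-scale foundation models such as
# GPT-3 or LaMDA are orders of magnitude larger and are typically accessed
# through hosted APIs rather than run locally.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A lesson plan for an introductory statistics course should include"
result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.8)

# The model continues the prompt one token at a time, sampling each next token
# from a probability distribution learned from its training corpus.
print(result[0]["generated_text"])
```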

One reason for their success is that over the past few years, the number of internal parameters in foundation models has increased dramatically. The 2018 BERT model, which was used to improve Google's search engine, had 110 million to 340 million parameters depending on the variant (Devlin et al., 2019). The language model GPT-3, released just two years later, has 175 billion parameters (Brown et al., 2020). At the same time, companies are discovering techniques to train models more effectively with fewer parameters: the 2022 LaMDA has 137 billion parameters (Thoppilan et al., 2022).
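A back-of-the-envelope calculation (ours, assuming the common choice of 16-bit floating-point weights, i.e. 2 bytes per parameter) conveys the scale these counts imply: merely storing GPT-3's weights requires

```latex
% Memory needed just to store GPT-3's weights, assuming 2 bytes per parameter:
\[
  175 \times 10^{9}\ \text{parameters}
  \times 2\ \tfrac{\text{bytes}}{\text{parameter}}
  = 3.5 \times 10^{11}\ \text{bytes}
  \approx 350\ \text{GB}
\]
```

far more memory than any single consumer device offers, which is one reason such models are trained and served from large data centers.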

The increasing popularity of foundation models has led to concerns about their training practices. Experts have pointed out issues with, for instance, ownership and environmental impact. Crawling the web for the enormous datasets used to train these models risks violating the ownership and privacy of individuals whose data are used (e.g. Bommasani et al., 2021). This is because the data used to train these models are often collected from websites without any curation, consent of image owners, transparency or option for individuals to exclude their own data from these datasets (Jo and Gebru, 2020; Paullada et al., 2020). The environmental impact of training models in massive data centers for months has also raised concerns: for example, it took 58 days to train the LaMDA model on an array of 1,024 specialized processing units (Thoppilan et al., 2022).

The value of foundation models in the domain of education has been challenged, too. In the absence of a clear answer to the question “what can new AI technology afford?”, substantial thought must be invested in trying to map and mitigate any disruptions in the complex space that is education (Bommasani et al., 2021). The adoption of foundation models risks homogenization, shrinking stakeholder influence in decision-making and totalizing visions of education (Blodgett and Madaio, 2021).

Blodgett and Madaio (2021) revisited the history of previous educational breakthrough technologies (radio, TV, MOOCs) and urged the reader to critically interrogate claims about AI that promise education at scale for all learners. Indeed, over the recent decades a large number of technologies have been integrated into everyday school practices (Weller, 2020). Technologies have been designed for supporting teaching and learning along with other school activities, but the purposes of using technology have varied over time with different ways of understanding learning (Koschmann, 2012). Each paradigm of educational technology has carried strong expectations about how new technologies can revolutionize teaching and learning practices in school. Lehtinen (2006) referred to these expectations as utopias, criticizing the habit of spinning up excessive expectations without proper research evidence. Imaginaries related to educational technology have always been built on strong expectations of their ability to provide new and innovative ways to support and trigger the desired learning activities. AI is no different (Bommasani et al., 2021).

Earlier educational technologies have provided more precise and bounded frameworks for imaginaries than ChatGPT does. Technologies such as learning management systems, cloud services, personal devices and virtual reality have been integrated for specific purposes, supporting and updating established school practices. In contrast, ChatGPT was not designed with education in mind. There are already papers suggesting how ChatGPT could be used for teaching and learning (Trust et al., 2023; Costello, 2023); typically these papers focus on how ChatGPT can assist teachers’ work, such as designing courses and inventing analogies, examples and quizzes, as well as exploring ways to make traditional teacher work more fluent and efficient. Even though ChatGPT was never designed to be an education tool, its malleability challenges established school practices. From the perspective of students, ChatGPT provides opportunities for outsourcing the effort required by different learning tasks, replacing the very activities expected to trigger the learning processes.

Methodology

This study followed a research-creation approach that was designed to explore imaginaries related to emerging technologies, such as AI and automation (Lupton and Watson, 2022). This approach involves collaborative creation of artifacts (e.g. images, text) to encourage discussions around complex relationships with emerging digital technology (Lupton and Watson, 2022). While this approach may yield new insights into the experiences, actions, concerns, and aspirations of those involved, it is fundamentally part of critical methods focused on revealing tensions between institutional, automated, and economic systems, and identifying perspectives on societal inequalities and power dynamics (Pink, 2022a).

The empirical data for this study were collected during a one-day workshop that took place in a Finnish university's teacher education program in December 2022. As noted by Niemi et al. (2018), a unique characteristic of Finnish education is that teachers are highly educated and valued as professionals who enjoy a great deal of professional freedom. In Finland, education authorities and policymakers, as well as the general public, have trust in teachers, who have the autonomy to decide on teaching and assessment methods. In general, teacher training aligns with the Finnish National Core Curriculum for basic education (FNBoE, 2014), built on a strong equity ethos with the aim of providing a high-quality education to every child regardless of their social, ethnic and economic background (Niemi et al., 2018). Moreover, there is no standardized testing, auditing or outside teaching supervision in Finland, and the National Core Curriculum highlights formative assessment as well as the development of self-assessment skills (FNBoE, 2014).

Workshop

The workshop was co-designed and implemented by a multidisciplinary team of researchers in computer science and education research and was based on our previous research and workshops in teacher education (Vartiainen et al., 2022; Jormanainen et al., 2023). The workshop took place at a time when ChatGPT had just been introduced and mass media showed some signs of incoming media hype: Some op-ed headlines – some alarmist, some excited – were shown to participants in the workshop introduction.

The call for the workshop was distributed openly to all teacher educators and participation was voluntary. The participants were informed that the workshop was part of research and development activities and that no previous experience with AI was required. In line with the institutional policy required by the university, those who had registered for the workshop received a participant information and consent sheet (including a description of the purpose of the study, a statement that participation in the study was entirely voluntary and that participants had the right to withdraw, information about the types of personal data that would be collected, how the data would be processed, and how the research results would be reported). The research conformed to the standing guidelines of the Finnish National Board on Research Integrity (TENK, 2019). In addition to three researchers organizing the workshop, a total of 9 people working in the department of teacher education participated in this study. To protect the anonymity of participants from a small unit, no demographic details were collected.

The five-hour workshop began with a 30-minute introduction to language models and GPT-3.5, which covered their training, functionality, applications and limitations. It aimed at demystifying the technology, exposing some of its mechanisms, outlining its drawbacks and correcting some misconceptions. Then, the participants were instructed to work in groups of three and explore how ChatGPT can create lesson plans or course designs for their own courses and/or generate answers to course tasks. The groups were then given the freedom to experiment with ChatGPT as they pleased. At the end of the first part of the workshop, each group was asked to share their impressions, ideas and thoughts on two questions:

Q1.

What opportunities, challenges and ethical issues does artificial intelligence pose for teacher education? and, related to the use of generative AI in teaching practice,

Q2.

Which aspects of teaching can be automated? Which cannot? What should not be automated?

The second part of the workshop focused on text-to-image generative AI, specifically Midjourney, and followed a similar structure to the first part. The introduction to text-to-image generative AI covered how neural networks are trained using text-image pairs, their functionality, applications, and ethical issues such as copyrights and biases. After exploring text-to-image generative AI in small groups, the participants were once again asked to share their insights and thoughts on generative AI in education.

Data collection and analysis

Following creative making principles (e.g. Lupton and Watson, 2022), the data collection was embedded in joint activities and discourses and recorded using three GoPro cameras. The video data collected from group discussions and joint reflections were transcribed (resulting in a 35,836-word transcript), anonymized and uploaded into the Atlas.ti qualitative analysis software. Thematic analysis (Braun and Clarke, 2006) was used to analyze the data. The first step involved reading the data multiple times to obtain a general sense of the nature of the collaborative discourses and reflections. Second, meaningful units relevant to the research question were identified, coded and given descriptive names. Third, the coded meaning units were compared and discussed between two researchers to identify how they could relate and form overarching themes. In the final phase, potential themes were reviewed and aggregated into five main themes in collaboration with the researchers.

Results

During the workshop, the teachers freely explored ChatGPT by prompting it to generate things like various course designs, lesson plans and feedback on essays. They explored different literary genres, such as poems, opinion pieces, letters to the editor, plays and advertisements. Although the mass media had already highlighted ChatGPT’s ability to generate content in Finnish and English, the participants were amazed by the outcomes. Take, for instance, Minna, who got frightened reading an AI-generated course design that she prompted for her own course: This is really […] yikes! […] I got the chills because these course learning outcomes and so on […] these are almost spot on. Likewise, Sami got excited about how well ChatGPT-generated feedback on student essays matched what teachers tend to provide when learner groups are large and there is no time for very personalized feedback:

it was a good piece of text. Sure it’s pretty much generic blah-blah – but if the point was to approach it so that there are many courses with long essays, to which you can’t give any individual feedback […] and now this [ChatGPT] did it in 10 seconds.

The participants went on to ponder how and whether AI could be used by the students themselves to get feedback on their own work. Jorma said:

if you’ve got extra time in the course, they [students] could do it themselves, send the [ChatGPT-generated] feedback to you [the teacher] and also evaluate if they feel the feedback was, in their opinion, relevant.

In the research literature, generative AI has given rise to fears and alarms. Dehouche (2021, p. 17) expressed concern over how “access to these [text generation] capabilities, in a limited yet worrisome enough extent, is available to the general public.” Cotton et al. (2023, p. 3) wrote how submitting AI-generated essays that are not one’s own work “undermines the very purpose of higher education, which is to challenge and educate students, and could ultimately lead to a devaluation of degrees.” Finnie-Ansley et al. (2022, p. 18) considered OpenAI Codex an “emergent existential threat to the teaching and learning of introductory programming.” Fears like those have given rise to a multitude of suggestions on how to prevent AI-driven plagiarism. In contrast, in this study, the participants’ initial awe was followed by deepening discussions about new (meta) skill sets for future jobs, about broad long-term developments and implications for professions, and about the philosophy of education and evaluation. Their main worries then turned not to how to prevent plagiarism but to the deep social, cultural and philosophical implications of a looming generative AI avalanche, and then to the environmental threats that may come in its wake.

Prompting as a new skill to be learned

In their free-form exploration of generative AI the teachers occasionally faced frustration when the system responses did not meet their expectations. That led the teachers to turn their attention to how they had phrased their prompts. Sauli reflected:

it requires now the skills to request, in a right way, what you desire […] so that in a way those experiments that I made weren’t really very successful in my opinion, but I bet that if I had those terms, the right words, the right requests and right queries […] it would’ve probably turned out much better, but I should practice it […].

The low-floor design of ChatGPT (it is very easy to produce the first useful results) resulted in a feeling of empowerment, and working collaboratively with generative AI was associated with positive feelings, even with artistic experiences. Saara described how:

it was really nice and fun to mull and ideate together, which in itself can be a kind of artistic experience, that conveys […] fun to explore those words [prompts] and what they bring to that [generated response], so that you can view that art also from that [co-creation] perspective.

ChatGPT allows many application possibilities, but at the same time requires the skill to prompt the system for the right things in the right way. This raises the question of whether prompting is on its way to becoming another 21st century skill required of future professionals (e.g. Binkley et al., 2012; Chalkiadaki, 2018). For the participants of this study, these skills are reflected in the national curriculum goals, where one aim is that students will learn how to use technology to support their future studies and future work (FNBoE, 2014). ChatGPT thus presents a new challenge to these expectations; a new type of interface, where desired results are prompted through communication in natural language, demands new skills. What is more, that new type of interface impacts the learning process and outcomes wherever it is adopted, potentially changing the practices of creation and co-creation.
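As a concrete illustration of Sauli's point about “the right words, the right requests”, the hypothetical sketch below contrasts a vague prompt with a specific one. It assumes the OpenAI Python client (v1.x) with an API key in the environment; the model name and both prompts are our illustrative assumptions, not material from the workshop.

```python
# A hypothetical sketch of prompting as a skill: the same request phrased
# vaguely and then specifically. Assumes the OpenAI Python client (v1.x)
# with OPENAI_API_KEY set in the environment; the model name and prompts
# are illustrative assumptions, not taken from the workshop.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single-turn prompt to the chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# A vague request, akin to a first unguided experiment:
print(ask("Make a lesson plan."))

# A specific request that states audience, duration, goals and constraints:
print(ask(
    "Create a 90-minute lesson plan for first-year teacher education students "
    "on formative assessment. Include learning outcomes, one group activity "
    "and a short reflective exit task."
))
```

In practice, the second kind of request tends to yield far more usable output, which is exactly the skill the participants felt they had to practice.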

Long-term transformations and educational implications

In their reflections, teachers focused predominantly on the long-term impact of AI on learning rather than on the short-term gains or tradeoffs it will bring to individual skills, courses or lessons. Drawing on his experiences, Paavo related the emergence of AI to the advent of CAS calculators (calculators able to perform advanced algebraic operations, such as solving equations), which had a notable impact on mathematics education in high schools – but which were not always useful or even allowed in university courses. Against that historical backdrop, he reflected on the broader implications of machines taking over cognitive work previously performed by students and considered how such shifts can impact students' learning, their success in future studies, and the advancement of knowledge and science more broadly:

Paavo: […] when ten years ago CAS calculators, which can integrate and differentiate, came to high schools […] in learning mathematics, those who could use the calculators really well and had learned how to use them, then that skill was sort of reset once their studies here [at university] started. And those who hadn’t used it [CAS] maybe had lost in the [national] exams at the time, but in fact they had better skills to succeed in the future. And […] what makes me wonder in my own field is that what does it do [to us] when we recognize how technology helps and takes us further – but what ensures that our [human] development continues so that the humanity’s intellectual and scientific development won’t grind down to a halt because we lull ourselves into believing that someone else [AI] takes care of those?

Sauli went on to compare AI-based tools with examples from the history of educational technology, and pointed out the trajectories of hype and anxiety – from moral panic to gradual acceptance and adoption – that their arrival triggered in education. He pondered whether AI might nonetheless be different, as it provides a personal assistant:

there’s been technology before and people were afraid that it ruins everything; and then came spell checking and again everything was lost and so on […] But all those have just been adopted as tools for thinking and learning in the classroom: So when you have a calculator you can easier focus on the important topic today [and not on] trivial steps. But this [AI] is in some way so different and so much broader and deeper, especially how there’s […] you have your personal [generic] ‘ask the chef’ there all time […].

Later on he further reasoned that schools should prepare students for a new world of work by teaching them how to use those AI-based tools that professionals actually will use:

So how about these [AI-augmented] tools for learning […] these are today’s tools, and professionals in different occupations use them. We train future professionals so our schools have to use these tools but in a way that doesn’t completely undermine [the foundations of learning].

Sami countered with an alternative perspective on how the demands of companies may get included in educational policy at the cost of other educational values important for Finnish education, which has deep roots in enlightenment philosophy:

What we want schoolkids to be taught is a lot about educational policy too, and also about this kind of classical liberal education or Chydenian [enlightenment philosopher] thinking […] in the sense that we have arts and crafts – or is it just that we take the Estonian route, see what the industry needs and start to push those. It’s pretty much educational policy-related […].

Sami's reflections also recognized how the work of teachers is guided by national imaginaries that create a backdrop of assumptions and expectations for education. He further recognized how multiple actors promote their visions to educational policymakers, as policy changes have the potential to lead to large-scale investments (Selwyn, 2016).

Related to the fact that learning takes time and effort, Saara presented a dilemma in which efficiency discourses, promoted by technological progress rhetoric, contradict human values. She pointed out that human learning demands time and patience rather than speed and efficiency. She suggested an alternative approach in which technologies should help one to slow down and focus on the things that are meaningful for oneself:

Saara: In relation to this I think of time […] or that what this has shown is that this text suggestion [comes] really fast, but learning […] is slow in a way and I feel all the time that the society is really [fast], that when you get some new technology, it just accelerates further, because we can do these processes faster and more efficiently […] and it doesn’t bring maybe in a broad sense some kind of good feeling or feeling of meaningfulness. […] So at what stage will they create technology […] that can be used for slowing down or calming down? That, I suppose, is the original purpose that then you would have time to stop and think and study and do those interesting things

A divide by metacognitive skills

Cotton et al. (2023) reported that among academics the most obvious reaction to ChatGPT has been a worry about cheating, and they wrote that “many are predicting the end of essays as a form of assessment” (Cotton et al., 2023, p. 8). Among the teachers in this study there was no knee-jerk reaction of the type Cotton et al. (2023) described. Instead, the teachers started off by discussing what useful role generative AI could play in education:

Sami: […] the threat that we’re taking education to a lot of directions, and if we’re increasingly moving towards a world of images and text […] we need to give this [AI] some other function than writing reports online somewhere – because it’s a fact that someone else can write those much faster and more credibly. What is more, if we there [online] evaluate the product and not the process – how do we even evaluate the process if we aren’t even present?

The challenge was also seen as a matter of trusting the students. The discussion highlighted the importance of students' self-reflection and own understanding of their work and responsibility. That cheating has always happened was made clear by Minna: “All that cheating [in evaluating the product] just becomes faster and easier; in our field they’ve used [cheats] before, too [laughter]”. Instead of focusing on cheating students, the discussion emphasized how students can grow to realize the value of their own work and learning:

Jorma: Here we may get to see a human side too that people start to limit their own wrongdoing at some point and understand that there’s a saturation point to cheating, and think to themselves that wait a minute, who am I really cheating, that I’m cheating myself if I keep on cheating others. So then it starts to feel nicer to do the right thing, or maybe not […] maybe I’m too idealistic.

Some teachers saw a threat of a widening divide between students by their metacognitive skills: between those who consider their own work and responsibility to be important for learning and those who see ChatGPT as a tool for outsourcing work. As Sauli pointed out, using AI as a way to avoid pain and toil is not conducive to learning: […] Or [you end up with] crazy high polarization where some people get it and others don't get that they're [just] cheating themselves.

One teacher remarked that desire is key:

This I think is the interesting bit, or I mean I’m interested in desire, human as a desiring creature. This [using generative AI] is related to desire, this is so incredibly addictive and all [that we do here] generates more of the same, stimuli for the desire world. You need to be really strong to fight against this desire [using AI for generating all kinds of material]

Educational power tools or something else?

Teachers readily identified dimensions of AI related to power, capital, control, tech companies and asymmetries of agency. They were concerned that, unaware of the mechanisms of AI and the underlying issues of power and control, users and communities may entrust too much power and authority to AI systems and to those who develop and control them. Sami feared that, in the hands of a few, progress in AI may lead to an impenetrable black box of manipulation and control. Maija countered with an alternative future focused on developing individual abilities to understand the ways these technologies actually work, and for whom. Teachers were suspicious of introducing AI-driven tools in the school without a proper understanding of their mechanisms and of the control, power, politics and surveillance in the systems:

Sami: how in this [AI-driven] universe as this sort of creature could you somehow live a sensible life […] as this is lending your own being away to someone; and I believe that the elite has always been better informed about what being is, instead of becoming a part of a machine […] and there’s of course this sort of machine logic that leads you around by the nose – and at the same time others are laughing while they take their kids to a school that doesn’t use AI and machinery […].

Maija: Mmm. They [AI] are tools.

Sami: Sure, but the dangers in this school [AI] learning machine mindset and so on is that you just go with this one thing […] and suddenly we realize that we, the bunch of us, have drifted into that black box.

Maija: Indeed, if you don’t know what are the working principles and who administers [them] and follow all this ‘take this, here’s a nice tool, let’s use this to everything, nice app’ are used to everything that we don’t understand.

Teachers’ concerns regarding power and authority resonate well with the literature on the ethics of AI. AI-driven systems are by no means “neutral tools” that produce objective, value-free results through an unbiased process. Like all other artifacts, they can be highly compatible with, or even require, particular kinds of political relationships, or become preferred ways of establishing power and authority in a community (Winner, 1980). For instance, foundation models (like those used for ChatGPT and Midjourney) are intrinsically value-laden and embody numerous institutionalized patterns of politics, power and authority – from collecting, annotating, curating and documenting their training data (Paullada et al., 2020), to the priorities of the state agencies and tech companies that train the models (Crawford, 2021, pp. 181-209), to the applications that employ them (Bommasani et al., 2021), to their uses in society (Chouldechova and Roth, 2020), and much more.

The teachers expanded their initial view that systems using AI are “just tools” toward a more nuanced view of them as extensions of the human: artifacts extend homo faber’s mental and bodily faculties to realize one’s intentions (McLuhan, 1964; Brey, 2000). Ubiquitous systems that have the potential to alter social structures, upend social contracts and shift power relations require communities to pose questions about AI’s impact on behavior, agency, morals and ethics, among other things (Coeckelbergh, 2020). The teachers’ discussions revealed a dire need in education systems to reconsider the relationship between people and AI-driven technology.

AI is not separate from the environment

Far from seeing AI as a “virtual” phenomenon detached from the physical world, teachers were well aware of the massive global industrial infrastructures necessary for maintaining the sense of immediacy and remarkable capability that tools like ChatGPT and Midjourney exhibit. Saara led the discussion first to the massive global electricity bill – and the carbon footprint that ensues – of personal devices, communication infrastructure, server farms and data centers:

Saara: How much energy do [AI systems] like these consume, in the time of energy crisis and at least Maija asked at some point specifically that whether these are on some server […] or at least I was left wondering how much energy do these really use?

From there the discussion went to AI’s massive, ever-growing environmental strain:

Saara: And is it really possible […] to produce this technological ecosystem in a way that we don’t at the same time destroy our whole Earth. I myself can barely fathom; I still think it’s kind of an important question there that is it […] We do even today use massive [amounts of energy] […] but in a way if we get new technology all the time then is it wrong to assume that it also consumes even more energy?

By doing that the teachers touched on a sore spot of the information society. The field of computing has long been troubled by the close connection between ML model size and the power consumed by training, as well as by the massive carbon footprint of storing data and moving it around (e.g. Bender et al., 2021). For decades the field has recognized harmful practices related to e-waste management and to transporting e-waste to countries with lower environmental standards, with repeated attempts to combat those practices (e.g. Widmer et al., 2005). Researchers have pointed out disparities between environmental harm and economic gain over the equipment’s life span, from mining rare earth metals to disposing of e-waste: the most environmentally harmful parts are the least financially rewarding, and their harms fall predominantly on socially vulnerable populations (e.g. Stewart et al., 2014). The environmental concerns raised by the teachers are those increasingly raised by the AI industry, researchers, NGOs and critics of AI (e.g. Bommasani et al., 2021; Crawford, 2021).

Discussion

This study was set up to explore the hopes, concerns, and future imaginaries that educators themselves associate with generative AI. In order to concretize what AI means in this context, participants joined a hands-on workshop to learn about the mechanisms of generative AI, explore it, and reflect on its potential and challenges from various perspectives.

The results showed a rich display of educational visions and balanced scrutiny of the short-term and long-term pros and cons of generative AI. Instead of dystopian scenarios of how generative AI will ruin university education and force teachers to abandon their summative assessment practices, the participants of this study reflected on and saw the potential of AI to support the many forms of formative assessment practices they already had in place. Those reflections illustrate how the teachers' imaginaries of AI found a fit with locally valued sociocultural practices and goals pursued in teaching. In Finland, formative assessment practices have long been part of teacher education, and they are also emphasized in the national core curriculum (FNBoE, 2014). Likewise, teacher reflections emphasized inquiry-based and knowledge-creating learning, which is also supported by the curricula. In general, such pedagogical approaches highlight open-ended learning tasks, which require knowledge creation, reflection on theories through context-bound problems, and taking responsibility for learning and for regulating one's own and shared actions (Hakkarainen et al., 2013). In this respect, these imaginaries of educational technologies mirror the teachers' pedagogical values and approaches, through which they view the potential and challenges of AI.

Teachers' pedagogical reflections emphasized things like the development of metacognition, self-regulation, responsibility and well-being, which can be seen as resistance to techno-deterministic imaginaries. When the teachers reflected on the power relations they had identified, they emphasized that resistance demands that individuals and communities understand the mechanisms of AI and the hidden power relations embedded in AI-enhanced technology, rather than surrendering to quick desires and blindly becoming part of the efficiency narrative. By doing that, the teachers positioned themselves between AI cynicism, which sees AI as so morally derailed that it is beyond fixing (e.g. Crawford, 2021, pp. 223-227), and AI utopian visions that spring from enchanted determinism [i.e. discourses that go beyond the usual marketing or press hype to shroud AI-driven systems in an aura of mystery and obfuscate accountability for them (Campolo and Crawford, 2020)].

Of importance in the teachers' reflections was that the values and pedagogical approaches that reflected thinking skills, self-regulation, taking responsibility and supporting well-being were collectively shared within the teacher community. While these values and pedagogical goals are also supported by the existing curricula and the current imaginaries of national educational authorities, teachers also raised concerns that national imaginaries may incorporate techno-deterministic values in the future. That concern has to some extent materialized already; for instance, Eubanks's (2018) study showed how AI systems deployed in the name of the welfare state have, much as Zuboff's third law stated as early as the 1980s, turned into systems of surveillance and control.

The results of the study further revealed that in the imaginaries of teachers, responsibility was seen as a wider issue than individual decision-making and action. These reflections emphasized responsible action in relation to the environment. The teachers' imaginaries were not solely focused on the human-technology relationship, but included deep responsibility toward other living beings and the environment. Teachers focused on how AI could accelerate environmental problems rather than on how AI could help humans create solutions to complex problems, such as climate change (cf. Coeckelbergh, 2020; Bommasani et al., 2021).

In the big picture, encounters with generative AI prompted teachers to discuss and contemplate very complex and fundamental issues related to teaching and learning. These complex questions have been a frequent topic for discussion for a long time and still are, but in this workshop hands-on experiments with AI challenged teachers to reconsider some of their earlier views. Interestingly, these teachers never identified any single problem in education that could be said to be something that AI was designed to solve. This may be related to the fact that they believed that there is no easy fix that AI could provide for learning, well-being, and moral responsibility in the complex relationships of human and non-human actors.

While this study presents early, exploratory findings, further research needs to be carried out in order to better understand how the impact of AI can be discussed and planned proactively in a contextual way, rather than just uncritically following its potentially widespread adoption and normalization in education (Egliston and Carter, 2022). The study shows the strength of focusing on a well-defined branch of AI instead of generic and vague “monolithic AI”: It allows discussions about specific use cases and what sociocultural and processual transformations AI might support.

The results point to a need to develop AI education for teachers that systematically incorporates critical perspectives on the social and ethical dimensions of AI in education, and challenges learners to weigh ethical and social questions in the context of their own everyday work. Efforts need to be spent especially on education that facilitates the ability to view existing practices in ways that allow new solutions, imaginaries and visions for a future with AI that is socially, culturally and environmentally sustainable. The participants in this study scratched the surface of what such future-building requires from oneself, communities, societies, legislation and other perspectives. The ability to set one's own visions is necessary for resistance against technological determinism. A key policy priority should be to plan national curricula and recommendations with an eye toward developing learners' and teachers' understanding of AI and its impacts in the ways outlined above.

Limitations and future directions

Results of the study showed that creative making can provide a participatory method for capturing teachers' experiences, concerns and hopes in a way that raises collective awareness of the deeper socio-technical tensions and power relations associated with AI (see Pink, 2022a, 2022b). However, it is important to note that the tensions and sensitivities described in this study are based on Finnish teacher educators' first encounters with generative AI in a specific cultural context. As learning and teaching, and imaginaries of them, are context-bound, the findings of this exploratory study are not intended to be generalized.

The participants' self-directed desire to take part in the workshop likely shaped their engagement with the tools and the relatively positive attitude reflected in that engagement and in the discussions. As the participants were colleagues in a small unit, the personal chemistry within the work community facilitated a collaborative and supportive atmosphere and open, sometimes critical and surprising discussions on the topic. However, despite facilitation that aimed to probe for different perspectives and encouraged controversial viewpoints, the dynamics between senior and junior colleagues, as well as existing relationships, might have created conformity and possibly discouraged dissenting opinions among some participants. In a small community of colleagues, some participants might have been biased toward harmony and consensus-seeking. Future research might benefit from a more diverse group of participants with varying levels of interest in, attitudes toward, and familiarity with AI.

Moreover, these results may not be readily transferable to other contexts due to differences in education, culture and educational policy environments, among others. The unique characteristics of Finnish teacher education and the school environment, such as the high autonomy of teachers and the focus on formative assessment, may hinder the transferability of the results to other settings. An important future line of research is therefore to explore the imaginaries and experiences of teachers from different cultural and educational backgrounds. This would also open opportunities to gain deeper insights into how contextual issues, such as pedagogical practices as well as larger structures and visions promoted by curricula and educational policy, emerge in teachers' imaginaries of AI. Such discourses could bring attention back to the core values that we aspire to cultivate in different spheres of education, and to how these values relate to the contextual motives, tensions and tradeoffs of adopting generative AI in educational practices.

While the impact of generative AI on education in the present and future remains unclear, what is clear is that now is the time to discuss the critical aspects of foundation models and their societal consequences (Bommasani et al., 2021; Bender et al., 2021). As many of the mechanisms, tensions and impacts of generative AI are largely opaque to teachers, there is an evident need for in-service and preservice teacher training in which teachers can critically explore the data-intensive technologies that are shaping our everyday life, learning and interactions with the world (Vartiainen et al., 2022; Jormanainen et al., 2023). Equally important would be to discuss what kinds of context-specific values and ethical insights need to be negotiated and taken into account if these technologies become part of teaching and learning. As Pink (2022b) has noted, AI ethics and trust in automated decision-making are also negotiated and shaped in the everyday actions and decisions that people make, including the rules of using AI technologies in a responsible manner. As technological imaginaries are not fixed or finished (Pink, 2022b), future research could also examine how teachers' imaginaries of AI change in everyday practices and over time.

References

Bareis, J. and Katzenbach, C. (2022), “Talking AI into being: the narratives and imaginaries of national AI strategies and their performative politics”, Science, Technology, and Human Values, Vol. 47 No. 5, pp. 855-881.

Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021), “On the dangers of stochastic parrots: can language models be too big?”, in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, New York, NY, pp. 610-623.

Binkley, M., Erstad, O., Herman, J., Raizen, S., Ripley, M., Miller-Ricci, M. and Rumble, M. (2012), “Defining twenty-first century skills”, in Griffin, P., McGaw, B. and Care, E. (Eds), Assessment and Teaching of 21st Century Skills, Springer Netherlands, Dordrecht, pp. 17-66.

Blodgett, S.L. and Madaio, M. (2021), “Risks of AI foundation models in education”, arXiv.org (2110.10024) [cs.CY], available at: https://arxiv.org/abs/2110.10024

Bommasani, R., Hudson, D.A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M.S., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N., Chen, A., Creel, K., Davis, J.Q., Demszky, D., Donahue, C., Doumbouya, M., Durmus, E., Ermon, S., Etchemendy, J., Ethayarajh, K., Fei-Fei, L., Finn, C., Gale, T., Gillespie, L., Goel, K., Goodman, N., Grossman, S., Guha, N., Hashimoto, T., Henderson, P., Hewitt, J., Ho, D.E., Hong, J., Hsu, K., Huang, J., Icard, T., Jain, S., Jurafsky, D., Kalluri, P., Karamcheti, S., Keeling, G., Khani, F., Khattab, O., Koh, P.W., Krass, M., Krishna, R., Kuditipudi, R., Kumar, A., Ladhak, F., Lee, M., Lee, T., Leskovec, J., Levent, I., Li, X.L., Li, X., Ma, T., Malik, A., Manning, C.D., Mirchandani, S., Mitchell, E., Munyikwa, Z., Nair, S., Narayan, A., Narayanan, D., Newman, B., Nie, A., Niebles, J.C., Nilforoshan, H., Nyarko, J., Ogut, G., Orr, L., Papadimitriou, I., Park, J.S., Piech, C., Portelance, E., Potts, C., Raghunathan, A., Reich, R., Ren, H., Rong, F., Roohani, Y., Ruiz, C., Ryan, J., Ré, C., Sadigh, D., Sagawa, S., Santhanam, K., Shih, A., Srinivasan, K., Tamkin, A., Taori, R., Thomas, A.W., Tramèr, F., Wang, R.E., Wang, W. and Liang, P. (2021), “On the opportunities and risks of foundation models”, arXiv.org (2108.07258) [cs.LG], available at: https://arxiv.org/abs/2108.07258

Borup, M., Brown, N., Konrad, K. and Lente, H.V. (2006), “The sociology of expectations in science and technology”, Technology Analysis & Strategic Management, Vol. 18 Nos 3/4, pp. 285-298, doi: 10.1080/09537320600777002.

Braun, V. and Clarke, V. (2006), “Using thematic analysis in psychology”, Qualitative Research in Psychology, Vol. 3 No. 2, pp. 77-101.

Brey, P. (2000), “Technology as extension of human faculties”, in Mitcham, C. (Ed.), Metaphysics, Epistemology, and Technology, Emerald Group Publishing, Bingley, pp. 59-78.

Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D.M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I. and Amodei, D. (2020), “Language models are few-shot learners”, arXiv.org (2005.14165) [cs.CL], available at: https://arxiv.org/abs/2005.14165

Brynjolfsson, E. and McAfee, A. (2014), The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, W. W. Norton and Co., New York, NY.

Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P.S. and Sun, L. (2023), “A comprehensive survey of AI-generated content (AIGC): a history of generative AI from GAN to ChatGPT”, arXiv.org (2303.04226), available at: https://arxiv.org/abs/2303.04226

Campolo, A. and Crawford, K. (2020), “Enchanted determinism: power without responsibility in artificial intelligence”, Engaging Science, Technology, and Society, Vol. 6, pp. 1-19.

Cave, S. and Dihal, K. (2019), “Hopes and fears for intelligent machines in fiction and reality”, Nature Machine Intelligence, Vol. 1 No. 2, pp. 74-78.

Chalkiadaki, A. (2018), “A systematic literature review of 21st century skills and competencies in primary education”, International Journal of Instruction, Vol. 11 No. 3, pp. 1-16.

Chouldechova, A. and Roth, A. (2020), “A snapshot of the frontiers of fairness in machine learning”, Communications of the ACM, Vol. 63 No. 5, pp. 82-89.

Cole, M. and Engeström, Y. (1993), “A cultural-historical approach to distributed cognition”, in Salomon, G. (Ed.), Distributed Cognitions: Psychological and Educational Considerations, Cambridge University Press, Cambridge, pp. 1-46.

Costello, E. (2023), “ChatGPT and the educational AI chatter: full of bullshit or trying to tell Us something?”, Postdigital Science and Education, Vol. 6 No. 2, pp. 425-430.

Cotton, D.R.E., Cotton, P.A. and Shipway, J.R. (2023), “Chatting and cheating: ensuring academic integrity in the era of ChatGPT”, Innovations in Education and Teaching International, pp. 1-12.

Crawford, K. (2021), Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, New Haven, CT, USA.

Coeckelbergh, M. (2020), AI Ethics, The MIT Press, Cambridge, MA.

Darwiche, A. (2018), “Human-level intelligence or animal-like abilities?”, Communications of The ACM, Vol. 61 No. 10, pp. 56-67.

Dehouche, N. (2021), “Plagiarism in the age of massive generative pre-trained transformers (GPT-3)”, Ethics in Science and Environmental Politics, Vol. 21 No. 2021, pp. 17-23.

Devlin, J., Chang, M.-W., Lee, K. and Toutanova, K. (2019), “BERT: pre-training of deep bidirectional transformers for language understanding”, Proceedings of the 2019 Conference of the North American Chapter of the ACL: Human Language Technologies, Minneapolis, MN, pp. 4171-4186.

Egliston, B. and Carter, M. (2022), “‘The interface of the future’: mixed reality, intimate data and imagined temporalities”, Big Data & Society, Vol. 9 No. 1, doi: 10.1177/20539517211063689.

Eubanks, V. (2018), Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, St. Martin’s Press, New York, NY.

Finnie-Ansley, J., Denny, P., Becker, B.A., Luxton-Reilly, A. and Prather, J. (2022), “The robots are coming: exploring the implications of OpenAI codex on introductory programming”, Proceedings of the 24th Australasian Computing Education Conference, ACE ’22, New York, NY, pp. 10-19.

Fisher, T. (2006), “Educational transformation: is it, like ‘beauty’, in the eye of the beholder, or will we know it when we see it?”, Education and Information Technologies, Vol. 11 No. 3, pp. 293-303, doi: 10.1007/s10639-006-9009-1.

Hakkarainen, K., Paavola, S., Kangas, K. and Seitamaa-Hakkarainen, P. (2013), “Socio-cultural perspectives on collaborative learning: towards collaborative knowledge creation”, in Hmelo-Silver, C., Chinn, C., Chan, C. and O’Donnell, A. (Eds), International Handbook of Collaborative Learning, Routledge, New York, NY, pp. 57-73.

Jasanoff, S. (2015), “Future imperfect: science, technology, and the imaginations of modernity”, in Jasanoff, S. and Kim, S.-H. (Eds), Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power, University of Chicago Press, Chicago, IL, pp. 1-34.

Jo, E.S. and Gebru, T. (2020), “Lessons from archives: strategies for collecting socio-cultural data in machine learning”, Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAccT ’20, pp. 306-316, New York, NY.

Jormanainen, I., Tedre, M., Vartiainen, H., Valtonen, T., Toivonen, T. and Kahila, J. (2023), “Learning machine learning in K–12”, in Sentance, S., Barendsen, E., Howard, N.R. and Schulte, C. (Eds), Computer Science Education: Perspectives on Teaching and Learning in School, chapter 6, Bloomsbury Academic, London.

Koschmann, T. (2012), CSCL: Theory and Practice of an Emerging Paradigm, Routledge.

Lehtinen, E. (2006), “Teknologian kehitys ja oppimisen utopiat [Technological development and the utopias of learning]”, in Järvelä, S., Häkkinen, P. and Lehtinen, E. (Eds), Oppimisen teoria ja teknologian opetuskäyttö [Theory of Learning and the Educational Use of Technology], WSOY Oppimateriaalit, Helsinki, pp. 264-278.

Lupton, D. (2021), “‘Flawed,’ ‘cruel’ and ‘irresponsible’: the framing of automated decision-making technologies in the Australian press”, Social Science Research Network, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3828952

Lupton, D. and Watson, A. (2022), “Research-Creations for speculating about digitized automation: bringing creative writing prompts and vital materialism into the sociology of futures”, Qualitative Inquiry, Vol. 28 No. 7, pp. 754-766.

McLuhan, M. (1964), Understanding Media: The Extensions of Man, McGraw-Hill, New York, NY.

Niemi, H., Lavonen, J., Kallioniemi, A. and Toom, A. (2018), “The role of teachers in the Finnish educational system: high professional autonomy and responsibility”, in Niemi, H., Toom, A., Kallioniemi, A. and Lavonen, J. (Eds), The Teacher's Role in the Changing Globalizing World: Resources and Challenges Related to the Professional Work of Teaching, Brill Sense, Leiden, pp. 47-61.

Paullada, A., Raji, I.D., Bender, E.M., Denton, E. and Hanna, A. (2020), “Data and its (dis)contents: a survey of dataset development and use in machine learning research”, arXiv.org (2012.05345) [cs.LG], available at: https://arxiv.org/abs/2012.05345

Pink, S. (2022a), “Methods for researching automated futures”, Qualitative Inquiry, Vol. 28 No. 7, pp. 747-753.

Pink, S. (2022b), “Trust, ethics and automation: anticipatory imaginaries in everyday life”, in Pink, S., Berg, M., Lupton, D. and Ruckenstein, M. (Eds), Everyday Automation: Experiencing and Anticipating Emerging Technologies, Routledge, New York, NY, pp. 44-58.

Selwyn, N. (2016), “Minding our language: why education and technology is full of bullshit … and what might be done about it”, Learning, Media and Technology, Vol. 41 No. 3, pp. 437-443, doi: 10.1080/17439884.2015.1012523.

Stewart, I.T., Bacon, C.M. and Burke, W.D. (2014), “The uneven distribution of environmental burdens and benefits in Silicon Valley’s backyard”, Applied Geography, Vol. 55, pp. 266-277.

The Finnish National Board of Education (FNBoE) (2014), “Perusopetuksen opetussuunnitelman perusteet 2014 [Finnish national core curriculum for basic education]”, available at: www.oph.fi/sites/default/files/documents/perusopetuksen_opetussuunnitelman_perusteet_2014.pdf

Thoppilan, R., Freitas, D.D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng, H.-T., Jin, A., Bos, T., Baker, L., Du, Y., Li, Y., Lee, H., Zheng, H.S., Ghafouri, A., Menegali, M., Huang, Y., Krikun, M., Lepikhin, D., Qin, J., Chen, D., Xu, Y., Chen, Z., Roberts, A., Bosma, M., Zhao, V., Zhou, Y., Chang, C.-C., Krivokon, I., Rusch, W., Pickett, M., Srinivasan, P., Man, L., Meier-Hellstern, K., Morris, M.R., Doshi, T., Santos, R.D., Duke, T., Soraker, J., Zevenbergen, B., Prabhakaran, V., Diaz, M., Hutchinson, B., Olson, K., Molina, A., Hoffman-John, E., Lee, J., Aroyo, L., Rajakumar, R., Butryna, A., Lamm, M., Kuzmina, V., Fenton, J., Cohen, A., Bernstein, R., Kurzweil, R., Aguera-Arcas, B., Cui, C., Croak, M., Chi, E. and Le, Q. (2022), “LaMDA: language models for dialog applications”, arXiv.org (2201.08239) [cs.CL], available at: https://arxiv.org/abs/2201.08239

Trust, T., Whalen, J. and Mouza, C. (2023), “ChatGPT: challenges, opportunities, and implications for teacher education”, Contemporary Issues in Technology and Teacher Education, Vol. 23 No. 1.

van Lente, H. (2012), “Navigating foresight in a sea of expectations: lessons from the sociology of expectations”, Technology Analysis and Strategic Management, Vol. 24 No. 8, pp. 769-782.

Vartiainen, H., Pellas, L., Kahila, J., Valtonen, T. and Tedre, M. (2022), “Pre-service teachers’ insights on data agency”, New Media & Society, pp. 1-20.

Vygotsky, L.S. (1978), Mind in Society: The Development of Higher Psychological Processes, Harvard University Press, Cambridge, MA.

Weller, M. (2020), 25 Years of Ed Tech, AU Press, Athabasca University, Edmonton.

Widmer, R., Oswald-Krapf, H., Sinha-Khetriwal, D., Schnellmann, M. and Böni, H. (2005), “Global perspectives on e-waste”, Environmental Impact Assessment Review, Vol. 25 No. 5, pp. 436-458, doi: 10.1016/j.eiar.2005.04.001.

Winner, L. (1980), “Do artifacts have politics?”, Daedalus, Vol. 109 No. 1, pp. 121-136.

Zuboff, S. (2019), The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Profile Books, New York, NY.

Acknowledgements

This work was supported by the Strategic Research Council (SRC) established within the Research Council of Finland under Grant #352859 and Grant #352876. The authors would like to thank January Collective for core support.

Corresponding author

Matti Tedre can be contacted at: matti.tedre@uef.fi
