Guest editorial

Charles M. Ess (Department of Media and Communication, University of Oslo, Oslo, Norway)

Journal of Information, Communication and Ethics in Society

ISSN: 1477-996X

Article publication date: 6 September 2021

Issue publication date: 16 September 2021


Citation

Ess, C.M. (2021), "Guest editorial", Journal of Information, Communication and Ethics in Society, Vol. 19 No. 3, pp. 313-328. https://doi.org/10.1108/JICES-08-2021-140

Publisher: Emerald Publishing Limited

Copyright © 2021, Emerald Publishing Limited


Interdisciplinary dialogues on the social and ethical dimensions of digital technologies

Welcome and background

Welcome to this issue of the Journal of Information, Communication and Ethics in Society (JICES) – the inaugural issue of a new collaboration between the Association of Internet Researchers (AoIR) and the Journal.

This collaboration was initiated in early 2019 by Simon Rogerson, Chief Editor of JICES. As he noted, JICES aims to “[…] promote thoughtful dialogue regarding the wider social and ethical issues related to the planning, development, implementation and use of new media and information and communication technologies” (2019). The Journal thereby offers “necessary interdisciplinary, culturally and geographically diverse works essential to understanding the impacts of the pervasive new media and information and communication technologies.” At the same time, Prof Rogerson observed, in connection with the call for the AoIR 2019 conference, “great resonance between the aims of AoIR and JICES” (ibid). I hardly needed persuading. As someone who has engaged with JICES and its affiliated ETHICOMP conferences for some time, I find this resonance clear: the AoIR community has fostered critical reflection on the ethical and social dimensions of the internet and internet-facilitated communication – in particular, regarding internet research ethics – since its inception in 2000. Following the model of the annual AoIR special issue of Information, Communication and Society, made up of papers selected from the annual conference, Prof Rogerson proposed a similar collaboration, “focusing on the ethical and social impact of the internet” (ibid). Certainly, AoIR panels and presentations on these themes have appeared over the years as articles and chapters in journals and anthologies addressing the widely interdisciplinary communities represented within AoIR. However, this new collaboration promised a singular advantage: “Through JICES the work of AoIR would reach out to the information and computer ethics community and have the potential of new membership and increased influence” (Rogerson, 2019; for a much more extensive exploration of the overlaps here, see Ess, 2020b).

The proposal fit neatly with our long-term practice of calling for papers on ethics, including research ethics, for presentation and discussion at AoIR; but it came as the Ethics Working Group was completing its three-year project of developing the second amendment to the AoIR Internet Research Ethics guidelines, first published as IRE 1.0 (Ess and the AoIR ethics working committee, 2002) and first amended in 2012 as IRE 2.0 (Markham and Buchanan, 2012). Completing IRE 3.0 in 2019 (franzke et al., 2020) required postponing the collaboration to the AoIR 2020 conference. This proved fortuitous: the emphases in the AoIR 2020 conference call included power, justice and inequality in digitally mediated lives; life, sex and death on, through and with social media; and political life online (https://aoir.org/aoir2020/cfp/) – emphases that clearly dovetailed with JICES’ interests and that, hence, are directly taken up in the articles collected here. Most happily, aline shakti franzke, a primary co-chair and author of IRE 3.0, agreed to collaborate in the task of developing calls and organizing panels for the AoIR 2020 conference that would generate candidate papers for this first special issue.

Overview

We developed two panels of papers for presentation at the AoIR conference on October 29, 2020. From these, five contributors developed their work into papers for further review and revision for inclusion here. Prof Rogerson wisely suggested including here a sixth paper, by Nesibe Kantar and Terrell Ward Bynum, which, as we will see, works perfectly to set a primary background for both the first set of articles, concerned with the ethical and political dimensions of ICTs, and the second set, focusing on (internet) research ethics.

Kantar and Bynum’s “Global ethics for the digital age – flourishing ethics” begins with the long history of how virtue ethics (VE) in particular has come to take a central place in the ethics of information and communication technologies. This sets the stage for two contributions that develop important critiques of virtue ethics – Bastiaan Vanacker’s “Virtue Ethics, Situationism and Casuistry: Toward a Digital Ethics Beyond Exemplars” and Morten Bay’s “Four Challenges to Confucian Virtue Ethics in Technology.” The final contribution here, Chi Kwok and Ngai Keung Chan’s “Toward a Political Theory of Data Justice: A Public Good Perspective,” shifts our perspective toward the political dimensions of Big Data and AI, offering distinctive arguments from political theory regarding their use by democratic states. Along the way, I will comment on how these four papers complement one another in important ways, as well as on their connections with and implications for larger discussions and debates in these domains.

Broadly, the opening foci on virtue ethics are echoed in the final two papers, which explore both familiar and novel research ethics challenges evoked within two specific research projects: these will also provide helpful responses to both Vanacker’s and Bay’s important critiques.

The first is by Katja Kaufmann, Tabea Bork-Hüffer, Niklas Gudowsky-Blatakes, Marjo Rauhala and Martin Rutzinger. Their “Ethical Challenges of Researching Emergent Socio-Material-Technological Phenomena: Insights from an Interdisciplinary Mixed Methods Project Using Mobile Eye-Tracking” delves into the multiple and thorny ethical challenges that unfolded through their project, which is interdisciplinary and takes relatively novel technologies, including Augmented Reality, out of the lab and into public spaces. In the second paper, “On the Complexities of Studying Sensitive Communities Online as a Researcher-Participant,” Ylva Hård af Segerstad explores the often exceptionally draining and demanding challenges of researching an online community for bereaved parents – one she participates in as a parent who has lost a child. As she has commented in earlier AoIR presentations, this is not a club you want to be a member of. The outcomes of Hård af Segerstad’s research and the insights she offers here on how to engage in such research, despite – and because of – its sometimes enormous personal challenges and costs, are all the more valuable for the rest of us.

The articles

To see how these diverse contributions cohere with and complement one another, I now turn to a more detailed review of each.

We begin with Kantar and Bynum, as their careful historical overview of what they call “Flourishing Ethics” (FE) – more specifically, a Human-Centered Flourishing Ethics – helps establish the broad background and ethical palette, including key topics and themes, for what follows. To start, Bynum’s own work – including the collaboration mentioned here with Simon Rogerson (Bynum and Rogerson, 1996) – has been foundational within what is called here Information and Communication Technology Ethics (ICTE). As this article shows, Bynum was an early pioneer in tracking the emergence of Aristotelian virtue ethics as an ethics of flourishing in a budding Information and Computing Ethics, starting with the foundational work of Norbert Wiener (e.g. 1950). Many of Bynum’s observations and arguments have become central in turn, as can be seen here. Kantar and Bynum take up and develop several primary elements of such an ICTE, starting with the manifest challenge raised by Krystyna Gorniak in 1995 – namely, that the global extension of these technologies, most centrally via the then-budding internet, confronts us with the need to develop a global computer ethics. Such an aspiration, of course, immediately confronts us in turn with the challenge of developing an ethics that recognizes the diversity of global cultural norms, values and so on that define cultural and national identities – without falling into what is here called “cultural relativism,” defined as “the claim that ethical values, rules and practices are merely ‘local phenomena’ confined to a specific region or community” (Kantar and Bynum; Ess, 2017, 2020a, 2020b) [1]. Following the lead of Norbert Wiener, the founder of cybernetics as well as “the founding father” of ICTE, Kantar and Bynum turn to Aristotelian virtue ethics – in part, as several of us have also argued, precisely because VE is generally recognized as an ethical framework found more or less globally, i.e. in both Western and non-Western traditions, including indigenous peoples and cultures (Ess, 2008; Vallor, 2016).

In these initial directions, Kantar and Bynum further complement the work of Rafael Capurro, who first introduced the notion of Intercultural Information Ethics (IIE) in 1990 – one shaped precisely by these now-familiar demands and difficulties (Ess, 2020a). While Capurro centers his likewise human-centric approach on the work of Martin Heidegger, Kantar and Bynum take up instead both VE and the deontological emphases central to modern conceptions of the human being – i.e. as first of all a (Kantian) autonomy or freedom that thus grounds a human dignity deserving of respect. “Human-centered” requires important qualifications here, however. First of all, Kantar and Bynum make important moves beyond the more narrowly human-centric ethical approaches of modernity, namely, classical deontology and utilitarianism. They do this by noting that, in parallel to the demand that our ethics achieve a global legitimacy, Wiener further “realized that ethics must be broadened to cover non-human beings (such as robots and cyborgs)” (Kantar and Bynum). This maneuver is grounded in part in Wiener’s claim that “at the deepest physical level, everything in the universe is made of energy and information – even living things, including human beings” (Kantar and Bynum). On the one hand, this leads us in the direction of what some have called a “more than human” ethics (including ethics of care: Tronto, 1993; Puig de la Bellacasa, 2017; Mörtberg, 2021) – one that, most strongly, overcomes a modernist, Cartesian dualism that radically divorced mind from body, human from nature and thereby human from technology, as well as from the larger material (and for some, supernatural) orders. Luciano Floridi’s Philosophy of Information (PI) and Information Ethics (IE) are especially instructive here, as they ground an affirmation of the default goodness and value of all things about us as also information in a carefully developed sense (Floridi, 2011, 2012).

At the same time, this “more than human” move toward fundamental respect and care for all about us implicates a larger and very complex debate clustering around various ways of “decentering the human,” especially with the rise of postmodernist, feminist and post- and de-colonial frameworks from the 1980s forward. From these diverse perspectives emerges a cluster of important critiques of the modern liberal subject as conceived of especially within such classic Enlightenment sources as Kant. Many of these critiques make essential points regarding the limitations and weaknesses of these conceptions – e.g. as excessively masculinist, racist and human-centric: they are reinforced by resonant efforts, such as Floridi’s PI and IE, to extend moral status and worth beyond the human per se so as to encompass, e.g. ICTs, including robots and AIs, as well as the larger natural (and for some, “supernatural”) orders we are inextricably interwoven with. These moves thereby force a central problem, however: how do we take on board these critiques and shifts, in the name of greater equality, emancipation and proper relationships within and beyond the human – moves especially urgent in the face of impending environmental crises – without losing the conceptions of human autonomy and so on that ground modern democratic norms and rights, including precisely the sorts of equality and emancipation undergirding these shifts (Ess, 2021)? Within ICTE, this debate has further played out in the exchange between Capurro’s human-centric IIE and Floridi’s critiques thereof (Ess, 2009b, p. 165).

Fortunately, a number of important middle grounds have emerged here, such as feminist notions of relational autonomy that seek to conjoin something of Enlightenment conceptions of autonomy and affiliated rights and democratic processes with a relational self that takes on board both classical (e.g. Confucian and African) and contemporary understandings as shaped by these larger streams of critique. Specifically, Andrea Veltman conjoins virtue ethics with a Kantian deontological account of autonomy that grounds respect for persons as a primary value. Her account is of particular relevance here as it foregrounds the role of judgment in “eudaimonistic ethics” (2014).

Somewhat similarly, Kantar and Bynum avoid the complete loss of human moral agency by invoking centrally important distinctions between human and machine – not only in terms of full autonomy but also in terms of what is further required for our ethical agency, namely, a specific form of reflective ethical judgment. Kantar and Bynum recall Wiener’s warnings against transferring “to the machine made in his own image the responsibility for his choice of good and evil” (Wiener, 1950, pp. 211-212). Especially as echoed in James Moor’s comment that only human beings are “full ethical agents” (2014), Wiener’s notion of cybernetics – as I have argued elsewhere – at least implicitly points us to phronēsis, understood and developed especially within Aristotle as meaning first a form of prudence or “practical wisdom.” Phronēsis is also the capacity or virtue of reflective judgment – the capacity to discern our way forward, thick in the middle of fine-grained, specific contexts that escape resolution by more determinative judgments that begin with “rule-book” approaches based on given principles (as in deontology and consequentialism). This is implicit within the very term “cybernetics,” which Wiener takes up as a term for self-correcting systems. While he does not say so explicitly, so far as I can find, the reference here is to the cybernetes – a steersman or pilot – as described by Socrates in The Republic:

[…] a first-rate pilot [cybernetes] or physician, for example, feels the difference between the impossibilities and possibilities in his art and attempts the one and lets the others go; and then, too, if he does happen to trip, he is equal to correcting his error (Republic, 360e-361a, Bloom trans.; cf. Republic I, 332e-c; VI, 489c, in Ess, 2007a, p. 15) [2].

The cybernetes thus exemplifies an embodied form of reflective ethical judgment – phronēsis: a first feature of such phronetic judgment is its capacity for self-correction when we recognize that we have made an error in judgment. The cybernetes is, in this way, an exemplar of phronēsis as the capacity for ethical self-correction (Ess, 2019, pp. 8-10).

With this amplification of Wiener in mind, we can further see that Wiener’s and Moor’s warnings against mistaking machine-based decision-making for human ethical judgment are part and parcel of a larger, now well-established tradition within ICTE that argues against conflating what is computationally tractable with especially human ethical judgment – the reflective judgment or phronēsis that is central to VE as both “practical wisdom” and the virtue or capability of judgment per se (Weizenbaum, 1976; Zweig, 2019; Cantwell Smith, 2019).

More broadly, the flourishing ethics unfolded here is further deeply resonant with what several of us understand as a relational turn – not only within ICTE but within the larger domains of social science and so on (for an early discussion, see Ess, 2009a, p. 110ff.). This relational turn is apparent here first of all in the affirmation of Aristotelian virtue ethics. While Kantar and Bynum do not say so explicitly, underlying Aristotelian as well as other forms of VE (Buddhist, Confucian and so on) is a strong sense of the self as certainly individual but also relational – i.e. inextricably interwoven with Others, starting with the Others in human societies. So Kantar and Bynum note that “To flourish, people need to be part of a community.” This relationality is more fully taken up, e.g. in the work of David Gunkel (2018) and Mark Coeckelbergh (2020) – in part to argue for the sorts of respect for non-human devices, starting with robots, foregrounded early on by Wiener.

At the same time, we will see that this relationality also grounds important critiques of VE.

Virtue ethics: critiques

Kantar and Bynum’s account of a general ethics of flourishing is in good company. In addition to any number of prominent voices exploring the role of virtue ethics vis-à-vis contemporary technologies (Spiekermann, 2016; Vallor, 2016; Coeckelbergh, 2020; Capuccio et al., 2021, among others), the IEEE has likewise centered its work on “ethically aligned design” for autonomous and intelligent systems squarely on Aristotelian VE (2019). While virtue ethics is not explicitly named, important philosophical and policy documents surrounding the development of EU approaches to AI likewise foreground virtue ethics’ aims of “good lives,” “flourishing” and “well-being” (Floridi et al., 2018).

But as with all other ethical frameworks and approaches, virtue ethics has its limits and critics. We now turn to these as taken up in the papers by Bastiaan Vanacker and Morten Bay.

Bastiaan Vanacker’s “Virtue Ethics, Situationism and Casuistry: Toward a Digital Ethics Beyond Exemplars” begins with a compact summary of virtue ethics as taken up in digital ethics, adding important detail to the discussion above. This is the background to his review of situationist critiques of VE. Contra the central emphasis in VE on developing virtues as capabilities and habits central to good lives of flourishing, Vanacker notes that “experiments in the field of moral psychology give us reason to doubt the existence of robust character traits.” Broadly, situationists argue that, for most of us at least, our ostensible moral actions are shaped by any number of “ethically irrelevant situational factors” rather than by the virtues – including phronēsis, it would seem. In addition, situationists critique VE as being “individually-based” – a critique that several of us, as Vanacker observes, would quarrel with (starting with the emphasis on relationality that shapes both ancient and modern VE approaches, as discussed above). However this may be, Vanacker collects a number of important objections to the role of exemplars – the phronimoi in Aristotle or the junzi in Confucius – that VE holds up as examples to guide our judgments about what to do in specific situations (Ess, 2007b). In particular, Vanacker makes clear that “the novel nature of the problems presented by digital technologies only exacerbates these objections” – i.e. given the often distinctively new contexts and possibilities afforded by digital technologies, it is not necessarily clear just who or what the exemplars in these new domains might be.

Vanacker takes up these issues by reimagining what moral exemplars might look like in digital contexts. He starts with an important counter to the critique of VE as individualist, i.e. that the virtues are learned within a “community of peers” – in the terms used above, among a community of relational beings. Drawing on Philip Kitcher’s pragmatic naturalism, Vanacker then links the development of ethics within a community to “ideal conversations” – and further points to the role of casuistics as an approach that can resolve the problem of an ostensible lack of digital moral exemplars. Most briefly, casuistics can be understood as the careful and systematic evaluation of both similarities and differences between paradigm cases and the novel ones we now confront.

Appropriately, this brings us directly to the AoIR IRE guidelines – as both the result of a now 20-year community conversation and one deeply shaped by Heidi McKee and James Porter’s seminal contribution, The Ethics of Internet Research: A Rhetorical, Case-Based Process (2009). A central problem remains, however: while IRE 3.0 explicitly endorses such a casuistic approach and offers an example, this hardly compares with, say, the legal field, which can offer “a glut of case law” (Vanacker). Indeed, Vanacker puts his finger on what has been a central challenge to the development of IRE from the beginning: numerous efforts to collect a database, much less a taxonomy, of cases have been consistently stymied on two fronts. One, ethics review boards, IRBs and their equivalents around the globe address cases “behind closed doors” (Vanacker) – and, I can add, with any number of reasons not to make cases public, starting with protecting the researchers and/or the institution (see also Kaufmann et al.’s comments on this below). Two, as Vanacker observes, “Nor is there a lot to be gained for researchers to voluntarily step forward and talk about that one time they practiced poor research ethics.” In the end, casuistics here is more of a “theoretical possibility” than a practical one: “As long as these hurdles are not cleared, moving from articulation of best practices to a genuine casuistic approach will remain an elusive goal” (Vanacker).

At the same time, Vanacker’s analysis points to the final two articles in this collection: each offers such critical exemplars – of practices for meeting new methodological challenges in research (Kaufmann et al.) and for meeting especially the personal challenges of conducting sensitive research as a participant-observer (Hård af Segerstad).

As his title, “Four Challenges to Confucian Virtue Ethics in Technology,” makes clear, Morten Bay develops four central criticisms of VE as resting on Confucian traditions. Two sources are primary here – namely, Shannon Vallor (2016) and the work of Pak-Hang Wong (2012; see also Wong, 2020). While broadly endorsing the move toward a Confucian VE, Bay raises these four challenges in hopes of helping the approach become more robust and applicable. To begin with, Bay argues that Confucian VE faces central difficulties in becoming an applied ethics that can indeed provide needed resolutions to the ethical challenges raised by technologies. Second, Bay reiterates the critique we have seen of VE more generally as individually-based. Insofar as that is the case, Confucian VE is rendered ill-suited to deal with contemporary problems, as “These rarely concern moral choices made by individuals, but rather by corporations, state agencies, platforms and other human collectives, organizations and groups” (Bay). This leads to a damning failure on the part of Confucian VE: “self-cultivation is not sufficient to ensure ethical behavior on behalf of a collective,” whereas “technology ethics must address collective, individual and non-human acts to be fully applied” (Bay).

Bay elaborates this critique by taking up several candidate virtues from Shannon Vallor, arguing that while these may work well with individuals, it is less clear how to apply them to collectives, starting with corporations. Bay acknowledges here Wong’s emphasis on relationality and the virtues of harmony, ritual and so on. Following a careful examination of multiple interpretations and understandings of li (ritual or ritual propriety), Bay argues that Wong’s interpretation may miss the mark in important ways – but in ways that nonetheless allow Wong to argue importantly for equal treatment in healthcare, for example. However, this raises, in turn, a central problem in attempting to take up Confucian VE – especially in a Western context that insists on human autonomy, rights to respect and thereby equality as part and parcel of modern liberal-democratic societies. As Bay concisely notes, “Confucianism is anything but egalitarian”: on the contrary, “it encourages adherence to previously established hierarchies and submission to one’s superiors through several virtues,” including li as well as the foundational Confucian virtue of filial piety. Bay provides numerous examples of how these hierarchical structures play out both in political terms – e.g. the crushing of the Hong Kong demonstrators – and vis-à-vis what is, in my terms, a distinctively Western tradition of conscientious objection: “going against your superior, even respectfully, even to do the right thing is, in fact, seen as unethical in Confucianism” (Bay). Bay rightly ties this capacity for disobedience to such important events as Google employees forcing Google to break off AI work for the US military. I would add here that such disobedience is central to Western conceptions of being a free and responsible human being – from Antigone and Socrates through modern movements demanding emancipation and equality in the name of democracy. This includes specific rights to resist, disobey and contest – e.g. accusations against us in a court of law – as central to modern democracy and law (Hildebrandt, 2015, p. 10; Ess, 2021). At the same time, the notions of relational autonomy discussed above intend to sustain these rights as well.

Bay’s third critique addresses what he characterizes as the “epistemic overconfidence” within the “probabilistic utilitarianism” typical of the tech giants’ approaches – here, in the example of Google’s statements on AI development. This section is doubly valuable as it explores and makes clear two classic criticisms of utilitarianism more generally. Briefly, as a sort of ethical “cost-benefit” approach that seeks to “balance” the good of the many against the costs of the few, everything in utilitarianism depends on the possibility of predicting outcomes – both positive and negative. However, as Bay queries here, how is the good of the many – “benefits” in Google’s discourse – to be determined in the first place? Moreover, predictions of the future are notoriously limited and often catastrophically wrong. Bay draws on Karl Popper’s “Oedipus Effect” to develop a particularly sophisticated critique of this latter difficulty. Last but not least on this point, this critique reiterates the limitations of cultivating individual virtues vis-à-vis corporate and collective utilitarian decision-making.

Finally, Bay’s fourth challenge argues that Confucian VE may actually increase the “epistemic overconfidence” characterizing the utilitarian approaches he has just criticized. This again depends on a careful exposition of the possible meanings of a central Confucian virtue – namely, zhi (Wade-Giles: chih), initially translated as “wisdom.” In both Vallor’s account and more specialist explorations by Confucian scholars – again read through a Popperian lens – Bay finds that Confucians risk being “as epistemically overconfident as the probabilistic utilitarianist technology corporations.” While these four challenges are clearly formidable, none of this is to say that Confucian VE is thereby argued out of court as far as technology ethics is concerned. On the contrary, Bay helpfully concludes by suggesting that there may be additional resources within Confucian traditions that can help move Confucian VE beyond these primary hurdles. In addition, Kaufmann et al. point out below that the virtue of humility is required in interdisciplinary work: this virtue would help counter the epistemic overconfidence Bay rightly warns against.

From virtue ethics to the public good

In their “Toward a Political Theory of Data Justice: A Public Good Perspective,” Chi Kwok and Ngai Keung Chan turn us toward a more specific set of ethical issues grounded in concerns for the public good. They begin by highlighting the threats and risks in an era of Big Data and AI, most especially regarding democracies. As any number of scholars and critics have pointed out, the technologies that gather and process data about us are opaque and not open to inspection, much less critique or contest (Hildebrandt, 2015). Hence:

Democratic citizens simply cannot monitor and enforce political accountability to what they do not comprehend. Importantly, the collection and analytics of data could draw worrying inferences about data subjects […] and potentially pose a serious threat to democracy (Kwok and Chan).

To be sure, states’ collection and use of data may offer important benefits – but again, as numerous scholars and critics have explored, these technologies likewise enhance the state’s “ability to abuse its power which threatens the privacy and freedom of democratic citizens” (Kwok and Chan). The authors argue here that big data may serve the public good – but only by meeting three principles, namely:

  1. The principle of transparency and accountability.

  2. The principle of fairness.

  3. The principle of democratic legitimacy.

To do this, they take up three primary theories of the public good – market failure, basic rights and democratic theory – to develop a normative framework for states’ use of big data as then articulated in their three principles.

First, the market failure approach argues that free markets fail to provide public goods that could be considered vital – such as public safety, education and health care – but are not profitable: hence, the state is obliged to provide these. Kwok and Chan use the example of real-time traffic data that can be used to enhance public safety: there is no prima facie incentive for corporations to provide this data if doing so is not profitable, thereby potentially restricting widespread access to arguably vital information. As several examples powerfully illustrate, “publicly beneficial data, when being collected, used and sold by business corporations, are more likely to result in the abuse of data and infringement of privacy” (Kwok and Chan). Add to this the observation that large corporations actively engage in lobbying and other activities to minimize governments’ ability to shape and regulate their practices, and the upshot is that:

If the responsibility for the provision of publicly beneficial big data falls entirely on the market, it is likely that it will result in a situation in which these providers become a greater threat to both the privacy of individual citizens and the core values of liberal democracy (Kwok and Chan).

Hence, the state is again the preferred source of publicly beneficial data – most especially within democratic contexts.

Second, the basic rights approach emphasizes that there are basic goods, starting with physical security and basic subsistence, that the state is obliged to provide its citizens on a fair and equal basis in the name of rights and the pursuit of a good life. If the distribution of these goods is to be guided by fundamental norms of fairness and equality, then, again, the manifest failure of the market to do so makes the “normative case for the state to produce and distribute such goods” (Kwok and Chan). The authors offer the example here of pandemic and public health data – which at the same time recall the dark side of state surveillance. They hope that in democratic societies, at least, actively engaged citizens can organize to protest and prevent states from overstepping their bounds in ways that violate basic rights.

These points lead directly to a “democratic approach” that foregrounds public goods, including those “that can foster and strengthen the democratic process to better account for what goods democratic citizens collectively want […]” (Kwok and Chan). Access to relevant data and information is clearly vital to such democratic processes. The authors recall here optimistic examples of “e-democracy,” such as online town meetings that promise to increase the diversity of voices and engagement in democratic deliberation. Here they tap into classic conceptions, frequently grounded in Habermas (1994) – conceptions, we must note, that have come under increasing critique, especially over the past several years (Ess, 2018).

As an interlude toward the development of their three principles, Kwok and Chan argue for what in research ethics has been articulated as a principle of “data minimization” (franzke et al., 2020, p. 20). Turning to their three guiding principles for regulating the state’s collection and use of data: the first, “the principle of transparency and accountability,” seeks to address a series of challenges, including those we have seen above regarding the central importance of citizens’ rights to challenge and contest claims made about and against them as data subjects – in their example, when being denied a loan. Echoing a classic principle of Enlightenment democracy, this requires in turn “an active contestatory civil society where misbehaviors of the state would be publicly exposed” (Kwok and Chan). The authors see hope in these directions especially in the activities of NGOs, as well as in, e.g. possible applications of the GDPR.

Their second principle, of fairness, seeks to address the well-known problems of built-in bias in AI facial recognition technologies. Further, echoing the importance of citizen engagement announced above, fairness will require educating and engaging citizens “in the process of developing data practices” (Kwok and Chan). We will return to this theme of larger citizen engagement in our final two articles as well.

Finally, the third principle, of democratic legitimacy, seeks to ensure democratic processes for authorizing the collection and use of data – and also thereby “the massive nurturing of critical audiences’ data literacy” – again, in keeping with classic Enlightenment notions of the engaged citizen as essential to a functioning democracy. The definition of citizen data literacy offered here is especially helpful.

In her meta-study of some 85 ethical guidelines proposed for emerging AI, aline shakti franzke observed that in the vast majority of cases, the guidelines settled on a relatively standard core of values and norms – including ones we see here, such as transparency and fairness. At the same time, however, franzke noted that very few of these guidelines went beneath the surface to explore and establish more foundational norms and principles that could ground these values (franzke, 2020). To my knowledge, at least, Kwok and Chan’s approach, as it grounds their principles for Big Data in three significant political theories, is thus distinctive and exceptionally substantive.

(Internet) research ethics

Our final articles are “classic” AoIR explorations of research ethics as confronting the often novel challenges evoked by new technologies. Both importantly extend this literature – while at the same time continuing and expanding on the focus on VE established in the opening three articles.

First, Katja Kaufmann, Tabea Bork-Hüffer, Niklas Gudowsky-Blatakes, Marjo Rauhala and Martin Rutzinger offer “Ethical Challenges of Researching Emergent Socio-Material-Technological Phenomena: Insights from an Interdisciplinary Mixed Methods Project Using Mobile Eye-Tracking.” A first distinguishing feature of their project is the use of Augmented Reality along with eye-tracking technologies in a public experimental setting, namely, two Austrian parks. These relatively new technologies are of particular interest as they “increase the complexity of people’s perception of the world”: at the same time, these complexities require multiple methods and disciplines, leading to the authors’ development of a “mobile mixed method” approach to researching “the effects of digital media on the affective-emotional experience of public spaces” (Kaufmann et al.).

The authors begin with the “process- and context-oriented approach” characterized in the AoIR IRE 3.0 (franzke et al., 2020, p. 4) – one closely allied, we can note, with Vanacker’s exploration of casuistics in the development of this approach. At the same time, however, because of both the novelty of the technologies and the mix of methods and disciplines involved, there is little to be found regarding the ethics of their mixed methods research (MMR). Accordingly, they offer a range of ethical reflections on their project with the aim of addressing:

  • “MMR research ethics.

  • Research ethics of methodological development.

  • Research ethics in projects that use technical instruments” (Kaufmann et al.).

Following a careful description of their research project, the authors turn to the multiple ethical matters and reflections that arose. A first point is that while their project received approval from the relevant ethical review committee, the committee “followed the typical anticipatory, prospective framework of review to prevent, among others, risk of litigation and did not foresee follow-up and/or continued guidance after approval” (Kaufmann et al.). Specifically, they encounter the standard problem of such frameworks, especially as these rely on general codes that “may not sufficiently cover the actual ethical challenges or help identify potential new ethical issues as they arise in doing research and their relevance for actual research practice have been questioned” (Kaufmann et al.).

The authors explore these new ethical issues in three categories:

  • In the practical implementation of the study design.

  • Concerning data processing and management.

  • With a view to the societal implications of developing instruments to track and understand human practices (Kaufmann et al.).

The first category explores challenges to the standard Human Subjects requirements to protect research subjects – on the basis of respect for persons (as autonomous beings first of all) – by way of informed consent. Suffice it to say here that the experiences described powerfully illustrate the challenges and inadequacies of acquiring such consent as a single step that can then be checked off. Similarly, attempting to meet standard requirements to protect privacy becomes fluid and complex when making “public use of new technical instruments” (Kaufmann et al.). Last but not least, the project’s interdisciplinary mix encounters significant ethical challenges evoked by its “complex problem-oriented research settings” (Kaufmann et al.). This last context is of particular interest here, as the authors find that five “ethical habits” identified by Balsamo and Mitcham (2010) – what Kaufmann et al. “prefer to call virtues” – proved to be critical:

  • [The] generosity to acknowledge each other’s work.

  • Confidence in the importance of each other’s contributions.

  • Humility to recognize partiality of one’s knowledge.

  • Flexibility to change one’s perspective.

  • Integrity to exercise responsible participation to build trust (Kaufmann et al.).

I suggest that especially the virtue of epistemological humility (my term), the third listed above, would help counter the “epistemic overconfidence” Bay properly criticizes in both utilitarian and Confucian ethics.

These authors’ exploration of the “Ethical aspects of data processing and management” again nicely intersects with the AoIR guidelines in terms of the need for data minimization. However, the authors go well beyond this to examine in fine detail a number of additional considerations required by their distinctive mix of methods and of data collected across the various stages of the project. Echoing the AoIR emphasis on the necessity of attending to these ethical matters throughout the project, not simply at its beginning, they observe that:

For dealing ethically with data, we need to consider the whole data life cycle including collection, processing and interpretation, identifying the involved persons interacting with the data directly and considering also the persons indirectly involved (Kaufmann et al.).

Last but not least, “Societal implications of developing technologies and instruments” include attention to central issues of protecting privacy and autonomy vis-à-vis surveillance when taking up the sorts of instruments and data-recording techniques used here. Echoing Kwok and Chan, Kaufmann et al. further endorse a growing emphasis on engaging citizens and “the public” more broadly in the correspondingly dialogical efforts to resolve the ethical issues unfolding in these new contexts. In their concluding remarks, they further echo the point made by Vanacker as they identify the need “for researchers to openly share their experiences in doing ethics when engaging in method development” – despite multiple obstacles, including “existing funding schemes and an increasingly competitive academic system.”

In our final article, “On the Complexities of Studying Sensitive Communities Online as a Researcher-Participant,” Ylva Hård af Segerstad leads us into the “methodological, ethical and emotional challenges of studying sensitive and vulnerable communities online from the perspective of simultaneously being researcher and research subject.” These challenges are especially daunting: Hård af Segerstad is a parent who has endured the anguishing loss of her young daughter – and who thereby both participates in and studies a closed Facebook group for bereaved parents. She urges “bringing the personal and autoethnographic voice as valuable tools for the nuanced and situated understanding of complex and sensitive cases.” She points out that this approach will thereby enhance the AoIR IRE guidelines, first as these are shaped by Annette Markham’s central insight that methods and ethics are inextricably interwoven across the course of a research project (2006): Hård af Segerstad specifically adds here the “emotional aspects of doing sensitive research.”

Hård af Segerstad begins with a review of the research that she and colleagues have published on this group as a way of providing important details and context. Key elements here begin with the deeply taboo character of grief and death in general in Scandinavia – only amplified when it comes to the loss of a child. Moreover, “members in the community mostly use the group when they are in despair” – i.e. both the researcher-participant and her subjects are in their most vulnerable, personal and sensitive moments of anguish and grief. Further, the group only works because of mutual trust based on its being a closed group whose membership is carefully vetted. Any violation of this mutual trust and understanding – e.g. by revealing, inadvertently or otherwise, such personal and sensitive elements in publications – would be not simply a breach of standard research ethics but also a potential personal conflict and a risk of the researcher’s losing access. Last but not least: in keeping with Kaufmann et al. as well as the AoIR guidelines more broadly, while some challenges and potential harms can be anticipated at the outset, the unexpected ones have a way of showing up as well. Such research thus “heightens the demands of the ethical responsibility of the researcher and the research process” – while simultaneously posing “high demands on the researchers’ ethical as well as emotional capacities and responsibilities.” These costs are warranted, Hård af Segerstad suggests, as “Centralizing the researcher's experience and body in the study can provide detail and nuance not available through other methods of engagement with participants.” Specifically, “My personal experiences of losing a child bring authenticity and genuineness to the study,” especially as, “allowing myself to be self-reflexive and vulnerable, I am in a better position to resonate and meaningfully reverberate my audience.”

Hård af Segerstad’s invocation of her body and experience should remind us of the cybernetes, who seeks to exercise phronēsis as inextricably interwoven with embodiment as well. At the same time, as she further notes, all of this seems to run counter to especially natural scientific notions of objectivity. I would also argue that this approach thus instantiates larger recognitions in the natural and then social sciences and humanities that the nineteenth-century positivist distinction between “objectivity” and “subjectivity” is at best a heuristic ideal, not a reality for human beings as we understand them today (Ess, 2019).

Hård af Segerstad’s exploration of the methodological and ethical issues of her research begins with the concern that discussion of these events can “set off” an unwanted recurrence of grief – in both the researcher and the researched: this is problematic from the outset in efforts to acquire needed informed consent. This leads to a consideration of just what sorts of (highly) sensitive data may – and may not – be ethically used. A particularly important example here is Hård af Segerstad’s decision not to scrape data in a closed group – an instance of data minimization. This decision further reflects the deontological respect for users’ expectations of privacy identified in the AoIR guidelines, even if those expectations are not technically justified; Hård af Segerstad further justifies this decision in terms of the “contextual integrity” of information shared in these spaces – i.e. information shared between multiple persons and thereby not information that somehow belongs to a single individual (Nissenbaum, 2010).

A primary cost of attempting to sustain the researcher-subject distinction, as Hård af Segerstad points out, is that in doing so she can become cut off from the community resources needed to sustain her in her own grief. As she later notes, at times in her work she simply broke down. A further complication – imposed by the default assumptions guiding Human Subjects protections – is that we must be especially careful to protect the identity of individuals by way of anonymization: but this may go directly contrary to the wishes of those participants who want their stories to be told in a public way – perhaps in hopes that this might help make desperately needed changes in public attitudes toward bereaved parents.

We are now manifestly very far away from rule-book, deductive approaches to research ethics that lead us to believe that we can somehow meet all of our ethical requirements at the outset of a project. Rather, what is called for here (again) is “flexibility, adaptivity and mindfulness of the researcher” – practices we might dare to call virtues, including, at least implicitly, phronēsis as the sort of reflective judgment needed to navigate these fluid, highly context-sensitive waters. At the same time, Hård af Segerstad emphasizes the “importance of discussing and questioning theoretical, methodological and ethical developments for studying everyday life practices online,” starting with default assumptions regarding anonymity and informed consent.

Concluding remarks

In short, the dialogical and process approach to research ethics developed across the AoIR IRE guidelines continues. In particular, both individually and collectively, these articles help dramatically expand the attention to virtue ethics in both technology ethics and research ethics – including central critiques that will require response and revision to bring these forward, starting with Kantar and Bynum’s vision of a “Flourishing Ethics.” At the same time, Kaufmann et al.’s foregrounding of five virtues requisite for interdisciplinary work – including humility and generosity with one another – makes clear that VE is not “just” a matter for ICTE in general but for researchers in particular as well. I would also suggest here that both Kaufmann et al. and Hård af Segerstad show us judgment – phronēsis – at the center of their reflections, i.e. precisely the reflective sort of cybernetic, self-correcting judgment that is essential more broadly to good lives of flourishing. Such self-correction, I propose, will include taking on board Bay’s and Vanacker’s formidable critiques of VE – ideally, as Bay suggests, resulting in more robust and applicable forms of VE. Specifically, Kaufmann et al. and Hård af Segerstad provide the sorts of exemplars Vanacker has identified as central to a casuistic approach (Zimmer and Kinder-Kurlanda, 2017). Moreover, Kaufmann et al.’s virtue of humility, as essential for interdisciplinary teams, may help counter the “epistemic overconfidence” Bay sharply criticizes. In turn, Kaufmann et al.’s attention to “the societal implications of developing instruments to track and understand human practices,” along with the contributions here from Vanacker, Bay and Kwok and Chan, perhaps most directly contributes to Simon Rogerson’s original aim to foster dialogue regarding “the wider social and ethical issues related to the planning, development, implementation and use of new media and information and communication technologies” (2019).

Of particular importance here is something of a call for a new Enlightenment, starting in Kwok and Chan and then echoed in Kaufmann et al. and Hård af Segerstad. Such an Enlightenment would foster greater citizen engagement with the ethics of emerging technologies, based on a robust media and data literacy and a dialogical approach to ethics emphasizing process beyond rule-book approaches.

And, happily, the dialogues continue – starting with a new round of cultivating the next set of ethical and social reflections at the 2021 AoIR conference. I trust that this special issue will prove valuable and fruitful in opening up further conversations between the AoIR and JICES communities, and I greatly look forward to the expanding dialogues and debates.

References

Balsamo, A. and Mitcham, C. (2010), “Interdisciplinarity in ethics and the ethics of interdisciplinarity”, in Frodeman, R. (Ed.), The Oxford Handbook of Interdisciplinarity, Oxford University Press, Oxford, pp. 259-272.

Bynum, T.W. and Rogerson, S. (Eds), (1996), “Global information ethics”, Science and Engineering Ethics, Vol. 2 No. 2.

Cantwell Smith, B. (2019), The Promise of Artificial Intelligence: Reckoning and Judgment, MIT Press, Cambridge, MA.

Coeckelbergh, M. (2020), “How to use virtue ethics for thinking about the moral standing of social robots: a relational interpretation in terms of practices, habits and performance”, International Journal of Social Robotics, Vol. 13 No. 1, pp. 31-40, doi: 10.1007/s12369-020-00707-z.

Ess, C. (2007a), “Cybernetic pluralism in an emerging global information and computing ethics”, International Review of Information Ethics, Vol. 7 No. 9, available at: www.i-r-i-e.net/inhalt/007/11-ess.pdf

Ess, C. (2007b), “Liberal arts and distance education: can Socratic virtue (arete) and Confucius’ exemplary person (junzi) be taught online?”, in Pegrum, M. and Lockard, J. (Eds), Brave New Classrooms: Educational Democracy and the Internet, Peter Lang, New York, NY, pp. 189-212.

Ess, C. (2008), “Culture and global networks: hope for a global ethics?”, in van den Hoven, J. and Weckert, J. (Eds), Information Technology and Moral Philosophy, Cambridge University Press, Cambridge, pp. 195-225.

Ess, C. (2009a), “The embodied self in a digital age: possibilities, risks and prospects for a pluralistic (democratic/liberal) future?”, Nordicom Information, Vol. 32 No. 2, pp. 105-118.

Ess, C. (2009b), “Floridi’s philosophy of information and information ethics: current perspectives, future directions”, The Information Society, Vol. 25 No. 3, pp. 159-168.

Ess, C. (2017), “What’s ‘culture’ got to do with it? A (personal) review of CATaC (cultural attitudes towards technology and communication), 1998-2014”, in Goggin, G. and McLelland, M. (Eds), Routledge Companion to Global Internet Histories, Routledge, London, pp. 34-48.

Ess, C. (2018), “Democracy and the internet: a retrospective”, Javnost - The Public, Vol. 25 Nos 1/2, pp. 93-101, doi: 10.1080/13183222.2017.1418820.

Ess, C. (2019), “Ethics and mediatization: subjectivity, judgment (phronēsis) and meta-theoretical coherence?”, in Eberwein, T., Karmasin, M., Krotz, F. and Rath, M. (Eds), Responsibility and Resistance: Ethics in Mediatized Worlds, Springer, Berlin, pp. 71-90, doi: 10.1007/978-3-658-26212-9_5.

Ess, C. (2020a), “Interpretative pros hen pluralism: from computer-mediated colonization to a pluralistic intercultural digital ethics”, Philosophy and Technology, Vol. 33 No. 4, pp. 551-569, doi: 10.1007/s13347-020-00412-9.

Ess, C. (2020b), “Viewpoint: at the intersections of information, computing and internet research”, Journal of Information, Communication and Ethics in Society, Vol. 18 No. 1, pp. 1-9, doi: 10.1108/JICES-01-2020-0001.

Ess, C. (2021), “Towards an existential and emancipatory ethic of technology”, in Vallor, S. (Ed.), Oxford Handbook of Philosophy and Technology, Online Publication Date: Jan 2021, doi: 10.1093/oxfordhb/9780190851187.013.35.

Ess, C. and the AoIR ethics working committee (2002), “Ethical decision-making and internet research: recommendations from the AoIR ethics working committee”, available at: www.aoir.org/reports/ethics.pdf

Floridi, L. (2011), The Philosophy of Information, Oxford University Press, Oxford.

Floridi, L. (2012), Information Ethics, Oxford University Press, Oxford.

franzke, A.S. (2020), “A systematic literature review of ethical code of conducts in the field of internet research”, Panel presentation, AoIR annual conference, October 29.

franzke, A.S., Bechmann, A., Zimmer, M. and Ess, C. and the Association of Internet Researchers (2020), “Internet research: ethical guidelines 3.0”, available at: https://aoir.org/reports/ethics3.pdf

Gunkel, D. (2018), Robot Rights, MIT Press, Cambridge, MA.

Habermas, J. (1994), “Three normative models of democracy”, Constellations, Vol. 1 No. 1, pp. 1-10.

Hildebrandt, M. (2015), Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology, Edward Elgar, Cheltenham.

Markham, A. and Buchanan, E. (2012), “Ethical decision-making and internet research: recommendations from the AoIR ethics working committee (version 2.0)”, available at: www.aoir.org/reports/ethics2.pdf

Mörtberg, C. (2021), “Thinking-with care: transition from recovery to repairment”, Keynote lecture, IFIP WG9.8 Workshop “Work, Place, Mobility and Embodiment: ‘Recovery’ or Repairment in a Covid and Eventually Post-Covid World?”, April 16, Linköping, Sweden.

Nissenbaum, H. (2010), Privacy in Context: Technology, Policy and the Integrity of Social Life, Stanford University Press, Stanford, CA.

Puig de la Bellacasa, M. (2017), Matters of Care: Speculative Ethics in More than Human Worlds, University of Minnesota Press, Minneapolis, MN.

Rogerson, S. (2019), “Email to the AoIR executive committee”, January 25.

Spiekermann, S. (2016), Ethical IT Innovation: A Value-Based System Design Approach, Taylor and Francis, New York, NY.

Tronto, J.C. (1993), Moral Boundaries: A Political Argument for an Ethic of Care, Routledge, London.

Vallor, S. (2016), Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, MIT Press, Cambridge, MA.

Weizenbaum, J. (1976), Computer Power and Human Reason: From Judgment to Calculation, W. H. Freeman, New York, NY.

Wiener, N. (1950), The Human Use of Human Beings: Cybernetics and Society, Houghton Mifflin, Boston, MA (2nd ed., 1954, Doubleday Anchor, New York, NY).

Wong, P.H. (2012), “Dao, harmony and personhood: towards a confucian ethics of technology”, Philosophy and Technology, Vol. 25 No. 1, pp. 67-86.

Wong, P.H. (2020), “Why confucianism matters in ethics of technology”, in Vallor, S. (Ed.), Oxford Handbook of Philosophy and Technology, Online Publication Date: Nov 2020, doi: 10.1093/oxfordhb/9780190851187.013.36.

Zimmer, M. and Kinder-Kurlanda, K. (Eds), (2017), Internet Research Ethics for the Social Age: New Challenges, Cases and Contexts, Peter Lang, Berlin.

Zweig, K. (2019), Ein Algorithmus hat kein Taktgefühl: Wo Künstliche Intelligenz sich irrt, warum uns das betrifft und was wir dagegen tun können, Heyne Verlag, München.

Further reading

Capurro, R. (1990), “Towards an information ecology”, in Wormell, I. (Ed.), Information Quality: Definitions and Dimensions, Taylor Graham, London, pp. 122-139.

IEEE (2019), The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, 1st ed., available at: https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html

Markham, A. (2006), “Ethic as method, method as ethic: a case for reflexivity in qualitative ICT research”, Journal of Information Ethics, Vol. 15 No. 2, pp. 37-54.

McKee, H.A. and Porter, J.E. (2009), The Ethics of Internet Research: A Rhetorical, Case-Based Process, Peter Lang, New York, NY.

Moor, J.H. (2014), “Four kinds of ethical robots”, Philosophy Now, Vol. 72, pp. 12-14. March/April (online).

Phelan, S. and Dahlberg, L. (2011), Discourse Theory and Critical Media Politics, Palgrave Macmillan, London.

Veltman, A. (2014), “Autonomy and oppression at work”, in Veltman, A. and Piper, M. (Eds), Autonomy, Oppression and Gender, Oxford University Press, Oxford, pp. 280-300.

Acknowledgements

The papers collected here represent the collective work of many, many hands. Beyond the authors themselves: numerous reviewers for both the AoIR panels and JICES offered initial critique and suggestions for improvement, as did several participants in the AoIR presentations themselves. I especially wish to express the deepest gratitude to aline shakti franzke for her manifold contributions essential to the development of the work presented here and to Simon Rogerson for innumerable suggestions and invaluable support throughout this inaugural collaboration.

Corresponding author

Charles Ess is the corresponding author and can be contacted at: c.m.ess@media.uio.no
