Research Is Not a Private Matter
Ethical Issues in Covert, Security and Surveillance Research
ISBN: 978-1-80262-414-4, eISBN: 978-1-80262-411-3
ISSN: 2398-6018
Publication date: 9 December 2021
Abstract
The received wisdom underlying many guides to ethical research is that information is private, and research is consequently seen as a trespass on the private sphere. Privacy demands control; control requires consent; consent protects privacy. This is not wrong in every case, but it is over-generalised. The distorted perspective leads to some striking misinterpretations of the rights of research participants, and the duties of researchers. Privacy is not the same thing as data protection; consent is not adequate as a defence of privacy; seeking consent is not always required or appropriate. Beyond that, the misinterpretation can lead to conduct which is unethical, limiting the scope of research activity, obstructing the flow of information in a free society, and failing to recognise what researchers’ real duties are.
Citation
Spicker, P. (2021), "Research Is Not a Private Matter", Iphofen, R. and O'Mathúna, D. (Ed.) Ethical Issues in Covert, Security and Surveillance Research (Advances in Research Ethics and Integrity, Vol. 8), Emerald Publishing Limited, Leeds, pp. 29-40. https://doi.org/10.1108/S2398-601820210000008004
Publisher
Emerald Publishing Limited
Copyright © 2022 Paul Spicker
License
These works are published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of these works (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Many of the ethical rules relating to research begin with a presumption that the information that is being obtained is, in some sense, private. The Australian National Health and Medical Research Council’s guidance explains:
Individuals have a sphere of life from which they should be able to exclude any intrusion …. A major application of the concept of privacy is information privacy: the interest of a person in controlling access to and use of any information personal to that person. (ANHMRC, 1999, p. 52)
In later advice, they describe privacy as ‘a domain within which individuals and groups are entitled to be free from the scrutiny of others’ and state that ‘An ethically defensible plan for research … should … include measures to protect the privacy desired by participants’ (ANHMRC, 2018, pp. 102 and 50). The central test is that people decide for themselves what they are prepared to reveal. If the information is under the control of the research participants, it can only be used by a researcher if the research participant gives consent. From this we go to the idea that all research concerning human beings must be subject to the free, fully informed consent of the people concerned.
There are lots of things wrong with this account. It leads to some striking misinterpretations of the rights of research participants, and the duties of researchers; but beyond that, it can lead to conduct which is frankly unethical.
Privacy
John Stuart Mill wrote of privacy as a ‘reserved territory’.
There is a part of the life of every person who has come to years of discretion, within which the individuality of that person ought to reign uncontrolled either by any other individual or by the public collectively. (Mill, 1848, chapter 11)
Some ideas of privacy seem to work on the principle that people’s affairs are nobody else’s business until the person in question says otherwise – a position which holds, not that no one else should interfere, but that the person must be in control of the process (Rössler, 2005, p. 72). Within that model, people can give up their privacy; they choose what to reveal; they can sell their information. That seems, however, to conceive of privacy as a sort of ownership. Judges Warren and Brandeis (1890), who are commonly credited with the introduction of the principle of privacy into US law, took a very different view:
The principle which protects personal writings and all other personal productions … is in reality not the principle of private property, but of an inviolate personality …. The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity so that solitude and privacy have become more essential to the individual …. (p. 205)
No one, the Universal Declaration of Human Rights states, ‘shall be subjected to arbitrary interference with his privacy, family, home or correspondence’ (United Nations, 1948). This is about respect for persons. It is not framed in terms of having a say; it is not about control or choice, though it could well enhance both. The suggestion that this principle can be breached with consent is an excuse, used to legitimate the intrusion into personal space that the principle of privacy is supposed to prevent. If one accepts that information is truly private and personal, research – or any other activity that violates the reserved territory – ought to minimise intrusion and accept that some things cannot be examined.
There are exceptions and limits to this understanding of privacy, but I will come to those later. The invasion of privacy is objectionable both in its own right, and because it is liable directly to affect how people live – what they can do, where they can go, and how they should act. That is the case for maintaining confidentiality, and anonymity in circumstances where it helps protect the subject of research from identification. Arguments about privacy have tended to get lost somewhere in arguments about data protection and control, but the test of privacy is quite different. It is about the preservation of an ‘inviolate personality’, and the sanctity of personal data has little directly to do with that.
Information Privacy: Consent and Control
The second stage of the argument is about information privacy, ‘the claim of individuals, groups or institutions to determine for themselves when, how and to what extent information about them is communicated to others’. (Westin, cited in Kimmel, 1988). Information privacy is usually matched with the principle of informed consent: that people have to agree to the use of information, and that they have to be sufficiently informed to know what they are agreeing to.
As far as possible participation in sociological research should be based on the freely given informed consent of those studied. This implies a responsibility on the sociologist to explain in appropriate detail, and in terms meaningful to participants, what the research is about, who is undertaking and financing it, why it is being undertaken, and how it is to be distributed and used. (British Sociological Association (BSA), 2017, p. 5)
There are two subtly different principles outlined here. The first, information privacy, puts in charge the person to whom the information relates. (The principle of data protection is derived from this general approach. The terms are used almost interchangeably in the literature – arguably information privacy is a little wider – but for practical purposes, I will treat information privacy and data protection as being roughly equivalent.) In the context of research, information privacy and data protection imply that consent has to be negotiated. People who are being researched are entitled to be informed; they have to consent to the project; consent is a continuous process; they have the right to withdraw at any time.
The other set of principles relates to the conduct of the researcher. Guides to research ethics are typically directed to the researcher, not to the research participant. The duties of the researcher are to explain, to avoid coercion, and to make sure that they are not stepping over the line. Some people might want to argue that rights are correlative with duties, so that these two principles boil down to the same thing; if researchers have a duty, it does not seem to be saying anything different from the idea that people participating in research have rights. However, even if the distinction between information privacy and informed consent is not immediately evident, they are not the same. The key difference is this: it is quite possible that they refer to different people. One important example is included in Westin’s definition of information privacy: the subjects who have the right to decide about privacy might be groups or institutions. Organisational research is often done for or about an organisation; it is the organisation that gives consent, not the participants. The people who take part in that research are contacted on the basis of their organisational role or position, and placed under an obligation to cooperate with the research.
An even more important distinction lies between research participants and research subjects. Information privacy is supposed to protect the subjects – the people to whom the information relates. Research subjects are not necessarily people who are engaged with the research, and the information that participants provide may not be about themselves. A participant in research on domestic violence volunteers information about an abusive partner. A professional recounts experience of working with children with mental disorders. A person claiming social security complains about the treatment given to her by an officer. In every case, the information does not belong to the person who is reporting it. The consent of the participant is not enough. The data relate to third parties, and the principle of information privacy has been breached.
The distinction between subjects and participants is rather too often elided in the literature. The UK’s Economic and Social Research Council, for example, defines research ‘participants’ in these terms:
Human participants are defined as including living human beings, human beings who have recently died (cadavers, human remains and body parts), embryos and foetuses, human tissue and bodily fluids, and human data and records (such as, but not restricted to medical, genetic, financial, personnel, criminal or administrative records and test results including scholastic achievements). (ESRC, 2015, p. 42)
The idea that cadavers and human tissue samples ‘participate’ in research is slightly surreal. (I am not sure what qualifies as a ‘recent’ death, but I cannot see that the strength of feeling people may have about, e.g., the organs of their dead child, grows dimmer with the passing of years. What matters is surely the relationship, not the length of time.) What the ESRC intends to say is that these subsidiary sources of information are also protected by the principle of information privacy. Where information privacy applies, the control of the research subject extends to every scrap of private data – including bodily fluids and historic records – and someone who has the right to hold that data must be consulted.
This seems to be connected only very loosely with the idea of ‘privacy’ I have been discussing. Data protection and consent are not, properly speaking, ethical principles in their own right; they are methods intended to protect privacy, and it is as methods that they need to be judged. On one hand, consent is not enough to defend privacy. Privacy is a human right, and people cannot consent to give up their human rights; and while some requests are less intrusive than others, there is no way of asking for explicit consent that is not in itself an intrusion. On the other hand, data protection can be violated with no intrusion, and no immediate implications for personal privacy. The secondary analysis of data, based on information that was gathered for a different purpose in the original research, is illustrative. Research and data archives exist precisely to make this sort of analysis possible. It is difficult to see what implications for privacy there might be in working with tissue. Of course, one has to take care that derived information is not used in such a way as to compromise the position of individuals illegitimately. We usually use anonymity and confidentiality to cover that eventuality.
It could be argued that privacy is simply the wrong principle to refer to. Faden and Beauchamp argue that consent has much more to do with self-determination and personal autonomy than it does with privacy. Consent is about the exercise of personal choice (Faden & Beauchamp, 1986). There are limits, however, to how far the person giving up the information is in control, or should be, in so far as there is a potential for conflict; what matters are the duties of the researcher, rather than the rights of the participant. I think that most researchers will accept that they have some obligations to the people who participate in their research, but there is a large gap between that and ‘information privacy’. There will be circumstances where the two approaches combine, where there is no practical difference between respecting the participant and giving the participant a degree of control; but there are also circumstances where control over information becomes a way to protect the powerful or exploit the vulnerable, such as when it is a means to hide corruption or abuse. The principle of information privacy (or data protection) is a poor guide to ethical conduct. If we are hoping for researchers to act ethically, it is not clear that ceding control to participants is the way to go about it.
The Limits of Privacy
Even at the level of the individual, it is debatable whether we can ever treat ourselves as wholly private. We are social animals. We communicate with each other in common terms. Our understanding of ourselves, Gilbert Ryle argued, is based substantially on our knowledge of other people; we cannot have a sense of self until we know about others (Ryle, 1963). When we extend the principle to two people, difficulties arise. Each person has rights, and their rights are conditioned and mediated in terms of the society they live in. The contexts can be complex: the Dutch idea of ‘sphere sovereignty’, initially stated by Kuyper (1899) and later by Dooyeweerd (1979), is based on the idea that there are several spheres of life where different rules apply – spheres such as the home, religion, business, and politics. We have come to reject – I think rightly – the claim that domestic violence is a private matter between husband and wife, or that parental chastisement in one’s own home has nothing to do with the world outside (Schneider, 1994).
The limitations of this kind of privacy are marked by the idea of the private sphere – Mill’s ‘reserved territory’. The private sphere stands in contradistinction to the public sphere – the areas of life where society or the state have the right to pass information to others. All criminal acts are, by definition, public; the public authorities have declared that certain acts must be treated legally as public matters. (That makes it rather questionable that so many researchers think they should protect their participants from the consequences of actions revealed by the research. Public actions are not protected by principles of privacy, and it is troubling when they are made the subject of data protection.) Where general rights are at stake – such as human rights – the privacy or confidentiality of the research process does not override them.
Many of the codes of guidance issued by professional associations get this wrong. The Social Policy Association offers as a general principle the idea that ‘Information provided to a researcher in the context of a research study should be treated as confidential’ (SPA, 2009). That implies that the information is presumed to be private. But social policy is concerned with public issues and public information; much of the point of the field is to subject public action to open scrutiny. If information is found in the course of research, there has to be a very good reason not to reveal it. The British Society of Criminology states that ‘Researchers should not breach the ‘duty of confidentiality’ and not pass on identifiable data to third parties without participants’ consent’. It goes on to advise that ‘In general in the UK people who witness crimes or hear about them before or afterwards are not legally obliged to report them to the police’ (BSC, 2006). (There are three main legal exceptions – terrorism, child protection and money laundering.) Criminal law defines a range of actions as public, not private. It is not always clear whether an action is criminal – but privacy is too often used as a defence against legitimate public scrutiny.
Similarly, the Social Research Association cites US guidance:
The US Office for Protection from Research Risks allows observational research to be exempt from consent unless:
- a) ‘information obtained is recorded in such a manner that human subjects can be identified, directly or through identifiers linked to the subjects; and
- b) any disclosure of the human subjects’ responses outside the research could reasonably place the subjects at risk of criminal or civil liability or be damaging to the subjects’ financial standing, employability, or reputation’. (SRA, 2003, p. 33)
If there is a risk of criminal or civil liability, it is because the subjects have harmed other people. That is exactly the point at which the duties a researcher has to a subject are likely to be outweighed by the need to avoid harm to others.
Defining what is private, and what is public, can be difficult. Different rules apply in different circumstances. Private actions can take place in public settings, and public actions can happen in private ones. There are circumstances where people do things in private that are public in their nature – people who are abusing public authority often try to do it behind closed doors. While justice should in principle be open and transparent, there are special cases where justice is better served in private – for example, in decisions about the care of children.
As a general proposition actions are public if:
They occur in the public domain, and are open to be witnessed by members of the public. There are exceptions to this: circumstances where people legitimately do things in public places that are not meant to be witnessed or public – adjusting clothing, falling over, or sleeping. There is a judgement to make; social media fizz with examples of people doing silly things that they may regret. But there is no obvious moral duty to ignore public behaviour; if there is a default position, it is that behaviour in public occurs in the public domain unless there are good reasons to the contrary.
They have been defined as public by a lawfully constituted authority. All criminal acts are public, because the law declares them to be – that is what a ‘crime’ is. The same applies to some things that may otherwise seem personal – rules about marriage, sexuality, motoring, taxable income, workers’ rights, public companies (which have to report their financial activity publicly) and much else besides.
They concern public affairs, such as government, legislation, and the system of justice. Part of the argument for considering these things as public is the shared, social nature of the activity, but that is not the whole story. The legitimacy and authority of democratic governments rests not just on a process of voting, but on a degree of openness, transparency, and the opportunity to engage in public fora. Treating governance as public is not just a description; it is a moral position in its own right.
Researchers who are working primarily in the public sphere are often fired by ethical concerns, but those concerns look rather different from the traditional focus of guides to research ethics. Policy analysts aim to ‘tell truth to power’. The American Society for Public Administration aims to ‘serve the public interest’ and ‘uphold the law’, including ‘constitutional principles of equality, fairness, representativeness, responsiveness and due process’. (ASPA, 2012) My own discipline, social policy, has a critical role in holding governments to account. [One of my last research projects was based on speaking to officials administering social security, and it was done without asking the government department which controls access to those officials, because the department would have refused. However, more than 200 officials participated (PCS, 2017).]
They concern issues that are already accepted as being in the public domain – typically, because they have been published. That is the standard defence of secondary analysis, meta-analysis, and critical reappraisal of evidence. Secondary analysis and research archives use data in ways that neither the participant nor the researcher can reasonably anticipate, and if the data belongs to the participant, that appears to be unethical. So, in principle, would be repetition of comments or information provided by one person for a different purpose. In my own work, I have used previously published accounts to discuss some of the intensely personal issues around dementia, incontinence, and learning disability. I did not of course ask the people concerned – I do not know who they are – but I would not have asked them if I did. I was citing other people’s research.
I claimed at the start of this chapter that some of the misinterpretations about the scope and process of research could be unethical, and this is an example. Treating public information as if it must be private is at best ethically questionable, at worst repugnant. Restricting truthful accounts of the things that people do in public, and subsequent discussion of them, is a restraint on free speech and a free press; that kind of restraint infringes the right of everyone else, as members of the public, to know. Obstruction of the examination of public norms, rules, and laws is a prescription for tyranny. Discussion and examination of published material is fundamental to science, learning, education, and a free society. And examination of government and policy is essential to democracy, which has been defined as ‘government by discussion’ (Cohen, 1997). The defence of the public sphere is in all our interests.
Research without Consent
Much of the literature on informed consent starts from the assumption that there is something intrinsically wrong with research where no consent is given. The development of that doctrine began with a legitimate concern, about the use of pointless, invasive ‘experiments’ by Nazi doctors, and the Nuremberg Code became the model for bio-medical research everywhere. The doctrines that I have been examining reflect those concerns, but they have gone some way beyond them. The doctrine of information privacy can only legitimately apply in circumstances where the person who gives the consent is the person who legitimately controls the release of that information to the researcher. There are many circumstances where that is not the case. They include, most obviously, information that is public in its nature. Even in the private sphere, however, there is information over which the research participant does not hold the rights. This includes information that relates to organisations, to third parties (and other people), and to other participants – and most research based on evidence from participants calls for some ‘triangulation’, cross-validation, or corroboration to be useful, at which point it ceases to be under the control of individual respondents. It might still be good practice (and good manners) for a researcher to behave as if their research participant was a valued source – I have tried to do that – but I have also, without compunction, used freedom of information legislation, which requires public officials to respond to queries. We should not suppose that the researcher’s primary duty is to the respondents.
Much of the literature concerned with involuntary participation in research is concerned with ‘covert research’, a term which generally refers to circumstances where the researcher does not tell research subjects or participants that research is taking place. [That is often muddled with the different, and relatively unusual, situation where researchers do not tell people that research is going on and actively deceive participants about what they are doing. Most cases of deception take place within the framework of a research project that seems to be about something else (Kimmel, 1996, p. 73).] It is more helpful to think of covert research as being undisclosed, or having ‘limited disclosure’ (ANHMRC, 2007). Legitimate examples of research where no disclosure was necessary or appropriate might be taken to include monitoring the use of mobile phones while driving (Walker, Williams, & Jamrozik, 2006), considering health and safety issues in the management of major sporting events (Lekka, Webster, & Corbett, 2010), or surveillance of internet use to produce economic indicators (McLaren & Shanbhogue, 2011). All three of those pieces of research have taken place in the public sphere, and they were all clearly done for the public benefit; it would be shocking if they were not permitted.
Undisclosed research could be considered a breach of privacy if it led to the publication of material that was private – but the same would be true of research with full disclosure. The doctrine of privacy does imply a default – a set of barriers and protections that researchers can only cross subject to permission, co-operation, and safeguards, and sometimes, particularly when there is a risk of harm, not even then. Consent may contribute to the protection of research subjects, but it is not a guarantee of it. I was part of a research team developing an instrument to assist with planning social care provision for people with dementia (Spicker & Gordon, 1997). The main objective of the instrument was to use data to inform planners about the needs of the population, and so to minimise intrusion in individual circumstances. Wherever possible, information was obtained from people who were already in possession of the data, and the information was anonymised and dealt with collectively. The study was designed to obtain the information in a manner which would minimise disturbance or cost to the subjects of the research, and to process and use the information in a manner which would not impose costs or otherwise harm them.
There were no problems raised during formal ethical review, because as far as the review committee was concerned this was not an invasive process. Ethical concerns were, however, raised as we proceeded. There was no effective way of obtaining consent from people with dementia; explaining the purpose of the research to people with dementia and their carers, even in outline, would itself carry risks (many people with dementia have not been told); even minimal intervention could be intrusive – questions about memory loss, behaviour, insecurities, or personal care are inevitably difficult to ask. We sought to protect and safeguard the interests of the respondents – our interviewers were professionally qualified and experienced social workers. Ultimately, however, this all depends on a series of moral judgments, and a question of whether the benefits of the research (a less invasive procedure than current assessments) could justify the process. There cannot be blanket rules.
Ethical Research: The Duties of Researchers
I began this chapter with a widely accepted model of ethical behaviour in research: privacy demands control, control requires consent, and consent protects privacy. This is not wrong in every case, but there are more than enough counter-examples to show that it cannot be taken as a default position.
Researchers do have duties to protect people, but those duties are badly described in conventional codes of guidance. First, there are duties to everyone and anyone – human rights, the rights of citizens, the rights of vulnerable people, and so on. Researchers have a duty at least to avoid crimes against humanity, the abuse of power, and the abuse of persons, and to report them where the information is clear. These rights should have the highest priority in research – certainly, they trump any issue about the research itself, and any undertakings the researcher might make to specific persons. The supposition that researchers have a duty to conceal the wrongs that some people do to others, that powerful people have the right to control information, and that nothing can be done without their consent, is plainly unethical.
Second, there are duties arising from the research that is being done – its potential use, its application, and its effects on research subjects. Research should be beneficent (aiming to do something good), or at least non-maleficent (doing no harm). Privacy can be an important constraint on research, but there are acts of observation, recording, and reporting that have no evident implications for privacy. Google and Twitter commonly monitor people’s use of terms or the subject of searches; many researchers are involved in similar activities. When people complain about the mass use of internet-based data, they are assuming that in some way this has trespassed on their rights. How? Are Icelanders somehow violated because their government manages (and sells) genetic information about the population? Merz and his colleagues are highly critical of the Icelandic example. In their view, an action that would be legitimate if it was solely for governmental purposes ceases to be legitimate if it is used commercially (Merz, McGee, & Sankar, 2004). There is a distinct argument to make here about the research relationship – how the research might be affected by obligations incurred as a result of funding or sponsorship – but the simple fact of whether this relates to government or the private sector does not seem to me to have anything to do with the process of research. Neither the aims of the action, nor the process, nor the outcomes have any evident implications that affect any individual person. The objections to such measures are being represented, questionably, as a point of absolute principle, without considering whether there is actually a violation of privacy or of rights. If research is beneficent, does no harm and does not intrude on personal space, there should be no obstacle to it.
Third, there are duties to participants – which I take to mean the people with whom researchers interact directly (a much more limited category than appeared earlier in this argument). The rights of participants are ‘particular’, not ‘general’; they define the duties which are negotiated with the researcher. The defence of particular rights is a matter of integrity. Researchers should avoid, for example, making promises they cannot legitimately keep – such as promises of confidentiality made to people engaged in criminal activity. It is important, however, to recognise that duties to participants are contingent, and must have a lower priority than general duties such as human rights, human dignity, or the rights of citizenship. That is a still more important example of the ways in which treating research as private may be unethical.
Much of the process of research is concerned with making information public, in the broad sense of that term. The process of research generally involves taking data, of whatever sort, and processing it in a form that will be presented to other people. Whenever research is done with the intention of producing a report, or making the findings known to people other than the researcher and the participants, it can be said to be a ‘publication’. The very word ‘publication’ might reasonably be taken to suggest that the material is made available to the public, but that is not requisite; in law, a ‘publication’ might refer simply to communication to a third party. Sometimes the presentation itself is confidential – for example, when an organisation has commissioned research about its operations – but even then, information is likely to be taken from one place and moved to another. [In an American case, confidential communications within a company have been held to be ‘publications’ (Bals v. Verduzco, 1992).] The transmission of material across boundaries is fundamental to research work.
There are topics which cannot be broached without some degree of intrusion into people’s private space, and wherever that is done, it needs to be approached with a sense of ethical integrity and a degree of sensitivity. Where information is private, there may be a case for confidentiality and anonymity. That is not true in every situation, and where the activity falls clearly into the public sphere, there is no duty to consult with participants, to negotiate the terms of the research, or even to disclose that research is taking place. In most circumstances, research is not a private matter, and the assumption that it must be private is itself a violation of another ethical principle – one of the foundational principles of modern civilisation. We have to be able to examine the world we live in.
References
American Society for Public Administration (ASPA). (2012). Retrieved from http://www.aspanet.org/public/ASPADocs/Principles%2012-09-10.pdf
Australian National Health and Medical Research Council (ANHMRC). (1999). National statement on ethical conduct in research involving humans. Commonwealth of Australia.
Australian National Health and Medical Research Council (ANHMRC). (2007). National statement on ethical conduct in human research. Retrieved from http://www.nhmrc.gov.au/publications/ethics/2007_humans/contents.htm
Australian National Health and Medical Research Council (ANHMRC). (2018). National statement on ethical conduct in human research. Commonwealth of Australia.
Bals v. Verduzco, 600 N.E.2d 1353 (Ind. 1992).
British Sociological Association (BSA). (2017). Statement of ethical practice. Retrieved from https://www.britsoc.co.uk/media/24310/bsa_statement_of_ethical_practice.pdf
British Society of Criminology (BSC). (2006). Code of ethics for researchers in the field of criminology. Retrieved from http://www.britsoccrim.org/ethical.htm
Cohen, J. (1997). Deliberation and democratic legitimacy. In R. Goodin & P. Pettit (Eds.), Contemporary political philosophy. Oxford: Blackwell.
Dooyeweerd, H. (1979). Roots of western culture. Toronto: Wedge.
UK Economic and Social Research Council (ESRC). (2015). ESRC framework for research ethics. Retrieved from https://esrc.ukri.org/files/funding/guidance-for-applicants/esrc-framework-for-research-ethics-2015/
Faden, R., & Beauchamp, T. (1986). A history and theory of informed consent. Oxford: Oxford University Press.
Kimmel, A. (1988). Ethics and values in social research. London: Sage.
Kimmel, A. (1996). Ethical issues in behavioural research. Oxford: Blackwell.
Kuyper, A. (1899). Calvinism: Six stone lectures. Amsterdam: Hoeveker and Wormser.
Lekka, C., Webster, J., & Corbett, E. (2010). A literature review of the health and safety risks associated with major sporting events. Merseyside: Health and Safety Executive.
McLaren, N., & Shanbhogue, R. (2011). Using internet data as economic indicators. Bank of England Quarterly Bulletin, 51(Q2), 134–140.
Merz, J., McGee, G., & Sankar, P. (2004). “Iceland Inc.”?: On the ethics of commercial population genomics. Social Science & Medicine, 58, 1201–1209.
Mill, J. S. (1848). The principles of political economy. Retrieved from http://ebooks.adelaide.edu.au/m/mill/john_stuart/m645p/book5.11.html
PCS. (2017). The future of social security in Scotland: Views from within the system. Glasgow: Public and Commercial Services Union.
Rössler, B. (2005). The value of privacy. Cambridge: Polity.
Ryle, G. (1963). The concept of mind. Harmondsworth: Penguin.
Schneider, E. (1994). The violence of privacy. In M. Fineman & R. Mykitiuk (Eds.), The public nature of private violence. New York, NY: Routledge.
Social Policy Association (SPA). (2009). Social Policy Association guidelines on research ethics. Retrieved from http://www.social-policy.com/documents/SPA_code_ethics_jan09.pdf
Spicker, P., & Gordon, D. (1997). Planning for the needs of people with dementia. Aldershot: Avebury.
Social Research Association (SRA). (2003). Ethical guidelines.
United Nations. (1948). Universal declaration of human rights, art. 12.
Walker, L., Williams, J., & Jamrozik, L. (2006). Unsafe driving behaviour and four wheel drive vehicles. British Medical Journal, 331, 71.
Warren, S., & Brandeis, L. (1890). The right to privacy. Harvard Law Review, 4(5), 193–220.