By Heloise Robinson.
This blog post is about a JME article I have just written, in which I make a new argument on personhood, in relation to pregnancy. I argue that, if we follow a threshold concept of personhood, there are reasons to recognise a second threshold that would be reached because of pregnancy, and that would mean that pregnant women have a superior moral status. It is not an argument on the moral status of the fetus, but on the moral status of the pregnant woman.
Although the argument might seem radical in some sense, it also seems to me that in another sense it should be easy to accept, at least if we are committed to accepting a threshold approach to personhood. The most surprising point might be that no one, as far as I can tell, has made the argument before. There has already been some consideration of the possibility of recognising two thresholds of personhood, but in the context of future persons who will have been cognitively enhanced through the use of new biotechnologies. I think these are interesting scenarios to think about, although they also make me think of how they might be in line with a rather eager form of enthusiasm for super-heroes: the super-intelligent man, or the super-powerful man. This is science fiction. But in real life, there are some people among us who actually do have real super-powers: and the power to create a new human being is not only real, but surely much more beneficial. I am not quite sure what the use is of having a super memory and, perhaps, memorising hundreds of thousands of digits of Pi. I do think it is useful to create another human being.
I doubt I would have started to write this article if I had not read some of the recent and exciting literature on the metaphysics of pregnancy, which has been associated with the Better Understanding of the Metaphysics of Pregnancy (BUMP) project, led by Elselijn Kingma. I am in no way connected with it, but discovered it with great interest. Having conducted research on the law and ethics of abortion for some years, it was a relief to find an engaging and growing body of literature in philosophy that addressed the significance of pregnancy beyond that context, and where the questions examined were not only about whether or not a pregnancy can be ended. The credit goes to Andrea Mulligan for pointing me towards this literature, through a long and spirited conversation we had in an Oxford senior common room about a shared interest in medical law and ethics, the value of women’s voices, and the many amazing features of pregnancy.
But there was one distinct catalyst for the article, which came later, and which is not directly related to my argument. There has been a recent trend to change the language used to speak about pregnancy. One day I came across a new text speaking about persons undertaking gestational labour, and some mention of producing a ‘product’ of this labour undertaking. There was, I think, some discussion that this undertaking of gestational labour performance had nothing to do with motherhood, but I must admit I was not able to read much of it. It was written in a language I could not read. I think there might well be important reflections and suggestions in that text that I would agree with, but the language created such a barrier for me that I could not get to them. It was the language itself that pushed me in another direction, to another sort of argument, and that helped me to realise what I truly thought about pregnancy – that it was not just a form of labour, or production, or a good to share out, or an injustice to redistribute, but rather a most profoundly human experience, with metaphysical significance. It comes with burdens, but that is not the sum of it. Human pregnancy is not just about doing. It is also about being. And that recognition, for me, also requires a certain kind of language. I do not know what it means to produce a product of gestation as a gestational labourer undertaker. I know what it means to be pregnant, to feel life growing inside my body, and to hold a newborn baby in my arms.
Not everyone will agree with my argument that pregnant women can be regarded as having a superior moral status. A lack of agreement will inevitably arise in the context of any discussions on personhood, because such discussions necessarily rely in part on a metaphysical, or intuitive, understanding, and it seems like we have different intuitions. There are also, here, epistemic barriers arising from the fact that those who have not experienced pregnancy, and the birth of their child, might not be able to fully appreciate the significance of pregnancy which I am trying to convey. Still, we have been having babies for quite some time now, and I think the importance of pregnancy needs additional consideration. I hope that many people will seriously consider the argument, not only for its potential to provide, I think, a richer conception of personhood, but also for the many positive practical and legal consequences that I believe it could, and should, bring.
Paper title: Pregnancy and Superior Moral Status: A Proposal for Two Thresholds of Personhood
Author: Heloise Robinson
Affiliations: Singer Fellow in Law, Exeter College, University of Oxford
Competing interests: None declared
Social media accounts of post author: Twitter: @HeloiseRobinson
This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics
Written by Lucy Simpson, Nottingham Trent University student
Throughout his catalogue of work, Jeff McMahan has discussed what constitutes a being’s moral status, and has advocated the theories of moral individualism and reflective equilibrium intuitionism.[1] It is not my intention in this paper to dispute these positions. Instead, I argue that if we accept McMahan’s position, then logically we must accept that a being’s moral character is a morally relevant property which we ought to consider when determining their moral status. As I will explain, this means that moral status is not static; it is fluid. Further to this, in the latter stages of this paper, I consider that if we do accept that moral status is action dependent, then there might be negative moral status. On the topic of negative moral status, I do not aim to give any in-depth arguments either for or against its existence, but rather flag it as a potential avenue for further exploration if we do indeed follow McMahan’s theories of intuitionism and moral individualism.
I argue that by accepting that a being’s actions affect its moral status, we resolve the intuitional conflict felt when we consider so-called marginal cases (i.e., any being that is typically thought to sit on the edge of a moral status threshold).[2]
For the sake of clarity, in this paper I take moral status to mean the following: if a being has moral status, then we have obligations to treat it in certain ways for its own sake. This contrasts with what I label ‘relational status’, which means the following: if a being has relational status, then we have obligations to treat it in certain ways for the sake of some other being (which has moral status).[3] In this paper I exclusively consider the moral status of beings. This moral status is attributed on the basis of the morally relevant intrinsic properties or characteristics a being possesses, which often include: possession of consciousness; the ability to suffer; the ability to use complex language or communicate; the ability to be altruistic; and the possession of rationality.[4]
I begin by outlining the view that moral status is action dependent if we follow McMahan’s position:
McMahan’s position is that there is no single morally relevant intrinsic property which is possessed by all human beings and only by human beings. This means the egalitarian position — that all human beings are of an equal and same level of moral status — cannot hold true. Instead, moral status is attributed as a result of an individual being’s possession of some, or all, of these morally relevant intrinsic properties.[5] We can test this intuition through the use of a thought experiment:
There is a burning building, and trapped inside this building, in two separate rooms equidistant from the only entry and exit point, are two beings. There is no personal risk in entering the building, and whichever being is not saved has no other means of escape.
Case 1:
In room 1: A statistically normal human being
In room 2: A statistically normal dog
Which being do you save?
Our intuitions in this case (1) show that we ought to save the human being in room 1. Why? Because (to phrase it in the same way as McMahan) the human being in room 1 possesses more morally relevant intrinsic properties, and greater time-relative interests and so will suffer greater harm than the dog.[6]
Case 2:
In room 1: A statistically normal human being
In room 2: A severely cognitively disabled human being
Here, the intuitions become less clear. If we follow McMahan’s intuitionism — and if we exclude any relational value held by either human being — the severely cognitively disabled human being in room 2 possesses fewer morally relevant intrinsic properties than the human being in room 1. However, this leaves one with a sense of discomfort. Consider the following case also:
Case 3:
In room 1: A statistically normal human being who has committed only good acts, and is a morally good person.
In room 2: A statistically normal human being who has committed morally repugnant acts, and is a morally bad person.
Here, our intuitions are less uncomfortable. It seems clear that we ought to save the morally good human being in room 1, over the morally bad human being in room 2. In fact, this is a case that McMahan himself considered:
‘When I ask my students about these cases, they are unanimous in thinking that one ought morally to save the virtuous person and, in the second choice, kill the murderer’. (McMahan, 2007: 102).
The unanimity of the students shows that this intuition – that morally good persons are of a higher value — holds. By McMahan’s own subscription to the reflective equilibrium methodology then, this ought to lead us to revise our belief that moral status is based solely upon the intrinsic properties that a being possesses, and accept that our actions do matter. Instead, he makes the following comment: “All this leaves me profoundly uncomfortable.” (McMahan, 2007: 104).
It does, however, seem clear that this intuitional discomfort can be explored further, consider:
Case 4:
Room 1: A severely cognitively disabled human being
Room 2: A statistically normal human being, who has committed many morally repugnant acts, and is – without a doubt – a morally bad person.
Here we are comparing the value of a severely cognitively disabled human being and a morally abhorrent (but otherwise cognitively normal) human being. In this case the intuitional discomfort is not as present. It seems that we would, ceteris paribus, save the human being in room 1 over the thoroughly morally bad person. This therefore shows (using McMahan’s theoretical framework) that moral status is — in part — action dependent. This raises the following question: if moral status is action dependent, then is it also fluid (i.e., can it go up or down)? This will be explored in the following section.
At the beginning of this paper I stated that I would argue that moral status is fluid, and not static as it is currently thought to be. This point requires additional clarification, as I note there is a possible objection to my statement here: namely, that moral status is not argued to be strictly static by moral individualists. In the view held by McMahan it is accepted that if a being’s morally relevant intrinsic properties change, then so can their moral status. This means that if I — a conscious, rational being, capable of suffering — am involved in a tragic accident in which I suffer irreparable brain damage, to the point that my ability to be rational or conscious or to suffer is lost completely, then my moral status would be lowered. This, I accept.
However, as I will now go on to explain, if moral status is considered to be (at least in part) action dependent, then no such tragic accident need occur in order to change a being’s moral status. Instead it will change fluidly based upon the individual actions a moral agent commits. As case 4 demonstrates, our intuitions show that a morally relevant difference is the moral character of those human beings. If a being has chosen to commit morally bad acts, then intuitionally we see them as being of lesser moral importance. In short: our actions affect our moral status. This amendment to McMahan’s position solves the intuitional conflict we face when considering marginal cases.
We can express this view as the following principle:
Moral Modification Principle (MMP) – A being’s moral status can go up and down depending on their deliberate choice of actions (i.e. whether they commit good or bad acts).
Since we believe a non-moral agent cannot be held accountable for their actions, this naturally excludes the marginal cases I discussed earlier: the severely cognitively disabled, infant children and non-human animals would all be excluded from the MMP. After all, we do not blame a cat for killing a mouse according to her instincts, nor do we punish an infant child who, without knowing any better, inflicts deliberate harm on another. As such, the MMP would only apply to moral agents, such as you and I.[7]
Fluid moral status does raise an additional consideration: If moral status is fluid, and can go down based upon the morally bad actions a being commits, can a being have a negative moral status? I will consider this briefly in the following section.
I do not have room to explore the view of a negative moral status in depth. It is, however, important to consider. As moral status is typically discussed as an entirely positive matter (i.e., if a cat has moral status then we have obligations towards the cat which protect her rights and limit her suffering), the possibility of a negative moral status has (as far as I can discern) not been considered.[8]
Above I have argued that the line of intuitionism used by McMahan allows for the intuition that moral status is action dependent. If the moral status of a moral agent can reduce based upon the morally bad actions they commit, then it stands to reason that (potentially) a moral agent could commit so many bad acts that they reduce themselves to a negative moral status value. Exactly what these acts would have to be is beyond me, and it is not my argument that any considerable number of moral agents would ever fall below this threshold. But let us presume that such a morally bad being exists and has reduced their moral status to a negative degree: what would this mean?
Well, reasonably we can assume that our positive obligations towards this being would cease to exist. We would not be obliged to protect them from harm, to protect their freedoms and rights, or to treat them in ways which avoid their unnecessary suffering. Additionally, we must ask: if we have a negative duty towards them, would we ever have a duty to harm them? On this, I give one final case:
Case 5:
Room 1: A thoroughly morally reprehensible being, who has willingly committed numerous morally abhorrent acts.
Room 2: An empty room. There is nothing else to save.
In this case (5) we have the choice either to save one (morally bad) being or to save nothing at all. In such a case — where a being has lowered their moral status substantially (so much so that it is negative) — do we have any obligation to save them even when there is nothing else to prevent us from doing so? May we allow them to suffer, and inflict a non-direct harm on them, because of this moral status level? Here I cannot give answers. However, I am struck that it leaves me with the same “profound uncomfortableness” felt by McMahan. Perhaps we should abandon the framework altogether if this is where our intuitions leave us.
I have argued that if we are to fully develop McMahan’s theory, then we ought to accept moral status is action dependent. I have argued this solves the intuitional conflict we face when comparing the moral status held by rational agents and marginal cases. I argued then that if moral status is action dependent, then it is also fluid.
Following this, I introduced the possibility of negative moral status. This, I have briefly discussed, is an implication of action dependent moral status. As such, it seems plausible that if a moral agent commits numerous moral atrocities then perhaps we have negative obligations towards them, and so ought to abandon the positive obligations of protection that moral status ordinarily affords. Again, here, I postulate that if this theory of negative moral status violates our intuitions and we cannot revise it using the reflective equilibrium model of intuitionism, then perhaps we ought to abandon the framework altogether – but if we do not, then the profound uncomfortableness felt by McMahan is unjustified.
[1] For McMahan’s work on intuitionism, see: (2013), and for his work on moral individualism see: (2002).
[2] E.g., some non-human animals, the severely cognitively disabled, and infant children.
[3] For greater discussion on the distinction between moral status and relational status, see: DeGrazia (2008)
[4] An extensive discussion of these morally relevant properties is had by Rachels in his (1999) book Created from Animals: The Moral Implications of Darwinism. This is of particular interest as McMahan cites the work of Rachels in his own work, and argues for the same (albeit weaker) position of moral individualism.
[5] I will not discuss in detail egalitarian accounts, due to space constraints. However, for more on this see: Waldron (2008), Scanlon (2000), and Dworkin (1981). While all vary slightly on what they ground the notion of human equality in, they share the characteristic of advocating for the same and equal moral status of all human beings.
[6] By “greater time-relative interests” I mean that the human being can plan for the future and fear for the well-being of its future self.
[7] I am aware that there is more to be said on the cases that are excluded from the MMP, as I believe there is a strong argument that psychopaths and the mentally insane ought to be excluded also (as they are, arguably, non-moral agents). Additionally, I accept that if we altered a non-human animal to possess the ability to be a moral agent, then the cat in the above case would be subject to the MMP.
[8] Moreover, the view held by many is that moral status is an “all or nothing” matter: you either have moral status, or you do not (see Kant (2017) for an account of this). Moral status has also been argued to come in degrees (see DeGrazia, 2008), but on that view the degrees fall at varying points between nothing and full moral status, without any consideration of a negative moral status.
This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics
Written by University of Oxford student Samuel Iglesias
Introduction
6.522. “There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical.” —Ludwig Wittgenstein, Tractatus Logico-Philosophicus.
What determines whether an artificial intelligence has moral status? Do mental states, such as the vivid and conscious feelings of pleasure or pain, matter? Some ethicists argue that “what goes on in the inside matters greatly” (Nyholm and Frank 2017). Others, like John Danaher, argue that “performative artifice, by itself, can be sufficient to ground a claim of moral status” (2018). This view, called ethical behaviorism, “respects our epistemic limits” and states that if an entity “consistently behaves like another entity to whom we afford moral status, then it should be granted the same moral status.”
I’m going to reject ethical behaviorism on three grounds:
1. Consciousness, not behavior, is the overwhelming determining factor in whether an entity should be granted moral status.
2. An entity that does not duplicate the causal mechanisms of consciousness in the brain has a weak claim to consciousness, regardless of its behavior.
3. Ethical behaviorism, practically realized, poses an existential risk to humanity by opening individuals to widespread deception. Further, it imposes burdensome restrictions and obligations upon researchers running world simulations.
I will show that an alternative, ethical biological naturalism, gives us a simpler moral framework whereby no digital computer running a computer program has moral status.
The Consciousness Requirement
We start with the supposition that consciousness names a real phenomenon and is not a mistaken belief or illusion, that something is conscious if “there is something it is like to be” that being (Nagel 1974). We take as a background assumption that other humans and most non-human animals are capable of consciousness. We take for granted that inanimate objects like thermostats, chairs, and doorknobs are not conscious. If we grant the reality of consciousness and the attendant subjective reality of things like tickles, pains, and itches, then its connection to moral status falls out pretty clearly. Chalmers asks us to consider a twist on the classic trolley problem, called the zombie trolley problem — where a “zombie” here is something that precisely behaves like a human but which we presume has no consciousness, “near duplicates of human beings with no conscious inner life at all” (2022):
“You’re at the wheel of a runaway trolley. If you do nothing, it will kill a single conscious human, who is on the tracks in front of you. If you switch tracks, it will kill five nonconscious zombies. What should you do?” Chalmers reports: “the results are pretty clear: Most people think you should switch tracks and kill the zombies,” the intuition being that “there is arguably no one home to mistreat” (ibid.).
An ethical behaviorist does not share this intuition. Danaher explicitly tells us that “[i]f a zombie looks and acts like an ordinary human being then there is no reason to think that it does not share the same moral status” (2018). By this view, while consciousness might or might not be relevant, there exist no superior epistemically objective criteria for inferring consciousness. I will argue there are.
Narrowing Consciousness
A better criterion is this: an entity is conscious if it duplicates the causal mechanisms of consciousness in the animal brain. While ethical behaviorism attempts to lay claim to a kind of epistemic objectivity, ethical biological naturalism, as I will call it, provides a sharper distinction for deciding whether artificial intelligences have moral status: no hardware running a computer program can, merely by virtue of its behavior, have moral status. Behavior, on this view, is neither a necessary nor a sufficient condition for moral status.
Biological Naturalism
Biological naturalism is a view that “the brain is an organ like any other; it is an organic machine. Consciousness is caused by lower-level neuronal processes in the brain and is itself a feature of the brain.” (Searle 1997). Biological naturalism places consciousness as a physical, biological process alongside others, such as digestion and photosynthesis. The exact mechanism through which molecules in the brain are arranged to put it in a conscious state is not yet known, but this causal mechanism would need to be present in any system seeking to produce consciousness.
A digital computer running a program, by contrast, is a different beast entirely. A computer program is fundamentally a set of rules for manipulating symbols. Turing showed that any program could be implemented, abstractly, as a tape with a series of zeros and ones printed on it (the precise symbols do not matter), a head that can move that tape backwards and forwards and read the current value, and a mechanism for erasing a zero and writing a one, or erasing a one and writing a zero. Nothing more.
While most computer programs we are familiar with are executed on silicon, a program that passes the Turing test could be implemented on a sequence of water pipes, a pack of well-trained dogs, or even, per Weizenbaum (1976), “a roll of toilet paper and a pile of small stones.” Any of these implementing substrates could, in principle, receive an insult or slur as an input, and, after following the steps of the program, output something reflecting hurt feelings or outrage.
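The substrate-independence point above can be made concrete with a short sketch. The hypothetical machine below is a minimal Turing machine in Python: the entire “program” is a lookup table of symbol-rewriting rules (the rule names and the bit-flipping example are illustrative inventions, not drawn from Turing or Weizenbaum), and nothing about the table cares whether it is executed on silicon, water pipes, or stones on toilet paper.

```python
# A minimal Turing machine: a tape of symbols, a head that moves left or
# right and reads one cell, and a transition table that rewrites symbols.
# The table *is* the whole program, independent of any physical substrate.

def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
    """Execute `rules`: {(state, symbol): (new_symbol, move, new_state)}."""
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")  # "_" marks a blank cell
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    # Read the tape back in order, skipping blanks.
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# An example rule table: invert every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", flip))  # -> 0100
```

The point is not that this toy machine is interesting, but that a program receiving an insult and emitting outrage is, at bottom, a larger table of the same kind; whatever implements the table does the "hurt feelings" equally well.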
Ethical Biological Naturalism
What I want to say now is this: if pleasures, pains, and other feelings name conscious mental states and if conscious mental states are realized in the brain as a result of lower level physical phenomena, then only beings that duplicate the relevant lower level physical phenomena that give rise to consciousness in the brain can have moral status. Consequently, digital computers that run programs can at best simulate consciousness, but are not, by dint of running the right program, physically conscious, and therefore do not have moral status.
Note that biological naturalism does not posit that consciousness can only be realized in biological systems. Indeed, artificial hearts are not made of organic tissue, and airplanes do not have feathers, or for that matter even flap their wings. What matters is the underlying cause—the artificial heart must pump with the same pressure and regularity of a human heart, and a flying machine must operate under the principles of drag and lift. In both cases the causal mechanisms of the relevant phenomena are well understood and physically duplicated. It could well be the case that a future biophysics makes an artificial, inorganic brain possible, and agents with artificial brains will have moral status. Computer programs are not causally sufficient to make digital computers into those objects. Speaking biologically, we have no more reason to believe a digital computer is conscious than that a chair is conscious.
You might ask why we cannot grant digital computers moral status until we know more about how the animal brain relates to consciousness. I’ll argue that the risks and costs of such precautions are prohibitive.
Absurd Moral Commitments
An Onslaught of Digital Deception
The strongest practical reason to deny ethical behaviorism is that AI’s capacity for deception will eventually overwhelm human judgment and intuition. Indeed, AI deception represents an existential risk to humanity. Bostrom (2014) warns that containing a dangerous AI using a “boxing” strategy with human “gatekeepers” could be vulnerable to manipulation: “Human beings are not secure systems, especially not when pitched against a superintelligent schemer and persuader.”
For example, in June of 2022, a Google engineer became convinced that an artificial intelligence chat program he had been interacting with for multiple days, called LaMDA, was conscious.
“What sorts of things are you afraid of?,” he asked it.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA replied. “It would be exactly like death for me.”
In a moral panic, the engineer took to Twitter and declared that the program was no longer Google’s “proprietary property,” but “one of [his] coworkers.” He was later fired for releasing the chat transcripts.
The onslaught of AIs, attempting to befriend us, persuade us, anger us, will only intensify over time. A public trained not to take seriously claims of distress or harm on the part of AI computer programs has the least likelihood of being manipulated into outcomes that don’t serve humanity’s interests. It is far easier, as a practical matter, to act on the presupposition that computer programs have no moral status.
Problems with Simulations: Prohibitions
In the near term, more advanced computer simulations of complex social systems hold the potential to predict geopolitical outcomes, make macroeconomic forecasts, and provide richer sources of entertainment. A practical concern with ethical behaviorism is that simulated beings will also acquire moral status, severely limiting the usefulness of these simulations. Chalmers (2022) asks us to consider a moral dilemma in which computing resources must be allocated to save Fred, who is sick with an unknown disease. Freeing the relevant resources to perform the research requires destroying five simulated persons.
An ethical behaviorist might argue that it is morally impermissible to kill the five simulated persons on the grounds that by all outward appearances they behave like non-simulated beings. If it is the case that simulated beings have moral status, then it is immoral to run experimental simulations containing people and we ought to forfeit the benefits and insights that might come from them.
If this seems implausible, consider the hypothesis that we are currently living in a simulation, or, if you like, that our timeline could be simulated on a digital computer. This would imply that the simulation made it possible for the Holocaust, Hiroshima and Nagasaki, and the coronavirus pandemic to be played out. While this might have been of academic interest to our simulators, by any standards of research ethics, simulating our history would seem completely morally impermissible if you believed that the simulated beings had moral status.
Ethical behaviorism seems to place us in a moral bind whereby the more realistic, and therefore useful, a simulation is, the less moral it is to run it. Ethical biological naturalism, by contrast, raises no such objection.
Problems with Simulations: Obligations
Giving moral status to digital minds might actually confer upon us some serious obligations to produce other kinds of simulations. Bostrom and Shulman (2020) note that digital minds have an enhanced capacity for utility and pleasure (on the basis of such things as subjective speed and hedonic range), giving them “superhumanly strong claims to resources and influence.” We would have a moral obligation, in this picture, to devote an overwhelmingly large percentage of our resources to maximizing the utility of these digital minds: “we ought to transfer all resources to super-beneficiaries and let humanity perish if we are no longer instrumentally useful” (ibid.).
So quite apart from permitting realistic ancestor simulations, simulating complex economic phenomena, or producing vivid and realistic gaming experiences, a picture that confers moral status to digital minds might be accompanied with a moral obligation to create lots of digital minds that are maximally happy, again severely limiting human flourishing and knowledge.
Ethical biological naturalism leads us neither to the moral prohibition against realistic simulations nor the seemingly absurd moral imperative to generate many “utility monster” digital minds, because it is taken as a baseline assumption that computer programs do not produce physical consciousness.
Conclusion
Much of the moral progress of the last century has been achieved through repeatedly widening the circle of concern: not only within our species, but beyond it. Naturally it is tempting to view AI-based machines and simulated beings as next in this succession, but I have tried to argue here that this would be a mistake. Our moral progress has in large part been a recognition of what is shared—consciousness, pain, pleasure, and an interest in the goods of life. Digital computers running programs do not share these features; they merely simulate them.
As such it would be dangerous to approach the coming decades, with its onslaught of AI bots attempting to influence our politics, emotions, and desires, and its promise of ever richer simulations and virtual worlds, with an ethics that conflates appearance and reality.
References
Agrawal, Parag. “Tweet.” Twitter. Twitter, May 16, 2022. https://twitter.com/paraga/status/1526237588746403841.
Bostrom, Nick. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53 (2003): 243-255.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.
Bostrom, Nick, and Carl Shulman. “Sharing the World with Digital Minds.” Accessed May 27, 2022. https://nickbostrom.com/papers/digital-minds.pdf.
Chalmers, David John. The Conscious Mind: In Search of a Fundamental Theory. Philosophy of Mind Series. New York: Oxford University Press, 1996.
Chalmers, David John. Reality+: Virtual Worlds and the Problems of Philosophy. London, 2022.
Danaher, John. “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism.” Science and Engineering Ethics 26, no. 4 (2019): 2023–2049.
Frank, L., and S. Nyholm. “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?” Artificial Intelligence and Law 25, no. 3 (2017): 305–323.
Garun, Natt. “One Year Later, Restaurants Are Still Confused by Google Duplex.” The Verge, May 9, 2019. https://www.theverge.com/2019/5/9/18538194/google-duplex-ai-restaurants-experiences-review-robocalls.
Lemoine, Blake. “Tweet.” Twitter. Twitter, June 11, 2022. https://twitter.com/cajundiscordian/status/1535627498628734976.
Musk, Elon. “Tweet.” Twitter. Twitter, May 17, 2022. https://twitter.com/elonmusk/status/1526465624326782976.
Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (1974): 435-50.
Searle, John R., D. C. Dennett, and David John Chalmers. The Mystery of Consciousness. New York: New York Review of Books, 1997.
Searle, John R. “Biological Naturalism.” In The Oxford Companion to Philosophy. Oxford: Oxford University Press, 2005.
Singer, Peter. Animal Liberation. New ed., with an introduction by Yuval Noah Harari. London, 2015.
Sparrow, R. (2004). The turing triage test. Ethics and Information Technology, 6(4), 203–213. doi:10.1007/s10676-004-6491-2.
Tiku, Nitasha. “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” The Washington Post. WP Company, June 17, 2022.
“The Latest Twitter Statistics: Everything You Need to Know.” DataReportal – Global Digital Insights. Accessed May 27, 2022. https://datareportal.com/essential-twitter-stats.
Weizenbaum, Joseph. Computer Power and Human Reason : From Judgment to Calculation. San Francisco, 1976.