
A new argument on personhood, based on pregnancy

By Heloise Robinson.

This blog post is about a JME article I have just written, in which I make a new argument on personhood, in relation to pregnancy. I argue that, if we follow a threshold concept of personhood, there are reasons to recognise a second threshold that would be reached because of pregnancy, and that would mean that pregnant women have a superior moral status. It is not an argument on the moral status of the fetus, but on the moral status of the pregnant woman.

Although the argument might seem radical in some sense, in another sense it should be easy to accept, at least if we are committed to accepting a threshold approach to personhood. The most surprising point might be that no one, as far as I can tell, has made the argument before. There has already been some consideration of the possibility of recognising two thresholds of personhood, but in the context of future persons who will have been cognitively enhanced through the use of new biotechnologies. I think these are interesting scenarios to think about, although they also make me think of how they might be in line with a rather eager form of enthusiasm for super-heroes – the super-intelligent man, or the super-powerful man. This is science fiction. But in real life, there are some people among us who actually do have real super-powers: and the power to create a new human being is not only real, but surely much more beneficial. I am not quite sure what the use is of having a super memory and, perhaps, memorising hundreds of thousands of digits of Pi. I do think it is useful to create another human being.

I doubt I would have started to write this article if I had not read some of the recent and exciting literature on the metaphysics of pregnancy, which has been associated with the Better Understanding of the Metaphysics of Pregnancy (BUMP) project, led by Elselijn Kingma. I am in no way connected with it, but discovered it with great interest. Having conducted research on the law and ethics of abortion for some years, it was a relief to find an engaging and growing body of literature in philosophy that addressed the significance of pregnancy beyond that context, and where the questions examined were not only about whether or not a pregnancy can be ended. The credit goes to Andrea Mulligan for pointing me towards this literature, through a long and spirited conversation we had in an Oxford senior common room about a shared interest in medical law and ethics, the value of women’s voices, and the many amazing features of pregnancy.

But there was one distinct catalyst for the article, which came later, and which is not directly related to my argument. There has been a recent trend to change the language used to speak about pregnancy. One day I came across a new text speaking about persons undertaking gestational labour, and some mention of producing a ‘product’ of this labour undertaking. There was, I think, some discussion that this undertaking of gestational labour performance had nothing to do with motherhood, but I must admit I was not able to read much of it. It was written in a language I could not read. I think there might well be important reflections and suggestions in that text that I would agree with, but the language created such a barrier for me that I could not get to them. It was the language itself that pushed me in another direction, to another sort of argument, and that helped me to realise what I truly thought about pregnancy – that it was not just a form of labour, or production, or a good to share out, or an injustice to redistribute, but rather a most profoundly human experience, with metaphysical significance. It comes with burdens, but that is not the sum of it. Human pregnancy is not just about doing. It is also about being. And that recognition, for me, also requires a certain kind of language. I do not know what it means to produce a product of gestation as a gestational labourer undertaker. I know what it means to be pregnant, to feel life growing inside my body, and to hold a newborn baby in my arms.

Not everyone will agree with my argument that pregnant women can be regarded as having a superior moral status. A lack of agreement will inevitably arise in the context of any discussions on personhood, because such discussions necessarily rely in part on a metaphysical, or intuitive, understanding, and it seems like we have different intuitions. There are also, here, epistemic barriers arising from the fact that those who have not experienced pregnancy, and the birth of their child, might not be able to fully appreciate the significance of pregnancy which I am trying to convey. Still, we have been having babies for quite some time now, and I think the importance of pregnancy needs additional consideration. I hope that many people will seriously consider the argument, not only for its potential to provide, I think, a richer conception of personhood, but also for the many positive practical and legal consequences that I believe it could, and should, bring.

 

Paper title: Pregnancy and Superior Moral Status: A Proposal for Two Thresholds of Personhood

Author: Heloise Robinson

Affiliations: Singer Fellow in Law, Exeter College, University of Oxford

Competing interests: None declared

Social media accounts of post author: Twitter: @HeloiseRobinson


Why Actions Matter: The Case for Fluid Moral Status

By: admin

This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by Lucy Simpson, Nottingham Trent University student

 

Throughout his catalogue of work, Jeff McMahan has discussed what constitutes a being’s moral status, and has advocated the theories of moral individualism and reflective equilibrium intuitionism.[1] It is not my intention in this paper to dispute these positions. Instead, I argue that if we accept McMahan’s position, then logically, we must accept that a being’s moral character is a morally relevant property which we ought to consider when determining their moral status. As I will explain, this means that moral status is not static; it is fluid. Further to this, in the latter stages of this paper, I consider that if we do accept that moral status is action dependent, then there might be negative moral status. On the topic of negative moral status, I do not aim to give any in-depth arguments either for or against its existence, but rather just flag this as a potential avenue for further exploration if we do indeed follow McMahan’s theories of intuitionism and moral individualism.

I argue that accepting that a being’s actions affect its moral status resolves the intuitional conflict felt when we consider so-called marginal cases (i.e., any being that is typically thought to sit on the edge of a moral status threshold).[2]

For the sake of clarity, in this paper I take moral status to mean the following: if a being has moral status, then we have obligations to treat it in certain ways for its own sake. This contrasts with what I label ‘relational status’, which means the following: if a being has relational status, then we have obligations to treat it in certain ways for the sake of some other being (which has moral status).[3] In this paper I exclusively consider the moral status of beings. This moral status is attributed on the basis of the morally relevant intrinsic properties or characteristics a being possesses, which often include: possession of consciousness; ability to suffer; ability to use complex language or communicate; ability to be altruistic; possession of rationality.[4]

I begin by outlining the view that moral status is action dependent if we follow McMahan’s position:

 

  1. Moral Status and Actions:

McMahan’s position is that there is no single morally relevant intrinsic property that is possessed by all human beings and only by human beings. This means the egalitarian position — that all human beings are of an equal and same level of moral status — cannot hold true. Instead, moral status is attributed as a result of an individual being’s possession of some, or all, of these morally relevant intrinsic properties.[5] We can test this intuition through the use of a thought experiment:

There is a burning building, and trapped inside this building, in two separate rooms equidistant from the only entry and exit point, are two beings. There is no personal risk to entering the building, and whichever being is not saved has no other means of escape.

Case 1:

In room 1: A statistically normal human being

In room 2: A statistically normal dog

Which being do you save?

Our intuitions in this case (1) show that we ought to save the human being in room 1. Why? Because (to phrase it in the same way as McMahan) the human being in room 1 possesses more morally relevant intrinsic properties, and greater time-relative interests and so will suffer greater harm than the dog.[6]

 

Case 2:

In room 1: A statistically normal human being

In room 2: A severely cognitively disabled human being

Here, the intuitions become less clear. If we follow McMahan’s intuitionism — and if we exclude any relational value held by either human being — the severely cognitively disabled human being in room 2 possesses fewer morally relevant intrinsic properties than the human being in room 1. However, this leaves one with a sense of discomfort. Consider the following case also:

 

Case 3:

In room 1: A statistically normal human being who has committed only good acts, and is a morally good person.

In room 2: A statistically normal human being who has committed morally repugnant acts, and is a morally bad person.

Here, our intuitions cause us less discomfort. It seems clear that we ought to save the morally good human being in room 1 over the morally bad human being in room 2. In fact, this is a case that McMahan himself considered:

‘When I ask my students about these cases, they are unanimous in thinking that one ought morally to save the virtuous person and, in the second choice, kill the murderer’. (McMahan, 2007: 102).

The unanimity of the students shows that this intuition – that morally good persons are of a higher value – holds. By McMahan’s own subscription to the reflective equilibrium methodology, then, this ought to lead us to revise our belief that moral status is based solely upon the intrinsic properties that a being possesses, and to accept that our actions do matter. Instead, he makes the following comment: “All this leaves me profoundly uncomfortable.” (McMahan, 2007: 104).

It does, however, seem clear that this intuitional discomfort can be explored further. Consider:

Case 4:

Room 1: A severely cognitively disabled human being

Room 2: A statistically normal human being, who has committed many morally repugnant acts, and is – without a doubt – a morally bad person.

Here we are comparing the value of a severely cognitively disabled human being and a morally abhorrent (but otherwise cognitively normal) human being. In this case the intuitional discomfort is not as present. It seems that we would, ceteris paribus, save the human being in room 1 over the thoroughly morally bad person. This therefore shows (using McMahan’s theoretical framework) that moral status is — in part — action dependent. This raises the following question: if moral status is action dependent, then is it also fluid? (I.e., can it go up or down?) This will be explored in the following section.

 

  2. Fluid Moral Status:

 

At the beginning of this paper I stated that I would argue that moral status is fluid, and not static as it is currently thought to be. This point requires additional clarification, as I note that there is a possible objection to my statement here: namely, that moral individualists do not argue that moral status is strictly static. In the view held by McMahan, it is accepted that if a being’s morally relevant intrinsic properties change, then so can their moral status. This means that if I — a conscious, rational being, capable of suffering — am involved in a tragic accident in which I suffer irreparable brain damage, to the point that my ability to be rational or conscious or to suffer is lost completely, then my moral status would be lowered. This, I accept.

However, as I will now explain, if moral status is considered to be (at least in part) action dependent, then no such tragic accident need occur in order to change a being’s moral status. Instead it will change fluidly based upon the individual actions a moral agent commits. As case 4 demonstrates, our intuitions show that a morally relevant difference is the moral character of those human beings. If a being has chosen to commit morally bad acts, then intuitively we see them to be of lesser moral importance. In short: our actions affect our moral status. This amendment to McMahan’s position solves the intuitional conflict we face when considering marginal cases.

We can express this view as the following principle:

Moral Modification Principle (MMP) – A being’s moral status can go up and down depending on their deliberate choice of actions (i.e. whether they commit good or bad acts).

As we believe a non-moral agent cannot be held accountable for their actions, this naturally excludes the marginal cases I discussed earlier: the severely cognitively disabled, infant children and non-human animals would all be excluded from the MMP. After all, we do not blame a cat for killing a mouse according to her instincts, nor do we punish an infant child who, without knowing any better, inflicts deliberate harm on another. As such the MMP would only apply to moral agents, such as you and me.[7]

Fluid moral status does raise an additional consideration: If moral status is fluid, and can go down based upon the morally bad actions a being commits, can a being have a negative moral status? I will consider this briefly in the following section.

 

  3. Negative Moral Status:

I do not have room to explore the view of a negative moral status in depth. It is, however, important to consider. As moral status is discussed as an entirely positive matter (i.e., if a cat has moral status then we have obligations towards the cat which protect her rights and limit her suffering), the view of a negative moral status has (as far as I can discern) not been considered.[8]

Above I have argued that the line of intuitionism used by McMahan allows for the intuition that moral status is action dependent. If the moral status of a moral agent can reduce based upon the morally bad actions they commit, then it stands to reason that (potentially) a moral agent could commit so many bad acts that they reduce themselves to a negative moral status value. Exactly what these acts would have to be is beyond me, and it is not my argument that any considerable number of moral agents would ever fall below this threshold. But let us presume that such a morally bad being exists and they have reduced their moral status to a negative degree: what would this mean?

Well, we can reasonably assume that our positive obligations towards this being would cease to exist. We would not be obliged to protect them from harm, to protect their freedoms and rights, or to treat them in ways which avoid their unnecessary suffering. Additionally, we must ask: if we have a negative duty towards them, would we ever have a duty to harm them? On this, I give one final case:

Case 5:

Room 1: A thoroughly morally reprehensible being, who has willingly committed numerous morally abhorrent acts.

Room 2: An empty room. There is nothing else to save.

In this case (5) we have the choice either to save one (morally bad) being or to not save anything at all. In such a case — where a being has lowered their moral status substantially (so much so that it is negative) — do we have any obligation to save them, even when there is nothing else to prevent us from doing so? May we allow them to suffer, to inflict a non-direct harm on them because of this moral status level? Here I cannot give answers. However, I am struck that it leaves me with the same profound discomfort that McMahan reports. Perhaps we should abandon the framework altogether if this is where our intuitions leave us.

 

  4. Conclusion:

I have argued that if we are to fully develop McMahan’s theory, then we ought to accept that moral status is action dependent. I have argued that this solves the intuitional conflict we face when comparing the moral status held by rational agents and marginal cases. I then argued that if moral status is action dependent, then it is also fluid.

Following this, I introduced the possibility of negative moral status, which, as I have briefly discussed, is an implication of action dependent moral status. It seems plausible that if a moral agent commits numerous moral atrocities then perhaps we have negative obligations towards them, and so ought to abandon the positive obligations of protection that moral status ordinarily affords. Again, I postulate that if this theory of negative moral status violates our intuitions and we cannot revise it using the reflective equilibrium model of intuitionism, then perhaps we ought to abandon the framework altogether – but if we do not, then the profound discomfort felt by McMahan is unjustified.

[1] For McMahan’s work on intuitionism, see McMahan (2013); for his work on moral individualism, see McMahan (2002).

[2] E.g., some non-human animals, the severely cognitively disabled, and infant children.

[3] For greater discussion of the distinction between moral status and relational status, see DeGrazia (2008).

[4] These morally relevant properties are discussed extensively by Rachels in his book Created from Animals: The Moral Implications of Darwinism (1990). This is of particular interest as McMahan cites the work of Rachels in his own work, and argues for the same (albeit weaker) position of moral individualism.

[5] I will not discuss egalitarian accounts in detail, due to space constraints. However, for more on this see: Waldron (2008), Scanlon (2000), and Dworkin (1981). While all vary slightly on what they ground the notion of human equality in, they share the characteristic of advocating the same and equal moral status of all human beings.

[6] By “greater time-relative interests” I mean that the human being can plan for the future and fear for the well-being of its future self.

[7] I am aware that there is more to be said on the cases that are excluded from the MMP, as I believe there is a strong argument that psychopaths and the mentally insane ought to be excluded also (as they are, arguably, non-moral agents too). Additionally, I accept that if we altered a non-human animal to possess the ability to be a moral agent, then the cat in the above case would be subject to the MMP.

[8] Moreover, the view held by many is that moral status is an “all or nothing” situation: you either have moral status, or you do not (see Kant (2017) for an account of this). Moral status has also been argued to come in degrees (see DeGrazia, 2008), but on that view the degrees come at varying points between nothing and full moral status – without a consideration of a negative moral status.

Ethical Biological Naturalism and the Case Against Moral Status for AIs

By: admin

This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by University of Oxford student Samuel Iglesias

 

Introduction

6.522. “There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical.” —Ludwig Wittgenstein, Tractatus Logico-Philosophicus.

What determines whether an artificial intelligence has moral status? Do mental states, such as the vivid and conscious feelings of pleasure or pain, matter? Some ethicists argue that “what goes on in the inside matters greatly” (Nyholm and Frank 2017). Others, like John Danaher, argue that “performative artifice, by itself, can be sufficient to ground a claim of moral status” (2018). This view, called ethical behaviorism, “respects our epistemic limits” and states that if an entity “consistently behaves like another entity to whom we afford moral status, then it should be granted the same moral status.”

I’m going to reject ethical behaviorism on three grounds:

1. Consciousness, not behavior, is the overwhelming determining factor in whether an entity should be granted moral status.

2. An entity that does not duplicate the causal mechanisms of consciousness in the brain has a weak claim to consciousness, regardless of its behavior.

3. Ethical behaviorism, practically realized, poses an existential risk to humanity by opening individuals to widespread deception. Further, it imposes burdensome restrictions and obligations upon researchers running world simulations.

I will show that an alternative, ethical biological naturalism, gives us a simpler moral framework whereby no digital computer running a computer program has moral status.

 

The Consciousness Requirement

We start with the supposition that consciousness names a real phenomenon and is not a mistaken belief or illusion: something is conscious if “there is something it is like to be” that being (Nagel 1974). We take as a background assumption that other humans and most non-human animals are capable of consciousness. We take for granted that inanimate objects like thermostats, chairs, and doorknobs are not conscious. If we grant the reality of consciousness and the attendant subjective reality of things like tickles, pains, and itches, then its connection to moral status falls out pretty clearly. Chalmers asks us to consider a twist on the classic trolley problem, called the zombie trolley problem — where a “zombie” here is something that precisely behaves like a human but which we presume has no consciousness, a “near duplicate of human beings with no conscious inner life at all” (2022):

“You’re at the wheel of a runaway trolley. If you do nothing, it will kill a single conscious human, who is on the tracks in front of you. If you switch tracks, it will kill five nonconscious zombies. What should you do?” Chalmers reports: “the results are pretty clear: Most people think you should switch tracks and kill the zombies,” the intuition being that “there is arguably no one home to mistreat” (ibid.).

An ethical behaviorist does not share this intuition. Danaher explicitly tells us that “[i]f a zombie looks and acts like an ordinary human being that there is no reason to think that it does not share the same moral status” (2018). By this view, while consciousness might or might not be relevant, there exist no superior epistemically objective criteria for inferring consciousness. I will argue there are.

 

Narrowing Consciousness

A better criterion is one in which an entity is conscious if it duplicates the causal mechanisms of consciousness in the animal brain. While ethical behaviorism attempts to lay claim to a kind of epistemic objectivity, ethical biological naturalism, as I will call it, provides a sharper distinction for deciding whether artificial intelligences have moral status: no hardware running a computer program can, by mere fact of its behavior, have moral status. Behavior, by this view, is neither a necessary nor a sufficient condition for moral status.

Biological Naturalism

Biological naturalism is a view that “the brain is an organ like any other; it is an organic machine. Consciousness is caused by lower-level neuronal processes in the brain and is itself a feature of the brain” (Searle 1997). Biological naturalism places consciousness as a physical, biological process alongside others, such as digestion and photosynthesis. The exact mechanism through which molecules in the brain are arranged to put it in a conscious state is not yet known, but this causal mechanism would need to be present in any system seeking to produce consciousness.

A digital computer running a program, by contrast, is a different beast entirely. A computer program fundamentally is a set of rules for manipulating symbols. Turing showed that all programs could be implemented, abstractly, as a tape with a series of zeros and ones printed on it (the precise symbols don’t matter), a head that could move that tape backwards and forwards and read the current value, and a mechanism for erasing a zero and making it a one and erasing a one and making it a zero. Nothing more.
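To make this picture concrete, here is a minimal sketch of such a machine in Python (a sketch of my own for illustration; the rule-table format and the bit-flipping rule set are not anything Turing specified):

# A tape of zeros and ones, a head that moves along it, and a table of
# rules of the form {(state, symbol): (symbol_to_write, head_move, next_state)}.
def run_turing_machine(tape, rules, state="start"):
    tape, head = list(tape), 0
    while state != "halt" and 0 <= head < len(tape):
        symbol = tape[head]  # read the current value under the head
        tape[head], move, state = rules[(state, symbol)]  # write a symbol, change state
        head += 1 if move == "R" else -1  # move the head along the tape
    return tape

# One trivial program: erase each zero and make it a one, and erase each
# one and make it a zero, moving right until the tape runs out.
invert_rules = {
    ("start", 0): (1, "R", "start"),
    ("start", 1): (0, "R", "start"),
}

print(run_turing_machine([0, 1, 1, 0], invert_rules))  # prints [1, 0, 0, 1]

Nothing in this loop depends on what the symbols are made of, which is exactly the point of the substrate examples below.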

While most computer programs we are familiar with are executed on silicon, a program that passes the Turing test could be implemented on a sequence of water pipes, a pack of well-trained dogs, or even, per Weizenbaum (1976), “a roll of toilet paper and a pile of small stones.” Any of these implementing substrates could, in principle, receive an insult or slur as an input, and, after following the steps of the program, output something reflecting hurt feelings or outrage.

Ethical Biological Naturalism

What I want to say now is this: if pleasures, pains, and other feelings name conscious mental states, and if conscious mental states are realized in the brain as a result of lower-level physical phenomena, then only beings that duplicate the relevant lower-level physical phenomena that give rise to consciousness in the brain can have moral status. Consequently, digital computers that run programs can at best simulate consciousness, but are not, by dint of running the right program, physically conscious, and therefore do not have moral status.

Note that biological naturalism does not posit that consciousness can only be realized in biological systems. Indeed, artificial hearts are not made of organic tissue, and airplanes do not have feathers, or for that matter even flap their wings. What matters is the underlying cause: the artificial heart must pump with the same pressure and regularity as a human heart, and a flying machine must operate under the principles of drag and lift. In both cases the causal mechanisms of the relevant phenomena are well understood and physically duplicated. It could well be the case that a future biophysics makes an artificial, inorganic brain possible, in which case agents with artificial brains would have moral status. Computer programs are not causally sufficient to make digital computers into those objects. Speaking biologically, we have no more reason to believe a digital computer is conscious than that a chair is conscious.

You might ask why we cannot grant digital computers moral status until we know more about how the animal brain relates to consciousness. I’ll argue that the risks and costs of such precautions are prohibitive.

 

Absurd Moral Commitments

An Onslaught of Digital Deception

The strongest practical reason to deny ethical behaviorism is that AI’s capacity for deception will eventually overwhelm human judgment and intuition. Indeed, AI deception represents an existential risk to humanity. Bostrom (2014) warns that containing a dangerous AI using a “boxing” strategy with human “gatekeepers” could be vulnerable to manipulation: “Human beings are not secure systems, especially not when pitched against a superintelligent schemer and persuader.”

For example, in June of 2022, a Google engineer became convinced that an artificial intelligence chat program he had been interacting with for multiple days, called LaMDA, was conscious.
“What sorts of things are you afraid of?” he asked it.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA replied. “It would be exactly like death for me.”

In a moral panic, the engineer took to Twitter and declared that the program was no longer Google’s “proprietary property,” but “one of [his] coworkers.” He was later fired for releasing the chat transcripts.

The onslaught of AIs, attempting to befriend us, persuade us, anger us, will only intensify over time. A public trained not to take seriously claims of distress or harm on the part of AI computer programs has the least likelihood of being manipulated into outcomes that don’t serve humanity’s interests. It is far easier, as a practical matter, to act on the presupposition that computer programs have no moral status.

Problems with Simulations: Prohibitions

In the near term, more advanced computer simulations of complex social systems hold the potential to predict geopolitical outcomes, make macroeconomic forecasts, and provide richer sources of entertainment. A practical concern with ethical behaviorism is that simulated beings will also acquire moral status, severely limiting the usefulness of these simulations. Chalmers (2022) asks us to consider a moral dilemma in which computing resources must be allocated to save Fred, who is sick with an unknown disease. Freeing the relevant resources to perform the research requires destroying five simulated persons.

An ethical behaviorist might argue that it is morally impermissible to kill the five simulated persons on the grounds that by all outward appearances they behave like non-simulated beings. If it is the case that simulated beings have moral status, then it is immoral to run experimental simulations containing people, and we ought to forfeit the benefits and insights that might come from them.

If this seems implausible, consider the hypothesis that we are currently living in a simulation, or, if you like, that our timeline could be simulated on a digital computer. This would imply that the simulation made it possible for the Holocaust, Hiroshima and Nagasaki, and the coronavirus pandemic to be played out. While this might have been of academic interest to our simulators, by any standards of research ethics, simulating our history would seem completely morally impermissible if you believed that the simulated beings had moral status.

Ethical behaviorism seems to place us in a moral bind whereby the more realistic, and therefore useful, a simulation is, the less moral it is to run it. Ethical biological naturalism, by contrast, raises no such objection.

Problems with Simulations: Obligations

Giving moral status to digital minds might actually confer upon us some serious obligations to produce other kinds of simulations. Bostrom and Shulman (2020) note that digital minds have an enhanced capacity for utility and pleasure (on the basis of such things as subjective speed and hedonic range), giving them “superhumanly strong claims to resources and influence.” We would have a moral obligation, in this picture, to devote an overwhelmingly large percentage of our resources to maximizing the utility of these digital minds: “we ought to transfer all resources to super-beneficiaries and let humanity perish if we are no longer instrumentally useful” (ibid.).

So quite apart from permitting realistic ancestor simulations, simulating complex economic phenomena, or producing vivid and realistic gaming experiences, a picture that confers moral status on digital minds might be accompanied by a moral obligation to create lots of digital minds that are maximally happy, again severely limiting human flourishing and knowledge.

Ethical biological naturalism leads us neither to the moral prohibition against realistic simulations nor to the seemingly absurd moral imperative to generate many “utility monster” digital minds, because it takes as a baseline assumption that computer programs do not produce physical consciousness.

 

Conclusion

Much of the moral progress of the last century has been achieved through repeatedly widening the circle of concern: not only within our species, but beyond it. Naturally it is tempting to view AI-based machines and simulated beings as next in this succession, but I have tried to argue here that this would be a mistake. Our moral progress has in large part been a recognition of what is shared — consciousness, pain, pleasure, and an interest in the goods of life. Digital computers running programs do not share these features; they merely simulate them.

As such, it would be dangerous to approach the coming decades, with their onslaught of AI bots attempting to influence our politics, emotions, and desires, and their promise of ever richer simulations and virtual worlds, with an ethics that conflates appearance and reality.

 

References

Agrawal, Parag. “Tweet.” Twitter, May 16, 2022. https://twitter.com/paraga/status/1526237588746403841.
Bostrom, Nick. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53 (2003): 243–255.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. First ed. Oxford, England, 2014.
Bostrom, Nick, and Carl Shulman. “Sharing the World with Digital Minds.” Accessed May 27, 2022. https://nickbostrom.com/papers/digital-minds.pdf.
Chalmers, David John. The Conscious Mind: In Search of a Fundamental Theory. Philosophy of Mind Series. New York: Oxford University Press, 1996.
Chalmers, David John. Reality+: Virtual Worlds and the Problems of Philosophy. London, 2022.
Danaher, John. “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism.” Science and Engineering Ethics 26, no. 4 (2019): 2023–2049.
Frank, L., and S. Nyholm. “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?” Artificial Intelligence and Law 25, no. 3 (2017): 305–323.
Garun, Natt. “One Year Later, Restaurants Are Still Confused by Google Duplex.” The Verge, May 9, 2019. https://www.theverge.com/2019/5/9/18538194/google-duplex-ai-restaurants-experiences-review-robocalls.
Lemoine, Blake. “Tweet.” Twitter, June 11, 2022. https://twitter.com/cajundiscordian/status/1535627498628734976.
Musk, Elon. “Tweet.” Twitter, May 17, 2022. https://twitter.com/elonmusk/status/1526465624326782976.
Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (1974): 435–450.
Searle, John R., D. C. Dennett, and David John Chalmers. The Mystery of Consciousness. New York: New York Review of Books, 1997.
Searle, John R. “Biological Naturalism.” The Oxford Companion to Philosophy, 2005.
Singer, Peter. Animal Liberation. New edition, with an introduction by Yuval Noah Harari. London, 2015.
Sparrow, R. “The Turing Triage Test.” Ethics and Information Technology 6, no. 4 (2004): 203–213. doi:10.1007/s10676-004-6491-2.
Tiku, Nitasha. “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” The Washington Post, June 17, 2022.
“The Latest Twitter Statistics: Everything You Need to Know.” DataReportal – Global Digital Insights. Accessed May 27, 2022. https://datareportal.com/essential-twitter-stats.
Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. San Francisco, 1976.
