Practical Ethics

AI As A Writing Tool: Great Benefits, Major Pitfalls.

Written by Neil Levy

Large language models look set to transform every aspect of life over the coming decades. Some of these changes will be dramatic. I’m pretty unconcerned by the apocalyptic scenarios that preoccupy some people, but much more worried about the elimination of jobs (interestingly, the jobs that seem likeliest to be eliminated are those that require the most training: we may see a reversal of status and economic position between baristas and bureaucrats, bricklayers and barristers). Here, though, I’m going to look at much less dramatic, and very much near term, effects that LLMs might have on academic writing. I’m going to focus on the kind of writing I do in philosophy; LLMs will have different impacts on different disciplines.

A number of academics, writing in academic journals and on Twitter, have suggested that LLMs could be used to streamline the writing process. As they envisage it, LLMs could take on the burden of writing literature reviews and overviews, leaving the human free to undertake the more creative work involving the generation and testing of hypotheses (here, too, though, the LLM might have a role: it could generate candidate hypotheses for the human to choose between and refine, for example).

As a proponent of what we might call extended cognition, I'm sympathetic to the general idea. The extended mind hypothesis is a metaphysical claim: on this hypothesis, mind can extend beyond the skull and into the artifacts that enable certain kinds of thinking (my smartphone might partially constitute my mind, when its reminders, navigational capacities, search functions, and so on, are sufficiently integrated into my cognitive activities). The extended cognition hypothesis is agnostic about metaphysics: it simply emphasises the degree to which our thought is offloaded onto the world, including artifacts. New technologies enable new kinds of thinking, and this has always been true. As Richard Feynman said, notes on paper aren’t merely a record of thinking, “not really. It’s working. You have to work on paper, and this is paper.”

Extending cognition through new technologies opens cognitive horizons that are otherwise inaccessible to us. Supercomputers that perform millions of operations per second allow us to analyse data and perform mathematical calculations that were utterly closed to previous generations. But in opening up new horizons, new ways of extending thought can make others less accessible and have unwanted impacts on our native cognition. In the Phaedrus, Plato expressed the fear that writing would undermine our capacity to remember things. He may have been right about its effects on our memory, but that’s more than compensated for by our increased capacity to record things externally. There are no guarantees, however, that changes will always be for the better.

The idea of a division of labor between the relatively routine and the creative imagined above, with the LLM taking on the first and the human (alone or in collaboration with the LLM) the second, is not unattractive. It can be tiresome to review a literature one already knows well. Sometimes, I find myself in the position of having to rewrite pretty much the same points I’ve made in a previous paper in an introductory section. It’s only norms against self-plagiarism that prevent me from cutting and pasting from the older paper to the newer one. Allowing the LLM to do the work of rephrasing is a tempting option. We might think that whatever other costs and benefits they have, getting them to do the drudge work is surely an unalloyed benefit.

Perhaps – perhaps – it’s a benefit overall, but it’s not an unalloyed benefit. While we may approach a paper with a hypothesis in mind, and think of the introductory sections as merely sketching out the terrain, the relationship between that sketch and the meat of the paper is not always so straightforward. Sometimes, in rephrasing and summarizing ideas that I thought I already knew well, I discover relations between them I hadn’t noticed, or a lack of clarity that hadn’t struck me before. These realisations may lead to the reframing of the initial hypothesis, or the generation of a new hypothesis, or simply greater clarity than I had previously. What I took to be mere drudge work can’t be easily isolated from the more creative side of thought and writing.

More generally, the drudge work lays down the bedrock for creative activity. If I had never attempted to review and synthesise the work that appears in the review section of a paper, I wouldn’t know it well enough to be able to generate some of the hypotheses I go on to explore. That drudge work is an essential developmental stage. It’s also a developmental stage for a set of skills at navigating a terrain. This is a generalizable skill, one we can apply in future to different material and different debates. It may be that those who have already developed such skills – those who became academically mature before the advent of LLMs – can outsource drudge work at a smaller cost than those who have not yet developed this set of skills. Perhaps doing the task for oneself, boring though it may be, is necessary for a while, before we throw away the ladder we’ve climbed.

I’ve got no doubt that LLMs can and will be incorporated into academic writing, in ways and with effects we’re only beginning to imagine. Externalizing thought is extremely productive: it’s always been productive to write down your thoughts, because externalizing them allows us to reconfigure them, and to see connections that we mightn’t otherwise have noticed. The more complex the material, the greater the need to externalize. LLMs allow for a near instantaneous kind of externalization: we might regenerate multiple versions of a thought we’ve written once, and the permutations might allow us to see new connections. LLMs can also be used to generate new candidate hypotheses, to identify gaps in the literature, to synthesise and visualise data, and who yet knows what else? Perhaps the day will come – perhaps it will even be soon – when AI replaces the human researcher altogether. For now, it’s a powerful tool, perhaps even a partner, in the research process.

Some of those who have worried about the singularity – the postulated moment when AI design takes off, with ever more intelligent AIs designing even more intelligent AIs, leaving us humans in their dust – have proposed we might prevent human obsolescence by merging with the machines, perhaps even uploading our minds to artificial neural networks. I don’t know whether the singularity or human obsolescence are real threats, and I’m very sceptical about mind uploading. Whatever the prospects might be for mind uploading, right now we can integrate AIs into our thinking. We may not stay relevant for ever, and we may never merge with the machines, but right now they’re powerful tools for extending our cognition. They might homogenize prose and lead to a loss of creativity, or they might lead to an explosion of new approaches and ideas. They’re certain to have unanticipated costs, but the benefits will probably be much greater.

Inevitably, I ran this blogpost through an AI tool – the free version of Quillbot. It identified one or two typos, which of course I corrected. It also made a number of stylistic suggestions. I accepted almost none of them, but several led me to think I ought to rephrase the passage. Perhaps that’s a model for how AI might be useful for writing right now.

Stay Clear of the Door

An AI door, according to a generative AI

Written by David Lyreskog 

 

In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma. 

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.

The other day a colleague of mine shared a YouTube video of the presentation The AI Dilemma, by Tristan Harris and Aza Raskin. In it, they share with the audience their concerns about the rapid and somewhat wild development of artificial intelligence (AI) in the hands of a few tech giants. I highly recommend it. (The video, that is. Not the rapid and somewhat wild development of AI in the hands of a few tech giants).

 

Much like the thousands of signatories of the March open call to “pause giant AI experiments”, and recently the “Godfather of AI” Geoffrey Hinton, Harris and Raskin warn us that we are on the brink of major (negative, dangerous) social disruption due to the power of new AI technologies.

 

Indeed, there’s a bit of a public buzz about “AI ethics” in recent months.

 

While it is good that there is a general awareness and a public discussion about AI – or any majorly disruptive phenomenon for that matter – there’s a potential problem with the abstraction: AI is portrayed as this big, emerging, technological, behemoth which we cannot or will not control. But it has been almost three decades since humans were able to beat an AI at a game of chess. We have been using AI for many things, from medical diagnosis to climate prediction, with little to no concern about it besting us and/or stripping us of agency in these domains. In other domains, such as driving cars, and military applications of drones, there has been significantly more controversy.

All this is just to say that AI ethics is not for hedgehogs – it’s not “one big thing”[i] – and I believe that we need to actively avoid a narrative and a line of thinking which paints it to be. In examining the ethical dimensions of a multitude of AI inventions, then, we ought to take care to limit the scope of our inquiry to the domain in question at the very least.

 

So let us, for argument’s sake, return to that door at the Uehiro Centre, and the voice cautioning visitors to stay clear. Now, as far as I’m aware, the voice and the door are not part of an AI system. I also believe that there is no person who is tasked with waiting around for visitors asking for access, warning them of the impending door swing, and then manually opening the door. I believe it is a quite simple contraption, with a voice recording programmed to be played as the door opens. But does it make a difference to me, or other visitors, which of these possibilities is true?

 

We can call these possibilities:

Condition one (C1): AI door, created by humans.

Condition two (C2): Human speaker & door operator.

Condition three (C3): Automatic door & speaker, programmed by humans.

 

In C3, it seems that the outcome of the visitor’s action will always be the same after the buzzer is pushed or the key card is blipped: the voice will automatically say ‘stay clear of the door’, and the door will open. In C1 and C2, the same could be the case. But it could also be the case that the AI/human has been instructed to assess the risk for visitors on a case-by-case basis, and to only advise caution if there is imminent risk of collision or such (were this the case, it would mean I consistently stand too close to the door when visiting, but that is beside the point).

 

On the surface, I think there are some key differences between these conditions which could have an ethical or moral impact, where some differences are more interesting than others. In C1 and C2, the door opener makes a real-time assessment, rather than following a predetermined course of action in the way C3’s door opener does. More importantly, C2 is presumed to make this assessment from a place of concern, in a way which is impossible in C1 and C3 because the latter two are not moral agents, and therefore cannot be concerned. They simply do not have the capacity. And our inquiry could perhaps end here.

But it seems it would be a mistake.

 

What if something was to go wrong? Say the door swings open, but no voice warns me to stay clear, and so the door whacks me in the face[ii]. In C2, it seems the human whose job it is to warn me of the imminent danger might have done something morally wrong, assuming they knew what to expect from opening the door without warning me, but failed to do so due to negligence[iii]. In C1 and C3, on the other hand, while we may be upset with the door opener(s), we don’t believe that they did anything morally wrong – they just malfunctioned.

 

My colleague Alberto Giubilini highlighted the tensions in the morality of this landscape in what I thought was an excellent piece arguing that “It is not about AI, it is about humans”: we cannot trust AI, because trust is a relationship between moral agents, and AI does not (yet) have the capacity for moral agency and responsibility. We can, however, rely on AI to behave in a certain way (whether we should is a separate issue).

 

Similarly, while we may believe that a human should show concern for their fellow person, we should not expect the same from AIs, because they cannot be concerned.

 

Yet, if the automatic doors continue to whack visitors in the face, we may start feeling that someone should be responsible for this – not only legally, but morally: someone has a moral duty to ensure these doors are safe to pass through, right?

 

In doing so, we expand the field of inquiry, from the door opener to the programmer/constructor of the door opener, and perhaps to someone in charge of maintenance.

 

A couple of things pop to mind here.

 

First, when we find no immediate moral agent to hold responsible for a harmful event, we may expand the search field until we find one. That search seems to me to follow a systematic structure: if the door is automatic, we turn to call the support line, and if the support fails to fix the problem, but turns out to be an AI, we turn to whoever is in charge of support, and so on, until we find a moral agent.

 

Second, it seems to me that, if the door keeps slamming into visitors’ faces in C2, we will not only morally blame the door operator, but also whoever left them in charge of that door. So perhaps the systems-thinking does not only apply when there is a lack of moral agents, but also applies on a more general level when we are de facto dealing with complicated and/or complex systems of agents.

 

Third, let us conjure a condition four (C4) like so: the door is automatic, but in charge of maintenance support is an AI system that is usually very reliable, and in charge of the AI support system, in turn, is a (human) person.

 

If the person in charge of an AI support system that failed to provide adequate service to a faulty automatic door is to blame for anything, it is plausibly for not adequately maintaining the AI support system – but not for whacking people in the face with a door (because they didn’t do that). Yet, perhaps there is some form of moral responsibility for the face-whacking to be found within the system as a whole. That is, the compound of door, AI, human, etc. has a moral duty to avoid face-whacking, regardless of any individual moral agent’s ability to whack faces.

 

If this is correct, it seems to me that we again[iv] find that our traditional means of ascribing moral responsibility fails to capture key aspects of moral life: it is not the case that any agent is individually morally responsible for the face-whacking door, nor are there multiple agents who are individually or collectively responsible for the face-whacking door. Yet, there seems to be moral responsibility for face-whacking doors in the system. Where does it come from, and what is its nature and structure (if it has one)?

 

In this way, it seems that not only cognitive processes such as thinking and computing can be distributed throughout systems, but perhaps also moral capacities such as concern, accountability, and responsibility.

And in the end, I do not know to what extent it actually matters, at least in this specific domain. Because at the end of the day, I do not care much whether the door opener is human, an AI, or automatic.

 

I just need to know whether or not I need to stay clear of the door.

Notes & References.

[i] Berlin, I. (2013). The hedgehog and the fox: An essay on Tolstoy’s view of history. Princeton University Press.

[ii] I would like to emphasize that this is a completely hypothetical case, and that I take it to be safe to enter the Uehiro centre. The risk of face-whacking is, in my experience, minimal.

[iii] Let’s give them the benefit of the doubt here, and assume it wasn’t maleficence.

[iv] Together with Hazem Zohny, Julian Savulescu, and Ilina Singh, I have previously argued this to be the case in the domain of emerging technologies for collective thinking and decision-making, such as brain-to-brain interfaces. See the Open Access paper Merging Minds for more on this argument.

Resisting Nudges

By Gabriel De Marco

Consider the following case:

Classic Food Placement (FP): In order to encourage healthy eating, cafeteria staff place healthy food options at eye-level, whereas unhealthy options are placed lower down. Diners are more likely to pick healthy foods and less likely to pick unhealthy foods than they would have been otherwise.

This intervention is a paradigmatic case of what are often called nudges. Though many will think that it is OK to implement this sort of intervention for these sorts of purposes, there is a large debate about when exactly this is OK.

One common theme is that whether such an influence is easy to resist is going to be relevant to when the intervention is OK. If the intervention is not easy to resist, then, at the very least, this counts as a strike against implementing it. However, though there is often reference to the resistibility of a nudge, there is rarely explicit discussion of what it is for a nudge to be easy to resist, or for it to be easily resistible.

To begin giving an account of what it is for a nudge to be (easily) resistible, we need to figure out what it is an ability to do. So, what is it to resist a nudge?

Though authors tend not to explicitly answer this question, one natural, and perhaps common, answer is that to resist a nudge is to behave contrary to it. What sort of behavior is contrary to the nudge? An initially attractive thought is that, since nudges are intended to get agents to do (or not do) something, behaving contrary to a nudge just involves behaving contrary to what the nudge is intended to get the nudgee to do. Call this the intention conception of resistance. In Classic FP, the nudge is intended to get agents to pick the healthy food – suppose it is salad. The intention conception would tell us that resisting would involve not picking the salad. Though people tend not to be explicit about how they are understanding the claim that someone resists a nudge, the intention conception seems to capture the underlying theory.

There are, I will argue, issues with the intention conception. First, this way of conceiving of resistance is not helpful in cases where there is no intention behind the influence. Consider:

Random FP: The cafeteria manager is not aware of the placement effect on customer behavior. But she still needs to decide where to place the salad and the pudding; so, she flips a coin. It lands heads, the salad is placed at eye-level, and the cafeteria layout is identical to that found in Classic FP.

Here we have, at the very least, a nudge-like influence. The relevant feature of the environment is the same as in Classic FP, and if it has an effect in Classic FP, it presumably has the same effect in Random FP. And it is plausible that resisting the influence involves the same thing in both cases. The intention conception, however, would not tell us what resisting the influence involves in Random FP, insofar as there is no intention behind the nudge-like influence.

Second, although it is typically assumed that the effect of the nudge and the intention behind it are aligned, this need not always be the case. Consider:

Confused FP: The cafeteria manager receives a memo concerning the effect that food placement can have on customers. However, he is confused about the effect – perhaps there was a typo, or he misread it – such that he thinks that people are less likely than they would have been otherwise to pick the food that is at eye-level. Since he intends to get customers to purchase more chocolate pudding, he places the salad at eye-level, and the cafeteria layout is identical to that found in Classic FP.

As with Random FP, the relevant feature of the environment is the same as in Classic FP, and presumably has the same effect on customers (assuming it has one). On at least one way of understanding this, one might think that resisting the nudge, or the influence, involves the same thing across all three cases. But the intention behind the food placement in Confused FP is to get people to pick the pudding. On the intention conception, resisting this nudge involves not picking the pudding, which is something quite different than resisting the nudge in Classic FP. If resisting the nudge, or the influence, in all three FP cases involves the same behavior – if resisting the influence in all three cases involves not picking the salad – then the intention conception gets it wrong.

However, it seems at least plausible that there is some sense in which resisting the cafeteria manager’s attempt at influencing customers does involve different behavior in Classic FP and Confused FP. These managers tried to influence customers into doing different things, and whereas the former used a method that may help him achieve his goal, the latter did not.

So, I suggest that when we talk about resisting a nudge, we might be talking about two different things at once. One thing we could be talking about is resisting the influence itself. Doing this would involve the same behavior in all three variations of Classic FP. Another thing we might be talking about is something like resisting the would-be influencer’s attempt to influence, and this might involve different behavior in Classic FP and Confused FP; and, since there is no attempt in Random FP, there is no sense in which one can resist the attempt in this case. This difference is obscured when we make the common assumption that the effects of the nudge and the intention behind it are working in unison, but Confused FP pries these apart.

For the purposes of this post, we can just say that resisting the attempt at an influence involves acting contrary to the intention behind it; the intention conception is correct when it comes to resisting the attempt. But what does it take to resist the influence itself? Finding the answer to this is complicated, and I won’t resolve this in this post.

One might think that something like the intention conception will still serve us well enough. The issue with the intention conception, perhaps, was not so much the focus on intention, but rather the focus on the actual intention. Perhaps we can rescue a version of the intention conception if we focus on what intention the would-be influencer would have if they were informed. Call this the informed-intention conception. This may have more promise with respect to Random and Confused FP; the fact that there is no intention behind the food placement in Random FP does not preclude there from being an intention that the manager would have, were he informed of the effects. Nor does the fact that the manager in Confused FP is mistaken about the effect, and for similar reasons. What matters, on this conception, is what intention the cafeteria managers would have, were they informed of the effect of the food placement.

This apparent benefit, however, may be short-lived, and this becomes apparent once we try to work out what it would actually say about these, or similar, cases. What intention would the manager in Confused FP have, were he informed of the food placement effect? One might worry that in this hypothetical case, he would have the same intention – to get people to buy the chocolate pudding – and that it is the intervention that would change: he would have placed the pudding at eye-level instead. But this doesn’t seem to answer what it would take to resist the influence in Confused FP, which presumably is the same as in Classic FP.

Perhaps, instead, we could focus on the intention the manager would have, were he to a) be informed about the effect and b) implement the same influence. But this version faces some issues as well, insofar as there may not be a clear answer to what the manager would intend, were he informed. This can be made clearest, I think, by focusing on cases in which we stipulate the actual intention, or lack thereof. Consider, for example:

Apathetic FP: Everything is as in Random FP, but the cafeteria manager knows about the food placement effect, yet does not care at all what his customers pick. He flips the coin to decide how to place the items.

This case, again, features the same environmental feature, and so the same influence itself. Yet the manager is informed about the effect, and has no intention to modify behavior. Given this, it is not clear that there is any relevant intention that the manager would have, were he to be informed of the effect. If there is no such intention, this account fails to give an answer for what resisting the influence itself would involve in this case.

We could further modify the view by focusing on what intention the manager would have were he to, a) be informed about the effect, b) implement the same influence, and c) intend to influence customers. This could help to get around Apathetic FP insofar as we would now only be concerned with hypothetical cases in which he is not apathetic, and does have an intention.

But even this formulation faces a further issue. Nudgers who are informed of the effect of a nudge-like influence, and who intend for the intervention to have an effect on individuals, may still differ on the content of that intention. Consider another nudge that is often mentioned in the literature:

Calorie Count (CC): The food menu in a restaurant displays the calorie-count of individual food items.

One effect this has is that many customers form beliefs about the number of calories in a particular option. Another effect it might have is that individuals, when deliberating about what to eat, take calories into account, whereas they may not have otherwise. A further effect this might have is that people, in general, pick items with lower calorie-counts. Given these different effects, one might implement this nudge for different reasons, and with different intentions, even if one is informed of all of the above. Thus, consider the following two cases:

CC-Autonomy: As in CC, but the restaurant owner intends for people to make a more informed decision about what to eat.

CC-Paternalistic: As in CC, but the restaurant owner intends for people to pick healthier, lower-calorie, foods.

In both cases, the restaurant owners are aware of the same facts. However, the intentions are different; one intends for customers to make more informed decisions – regardless of what they decide to do – and the other intends for customers to pick lower-calorie meals. Yet the feature of the environment – the display of the calorie counts – is the same, and we can suppose that they would have the same effect on individuals. The last version of the informed-intention conception would therefore tell us that resisting the influence itself involves different behavior in these two cases; yet, whatever it is that resisting the CC influence involves, it would plausibly be the same across these two cases.

There is much more to say here, but ultimately, I suspect that appealing to the intentions of a would-be nudger – be they actual or hypothetical – will not help to give us an account of what it is to resist the influence itself.

 

Disclaimer:

Recently, there has been some pushback against the efficacy of nudges, sparked by PNAS’s publication of three letters responding to a recent meta-analysis of nudges (the authors reply here). These letters pointed out various issues with not only the meta-analysis but the whole body of work surrounding interventions often called nudges, possibly suggesting that, at the very least, we have no evidence for the effectiveness of nudges. However, others take a more moderate line, often pointing to another large study that seems to avoid at least some of these issues (and here is a brief interview with the authors of this large study). And even one of the critics of that study agrees that nudges sometimes work (see here for a response from the authors of that study).

 

 

Video Interview: Introducing Academic Visitor Prof Antonio Diéguez Lucena

By: admin

An interview with Prof Antonio Diéguez Lucena, professor of Logic and Philosophy of Science at the University of Málaga, Spain. Here he speaks of his research into the philosophy of biology and technology.

Horror, Fear and Trust in a Neonatal Unit

by Dominic Wilkinson

This week, closing evidence was presented in the long-running trial of Lucy Letby, a nurse who is accused of murdering seven newborn infants (and attempting to murder 10 more) in a neonatal intensive care unit in Chester between 2015 and 2016. In the coming weeks, the jury will consider and then return their verdict on the charges. That verdict will bring some closure to a criminal investigation that has taken more than four years, and a trial that has lasted nine months. It may (though may not) provide some relief for the grieving families whose babies died seven or eight years ago.
Yet, whatever verdict is reached, there are likely to be profound wider repercussions for all those who work in the care of very sick infants and children, and for many many other parents and families.
As someone who works in newborn intensive care, I have found the Letby case too awful, too close to the bone to follow closely. I do not know and (without access to all the evidence) cannot judge what conclusion the jury should reach. However, there are, ultimately, only two possibilities, both of which are horrendous.

Guilty.
The idea that a health professional working in the care of critically ill babies would deliberately and repeatedly attempt to kill infants is, quite literally, beyond comprehension. At least for me, it is impossible to imagine what would lead someone to do such a thing. Health professionals in neonatal units work closely in teams, in a relationship of trust, united by our shared commitment to the support of babies and families. We depend on each other – mutual support that makes it possible to provide care in highly stressful situations. If Letby is guilty, that very obviously represents the ultimate betrayal of her responsibilities to her patients and their families. However, beyond Chester, that conclusion will ripple across hospitals in the UK and beyond. It will shake to the core the trust between doctors and nurses and importantly between families and professionals. It will cause widespread distress and disquiet across neonatal units.

Not guilty.
The opposite possibility is almost as distressing. If Letby is innocent, that means that a health professional committed to the care of infants has been subject to a prolonged period of intense public and police scrutiny and attention. Her career and her life appear to have been irretrievably damaged. For health professionals, this raises the prospect that any of us could be vulnerable to similar accusations, to having our lives similarly turned upside down.

Wider effects 
Beyond professionals, and families of children who are currently in hospital, the Letby case will seriously and negatively impact another group – those who have lost a child or newborn infant. Many bereaved parents will have found the intense media attention to the tragic events in Chester has painfully stirred up memories and questions around their own child’s death. They may have had unanswered questions at the time, or they may now be re-examining events in the light of Letby, wondering whether they can believe the things that they were told and the answers that they were given. Even if (as will be the case for many or all of these) everything possible was done to care for their child and no malicious actors were present – how will they be able to know with certainty?

All of that is to say that the end of this case will not be the end of its effects. Trust is a precious commodity in healthcare. It may be in short supply in neonatal units in the wake of this case.

‘Naming and Shaming: Responding to Lookism’

On the evening of Friday 9 June, Prof. Heather Widdows presented the inaugural Michael Lockwood Memorial Lecture, as part of a weekend of events to celebrate the twentieth anniversary of the Oxford Uehiro Centre for Practical Ethics and the fifth anniversary of the MSt in Practical Ethics, based in the Centre. The title of Prof. Widdows’ fascinating and suggestive lecture was ‘Naming and Shaming: Responding to Lookism’.

Prof. Widdows began with a definition of lookism as ‘unjust discrimination on the basis of looks or appearance’. If an appointment committee, for example, knowingly or unknowingly offers a job to someone because of their appearance, when that appearance is itself irrelevant to the job in question, this is lookist, as analogous decisions based on race or sex would be, respectively, racist or sexist.

Prof. Widdows then provided evidence of lookism in employment and other domains, including the justice system and in the attitudes of young children. She suggested that lookism is less recognized than other forms of discrimination in part because its victims feel shame, and are hence unmotivated to call out that discrimination. Given that, she argued, we should seek to change that shame to anger or rage, as has happened in the case of sexism. This would increase the visibility of lookism, and make appearance at least a more plausible candidate for inclusion as a ’protected characteristic’ in equality legislation.

Philosophy can play, and is to some extent playing, its part, and here Prof. Widdows referred to the arguments in her Perfect Me: Beauty as an Ethical Ideal (Princeton, 2018) to the conclusion that our culture is now over-valuing, and mistakenly valuing, beauty to the extent that many are harmed through seeing their identity as dependent on their appearance. Prof. Widdows provided moving examples from stories posted on the website of the #everydaylookism campaign that emerged from her book, noting again the salience of shame in many of them.

A lively discussion followed the lecture, covering among many other issues whether choosing a partner on the basis of looks must count as an injustice, and whether calmly and clearly calling out discrimination might sometimes be a more appropriate or effective response than anger.

 

Guest Post: High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation

Written by David Thorstad, Global Priorities Institute; Junior Research Fellow, Kellogg College

This post is based on my paper “High risk, low reward: A challenge to the astronomical value of existential risk mitigation,” forthcoming in Philosophy and Public Affairs. The full paper is available here and I have also written a blog series about this paper here.

Derek Parfit (1984) asks us to compare two scenarios. In the first, a war kills 99% of all living humans. This would be a great catastrophe – far beyond anything humanity has ever experienced. But human civilization could, and likely would, be rebuilt.

In the second scenario, a war kills 100% of all living humans. This, Parfit urges, would be a far greater catastrophe, for in this scenario the entire human civilization would cease to exist. The world would perhaps never again know science, art, mathematics or philosophy. Our projects would be forever incomplete, and our cities ground to dust. Humanity would never settle the stars. The untold multitudes of descendants we could have left behind would instead never be born.

This thought has driven many philosophers to emphasize the importance of preventing existential risks, risks of catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2013, p. 15). For example, we might regulate weapons of mass destruction or seek to reduce what some see as a risk of extinction caused by rogue artificial intelligence.

Many philosophers think two things about existential risk. First, it is not only valuable, but astronomically valuable to do what we can to mitigate existential risk. After all, the future may hold unfathomable amounts of value, and existential risks threaten to reduce that value to naught. Call this the astronomical value thesis.

Second, increasingly many philosophers hold that humanity faces high levels of existential risk. In his bestselling book, The Precipice, Toby Ord (2020) puts the risk of existential catastrophe by 2100 at one in six: Russian roulette. Attendees at an existential risk conference at Oxford put existential risk by 2100 at nearly one in five (Sandberg and Bostrom 2008). And the Astronomer Royal, Martin Rees (2003), puts the risk of civilizational collapse by 2100 at fifty-fifty: a coinflip. Let existential risk pessimism be the claim that per-century levels of existential risk are very high.

Surely the following is an obvious truth: existential risk pessimism supports the astronomical value thesis. If we know anything about risks, it is that it is more important to mitigate large risks than it is to mitigate small risks. This means that defenders of the astronomical value thesis should be pessimists, aiming to convince us that humanity’s situation is dire, and opponents should be optimists, aiming to convince us that things really are not so bad.

In my paper, I argue that every word in the previous paragraph is false. At best, existential risk pessimism has no bearing on the astronomical value thesis. Across a range of modelling assumptions, matters are worse than this: existential risk pessimism strongly reduces the value of existential risk mitigation, often strongly enough to scuttle the astronomical value thesis singlehandedly. (See end notes for examples, and see the full paper for further details).

In the full paper, I explore a range of models and argue that there is only one viable way to reconcile existential risk pessimism with the astronomical value thesis. This is the time of perils hypothesis on which levels of existential risk are high now, but will soon drop to a permanently low level if only we survive the next few perilous centuries. However, I argue, the time of perils hypothesis is unlikely to be true, so there is likely an enduring tension between existential risk pessimism and the astronomical value thesis.

This tension has important philosophical implications. First, it means that unless more is said, many parties to debates about existential risk may have been arguing on behalf of their opponents. To many, it has seemed that a good way to support the moral importance of existential risk mitigation is to make alarmist predictions about the levels of existential risk facing humanity today, and that a good way to oppose the moral importance of existential risk mitigation is to argue that existential risk is in fact much lower than alarmists claim. However, unless more is said, matters are exactly the reverse: arguing that existential risk is high strongly reduces the value of existential risk mitigation, whereas arguing that existential risk is low strongly increases the value of existential risk mitigation.

Second, there has been a wave of recent support for longtermism, the doctrine that positively influencing the long-term future is a key moral priority of our time. When pressed to recommend concrete actions we can take to improve the long-term future of humanity, longtermists often point to existential risk mitigation. By the astronomical value thesis, longtermists hold, existential risk mitigation is very important. But this paper suggests an important qualification, since many longtermists are also pessimists about existential risk. As we have seen, existential risk pessimism may well be incompatible with the astronomical value thesis, in which case the value of existential risk mitigation may be too low to provide good support for longtermism.

End notes

The core modelling claim of the paper is that (1) at best, existential risk pessimism is irrelevant to the astronomical value thesis, and that (2) in most cases existential risk pessimism tells strongly against the astronomical value thesis. While full technical details are contained in the main paper, here are some models to illustrate claims (1) and (2).

On (1): To illustrate the best case, suppose that humanity faces a constant level of risk r per century. Suppose also that each century of existence has constant value v, if only we live to reach it. And suppose that all existential catastrophes lead to human extinction, so that no value will be realized after catastrophe. Then, it can be shown that the value of reducing existential risk in our century by some fraction f is f*v. In this model, pessimism has no bearing on the astronomical value thesis, since the starting level r of existential risk does not affect the value of existential risk mitigation. Moreover, the value of existential risk reduction is capped at v, the value of a single century of human life. Nothing to sneeze at, but hardly astronomical.

On (2): Making the model more realistic only serves to heighten the tension between pessimism and the astronomical value thesis. For example, suppose that centuries grow linearly in value over time, so that if this century has value v, the next century has value 2v, then 3v and so on. Keep the other modelling assumptions the same. Now, it can be shown that the value of reducing existential risk in our century by some fraction f is fv/r.

In this model, pessimism tells against the astronomical value thesis: if you think that existential risk is now 100 times greater than I think it is, you should be 100 times less enthusiastic about existential risk mitigation. Moreover, the value of existential risk reduction is capped at v/r. For the optimist, this quantity may be quite large, but not so for the pessimist. For example, if we estimate per-century risk r at 20%, then the value of existential risk mitigation is capped at five times the value of a single century – again, nothing to sneeze at, but not yet astronomical.
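The two results in the end notes can be checked numerically. The sketch below is my own illustration, not from the paper: it picks arbitrary parameters (v = 1, r = 0.2, f = 0.1) and truncates the infinite sums at 10,000 centuries, which is effectively infinite for these values. It confirms that the gain from mitigation is f*v in the constant-value model (independent of r) and f*v/r in the linear-growth model (shrinking as the pessimist's r grows).

```python
# Numerical check of the two toy models from the end notes.
# Parameter choices are mine, for illustration only.

def expected_value(r, v, centuries, growth=False, first_century_risk=None):
    """Expected total value when each century must be survived
    (probability 1 - risk) before its value is realised."""
    risk1 = r if first_century_risk is None else first_century_risk
    total, p_alive = 0.0, 1.0
    for n in range(1, centuries + 1):
        p_alive *= 1 - (risk1 if n == 1 else r)
        value_n = n * v if growth else v  # linear growth: v, 2v, 3v, ...
        total += p_alive * value_n
    return total

r, v, f, N = 0.2, 1.0, 0.1, 10_000

# Model (1): constant value per century. Predicted gain from
# cutting this century's risk by fraction f: exactly f * v.
gain_const = (expected_value(r, v, N, first_century_risk=r * (1 - f))
              - expected_value(r, v, N))

# Model (2): linearly growing value. Predicted gain: f * v / r.
gain_growth = (expected_value(r, v, N, growth=True, first_century_risk=r * (1 - f))
               - expected_value(r, v, N, growth=True))

print(gain_const)   # ~ 0.1  (= f * v, independent of r)
print(gain_growth)  # ~ 0.5  (= f * v / r, smaller the higher r is)
```

Raising r in the second model visibly shrinks the gain, which is the paper's core point: on these assumptions, the pessimist should value mitigation less, not more.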

 

References

Bostrom, Nick, “Existential risk prevention as global priority,” Global Policy 4.1 (2013): 15-31.

Ord, Toby, The precipice (NY: Bloomsbury, 2020).

Parfit, Derek, Reasons and persons (Oxford: Oxford, 1984).

Rees, Martin, Our final hour (NY: Basic Books, 2003).

Sandberg, Anders and Bostrom, Nick, “Global catastrophic risks survey,” Technical Report 2008-1 (2008), Future of Humanity Institute.

 

The Daft Discussion of Dangerous Dogs

Written by Rebecca Brown

Breed Specific Legislation

The UK currently imposes what’s called ‘Breed Specific Legislation’ in an effort to limit serious injuries from dog attacks. The legislation was introduced in 1991 and made it illegal to own, sell, abandon, give away or breed dogs deemed to belong to one of four banned breeds: the Pit Bull Terrier, Japanese Tosa, Dogo Argentino and Fila Brasileiro. These breeds have been selectively bred for purposes such as fighting, hunting and guarding, and are considered to have physical and behavioural attributes that mean they pose an unacceptable risk to the public. Dogs that meet the criteria for being a banned breed can be seized and either destroyed or permitted to remain with their owner under restrictive conditions. Breed Specific Legislation has recently been criticised by a number of organisations.

I do not intend to defend Breed Specific Legislation. It’s plausible that there are alternative, more effective and less damaging ways of reducing harm from dog attacks. However, many of the critiques of Breed Specific Legislation made by prominent animal charities and veterinary bodies are flawed. In pursuing what they no doubt see as a worthwhile end (the scrapping of Breed Specific Legislation), those publicly lobbying for change have made numerous confused and misleading arguments. Below, I outline why these arguments are misleading, implausible or weak, and how they fail to show that Breed Specific Legislation should be revoked.

Any dog can bite

One common claim is that “All dogs, whatever their breed type or size, are capable of showing aggression” and that “All dogs have the potential to be dangerous”. I don’t want to quibble about whether or not a Chihuahua is technically capable of inflicting a serious injury, but the relevant question is surely a) how likely a given dog is to bite a person, coupled with b) how damaging that bite is likely to be.

The Royal Society for the Prevention of Cruelty to Animals (RSPCA) points to the more relevant claim that “recent studies found no difference observed between legislated and non-legislated breeds in the medical treatment required following a bite, or in the severity of bite and the type of dog that bit.” Unfortunately, they don’t provide any citation or link to this evidence, so it’s impossible to evaluate. We are thus left with the frankly implausible claim that if you get bitten by a toy poodle puppy you’re as likely to be harmed as if you are bitten by a breed of dog that was selectively bred to be able to fight a bull to the death.

Elsewhere on the RSPCA website is a document about dog aggression. Here they seem to acknowledge – albeit carefully – the fairly obvious point that some dog breeds are more likely to show aggression, and are more dangerous when aggressive, than others:

Although it might seem that some dogs are born to be aggressive, it is more accurate to say that they are born with inherited tendencies that might, if not controlled, make aggressive behaviour more likely… There are inherited ways of behaving that are particular to some breeds or types of dogs that make it more likely for individuals to grow up to use aggression where others would not… Aggression is not a single characteristic, however there are breeds of dogs that have historically been used for specific purposes, such as for fighting dogs or other animals, or for guarding. Whilst these breeds may not be any more likely to show aggression, because of their physical and temperamental attributes if they do show aggression it is likely to have more serious consequences. Persistence in attack coupled with strong jaws can cause serious injuries.

Lack of evidence

Notwithstanding the above quote from the RSPCA, another common claim is that there’s no ‘robust scientific evidence’ that some dogs (and types of dog) are able to inflict more damaging injuries than others. Again, this seems to be a case of absence of evidence rather than evidence of absence. It may be worth recalling Yeh et al.’s (2018) article ‘Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial’, which usefully highlights how direct RCT evidence might not be necessary in order to conclude that interventions such as parachutes are probably a good idea when jumping out of aeroplanes. Thinking again of the toy poodle versus pit bull example, we can probably draw some fairly reliable conclusions based upon ‘mechanistic reasoning’, without needing to inflict different kinds of dog bites upon participants as part of a randomised trial.

Disconfirmatory evidence

It is also claimed that there is direct evidence that banned breeds are no more dangerous than other breeds, and that Breed Specific Legislation is completely ineffective at reducing (serious) injuries from dogs.

The RSPCA tells us that “Between 1989 and 2017, 48 people died in dog-related incidents. Of the 62 dogs involved, 53 were dog breeds not on the prohibited list.” And that “Only 8% of dangerously out of control dog cases involved banned breeds”. They also reference evidence that “in the past 20 years (1999-2019), the number of hospital admissions for the treatment of dog bites has increased by 154%, despite the prohibition of certain types of dogs”.

What can we make of this? First of all, the population data: dog bites requiring hospital treatment do not appear to have dropped as a result of the introduction of Breed Specific Legislation. This may well be the case, but it’s not possible to tell based on the information provided. We don’t know how dog ownership has changed over the time period studied – perhaps there were lots more dogs, and this resulted in more bites. It has certainly been speculated that the enthusiasm for pets during the covid lockdown (the number of dogs registered with the UK Kennel Club increased by nearly 40% between 2020 and 2021) might have contributed to more dog bite injuries in recent years. The population data might be suggestive, but it is far from conclusive evidence that Breed Specific Legislation doesn’t or hasn’t prevented any serious injuries from dogs since it was introduced.

Second, the breed-linked data, supposedly showing that banned breeds are responsible for only a small minority of serious injuries, including deaths. But wait: banned breeds, of which there are only four (amongst hundreds of other dog breeds) were involved in 1/7 (14%) of the fatal attacks on people during the period mentioned; they were implicated in 8% of cases of ‘dangerously out of control’ dogs. We don’t know how many dogs belonging to banned breeds exist in the UK, but surely it’s less than 14%, meaning they’re at least overrepresented in these samples.

And surely we can go further than that. I’m pretty keen on dogs and had a childhood spent obsessively recording and re-watching the annual coverage of Crufts. I can reliably tell the difference between an Italian Greyhound, a Whippet and a Greyhound, or describe to you what a Wirehaired Vizsla looks like. But I had never heard of three of the breeds on the banned list, and have possibly never seen them.

This Wikipedia article lists fatal dog attacks in the UK, including breed information where known. The breeds responsible for fatal attacks on humans since 1980 are largely unsurprising. In the below table I’ve summarised the data from the Wikipedia article. (Note that I’ve grouped some breeds / breed types together [Mastiffs; Bulldogs other than American Bulldogs]).

Breed | Number of fatalities involved in since 1980 (including as part of a cross-breed)
Staffordshire Bull Terrier | 12
American Bulldog | 9
American Bully XL | 7
Mastiff / Bull Mastiff / Neapolitan Mastiff / Italian Mastiff | 7
Alsatian / German Shepherd | 6
American Pit Bull Terrier / Pit Bull type | 6
Rottweiler | 6
Bordeaux Bulldog / Bulldog type / Aylestone Bulldog / British Bulldog | 4
Jack Russell Terrier | 3
Bull Terrier | 2
Cane Corso | 2
Husky | 2
Doberman | 1
Alaskan Malamute | 1
Lakeland Terrier | 1
Presa Canario | 1
Chow Chow | 1
Again, without baseline information about how common these breeds are, one must be cautious about extrapolating from this list to draw conclusions about the dangers of specific breeds. Yet what is clear is that, with the exception of Jack Russell Terriers and a single Lakeland Terrier (responsible for killing three newborn infants), small dogs do not tend to kill people. Dogs that end up inflicting injuries sufficient to kill people tend to be powerful types, which have been selectively bred for guarding, fighting or hunting.

These dogs don’t deserve to die

One unfortunate refrain used by critics including the RSPCA and British Veterinary Association is that dogs deemed to belong to a banned breed and which do not qualify for an exemption, despite showing no previous signs of aggressive or dangerous behaviour, do not deserve to die. Yet this seems to mischaracterise what the Breed Specific Legislation is intended to do. It is clearly not intended to hand out punishment to ‘bad’ dogs, but instead to act as a preventative measure, to stop injuries in the first place. Indeed, since dogs are not moral agents it is entirely inappropriate to punish them on the basis that they ‘deserve’ it. We should of course treat dogs kindly and not cause them unnecessary suffering. But to suggest that Breed Specific Legislation is a form of undeserved punishment of ‘innocent’ dogs is misleading. Even the destruction of dogs that have seriously injured or killed people should not be construed as punishment: the justification for such actions is to protect people from future attacks from a dog proven to be dangerous.

Banning breeds implies that all other dogs are safe

Commentators suggest that, by banning certain breeds, the government sends the message that only these breeds are dangerous, that dogs belonging to other breeds are never dangerous, and that it is breed alone (rather than circumstances, handling, training, etc.) that makes a dog dangerous. Yet I wonder if this is really the case. It hadn’t crossed my mind that only those banned breeds were (potentially) dangerous, and I would be surprised if other people concluded this from the legislation. Indeed, it seems equally plausible that banning specific breeds would raise awareness of general dangers posed by all dogs.

A vet writing for the British Veterinary Association states “there has never been an onus on anyone, young or old, to behave sensibly and respectfully around dogs.” The problem is that many serious bites – particularly those resulting in fatalities – happen to children. 31 of the 64 fatalities in the UK since 1980 listed on the Wikipedia page were in children under 12. They include a number of babies, one of whom was 5 days old when she died. Children are particularly vulnerable to dog attacks: they are small (and thus injuries may be more severe and they are less able to escape or protect themselves if attacked); they are more likely to provoke dogs through excitement or rough treatment; they are more likely to miss the signs that a dog feels threatened and may attack. But there is a limit to what small children can be taught or the extent to which parents can control their behaviour, and we inevitably place them at risk by putting them in shared spaces with dogs. Of course there is an onus on people to treat dogs carefully and kindly, and to ensure those without the capacity to do this are protected. Yet focusing on whether or not children or their parents could (or should) have behaved differently once again seems to attend to questions of blame and desert, rather than the core matter: how best to prevent injuries in the first place.

Whilst it may well be a good idea to provide training and support to people regarding how to safely handle and act around dogs, this is not incompatible with Breed Specific Legislation. We need not accept the supposition that people are too stupid to simultaneously appreciate that poor training, irresponsible handling, stressful circumstances and the inherited predispositions and physical characteristics of a dog can all contribute to how likely it is to bite and injure someone.

Don’t judge a book by its cover

An interesting argument presented in criticism of Breed Specific Legislation is that the law is implemented based on the extent to which a dog fits the breed standard of a banned breed. The RSPCA and British Veterinary Association complain that the legislation fails to use, for instance, genetic testing to determine whether or not a dog belongs to a particular (banned) breed.

But isn’t this disingenuous? First of all, if it is physical attributes that play a part in how severe a dog bite may turn out to be, then using physical attributes as a guide to which individuals should be restricted seems not unreasonable. Second, I do not think that, were the legislation to be enforced via genetic testing rather than breed standard, the RSPCA, British Veterinary Association or any of the other opposed groups would be any happier with it. Third, it’s not clear what additional information a genetic test will give you beyond the visible phenotypes that are used in the current system to determine breed. From my Crufts-watching days, the judges were looking at how well the dogs matched up against the breed standard. Of course, genes are partially responsible for what dogs end up looking like, but the relevant thing for dog breeds seems to be what they are actually like rather than what their genetic make-up is. The ‘you shouldn’t judge a book by its cover’ claim seems to be a piece of rhetoric rather than a genuine complaint about how the legislation is enforced.

A plea for better communications

As I said, I don’t want to defend Breed Specific Legislation. But it’s hard to evaluate the value of a policy on the basis of communications that are so clearly aimed at achieving a particular outcome (scrapping the legislation) rather than providing informative arguments. At the moment, the arguments made by the RSPCA, the British Veterinary Association and others could apply just as well to wolves as to any dog breed. If I wanted to keep a wolf then, extrapolating from the above claims, I could defend myself against accusations that I was risking others’ safety by saying “any dog can bite, and no one has shown me evidence that a wolf bite is worse than any other dog’s”. Indeed, none of the lethal attacks on humans by dogs in the UK in the last 50 years came from wolves. As long as the wolf hasn’t bitten anyone yet, subjecting it to restrictions would be equivalent to ‘punishing the innocent’. Children just need to treat wolves respectfully and then they should be fine; there should be more of an onus on (potential) victims of wolf attacks to avoid putting themselves at risk, rather than restricting people’s freedom to keep wolves. And even though it looks and behaves exactly like a wolf, it’s not fair to say it’s a wolf unless this has been genetically proven.

Is this really the argument the RSPCA, British Veterinary Association and others want to make? It seems to me it would be vastly preferable to present the best evidence in support of their claims, and acknowledge where uncertainty lies so people can form a sensible and well-informed view on Breed Specific Legislation.

Perceptual diversity and philosophical belief

Reading up on Derek Parfit’s theory of personal identity as part of my research on non-essential accounts of self in literature, philosophy and neuroscience, I was astounded to come across a New Yorker feature on the philosopher which describes his inability to visualise imagery as an anomaly:

“He has few memories of his past, and he almost never thinks about it, although his memory for other things is very good. He attributes this to his inability to form mental images. Although he recognizes familiar things when he sees them, he cannot call up images of them afterward in his head: he cannot visualize even so simple an image as a flag; he cannot, when he is away, recall his wife’s face. (This condition is rare but not unheard of; it has been proposed that it is more common in people who think in abstractions.) He has always believed that this is why he never thinks about his childhood. He imagines other people, in quiet moments, playing their memories in their heads like wonderful old movies, whereas his few memories are stored as propositions, as sentences, with none of the vividness of a picture.”

Surely, Parfit’s experience must be representative of the norm, I thought – i.e. to be able to see only things that are actually there, physically present and immediately visible in one’s external surroundings? I certainly never had this seeming superpower of creating images myself, and had always assumed that my subjective experience corresponded to the average.

As I was soon to find out, however, the absence of a visual component to Parfit’s imagination is part of a neurological condition which affects an estimated 2-5% of the population, including myself: aphantasia.

Recent studies into aphantasia (e.g. https://pubmed.ncbi.nlm.nih.gov/34296179/) connect it to a number of characteristics and personality traits: introversion and autistic spectrum features; difficulty with recognition, including face recognition; impoverished autobiographical memory and less event detail in general memory; difficulty with atemporal and future-directed imagination, including difficulties with projecting oneself into mentally constructed scenes and the future; reduced mind-wandering tendency; elevated levels of IQ; and mathematical and scientific occupations.

In addition to these, I think aphantasia is likely connected to a certain philosophical belief or position, namely the non-essentialist view of the self that is found in both the reductionist account of personal identity in Western philosophy and the no-self doctrine in Eastern contemplative traditions. I offer a more extensive argument for this connection here: https://psyche.co/ideas/aphantasia-can-be-a-gift-to-philosophers-and-critics-like-me.

In Reasons and Persons Derek Parfit formulates the view that personal identity is reducible to physical and psychological continuity of mental states, and that there is no ‘further fact’, diachronic entity, or essence that determines identity. The belief that persons are separate entities with continuously existing selves, he argues, is to a great degree an illusion. The New Yorker profile only fleetingly connects Parfit’s philosophy to his aphantasia, but to me it seems an obviously relevant piece of explanation. Our philosophical views are based on our intuitions; our perceptual experience of the world guides our ideas about it. 

As modern neuroscience is giving us deeper insight into the wide neuro- and perceptual diversity of people, it is also giving us new explanations of differences in people’s experience of reality and, accordingly, their philosophical intuitions and beliefs. According to the predictive processing theory of brain function, the reality we experience as objective and independently existing is to a large degree created by our brain: a projection based on our brain’s best guesses about external reality and, as such, a form of controlled hallucination. And as Anil Seth has recently pointed out, since we all have different brains, we will naturally make different guesses about the external reality we encounter and thus have different perceptual experiences of reality. “Just as it serves us well to occasionally question our social and political beliefs, it’s useful to know that others can literally see things differently to us, and that these differences may evolve into different beliefs and behaviours.”

The growing insight into perceptual diversity, then, opens up the possibility of understanding and explaining philosophers’ theories biographically, and as such allows for a new form of ‘neuro-biographical’ reading of philosophy.

It seems plausible that the flipside of a reduced sense of the past and future is an increased connection to and absorption in the present and a weaker identification with a continuous personal narrative and a coherent and substantial self. Parfit’s diminished sense of continuity of identity and substantiality of his own self – which he himself explicitly links to his aphantasia – may well have led him towards or at least strengthened his anti-essential views of personhood. 

Likewise, my own aphantasia could at least in part explain my intellectual preference for and easy identification with non-essential conceptions of self in both Western philosophy and Buddhism. The question is, then, whether the condition of aphantasia gives people like Parfit and me a shortcut to enlightenment and clearer philosophical insight into and intuitive understanding of the human condition and nature of reality. Or, does it obscure the truth by barring us from dimensions that are integral to the most common human experience and installing intuitions that do not correspond to the norm?

As neuroscience and neurotechnology continue to develop and give us a better understanding of the variations and differences in the neurological constitution of brains, it will be interesting to see how far the awareness of perceptual differences and specificity can reach in explaining differences in philosophical intuitions and beliefs – and to what extent it can disqualify philosophical positions and theories. The notion of perceptual diversity offers a valuable route for philosophers to exercise self-criticism, scrutinise their theories and intuitions, and investigate the underlying perceptions and experiences. At the same time, it troubles some of the fundamental concepts on which the discipline of philosophy relies, paving the way for further relativisation and destabilisation of the already undermined notions of objective truth and rationality, and potentially removing us further from consensus.

From Experience to Insight – the Personal Dimension of Philosophy

Written by Muriel Leuenberger

The more philosophers I have come to know, the more I realize how deeply personal philosophy is. Philosophical positions often emerge from personal experience and character – even the seemingly most technical, detached, and abstract ones. As Iris Murdoch wrote: “To do philosophy is to explore one’s own temperament, and yet at the same time to attempt to discover the truth.” Philosophy is an expression of how one sees the world, a clarification, development, and defense of “an outlook that defines who someone is”, to borrow the words of Kieran Setiya.

This personal dimension of philosophy becomes evident in the new philosophical positions and topics that emerge when people with different personal experiences and points of view start to do philosophy. The most prominent example is how women in philosophy, particularly in the last 50 years, have contributed new perspectives – a breath of fresh air in old, stuffy rooms. Philosophy’s allegedly objective view from nowhere was rather the view from a particularly male perspective. Care ethics, feminist philosophy, and philosophy of pregnancy are just some areas where the inclusion of women in philosophy, with their own outlook and priorities, has advanced the discipline.[i]

The relational turn that can be observed in the philosophy of identity can be seen as a recent addition to this list. Relational identity is the idea that who you are is not just defined by your own properties and characteristics but also by how others define you. Others define us through concepts and norms we acquire in a social context that shape how we see ourselves and the world; they define us through our relations with them as friends, siblings, or members of an ethnic group or a book club; and they have the power to constrain our scope of action or provide opportunities. The latter can be a particularly incisive way of being defined by others. For example, by banning women in Afghanistan from universities, the Taliban is defining who they can be. They can no longer become a doctor who dedicates their life to and finds meaning in caring for their patients. Insofar as we are defined by our actions, we can be defined by others who exercise control over what we can do in our lives.

Philosophy has typically been pursued by people whose lives were in some sense open to them. They had a range of opportunities – doing philosophy was one of them – and did not face strongly limiting constraints and expectations, as in the example of an Afghan woman today. Academia, and with it philosophy, has become more accessible in many parts of the world. This means that more people are doing philosophy who either experienced more limiting constraints posed by others, or who are aware that only very recent changes, or the fact that they were born in a certain country, spared them from a life of far-reaching constraints. People who have experienced, or can readily empathize with, how others can define one’s identity have entered the debate on identity. This development makes the emergence and rising popularity of relational identity views comprehensible.

I want to highlight a further, related reason for how the personal dimension of philosophy creates new trends besides the commonly mentioned shift in who is doing philosophy. The growing literature on philosophy concerned with topics and positions relevant to and based on the experience of a more diverse range of people can also be traced back to a diversification in whose testimony is being heard and taken seriously. As Miranda Fricker argued, marginalized groups are often faced with testimonial injustice – their testimonies are considered less credible due to prejudices related to their identity. For most of the history of philosophy, testimonies of experiences and viewpoints of women, non-western, non-binary, and non-white people were not heard, not taken as seriously or relevant, and not readily accessible. Globalization, digitalization, and a cultural shift towards more openness and equality are gradually changing this (although we still have a long way to go). The increased accessibility and ascribed credibility of testimonies of diverse experiences can inspire new topics and positions in philosophers who do not share those experiences but have come to learn about and empathize with them.

Philosophy clearly profits from taking other perspectives into account. We can get a richer picture of reality, a broader understanding of the moral landscape, raise interesting metaphysical questions, and new philosophical positions can come into sight that challenge established old doctrines. The deeply personal character of philosophy makes the inclusion of and attention to different voices all the more pressing.

[i] Vintiadis, Elly (2021, August). The view from her. Aeon. https://aeon.co/essays/is-there-something-special-about-the-way-women-do-philosophy

How Brain-to-Brain Interfaces Will Make Things Difficult for Us

Written by David Lyreskog

Four images depicting ‘Hivemind Brain-Computer Interfaces’, as imagined by the AI art generator Midjourney.



A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing, and hybrid and artificial intelligence, promises to transform how we think about collective and collaborative cognitive tasks. With implementations ranging from research to entertainment, and from therapeutics to military applications, as these tools continue to improve, we need to anticipate and monitor their impacts – how they may affect our society, but also how they may reshape our fundamental understanding of agency, responsibility, and other concepts which ground our moral landscapes.

In a new paper, I, together with Dr. Hazem Zohny, Prof. Julian Savulescu, and Prof. Ilina Singh, show how these new technologies may reshape fundamental components of widely accepted concepts pertaining to moral behaviour. The paper, titled ‘Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds’, was just published in Neuroethics, and is freely available as an Open Access article through the link above.

In the paper, we argue that the received views on how we (should) ascribe responsibility to individuals and collectives map poorly onto networks of these ‘Collective Minds’. The intimately collective nature of direct multiple-brain interfaces, for instance, where human minds can collaborate on and complete complex tasks without necessarily being in the same room – or even on the same continent! – seems to suggest a collectivist moral framework for ascribing agency and responsibility. However, the technologies we are seeing in R&D do not necessitate meeting the criteria we would normally turn to for the ascription of such frameworks; they do not, for instance, seem to require that participants have shared goals, know what the goals of other participants are, or even know whether they are collaborating with another person or a computer.

In anticipating and assessing the ethical impacts of Collective Minds, we propose that we move beyond binary approaches to thinking about agency and responsibility (i.e. that they are either individual or collective), and that relevant frameworks for now focus on other aspects of significance to ethical analysis, such as (a) technical specifications of the Collective Mind, (b) the domain in which the technology is deployed, and (c) the reversibility of its physical and mental impacts. However, in the future, we will arguably need to find other ways to assess agency constellations and responsibility distribution, lest we abandon these concepts completely in this domain.

Video Interview: Introducing Dr Emma Dore Horgan

By: admin

An interview with OUC academic visitor and former Oxford Uehiro Centre DPhil student Dr Emma Dore Horgan on her research into the ethics of neuro-interventions for offenders.

Prof Matthias Braun discussing the value of academic collaboration

By: admin

In a new short video academic visitor Matthias Braun discusses the value of academic collaboration with the Uehiro Centre’s Rebecca Brown.


Video Interview: Introducing Oxford Uehiro Centre’s Academic Visitor, Prof Dr Matthias Braun

By: admin

This is the first of a new series of short videos produced by the OUC, introducing the academic visitors at the Oxford Uehiro Centre and the practical ethics research that they are involved in.

Oxford Uehiro Prize in Practical Ethics: Turning up the Hedonic Treadmill: Is It Morally Impermissible for Parents to Give Their Children a Luxurious Standard of Living?

By: admin

This essay was the overall winner in the Undergraduate Category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by University of Oxford student, Lukas Joosten

Most parents think they are helping their children when they give them a very high standard of life. This essay argues that giving luxuries to your children can, in fact, be morally impermissible. The core of my argument is that when parents give their children a luxurious standard of life, they foist an expectation for a higher standard of living upon their children, reducing their lifetime wellbeing if they cannot afford this standard in adulthood.

I argue for this conclusion in four steps. Firstly, I discuss how one can harm someone by changing their preferences. Secondly, I develop a model for the general permissibility of gift giving in the context of adaptive preferences. Thirdly, I apply this to the case of parental giving, arguing it is uniquely problematic. Lastly, I respond to a series of objections to the main argument.  

I call the practice in question luxury parenting. Luxury parenting consists of providing certain luxuries to your child which go beyond a reasonably good standard of living. I will consider this through a framework of gift giving, since luxury parenting can be understood as the continual gifting of certain luxuries to children. While my argument also applies to singular gifts of luxury to children, it is targeted at the continual provision of luxury goods and services to ensure a high standard of living throughout childhood.


Section 1: Preference Screwing

When we discuss harming one’s wellbeing, we are usually referring to taking some action which changes the actor’s situation so that they are further from their preferences. However, a person’s wellbeing can be harmed in the opposite way as well, by changing their preferences away from their situation. Consider the following example.

Wine pill: Bob secretly administers a pill to Will which changes his preferences so that he no longer enjoys cheap wine.

Will has been harmed here in some morally significant way without having received any immediate disbenefit. The harm consists in the effect on future preferences. We can call this type of harming “preference screwing”.

Preference Screwing: Making it more difficult for an actor to achieve a certain level of utility by changing the actor’s preferences so that there is a larger divergence between the preference set and the actor’s option set.


Section 2: Adaptive Preferences and Gift-giving

The theory of adaptive preferences tells us that people tend to return to their baseline happiness after positive or negative shocks to their wellbeing, because people’s preferences adapt to their current situation. I argue this process of preference adaptation implies that some instances of gift-giving are impermissible, because consuming a high-quality gift screws with the preferences of the recipient, so that they derive lower utility from future consumption of lower-quality variants of the good they were gifted.

There exists a vast literature debating the accuracy of adaptive preferences.[1] However, my argument only requires a weak, restricted form of adaptive preferences. Namely, it simply requires that there is some negative impact of consuming expensive goods on the enjoyment of future cheap goods. That such an impact exists is generally empirically supported, even if the strength of the impact is debatable.[2]

It might be objected that if preferences are adaptive, then gift-giving has no long-term harm since, upon returning to the lower-quality good, preferences will adapt downward immediately. There are two independent reasons why this is not a problem for my argument.

Firstly, I don’t assume (and the empirics don’t support) complete adaptation, only partial adaptation. This means that once the preferences of an actor have (partially) adapted up after consuming the higher-quality good, then if the actor returns to the lower-quality good, their preferences will adapt down but not completely, so there remains a long-lasting upwards pressure on their preferences.

Secondly, as discussed in section 3, since childhood is a formative life-phase, preferences adapt more quickly and more permanently for children. Luxury parenting thus fixes children’s preferences at a high point, which will take much longer to adapt back down in adulthood.

This allows us to develop a model of gift giving. When A gifts X to B, B’s lifetime wellbeing is affected in two ways. Firstly, there is the immediate positive (or negative, if a particularly poor gift) utility derived from B’s consumption of X. Call this the immediate utility. Secondly, there is the long-term impact of the gift’s preference screwing. The preference screwing effect is the total harm to the lifetime wellbeing of B incurred as a result of the preference screwing caused by consuming X. This allows us to state the following:

Net wellbeing impact of gift giving = immediate utility – preference screwing effect
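The formula above lends itself to a toy numerical sketch. The following Python snippet is purely illustrative (the adaptation dynamics, function names, and all numbers are my own assumptions, not part of the essay): it simply shows how partial upward adaptation of preferences can make a lavish gift net negative over a long horizon, while a modest gift over a short horizon stays net positive.

```python
# Hypothetical sketch of the essay's model:
# net wellbeing impact = immediate utility - preference screwing effect.
# "Preference screwing" is modelled as the recipient's preference baseline
# adapting partially upward after a luxury good, then failing to adapt
# fully back down (a stylised version of the essay's partial adaptation).

def preference_baseline(current, consumed_quality, adapt_up=0.5, adapt_down=0.3):
    """Return the new preference baseline after consuming a good.
    Upward adaptation is assumed stronger than downward adaptation."""
    if consumed_quality >= current:
        return current + adapt_up * (consumed_quality - current)
    return current - adapt_down * (current - consumed_quality)

def net_impact_of_gift(baseline, gift_quality, future_good_quality, periods=10):
    """Immediate utility minus the accumulated preference screwing effect."""
    immediate = gift_quality - baseline          # enjoyment relative to expectations
    b = preference_baseline(baseline, gift_quality)
    screwing = 0.0
    for _ in range(periods):                     # future consumption of the cheaper good
        screwing += max(0.0, b - future_good_quality)
        b = preference_baseline(b, future_good_quality)
    return immediate - screwing
```

On these stylised numbers, a single lavish gift (quality 9 against a baseline of 3) comes out net negative once ten periods of dampened enjoyment of the cheap good are counted, whereas a modest gift (quality 4) over two periods remains net positive. Nothing hangs on the specific values; the point is only that the sign of the net impact depends on how sticky the upward adaptation is and how long the cheaper standard of living lasts afterwards.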

Now, consider that preference screwing through gift-giving is usually not considered a form of wronging. Consider the following example:

Wine gift: Bob gifts a bottle of Château Latour to Will for his birthday. After thoroughly enjoying the wine, Will no longer enjoys cheap wines as much.

In wine gift, we would not say that Bob has wronged Will. There are two distinctions between wine gift and wine pill which explain why gift giving to adults is generally permissible.

Firstly, wine gift is not necessarily a net negative for Will’s lifetime utility. The spike in utility of drinking the gifted bottle may outweigh the loss in utility from the future discounted happiness of drinking cheap wines. In wine pill, there is only a negative impact on Will’s utility (ignoring the health effects).

Secondly, and crucially, Will consents to receiving the gift. Generally, we think that a person’s potential complaint against a particular action is much weaker when they consented to that action being conducted upon them.

This allows us to say that the permissibility of gift giving is a function of the following two parameters:

  1. Expected net wellbeing impact of gift giving (henceforth expected net impact)
  2. Level of consent

The weight given to each is going to vary with one’s background intuitions on paternalism. Anti-paternalists might thus completely disregard the first parameter, arguing that given sufficient consent, gift-giving is always permissible. My argument accommodates a broad pluralism on this matter, since it avoids the second parameter altogether, as discussed in section 3.


Section 3: Giving Children Luxuries

By evaluating luxury parenting on the two parameters, I argue that many instances of the practice are impermissible.

Firstly, consider level of consent. Children are usually thought to lack the required capacities for autonomous decision making, such as critical thinking, time-relative agency (ownership of future interests), and independence.[3] This means that children, generally, cannot consent to receiving luxuries from their parents.

As such, we must adapt the model of consent for children. Brighouse suggests that the autonomy rights of children express themselves as fiduciary duties upon parents.[4] Parents thus have the authority to make decisions for their children, but this authority is limited by the duty to act in the child’s best interest. This means that parents can permissibly give gifts to children, but only when those gifts appear to be in the best interest of the child. Assume now that, ceteris paribus, the non-welfare interests of children are unaffected in cases of gift giving. Given this assumption, we can say that the permissibility of child gift giving boils down to the expected net impact.

Luxury parenting is thus usually impermissible since it is particularly likely to lead to a negative expected net impact. This is because the preference screwing effect is likely to be strong, while the immediate benefit is small. Children are particularly vulnerable to preference screwing from luxury parenting for four reasons.

Firstly, childhood is an especially formative stage in life. Due to the ongoing development of the brain, the patterns children learn are especially lasting.[5] This means that if preferences are formed to expect a high standard of living, these preferences are going to be especially sticky. If the child’s standard of living drops upon reaching adulthood, those preferences will likely adapt down less quickly and won’t adapt down completely.

Secondly, when children experience certain goods, they often experience them for the first time. If the first time they experience a particular good or service, they are experiencing an expensive variant of that good, they are likely to calibrate their future expectation on this expensive good, because they have no cheaper variants to compare it to.

Thirdly, children generally will have a lesser appreciation of the uniqueness or scarcity of the goods they experience at a high standard of living. In wine gift, Will is acutely aware that his drinking of Château Latour is a unique and temporary experience. This awareness can deter preference adaptation. However, children are less likely to be aware of the fleeting nature of their standard of living, and so are not protected from preference adaptation in this way.

Lastly, the effect is going to be especially strong because the luxury gifts are provided for an extended period of time. If parents provide a luxurious standard of living for multiple years, that gives a very long time for the child’s preferences to be pushed upwards and solidify there.

On the flip side, the immediate utility effect is going to be smaller for children. The satisfaction people receive from luxuries often goes beyond the direct experiential joy of the good or service. There is also the novelty of the experience, the secondary reflective happiness from knowing that you are consuming something special. Children are much less likely to appreciate the novelty of the experience since they are likely, as argued above, to be less aware of the uniqueness of the experience.

In sum, luxury parenting has strongly negative preference screwing effects while offering only limited positive immediate utility. In turn, luxury parenting is likely to have a negative expected net impact on children, meaning that it is often impermissible.


Section 4: Objections

Objection 1: Symmetry Implications

If it is impermissible to give a luxurious standard of life to children, this could imply that it is morally required to give a miserable existence to children instead. If childhood suffering will push preferences down such that children will be happier in the long run, this may be better for the child. This implication would be so clearly unacceptable that it would condemn the whole argument. However, the implications of the model are asymmetrical. This is because children are generally thought to have significant rights, which ought to be respected. They have rights against being physically harmed and to a reasonable standard of living. Parents cannot impose suffering on their children even if it is a net-positive on lifetime wellbeing because this would violate these rights protections.

On the flipside, parents can permissibly withdraw these luxury goods, since children generally are not thought to have a right to luxury living.


Objection 2: Shared Time

One might argue that luxury parenting is permissible because it is necessary for parents to give themselves a high quality of life. Parents are generally thought to be under an obligation to spend quality time with their children because a healthy parental relationship is crucial for the child’s development. This is problematic since many opportunities for quality time are also opportunities for parents to spend money on themselves, such as restaurants, vacations, or entertainment. So, if we think parents should be permitted to spend money on themselves, this could make luxury parenting permissible. There are three responses to this objection.

Firstly, there are still many ways parents can spend on themselves without spending on their children. Parents can spend money on activities without their children, or they can spend money on themselves while shielding their children from the same luxury expenditure, for instance by ordering lobster for themselves and pasta for their child.

Secondly, the magnitude of this sacrifice, being unable to spend on oneself, directly correlates with the level of wealth parents have. This makes the sacrifice a less significant problem because the wealth of parents reduces the required sacrifice of parenting significantly in other contexts. Wealthy parents can afford babysitters, summer camps, and meal boxes. This means that the sacrifice of giving up luxury is balanced out by the diminished sacrifice in other facets of parenting.

Thirdly, parents are routinely asked to make sacrifices for their children in determining how they spend their time. They can only watch child-friendly movies, avoid bars, and go to child-friendly holiday destinations. It’s unclear, for instance, how giving up luxury is materially different from forcing parents to go on vacation to Disneyland.

In sum, a parent’s interest in treating themselves is insufficient for making luxury parenting permissible.


Works Cited:

Bagenstos, Samuel R., and Margo Schlanger. ‘Hedonic Damages, Hedonic Adaptation, and Disability’. Vanderbilt Law Review 60, no. 3 (2007): 745–98.

Brighouse, Harry. ‘What Rights (If Any) Do Children Have’, 1 January 2002. https://doi.org/10.1093/0199242682.003.0003.

Coleman, Joe. ‘Answering Susan: Liberalism, Civic Education, and the Status of Younger Persons’. In The Moral and Political Status of Children, edited by David Archard and Colin M. Macleod. Oxford University Press, 2002. https://doi.org/10.1093/0199242682.003.0009.

Russell, Simon J., Karen Hughes, and Mark A. Bellis. ‘Impact of Childhood Experience and Adult Well-Being on Eating Preferences and Behaviours’. BMJ Open 6, no. 1 (1 January 2016): e007770. https://doi.org/10.1136/bmjopen-2015-007770.


[1] Bagenstos and Schlanger, ‘Hedonic Damages, Hedonic Adaptation, and Disability’.

[2] Bagenstos and Schlanger.

[3] Coleman, ‘Answering Susan: Liberalism, Civic Education, and the Status of Younger Persons’.

[4] Brighouse, ‘What Rights (If Any) Do Children Have’.

[5] Russell, Hughes, and Bellis, ‘Impact of Childhood Experience and Adult Well-Being on Eating Preferences and Behaviours’.

Why Actions Matter: The Case for Fluid Moral Status

By: admin

This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by Lucy Simpson, Nottingham Trent University student


Throughout his catalogue of work, Jeff McMahan has discussed what constitutes a being’s moral status, and has advocated the theories of moral individualism and reflective equilibrium intuitionism.[1] It is not my intention in this paper to dispute these positions. Instead, I argue that if we accept McMahan’s position, then logically we must accept that a being’s moral character is a morally relevant property which we ought to consider when determining their moral status. As I will explain, this means that moral status is not static; it is fluid. Further, in the latter stages of this paper, I consider that if we do accept that moral status is action dependent, then there might be negative moral status. On the topic of negative moral status, I do not aim to give any in-depth arguments either for or against its existence, but rather flag it as a potential avenue for further exploration if we do indeed follow McMahan’s theories of intuitionism and moral individualism.

I argue that accepting that a being’s actions affect its moral status resolves the intuitional conflict felt when we consider so-called marginal cases (i.e., any being that is typically thought to sit on the edge of a moral status threshold).[2]

For the sake of clarity, in this paper I take moral status to mean the following: if a being has moral status, then we have obligations to treat it in certain ways for its own sake. This contrasts with what I label ‘relational status’, which means the following: if a being has relational status, then we have obligations to treat it in certain ways for the sake of some other being (which has moral status).[3] In this paper I exclusively consider the moral status of beings. This moral status is attributed on the basis of the morally relevant intrinsic properties or characteristics a being possesses, and often includes properties such as: possession of consciousness; the ability to suffer; the ability to use complex language or communicate; the ability to be altruistic; and possession of rationality.[4]

I begin by outlining the view that moral status is action dependent if we follow McMahan’s position:


  1. Moral Status and Actions:

McMahan’s position is that there is no single morally relevant intrinsic property that is possessed solely by human beings, nor one that is possessed by all human beings. This means the egalitarian position — that all human beings are of an equal and same level of moral status — cannot be true. Instead, moral status is attributed as a result of an individual being’s possession of some, or all, of these morally relevant intrinsic properties.[5] We can test this intuition through the use of a thought experiment:

There is a burning building, and trapped inside it, in two separate rooms equidistant from the only entry and exit point, are two beings. There is no personal risk in entering the building, and whichever being is not saved has no other means of escape.

Case 1:

In room 1: A statistically normal human being

In room 2: A statistically normal dog

Which being do you save?

Our intuitions in Case 1 show that we ought to save the human being in room 1. Why? Because (to phrase it in the same way as McMahan) the human being in room 1 possesses more morally relevant intrinsic properties and greater time-relative interests, and so will suffer greater harm than the dog.[6]


Case 2:

In room 1: A statistically normal human being

In room 2: A severely cognitively disabled human being

Here, the intuitions become less clear. If we follow McMahan’s intuitionism — and if we exclude any relational value held by either human being — the severely cognitively disabled human being in room 2 possesses fewer morally relevant intrinsic properties than the human being in room 1. However, this leaves one with a sense of discomfort. Consider the following case also:


Case 3:

In room 1: A statistically normal human being who has committed only good acts, and is a morally good person.

In room 2: A statistically normal human being who has committed morally repugnant acts, and is a morally bad person.

Here, our intuitions are less uncomfortable. It seems clear that we ought to save the morally good human being in room 1, over the morally bad human being in room 2. In fact, this is a case that McMahan himself considered:

‘When I ask my students about these cases, they are unanimous in thinking that one ought morally to save the virtuous person and, in the second choice, kill the murderer’. (McMahan, 2007: 102).

The unanimity of the students shows that this intuition – that morally good persons are of higher value – holds. By McMahan’s own subscription to the reflective equilibrium methodology, then, this ought to lead us to revise our belief that moral status is based solely upon the intrinsic properties a being possesses, and accept that our actions do matter. Instead, he makes the following comment: “All this leaves me profoundly uncomfortable.” (McMahan, 2007: 104).

It does, however, seem clear that this intuitional discomfort can be explored further. Consider:

Case 4:

Room 1: A severely cognitively disabled human being

Room 2: A statistically normal human being, who has committed many morally repugnant acts, and is – without a doubt – a morally bad person.

Here we are comparing the value of a severely cognitively disabled human being and a morally abhorrent (but otherwise cognitively normal) human being. In this case the intuitional discomfort is not as present. It seems that we would, ceteris paribus, save the human being in room 1 over the thoroughly morally bad person. This therefore shows (using McMahan’s theoretical framework) that moral status is — in part — action dependent. This raises the following question: if moral status is action dependent, is it also fluid? (I.e., can it change up or down?) This will be explored in the following section.


  2. Fluid Moral Status:


At the beginning of this paper I stated that I would argue that moral status is fluid, and not static as it is currently thought to be. This point requires additional clarification, as I note a possible objection to my statement here: namely, that moral status is not argued to be strictly static by moral individualists. On the view held by McMahan, it is accepted that if a being’s morally relevant intrinsic properties change, then so can their moral status. This means that if I — a conscious, rational being, capable of suffering — am involved in a tragic accident in which I suffer irreparable brain damage, to the point that my ability to be rational or conscious or to suffer is lost completely, then my moral status would be lowered. This I accept.

However, as I will now explain, if moral status is considered to be (at least in part) action dependent, then no such tragic accident need occur in order to change a being’s moral status. Instead it will change fluidly based upon the individual actions a moral agent commits. As case 4 demonstrates, our intuitions show that the morally relevant difference between those two human beings is their moral character. If a being has chosen to commit morally bad acts, then intuitively we see them as being of lesser moral importance. In short: our actions affect our moral status. This amendment to McMahan’s position solves the intuitional conflict we face when considering marginal cases.

We can express this view as the following principle:

Moral Modification Principle (MMP) – A being’s moral status can go up and down depending on their deliberate choice of actions (i.e. whether they commit good or bad acts).

Since we believe a non-moral agent cannot be held accountable for its actions, the MMP naturally excludes the marginal cases I discussed earlier: the severely cognitively disabled, infant children, and non-human animals. After all, we do not blame a cat for killing a mouse according to her instincts, nor do we punish an infant child who, without knowing any better, inflicts deliberate harm on another. As such, the MMP applies only to moral agents, such as you and me.[7]

Fluid moral status does raise an additional consideration: If moral status is fluid, and can go down based upon the morally bad actions a being commits, can a being have a negative moral status? I will consider this briefly in the following section.

 

  1. Negative Moral Status:

I do not have room to explore the view of a negative moral status in depth. It is, however, important to consider. Since moral status is discussed as an entirely positive matter (i.e., if a cat has moral status then we have obligations towards the cat which protect her rights and limit her suffering), the possibility of a negative moral status has (as far as I can discern) not been considered.[8]

Above I have argued that the line of intuitionism used by McMahan allows for the intuition that moral status is action dependent. If the moral status of a moral agent can be reduced by the morally bad actions they commit, then it stands to reason that (potentially) a moral agent could commit so many bad acts that they reduce themselves to a negative moral status. Exactly what these acts would have to be is beyond me, and it is not my argument that any considerable number of moral agents would ever fall below this threshold. But let us presume that such a morally bad being exists and has reduced their moral status to a negative value: what would this mean?

Well, we can reasonably assume that our positive obligations towards this being would cease to exist. We would not be obliged to protect them from harm, to protect their freedoms and rights, or to treat them in ways which avoid their unnecessary suffering. Additionally, we must ask: if we have a negative duty towards them, would we ever have a duty to harm them? On this, I give one final case:

Case 5:

Room 1: A thoroughly morally reprehensible being, who has willingly committed numerous morally abhorrent acts.

Room 2: An empty room. There is nothing else to save.

In this case (5) we have the choice either to save one (morally bad) being or to save nothing at all. In such a case, where a being has lowered their moral status so substantially that it is negative, do we have any obligation to save them even when there is nothing else to prevent us from doing so? May we allow them to suffer, and thereby inflict a non-direct harm on them, because of this moral status level? Here I cannot give answers. However, I am struck that it leaves me with the same “profound uncomfortableness” felt by McMahan. Perhaps we should abandon the framework altogether if this is where our intuitions leave us.

 

  1. Conclusion:

I have argued that if we are to fully develop McMahan’s theory, then we ought to accept that moral status is action dependent. I have argued that this solves the intuitional conflict we face when comparing the moral status held by rational agents and marginal cases. I then argued that if moral status is action dependent, then it is also fluid.

Following this, I introduced the possibility of negative moral status, which, as I have briefly discussed, is an implication of action dependent moral status. It seems plausible that if a moral agent commits numerous moral atrocities then perhaps we have negative obligations towards them, and so ought to abandon the positive obligations of protection that moral status ordinarily affords. Again, I postulate that if this theory of negative moral status violates our intuitions and we cannot revise it using the reflective equilibrium model of intuitionism, then perhaps we ought to abandon the framework altogether; but if we do not, then the profound uncomfortableness felt by McMahan is unjustified.

[1] For McMahan’s work on intuitionism, see: (2013), and for his work on moral individualism see: (2002).

[2] E.g., some non-human animals, the severely cognitively disabled, and infant children.

[3] For greater discussion of the distinction between moral status and relational status, see: DeGrazia (2008).

[4] An extensive discussion of these morally relevant properties is had by Rachels in his (1999) book Created from Animals: The Moral Implications of Darwinism. This is of particular interest as McMahan cites the work of Rachels in his own work, and argues for the same (albeit weaker) position of moral individualism.

[5] I will not discuss in detail egalitarian accounts, due to space constraints. However, for more on this see: Waldron (2008), Scanlon (2000), and Dworkin (1981). While all vary slightly on what they ground the notion of human equality in, they share the characteristic of advocating for the same and equal moral status of all human beings.

[6] By “greater time-relative interests” I mean that the human being can plan for the future and fear for the well-being of its future self.

[7] I am aware that there is more to be said on the cases that are excluded from the MMP, as I believe there is a strong argument that psychopaths and the insane ought to be excluded also (as they are, arguably, non-moral agents too). Additionally, I accept that if we altered a non-human animal so that it possessed the capacities of a moral agent, then the cat, in the above case, would be subject to the MMP.

[8] Moreover, the view held by many is that moral status is an “all or nothing” matter: you either have moral status, or you do not (see Kant (2017) for an account of this). Moral status has also been argued to come in degrees (see DeGrazia, 2008), but on this view the degrees fall at varying points between no moral status and full moral status, without any consideration of a negative moral status.

Do we have an Obligation to Diversify our Media Consumption?

By: admin

This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by James Shearer, University of St Andrews student

  1. Introduction 

In an increasingly politicised society, previously mundane decisions about our daily lives can take on normative qualities. One such question is “what news media should we consume?”. Alex Worsnip suggests that we have an obligation to consume media from across the political landscape. This essay argues against this claim by showing that any obligation to diversify our media consumption in this way would face severe limitations. §2 will consider Worsnip’s argument. §3 will show why we are under no general obligation to diversify our media consumption. Finally, in §4 I consider and respond to potential responses to my position.

  2. Worsnip’s Argument 

Worsnip claims that we are obliged to diversify the news media we consume such that we should read sources from across the political landscape, including those belonging to political positions that we disagree with[1]. Before looking at how Worsnip structures and justifies his argument, I want to clarify the strength of his claim. There is a distinction between an obligation and a mere reason; Worsnip posits that we are under the former. I have a reason to attend office hours for my classes: it would improve my learning. But if I fail to do so, I am not necessarily open to criticism. Perhaps I have a class scheduled at the same time; this would be a stronger reason not to attend office hours. However, if I fail to follow through on an obligation (say, the obligation not to kill), it seems that you automatically have licence to criticise me.

Obligations are not totally general: I might rightly kill an unprovoked attacker to save my own life. Such instances indicate the limits of an obligation. If an obligation can be shown to be so limited that we can freely break it in a wide range of cases, then talk of an obligation becomes less natural. Instead, it starts to look more like we merely have a reason to act. I suggest that Worsnip’s obligation is better thought of as a reason to diversify our media consumption; one that will often be defeated.

Now, let us consider why Worsnip thinks we are obliged to diversify our media consumption. He claims that all publications, regardless of partisan affiliation, are illicitly influenced to a non-trivial degree. Illicit influences are those that lead a publication to report in a non-ideal way, leaving out important details or presenting the details in a way that is misleading, etc.

Worsnip considers two relevant ways that illicit influence might affect reporting. The first is through belief formation. Suppose a reporter believes that NATO should arm Ukraine. An uncomfortable consequence of arming the Ukrainian military is that some civilians will inevitably be killed by the donated arms. The desire to avoid acknowledging this tragic outcome might lead the reporter to wilfully ignore or discredit evidence that shows civilian deaths occurring because of these weapons. If they erroneously discredit the evidence or underreport the deaths, they do so under illicit influence.

The second illicit influence more directly acts on the decision-making process regarding what facts to report. Consider the Ukraine example again, but this time the reporter actively believes that the donated weapons are killing civilians. However, because they still support NATO’s involvement, they do not want to do anything that will lead to public pressure against the intervention. This desire leads them not to report credible evidence of the deaths. Again, illicit influence is at play.

Having defined illicit influence, Worsnip’s argument is as follows:

  1. We should expect all publications to be illicitly influenced to a non-trivial degree.
  2. Illicitly influenced publications are prone to omit important facts and stories.
  3. Reading only news sources from any one side of the political spectrum will result in an incomplete picture of the evidence (From 1,2).
  4. We cannot rely on ourselves to adjust our beliefs to correct for the incompleteness of our evidence.
  5. Reading only news sources from any one side of the political spectrum will result in epistemically non-ideal beliefs (From 3,4).
  6. We are obliged to avoid having epistemically non-ideal beliefs.

Conclusion: We have an obligation to diversify our sources to be from across the political spectrum (From 5,6).

(1) is plausible; given the presumption that all publications have a political leaning, it seems that any given publication is going to have illicit, politically biased influences. We could dig into the differing degrees of influence, but I will grant Worsnip this claim for the purposes of this essay. (2) is a straightforward consequence of illicit influence.

(4) seems an uncontentious claim about our psychology, but it does highlight an important aspect of Worsnip’s theory: it is non-ideal. The distinction between ideal and non-ideal epistemic theories lies in the agents that they apply to. Ideal epistemic theories consider ideal agents. Perhaps ideal agents would be able to adjust their beliefs appropriately once they recognised that their data set was incomplete; Worsnip’s argument would not apply to them. Non-ideal theories try to determine what course of action to recommend to non-ideal agents in light of their failings. That Worsnip is working in the non-ideal theory space will be relevant when we come to the objection and rebuttal. (6) is the premise which I will be indirectly contesting in the rest of this essay: in many cases, we are not obliged to diversify our sources. Given that this creates a contradiction with the argument as stated, I suggest that (6) is misguided. Instead, we generally have a defeasible reason to avoid epistemically non-ideal beliefs by diversifying our media sources. With this in mind we can now turn to the objection.

 

  3. Counter Case – Positive Social Movements 

The claim is this: in many cases, we do something wrong when we follow Worsnip’s advice. We could not be obliged to do something that is often wrong, so I conclude that we are not under a general obligation to diversify our media consumption. I want to examine a case in which many people will be obliged not to diversify their sources on partisan lines. I will use Black Lives Matter (BLM) as an instance of a positive social movement; however, the argument will generalise to any positive social movement that the reader prefers.

We start with the premise that social movements such as BLM can be undermined by publications reporting only true claims. Banks’s 2018 work more fully explores the use of racial grammar, public memory, and framing in delegitimising BLM[2]. Here, I want to focus on framing. In April 2015 Baltimore police killed Freddie Gray, a black man, while he was in their custody. This resulted in a largely peaceful protest, during which a small number of protestors rioted. Media coverage focussed heavily on the protest’s violent aspects, highlighting photos of the rioters over the peaceful protestors and keeping the debate centred on the appropriateness of the rioting, rather than the virtues and necessity of BLM.

By focussing on these factors, publications make salient certain facts that can erroneously undermine our faith in legitimate social movements. Framing BLM in such a way as to link it with violent protest can delegitimise it when non-ideal agents are unable to adjust their beliefs appropriately in light of this framing. Given the politically charged nature of BLM, we can expect partisan media that is aligned against BLM to be illicitly influenced towards framing its reporting in this delegitimising fashion.

That we are non-ideal agents is therefore highly relevant in this situation. We know that we may be susceptible to framing techniques that seek to discredit BLM, and we also know that should we consume media from across the political spectrum, we are highly likely to encounter those techniques. Following Worsnip’s suggested obligation would therefore pose a risk to BLM by making pernicious facts salient to a larger number of people.

If we think that supporting BLM is a moral obligation, then it would follow that we are obliged to avoid putting support for BLM at risk by making ourselves susceptible to discrediting techniques. This would constitute an obligation to avoid diversifying our media consumption in relation to BLM that would apply broadly to all people who are obliged to support BLM. Given this generalises out to all positive social movements that we are obliged to support, we consequently have a broad limit on Worsnip’s obligation. Now recall the distinction drawn between reason and obligation. That the obligation would be so often defeated suggests that we have a reason, not an obligation, to diversify our media intake.

 

  4. Response and Rebuttal 

Before closing, I will consider two potential responses to my argument. First, we will look at a question regarding the extent of the limit I have argued for. We will then consider the importance of the distinction between epistemic and moral obligations.

Worsnip might agree that there are limits on the obligation to diversify our media consumption, but reply that this is not the same as saying there is no obligation as such. It might be that we have a general obligation, but that it does not apply in the cases I have outlined. Perhaps I should not diversify my intake on social issues, but should in other cases, such as on the economy.

I have two issues with this response. First, the non-ideal aspect of the theory makes it difficult to see how actionable the advice to “diversify your media consumption, but only on some issues” really is. Sifting through publications looking for “safe” reports while avoiding the riskier ones is error prone for agents like us. Second, I am not sure how many issues really are both important enough that we are obliged to be informed on them and free of any positive movement that can be undermined via pernicious reporting. It seems to me that for any given important issue, there is some side which I am under a moral obligation to support. For each, the argument regarding BLM will go through[3].

Finally, there is a response based on the distinction between epistemic and moral obligations. Epistemic obligations are derived from requirements of rationality, whereas moral obligations are based on requirements of morality. Worsnip is arguing for an epistemic obligation; it is an obligation based on the premise that we do better epistemically when we diversify our body of evidence. I, on the other hand, have been arguing that there may be countervailing moral obligations, but it is not obvious that we could not be obliged epistemically in one direction but morally in the other.

I will not attempt to offer a full account of the relation between epistemic and moral obligation here. I will instead settle for noting two problems that may emerge should Worsnip rely on this distinction in his defence. First, Worsnip thinks that the obligation to diversify your sources is a case of moral and epistemic obligations “lining up”, but my objection shows that in many cases this is not true[4]. Second, if moral and epistemic obligations do come apart as I have suggested, that would be a severe limit on said epistemic obligations. It seems to me that in these situations we do better to think of ourselves as having an epistemic reason that is defeated by our moral obligations. This would be a rejection of (6) in Worsnip’s argument.

 

  5. Conclusion 

This essay has argued that we have no obligation to diversify our media consumption. We began by looking at Worsnip’s argument to the contrary and his understanding of illicit influence. I then refuted Worsnip’s argument by looking at obligations to support positive social movements. If that argument goes through, then broad limits on the obligation will emerge. We would do better to take ourselves as having a mere reason to diversify our media consumption.

 

Notes:

[1] (Worsnip, 2019)

[2] (Banks, 2018)

[3] Incidentally, this issue is why Worsnip cannot rely on his exception for publications that are “beyond the pale” (Worsnip, 2019, p. 258). Worsnip agrees that we are not obliged to consume publications that push immoral viewpoints. My contention here is that unideal agents who try to diversify will inevitably do exactly that.

[4] (Ibid., p. 243)

 

Bibliography:

Banks, Chloe (2018) Disciplining Black activism: post-racial rhetoric, public memory and decorum in news media framing of the Black Lives Matter movement, Continuum, 32:6, 709-720.

Worsnip, Alex (2019). The Obligation to Diversify One’s Sources: Against Epistemic Partisanship in the Consumption of News Media. In Carl Fox & Joe Saunders (eds.), _Media Ethics: Free Speech and the Requirements of Democracy_. London: Routledge. pp. 240-264.

Announcing the Winners and Runners Up in the 9th Annual National Oxford Uehiro Prize in Practical Ethics

By: admin

Please join us in congratulating all four of the finalists in the National Oxford Uehiro Prize in Practical Ethics 2023, and in particular our winners, Lukas Joosten and Avital Fried. We would also like to thank our judges, Prof Roger Crisp, Prof Edward Harcourt and Dr Sarah Raskoff.

This, the final of the 9th Annual National Oxford Uehiro Prize in Practical Ethics, was held on the 14th March in the lecture theatre of the Faculty of Philosophy, as well as online. During the final the four finalists presented their papers and ideas to an audience and responded to a short Q&A as the deciding round in the competition. A selection of the winning essays and honourable mentions will be published on this blog.

Undergraduate Category:

Lukas Joosten presenting his paper at the prize

Winner: Lukas Joosten, “Turning up the Hedonic Treadmill: Is it Morally Impermissible for Parents to Give Their Children a Luxurious Standard of Living?”

Chase Mizzell presenting

Runner Up: Chase Mizzell, “Against Using AI to Influence Our Future Selves in Ways That Bypass or Subvert Rationality”

Honourable Mentions: James French, “How can we address the gender gap in anaesthesia and the wider medical workplace?”

Leah O’Grady, “What is wrong with stating slurs?”

Tanae Rao, “Why the Responsibility Gap is Not a Compelling Objection to Lethal Autonomous Weapons”

Maria Rotaru, “Causal links and duties to past, present, and future generations: why and to whom do the affluent have moral obligations?”

Graduate Category:

Avital Fried the winner of the graduate category

Winner: Avital Fried, “Criminal Confessions and Content-Sensitive Testimonial Injustice”

Runner Up: Leora Urim Sung, “Should I Give or Save?”

Honourable Mentions:

Leora Sung presenting her paper

Samuel Iglesias, “Ethical Biological Naturalism and the Case Against Moral Status for AIs”

Thomas Long, “The Ambiguous Ethicality of Applause: Ethnography’s Uncomfortable Challenge to the Ethical Subject”

Pablo Neira, “Why Preventing Predation Can Be a Morally Right Cause for Effective Altruism?”

Kyle van Oosterum, “How Confucian Harmony Can Help Us Deal With Echo Chambers”

Trenton Andrew Sewell, “Should Social Media Companies Use Artificial Intelligence to Automate Content Moderation on their Platforms and, if so, Under What Conditions?”

James Shearer, “Do we have an Obligation to Diversify our Media Consumption?”

Lucy Simpson, “Why Our Actions Matter: The Case for Fluid Moral Status.”

 

Should Social Media Companies Use Artificial Intelligence to Automate Content Moderation on their Platforms and, if so, Under What Conditions?

By: admin

This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by University of Oxford student Trenton Andrew Sewell 

Social Media Companies (SMCs) should use artificial intelligence (‘AI’) to automate content moderation (‘CM’) presuming they meet two kinds of conditions. Firstly, ‘End Conditions’ (‘ECs’) which restrict what content is moderated. Secondly, ‘Means Conditions’ (‘MCs’) which restrict how moderation occurs.

This essay focuses on MCs. Assuming some form of moderation is permissible, I will discuss how and whether SMCs should use AI to moderate. To this end, I outline how CM AI should respect users’ ‘moral agency’ (‘MA’) through transparency, clarity, and the provision of an option to appeal the AI’s judgment. I then address whether an AI’s failing to respect MA proscribes its use. It does not. SMCs are permitted[1] to use AI, despite procedural failures, to discharge substantive obligations to users and owners.

This essay will demonstrate that:
1) Respect for users’ MA entails SMCs should use AI in a:
a. Transparent, reason-giving way,
b. Based on clear rules and,
c. With an option for appeal.
2) But failing to meet these standards does not proscribe using AI. It is a necessary means of discharging important obligations.

Ideal CM AI
People have rights we should respect. This claim is the basis of this essay. However, rights include substantive rights, such as the right to expression. Here, I am presuming that any moderated content is a legitimate target. Hence, moderating this content simpliciter does not violate users’ rights, because SMCs could permissibly moderate the post/user.

The question that remains is what ‘procedural-rights’ users possess. How should SMCs respect users whilst moderating? Here, I address the procedural-rights users have because of their ‘moral agency’ (‘MA’).

MA is the capacity of an agent to understand moral reasons. Respecting the dignity of a person involves treating them as a moral-agent[2]. This requires engagement in moral reasoning[3]. Moral reasoning is the process of giving reasons concerning the justification of an act. Engagement in moral reasoning acknowledges one’s MA and dignity – a basic Kantian requirement[4].

Applying MA to Moderation
Moderation is akin to punishment. H.L.A. Hart defined punishment “in terms of five elements:
1. …consequences…considered unpleasant.
2. …for an offence against…rules.
3. …of an…offender for his offence.
4. …intentionally administered by [another] and
5. …administered by an authority constituted by the…system.”[5]

Moderation removes posts and restricts access to platform features, which is unpleasant. It occurs to ‘offenders’ for breaching the community guidelines. It is intentionally administered by SMCs, which have constituted the authorities that impose moderation. It satisfies Hart’s five elements.

If moderation is punishment, then respecting MA in the process of moderation will be similar to respecting MA in the process of criminal punishment. That involves giving reasons why the act/offence was wrongful, and why the response to the act/offence was just.[6]

Hence, SMCs respect users’ MA whilst moderating if they:
1) Provide moral reasons to users why they ought not post certain content, and
2) Provide moral reasons to users why they are moderating.[7]

SMCs should give users reasons why the guidelines were violated, and why moderation was the right response. CM AI must be, in other words, transparent[8].

Respect for MA requires more than granting reasons. It requires the option of appealing an AI’s judgement to a human moderator.

“Penalizing someone for violating the rules…reasserts our shared values…calling something hate speech…is a….performative assertion that something should be treated as hate speech and…undoubtedly, it will be disagreed with” [9].

Users should be free to question whether such an assertion is an accurate representation of the guidelines. A moral-agent is also a giver, not merely a receiver, of reasons. To engage in the moral reasoning which respects one’s MA, SMCs should give users the option to justify their post.

Furthermore, to respect users, AI should use rules which are prospectively clear. Respecting people as moral-agents is to regard them as able to follow rules they are aware of[10]. Part of what legitimizes punishment is that the user could have complied with the rule.

To respect users as moral-agents, AI should facilitate users’ compliance with rules. CM AI should be:
i) Based on rules;
ii) Which are published;
iii) Prospective;
iv) Intelligible;
v) Free from contradiction;
vi) Possible to follow;
vii) Not constantly changing; and
viii) With congruence between the rule and official actions.[11]

If CM AI satisfies these eight principles, then it respects users by recognizing their MA and furthermore, providing rational freedom.

Moral-agents should not face ‘bolts-from-the-blue’. Their freedom should not be dependent on an AI’s whims. The guidelines that the AI follows should allow users to know whether they are in, and to avoid, non-compliance.

This prospective clarity enhances the morality of CM AI by providing ‘freedom from domination’:

“[freedom] is not…the availability of…choices. It is conceivable that a free man might have fewer options…than a slave…[But] we think of slavery as the…embodiment of unfreedom…because…the conditions under which he enjoys…options…are…dependant upon the will of the master.”[12]

Clear rules liberate one from dependence/domination. A user’s freedom is not dependent on the SMC but rather on the rules, which equally constrain moderators.

But why accept that ‘punishment’ by SMCs should respect moral agency? State punishment of crimes might need to – but why content moderation?

Because all should respect each other as moral agents; to do otherwise is to disrespect our dignity. Insofar as moral agency is only consistent with certain procedures of punishment by the state, I see no reason why (as an ideal matter) it would impose fundamentally different requirements on punishment by family, friends, strangers, or, crucially here, SMCs.

In summary:
Moderation is punishment. To respect MA whilst punishing, SMCs must use transparent AI which gives users reasons justifying the SMC’s response. Furthermore, respecting MA requires that AI decisions are appealable to a human moderator; this provides the opportunity for moral discourse, which further respects MA. Lastly, respecting MA requires that the rules the AI applies allow the user to prospectively avoid non-compliance.

Unideal AI?
Whilst the prior section explored how CM AI can respect users’ MA, it neglected two questions. Does CM AI currently respect MA? If it does not, should SMCs continue to use AI which violates procedural rights?

The answer to the first question is no. “A common critique of automated decision making is the…lack of transparency…[It is]…difficult to decipher…the specific criteria by which…decisions were made”[13]. Furthermore, systems of appeal, such as Facebook’s “Supreme Court”, are available to very few users[14]. Finally, users report not knowing when they will be moderated, leading to confusion and anger[15].

The answer to the second question – should SMCs use unideal AI – is complicated.

One could answer: if MA should be respected, then SMCs are not at liberty to use CM AI unless it respects users’ MA. In short, if CM AI is not transparent, appealable, and prospectively clear, it should not be used.

This view is flawed because SMCs do not only have process obligations. They have substantive obligations to their users and owners.

For their users, SMCs could be obligated to prevent the spread of toxic content, terrorist propaganda, or child exploitation; to do otherwise is to become complicit. Christopher Bennett explained this complicity, and its corollary obligations, as resulting from ‘normative control’: “control over whether the…act is done with permission or not”[16]. The wrong done by “a car owner who permits another to engage in reckless… [driving]…[is]…that the owner could and should have…withdrawn his consent”[17]. SMCs can, through moderation, determine whether an act is ‘permissible or impermissible’: “[W]herever [SMCs] [do] not mark some act as impermissible, it regards it as permissible…It can be complicit in allowing…acts to be permissible where it should have made them impermissible…complicity…comes about through a failure to [moderate]”[18]. SMCs thus have an obligation to their users to moderate content (the scope of which is a matter for later investigation).

Furthermore, SMCs have shareholders/investors. “A corporate executive is an employee of the owners of the business. He has direct responsibility to his employers. That responsibility is to conduct the business in accordance with their desires, which generally will be to make…money”[19]. When an agent is managing money belonging to another, we traditionally accept that she is obliged to act with regard for the principal’s interests. Those same obligations bind all SMCs, barring those which are owner-operated[20].

These substantive obligations answer whether SMCs should use imperfect AI, because using CM AI is crucial for discharging these duties. Even if AI is imperfect, SMCs are obliged to use it for CM: CM AI is needed to meet SMCs’ obligations to their different stakeholders.

Given SMCs’ general size, CM requires AI. Yann LeCun – Facebook’s chief AI scientist – has stated that “[w]ithout AI there would not be any possibility of…speech filtering, detecting harassment, child exploitation, or terrorist propaganda”[21]. To adequately meet their substantive obligations not to be complicit in certain harmful conduct, SMCs need to use AI.

A potential response is that it is “size…that makes automation seem necessary… [and]…size can be changed”[22]. Specifically, “if moderation is…overwhelming at…scale, it should be understood as a limiting factor on…growth”[23]. SMCs should accept making less profit to reduce the need for CM AI.

However, this neglects their obligations to owners. Even if SMCs could make moderation respect users’ MA by setting growth aside, they would breach their fiduciary obligations to owners. Furthermore, SMCs are under public pressure to moderate. Not moderating could harm their brand, their ability to recruit talent, and so on. Moderation is therefore likely in owners’ interests.

Not using CM AI would result in SMCs failing their substantive obligations to their users, their owners, or (more likely) both. Yet one could say that if a ‘right’ to be recognized as a moral agent exists, SMCs should not violate it: procedural rights are side constraints that require not using imperfect AI. What this neglects is that X being a right does not mean it is of equal importance to right Y. If all obligations cannot be simultaneously met, then choices must be made about which obligations go unfulfilled.

I would contend that procedural rights in CM are among SMCs’ least important obligations. Users who have posted content eligible for moderation are the reason a trade-off of rights is necessary. If they had not done wrong, the SMC would not need to decide whether to respect their procedural rights or the substantive rights of its users or owners. If a set amount of cost must be imposed, it seems appropriate to apply that cost to the individual most responsible – the user being moderated[24]. Since not using CM AI would cause SMCs to fail their substantive obligations, and these obligations are more important, procedural obligations must give way. Human moderation is not feasible at this scale, and imperfect CM AI is preferable to no moderation at all. SMCs should use AI because it discharges their more important duties. Nevertheless, insofar as SMCs can improve their CM AI to bring it closer to the ideal, they are obliged to do so: they should work towards the ideal without letting it become the enemy of the good or the necessary.

Conclusion
Social media companies should use artificial intelligence to automate content moderation. The use of this technology is needed to meet SMCs’ substantive obligations to their users and owners, which means that the conditions under which it should be used are broad. Even if AI moderation does not respect users’ moral agency, it should still be used. Nevertheless, where possible, SMCs should work to bring their AI moderation more in line with an ideal of respect. This ideal AI content moderation would be transparent (capable of giving users the reasons underpinning a moderation decision), with an option to appeal to a human moderator (in recognition of the two-sided nature of moral reasoning). Furthermore, the AI should operate on clear, prospective, and reasonably predictable rules, such that users enjoy freedom from domination and are spared moderation arriving like a ‘bolt from the blue’.

AI moderation is a necessity for SMCs. They should use it to meet their substantive obligations whilst striving for the procedural ideal.


Notes:

[1] Perhaps obliged.

[2] (Strawson, 1962).

[3] (Hirsch, 1993).

[4] (Jacobs, 2019, p. 29) (Seelmann, 2014).

[5] (Hart, 2008, pp. 5-6).

[6] (Edwards & Simester, 2014) (von Hirsch A. , 1992).

[7] (Edwards & Simester, 2014, p. 64).

[8] (Suzor et al., 2019).

[9] (Gillespie, 2020, p. 3).

[10] (von Hirsch & Hörnle, 1995).

[11] (Fuller, 1969, p. 39) (Simmonds, 2007, p. 64).

[12] (Simmonds, 2007, p. 101).

[13] (Gorwa et al., 2020, p. 11) (Burrell, 2016).

[14] (Kelion, 2020).

[15] (West, 2018).

[16] (Bennett, 2019, pp. 78-81).

[17] Ibid (p. 81).

[18] Ibid.

[19] (Friedman, 1970).

[20] There is thus an interesting question about how these obligations apply to Twitter after Elon Musk’s takeover.

[21] (LeCun, 2020).

[22] (Gillespie, 2020, p. 4).

[23] Ibid.

[24] (McMahan, 2005) (Øverland, 2014).

Works Cited
Bennett, C. (2019). How Should We Argue for a Censure Theory of Punishment? In A. du Bois-Pedain, & A. Bottoms, Penal Censure (pp. 67-86). Hart Publishing.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 1-16.
Cohen-Almagor, R. (2015). Confronting the internet’s dark side: moral and social responsibility on the free highway. Cambridge: Cambridge University Press.
Edwards, J., & Simester, A. (2014). Prevention with a Moral Voice. In A. Simester, A. Du Bois-Pedain, & U. Neumann, Liberal Criminal Theory (pp. 43-65). Hart Publishing.
Friedman, M. (1970, September 13). The Social Responsibility of Business Is to Increase Its Profits. New York Times.
Fuller, L. (1969). The Morality of Law. New Haven: Yale University Press.
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 1-5.
Gorwa, R., et al. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 1-15.
Günther, K. (2014). Crime and Punishment as Communication. In A. du Bois-Pedain, A. Simester, & U. Neumann, Liberal Criminal Theory (pp. 123-140). Hart Publishing.
Hart, H. (2008). Punishment and Responsibility: Essays in the Philosophy of Law. Oxford University Press.
Hirsch, A. v. (1993). Censure and Sanctions. Oxford University Press.
Jacobs, J. (2019). Censure, Sanction and the Moral Psychology of Resentment. In A. du Bois-Pedain, & A. Bottoms, Penal Censure (pp. 19-40). Hart Publishing.
Kelion, L. (2020, September 24). Facebook ‘Supreme Court’ to begin work before US Presidential vote. Retrieved from BBC: https://www.bbc.co.uk/news/technology-54278788
LeCun, Y. (2020, June). Deep learning, neural networks and the future of AI. (C. Anderson, Interviewer)
McMahan, J. (2005). Self-Defense and Culpability. Law and Philosophy, 751–774.
Øverland, G. (2014). Moral Obstacles: An Alternative to the Doctrine of Double Effect. Ethics, 481-506.
Seelmann, K. (2014). Does Punishment Honour the Offender? In A. Du Bois-Pedain, A. Simister, & U. Neumann, Liberal Criminal Theory (pp. 111-121). Hart Publishing.
Simmonds, N. (2007). Law as a Moral Idea. Oxford: Oxford University Press.
Strawson, P. (1962). Freedom and Resentment. Retrieved from UCL: https://www.ucl.ac.uk/~uctytho/dfwstrawson1.htm
Suzor, N. P., et al. (2019). What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation. International Journal of Communication, 1526-1543.
von Hirsch, A. (1992). Proportionality in the Philosophy of Punishment. Crime and Justice, 16, 55-98.
von Hirsch, A., & Hörnle, T. (1995). Positive Generalprävention und Tadel. Goltdammer’s Archiv für Strafrecht, 142.
West, S. M. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media and Society, 4366-4383.

Cross Post: Why Government Budgets are Exercises in Distributing Life and Death as Much as Fiscal Calculations

By: admin

Written by Hazem Zohny, University of Oxford

Sacrificial dilemmas are popular among philosophers. Should you divert a train from five people strapped to the tracks to a side-track with only one person strapped to it? What if that one person were a renowned cancer researcher? What if there were only a 70% chance the five people would die?

These questions sound like they have nothing to do with a government budget. These annual events are, after all, conveyed as an endeavour in accounting. They are a chance to show anticipated tax revenues and propose public spending. We are told the name of the game is “fiscal responsibility” and the goal is stimulating “economic growth”. Never do we talk of budgets in terms of sacrificing some lives to save others.

In reality, though, government budgets are a lot like those trains, in philosophical terms. Whether explicitly intended or not, some of us take those trains to better or similar destinations, and some of us will be left strapped to the tracks. That is because the real business of budgets is in distributing death and life. They are exercises in allocating misery and happiness.

Take the austerity policies introduced by the UK government in 2010 and the following years. Studies put the mortality cost of spending cuts between 2010 and 2014 at approximately 150,000 excess deaths. A more recent study suggested that 335,000 excess deaths could be attributed to the austerity of the 2010s.

These are contestable figures – and they cannot establish a direct causal relationship between specific austerity policies and the number of deaths. But even if it is impossible to assign an accurate death toll to austerity, it should come as no surprise that reducing welfare spending will reduce, well, welfare. The same is true in reverse. An England-focused study suggests that increasing healthcare expenditure by 1% could have avoided around 300,000 deaths in the wake of the 2010 cuts.

This has a sobering implication: knowingly or not, a decision is effectively made to let 300,000 die if healthcare expenditure isn’t increased by 1%.

Similarly, there is a clear link between fuel poverty and premature deaths. It is difficult to derive a precise figure, but it’s almost certain that a government decision not to further subsidise energy bills will trigger otherwise preventable deaths.

There is a truism in all this: resources are limited, and decisions inevitably come with trade-offs and opportunity costs.

For instance, austerity was a response to a severe economic downturn. Recessions reduce how much revenue a government has to spend on services, which leads to the loss of lives and livelihoods. Even for those who don’t suffer in the extreme, life is generally less liveable for many of us. For all we know, the global financial crisis would have (eventually) led to tens of thousands of additional deaths no matter how the government responded.

The values smuggled into budgets

Budgets are of course hugely complicated, but this is all to show that while it is true that they involve accounting, the morally relevant currency of what is being accounted for is ultimately our wellbeing, including its loss through death.

For instance, assume it is indeed true that 300,000 deaths could be prevented by a 1% increase in healthcare spending. Assume also that using these funds in some other way (such as on education and home insulation) would in the long term prevent far more deaths, or, significantly improve the quality of several million lives.

Suddenly, talk of fiscal responsibility seems to miss the point. We are faced with a much more philosophically loaded debate. It becomes pertinent to say, hang on, just how much extra wellbeing for some do you think can make up for the deaths of 300,000 others? And how sure are you about this?

It starts to become obvious how value judgments infuse and implicitly guide these fiscal decisions that, unwittingly or not, involve weighing up certain lives against others, present lives against future ones, and proxies for wellbeing (such as job creation) against others (such as preventing premature deaths).

The risk of the language of “boosting the economy” is that it gives these monumental decisions in ethics a veneer of being value-free budgetary exercises in “following the evidence”, stopping us from seeing how the economic sausage is actually made.

Of course, evidence-based policy is better than unevidenced policy. What is missing is values-transparent policy, and that starts with the philosophical task of laying bare the precise value judgments that go into constructing what we’re told are “good economic outcomes”.

While the Office of Budget Responsibility independently assesses the economic credibility of the budget, no corresponding institution works to uncover its ethical assumptions and value trade-offs. Welfare economists and ethicists need to forge a new relationship that initiates and guides the uncomfortable public conversation of how government budgets inevitably trade-off lives and livelihoods, now and against the future.

Equally crucial, by instituting norms that encourage uncovering all the value-judgments smuggled beneath the guise of sanitised fiscal and economic talk, we might reduce the chances of opportunistic politicians gambling with millions of livelihoods by redirecting the train in the name of one ideology or another.

Hazem Zohny, Research Fellow in Practical Ethics, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.
