
Stay Clear of the Door

An AI door, according to a generative AI

Written by David Lyreskog 

 

In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma. 

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.

The other day a colleague of mine shared a YouTube video of the presentation The AI Dilemma, by Tristan Harris and Aza Raskin. In it, they share with the audience their concerns about the rapid and somewhat wild development of artificial intelligence (AI) in the hands of a few tech giants. I highly recommend it. (The video, that is. Not the rapid and somewhat wild development of AI in the hands of a few tech giants).

 

Much like the thousands of signatories of the March open call to “pause giant AI experiments”, and recently the “Godfather of AI” Geoffrey Hinton, Harris and Raskin warn us that we are on the brink of major (negative, dangerous) social disruption due to the power of new AI technologies.

 

Indeed, there has been a bit of a public buzz about “AI ethics” in recent months.

 

While it is good that there is a general awareness and a public discussion about AI – or any majorly disruptive phenomenon for that matter – there’s a potential problem with the abstraction: AI is portrayed as this big, emerging, technological behemoth which we cannot or will not control. But it has been almost three decades since an AI was first able to beat the human world champion at a game of chess. We have been using AI for many things, from medical diagnosis to climate prediction, with little to no concern about it besting us and/or stripping us of agency in these domains. In other domains, such as driving cars, and military applications of drones, there has been significantly more controversy.

All this is just to say that AI ethics is not for hedgehogs – it’s not “one big thing”[i] – and I believe that we need to actively avoid a narrative and a line of thinking which paints it to be. In examining the ethical dimensions of a multitude of AI inventions, then, we ought to take care to limit the scope of our inquiry to the domain in question at the very least.

 

So let us, for argument’s sake, return to that door at the Uehiro Centre, and the voice cautioning visitors to stay clear. Now, as far as I’m aware, the voice and the door are not part of an AI system. I also believe that there is no person who is tasked with waiting around for visitors asking for access, warning them of the impending door swing, and then manually opening the door. I believe it is a quite simple contraption, with a voice recording programmed to be played as the door opens. But does it make a difference to me, or other visitors, which of these possibilities is true?

 

We can call these possibilities:

Condition one (C1): AI door, created by humans.

Condition two (C2): Human speaker & door operator.

Condition three (C3): Automatic door & speaker, programmed by humans.

 

In C3, it seems that the outcome of the visitor’s action will always be the same after the buzzer is pushed or the key card is blipped: the voice will automatically say ‘stay clear of the door’, and the door will open. In C1 and C2, the same could be the case. But it could also be the case that the AI/human has been instructed to assess the risk for visitors on a case-by-case basis, and to only advise caution if there is imminent risk of collision or such (if this were the case, I would consistently be standing too close to the door when visiting, but that is beside the point).

 

On the surface, I think there are some key differences between these conditions which could have an ethical or moral impact, where some differences are more interesting than others. In C1 and C2, the door opener makes a real-time assessment, rather than following a predetermined course of action in the way C3’s door opener does. More importantly, C2 is presumed to make this assessment from a place of concern, in a way which is impossible in C1 and C3 because the latter two are not moral agents, and therefore cannot be concerned. They simply do not have the capacity. And our inquiry could perhaps end here.

But it seems it would be a mistake.

 

What if something were to go wrong? Say the door swings open, but no voice warns me to stay clear, and so the door whacks me in the face[ii]. In C2, it seems the human whose job it is to warn me of the imminent danger might have done something morally wrong, assuming they knew what to expect from opening the door without warning me, but failed to do so due to negligence[iii]. In C1 and C3, on the other hand, while we may be upset with the door opener(s), we don’t believe that they did anything morally wrong – they just malfunctioned.

 

My colleague Alberto Giubilini highlighted the tensions in the morality of this landscape in what I thought was an excellent piece arguing that “It is not about AI, it is about humans”: we cannot trust AI, because trust is a relationship between moral agents, and AI does not (yet) have the capacity for moral agency and responsibility. We can, however, rely on AI to behave in a certain way (whether we should is a separate issue).

 

Similarly, while we may believe that a human should show concern for their fellow person, we should not expect the same from AIs, because they cannot be concerned.

 

Yet, if the automatic doors continue to whack visitors in the face, we may start feeling that someone should be responsible for this – not only legally, but morally: someone has a moral duty to ensure these doors are safe to pass through, right?

 

In doing so, we expand the field of inquiry, from the door opener to the programmer/constructor of the door opener, and perhaps to someone in charge of maintenance.

 

A couple of things pop to mind here.

 

First, when we find no immediate moral agent to hold responsible for a harmful event, we may expand the search field until we find one. That search seems to me to follow a systematic structure: if the door is automatic, we call the support line; if the support fails to fix the problem but turns out to be an AI itself, we turn to whoever is in charge of support, and so on, until we find a moral agent.

 

Second, it seems to me that, if the door keeps slamming into visitors’ faces in C2, we will not only morally blame the door operator, but also whoever left them in charge of that door. So perhaps the systems-thinking does not only apply when there is a lack of moral agents, but also applies on a more general level when we are de facto dealing with complicated and/or complex systems of agents.

 

Third, let us conjure a condition four (C4) like so: the door is automatic, but in charge of maintenance support is an AI system that is usually very reliable, and in charge of the AI support system, in turn, is a (human) person.

 

If the person in charge of an AI support system that failed to provide adequate service to a faulty automatic door is to blame for anything, it is plausibly for not adequately maintaining the AI support system – but not for whacking people in the face with a door (because they didn’t do that). Yet, perhaps there is some form of moral responsibility for the face-whacking to be found within the system as a whole. That is, the compound of door, AI, human, etc. has a moral duty to avoid face-whacking, regardless of any individual moral agent’s ability to whack faces.

 

If this is correct, it seems to me that we again[iv] find that our traditional means of ascribing moral responsibility fails to capture key aspects of moral life: it is not the case that any agent is individually morally responsible for the face-whacking door, nor are there multiple agents who are individually or collectively responsible for the face-whacking door. Yet, there seems to be moral responsibility for face-whacking doors in the system. Where does it come from, and what is its nature and structure (if it has one)?

 

In this way, it seems that not only cognitive processes such as thinking and computing can be distributed throughout systems, but perhaps also moral capacities such as concern, accountability, and responsibility.

And in the end, I do not know to what extent it actually matters, at least in this specific domain. Because at the end of the day, I do not care much whether the door opener is human, an AI, or automatic.

 

I just need to know whether or not I need to stay clear of the door.

Notes & References.

[i] Berlin, I. (2013). The hedgehog and the fox: An essay on Tolstoy’s view of history. Princeton University Press.

[ii] I would like to emphasize that this is a completely hypothetical case, and that I take it to be safe to enter the Uehiro centre. The risk of face-whacking is, in my experience, minimal.

[iii] Let’s give them the benefit of the doubt here, and assume it wasn’t maleficence.

[iv] Together with Hazem Zohny, Julian Savulescu, and Ilina Singh, I have previously argued this to be the case in the domain of emerging technologies for collective thinking and decision-making, such as brain-to-brain interfaces. See the Open Access paper Merging Minds for more on this argument.

How Brain-to-Brain Interfaces Will Make Things Difficult for Us

Written by David Lyreskog

Four images depicting ‘Hivemind Brain-Computer Interfaces’, as imagined by the AI art generator Midjourney.


 

A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing, and hybrid and artificial intelligence, promises to transform how we think about collective and collaborative cognitive tasks. With implementations ranging from research to entertainment, and from therapeutics to military applications, as these tools continue to improve, we need to anticipate and monitor their impacts – how they may affect our society, but also how they may reshape our fundamental understanding of agency, responsibility, and other concepts which ground our moral landscapes.

In a new paper, I, together with Dr. Hazem Zohny, Prof. Julian Savulescu, and Prof. Ilina Singh, show how these new technologies may reshape fundamental components of widely accepted concepts pertaining to moral behaviour. The paper, titled ‘Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds’, was just published in Neuroethics, and is freely available as an Open Access article through the link above.

In the paper, we argue that the received views on how we (should) ascribe responsibility to individuals and collectives map poorly onto networks of these ‘Collective Minds’. The intimately collective nature of direct multiple-brain interfaces, for instance, where human minds can collaborate on and complete complex tasks without necessarily being in the same room – or even on the same continent! – seems to suggest a collectivist moral framework to ascribe agency and responsibility. However, the technologies we are seeing in R&D do not necessitate the meeting of criteria we normally would turn to for ascription of such frameworks; they do not, for instance, seem to require that participants have shared goals, know what the goals of other participants are, or even know whether they are collaborating with another person or a computer.

In anticipating and assessing the ethical impacts of Collective Minds, we propose that we move beyond binary approaches to thinking about agency and responsibility (i.e. that they are either individual or collective), and that relevant frameworks for now focus on other aspects of significance to ethical analysis, such as (a) technical specifications of the Collective Mind, (b) the domain in which the technology is deployed, and (c) the reversibility of its physical and mental impacts. However, in the future, we will arguably need to find other ways to assess agency constellations and responsibility distribution, lest we abandon these concepts completely in this domain.

Oxford Uehiro Prize in Practical Ethics: Turning up the Hedonic Treadmill: Is It Morally Impermissible for Parents to Give Their Children a Luxurious Standard of Living?

By: admin

This essay was the overall winner in the Undergraduate Category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by University of Oxford student, Lukas Joosten

Most parents think they are helping their children when they give them a very high standard of life. This essay argues that giving luxuries to your children can, in fact, be morally impermissible. The core of my argument is that when parents give their children a luxurious standard of life, they foist an expectation for a higher standard of living upon their children, reducing their lifetime wellbeing if they cannot afford this standard in adulthood.

I argue for this conclusion in four steps. Firstly, I discuss how one can harm someone by changing their preferences. Secondly, I develop a model for the general permissibility of gift giving in the context of adaptive preferences. Thirdly, I apply this to the case of parental giving, arguing it is uniquely problematic. Lastly, I respond to a series of objections to the main argument.  

I call the practice in question ‘luxury parenting’. Luxury parenting consists of providing certain luxuries to your child which go beyond a reasonably good standard of living. I will consider this through a framework of gift giving, since luxury parenting can be understood as the continual gifting of certain luxuries to children. While my argument also applies to singular gifts of luxury to children, it is targeted at the continual provision of luxury goods and services to ensure a high standard of living throughout childhood.

 

Section 1: Preference Screwing

When we discuss harming one’s wellbeing, we are usually referring to taking some action which changes the actor’s situation so that they are further from their preferences. However, a person’s wellbeing can be harmed in the opposite way as well, by changing their preferences away from their situation. Consider the following example.

Wine pill: Bob secretly administers a pill to Will which changes his preferences so that he no longer enjoys cheap wine.

Will has been harmed here in some morally significant way without having received any immediate disbenefit. The harm consists in the effect on future preferences. We can call this type of harming “preference screwing”.

Preference Screwing: Making it more difficult for an actor to achieve a certain level of utility by changing the actor’s preferences so that there is a larger divergence between the preference set and the actor’s option set.

 

Section 2: Adaptive Preferences and Gift-giving

The theory of adaptive preferences tells us that people tend to return to their baseline happiness after positive or negative shocks to their wellbeing because people’s preferences adapt to their current situation. I argue this process of preference adaptation implies that some instances of gift-giving are impermissible, because consuming a high-quality gift screws with the preferences of the recipient, so that they derive lower utility from future consumption of lower-quality variants of the good they were gifted.

There exists a vast literature debating the accuracy of adaptive preferences.[1] However, my argument only requires a weak, restricted form of adaptive preferences. Namely, it simply says that there is some negative impact of consuming expensive goods on the enjoyment of future cheap goods. That such an impact exists is generally empirically supported, even if the strength of the impact is debatable.[2]

It might be objected that if preferences are adaptive, then gift-giving has no long-term harm since, upon returning to the lower-quality good, preferences will adapt downward immediately. There are two independent reasons why this is not a problem for my argument.

Firstly, I don’t assume (and the empirics don’t support) complete adaptation, only partial adaptation. This means that once the preferences of an actor have (partially) adapted up after consuming the higher-quality good, then if the actor returns to the lower-quality good, their preferences will adapt down but not completely, so there remains a long-lasting upwards pressure on their preferences.

Secondly, as discussed in section 3, since childhood is a formative life-phase, preferences adapt more quickly and more permanently for children. Luxury parenting thus fixes children’s preferences at a high point, which will take much longer to adapt back down in adulthood.

This allows us to develop a model of gift giving. When A gifts X to B, B’s lifetime wellbeing is affected in two ways. Firstly, there is the immediate positive (or negative, if a particularly poor gift) utility derived by B’s consumption of X. Call this the immediate utility. Secondly, there is the long-term impact of the gift’s preference screwing. The preference screwing effect is the total harm to the lifetime wellbeing of B incurred by B as a result of the preference screwing caused by consuming X. This allows us to state the following:

Net wellbeing impact of gift giving = immediate utility – preference screwing effect
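The model above is simple enough to state as a one-line calculation. As a purely illustrative sketch, with invented utility numbers (nothing below is drawn from the essay itself):

```python
# Hypothetical sketch of the essay's gift-giving model.
# All utility values are made up for illustration only.

def net_wellbeing_impact(immediate_utility: float,
                         preference_screwing_effect: float) -> float:
    """Net wellbeing impact = immediate utility - preference screwing effect."""
    return immediate_utility - preference_screwing_effect

# Wine gift: a large one-off spike in utility, modest long-term cost.
wine_gift = net_wellbeing_impact(immediate_utility=10.0,
                                 preference_screwing_effect=4.0)

# Luxury parenting: smaller immediate benefit, but strong and lasting
# preference screwing (formative years, repeated exposure).
luxury_parenting = net_wellbeing_impact(immediate_utility=3.0,
                                        preference_screwing_effect=8.0)

print(wine_gift)          # positive net impact
print(luxury_parenting)   # negative net impact
```

On these (invented) numbers, the wine gift comes out net positive while luxury parenting comes out net negative, which is the asymmetry the essay goes on to argue for in Section 3.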

Now, consider that preference screwing through gift-giving is usually not considered a form of wronging. Consider the following example:

Wine gift: Bob gifts a bottle of Château Latour to Will for his birthday. After thoroughly enjoying the wine, Will no longer enjoys cheap wines as much.

In wine gift, we would not say that Bob has wronged Will. There are two distinctions between wine gift and wine pill which explain why gift giving to adults is generally permissible.

Firstly, wine gift is not necessarily a net negative for Will’s lifetime utility. The spike in utility of drinking the gifted bottle may outweigh the loss in utility from the future discounted happiness of drinking cheap wines. In wine pill, there is only a negative impact on Will’s utility (ignoring the health effects).

Secondly, and crucially, Will consents to receiving the gift. Generally, we think that a person’s potential complaint against a particular action is much weaker when they consented to that action being conducted upon them.

This allows us to say that the permissibility of gift giving is a function of the following two parameters:

  1. Expected net wellbeing impact of gift giving (henceforth expected net impact)
  2. Level of consent

The weight given to each is going to vary with one’s background intuitions on paternalism. Anti-paternalists might thus completely disregard the first parameter, arguing that given sufficient consent, gift-giving is always permissible. My argument is inclusive to a broad pluralism on this matter, since it avoids the second parameter altogether, as discussed in section 3.

 

Section 3: Giving Children Luxuries

By evaluating luxury parenting on the two parameters, I argue that many instances of the practice are impermissible.

Firstly, consider level of consent. Children are usually thought to lack the required capacities for autonomous decision making, such as critical thinking, time-relative agency (ownership of future interests) and independence.[3] This means that children, generally, cannot consent to receiving luxuries from their parents.

As such, we must adapt the model of consent for children. Brighouse suggests that the autonomy rights of children express themselves as fiduciary duties upon parents.[4] Parents thus have the authority to make decisions for their children, but this authority is limited by the duty to act in the child’s best interest. This means that parents can permissibly give gifts to children, but only when those gifts appear to be in the best interest of the child. Assume now that, ceteris paribus, the non-welfare interests of children are unaffected in cases of gift giving. Given this assumption, we can say that the permissibility of child gift giving boils down to the expected net impact.

Luxury parenting is thus usually impermissible since it is particularly likely to lead to a negative expected net impact. This is because the preference screwing effect is likely to be strong, while the immediate benefit is small. Children are particularly vulnerable to preference screwing from luxury parenting for four reasons.

Firstly, childhood is an especially formative stage in life. Due to the ongoing development of the brain, the patterns children learn are going to be especially lasting.[5] This means that if preferences are formed to expect a high standard of living, these preferences are going to be especially sticky. If the child’s standard of living drops upon reaching adulthood, those preferences will likely adapt down less quickly and won’t adapt down completely.

Secondly, when children experience certain goods, they often experience them for the first time. If the first variant of a particular good or service they experience is an expensive one, they are likely to calibrate their future expectations on it, because they have no cheaper variants to compare it to.

Thirdly, children generally will have a lesser appreciation of the uniqueness or scarcity of the goods they experience at a high standard of living. In wine gift, Will is acutely aware that his drinking of Château Latour is a unique and temporary experience. This awareness can deter the preference adaptation. However, children are less likely to be aware of the fleeting nature of the standard of living and so are not protected from preference adaptation in this way.

Lastly, the effect is going to be especially strong because the luxury gifts are provided for an extended period of time. If parents provide a luxurious standard of living for multiple years, that gives a very long time for the child’s preferences to be pushed upwards and solidify there.

On the flip side, the immediate utility effect is going to be smaller for children. The satisfaction people receive from luxuries often goes beyond the direct experiential joy of the good or service. There is also the novelty of the experience, and the secondary reflective happiness from knowing that you are consuming something special. Children are much less likely to appreciate the novelty of the experience since they are likely, as argued above, to be less aware of the uniqueness of the experience.

In sum, luxury parenting has strongly negative preference screwing effects while offering only limited immediate utility. In turn, luxury parenting is likely to have a negative expected net impact on children, meaning that luxury parenting is often impermissible.

 

Section 4: Objections

Objection 1: Symmetry Implications

If it is impermissible to give a luxurious standard of life to children, this could imply that it is morally required to give a miserable existence to children instead. If childhood suffering will push preferences down such that children will be happier in the long run, this may be better for the child. This implication would be so clearly unacceptable that it would condemn the whole argument. However, the implications of the model are asymmetrical. This is because children are generally thought to have significant rights, which ought to be respected. They have rights against being physically harmed and to a reasonable standard of living. Parents cannot impose suffering on their children even if it is a net-positive on lifetime wellbeing because this would violate these rights protections.

On the flipside, parents can permissibly withdraw these luxury goods, since children generally are not thought to have a right to luxury living.

 

Objection 2: Shared Time

One might argue that luxury parenting is permissible because it is necessary for parents to give themselves a high quality of life. Parents are generally thought to be under an obligation to spend quality time with their children because a healthy parental relationship is crucial for the child’s development. This is problematic since many opportunities for quality time are also opportunities for parents to spend money on themselves, such as restaurants, vacations, or entertainment. So, if we think parents should be permitted to spend money on themselves, this could make luxury parenting permissible. There are three responses to this objection.

Firstly, there are still many ways parents can spend on themselves without spending on their children. Parents can spend money on activities without their children, or they can spend money on themselves while shielding their children from the same luxury expenditure, for instance by ordering a lobster for themselves and the pasta for their child.

Secondly, the magnitude of this sacrifice, being unable to spend on oneself, directly correlates with the level of wealth parents have. This makes the sacrifice a less significant problem because the wealth of parents reduces the required sacrifice of parenting significantly in other contexts. Wealthy parents can afford babysitters, summer camps, and meal boxes. This means that the sacrifice of giving up luxury is balanced out by the diminished sacrifice in other facets of parenting.

Thirdly, parents are routinely asked to make sacrifices for their children in determining how they spend their time. They can only watch child-friendly movies, avoid bars, and go to child-friendly holiday destinations. It’s unclear, for instance, how giving up luxury is materially different from forcing parents to go on vacation to Disneyland.

In sum, a parent’s interest in treating themselves is insufficient for making luxury parenting permissible.

 

Works Cited:

Bagenstos, Samuel R., and Margo Schlanger. ‘Hedonic Damages, Hedonic Adaptation, and Disability’. Vanderbilt Law Review 60, no. 3 (2007): 745–98.

Brighouse, Harry. ‘What Rights (If Any) Do Children Have’, 1 January 2002. https://doi.org/10.1093/0199242682.003.0003.

Coleman, Joe. ‘Answering Susan: Liberalism, Civic Education, and the Status of Younger Persons’. In The Moral and Political Status of Children, edited by David Archard and Colin M. Macleod. Oxford University Press, 2002. https://doi.org/10.1093/0199242682.003.0009.

Russell, Simon J., Karen Hughes, and Mark A. Bellis. ‘Impact of Childhood Experience and Adult Well-Being on Eating Preferences and Behaviours’. BMJ Open 6, no. 1 (1 January 2016): e007770. https://doi.org/10.1136/bmjopen-2015-007770.

 

[1] Bagenstos and Schlanger, ‘Hedonic Damages, Hedonic Adaptation, and Disability’.

[2] Bagenstos and Schlanger.

[3] Coleman, ‘Answering Susan: Liberalism, Civic Education, and the Status of Younger Persons’.

[4] Brighouse, ‘What Rights (If Any) Do Children Have’.

[5] Russell, Hughes, and Bellis, ‘Impact of Childhood Experience and Adult Well-Being on Eating Preferences and Behaviours’.

Cross Post: Why Government Budgets are Exercises in Distributing Life and Death as Much as Fiscal Calculations

By: admin

Written by Hazem Zohny, University of Oxford

Sacrificial dilemmas are popular among philosophers. Should you divert a train from five people strapped to the tracks to a side-track with only one person strapped to it? What if that one person were a renowned cancer researcher? What if there were only a 70% chance the five people would die?

These questions sound like they have nothing to do with a government budget. These annual events are, after all, conveyed as an endeavour in accounting. They are a chance to show anticipated tax revenues and propose public spending. We are told the name of the game is “fiscal responsibility” and the goal is stimulating “economic growth”. Never do we talk of budgets in terms of sacrificing some lives to save others.

In reality, though, government budgets are a lot like those trains, in philosophical terms. Whether explicitly intended or not, some of us take those trains to better or similar destinations, and some of us will be left strapped to the tracks. That is because the real business of budgets is in distributing death and life. They are exercises in allocating misery and happiness.

Take the austerity policies introduced by the UK government in 2010 and the following years. Studies put the mortality cost of spending cuts between 2010 and 2014 at approximately 150,000 excess deaths. A more recent study suggested that 335,000 excess deaths could be attributed to the austerity of the 2010s.

These are contestable figures – and they cannot draw a direct causal relationship between specific austerity policies and number of deaths. But even if it is impossible to assign an accurate death toll to austerity, it should come as no surprise that reducing welfare spending will reduce, well, welfare. The same is true in reverse. An England-focused study suggests that a 1% increase in healthcare expenditure could have avoided around 300,000 deaths in the wake of the 2010 cuts.

This has a sobering implication: knowingly or not, a decision is effectively made to let 300,000 die if healthcare expenditure isn’t increased by 1%.

Similarly, there is a clear link between fuel poverty and premature deaths. It is difficult to derive a precise figure, but it’s almost certain that a government decision not to further subsidise energy bills will trigger otherwise preventable deaths.

There is a truism in all this: resources are limited, and decisions inevitably come with trade-offs and opportunity costs.

For instance, austerity was a response to a severe economic downturn. Recessions reduce how much revenue a government has to spend on services, which leads to the loss of lives and livelihoods. Even for those who don’t suffer in the extreme, life is generally less liveable for many of us. For all we know, the global financial crisis would have (eventually) led to tens of thousands of additional deaths no matter how the government responded.

The values smuggled into budgets

Budgets are of course hugely complicated, but this is all to show that while it is true that they involve accounting, the morally relevant currency of what is being accounted for is ultimately our wellbeing, including its loss through death.

For instance, assume it is indeed true that 300,000 deaths could be prevented by a 1% increase in healthcare spending. Assume also that using these funds in some other way (such as on education and home insulation) would in the long term prevent far more deaths, or, significantly improve the quality of several million lives.

Suddenly, talk of fiscal responsibility seems to miss the point. We are faced with a much more philosophically loaded debate. It becomes pertinent to say, hang on, just how much extra wellbeing for some do you think can make up for the deaths of 300,000 others? And how sure are you about this?

It starts to become obvious how value judgments infuse and implicitly guide these fiscal decisions that, unwittingly or not, involve weighing up certain lives against others, present lives against future ones, and proxies for wellbeing (such as job creation) against others (such as preventing premature deaths).

The risk of the language of “boosting the economy” is that it gives these monumental decisions in ethics a veneer of being value-free budgetary exercises in “following the evidence”, stopping us from seeing how the economic sausage is actually made.

Of course, evidence-based policy is better than unevidenced policy. What is missing is values-transparent policy, and that starts with the philosophical task of laying bare the precise value judgments that go into constructing what we’re told are “good economic outcomes”.

While the Office of Budget Responsibility independently assesses the economic credibility of the budget, no corresponding institution works to uncover its ethical assumptions and value trade-offs. Welfare economists and ethicists need to forge a new relationship that initiates and guides the uncomfortable public conversation of how government budgets inevitably trade off lives and livelihoods, now and against the future.

Equally crucial, by instituting norms that encourage uncovering all the value-judgments smuggled beneath the guise of sanitised fiscal and economic talk, we might reduce the chances of opportunistic politicians gambling with millions of livelihoods by redirecting the train in the name of one ideology or another.

Hazem Zohny, Research Fellow in Practical Ethics, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Corporate ethics and corporate crimes


Several earlier posts focused on corporate responsibility for crimes against humanity during the period of the Holocaust (link, link, link). But we don't need to go to the period of World War II to find examples of crimes committed by corporations in support of their international business interests.

An especially egregious example was confirmed in 2018 when two former Ford Motor Company executives in Argentina were convicted of crimes against humanity for their cooperation with the military dictatorship in operations leading to the kidnapping and murder of twenty-four labor activists (link). The crimes occurred during the dictatorship of the 1970s, and the trial of executives and military officers culminated in 2018. Here is the summary of facts provided in the New York Times story:

A three-judge panel sentenced Pedro Müller, 87, then a manufacturing director at a Ford factory in Buenos Aires province, to 10 years, and Héctor Francisco Sibilla, 92, then the security manager at the plant, to 12 years for assisting in the kidnapping and torture of their colleagues.

The two executives “allowed a detention center to be set up inside the premises of that factory, in the recreational area, so that the abductees could be interrogated,” according to court papers.

These judicial findings establish the inexcusable behavior of Ford Motor Argentina. The question arises here, as it did in the case of Ford Werke during the Holocaust, of how much knowledge and control the parent corporation, Ford Motor Company, possessed. This question is addressed in the New York Times article as well:

“It is clear that Ford Motor Company had control of the Argentinian subsidiary during the ’70s,” said Mr. Ojea Quintana. “Therefore, there is a direct responsibility of Ford Motor Company and that might give us the possibility to bring the case to the U.S. courts.”

Ford said in a statement the company was “aware of the verdict about the supposed participation of ex-employees of the firm in events related to human rights in the ’70s.” The company added that it “always had an open and collaborative attitude with judicial authorities supplying all the available information.”

It is evident that Ford's corporate position on its responsibility for these atrocities was ambivalent. The statement that the corporate headquarters is "aware of the verdict" is quite different from "FMC acknowledges and expresses remorse for these crimes that occurred in its Argentine subsidiary in the 1970s." 

Victoria Basualdo, Tomás Ojea Quintana, and Carolina Varsky address these issues of corporate responsibility in greater detail in "The Cases of Ford and Mercedes Benz", contained in The Economic Accomplices of the Argentine Dictatorship. In their very informative chapter they describe the background of the crimes committed at the Pacheco manufacturing plant in the 1970s:

Toward the mid-1970s, following a period of growth in the country’s automotive industry, workers at the Pacheco plant began mobilizing and organizing at the rank-and-file level, represented by some 200 factory delegates who not only stepped up their demands to management but also increasingly confronted their own national leadership at the Union of Automotive Transport Mechanics and Related Workers (Sindicato de Mecánicos y Afines del Transporte Automotor, SMATA). Pedro Troiani, a factory worker and delegate, and a member of the internal commission, was kidnapped and tortured inside the General Pacheco plant in April 1976. Later, when he testified in court, he talked about the implications and the impact of trade union activism on the way the company operated. In his testimony, he clearly explained that the internal commission received worker complaints that were not only about wages but also had to do with the working conditions and the pace at which they were forced to work, and that in 1975 the commission succeeded in signing an agreement that was highly beneficial to the workers. All of this, he said, had consolidated the commission’s position and the workers’ bargaining power in the company.

It was in that context that repressive policies were implemented, with increasing force after the March 24, 1976 military coup. Between March and May of that year, twenty-five workers in the plant were kidnapped, most of them members of the internal commission and the rest active unionists, who remained “disappeared” for thirty to sixty days. Some of them were kidnapped from their homes and taken to the Tigre police station, which operated as a clandestine detention center, while the rest were seized directly at the factory, where they were held for hours and then taken to the Tigre police station.

The relationship between company and armed forces in this process of repression of workers became apparent in different ways in the Ford case. First, the kidnapping victims have testified that they were picked up in F100 pickup trucks supplied by the company to the military. Second, there are numerous testimonies indicating that, as well as supporting the armed forces, the company asked the military to kidnap workers and trade union delegates. Arcelia Luján de Portillo, the wife of one of the victims, stated in her testimony that during a meeting she had with a military officer responsible for the kidnappings, whose last name was Molinari, the officer “opened a drawer and pulled out a list typed on a sheet of paper with the Ford logo, which he told me had ‘all the names that the company gave us of workers it wanted us to chupar,’” using the repression slang term for kidnapping and disappearing (literally, “suck up”). (159-160)

The authors make it clear that Ford's involvement was active and purposive, in support of business interests of the company:

Also, Ford personnel members participated in the interrogations of the kidnapped delegates, to extract information regarding trade union activities in the factory. One such interrogation was that of detainee Francisco Guillermo Perrotta, who was not a factory worker but one of the administrative employees, a category that until the mid-1970s had not been represented by a union. As an employee in the cost, material, and inventory analysis division, Perrotta had access to key information about the factory’s internal matters. He and another delegate from the financial area were tortured with an electric prod. During the torture session, in which his interrogators mentioned details and names that only very well-informed employees of the firm could know, Perrotta, who was wearing a hood, was able to identify the voice of the factory’s security chief, Héctor Francisco Sibilla, among the people present. Sibilla was a member of the armed forces and on July 26, 1978, after the kidnapping of workers, he was promoted to the rank of lieutenant colonel. After his time in the company, he was hired to work in security at the U.S. Embassy, a position he held until 2004. (162)

These sources make it plain that terrible crimes were committed in Argentina, not just by the military or a few rogue employees, but by the corporations themselves. Basualdo, Quintana, and Varsky quote the president of Ford Argentina in a speech given on May 13, 1980: 

As of March 1976 we were facing a challenge. In Argentina a process had begun, a change of system, a complete change in philosophy, which covered individual behaviors and the collective behavior of society as a whole. A change in mentality was necessary. In our case, we had to make a business decision and, with our actions and procedures, we showed what that decision was. Those representatives of destruction with no love of country and no God – whose eradication has cost the nation so much, and who still persist in small numbers – deserve only scorn from the decent men who work and study, day in and day out, to build this nation. (164)

This sounds very much like an unapologetic confession of a corporate decision to commit murder and other atrocities against "representatives of destruction" (labor organizers and activists).

What strategies exist to hold multinational corporations accountable for their subsidiary activities in many countries? One model that might be considered is a human rights analog to the Foreign Corrupt Practices Act. The FCPA was enacted in 1977 to create significant legal penalties against bribery and other corrupt practices in the conduct of international business (link), and the legislation has had significant effect on the practice of international business. Could we imagine an analogous set of laws and regulations that held corporate officers in multinational corporations legally responsible for crimes against humanity committed by agents of their international subsidiaries? And could such an enactment also provide for reasonably direct civil suits in US courts by foreign individuals in compensation for the damages they have suffered? A legal framework like this would present a new and behaviorally significant constraint on corporate crimes against humanity in other countries.

Ford Motor Company opened its archives in 1998 in order to demonstrate its lack of control of Ford Werke and its non-culpability for Ford Werke's use of forced labor. A similar level of transparency should have been demanded with regard to Ford's conduct during Argentina's military dictatorship.

(Here is a revealing article by Rut Diamint in The Conversation about the involvement of the US government in the dirty war in Argentina; link.)


National Oxford Uehiro Prize in Practical Ethics: Why the Responsibility Gap is Not a Compelling Objection to Lethal Autonomous Weapons

By: admin

This article received an honourable mention in the undergraduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by Tanae Rao, University of Oxford student

There are some crimes, such as killing non-combatants and mutilating corpses, so vile that they are clearly impermissible even in the brutal chaos of war. Upholding human dignity, or whatever is left of it, in these situations may require us to hold someone morally responsible for violation of the rules of combat. Common sense morality dictates that we owe it to those unlawfully killed or injured to punish the people who carried out the atrocity. But what if the perpetrators weren’t people at all? Robert Sparrow argues that, when lethal autonomous weapons cause war crimes, it is often impossible to identify someone–man or machine–who can appropriately be held morally responsible (Sparrow 2007; Sparrow 2016). This might explain some of our ambivalence about the deployment of autonomous weapons, even if their use would replace human combatants who commit war crimes more frequently than their robotic counterparts.

This essay rejects Sparrow’s argument, at least as it applies to a wide class of lethal autonomous weapons I call ‘LAW-1’. When LAW-1s cause war crimes, at least one human being can usually be held morally responsible. I acknowledge that there is a subset of accidents for which attributing moral responsibility is murkier, but they do not give us reason to refrain from using LAW-1s as compared with less sophisticated weapons like guns and missiles.

LAW-1s are the weapons systems that most people envision when imagining a lethal autonomous weapon. I predict that most systems developed in the next decade will be LAW-1s, although some may push the boundaries between LAW-1s and the next generation of lethal autonomous weapons. The defining characteristics of an LAW-1 are:

1. Moderate task specificity: An LAW-1 is a model trained to fulfil a relatively specific task, such as ‘fly around this area and kill any enemy combatants identified if and only if this is allowed under international law’. An example of a task too specific for an LAW-1 is ‘fly to these specific coordinates, then explode’ (this would be more akin to an unsophisticated missile, land mine, etc.). An example of a task too general is ‘perform tasks that will help our state win the war’.

2. No human intervention needed: An LAW-1 is capable of identifying targets and using lethal force without human intervention. For example, an unmanned aerial vehicle (UAV) that uses computer vision techniques to discern active combatants from non-combatants, then shoots the combatants with an attached gun without waiting for human approval would qualify as an LAW-1. An aerial vehicle that requires a remote pilot to operate it is not an LAW-1.

3. No mental states: An LAW-1 does not have mental states, such as pain or regret, and does not have subjective experiences. It is reasonable to believe that all weapons systems currently in operation fulfil this criterion.

I will now outline Sparrow’s argument that lethal autonomous weapons introduce a responsibility gap.

(1) There is a responsibility gap for some war crimes caused by lethal autonomous weapons, meaning that no one can be held morally responsible for the war crime.

(2) Out of basic respect for enemy combatants and non-combatants alike, the legitimate use of any weapon requires that someone can be held responsible if wrongful harm arises as a result of its use.

(C) Therefore, we should not use lethal autonomous weapons during wartime.

I deny the existence of a responsibility gap for an LAW-1. Therefore, the focus of this essay is on the first premise of Sparrow’s argument. There are two reasons why an LAW-1 might commit a war crime. First, this might be intentionally programmed, in which case at least one human being is morally responsible. Second, if the war crime was not a result of human intention, human beings can often be held responsible for gross negligence. I concede that there will be a small number of freak accidents involving the use of LAW-1s for which no human can be held responsible but argue that these cases give us no special reason to reject LAW-1s as compared with less sophisticated weapons.

i. Humans develop and deploy an LAW-1 despite knowing that it will likely commit a war crime.

It should be uncontroversial that humans using an LAW-1 with the knowledge that it will likely commit war crimes are morally responsible for those crimes. For example, a human could knowingly train an LAW-1 with a reward function that incentivises killing non-combatants, even if killing non-combatants is not its explicit goal (e.g., the machine is trained to kill non-combatants that get in its way). The programmers of such a horrible weapon are morally responsible for the war crimes committed. If the military officials knew about its criminal programming, then they too would be morally responsible for the war crimes committed. Therefore, if humans knowingly deploy an LAW-1 that will commit war crimes, there is no responsibility gap.

ii. Humans deploy an LAW-1 without knowing that it could commit a war crime.

Here is where the existence of a responsibility gap is most plausible. Sparrow argues that “the more the system is autonomous then the more it has the capacity to make choices other than those predicted or encouraged by its programmers. At some point then, it will no longer be possible to hold the programmers/designers responsible for outcomes that they could neither control or predict” (Sparrow 2007, 70).

I make two contentions about accidental war crimes caused by LAW-1s. Firstly, many of these automation failures are a result of gross negligence and should have been foreseen by human programmers. As in other cases of negligence, it is appropriate to hold some human beings morally responsible for the results. For example, weapons company executives and/or military leadership could justifiably be imprisoned for some accidents. Secondly, the accidents which could not have been foreseen or prevented through sensible design practice do not give us special reason to dismiss LAW-1s. These accidents are not dissimilar from the misfiring of a gun, or human mistargeting of an unsophisticated missile.

When considering my arguments, it is prudent to think of why such accidents happen. Not all LAW-1s use machine learning (ML) techniques, but ML is widespread enough in tasks important for LAW-1s, such as computer vision, that it is worth exploring in some detail. In general, a machine learning-powered LAW-1 might fail because a) it is (accidentally) given a goal compatible with war crimes without robust constraints, and/or b) it fails at achieving its goal or staying within its constraints (e.g., misidentifying non-combatants as enemy combatants about to shoot friendly combatants).[1]

A body of machine learning research has identified, forewarned, and discussed these potential failure modes in detail.[2] I think it is reasonable to expect LAW-1 programmers to rigorously test their systems to ensure that the frequency of war crimes committed is exceedingly low. Sensible development of LAW-1s might involve intensive testing on representative datasets, early-stage deployments in real combat zones without weaponry to check if non-combatants can be consistently identified, etc. Techniques to solve the problem of misspecified goals (in this case, goals compatible with war crimes) continue to be developed (Ouyang et al. 2022). The comparatively specific objectives given to LAW-1s make overcoming these technical challenges easier than for ML models given very general objectives. And, in the worst-case scenario, LAW-1s committing war crimes can be quickly recalled, and either decommissioned or improved to avoid recurrences.

Crucially, developers of LAW-1s need not be able to predict exactly how or why their machines will fail in order to be held morally responsible for that failure. As long as the LAW-1 committed a war crime as a result of a known failure mode (e.g., glitches in computer vision misclassifying non-combatants) that was not ruled out with a sufficient degree of confidence, developers (among others) can be held morally responsible. This is analogous to an unsophisticated missile whose faulty targeting system causes target coordinates to be miscommunicated, resulting in the accidental bombing of a hospital. The weapons manufacturer can plausibly be held morally responsible for not rigorously testing their product before selling it to the military.

Therefore, it is likely that, in many though not all circumstances, humans can be held morally responsible for war crimes caused by LAW-1s, even if no human explicitly intended for a war crime to be committed. In particular, programmers can be held responsible for not carefully checking for common failure modes, military officials can be held responsible for not sufficiently auditing the weapons they choose to deploy, and states can be held responsible for failing to regulate the development of faulty LAW-1s. I acknowledge that careful, rigorous checks might not currently be possible for LAW-1s, let alone more sophisticated lethal autonomous weapons. But ensuring a very low failure rate in such systems is a technical problem to be solved, rather than some sort of mathematical impossibility. Perhaps the deployment of LAW-1s ought to be delayed until further progress on these technical problems is made, but this does not justify a complete ban.

To be clear, LAW-1s still identify and kill people without human intervention. There will likely always be a small risk of accidentally violating international law when using an LAW-1 even if no negligence is involved. But there is no morally relevant difference between this and a human keying in the wrong target for a missile accidentally, or even a gun misfiring and hurting a surrendered enemy combatant. If LAW-1s have a very high rate of accidental killings, then they should not be used, for the same reason that a very inaccurate missile should not be used. The degree of autonomy exhibited by a weapons system is only relevant insofar as it is correlated with the frequency of accidents; the responsibility gap is not a reason to discount the deployment of LAW-1s with low accident rates.

Sparrow’s response to the charge that non-autonomous weapon-related unjust killings sometimes also have responsibility gaps is that “if the nature of a weapon, or other means of war fighting, is such that it is typically impossible to identify or hold individuals responsible for the casualties that it causes then it is contrary to [the] important requirement of jus in bello” (Sparrow 2007, 67). But I have argued that, at least for the LAW-1s currently being deployed and developed by the world’s militaries, the responsibility gap is far from typical. By this, I mean that the overall number of LAW-1-caused war crimes for which no one can be held morally responsible is plausibly smaller than Sparrow needs for his quoted response to be compelling.

Despite being able to use lethal force without human intervention, LAW-1s are not so different from a gun with regard to the attribution of moral responsibility. Just as a gun might misfire, or a human being may accidentally (and understandably) misaim, LAW-1s might not fulfil the task intended by the humans developing and deploying them. If these accidents are just as infrequent as accidents caused by human combatants, then the existence of a responsibility gap does not give us compelling reason to abandon LAW-1s. As technology develops, it seems likely that accident rates will decrease to the point that LAW-1s are superior to human combatants. Clever programming can allow LAW-1s to escape the violence-inducing cognitive biases shown to be present in human militaries, intake and provide relevant information faster than humans, and ultimately render law-abiding decisions in chaotic situations (Arkin 2010).

Therefore, the responsibility gap is not a compelling reason to refrain from developing and deploying certain kinds of lethal autonomous weapons. In fact, the need to minimise accidents may justify greater expenditure on developing LAW-1s to be as safe as is feasible. Additionally, further research should establish a clearer classification of the degrees of autonomy displayed by different weapons systems, insofar as they bear on moral responsibility. Not all lethal autonomous weapons have the same ethical implications, and it is dangerous to be overly general in our conclusions about such a consequential subject.

 

Bibliography

Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. “Concrete problems in AI safety.” arXiv preprint arXiv:1606.06565 (2016).

Arkin, Ronald C. “The case for ethical autonomy in unmanned systems.” Journal of Military Ethics 9, no. 4 (2010): 332-341.

Di Langosco, Lauro Langosco, Jack Koch, Lee D. Sharkey, Jacob Pfau, and David Krueger. “Goal misgeneralization in deep reinforcement learning.” In International Conference on Machine Learning, pp. 12004-12019. PMLR, 2022.

Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang et al. “Training language models to follow instructions with human feedback.” arXiv preprint arXiv:2203.02155 (2022).

Sparrow, Robert. “Killer robots.” Journal of Applied Philosophy 24, no. 1 (2007): 62-77.

Sparrow, Robert. “Robots and respect: Assessing the case against autonomous weapon systems.” Ethics & International Affairs 30, no. 1 (2016): 93-116.

[1] The former category is not limited to models for which goals are misspecified; I intend for ‘inner alignment’ failures, also known as goal misgeneralisation, to be included as well (see Langosco et al. 2022).

[2] See Amodei et al. 2016 for an overview of these research problems.

Would an Armed Humanitarian Intervention in Haiti Be Legal—And Could It Succeed?

Guest post by Alexandra Byrne, Zoha Siddiqui, and Kelebogile Zvobgo

Haitian officials and world leaders are calling for an armed humanitarian intervention backed by the United Nations (UN) to defeat organized crime. Gangs in Haiti have reportedly kidnapped and killed hundreds of civilians and displaced thousands. Gangs are also limiting access to fuel and blocking critical humanitarian aid to civilians. Add to this a resurgence of cholera.

The United States asked the UN Security Council in October to approve a targeted intervention, under Chapter VII of the UN Charter. US Ambassador to the United Nations Linda Thomas-Greenfield underscored “extreme violence and instability” in Haiti and proposed a mission led by a “partner country” (not the United States or UN peacekeeping forces).

There is nominal support for the mission. In the coming weeks, Canada will send naval vessels to Haiti’s coast, and Jamaica has offered some troops, but no country is taking the lead. Critics argue that past missions in Haiti did more harm than good. In 2010, UN peacekeepers even reintroduced cholera into Haiti. Nonetheless, the United States is pushing for an intervention.

What is an armed humanitarian intervention and would it be legal under international law? Here’s what you need to know.

What Is an Armed Humanitarian Intervention?

An armed humanitarian intervention is a use of force to protect, maintain, or restore peace and security in a target country and internationally. Armed humanitarian interventions differ from ordinary military operations because they aim to protect populations from severe human rights abuses.

Past armed humanitarian interventions achieved limited success in places like Somalia, where troops initially stabilized the country but failed to improve its security environment over the long term.

Armed interventions fundamentally clash with state sovereignty—the idea that states control activities within their territories—because they can be conducted without the target state’s consent. While sovereignty is important in international law, it can nevertheless be sidestepped to stop atrocities and restore international peace.

International Law on Armed Interventions

The prevailing law on international interventions is the UN Charter, which binds all UN member states. Chapter VII of the charter governs international interventions and comprises thirteen key articles. Article 39 establishes that the Security Council may determine when international peace and security are breached or threatened. The article also gives the council authority to take all necessary measures to restore peace.

The remaining articles elaborate on those measures. Article 41 authorizes actions “not involving the use of armed force,” such as economic sanctions, while Article 42 permits “action by air, sea, or land forces.”

So, the Security Council decides whether, how, and why state sovereignty may be infringed, including through the use of force. States may only use force without council authorization in response to an armed attack, under Article 51, but they must still notify the council.

Security Council-authorized interventions may be conducted by UN peacekeeping forces or by UN member states’ troops. (Regional armed interventions require approval under Chapter VIII, Article 53.)

The Security Council has not always authorized armed humanitarian interventions, notably failing to prevent genocide in the Balkans and Rwanda in the 1990s. To avoid repeating those failures, the UN in 2005 adopted the Responsibility to Protect (R2P) principle.

R2P delegates to all states the responsibility to protect all people from genocide, ethnic cleansing, war crimes, and crimes against humanity. Under R2P, the international community must be prepared to take collective action through the Security Council “should peaceful means be inadequate,” a line taken from Chapter VII.

R2P was invoked once in Kenya, not to justify armed intervention, but to rally international mediation. R2P was also used in Côte d’Ivoire to deploy additional UN peacekeeping forces. (These forces completed their mandate in 2017.) One of the largest military actions authorized by the Security Council under R2P was the 2011 NATO intervention in Libya, which has since been criticized for poor planning, increasing instability, and pushing regime change.

Possible Legal Armed Intervention in Haiti

If the US resolution for an armed humanitarian intervention in Haiti is approved by the Security Council, the operation would be legal and, if it focused solely on humanitarian objectives (i.e., opening aid delivery channels, providing aid, and protecting civilians), it could succeed. Still, there is the risk of failure. A more ambitious plan, seeking to change the country’s overall security environment, could also fail, as in Somalia.

But even if the intervention complied with international law and was invited by Haitian officials, critics argue it would be unethical, undermining Haitians’ sovereignty.

Other Means to Mitigate the Crisis in Haiti

If the United States fails to gain Security Council support for an armed humanitarian intervention in Haiti, there are other measures available. The United States can increase the humanitarian aid it already provides, and provide tactical equipment and armored vehicles to the government. The Biden administration could also reverse recently expanded immigration restrictions, and instead provide asylum to Haitian migrants while also supporting struggling transit countries.

For its part, the Security Council could expand the economic sanctions and arms embargoes it adopted against criminal actors in Haiti. But such measures take time to implement and might not be felt for months.

The biggest challenge to mitigating the crisis in Haiti is the gangs that are blocking the delivery of food, fuel, and medical supplies to civilians. The challenge for the United States and the broader international community is to not repeat past mistakes—either by intervening too little, too late, or too much.

Alexandra Byrne is a research fellow in the International Justice Lab at William & Mary. Zoha Siddiqui is a 1693 scholar, a research fellow in the International Justice Lab at William & Mary, and an incoming George J. Mitchell Scholar at Queen’s University Belfast. Kelebogile Zvobgo is an assistant professor of government at William & Mary, a faculty affiliate at the Global Research Institute, and founder and director of the International Justice Lab.

Reports from Abroad: Maryellen Stohlman-Vanderveen

This series questions and complicates what ‘reporting from abroad’ can mean in a globalized world that faces interconnected and local crises alongside forces grappling with how to liberate our beings from oppressive structures rooted in past and present (neo)colonialism and imperialism. We can take this as a chance to collectively and constructively consider both broader […]