Practical Ethics

National Oxford Uehiro Prize in Practical Ethics: Why the Responsibility Gap is Not a Compelling Objection to Lethal Autonomous Weapons

By: admin — March 3rd 2023 at 10:36

This article received an honourable mention in the undergraduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by Tanae Rao, University of Oxford student

There are some crimes, such as killing non-combatants and mutilating corpses, so vile that they are clearly impermissible even in the brutal chaos of war. Upholding human dignity, or whatever is left of it, in these situations may require us to hold someone morally responsible for violation of the rules of combat. Common sense morality dictates that we owe it to those unlawfully killed or injured to punish the people who carried out the atrocity. But what if the perpetrators weren’t people at all? Robert Sparrow argues that, when lethal autonomous weapons cause war crimes, it is often impossible to identify someone–man or machine–who can appropriately be held morally responsible (Sparrow 2007; Sparrow 2016). This might explain some of our ambivalence about the deployment of autonomous weapons, even if their use would replace human combatants who commit war crimes more frequently than their robotic counterparts.

This essay rejects Sparrow’s argument, at least as it applies to a wide class of lethal autonomous weapons I call ‘LAW-1’. When LAW-1s cause war crimes, at least one human being can usually be held morally responsible. I acknowledge that there is a subset of accidents for which attributing moral responsibility is murkier, but they do not give us reason to refrain from using LAW-1s as compared with less sophisticated weapons like guns and missiles.

LAW-1s are the weapons systems that most people envision when imagining a lethal autonomous weapon. I predict that most systems developed in the next decade will be LAW-1s, although some may push the boundaries between LAW-1s and the next generation of lethal autonomous weapons. The defining characteristics of an LAW-1 are:

1. Moderate task specificity: An LAW-1 is a model trained to fulfil a relatively specific task, such as ‘fly around this area and kill any enemy combatants identified if and only if this is allowed under international law’. An example of a task too specific for an LAW-1 is ‘fly to these specific coordinates, then explode’ (this would be more akin to an unsophisticated missile, land mine, etc.). An example of a task too general is ‘perform tasks that will help our state win the war’.

2. No human intervention needed: An LAW-1 is capable of identifying targets and using lethal force without human intervention. For example, an unmanned aerial vehicle (UAV) that uses computer vision techniques to discern active combatants from non-combatants, then shoots the combatants with an attached gun without waiting for human approval, would qualify as an LAW-1. An aerial vehicle that requires a remote pilot to operate it is not an LAW-1.

3. No mental states: An LAW-1 does not have mental states, such as pain or regret, and does not have subjective experiences. It is reasonable to believe that all weapons systems currently in operation fulfil this criterion.

I will now outline Sparrow’s argument that lethal autonomous weapons introduce a responsibility gap.

(1) There is a responsibility gap for some war crimes caused by lethal autonomous weapons, meaning that no one can be held morally responsible for the war crime.

(2) Out of basic respect for enemy combatants and non-combatants alike, the legitimate use of any weapon requires that someone can be held responsible if wrongful harm arises as a result of its use.

(C) Therefore, we should not use lethal autonomous weapons during wartime.

I deny the existence of a responsibility gap for an LAW-1. Therefore, the focus of this essay is on the first premise of Sparrow’s argument. There are two reasons why an LAW-1 might commit a war crime. First, the war crime might have been intentionally programmed, in which case at least one human being is morally responsible. Second, if the war crime was not a result of human intention, human beings can often be held responsible for gross negligence. I concede that there will be a small number of freak accidents involving the use of LAW-1s for which no human can be held responsible, but argue that these cases give us no special reason to reject LAW-1s as compared with less sophisticated weapons.

i. Humans develop and deploy an LAW-1 despite knowing that it will likely commit a war crime.

It should be uncontroversial that humans using an LAW-1 with the knowledge that it will likely commit war crimes are morally responsible for those crimes. For example, a human could knowingly train an LAW-1 with a reward function that incentivises killing non-combatants, even if killing non-combatants is not its explicit goal (e.g., the machine is trained to kill non-combatants that get in its way). The programmers of such a horrible weapon are morally responsible for the war crimes committed. If the military officials knew about its criminal programming, then they too would be morally responsible for the war crimes committed. Therefore, if humans knowingly deploy an LAW-1 that will commit war crimes, there is no responsibility gap.

ii. Humans deploy an LAW-1 without knowing that it could commit a war crime.

Here is where the existence of a responsibility gap is most plausible. Sparrow argues that “the more the system is autonomous then the more it has the capacity to make choices other than those predicted or encouraged by its programmers. At some point then, it will no longer be possible to hold the programmers/designers responsible for outcomes that they could neither control nor predict” (Sparrow 2007, 70).

I make two contentions about accidental war crimes caused by LAW-1s. Firstly, many of these automation failures are a result of gross negligence and should have been foreseen by human programmers. As in other cases of negligence, it is appropriate to hold some human beings morally responsible for the results. For example, weapons company executives and/or military leadership could justifiably be imprisoned for some accidents. Secondly, the accidents which could not have been foreseen or prevented through sensible design practice do not give us special reason to dismiss LAW-1s. These accidents are not dissimilar from the misfiring of a gun, or human mistargeting of an unsophisticated missile.

When considering my arguments, it is prudent to think of why such accidents happen. Not all LAW-1s use machine learning (ML) techniques, but ML is widespread enough in tasks important for LAW-1s, such as computer vision, that it is worth exploring in some detail. In general, a machine learning-powered LAW-1 might fail because a) it is (accidentally) given a goal compatible with war crimes without robust constraints, and/or b) it fails at achieving its goal or staying within its constraints (e.g., misidentifying non-combatants as enemy combatants about to shoot friendly combatants).[1]

A body of machine learning research has identified, anticipated, and discussed these potential failure modes in detail.[2] I think it is reasonable to expect LAW-1 programmers to rigorously test their systems to ensure that the frequency of war crimes committed is exceedingly low. Sensible development of LAW-1s might involve intensive testing on representative datasets, early-stage deployments in real combat zones without weaponry to check whether non-combatants can be consistently identified, etc. Techniques to solve the problem of misspecified goals (in this case, goals compatible with war crimes) continue to be developed (Ouyang et al. 2022). The comparatively specific objectives given to LAW-1s make overcoming these technical challenges easier than for ML models given very general objectives. And, in the worst-case scenario, LAW-1s that commit war crimes can be quickly recalled and either decommissioned or improved to avoid recurrences.
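
To make the idea of a pre-deployment audit concrete, the following is a minimal sketch, written in Python purely for illustration, of the kind of statistical gate such a negligence standard might demand. The names (EvalResult, AUDIT_THRESHOLD, and so on) and the numbers are my own hypothetical assumptions rather than features of any real system or of the essay’s argument: the developer estimates how often a perception model misclassifies protected persons on a held-out, representative validation set, and deployment is refused unless even a pessimistic estimate of that rate falls below a threshold agreed in advance.

# Illustrative sketch only: a pre-deployment audit gate of the sort described above.
# All names (EvalResult, AUDIT_THRESHOLD, etc.) and numbers are hypothetical assumptions.
from dataclasses import dataclass
from math import sqrt

@dataclass
class EvalResult:
    protected_total: int          # validation examples labelled as protected persons
    protected_misidentified: int  # of those, how many the model flagged as lawful targets

def misidentification_rate(result: EvalResult) -> float:
    # Point estimate of how often protected persons are misclassified.
    return result.protected_misidentified / result.protected_total

def upper_confidence_bound(result: EvalResult, z: float = 2.58) -> float:
    # Conservative (normal-approximation) upper bound on the true misidentification rate.
    p = misidentification_rate(result)
    return p + z * sqrt(p * (1 - p) / result.protected_total)

AUDIT_THRESHOLD = 1e-4  # hypothetical maximum tolerable rate, fixed in advance

def passes_audit(result: EvalResult) -> bool:
    # Deployment is permitted only if even the pessimistic estimate is below the threshold.
    return upper_confidence_bound(result) < AUDIT_THRESHOLD

# Example: 50,000 labelled validation cases, 3 misidentifications -> audit fails.
print(passes_audit(EvalResult(protected_total=50_000, protected_misidentified=3)))  # False

The particular statistics do not matter; what matters is that such checks can be specified before deployment, so a developer who skips them, and whose system then commits a war crime through a known failure mode of this kind, is precisely the negligent party described above.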

Crucially, developers of LAW-1s need not be able to predict exactly how or why their machines will fail in order to be held morally responsible for that failure. As long as the LAW-1 committed a war crime as a result of a known failure mode (e.g., glitches in computer vision misclassifying non-combatants) that was not ruled out with a sufficient degree of confidence, developers (among others) can be held morally responsible. This is analogous to an unsophisticated missile whose faulty targeting system causes target coordinates to be miscommunicated, resulting in the accidental bombing of a hospital. The weapons manufacturer can plausibly be held morally responsible for not rigorously testing its product before selling it to the military.

Therefore, it is likely that, in many though not all circumstances, humans can be held morally responsible for war crimes caused by LAW-1s, even if no human explicitly intended for a war crime to be committed. In particular, programmers can be held responsible for not carefully checking for common failure modes, military officials can be held responsible for not sufficiently auditing the weapons they choose to deploy, and states can be held responsible for failing to regulate the development of faulty LAW-1s. I acknowledge that careful, rigorous checks might not currently be possible for LAW-1s, let alone more sophisticated lethal autonomous weapons. But ensuring a very low failure rate in such systems is a technical problem to be solved, rather than some sort of mathematical impossibility. Perhaps the deployment of LAW-1s ought to be delayed until further progress on these technical problems is made, but this does not justify a complete ban.

To be clear, LAW-1s still identify and kill people without human intervention. There will likely always be a small risk of accidentally violating international law when using an LAW-1, even if no negligence is involved. But there is no morally relevant difference between this and a human accidentally keying in the wrong target for a missile, or even a gun misfiring and injuring a surrendered enemy combatant. If LAW-1s have a very high rate of accidental killings, then they should not be used, for the same reason that a very inaccurate missile should not be used. The degree of autonomy exhibited by a weapons system is only relevant insofar as it is correlated with the frequency of accidents; the responsibility gap is not a reason to discount the deployment of LAW-1s with low accident rates.

Sparrow’s response to the charge that non-autonomous weapon-related unjust killings sometimes also have responsibility gaps is that “if the nature of a weapon, or other means of war fighting, is such that it is typically impossible to identify or hold individuals responsible for the casualties that it causes then it is contrary to [the] important requirement of jus in bello” (Sparrow 2007, 67). But I have argued that, at least for the LAW-1s currently being deployed and developed by the world’s militaries, the responsibility gap is far from typical. By this, I mean that the overall number of LAW-1-caused war crimes for which no one can be held morally responsible is plausibly smaller than Sparrow needs for his quoted response to be compelling.

Despite being able to use lethal force without human intervention, LAW-1s are not so different from guns with regard to the attribution of moral responsibility. Just as a gun might misfire, or a human being may accidentally (and understandably) misaim, LAW-1s might not fulfil the task intended by the humans developing and deploying them. If these accidents are just as infrequent as accidents caused by human combatants, then the existence of a responsibility gap does not give us compelling reason to abandon LAW-1s. As technology develops, it seems likely that accident rates will decrease to the point that LAW-1s are superior to human combatants. Clever programming can allow LAW-1s to escape the violence-inducing cognitive biases shown to be present in human militaries, to take in and relay relevant information faster than humans, and ultimately to render law-abiding decisions in chaotic situations (Arkin 2010).

Therefore, the responsibility gap is not a compelling reason to refrain from developing and deploying certain kinds of lethal autonomous weapons. In fact, the need to minimise accidents may justify more expenditure on developing LAW-1s to be as safe as is feasible. Additionally, further research should establish a clearer classification of the degree of autonomy displayed by different weapons systems, as is relevant to moral responsibility. Not all lethal autonomous weapons have the same ethical implications, and it is dangerous to be overly general in our conclusions about such a consequential subject.

 

Bibliography

Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. “Concrete problems in AI safety.” arXiv preprint arXiv:1606.06565 (2016).

Arkin, Ronald C. “The case for ethical autonomy in unmanned systems.” Journal of Military Ethics 9, no. 4 (2010): 332-341.

Di Langosco, Lauro Langosco, Jack Koch, Lee D. Sharkey, Jacob Pfau, and David Krueger. “Goal misgeneralization in deep reinforcement learning.” In International Conference on Machine Learning, pp. 12004-12019. PMLR, 2022.

Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang et al. “Training language models to follow instructions with human feedback.” arXiv preprint arXiv:2203.02155 (2022).

Sparrow, Robert. “Killer robots.” Journal of Applied Philosophy 24, no. 1 (2007): 62-77.

Sparrow, Robert. “Robots and respect: Assessing the case against autonomous weapon systems.” Ethics & International Affairs 30, no. 1 (2016): 93-116.

[1] The former category is not limited to models for which goals are misspecified; I intend for ‘inner alignment’ failures, also known as goal misgeneralisation, to be included as well (see Langosco et al. 2022).

[2] See Amodei et al. 2016 for an overview of these research problems.

Philosophy Archives | OUPblog

Charity and solidarity! What responsibilities do nonprofits have towards Ukraine?

By: Becky Clifford — January 30th 2023 at 10:30

In a speech to the UN General Assembly in the fall of 2022, President Biden called on the UN to stand in solidarity with Ukraine. At least 1,000 companies have left Russia because of Putin’s brutal unprovoked war on Ukraine. Some companies left because of sanctions. Others left for moral reasons, often under pressure from investors and consumers, and out of empathy with their employees in Ukraine. But companies also have human rights responsibilities. Whether they stay or leave Russia will impact the war and the human rights of the people of Ukraine. When companies leave en masse, Russia faces the possibility of economic oblivion.

Nonprofits can also impact the war. Russian oligarchs have donated lots of money to cultural organizations, universities, and think tanks, such as Harvard, MOMA, and the Council on Foreign Relations. Many of these donations are tainted by corruption and the close ties oligarchs have with Putin.

Philanthropy is a common way for oligarchs to launder their reputations, sometimes with an eye to future wrongdoing, a pattern social psychologists call moral licensing. Studies show that people often follow their good acts with bad acts as a way to balance out the good with the bad. In the end, whatever good oligarchs do through their giving may be outweighed by the bad they’ve done in the past or will do in the future. But oligarchs are only part of the problem. Nonprofits that solicit and accept their donations are complicit in those harms, too.

What are the responsibilities of nonprofits? How should they meet their moral and human rights responsibilities during Russia’s war on Ukraine? What should we expect from museums, universities, and cultural organizations? If anything, they should be held to a higher standard than for-profit enterprises. After all, nonprofits serve the public good. They may not have had a physical presence in Russia, the way Starbucks and Levi Strauss did, but many of them are connected to Putin by way of Russian oligarchs.

“Philanthropy is a common way for oligarchs to launder their reputations, sometimes with an eye to future wrongdoing.”

How are nonprofits connected to Russia’s oligarchs?

Consider Viktor Vekselberg, a prominent Russian oligarch with close ties to Putin and head of the Skolkovo Foundation. Like many Russian oligarchs, he made his money amid the collapse of the Soviet Union. The Skolkovo Foundation donated over $300 million to the Massachusetts Institute of Technology (MIT) to support Skoltech, a program aimed at developing Russia’s tech sector. Vekselberg also sat on MIT’s Board of Trustees. It was only in 2018, after the US Treasury sanctioned him for “malign activities,” that MIT found the wherewithal to remove him from the Board. And it was not until Russia invaded Ukraine that MIT ended the Skoltech Program, explaining, “this step is a rejection of the actions of the Russian government in Ukraine.” MIT finally got it right. Donors, such as Vekselberg, implicate nonprofits in Russia’s war on Ukraine. But had MIT done its due diligence from the outset, it would not have accepted Vekselberg’s donation in the first place. Boycotting oligarchs shows solidarity with the people of Ukraine, while doing nothing renders nonprofits complicit in the human rights violations suffered in Ukraine.

Vladimir Potanin, Russia’s richest oligarch, has supported the Kennedy Center and the Guggenheim Museum, among others. Until recently, he sat on the Board of Trustees at the Guggenheim and on the Advisory Board of the Council on Foreign Relations. Potanin resigned from both in April 2022. Although not a Russian citizen, Len Blavatnik is a Russian insider who donated millions of dollars to Oxford, the Tate Modern, Yale, Harvard Medical School, and the Council on Foreign Relations, to name a few of the elite recipients of his philanthropy. Aaron Ring, a Yale professor who received support from the Blavatnik Fund, called on Yale to suspend the program. He was concerned that Yale was endorsing the donor. Yale maintained that since Blavatnik had not been sanctioned, his donation could be accepted. During Russia’s war on Ukraine, stakeholders like Aaron Ring don’t want to benefit from Russia’s oligarchs. They want to stand in solidarity with Ukraine.

How are nonprofits implicated in Russia’s human rights violations?

The Guiding Principles on Business and Human Rights were endorsed by the UN Human Rights Council in 2011. They hold that enterprises are responsible not only for their direct human rights violations, but also for their indirect ones. So, what counts as an indirect violation in the nonprofit sector? When a nonprofit benefits from donors who are implicated in human rights violations, the nonprofit is complicit in the wrongs of the donors. Many Russian oligarchs are tied to Putin, have profited from their relationship with him, and stand to benefit from his war on Ukraine.

“Boycotting oligarchs shows solidarity with the people of Ukraine, while doing nothing renders nonprofits complicit in the human rights violations suffered in Ukraine.”

When nonprofits refuse to accept donations from oligarchs, they stand in solidarity with Ukraine against Russia. Given the tendency of oligarchs to donate to elite and high-profile organizations, boycotting them may create a bandwagon effect, or a little philanthropy warfare!

Russia has a long record of human rights violations. Freedom of expression is one example. The Committee to Protect Journalists confirmed that 82 journalists and media workers were killed in Russia between 1992 and 2022. In 2020, Russia adopted a law banning so-called “disrespect” to authorities. Its violation of the fundamental rights of LGBTQ people is longstanding. In 2013, it penalized so-called “propaganda” about homosexuality. Activists and celebrities faced fines for supporting the LGBTQ community in Russia.

Now, Russia is under investigation for war crimes such as rape, torture, and execution-style murders of civilians. In 2022, the UN General Assembly resolved that Russia should withdraw its military forces from Ukraine. This came amidst reports of Russian attacks on residences, schools, and hospitals, and on civilians, including women, people with disabilities, and children.

Have nonprofits done enough for human rights?

Have nonprofits done enough for human rights? No, not when it comes to Russian oligarchs. By laundering the reputations of oligarchs, nonprofits have enabled Putin’s war on Ukraine and the horrific suffering it has brought. The Guiding Principles can help nonprofits identify their human rights responsibilities and ensure that they are not complicit in Russia’s human rights violations. All enterprises should practice due diligence, a mechanism that prevents human rights violations and complicity in them. Refusing donations from Russian oligarchs is the very least nonprofits can do.

Transparency is at the heart of due diligence. Yale Professor Jeffrey Sonnenfeld has tracked which companies left Russia and which have stayed, providing much-needed transparency on the operations of for-profit enterprises. Not only does Sonnenfeld’s list incentivize companies to pull out of Russia; the companies that left have also outperformed those that remained. Unfortunately, no such list exists for nonprofits. Tracking nonprofits with respect to Russian oligarchs, “knowing and showing” their ties, would go a long way toward ensuring that they meet their human rights responsibilities.

To be sure, there is a risk that nonprofits will receive less money if they boycott Russian oligarchs. But it is also possible that they will be rewarded for doing the right thing, as Hands on Hartford was when it refused donations from the Proud Boys, a white supremacist group. Generous donors may come forward when they learn that nonprofits stand in solidarity with Ukraine. Granted, the impact nonprofits can have on the war in Ukraine is not as great as that of for-profit companies, if only because of scale. But keep in mind that nonprofits serve the public good, which if anything enhances their human rights responsibilities. In the long run, when nonprofits stand in solidarity with Ukraine, they serve the public good.

Featured image by Elena Mozhilov via Unsplash (public domain)
