Stay Clear of the Door

An AI door, according to a generative AI

Written by David Lyreskog 

 

In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma. 

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.

The other day a colleague of mine shared a YouTube video of the presentation The AI Dilemma, by Tristan Harris and Aza Raskin. In it, they share with the audience their concerns about the rapid and somewhat wild development of artificial intelligence (AI) in the hands of a few tech giants. I highly recommend it. (The video, that is. Not the rapid and somewhat wild development of AI in the hands of a few tech giants).

 

Much like the thousands of signatories of the March open call to “pause giant AI experiments”, and recently the “Godfather of AI” Geoffrey Hinton, Harris and Raskin warn us that we are on the brink of major (negative, dangerous) social disruption due to the power of new AI technologies.

 

Indeed, there has been a bit of a public buzz about “AI ethics” in recent months.

 

While it is good that there is a general awareness and a public discussion about AI – or any majorly disruptive phenomenon, for that matter – there’s a potential problem with the abstraction: AI is portrayed as this big, emerging technological behemoth which we cannot or will not control. But it has been almost three decades since humans were able to beat an AI at a game of chess. We have been using AI for many things, from medical diagnosis to climate prediction, with little to no concern about it besting us and/or stripping us of agency in these domains. In other domains, such as driving cars and military applications of drones, there has been significantly more controversy.

All this is just to say that AI ethics is not for hedgehogs – it’s not “one big thing”[i] – and I believe that we need to actively avoid a narrative and a line of thinking which paints it as one. In examining the ethical dimensions of a multitude of AI inventions, then, we ought, at the very least, to take care to limit the scope of our inquiry to the domain in question.

 

So let us, for argument’s sake, return to that door at the Uehiro Centre, and the voice cautioning visitors to stay clear. Now, as far as I’m aware, the voice and the door are not part of an AI system. I also believe that there is no person who is tasked with waiting around for visitors asking for access, warning them of the impending door swing, and then manually opening the door. I believe it is quite a simple contraption, with a voice recording programmed to play as the door opens. But does it make a difference to me, or other visitors, which of these possibilities is true?

 

We can call these possibilities:

Condition one (C1): AI door, created by humans.

Condition two (C2): Human speaker & door operator.

Condition three (C3): Automatic door & speaker, programmed by humans.

 

In C3, it seems that the outcome of the visitor’s action will always be the same after the buzzer is pushed or the key card is blipped: the voice will automatically say ‘stay clear of the door’, and the door will open. In C1 and C2, the same could be the case. But it could also be the case that the AI/human has been instructed to assess the risk for visitors on a case-by-case basis, and to only advise caution if there is imminent risk of collision or such (were this the case, I consistently stand too close to the door when visiting, but that is beside the point).

 

On the surface, I think there are some key differences between these conditions which could have an ethical or moral impact, where some differences are more interesting than others. In C1 and C2, the door opener makes a real-time assessment, rather than following a predetermined course of action in the way C3’s door opener does. More importantly, the door opener in C2 is presumed to make this assessment from a place of concern, in a way which is impossible in C1 and C3, because the latter two are not moral agents and therefore cannot be concerned. They simply do not have the capacity. And our inquiry could perhaps end here.

But that, it seems, would be a mistake.

 

What if something was to go wrong? Say the door swings open, but no voice warns me to stay clear, and so the door whacks me in the face[ii]. In C2, it seems the human whose job it is to warn me of the imminent danger might have done something morally wrong, assuming they knew what to expect from opening the door without warning me, yet failed to warn me due to negligence[iii]. In C1 and C3, on the other hand, while we may be upset with the door opener(s), we don’t believe that they did anything morally wrong – they just malfunctioned.

 

My colleague Alberto Giubilini highlighted the tensions in the morality of this landscape in what I thought was an excellent piece arguing that “It is not about AI, it is about humans”: we cannot trust AI, because trust is a relationship between moral agents, and AI does not (yet) have the capacity for moral agency and responsibility. We can, however, rely on AI to behave in a certain way (whether we should is a separate issue).

 

Similarly, while we may believe that a human should show concern for their fellow person, we should not expect the same from AIs, because they cannot be concerned.

 

Yet, if the automatic doors continue to whack visitors in the face, we may start feeling that someone should be responsible for this – not only legally, but morally: someone has a moral duty to ensure these doors are safe to pass through, right?

 

In doing so, we expand the field of inquiry, from the door opener to the programmer/constructor of the door opener, and perhaps to someone in charge of maintenance.

 

A couple of things pop to mind here.

 

First, when we find no immediate moral agent to hold responsible for a harmful event, we may expand the search field until we find one. That search seems to me to follow a systematic structure: if the door is automatic, we call the support line, and if the support fails to fix the problem but turns out to be an AI, we turn to whoever is in charge of support, and so on, until we find a moral agent.

 

Second, it seems to me that, if the door keeps slamming into visitors’ faces in C2, we will not only morally blame the door operator, but also whoever left them in charge of that door. So perhaps the systems-thinking not only applies when there is a lack of moral agents, but also applies on a more general level when we are de facto dealing with complicated and/or complex systems of agents.

 

Third, let us conjure a condition four (C4) like so: the door is automatic, but in charge of maintenance support is an AI system that is usually very reliable, and in charge of the AI support system, in turn, is a (human) person.

 

If the person in charge of an AI support system that failed to provide adequate service to a faulty automatic door is to blame for anything, it is plausibly for not adequately maintaining the AI support system – but not for whacking people in the face with a door (because they didn’t do that). Yet, perhaps there is some form of moral responsibility for the face-whacking to be found within the system as a whole. That is, the compound of door, AI, human, etc. has a moral duty to avoid face-whacking, regardless of any individual moral agent’s ability to whack faces.

 

If this is correct, it seems to me that we again[iv] find that our traditional means of ascribing moral responsibility fails to capture key aspects of moral life: it is not the case that any agent is individually morally responsible for the face-whacking door, nor are there multiple agents who are individually or collectively responsible for the face-whacking door. Yet, there seems to be moral responsibility for face-whacking doors in the system. Where does it come from, and what is its nature and structure (if it has one)?

 

In this way, it is not only cognitive processes, such as thinking and computing, that seem capable of being distributed throughout systems, but perhaps also moral capacities, such as concern, accountability, and responsibility.

And in the end, I do not know to what extent it actually matters, at least in this specific domain. Because at the end of the day, I do not care much whether the door opener is human, an AI, or automatic.

 

I just need to know whether or not I need to stay clear of the door.

Notes & References.

[i] Berlin, I. (2013). The hedgehog and the fox: An essay on Tolstoy’s view of history. Princeton University Press.

[ii] I would like to emphasize that this is a completely hypothetical case, and that I take it to be safe to enter the Uehiro Centre. The risk of face-whacking is, in my experience, minimal.

[iii] Let’s give them the benefit of the doubt here, and assume it wasn’t maleficence.

[iv] Together with Hazem Zohny, Julian Savulescu, and Ilina Singh, I have previously argued this to be the case in the domain of emerging technologies for collective thinking and decision-making, such as brain-to-brain interfaces. See the Open Access paper Merging Minds for more on this argument.

Video Interview: Introducing Oxford Uehiro Centre’s Academic Visitor, Prof Dr Matthias Braun

By: admin

This is the first of a new series of short videos produced by the OUC introducing the academic visitors at the Oxford Uehiro Centre and the practical ethics research that they are involved in.

Should Social Media Companies Use Artificial Intelligence to Automate Content Moderation on their Platforms and, if so, Under What Conditions?

By: admin

This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by University of Oxford student Trenton Andrew Sewell 

Social Media Companies (SMCs) should use artificial intelligence (‘AI’) to automate content moderation (‘CM’), presuming they meet two kinds of conditions: firstly, ‘End Conditions’ (‘ECs’), which restrict what content is moderated; secondly, ‘Means Conditions’ (‘MCs’), which restrict how moderation occurs.

This essay focuses on MCs. Assuming some form of moderation is permissible, I will discuss how and whether SMCs should use AI to moderate. To this end, I outline that CM AI should respect users’ ‘moral agency’ (‘MA’) through transparency, clarity, and the provision of an option to appeal the AI’s judgment. I then address whether AI failing to respect MA proscribes its use. It does not. SMCs are permitted[1] to use AI, despite procedural failures, to discharge substantive obligations to users and owners.

This essay will demonstrate that:
1) Respect for users’ MA entails that SMCs should use AI:
a. In a transparent, reason-giving way,
b. Based on clear rules, and
c. With an option for appeal.
2) But failing to meet these standards does not proscribe using AI. It is a necessary means of discharging important obligations.

Ideal CM AI
People have rights we should respect. This credo is the basis of this essay. However, rights include substantive rights, such as the right to expression. Here, I am presuming that any moderated content is a legitimate target. Hence, moderating this content simpliciter does not violate users’ rights, because SMCs could permissibly moderate the post/user.

The question that remains is what ‘procedural-rights’ users possess. How should SMCs respect users whilst moderating? Here, I address the procedural-rights users have because of their ‘moral agency’ (‘MA’).

MA is the capacity of an agent to understand moral reasons. Respecting the dignity of a person involves treating them as a moral-agent[2]. This requires engagement in moral reasoning[3]. Moral reasoning is the process of giving reasons concerning the justification of an act. Engagement in moral reasoning acknowledges one’s MA and dignity – a basic Kantian requirement[4].

Applying MA to Moderation
Moderation is akin to punishment. H.L.A. Hart defined punishment “in terms of five elements:
1. …consequences…considered unpleasant.
2. …for an offence against…rules.
3. …of an…offender for his offence.
4. …intentionally administered by [another] and
5. …administered by an authority constituted by the…system.”[5]

Moderation removes posts and restricts access to platform features, which is unpleasant. It occurs to ‘offenders’ for breaching the community guidelines. It is intentionally administered by SMCs, which have constituted the authorities that impose moderation. It therefore satisfies Hart’s five elements.

If moderation is punishment, then respecting MA in the process of moderation will be similar to respecting MA in the process of criminal punishment. That involves giving reasons why the act/offence was wrongful, and why the response to the act/offence was just.[6]

Hence, SMCs respect users’ MA whilst moderating if they:
1) Provide moral reasons to users why they ought not post certain content; and
2) Provide moral reasons to users why they are moderating.[7]

SMCs should give users reasons why the guidelines were violated, and why moderation was the right response. CM AI must be, in other words, transparent[8].

Respect for MA requires more than granting reasons. It requires the option of appealing an AI’s judgement to a human moderator.

“Penalizing someone for violating the rules…reasserts our shared values…calling something hate speech…is a….performative assertion that something should be treated as hate speech and…undoubtedly, it will be disagreed with” [9].

Users should be free to question whether such an assertion is an accurate representation of the guidelines. A moral-agent is also a giver, not merely a receiver, of reasons. To engage in the moral reasoning which respects one’s MA, SMCs should give users the option to justify their post.

Furthermore, to respect users, AI should use rules which are prospectively clear. Respecting people as moral-agents is to regard them as able to follow rules they are aware of[10]. Part of what legitimizes punishment is that the user could have complied with the rule.

To respect users as moral-agents, AI should facilitate users’ compliance with rules. CM AI should be:
i) Based on rules;
ii) Which are published;
iii) Prospective;
iv) Intelligible;
v) Free from contradiction;
vi) Possible to follow;
vii) Not constantly changing; and
viii) With congruence between the rule and official actions.[11]

If CM AI satisfies these eight principles, then it respects users by recognizing their MA and, furthermore, by providing rational freedom.

Moral-agents should not face ‘bolts-from-the-blue’. Their freedom should not be dependent on an AI’s whims. The guidelines that the AI follows should allow users to know whether they are in compliance, and to avoid non-compliance.

This prospective clarity enhances the morality of CM AI by providing ‘freedom from domination’:

“[freedom] is not…the availability of…choices. It is conceivable that a free man might have fewer options…than a slave…[But] we think of slavery as the…embodiment of unfreedom…because…the conditions under which he enjoys…options…are…dependant upon the will of the master.”[12]

Clear rules liberate one from dependence/domination. A user’s freedom is not dependent on the SMC but rather on the rules, which equally constrain moderators.

But why accept that ‘punishment’ by SMCs should respect moral agency? State punishment of crimes might need to – but why content moderation?

Because all should respect each other as moral agents. To do otherwise is to disrespect our dignity. Insofar as moral agency is only consistent with certain procedures of punishment by the state, I see no reason why (as an ideal matter) it would impose fundamentally different requirements on punishment by family, friends, strangers, or, crucially here, SMCs.

In summary:
Moderation is punishment. To respect MA whilst punishing, SMCs must use transparent AI which gives users the reasons that justify the SMC’s response. Furthermore, respecting MA requires that AI decisions are appealable to a human moderator. This provides the opportunity for moral discourse, which further respects MA. Lastly, respecting MA requires that the AI’s decisions allow the user to prospectively avoid non-compliance.
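To make these procedural requirements a little more concrete, here is a minimal, purely illustrative sketch – not drawn from the essay, and with hypothetical field and function names – of the kind of record a transparent, appealable, rule-based moderation system might return to a user:

from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationDecision:
    # Hypothetical record a CM AI might return to a moderated user.
    # Each field corresponds to one procedural requirement discussed above.
    post_id: str
    rule_violated: str                    # a published, prospective guideline (clarity)
    reason: str                           # the reason given to the user (transparency)
    action_taken: str                     # e.g. "post removed", "account restricted"
    appeal_available: bool = True         # respect for MA: the judgment can be contested
    appeal_justification: Optional[str] = None
    human_reviewer: Optional[str] = None  # assigned once the user appeals

def file_appeal(decision: ModerationDecision, justification: str) -> ModerationDecision:
    # The user, as a giver (not merely a receiver) of reasons, contests the AI's judgment,
    # and the case is routed to a human moderator (placeholder queue name).
    decision.appeal_justification = justification
    decision.human_reviewer = "human_moderation_queue"
    return decision

# Example: an AI flags a post and the user appeals with their own justification.
decision = ModerationDecision(
    post_id="12345",
    rule_violated="Community Guideline 4.2 (hate speech)",
    reason="The post targets a protected group with dehumanizing language.",
    action_taken="post removed",
)
decision = file_appeal(decision, "The post quotes the slur in order to criticise it.")

Whether any actual SMC’s pipeline looks like this is an empirical question; the point is only that transparency, appealability, and clear rules can be represented as concrete outputs rather than left as abstract ideals.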

Unideal AI?
Whilst the prior section explored how CM AI can respect users’ MA, it neglected two questions. Does CM AI currently respect MA? If it does not, should SMCs continue to use AI which violates procedural rights?

The answer to the first question is no. “A common critique of automated decision making is the…lack of transparency…[It is]…difficult to decipher…the specific criteria by which…decisions were made”[13]. Furthermore, systems of appeal, such as Facebook’s “Supreme Court”, are available to very few users[14]. Finally, users report not knowing when they will be moderated, leading to confusion and anger[15].

The answer to the second question – should SMCs use unideal AI – is complicated.

One could answer: if MA should be respected, then SMCs are not at liberty to use CM AI unless it respects users’ MA. In short, if CM AI is not transparent, appealable, and prospectively clear, it should not be used.

This view is flawed because SMCs do not have only procedural obligations. They also have substantive obligations to their users and owners.

To their users, SMCs could be obligated to prevent the spread of toxic content, terrorist propaganda, or child exploitation. To do otherwise is to become complicit. Christopher Bennett explained this complicity, and its corollary obligations, as resulting from ‘normative control’: “control over whether the…act is done with permission or not”[16]. The wrong done by “a car owner who permits another to engage in reckless… [driving]…[is]…that the owner could and should have…withdrawn his consent”[17]. SMCs can – through moderation – determine whether an act is ‘permissible or impermissible’. “[W]herever [SMCs] [do] not mark some act as impermissible, it regards it as permissible…It can be complicit in allowing…acts to be permissible where it should have made them impermissible…complicity…comes about through a failure to [moderate]”[18]. SMCs therefore have an obligation to their users to moderate content (the scope of which is a matter for later investigation).

Furthermore, SMCs have shareholders/investors. “A corporate executive is an employee of the owners of the business. He has direct responsibility to his employers. That responsibility is to conduct the business in accordance with their desires, which generally will be to make…money”[19]. When an agent is managing money belonging to another, we traditionally accept that she is obliged to act with regard for the principal’s interests. Those same obligations bind all SMCs barring those which are owner-operated[20].

These substantive obligations answer whether SMCs should use imperfect AI, because using CM AI is crucial for discharging these duties. Even if AI is imperfect, SMCs are obliged to use it for CM; CM AI is needed to meet SMCs’ obligations to different stakeholders.

Given SMCs’ general size, CM requires AI. Yann LeCun – Facebook’s chief AI scientist – has stated that: “Without AI there would not be any possibility of…speech filtering, detecting harassment, child exploitation, or terrorist propaganda”[21]. To adequately meet their substantive obligations not to be complicit in certain harmful conduct, SMCs need to use AI.

A potential response is that it is “size…that makes automation seem necessary… [and]…size can be changed”[22]. Specifically, “if moderation is…overwhelming at…scale, it should be understood as a limiting factor on…growth”[23]. SMCs should accept making less profit to reduce the need for CM AI.

However, this neglects their obligations to owners. Even if SMCs could make moderation respect users’ MA by setting growth aside, they would breach their fiduciary obligations to owners. Furthermore, SMCs are under public pressure to moderate. Not moderating could result in harm to their brand, their ability to recruit talent, etc. Moderation is likely in owners’ interests.

Not using CM AI would result in SMCs failing their substantive obligations to either their users, their owners, or, more likely, both. Yet, one could say that if a ‘right’ to be recognized as a moral-agent exists, SMCs should not violate it. Procedural-rights are side-constraints which require not using imperfect AI. What this neglects is that X being a right does not mean it is of equal importance to right Y. If all obligations cannot be simultaneously met, then choices must be made about which obligations should go unfulfilled.

I would contend that procedural-rights in CM are some of SMCs’ least important obligations. Users who have posted content eligible for moderation are the reason a trade-off of rights is necessary. If they had not done wrong, then the SMC would not need to decide whether to respect their procedural-rights or the substantive rights of its users or owners. If a set amount of cost must be imposed, then it seems appropriate to impose that cost upon the individual most responsible – the user being moderated[24]. Since not using CM AI would result in SMCs failing their substantive obligations, and these obligations are more important, procedural obligations cease to really matter. Human moderation is not feasible, and imperfect CM AI is preferable to no moderation at all. SMCs should use AI because it discharges their more important duties. Nevertheless, insofar as SMCs can improve their CM AI to bring it closer to the ideal, they are obliged to do so. They should work towards the ideal, but not let it be the enemy of the good or the necessary.

Conclusion
Social Media Companies should use artificial intelligence to automate content moderation. The use of this technology is needed to meet SMCs’ substantive obligations to their users and owners. That means that the conditions under which it should be used are broad. Even if AI moderation does not respect users’ moral agency, it should still be used. Nevertheless, where possible, SMCs should work to bring their AI moderation more in line with an ideal of respect. This ideal AI content moderation would be transparent (capable of giving users the reasons which underpin the moderation decision), with an option to appeal to a human moderator (as a recognition of the two-sided nature of moral reasoning). Furthermore, the AI should operate on clear, prospective, and reasonably predictable rules, such that users are given freedom from domination and are spared from moderation arriving like a ‘bolt-from-the-blue’.
AI moderation is a necessity for SMCs.
They should use AI moderation to meet their substantive obligations whilst striving for the procedural ideal.


Notes:

[1] Perhaps obliged.

[2] (Strawson, 1962).

[3] (Hirsch, 1993).

[4] (Jacobs, 2019, p. 29) (Seelmann, 2014).

[5] (Hart, 2008, pp. 5-6).

[6] (Edwards & Simester, 2014) (von Hirsch A. , 1992).

[7] (Edwards & Simester, 2014, p. 64).

[8] (Suzor & Etal, 2019).

[9] (Gillespie, 2020, p. 3).

[10] (von Hirsch & Hörnle, 1995).

[11] (Fuller, 1969, p. 39) (Simmonds, 2007, p. 64).

[12] (Simmonds, 2007, p. 101).

[13] (Gorwa & et.al, 2020, p. 11) (Burrell, 2016).

[14] (Kelion, 2020).

[15] (West, 2018).

[16] (Bennett, 2019, pp. 78-81).

[17] Ibid (p. 81).

[18] Ibid.

[19] (Friedman, 1970).

[20] There is thus an interesting question about how these obligations could apply to Twitter post Elon’s takeover.

[21] (LeCunn, 2020).

[22] (Gillespie, 2020, p. 4).

[23] Ibid.

[24] (McMahan, 2005) (Øverland, 2014).

Works Cited
Bennett, C. (2019). How Should We Argue for a Censure Theory of Punishment? In A. du Bois-Pedain, & A. Bottoms, Penal Censure (pp. 67-86). Hart Publishing.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 1-16.
Cohen-Almagor, R. (2015). Confronting the internet’s dark side: moral and social responsibility on the free highway. Cambridge: Cambridge University Press.
Edwards, J., & Simester, A. (2014). Prevention with a Moral Voice. In A. Simester, A. Du Bois-Pedain, & U. Neumann, Liberal Criminal Theory (pp. 43-65). Hart Publishing.
Friedman, M. (1970, September 13). The Social Responsibility of Business Is to Increase Its Profits. New York Times.
Fuller, L. (1969). The Morality of Law. New Haven: Yale University Press.
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 1-5.
Gorwa, R., et al. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 1-15.
Günther, K. (2014). Crime and Punishment as Communication. In A. du Bois-Pedain, A. Simester, & U. Neumann, Liberal Criminal Theory (pp. 123-140). Hart Publishing.
Hart, H. (2008). Punishment and Responsibility: Essays in the Philosophy of Law. Oxford University Press.
Hirsch, A. v. (1993). Censure and Sanctions. Oxford University Press.
Jacobs, J. (2019). Censure, Sanction and the Moral Psychology of Resentment. In A. du Bois-Pedain, & A. Bottoms, Penal Censure (pp. 19-40). Hart Publishing.
Kelion, L. (2020, September 24). Facebook ‘Supreme Court’ to begin work before US Presidential vote. Retrieved from BBC: https://www.bbc.co.uk/news/technology-54278788
LeCun, Y. (2020, June). Deep learning, neural networks and the future of AI. (C. Anderson, Interviewer)
McMahan, J. (2005). Self-Defense and Culpability. Law and Philosophy, 751–774.
Øverland, G. (2014). Moral Obstacles: An Alternative to the Doctrine of Double Effect. Ethics, 481-506.
Seelmann, K. (2014). Does Punishment Honour the Offender? In A. Du Bois-Pedain, A. Simister, & U. Neumann, Liberal Criminal Theory (pp. 111-121). Hart Publishing.
Simmonds, N. (2007). Law as a Moral Idea. Oxford: Oxford University Press.
Strawson, P. (1962). Freedom and Resentment. Retrieved from UCL: https://www.ucl.ac.uk/~uctytho/dfwstrawson1.htm
Suzor, N. P., et al. (2019). What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation. International Journal of Communication, 1526-1543.
von Hirsch, A. (1992). Proportionality in the Philosophy of Punishment. Crime and Justice, 16, 55-98.
von Hirsch, A., & Hörnle, T. (1995). Positive Generalprävention und Tadel. Goltdammer’s Archiv für Strafrecht, 142.
West, S. M. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media and Society, 4366-4383.

Fearing ChatGPT, Google enlists founders Brin and Page in AI fight

An illustration of a chatbot exploding onto the scene, being very threatening. (Credit: Benj Edwards / Ars Technica)

ChatGPT has Google spooked. On Friday, The New York Times reported that Google founders Larry Page and Sergey Brin held several emergency meetings with company executives about OpenAI's new chatbot, which Google feels could threaten its $149 billion search business.

Created by OpenAI and launched in late November 2022, the large language model (LLM) known as ChatGPT stunned the world with its conversational ability to answer questions, generate text in many styles, aid with programming, and more.

Google is now scrambling to catch up, with CEO Sundar Pichai declaring a “code red” to spur new AI development. According to the Times, Google hopes to reveal more than 20 new products—and demonstrate a version of its search engine with chatbot features—at some point this year.

