Stay Clear of the Door

An AI door, according to a generative AI

Written by David Lyreskog


In what is quite possibly my last entry for the Practical Ethics blog, as I'm sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

–

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma.

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states "stay clear of the door" before the door slowly swings open.

–

The other day a colleague of mine shared a YouTube video of the presentation The AI Dilemma, by Tristan Harris and Aza Raskin. In it, they share with the audience their concerns about the rapid and somewhat wild development of artificial intelligence (AI) in the hands of a few tech giants. I highly recommend it. (The video, that is. Not the rapid and somewhat wild development of AI in the hands of a few tech giants).


Much like the thousands of signatories of the March open call to "pause giant AI experiments", and recently the "Godfather of AI" Geoffrey Hinton, Harris and Raskin warn us that we are on the brink of major (negative, dangerous) social disruption due to the power of new AI technologies.


Indeed, there has been a bit of a public buzz about "AI ethics" in recent months.


While it is good that there is a general awareness and a public discussion about AI – or any majorly disruptive phenomenon for that matter – there's a potential problem with the abstraction: AI is portrayed as this big, emerging technological behemoth which we cannot or will not control. But it has been almost three decades since humans were able to beat an AI at a game of chess. We have been using AI for many things, from medical diagnosis to climate prediction, with little to no concern about it besting us and/or stripping us of agency in these domains. In other domains, such as driving cars and military applications of drones, there has been significantly more controversy.

All this is just to say that AI ethics is not for hedgehogs – it's not "one big thing"[i] – and I believe that we need to actively avoid a narrative and a line of thinking which paints it as one. In examining the ethical dimensions of a multitude of AI inventions, then, we ought, at the very least, to take care to limit the scope of our inquiry to the domain in question.


So let us, for argument's sake, return to that door at the Uehiro Centre, and the voice cautioning visitors to stay clear. Now, as far as I'm aware, the voice and the door are not part of an AI system. I also believe that there is no person who is tasked with waiting around for visitors asking for access, warning them of the impending door swing, and then manually opening the door. I believe it is a quite simple contraption, with a voice recording programmed to be played as the door opens. But does it make a difference to me, or other visitors, which of these possibilities is true?


We can call these possibilities:

Condition one (C1): AI door, created by humans.

Condition two (C2): Human speaker & door operator.

Condition three (C3): Automatic door & speaker, programmed by humans.


In C3, it seems that the outcome of the visitor's action will always be the same after the buzzer is pushed or the key card is blipped: the voice will automatically say 'stay clear of the door', and the door will open. In C1 and C2, the same could be the case. But it could also be the case that the AI/human has been instructed to assess the risk for visitors on a case-by-case basis, and to only advise caution if there is imminent risk of collision or such (if that were the case, I am consistently standing too close to the door when visiting, but that is beside the point).


On the surface, I think there are some key differences between these conditions which could have an ethical or moral impact, where some differences are more interesting than others. In C1 and C2, the door opener makes a real-time assessment, rather than following a predetermined course of action in the way C3's door opener does. More importantly, C2 is presumed to make this assessment from a place of concern, in a way which is impossible in C1 and C3 because the latter two are not moral agents, and therefore cannot be concerned. They simply do not have the capacity. And our inquiry could perhaps end here.

But it seems it would be a mistake.


What if something was to go wrong? Say the door swings open, but no voice warns me to stay clear, and so the door whacks me in the face[ii]. In C2, it seems the human whose job it is to warn me of the imminent danger might have done something morally wrong, assuming they knew what to expect from opening the door without warning me, but nevertheless failed to warn me due to negligence[iii]. In C1 and C3, on the other hand, while we may be upset with the door opener(s), we don't believe that they did anything morally wrong – they just malfunctioned.


My colleague Alberto Giubilini highlighted the tensions in the morality of this landscape in what I thought was an excellent piece arguing that "It is not about AI, it is about humans": we cannot trust AI, because trust is a relationship between moral agents, and AI does not (yet) have the capacity for moral agency and responsibility. We can, however, rely on AI to behave in a certain way (whether we should is a separate issue).


Similarly, while we may believe that a human should show concern for their fellow person, we should not expect the same from AIs, because they cannot be concerned.


Yet, if the automatic doors continue to whack visitors in the face, we may start feeling that someone should be responsible for this – not only legally, but morally: someone has a moral duty to ensure these doors are safe to pass through, right?


In asking this, we expand the field of inquiry from the door opener to the programmer/constructor of the door opener, and perhaps to someone in charge of maintenance.


A couple of things pop to mind here.


First, when we find no immediate moral agent to hold responsible for a harmful event, we may expand the search field until we find one. That search seems to me to follow a systematic structure: if the door is automatic, we call the support line, and if the support fails to fix the problem but turns out to be an AI, we turn to whoever is in charge of the support, and so on, until we find a moral agent.


Second, it seems to me that, if the door keeps slamming into visitors' faces in C2, we will not only morally blame the door operator, but also whoever left them in charge of that door. So perhaps the systems-thinking does not only apply when there is a lack of moral agents, but also applies on a more general level when we are de facto dealing with complicated and/or complex systems of agents.


Third, let us conjure a condition four (C4) like so: the door is automatic, but in charge of maintenance support is an AI system that is usually very reliable, and in charge of the AI support system, in turn, is a (human) person.


If the person in charge of an AI support system that failed to provide adequate service to a faulty automatic door is to blame for anything, it is plausibly for not adequately maintaining the AI support system – but not for whacking people in the face with a door (because they didn't do that). Yet, perhaps there is some form of moral responsibility for the face-whacking to be found within the system as a whole. That is, the compound of door, AI, human, etc. has a moral duty to avoid face-whacking, regardless of any individual moral agent's ability to whack faces.


If this is correct, it seems to me that we again[iv] find that our traditional means of ascribing moral responsibility fails to capture key aspects of moral life: it is not the case that any agent is individually morally responsible for the face-whacking door, nor are there multiple agents who are individually or collectively responsible for the face-whacking door. Yet, there seems to be moral responsibility for face-whacking doors in the system. Where does it come from, and what is its nature and structure (if it has one)?


In this way, it seems that not only cognitive processes such as thinking and computing can be distributed throughout systems, but perhaps also moral capacities such as concern, accountability, and responsibility.

And in the end, I do not know to what extent it actually matters, at least in this specific domain. Because at the end of the day, I do not care much whether the door opener is human, an AI, or automatic.


I just need to know whether or not I need to stay clear of the door.

–

Notes & References.

[i] Berlin, I. (2013). The hedgehog and the fox: An essay on Tolstoy's view of history. Princeton University Press.

[ii] I would like to emphasize that this is a completely hypothetical case, and that I take it to be safe to enter the Uehiro Centre. The risk of face-whacking is, in my experience, minimal.

[iii] Let's give them the benefit of the doubt here, and assume it wasn't maleficence.

[iv] Together with Hazem Zohny, Julian Savulescu, and Ilina Singh, I have previously argued this to be the case in the domain of emerging technologies for collective thinking and decision-making, such as brain-to-brain interfaces. See the Open Access paper Merging Minds for more on this argument.

Perceptual diversity and philosophical belief

Reading up on Derek Parfit's theory of personal identity as part of my research on non-essential accounts of self in literature, philosophy and neuroscience, I was astounded to come across a New Yorker feature on the philosopher which describes his inability to visualise imagery as an anomaly:

"He has few memories of his past, and he almost never thinks about it, although his memory for other things is very good. He attributes this to his inability to form mental images. Although he recognizes familiar things when he sees them, he cannot call up images of them afterward in his head: he cannot visualize even so simple an image as a flag; he cannot, when he is away, recall his wife's face. (This condition is rare but not unheard of; it has been proposed that it is more common in people who think in abstractions.) He has always believed that this is why he never thinks about his childhood. He imagines other people, in quiet moments, playing their memories in their heads like wonderful old movies, whereas his few memories are stored as propositions, as sentences, with none of the vividness of a picture."

Surely, Parfit's experience would be representative of the norm, I thought – i.e. only being able to see things that are actually there, physically present and immediately visible in the external surroundings? I certainly never had this seeming super-power of creating images myself and had always assumed that my subjective experience corresponded to the average.

As I was soon to find out, however, the absence of a visual component to Parfit's imagination is part of a neurological condition which affects an estimated 2-5% of the population, including myself, namely aphantasia.

Recent studies into aphantasia (e.g. https://pubmed.ncbi.nlm.nih.gov/34296179/) connect it to a number of characteristics and personality traits, including: introversion and autistic spectrum features; difficulty with recognition, including face-recognition; impoverished autobiographical memory and less event detail in general memory; difficulty with atemporal and future-directed imagination, including difficulties with projecting oneself into mentally constructed scenes and the future; reduced mind-wandering tendency; elevated levels of IQ; and mathematical and scientific occupations.

In addition to these, I think aphantasia is likely connected to a certain philosophical belief or position, namely the non-essentialist view of the self that is found in both the reductionist account of personal identity in Western philosophy and the no-self doctrine in Eastern contemplative traditions. I offer a more extensive argument for this connection here: https://psyche.co/ideas/aphantasia-can-be-a-gift-to-philosophers-and-critics-like-me.

In Reasons and Persons, Derek Parfit formulates the view that personal identity is reducible to physical and psychological continuity of mental states, and that there is no 'further fact', diachronic entity, or essence that determines identity. The belief that persons are separate entities with continuously existing selves, he argues, is to a great degree an illusion. The New Yorker profile only fleetingly connects Parfit's philosophy to his aphantasia, but to me it seems an obviously relevant piece of explanation. Our philosophical views are based on our intuitions; our perceptual experience of the world guides our ideas about it.

As modern neuroscience is giving us deeper insight into the wide neuro- and perceptual diversity of people, it is also giving us new explanations of differences in people's experience of reality and, accordingly, their philosophical intuitions and beliefs. According to the predictive processing theory of brain function, the reality we experience as objective and independently existing is to a large degree created by our brain: a projection based on our brain's best guesses about external reality and, as such, a form of controlled hallucination. And as Anil Seth has recently pointed out, since we all have different brains, we will naturally make different guesses about the external reality we encounter and thus have different perceptual experiences of reality. "Just as it serves us well to occasionally question our social and political beliefs, it's useful to know that others can literally see things differently to us, and that these differences may evolve into different beliefs and behaviours."

The growing insight into perceptual diversity, then, gives rise to an increased possibility of biographically understanding and explaining philosophers' theories, and as such allows for a new form of 'neuro-biographical' reading of philosophy.

It seems plausible that the flipside of a reduced sense of the past and future is an increased connection to and absorption in the present, and a weaker identification with a continuous personal narrative and a coherent and substantial self. Parfit's diminished sense of continuity of identity and substantiality of his own self – which he himself explicitly links to his aphantasia – may well have led him towards or at least strengthened his anti-essentialist views of personhood.

Likewise, my own aphantasia could at least in part explain my intellectual preference for, and easy identification with, non-essential conceptions of self in both Western philosophy and Buddhism. The question is, then, whether the condition of aphantasia gives people like Parfit and me a shortcut to enlightenment and clearer philosophical insight into, and intuitive understanding of, the human condition and the nature of reality. Or does it obscure the truth by barring us from dimensions that are integral to the most common human experience and installing intuitions that do not correspond to the norm?

As neuroscience and neurotechnology continue to develop and give us better understanding of the variations and differences in the neurological constitution of brains, it will be interesting to see how far the awareness of perceptual differences and specificity can reach in the explanation of differences in philosophical intuitions and beliefs – and to what extent it can disqualify philosophical positions and theories. The notion of perceptual diversity offers a valuable route for philosophers to exercise self-criticism, scrutinise their theories and intuitions, and investigate the underlying perceptions and experiences. At the same time, it troubles some of the fundamental concepts on which the discipline of philosophy relies, paving the way for further relativisation and destabilisation of the already undermined notions of objective truth and rationality, and potentially removing us further from consensus.

How Brain-to-Brain Interfaces Will Make Things Difficult for Us

Written by David Lyreskog

Four images depicting 'Hivemind Brain-Computer Interfaces', as imagined by the AI art generator Midjourney


A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing, and hybrid and artificial intelligence, promises to transform how we think about collective and collaborative cognitive tasks. With implementations ranging from research to entertainment, and from therapeutics to military applications, as these tools continue to improve, we need to anticipate and monitor their impacts – how they may affect our society, but also how they may reshape our fundamental understanding of agency, responsibility, and other concepts which ground our moral landscapes.

In a new paper, I, together with Dr. Hazem Zohny, Prof. Julian Savulescu, and Prof. Ilina Singh, show how these new technologies may reshape fundamental components of widely accepted concepts pertaining to moral behaviour. The paper, titled 'Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds', was just published in Neuroethics, and is freely available as an Open Access article through the link above.

In the paper, we argue that the received views on how we (should) ascribe responsibility to individuals and collectives map poorly onto networks of these 'Collective Minds'. The intimately collective nature of direct multiple-brain interfaces, for instance, where human minds can collaborate on and complete complex tasks without necessarily being in the same room – or even on the same continent! – seems to suggest a collectivist moral framework to ascribe agency and responsibility. However, the technologies we are seeing in R&D do not necessitate the meeting of criteria we normally would turn to for ascription of such frameworks; they do not, for instance, seem to require that participants have shared goals, know what the goals of other participants are, or even know whether they are collaborating with another person or a computer.

In anticipating and assessing the ethical impacts of Collective Minds, we propose that we move beyond binary approaches to thinking about agency and responsibility (i.e. that they are either individual or collective), and that relevant frameworks for now focus on other aspects of significance to ethical analysis, such as (a) technical specifications of the Collective Mind, (b) the domain in which the technology is deployed, and (c) the reversibility of its physical and mental impacts. However, in the future, we will arguably need to find other ways to assess agency constellations and responsibility distribution, lest we abandon these concepts completely in this domain.

Video Interview: Introducing Dr Emma Dore Horgan

By: admin

An interview with OUC academic visitor and former Oxford Uehiro Centre DPhil student Dr Emma Dore Horgan on her research into the ethics of neuro-interventions for offenders.

โŒ