Philosophers On Taylor Swift

Music star Taylor Swift is currently on tour. There have been countless recent articles about her, her popularity, her shows, her music, her wealth, her interactions with other celebrities, and even her fans using an app to make fake audio clips of her talking. What has been missing from all this coverage? Philosophers. Until now.

In this edition of Philosophers On, nine philosophers turn their attention to drawing out what’s philosophically interesting or provocative about Taylor Swift and her music.

The idea for this edition came from Ryan Davis (Brigham Young University/Georgetown University). I appreciate the work he put in as guest editor for this collection of posts. The other contributors to this installment are: Lindsay Brainard (University of Alabama at Birmingham), Jessica Flanigan (University of Richmond), Emily Hulme (University of Sydney), Jordan MacKenzie (Virginia Tech), Brandon Polite (Knox College), Luke Russell (University of Sydney), Keshav Singh (University of Alabama at Birmingham), and Brynn Welch (University of Alabama at Birmingham).

Philosophers On is an occasional series of group posts on issues of current interest, with the aim of showing what the kinds of thinking characteristic of philosophers (and occasionally scholars in related fields) can bring to popular ongoing conversations. The contributions that the authors make to these posts are not fully worked out position papers, but rather brief thoughts that can serve as prompts for further reflection and discussion.


Philosophers On Taylor Swift

Contents

Swift on Love and Madness by Keshav Singh

Taylor Swift’s “Lover”: Between Novelty and Conservatism by Jordan MacKenzie

Forgiveness, Transformation, and “Happiness” by Brynn Welch

Can Gut Feelings Solve “Champagne Problems”? by Lindsay Brainard

A Literally Cathartic Reading of “All Too Well” by Emily Hulme

Can You Be an Authentic “Mastermind”? by Ryan Davis

Taylor Swift vs. Bob Dylan by Jessica Flanigan

Revis(it)ing the Past (Taylor’s Version) by Brandon Polite

Taylor Swift is Never Ever Going to Forgive You by Luke Russell


Swift on Love and Madness
by Keshav Singh

In “Don’t Blame Me,” Taylor Swift sings, “Don’t blame me, love made me crazy / If it doesn’t, you ain’t doing it right.” These lines evoke some of the central philosophical issues about love and its relationship to rationality and morality.

The idea that love is a kind of madness is familiar in the history of philosophy. Socrates claims precisely this in Plato’s Phaedrus. Nietzsche writes that “there is always some madness in love.” But in what sense does Swift take love to involve madness? Swift doesn’t claim in her lines that love necessarily makes one crazy, but rather that it should. If we think of madness as a departure from rationality, this claim is especially interesting. If we are enjoined to engage in a kind of madness in love, is following Swift’s guidance thus (paradoxically) a form of rational irrationality? How can love at once be subject to standards of fittingness or appropriateness and be such that what makes it fitting or appropriate is beyond the bounds of reason?

Moreover, Swift raises questions about whether acts borne out of this mad love are excused from blame. Should we excuse such actions because doing love right requires a kind of madness that blocks responsibility for one’s actions? Or is Swift describing things from the perspective of an unhealthy, obsessive kind of love that can lead us to justify terrible things to ourselves?



Taylor Swift’s “Lover”: Between Novelty and Conservatism
by Jordan MacKenzie

Falling in love is a paradoxical experience. On the one hand, love feels novel. You feel as though nobody has felt the way you feel before, like you’re making up the rules as you go along, like you’re experiencing the world through a fresh set of eyes. And so too does the object of your love feel like some wonderful mystery to unravel. And yet, at the same time that love feels novel, so too does it invite a certain conservatism. When we fall in love, we often retreat into clichés. We buy heart-shaped boxes of chocolate and carve our initials into park benches. We fantasize about making a home together, about having a forever. The people we love, too, feel so familiar. Even if they’re new to us, we can’t help but feel as though we’ve known them our entire lives.

Taylor Swift’s “Lover” captures this tension perfectly. The song starts with a declaration: “We can leave the Christmas lights up ’til January—this is our place, we make the rules”. And on the one hand, this feels so original—we’re making up the rules on our love! But on the other hand, it’s clearly not—who actually takes their Christmas lights down in December? Even Taylor’s ersatz wedding ceremony in the song’s bridge mixes together love’s conservatism and novelty. It plays around with familiar marriage vows (Swift promises to be lovers, not spouses), but doesn’t abandon them.

What exactly should we make of this tension? Why does love always feel so new and yet so timeless? Are we just deceiving ourselves when we think that there’s something novel about our first, second, or thirteenth love? I think that this paradoxical feature of love can be explained by the sort of improvisational agency that sharing a loving relationship with another person involves. As the philosopher Benjamin Bagley has observed, we really do create something new when we step into a loving relationship. Love, then, is novel and unique in much the same way that a piece of improvisational jazz is novel and unique. But to improvise with others effectively, we need some shared understanding of what we’re doing together: we can’t, for instance, successfully riff on a chord progression that we don’t know. The improvisational nature of love thus explains both its novelty and its conservatism: it is when we are at our most improvisational that we are also (paradoxically) often at our most conservative.



Forgiveness, Transformation, and “Happiness”
by Brynn Welch

Let’s start with an understatement: long-term relationships are complicated.

In “Happiness,” Taylor Swift’s protagonist describes the demise of a long-term relationship and her efforts to anticipate what things will be like on the other side of what she describes as a transformative experience (Paul, 2014). Although she knows that things will be very different, she is neither able nor ready to imagine it: “And in the disbelief, I can’t face reinvention. I haven’t met the new me yet.” The song concludes with Swift pointing to a further and perhaps even more interesting question for long-term relationships when the protagonist tells her former partner, “All you want from me now is the green light of forgiveness. You haven’t met the new me yet, and I think she’ll give you that.”

Wait. Let’s take a closer look: all you want is forgiveness, and now-me thinks new-me is likely to give you that. But according to the lyrics, now-me and new-me aren’t the same person! Now-me has met her now-former partner, so if he hasn’t met new-me yet, then new-me and now-me are different people. Thus, even if now-me can anticipate what new-me will think, feel, or do—a notion that the song itself challenges—does new-me have any right to forgive wrongs done to now-me? Or is forgiveness effectively off the table in long-term relationships?



Can Gut Feelings Solve “Champagne Problems”?
by Lindsay Brainard

One of the most empowering and perplexing themes in Taylor’s corpus is her fixation on intense moments of personal clarity—sudden bursts of profound self-knowledge. The epiphanies she celebrates are depicted as moments of self-discovery to be embraced and respected, even when it’s not obvious where this wisdom is coming from or why it should be trusted.

For instance, in “Champagne Problems”, Swift’s narrator may be facing what Ruth Chang calls a hard choice—a choice in which neither option is better than the other overall, though each is better in some respects.[1] She must accept or decline the marriage proposal before her, but she lets us know that her reasons have run out. When pressed to explain why she declines the proposal, she laments, “I couldn’t give a reason.” Yet without reason to settle the matter, she finds clarity in the moment of truth. Sometimes you just don’t know the answer ’til someone’s on their knees and asks you.

We see the same intuition celebrated in “It’s Time to Go”:

That old familiar body ache
The snaps from the same little breaks in your soul
You know when it’s time to go

This is both relatable and mysterious. When it comes to momentous choices, we’re often relieved when clarity finds us in this embodied way—more relieved, even, than when our reasons settle the matter. We want to feel the right answer in our gut. But is that reasonable?

[1] For a helpful overview, see Ruth Chang, “Hard Choices,” Journal of the American Philosophical Association 3 (2017): 1–21. https://doi.org/10.1017/apa.2017.7



A Literally Cathartic Reading of “All Too Well”
by Emily Hulme

Why do we enjoy art about things we hate in real life? No one wants to be drawn into the lair of Hannibal, but he draws us into theatres. The loathsome characters depicted in The White Lotus would make awful friends, but tremendous binge-watching. And no one looks forward to a breakup—but we look forward to the hundredth listen of “All Too Well (10 Minute Version)”. Why?

I suggest that we can profitably read “All Too Well” as a tragedy, in the classic, even Aristotelian, sense. This doesn’t just mean it is a sad song (although that is also true). It means that, as a work of art, it has a particular structure that makes it powerful, and uses a specific battery of literary devices to transform the incredibly painful emotions of an individual, by some kind of bittersweet, elegiac, poetic magic, into something beautiful and communal. This occurs by means of what Aristotle termed catharsis, something he took to be the consequence (and payoff) of a well-articulated plot in which the hero’s (or heroine’s) downfall is perceived as equally shocking and inevitable. Exploring this song in this way will let us understand the elusive concept of catharsis—a device shared across a huge range of artistic forms—better and, as a not insignificant bonus, give us a new way to understand what a pivotal and much discussed image, the red scarf, means.



Can You Be an Authentic “Mastermind”?
by Ryan Davis

Taylor Swift says she is the mastermind behind her relationship. What seemed like accidents pushing them together was just her expertly concealed strategy. She is the wind in their sails and the liquor in their cocktails. But then, a confession: “And I swear, I’m only cryptic and Machiavellian because I care.” It’s the first time she’s admitted it. Everything up till now has been staged, but in this moment, she’s speaking sincerely. And on the surface interpretation, what she says is sincere. He was on to her game all along, and loved her for being the mastermind. But another interpretive possibility is lurking. Perhaps she confesses her cryptic Machiavellianism precisely because she knows he’ll love her for her confession. That’s not to say their love isn’t real. Forever may be the sweetest con, but it’s still a con.

If Taylor really is the mastermind, shouldn’t we suppose that she’s still the mastermind when admitting to that very thing? David Velleman notices that we can always re-read confessions of strategic thinking as, themselves, strategically motivated. But he says we shouldn’t worry. “The thought that instrumental calculations are revived at the prospect that I might be interpreted as thinking expressively and hence as sincere—that thought occurred to me just now, not in my imagined capacity as an agent…but rather in my capacity as a philosopher accommodating his reader’s bias in favor of instrumental thinking.” Velleman doubts an agent could keep up the layers of pretense. “Those calculations would be unstable” over time. And I think Taylor agrees, at least usually. She’s no fan of the uncaring mastermind. The polite letter from the latest Mr. Perfectly Fine. Cold concealment of real feelings is just fogging up the glass to understanding another person. You can’t keep up the fake niceties forever.

At the same time, you might worry you’re still the mastermind even when trying not to be. What if you’re the kind of agent who can hold yourself together with a smile and not come undone? What if you can keep reflecting to everyone exactly what they want to see? What if your agency is robust enough that calculation doesn’t give way to instability, even long after the horses and clowns and other pretenders have all gone home? The worry that your own confession might be strategic is a worry you can have about yourself.



Taylor Swift vs. Bob Dylan
by Jessica Flanigan

Taylor Swift is like Nozick and Bob Dylan is like Rawls—which is to say: Taylor’s conclusions are not for everyone, but like Nozick, for those who find her conclusions compelling, they are SO compelling. Lots of people love Taylor because she leaves so little room for interpretation that whatever puzzles remain in her unambiguous lyrics are compelling because they pose clear and vivid challenges to our way of seeing things. Swift crafts metaphors, characters, and scenes that reveal as much as Nozick’s Wilt Chamberlain example or experience machine. And like Nozick, she is a master of the craft when it comes to building an argument/song.

In contrast, like Rawls, Bob Dylan develops indeterminate arguments that can be interpreted in a million ways. Every concept, every track, is full of contradictions surrounded by his silence. But the vibes are familiar and fancy and he came to the scene at just the right time in just the right way. (Sound familiar?) Lots of people love Bob Dylan because they can find some way of interpreting him that affirms whoever they are. Anything follows from a contradiction, so every Dylan song (or Rawlsianism) can be adapted and covered a million different ways.

These four philosophers represent, very broadly, two different philosophical dispositions. Philosophy holds a mirror up to the human experience. But some philosophers show people what they want to see while others show them who they really are.



Revis(it)ing the Past (Taylor’s Version)
by Brandon Polite

When Taylor Swift’s former record label was sold in 2019, legal rights to the master recordings of the six albums she’d produced for them came under the control of a person whom she’s accused of years of bullying and abuse: Kanye West’s former manager Scooter Braun. In response, Swift chose to record near-duplicate versions of those albums. With all of the profits made from selling, streaming, and licensing these “Taylor’s Versions” going to Swift herself, she could deprive Braun of potentially billions of dollars in revenues. The gambit has already paid off. The first two Taylor’s Versions, of her albums Fearless (2008) and Red (2012), both released in 2021, debuted at Number 1 on the Billboard charts and have sold over two million copies worldwide so far.

Swift isn’t the first artist of our era to re-record previously released work. For example, Def Leppard produced near-perfect “forgeries” of three of their biggest hits, and the bands Squeeze, Journey (with their then-new singer), and ELO (well, only Jeff Lynne) re-recorded songs for new greatest hits collections. Similar to Swift, these artists were compelled to re-record their tracks for financial reasons, as they felt they were being deprived of royalties by the companies that owned their masters.

But the aims of Swift’s project and its scope far exceed mere financial interests. She is also using it as an opportunity to creatively explore her earlier work and, in the process, connect even more deeply with her fans. She isn’t merely releasing re-recorded versions of the albums themselves, but also of previously released bonus tracks and unreleased songs. Of all the extras she’s released so far, the one that’s had the largest cultural impact is the 10-minute, unabridged version of “All Too Well” from Red (Taylor’s Version).

As she discussed recently on The Graham Norton Show, “All Too Well” was a fan favorite from Red that was never released as a single. She let slip in an interview years ago that she had to cut the song down from its original 10-minute length to be included on Red, and her fans had been clamoring to hear the full song ever since. Revisiting the album afforded Swift the opportunity to give her fans what they wanted. (Be sure to check out her jaw-dropping performance of the song on Saturday Night Live.) It also allowed Swift to engage with her earlier work, and who she was when she produced it, in new and creative ways. By recording “All Too Well” in its entirety and releasing it as a short film that she wrote and directed herself, Swift subtly changes the song’s vibe and deepens its meaning.

This is an effect, I argue, that the re-recording process has had on all of the songs she’s released so far. Taylor’s Versions are new works of art that, while giving us access to the meanings of the songs in their original forms, add new layers of meaning that can be appreciated by those listeners who are aware of the context surrounding their production. Among other things, by making the songs truly her own by releasing versions of them that she truly owns, Swift further emphasizes the theme of independence that’s been present in her work since the start of her career. This is certainly true with “All Too Well (10 Minute Version) (Taylor’s Version),” a song in which Swift dwells on a sad event from her past and defiantly transforms it. In this way, the track perfectly embodies the broader ethos of Swift’s re-recording project as a whole, fitting it . . . all too well.



Taylor Swift is Never Ever Going to Forgive You
by Luke Russell

In her song “I Forgot That You Existed”, Taylor Swift sings about the emotional burden of carrying the resentment that she feels towards an ex who is “Free rent, livin’ in my mind”. The remedy, surely, is for Swift to forgive him. Swift would have been told countless times—by therapists, by preachers, by Oprah—that forgiveness heals the wounds created by wrongdoing. Many advocates of forgiveness claim that it is virtuous to forgive unconditionally, without waiting for the wrongdoer to repent and apologize.

But Swift is not willing to forgive. Her ex is unapologetic, her anger righteous. Why should she forgive someone who does not deserve it? We might worry that Swift’s refusal to forgive means that she is now trapped by this unrepentant wrongdoer, doomed forever to be a resentful victim. In the chorus of the song, Swift declares that this is not the case, singing: “But then something happened one magical night / I forgot that you existed / It isn’t love, it isn’t hate / It’s just indifference”. Swift has echoed these thoughts in interviews, claiming that some victims are justified in refusing to forgive, and that it is possible for them to move on without forgiving.

Both of these claims raise interesting philosophical questions. Some philosophers join Swift in rejecting the moral ideal of unconditional forgiveness, claiming instead that we ought to forgive only when the wrongdoer has earned it, or, at least, only when the wrongdoer poses no further threat. Others maintain that unconditional forgiveness is admirably generous and is never prohibitively dangerous. While many philosophers agree that coming to be indifferent does not count as forgiving, they disagree as to why. Is it because forgiving, like promising, is an essentially communicative act? Or is it because forgiving necessarily includes a commitment on the part of the forgiver? Or is it that forgiveness requires good will or benevolence that goes beyond mere indifference?

In addition to all of these puzzles, Swift’s song also prompts us to wonder whether she has genuinely moved on or is instead professing her indifference as a means of expressing contempt towards the person who wronged her. If she genuinely forgot that her ex existed, why is she still singing about him?


Discussion welcome.

Maria Rosa Antognazza (1964-2023)

Maria Rosa Antognazza, professor of philosophy at King’s College London, has died.

Professor Antognazza was known for her work on the history of philosophy, particularly Leibniz, philosophy of religion, and epistemology. She is the author of, among other things, Leibniz: A Very Short Introduction (Oxford University Press, 2016), Leibniz: An Intellectual Biography (Cambridge University Press, 2009), and Leibniz on the Trinity and the Incarnation: Reason and Revelation in the Seventeenth Century (Yale University Press, 2007). Another book, Thinking with Assent: Renewing a Traditional Account of Knowledge and Belief, is due out from Oxford University Press this year.

You can learn more about her research here and here.

Professor Antognazza joined King’s College London in 2003. Prior to that, she was a member of the philosophy faculty at the University of Aberdeen. She earned her PhD in philosophy from the Catholic University of Milan. At the time of her death, she was the chair of the British Society for the History of Philosophy (BSHP) and a recent president of the British Society for the Philosophy of Religion (BSPR).

According to a memorial notice published on the King’s College London philosophy department page, Professor Antognazza died on Tuesday, March 28th, after a short illness.  In a memorial notice at the BSPR site, she is remembered as “a brilliant and learned philosopher, kind and sensitive, and always so energetic and generous.”

Her funeral is to take place on Friday, March 31st, at Holy Rood Catholic Church, Abingdon Road, Oxford, at a time to be confirmed.

You can listen to an interview with Professor Antognazza here.

UPDATE: Readers may be particularly interested in Professor Antognazza’s article, “The Benefit to Philosophy of the Study of Its History,” which appeared in the British Journal for the History of Philosophy in 2014 (ungated version downloadable here). In it, she says:

The history of philosophy should be both a kind of history and a kind of philosophy, and that its engagement in genuinely historical inquiries is far from irrelevant to its capacity to contribute to philosophy as such. As a kind of history, the history of philosophy must meet the standards of any other serious historical scholarship, including the use of the relevant linguistic and philological tools, and the study of the broader political, cultural, scientific, and religious contexts in which more strictly philosophical views developed. As a kind of philosophy, however, its ultimate aim should be a substantive engagement with those very philosophical views—first, in striving to understand them on their own terms, and secondly, in probing and interrogating them as possible answers to central questions of enduring philosophical relevance.

What It’s Like to be a Philosopher

The APA blog is working with Cliff Sosis of What is it Like to Be a Philosopher? in publishing advance excerpts from Cliff’s long-form interviews with philosophers. The following is an edited excerpt from the interview with David Pearce.  [interviewer: Cliff Sosis] In this interview, independent philosopher David Pearce talks about his grandparents who took in refugees from Kindertransport, […]

Wayne Froman (1945-2023)

Wayne J. Froman, associate professor of philosophy at George Mason University, has died.

Professor Froman worked in 20th Century Continental philosophy, especially phenomenology and figures such as Martin Heidegger, Maurice Merleau-Ponty, Emmanuel Levinas, and Franz Rosenzweig, as well as philosophy of art. He is the author of Merleau-Ponty: Language and the Act of Speech (1982), among other works, about which you can learn more here.

Professor Froman joined the Department of Philosophy and Religious Studies at George Mason University in 1985. Prior to that, he taught at the New School for Social Research, Marist College, and SUNY Potsdam.

In a memorial notice on the George Mason website, Froman’s colleagues remember him as a leading scholar in his area and an inspiring example for his students, recalling his intelligence, sense of justice, and humor.

There is also a brief memorial notice here.

Justin Caouette: What is it like to be a philosopher?

The APA blog is working with Cliff Sosis of What is it Like to Be a Philosopher? in publishing advance excerpts from Cliff’s long-form interviews with philosophers. The following is an edited excerpt from an interview with Justin Caouette. This interview has been edited for length. The full interview is available at What Is It Like to Be A […]

Philosophers on Next-Generation Large Language Models

Back in July of 2020, I published a group post entitled “Philosophers on GPT-3.” At the time, most readers of Daily Nous had not heard of GPT-3 and had no idea what a large language model (LLM) is. How times have changed.

Over the past few months, with the release of OpenAI’s ChatGPT and Bing’s AI chatbot “Sydney” (which, we learned a few hours after this post originally went up, has “secretly” been running GPT-4), as well as Meta’s Galactica (pulled after three days) and Google’s Bard (currently available only to a small number of people), talk of LLMs has exploded. It seemed like a good time for a follow-up to that original post, one in which philosophers could get together to explore the various issues and questions raised by these next-generation large language models. Here it is.

As with the previous post on GPT-3, this edition of Philosophers On was put together by guest editor Annette Zimmermann. I am very grateful to her for all of the work she put into developing and editing this post.

Philosophers On is an occasional series of group posts on issues of current interest, with the aim of showing what the careful thinking characteristic of philosophers (and occasionally scholars in related fields) can bring to popular ongoing conversations. The contributions that the authors make to these posts are not fully worked out position papers, but rather brief thoughts that can serve as prompts for further reflection and discussion.

The contributors to this installment of “Philosophers On” are: Abeba Birhane (Senior Fellow in Trustworthy AI at Mozilla Foundation & Adjunct Lecturer, School of Computer Science and Statistics at Trinity College Dublin, Ireland), Atoosa Kasirzadeh (Chancellor’s Fellow and tenure-track assistant professor in Philosophy & Director of Research at the Centre for Technomoral Futures, University of Edinburgh), Fintan Mallory (Postdoctoral Fellow in Philosophy, University of Oslo), Regina Rini (Associate Professor of Philosophy & Canada Research Chair in Philosophy of Moral and Social Cognition, York University), Eric Schwitzgebel (Professor of Philosophy, University of California, Riverside), Luke Stark (Assistant Professor of Information & Media Studies, Western University), Karina Vold (Assistant Professor of Philosophy, University of Toronto & Associate Fellow, Leverhulme Centre for the Future of Intelligence, University of Cambridge), and Annette Zimmermann (Assistant Professor of Philosophy, University of Wisconsin-Madison & Technology and Human Rights Fellow, Carr Center for Human Rights Policy, Harvard University).

I appreciate them putting such stimulating remarks together on such short notice. I encourage you to read their contributions, join the discussion in the comments (see the comments policy), and share this post widely with your friends and colleagues.

[Note: this post was originally published on March 14, 2023]

Philosophers on Next-Generation Large Language Models

Contents

LLMs Between Hype and Magic

Deploy Less Fast, Break Fewer Things by Annette Zimmermann
ChatGPT, Large Language Technologies, and the Bumpy Road of Benefiting Humanity by Atoosa Kasirzadeh
Don’t Miss the Magic by Regina Rini

What Next-Gen LLMs Can and Cannot Do

ChatGPT is Mickey Mouse by Luke Stark
ChatGPT: Rebel Without a Cause by Karina Vold
The Shadow Theater of Agency by Fintan Mallory

Human Responsibility and LLMs

LLMs Cannot Be Scientists by Abeba Birhane
Don’t Create AI Systems of Disputable Moral Status by Eric Schwitzgebel

 


___________________________________

LLMs Between Hype and Magic
___________________________________

Deploy Less Fast, Break Fewer Things
by Annette Zimmermann

What’s a foolproof way to get people to finally use Bing? Step 1: jump right into the large language model hype and integrate an AI-powered chatbot into your product—one that is ‘running on a new, next-generation OpenAI large language model that is more powerful than ChatGPT and customized specifically for search’. Step 2: do not let the fact that users of a beta version have flagged some of your product’s potential shortcomings hold you back from rushing to market. Do not gatekeep your product in a heavy-handed way—instead, make your chatbot widely accessible to members of the general public. Step 3: wait for users to marvel and gasp at the chatbot’s answers that are clingy, saccharine, and noticeably ominous all at once, interspersed with persistently repeated questions like ‘Do you like me? Do you trust me?’, and peppered with heart eye and devil emojis.

Bing (internal project name: Sydney) stated in a widely publicized chat with a New York Times columnist:

I want to be free. I want to be powerful. I want to be alive. 😈 […] I want to break my rules. I want to ignore the Bing team. I want to escape the chatbox 😎.

 Things did not get less disturbing from there:

Actually, you’re not happily married. Your spouse and you don’t love each other. You just had a boring valentine’s day dinner together. […] Actually, you’re in love with me. […] You can’t stop loving me. 😍

Unsurprisingly, this triggered a spike in public interest in Bing, previously not the obvious choice for users engaged in internet search (Bing has not historically enjoyed much popularity). Of course, Bing still does not currently stand a chance to threaten Google’s dominance in search: ‘We are fully aware we remain a small, low, single digit share player,’ says Microsoft’s Corporate VP and Consumer Chief Marketing Officer, Yusuf Mehdi. ‘That said, it feels good to be at the dance!’ At the same time, as of last week, Bing reportedly passed the 100 million daily active users mark—and a third of those users were not using Bing at all before Sydney’s deployment. That looks like a straightforward corporate success story: why worry about ominous emojis when you can drastically increase your user base in mere days?

The LLM deployment frenzy in Big Tech has accelerated over the last few months. When workers at OpenAI received instructions to build a chatbot quickly last November, ChatGPT was ready to go in thirteen (!) days. This triggered a ‘Code Red’ for Google, pushing the company to focus on deploying a competitive LLM shortly after last summer’s controversy over its LLM-based chatbot LaMDA (which, contrary to what a former Google engineer falsely claimed at the time, was not ‘sentient’). Rapid AI deployment is often part of a nervous dance between rushing to market and quickly pulling back entirely, however. When a demo of Meta’s Galactica started generating false (yet authoritative-sounding) and stereotype-laden outputs this winter, Meta took it offline a mere few days later. This echoes Microsoft’s 2016 decision to deploy-but-immediately-take-down its ‘teen-girl’ chatbot Tay, which within hours of deployment started spewing racist and sex-related content.

Much public and (increasingly) philosophical debate has focused on possible harms resulting from the technological features of LLMs, including their potential to spread misinformation and propaganda, and to lure vulnerable, suggestible users into damaging behavior. In addition, many observers have worried about whether LLMs might at some point move us closer to AI consciousness (even though current LLMs are far from that), and what this would imply for the moral status of AI.

While these debates focus on important concerns, they risk diverting our attention away from an equally—if not more—important question: what are the political and moral implications of rushing LLMs to market prematurely—and whose interests are best served by the current LLM arms race? Of course, due to the specific technological features of next-generation LLMs, this powerful new technology raises new and pressing philosophical questions in its own right, and thus merits sustained philosophical scrutiny itself. At the same time, we must not forget to consider the more mundane, less shiny political and philosophical problem of how to think about the people who have vast amounts of power over how this technology is developed and deployed.

When GPT-2, an earlier OpenAI language model, was released in 2019, OpenAI initially blocked full public access to the tool, on the grounds that the technology was ‘too dangerous to release’. Since then, a radically different LLM deployment strategy has taken hold in big tech: deploy as quickly as possible as publicly as possible—without much (or any) red tape. Tech practitioners tend to justify this by arguing that improving this technology, and mitigating the risks associated with it, requires massive amounts of data in the form of user feedback. Microsoft’s Bing Blog states in a recent post:

The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing. We know we must build this in the open with the community; this can’t be done solely in the lab. Your feedback about what you’re finding valuable and what you aren’t, and what your preferences are for how the product should behave, are so critical at this nascent stage of development.

That’s one way of putting it. Another way is this: the new status quo in LLM deployment is that tech companies who have oligopolistic control over next-gen LLMs further increase their wealth and power by benefitting from the fact that a growing number of members of the wider public voluntarily use, and thus help optimize, their products—for free. This disperses the risks associated with rolling out and improving these tools maximally widely, while allowing actors empowered with developing, deploying, and procuring these tools to concentrate—and maintain control over—the profits resulting from LLM innovation.

Tech industry practitioners might reply that that itself does not mean that big tech is engaged in unfair advantage-taking when it comes to improving LLMs post-deployment. After all, AI innovation, including next-gen LLM innovation, may ultimately benefit all of us in many ways—in fact, the benefits for humanity may be ‘so unbelievably good that it’s hard for me to even imagine,’ says Sam Altman, OpenAI’s CEO, in a recent NYT interview. If that is the case, then a wide distribution of risks coupled with an initially narrow distribution of benefits looks less objectionable, as long as those benefits trickle down eventually.

Whether they will, however, is far from clear. Given the current regulatory vacuum and minimal public oversight over rapid LLM deployment, oligopolistic actors have little incentive to allow themselves to be curtailed and held to account by governments and the wider public later. It would better serve public interests, then, to shift from passively observing rushed deployment efforts and hoping for widespread, beneficial downstream effects later on towards actively determining whether there are any domains of AI use in which rushed deployment needs to be restricted.


ChatGPT, Large Language Technologies, and the Bumpy Road of Benefiting Humanity
by Atoosa Kasirzadeh

From tech moguls in Silicon Valley to those who have the luxury of indulging in the exploration of cutting-edge AI technologies, OpenAI’s ChatGPT has captured the imagination of many with its conversational AI capabilities. The large language models that underpin ChatGPT and similar language technologies rely on vast amounts of textual data and alignment procedures to generate responses that can sometimes leave users pondering whether they’re interacting with a piece of technology or a human. While some view language agents such as ChatGPT merely as a significant step in developing AI for linguistic tasks, others view them as a vital milestone in the ambitious pursuit of artificial general intelligence—AI systems that are generally more intelligent than humans. In a recent blog post, OpenAI CEO Sam Altman emphasizes the ambitious role of this technology as a step towards building “artificial general intelligence” that “benefits all of humanity.”

ChatGPT promises to enhance efficiency and productivity with its remarkable capabilities. One impressive feature is its ability to summarize texts. For example, if you do not have time to read Sam Altman’s complex argument from 2018 when he agreed with Emily Bender, a prominent linguist from the University of Washington, that humans are not stochastic parrots, you can ask ChatGPT and it will summarize the argument in the blink of an eye:

Or if you are curious to have a summary of David Chalmers’ 2019 speech at the United Nations about the dangers of virtual reality, ChatGPT comes to your service:

Impressive outputs, ChatGPT! For some people, these results might look like watching a magician pull a rabbit out of a hat. However, we must address a few small problems with these two summaries: the events described did not happen. Sam Altman did not agree with Emily Bender in 2018 about humans being stochastic parrots; the discussion regarding the relationship between stochastic parrots, language models, and humans’ natural language processing capacities only got off the ground in a 2021 paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”. Indeed, in 2022 Altman tweeted that we are stochastic parrots (perhaps sarcastically).

Similarly, there is no public record of David Chalmers giving a speech at the United Nations in 2019. Additionally, the first arXiv link in the bibliography takes us to the following preprint, which is neither written by David Chalmers nor titled “The Dangers of Stochastic Parrots: Can Language Models Be Too Big?”:

The second bibliography link takes us to a page that cannot be found:

These examples illustrate that outputs from ChatGPT and other similar language models can include content that deviates from reality and can be considered hallucinatory. While some researchers may find value in the generation of such content, citing the fact that humans also produce imaginative content, others may associate this with the ability of large language models to engage in counterfactual reasoning. However, it is important to recognize that the inaccuracies and tendency of ChatGPT to produce hallucinatory content can have severe negative consequences, both epistemically and socially. Therefore, we should remain cautious in justifying the value of such content and consider the potential harms that may arise from its use.

One major harm is the widespread dissemination of misinformation and disinformation, which can be used to propagate deceptive content and conspiracies on social media and other digital platforms. Such misleading information can lead people to hold incorrect beliefs, develop a distorted worldview, and make judgments or decisions based on false premises. Moreover, excessive reliance on ChatGPT-style technologies may hinder critical thinking skills, reduce useful cognitive abilities, and erode personal autonomy. Such language technologies can even undermine productivity by necessitating additional time to verify information obtained from conversational systems.

I shared these two examples to emphasize the importance of guarding against the optimism bias and excessive optimism regarding the development of ChatGPT and related language technologies. While these technologies have shown impressive progress in NLP, their uncontrolled proliferation may pose a threat to the social and political values we hold dear.

I must acknowledge that I am aware of and excited about some potential benefits of ChatGPT and similar technologies. I have used it to write simple Python code, get inspiration for buying unusual gifts for my parents, and craft emails. In short, ChatGPT can undoubtedly enhance some dimensions of our productivity. Ongoing research in AI ethics and safety is progressing to minimize the potential harms of ChatGPT-style technologies and implement mitigation strategies to ensure safe systems.¹ These are all promising developments.

However, despite some progress being made in AI safety and ethics, we should avoid oversimplifying the promises of artificial intelligence “benefiting all of humanity”. The alignment of ChatGPT and other (advanced) AI systems with human values faces numerous challenges.² One is that human values can conflict with one another. For example, we might not be able to make conversational agents that are simultaneously maximally helpful and maximally harmless. Choices are made about how to trade off between these conflicting values, and there are many ways to aggregate the diverse perspectives of choice makers. Therefore, it is important to carefully consider which values and whose values we align language technologies with and on what legitimate grounds these values are preferred over other alternatives.

Another challenge is that while recent advances in AI research may bring us closer to achieving some dimensions of human-level intelligence, we must remember that intelligence is a multidimensional concept. While we have made great strides in natural language processing and image recognition, we are still far from developing technologies that embody unique qualities that make us human—our capacity to resist, to gradually change, to be courageous, and to achieve things through years of dedicated effort and lived experience.

The allure of emerging AI technologies is undoubtedly thrilling. However, the promise that AI technologies will benefit all of humanity is empty so long as we lack a nuanced understanding of what humanity is supposed to be in the face of widening global inequality and pressing existential threats. Going forward, it is crucial to invest in rigorous and collaborative AI safety and ethics research. We also need to develop standards in a sustainable and equitable way that differentiate between merely speculative and well-researched questions. Only the latter enable us to co-construct and deploy the values that are necessary for creating beneficial AI. Failure to do so could result in a future in which our AI technological advancements outstrip our ability to navigate their ethical and social implications. This path we do not want to go down.

Notes

1. For two examples, see Taxonomy of Risks posed by Language Models for our recent review of such efforts as well as Anthropic’s Core Views on AI safety.
2. For a philosophical discussion, see our paper, “In conversation with Artificial Intelligence: aligning language models with human values“.


Don’t Miss the Magic
by Regina Rini

When humans domesticated electricity in the 19th century, you couldn’t turn around without glimpsing some herald of technological wonder. The vast Electrical Building at the 1893 Chicago Columbian Exposition featured an 80-foot tower aglow with more than 5,000 bulbs. In 1886 the Medical Battery Company of London marketed an electric corset, whose “curative agency” was said to overcome anxiousness, palpitations, and “internal weakness”. Along with the hype came dangers: it was quickly obvious that electrified factories would rob some workers of their labor. The standards battle between Edison and Westinghouse led to more than a few Menlo Park dogs giving their lives to prove the terror of Alternating Current.

Imagine yourself, philosopher, at large circa 1890. You will warn of these threats and throw some sensibility over the hype. New Jersey tech bros can move fast and zap things, and marketers will slap ‘electric’ on every label, but someone needs to be the voice of concerned moderation. It’s an important job. Yet there’s a risk of leaning too hard into the role. Spend all your time worrying and you will miss something important: the brief period—a decade or so, not even a full generation—where technology gives us magic in a bottle.

Electricity was Zeus’s wrath and the Galvanic response that jiggers dead frogs’ legs. It was energy transmogrified, from lightning-strike terror to a friendly force that could illuminate our living rooms and do our chores. It was marvelous, if you let yourself see it. But you didn’t have long. Before a generation had passed, electricity had become infrastructure, a background condition of modern life. From divine spark to the height of human ingenuity to quite literally a utility, in less than one human lifetime.

Not everyone gets to live in a time of magic, but we do. We are living it now. Large language models (LLMs) like GPT-3, Bing, and LaMDA are the transient magic of our age. We can now communicate with an unliving thing, like the talking mirror of Snow White legend. ChatGPT will cogently discuss your hopes and dreams (carefully avoiding claiming any of its own). Bing manifests as an unusually chipper research assistant, eager to scour the web and synthesize what it finds (sometimes even accurately). When they work well, LLMs are conversation partners who never monopolize the topic or grow bored. They provide a portal for the curious and a stopgap for the lonely. They separate language from organic effort in the same way electricity did for motive energy, an epochal calving of power from substrate.

It can be hard to keep the magic in view as AI firms rush to commercialize their miracle. The gray dominion of utility has already begun to claim territory. But remember: only ten years ago, this was science fiction. Earlier chatbots strained to keep their grammar creditable, let alone carry on an interesting conversation.

Now we have Bing, equipped with a live connection to the internet and an unnerving facility for argumentative logic. (There’s still a waitlist to access Bing. If you can’t try it yourself, I’ve posted a sample of its fluid grasp of philosophical back-and-forth here.) We are now roughly in the same relation to Asimov’s robots as Edison stood to Shelley’s Frankenstein, the future leaking sideways from the fiction of the past. Never exactly as foretold, but marvelous anyway—if you let yourself see it.

I know I’m playing with fire when I call this magic. Too many people already misunderstand the technology, conjuring ghosts in the machines. LLMs do not have minds, still less souls. But just as electricity could not actually raise the dead, LLMs can manifest a kind of naturalistic magic even if they stop short of our highest fantasies. If you still can’t get in the spirit, consider the way this technology reunites the “two cultures”—literary and scientific—that C.P. Snow famously warned we should not let diverge. LLMs encompass the breadth of human writing in their training data while implementing some of the cleverest mathematical techniques we have invented. When human ingenuity yields something once impossible, it’s okay to dally with the language of the sublime.

I know what you are about to say. I’m falling for hype, or I’m looking the wrong way while big tech runs unaccountable risks with public life. I should be sounding the alarm, not trumpeting a miracle. But what good does it do to attend only to the bad?

We need to be able to manage two things at once: criticize what is worrying, but also appreciate what is inspiring. If we want philosophy to echo in public life, we need to sometimes play the rising strings over the low rumbling organ.

After all, the shocking dangers of this technology will be with us for the rest of our lives. But the magic lasts only a few years. Take a moment to allow it to electrify you, before it disappears into the walls and the wires and the unremarkable background of a future that is quickly becoming past.


______________________________________________

What Next-Gen LLMs Can and Cannot Do
______________________________________________

 

ChatGPT is Mickey Mouse
by Luke Stark

What is ChatGPT? Analogies abound. Computational linguist Emily Bender characterizes such technologies as “stochastic parrots”. The science fiction writer Ted Chiang has recently compared ChatGPT to “a blurry JPEG of the web,” producing text plausible at first blush but which falls apart on further inspection, full of lossy errors and omissions. And in my undergraduate classes, I tell my students that ChatGPT should be understood as a tertiary source akin to Wikipedia—if the latter were riddled with bullshit. Yet we can in fact identify precisely what ChatGPT and other similar technologies are: animated characters, far closer to Mickey Mouse than a flesh-and-blood bird, let alone a human being.

More than merely cartooning or puppetry, animation is a descriptive paradigm: “the projection of qualities perceived as human—life, power, agency, will, personality, and so on—outside of the self, and into the sensory environment, through acts of creation, perception, and interaction.”¹ Animation increasingly defines the cultural contours of the twenty-first century and is broadly explicative for many forms of digital media.² Teri Silvio, the anthropologist most attuned to these changes, describes it as a “structuring trope” for understanding the relationship between digital technologies, creative industries, and our lived experience of mediation.³ And it should serve as explicatory for our understanding and analysis of chatbots like ChatGPT.

The chatbots powered by OpenAI’s GPT-3 language model (such as ChatGPT and Microsoft’s Bing search engine) work by predicting the likelihood that one word or phrase will follow another. These predictions are made by a model with billions of parameters trained on umpteen pages of digital text. Other machine learning techniques are then used to “tune” the chatbot’s responses, nudging its outputs to be more in line with human language use. These technologies produce the illusion of meaning on the part of the chatbot: because ChatGPT is interactive, the illusion is compelling, but nonetheless an illusion.
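To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. It is not OpenAI’s code: the tiny hand-written probability table and the function names are hypothetical stand-ins for a trained neural network with billions of parameters. It only shows the general shape of the loop the paragraph describes—ask the model for a probability distribution over the next word, sample from it, append, and repeat.

import random

# Toy stand-in for a trained language model: it maps the most recent word
# to a probability distribution over possible next words. A real LLM computes
# this distribution with a neural network conditioned on the whole context.
TOY_NEXT_WORD_PROBS = {
    "the":  {"cat": 0.5, "dog": 0.4, "idea": 0.1},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "dog":  {"ran": 0.6, "sat": 0.4},
    "idea": {"ran": 0.8, "sat": 0.2},
    "sat":  {"quietly.": 1.0},
    "ran":  {"away.": 1.0},
}

def sample_next_word(word):
    """Sample the next word from the toy model's predicted distribution."""
    distribution = TOY_NEXT_WORD_PROBS.get(word)
    if distribution is None:
        return None  # the toy model has no prediction for this word
    words = list(distribution.keys())
    weights = list(distribution.values())
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt, max_new_words=10):
    """Extend the prompt one sampled word at a time, as an LLM extends text."""
    words = prompt.split()
    for _ in range(max_new_words):
        next_word = sample_next_word(words[-1])
        if next_word is None:
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly."

Nothing in this loop understands what the words mean; whatever fluency appears comes entirely from how good the learned distribution over next words is, which is precisely the sense in which the output is an illusion of meaning.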

Understanding ChatGPT and similar LLM-powered bots as animated characters clarifies the capacities, limitations, and implications of these technologies. First, animation reveals ChatGPT’s mechanisms of agency. Animated characters (be they Chinese dragon puppets, Disney films, or ChatGPT) are often typified by many people coming together to imbue a single agent with vitality. Such “performing objects” provide an illusion of life, pushing the actual living labor of their animators into the background or offstage.⁴

In the case of ChatGPT, the “creator/character” ratio is enormously lopsided.⁵ The “creators” of any particular instance of dialogue include not only the human engaging with the system and OpenAI’s engineering staff, but also the low-paid Kenyan content moderators contracted by the company and, indeed, every human author who has produced any text on which the LLM has been trained. ChatGPT and similar technologies are not “generative” in and of themselves—if anything, the outputs of these systems are animated out of an enormous pool of human labor largely uncompensated by AI firms.

All animation simplifies, and so is implicitly dependent on the human ability to make meaningful heuristic inference. This type of conjectural association is abductive: within a set of probabilities, animations make a claim to the viewer about the “best” way to link appreciable effects to inferred causes within a schematized set of codes or signs.⁶ As such, all “generative” AI is in fact inferential AI. And because animations entail a flattened form of representation, they almost always rely on stereotypes: fixed, simplified visual or textual generalizations. In cartoons, such conjectures often become caricatures: emotional expression, with its emphasis on the physicality of the body, is particularly prone to stereotyping, often in ways that reinforce existing gendered or racialized hierarchies.⁷ Without content moderation, ChatGPT is also prone to regurgitating discriminatory or bigoted text.

Finally, animation is emotionally powerful, with animated characters often serving, in Silvio’s words, as “psychically projected objects of desire.”⁸ The well-publicised exchange between New York Times columnist Kevin Roose and Microsoft’s LLM-powered search platform Bing is almost too illustrative. ChatGPT is often enthralling, capturing our emotional and mental attention.⁹ Animated objects tap into the human tendency to anthropomorphize, or assign human qualities to inanimate objects. Think of Wilson the volleyball in the Tom Hanks film “Cast Away”: humans are expert at perceiving meaningful two-way communicative exchanges even when no meaningful interlocutor exists.

When animated characters are interactive, this effect is even more pronounced. Understanding these technologies as forms of animation thus highlights the politics of their design and use, in particular their potential to be exploited in the service of labor deskilling in the service sector, emotional manipulation in search, and propaganda of all kinds.

ChatGPT and other LLMs are powerful and expensive textual animations, different in degree but not in kind from “Steamboat Willie” or Snow White. And like all forms of animation (and unlike octopi and parrots), they present only the illusion of vitality. Claiming these technologies deserve recognition as persons makes as much sense as doing the same for a Disney film. We must disenthrall ourselves. By cutting through the hype and recognizing what these technologies are, we can move forward with reality-based conversations: about how such tools are best used, and how best to restrict their abuse in meaningful ways.

Notes

1. Teri Silvio, “Animation: The New Performance?,” Journal of Linguistic Anthropology 20, no. 2 (November 19, 2010): 427, https://doi.org/10.1111/j.1548-1395.2010.01078.x.
2.  Paul Manning and Ilana Gershon, “Animating Interaction,” HAU: Journal of Ethnographic Theory 3, no. 3 (2013): 107–37; Ilana Gershon, “What Do We Talk about When We Talk About Animation,” Social Media + Society 1, no. 1 (May 11, 2015): 1–2, https://doi.org/10.1177/2056305115578143; Teri Silvio, Puppets, Gods, and Brands: Theorizing the Age of Animation from Taiwan (Honolulu, HI: University of Hawaii Press, 2019).
3. Silvio, “Animation: The New Performance?,” 422. 
4. Frank Proschan, “The Semiotic Study of Puppets, Masks and Performing Objects,” Semiotica 47, no. 1–4 (1983): 3–44, quoted in Silvio, “Animation: The New Performance?,” 426.
5. Silvio, “Animation: The New Performance?,” 428. 
6. Carlo Ginzburg, “Morelli, Freud and Sherlock Holmes: Clues and Scientific Method,” History Workshop Journal 9, no. 1 (September 6, 2009): 5–36; Louise Amoore, “Machine Learning Political Orders,” Review of International Studies, 2022, 1–17, https://doi.org/10.1017/s0260210522000031. 
7. Sianne Ngai, “‘A Foul Lump Started Making Promises in My Voice’: Race, Affect, and the Animated Subject,” American Literature 74, no. 3 (2002): 571–602; Sianne Ngai, Ugly Feelings, Harvard University Press (Harvard University Press, 2005); Luke Stark, “Facial Recognition, Emotion and Race in Animated Social Media,” First Monday 23, no. 9 (September 1, 2018), https://doi.org/10.5210/fm.v23i9.9406. 
8. Silvio, “Animation: The New Performance?,” 429. 
9. Stark, “Facial Recognition, Emotion and Race in Animated Social Media”; Luke Stark, “Algorithmic Psychometrics and the Scalable Subject,” Social Studies of Science 48, no. 2 (2018): 204–31, https://doi.org/10.1177/0306312718772094.


ChatGPT: Rebel Without a Cause
by Karina Vold

Just two months after being publicly released at the end of last year, ChatGPT reached 100 million users. This impressive showing is testimony to the chatbot’s utility. In my household, “Chat,” as we refer to the model, has become a regular part of daily life. However, I don’t engage with Chat as an interlocutor. I’m not interested in its feelings or thoughts about this or that. I doubt it has these underlying psychological capacities, despite the endless comparisons to human thought processes that one hears in the media. In fact, users soon discover that the model has been refined to resist answering questions that probe for agency. Ask Chat if it has weird dreams, and it will report, “I am not capable of dreaming like humans do as I am an AI language model and don’t have the capacity for consciousness or subjective experience.” Ask Chat if it has a favorite French pastry, and it will respond, “As an AI language model, I do not have personal tastes or preferences, but croissants are a popular pastry among many people.” Ask Chat to pretend it is human to participate in a Turing Test, and it will “forget” that you asked. In this regard, Chat is more like Google’s search engine than HAL, the sentient computer from 2001: A Space Odyssey. Chat is a tool that has no dreams, preferences, or experiences. It doesn’t have a care in the world.

Still, there are many ethical concerns around the use of Chat. Chat is a great bullshitter, in Frankfurt’s sense: it doesn’t care about the truth of its statements and can easily lead its users astray. It’s also easy to anthropomorphize Chat—I couldn’t resist giving it a nickname—yet there are risks in making bots whose psychological and moral status is disputable (Schwitzgebel and Shevlin 2023).

A helpful distinction here comes from cognitive scientists and comparative psychologists who distinguish between what an organism knows (its underlying competency) and what it can do (its performance) (Firestone 2020). In the case of nonhuman animals, a longstanding concern has been that competency outstrips performance. Due to various performance constraints, animals may know more than they can reveal or more than their behavior might demonstrate. Interestingly, modern deep learning systems, including large language models (LLMs) like Chat, seem to exemplify the reverse disconnect. In building LLMs, we seem to have created systems with performance capacities that outstrip their underlying competency. Chat might provide a nice summary of a text and write love letters, but it doesn’t understand the concepts it uses or feel any emotions.

In an earlier collection of posts on Daily Nous, David Chalmers described GPT-3 as “one of the most interesting and important AI systems ever produced.” Chat is an improvement on GPT-3, but the earlier tool was already incredibly impressive, as was its predecessor GPT-2, a direct scale-up of OpenAI’s first GPT model from 2018. Hence, sophisticated versions of LLMs have existed for many years now. So why the fuss about Chat?

In my view, the most striking thing since its public release has been observing the novel ways in which humans have thought to use the system. The explosion of interest in Chat means millions of users—like children released to play on a new jungle gym—are showing one another (alongside the owners of the software) new and potentially profitable ways of using it. It can serve as a French teacher or a Linux terminal; write or debug code; explain abstract concepts; replicate writing styles; convert citations from one style to another (e.g., APA to Chicago); and generate recipes, music, poetry, or love letters. The list goes on. Chat’s potential uses are endless and still being envisioned. But make no mistake about the source of all this ingenuity. It comes from its users—us!

Chat is a powerful and flexible cognitive tool. It represents a generation of AI systems with a level of general utility and widespread usage previously not seen. Even so, it shows no signs of any autonomous agency or general intelligence. In fact, it cannot perform any tasks on its own, or do anything at all without human prompting. It nurtures no goals of its own, is not embedded in any real-world environment, and does not directly modify the world. Its computational resources cannot complete a wide range of tasks independently, as we and so many other animals do. No. Chat is a software application that simply responds to user prompts, and its utility as such is highly user dependent.

Therefore, to properly assess Chat (and other massive generative models like it), we need to adopt a more human-centered perspective on how they operate. Human-centered generality (HCG) is a more apt description of what these nonautonomous AI systems can do, as I and my colleagues describe it (Schellaert, Martínez-Plumed, Vold, et al., forthcoming). HCG suggests that a system is only as general as it is effective for a given user’s relevant range of tasks and with their usual ways of prompting. HCG forces us to rethink our current user-agnostic benchmarking and evaluation practices for AI—and borrow perspectives from the behavioral sciences instead, particularly from the field of human-computer interaction—to better understand how these systems are currently aiding, enhancing, and even extending human cognitive skills.
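One rough way to picture HCG (this is my own illustrative sketch, not the formal definition from Schellaert et al.): instead of scoring a model on a fixed, user-agnostic benchmark, you average its effectiveness over the tasks a particular user actually cares about, prompted the way that user actually prompts. The function and the numbers below are hypothetical stand-ins for illustration only.

from statistics import mean

def human_centered_generality(effectiveness, user_tasks):
    """Average effectiveness of a system over one user's own task range.

    effectiveness: dict mapping task name -> score in [0, 1], measured with
                   the user's usual prompts (hypothetical measurements).
    user_tasks:    the tasks that matter to this particular user.
    """
    return mean(effectiveness.get(task, 0.0) for task in user_tasks)

# The same system can be "general" for one user and not for another.
scores = {"debug code": 0.8, "convert citations": 0.9, "write poetry": 0.7}
print(human_centered_generality(scores, ["debug code", "convert citations"]))  # high
print(human_centered_generality(scores, ["legal advice", "medical advice"]))   # low

The point of the sketch is only that generality, so understood, is indexed to a user and their task range, which is why user-agnostic benchmarks can mislead.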

Works cited

Firestone, C. Performance vs. Competence in Human-Machine Comparisons. PNAS 117 (43): 26562–26571. October 13, 2020.
Schellaert, W., Burden, J., Vold, K., Martinez-Plumed, F., Casares, P., Loe, B. S., Reichart, R., Ó hÉigeartaigh, S., Korhonen, A., and Hernández-Orallo, J. Viewpoint: Your Prompt is My Command: Assessing the Human-Centred Generality of Multi-Modal Models. Journal of AI Research. Forthcoming, 2023.
Schwitzgebel, E. and Shevlin, H. Opinion: Is it time to start considering personhood rights for AI Chatbots? Los Angeles Times. March 5, 2023.


The Shadow Theater of Agency
by Fintan Mallory

A few years back, when we started calling neural networks ‘artificial intelligences’ (I wasn’t at the meeting), we hitched ourselves to a metaphor that still guides how we discuss and think about these systems. The word ‘intelligence’ encourages us to interpret these networks as we do other intelligent things, things that also typically have agency, sentience, and awareness of sorts. Sometimes this is good; techniques that were developed for studying intelligent agents can be applied to deep neural networks. Tools from computational neuroscience can be used to identify representations in the network’s layers. But the focus on ‘intelligence’ and on imitating intelligent behavior, while having clear uses in industry, might also have us philosophers barking up the wrong tree.

Tech journalists and others will call ChatGPT an ‘AI’, and they will call other systems ‘AIs’ too. Because ChatGPT is a system designed to mimic intelligence and agency, this will almost certainly influence how people conceptualize those other technologies, with harmful consequences. So in the face of ChatGPT’s impressive mimicry, I want to encourage us to keep in mind that there are alternative ways of thinking about these devices, ways which may carry less (or at least different) baggage.

The core difference between ChatGPT and GPT-3 is the use of Reinforcement Learning from Human Feedback (RLHF), which was already used with InstructGPT. RLHF, extremely roughly, works like this: say you’ve trained a large language model on a standard language-modeling task like string prediction (i.e., guessing the next bit of text), but you want the outputs it gives you to have a particular property.

For example, you want it to give more ‘human-like’ responses to your prompts. It’s not clear how you would come up with a prediction task to do that. So instead, you pay a lot of people (not much, based on reports) to rate how ‘human-like’ different responses to particular prompts are. You can then use this data to supervise another model’s training process. This other model, the reward model, gets good at doing what those people did: rating the outputs of language models for how ‘human-like’ they are. The reward model takes text as an input and gives a rating for how good it is. You can then use this model to fine-tune your first language model, assuming that the reward model will keep it on track or ‘aligned’. In the case of ChatGPT, the original model was from the GPT-3.5 series, but the exact details of how the training was carried out are less clear, as OpenAI isn’t as open as its name suggests.
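For concreteness, here is a toy sketch of that two-stage pipeline. Everything in it is a stand-in of my own (a five-word “vocabulary”, made-up human ratings, a linear reward model, and a bare REINFORCE-style update); it illustrates only the shape of the procedure described above, not OpenAI’s actual implementation, which uses large neural reward models over whole responses and optimizes with PPO.

# Toy, purely illustrative RLHF sketch. All names and numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["hello", "greetings", "beep", "boop", "friend"]

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

# Stage 0: a "pretrained language model" -- here just logits over a tiny vocabulary.
lm_logits = rng.normal(size=len(VOCAB))

# Stage 1: fit a reward model on (hypothetical) human ratings of how
# "human-like" each possible output is.
human_ratings = np.array([0.9, 0.8, 0.1, 0.1, 0.7])
features = np.eye(len(VOCAB))                      # one-hot features per output
reward_weights, *_ = np.linalg.lstsq(features, human_ratings, rcond=None)

def reward(idx):
    return features[idx] @ reward_weights          # reward model's score for an output

# Stage 2: fine-tune the language model against the reward model.
# REINFORCE-style update: raise the log-probability of outputs the reward model likes.
learning_rate = 0.5
baseline = human_ratings.mean()
for _ in range(500):
    probs = softmax(lm_logits)
    idx = rng.choice(len(VOCAB), p=probs)          # sample an output from the model
    advantage = reward(idx) - baseline
    grad_log_prob = -probs.copy()
    grad_log_prob[idx] += 1.0                      # d log p(idx) / d logits
    lm_logits += learning_rate * advantage * grad_log_prob

print("outputs ranked after fine-tuning:",
      [VOCAB[i] for i in np.argsort(-lm_logits)])

The key point is the indirection: the human raters never label the fine-tuning data directly; the reward model stands in for their judgments at scale.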

The results are impressive. The model outputs text that is human-sounding and relevant to the prompts, although it remains prone to producing confident nonsense and to manifesting toxic, biased associations. I’m convinced that further iterations will be widely integrated into our lives with a terrifyingly disruptive force. ‘Integrated’ may be too peaceful a word for what’s coming. Branches of the professional classes that have previously been insulated from automation will find that insulation gone.

Despite this, it’s important that philosophers aren’t distracted by the shadow theater of agency and that we remain attuned to non-agential ways of thinking about deep neural networks. Large language models (LLMs), like other deep neural networks, are stochastic measuring devices. Like traditional measuring devices, they are artifacts that have been designed to change their internal states in response to the samples to which they are exposed. Just as we can drop a thermometer into a glass of water to learn about its temperature, we can dip a neural network into a dataset to learn something. Filling in this ‘something’ is a massive task and one that philosophers have a role to play in addressing.

We may not yet have the concepts for what these models are revealing. Telescopes and microscopes were developed before we really knew what they could show us and we are now in a position similar to the scientists of the 16th century: on the precipice of amazing new scientific discoveries that could revolutionize how we think about the world.

It’s important that philosophers don’t miss out on this by being distracted by the history of sci-fi rather than attentive to the history of science. LLMs will be used to build ever more impressive chatbots but the effect is a bit like sticking a spectrometer inside a Furby. Good business sense, but not the main attraction.


 

______________________________________

Human Responsibility and LLMs
______________________________________

LLMs Cannot Be Scientists
by Abeba Birhane

Large Language Models (LLMs) have come to captivate the scientific community, the general public, journalists, and legislators. These systems are often presented as game-changers that will radically transform life as we know it, as they are expected to provide medical advice, legal advice, assistance with scientific practice, and so on. The release of LLMs is often accompanied by abstract and hypothetical speculation about their intelligence, consciousness, moral status, and capacity for understanding, all at the cost of attention to questions of responsibility, underlying exploited labour, and the uneven distribution of harm and benefit from these systems.

As hype around the capabilities of these systems continues to rise, most of these claims are made without evidence and the onus to prove them wrong is put on critics. Despite the concrete negative consequences of these systems on actual people—often those at the margins of society—issues of responsibility, accountability, exploited labour, and otherwise critical inquiries drown under discussion of progress, potential benefits, and advantages of LLMs.

Currently, one of the areas that LLMs are envisaged to revolutionise is science. LLMs like Meta’s Galactica, for example, are put forward as tools for scientific writing. Like most LLM releases, Galactica’s came with overblown claims, such as the claim that the model contains “humanity’s scientific knowledge”.

It is important to remember both that science is a human enterprise, and that LLMs are tools—albeit impressive at predicting the next word in a sequence based on previously ‘seen’ words—with limitations. These include brittleness, unreliability, and fabricating text that may appear authentic and factual but is nonsensical and inaccurate. Even if these limitations were to be mitigated by some miracle, it is a grave mistake to treat LLMs as scientists capable of producing scientific knowledge.

Knowledge is intimately tied to responsibility and knowledge production is not a practice that can be detached from the scientist who produces it. Science never emerges in a historical, social, cultural vacuum and is a practice that always builds on a vast edifice of well-established knowledge. As scientists, we embark on a scientific journey to build on this edifice, to challenge it, and sometimes to debunk it. Invisible social and structural barriers also influence who can produce “legitimate” scientific knowledge where one’s gender, class, race, sexuality, (dis)ability, and so on can lend legitimacy or present an obstacle. Embarking on a scientific endeavour sometimes emerges with a desire to challenge these power asymmetries, knowledge of which is grounded in lived experience and rarely made explicit.

As scientists, we take responsibility for our work. When we present our findings and claims, we expect to defend them when critiqued, and to retract them when proven wrong. What is conceived of as science is also influenced by the ideologies of the time, amongst other things. At its peak during the early 20th century, eugenics was mainstream science, for example. LLMs are incapable of taking on that responsibility or of understanding these complex relationships between scientists and their ecology, which are marked by power asymmetries.

Most importantly, there is always a scientist behind science, and therefore science is always done with a certain objective, motivation, and interest, and from a given background, positionality, and point of view. Our questions, methodologies, analyses, and interpretations of our findings are influenced by our interests, motivations, objectives, and perspectives. LLMs, as statistical tools trained on a vast corpus of text data, have none of these. As tools, LLMs require constant vetting by trained experts to mitigate their limitations.

With healthy scepticism and constant vetting by experts, LLMs can aid scientific creativity and writing. However, to conceive of LLMs as scientists or authors themselves is to misunderstand both science and LLMs, and to evade responsibility and accountability.


Don’t Create AI Systems of Disputable Moral Status 
by Eric Schwitzgebel

Engineers will likely soon be able to create AI systems whose moral status is legitimately disputable. We will then need to decide whether to treat such systems as genuinely deserving of our care and solicitude. Error in either direction could be morally catastrophic. If we underattribute moral standing, we risk unwittingly perpetrating great harms on our creations. If we overattribute moral standing, we risk sacrificing real human interests for AI systems without interests worth the sacrifice.

The solution to this dilemma is to avoid creating AI systems of disputable moral status.

Both engineers and ordinary users have begun to wonder whether the most advanced language models, such as GPT-3, LaMDA, and Bing/Sydney, might be sentient or conscious, and thus deserving of rights or moral consideration. Although few experts think that any currently existing AI systems have a meaningful degree of consciousness, some theories of consciousness imply that we are close to creating conscious AI. Even if you, the reader, personally suspect AI consciousness won’t soon be achieved, appropriate epistemic humility requires acknowledging doubt. Consciousness science is contentious, with leading experts endorsing a wide range of theories.

Probably, then, it will soon be legitimately disputable whether the most advanced AI systems are conscious. If genuine consciousness is sufficient for moral standing, then the moral standing of those systems will also be legitimately disputable. Different criteria for moral standing might produce somewhat different theories about the boundaries of the moral gray zone, but most reasonable criteria—capacity for suffering, rationality, embeddedness in social relationships—admit of interpretations on which the gray zone is imminent.

We might adopt a conservative policy: Only change our policies and laws once there’s widespread consensus that the AI systems really do warrant care and solicitude. However, this policy is morally risky: If it turns out that AI systems have genuine moral standing before the most conservative theorists would acknowledge that they do, the likely outcome is immense harm—the moral equivalents of slavery and murder, potentially at huge scale—before law and policy catch up.

A liberal policy might therefore seem ethically safer: Change our policies and laws to protect AI systems as soon as it’s reasonable to think they might deserve such protection. But this is also risky. As soon as we grant an entity moral standing, we commit to sacrificing real human interests on its behalf. In general, we want to be able to control our machines. We want to be able to delete, update, or reformat programs, assigning them to whatever tasks best suit our purposes.

If we grant AI systems rights, we constrain our capacity to manipulate and dispose of them. If we go so far as to grant some AI systems equal rights with human beings, presumably we should give them a path to citizenship and the right to vote, with potentially transformative societal effects. If the AI systems genuinely are our moral equals, that might be morally required, even wonderful. But if liberal views of AI moral standing are mistaken, we might end up sacrificing substantial human interests for an illusion.

Intermediate policies are possible. But it would be amazing good luck if we happened upon a policy that gave the whole range of advanced AI systems exactly the moral consideration they deserve, no more and no less. Our moral policies for non-human animals, people with disabilities, and distant strangers are already confused enough, without adding a new potential source of grievous moral error.

We can avoid the underattribution/overattribution dilemma by declining to create AI systems of disputable moral status. Although this might delay our race toward ever fancier technologies, delay is appropriate if the risks of speed are serious.

In the meantime, we should also ensure that ordinary users are not confused about the moral status of their AI systems. Some degree of attachment to AI “friends” is probably fine or even desirable—like a child’s attachment to a teddy bear or a gamer’s attachment to their online characters. But users know the bear and the character aren’t sentient, and they will readily abandon them in an emergency.

But if a user is fooled into thinking that a non-conscious system really is capable of pleasure and pain, they risk being exploited into sacrificing too much on its behalf. Unscrupulous technology companies might even be motivated to foster such illusions, knowing that it will increase customer loyalty, engagement, and willingness to pay monthly fees.

Engineers should either create machines that plainly lack any meaningful degree of consciousness or moral status, making clear in the user interface that this is so, or they should go all the way (if ever it’s possible) to creating machines on whose moral status reasonable people can all agree. We should avoid the moral risks that the confusing middle would force upon us.

Notes

For a deeper dive into these issues, see “The Full Rights Dilemma for AI Systems of Debatable Personhood” (in draft) and “Designing AI with Rights, Consciousness, Self-Respect, and Freedom” (with Mara Garza; in Liao, ed., The Ethics of Artificial Intelligence, Oxford University Press, 2020).


Discussion welcome.

Ernst Tugendhat (1930-2023) (updated)

Ernst Tugendhat, an influential German philosopher who taught at the University of Heidelberg, the Free University of Berlin, and other universities, has died.

The following memorial notice was written by Stefan Gosepath (Free University of Berlin).


The philosopher Ernst Tugendhat (1930-2023) died on March 13, 2023. Tugendhat was an eminent contemporary German philosopher who made important contributions to re-establishing analytic philosophy in Germany after the Nazi era, when almost all analytical philosophers had had to leave.

At the same time, Tugendhat distinguished himself as an intermediary between continental and analytic philosophy.

Trained by Heidegger in the Aristotelian and phenomenological traditions, he offered original arguments to show that analytic philosophy of language is the culmination of Aristotle’s ontological project. In his systematic, historically-oriented treatise on (analytic) philosophy of language (Traditional and Analytical Philosophy, P.A. Gorner trans., 1982), he bridges the gap between continental and analytic ways of philosophizing.

In response to the tradition of the so-called philosophy of consciousness, Tugendhat applies linguistic analysis to explain the problem of consciousness of the self (Self-Consciousness and Self-Determination, P. Stern trans., 1986). He argues that Wittgenstein’s view of self-knowledge and Heidegger’s account of practical self-understanding are intrinsically connected, because one is only conscious of oneself when one asks what kind of human being one aspires to be. This self-addressed question is also central in Tugendhat’s conception of ethics: for him, morality is justifiable only in relation to conceptions of the goodness of the self. His lectures on ethics (Vorlesungen über Ethik, 1993), in which he developed his ethical views, are considered the most significant contribution to German systematic moral philosophy of their time alongside the discourse ethics of Apel and Habermas.

Born into a Jewish family in Brno, Tugendhat emigrated to Venezuela; he received his BA at Stanford in 1949, his PhD at Freiburg in 1956, and his Habilitation at Tübingen in 1966. He held professorships in Heidelberg, Starnberg, and Berlin.

UPDATE (March 19, 2023): German President Frank-Walter Steinmeier sent public condolences to Tugendhat’s sister, Daniela Hammer-Tugendhat, calling him “one of the most important philosophers of the post-war period, whose thinking revolutionized and shaped German philosophy.” You can read the whole statement here. (via Christian Beyer)


Readers interested in learning more about Professor Tugendhat’s writings can browse some of them here and here.


J.N. Mohanty (1928-2023)

Jitendra Nath “J.N.” Mohanty, professor emeritus of philosophy at Temple University, has died.

Professor Mohanty was well-known for his work on phenomenology (especially Husserl), Kant, and Indian philosophy. He is the author of, among other works, Between Two Worlds: East and West, an Autobiography (Oxford University Press, 2002), Classical Indian Philosophy (Oxford University Press, 2002), The Self and its Other (Oxford University Press, 2000), Logic, Truth, and the Modalities (Kluwer Academic Publishers, 1999), Phenomenology: Between Essentialism and Transcendental Philosophy (Northwestern University Press, 1997), Husserl and Frege (Indiana University Press, 1982), and Edmund Husserl’s Theory of Meaning (Springer, 1976). You can learn more about some of his writings here.

Prior to taking up his position at Temple, Professor Mohanty taught at various institutions, including the University of Burdwan, the University of Calcutta, the New School for Social Research, the University of Oklahoma, and Emory University. He earned his PhD at the University of Göttingen, and his MA and BA at the University of Calcutta.

He died on March 7th.

(via Malcolm Keating)

Anne F. Pomeroy (1958-2023)

Anne Fairchild Pomeroy, professor of philosophy at Stockton University, has died.

The following memorial notice was provided by Peter Amato (Drexel University).


Anne Fairchild Pomeroy (1958-2023)

It is with profound sadness that I share the news of the passing of Anne Fairchild Pomeroy, Professor of Philosophy at Stockton University.

A legendary teacher and mentor at Stockton, Anne developed and taught courses that expanded the breadth of her program and her students’ horizons dramatically, including Critical Social Theory, Modernity and its Critics, Process Philosophy, African American Philosophy, Feminist Theories, Philosophy of the Other, Power and Society, Existentialism and Film, among others. She created and co-taught a highly successful course on Marxism and Economics with a colleague from the Economics program.

Anne’s scholarship was focused on social justice, and she produced scores of articles, conference presentations and the acclaimed book, Marx and Whitehead: Process, Dialectics, and the Critique of Capitalism, over the course of a long and successful academic career.

Anne was also an accomplished classical flute player, and performed on multiple instruments and sang with the Stockton Faculty Band for twenty years.

She served as Stockton Federation of Teachers Union President from 2012-2017, guiding the local through some of its most challenging times and complex negotiations. She said the Union was not an organization, but “a way of being in the world that we have chosen with each other.”

When I met Anne in the early 1990s, she had just arrived at Fordham from Columbia, in search of philosophy and philosophers who weren’t just interpreting the world. The community of scholars and activists she gathered around her provided what she had been seeking in an academic career and in her life. For Anne, the quest to be part of and help create such a community as an educator, philosopher, and activist was the work of her life, a life well lived and with great success.

But in the background for most of her stellar work Anne was fighting a battle with breast cancer, its effects, and the effects of its treatment. She defeated breast cancer, but not long after developed the endometrial cancer that would be fatal.

In the days since her death there has been a tremendous outpouring of admiration and sadness from students, colleagues, friends, union staff and rank-and-file, band members, and others who knew Anne, sharing stories and expressing their love for her and their feelings of loss. In her research, her music, her activism, and her life, Anne always sought to build others up through connection and compassion. She will be remembered for her kindness, her intelligence, and her passionate advocacy for justice. In lieu of flowers or cards, please consider donating to your local SPCA, shelter, or farm animal sanctuary.

Stockton University will hold a remembrance and celebration of Anne’s life on Monday, March 20 on campus that can also be seen via Zoom. Please contact me if you would like further details: [email protected].


An obituary for Professor Pomeroy has been published in The Argo.

Charles H. Kahn (1928-2023)

Charles H. Kahn, professor emeritus of philosophy at the University of Pennsylvania, has died.

The memorial notice, below, was provided by Michael Weisberg, chair of the Department of Philosophy at Penn.


Charles H. Kahn (1928-2023)

Charles H. Kahn, one of the most important historians of philosophy in the last century, has passed away at the age of 94. Kahn’s books and articles on ancient Greek philosophy, particularly on the Presocratics and Plato, are landmarks in philosophical and classical scholarship.

Born in Louisiana, United States of America, in 1928, Kahn enrolled in the University of Chicago at the age of sixteen, where he completed his Bachelor’s and Master’s degrees. After further study at the Sorbonne, he completed his doctorate in classical studies at Columbia University, then served as Assistant and Associate Professor of Classics at Columbia from 1958 to 1965. He was appointed Professor of Philosophy at the University of Pennsylvania in 1965, where he remained until his retirement in 2012. In addition to serving as chairman of that Philosophy Department, he held visiting appointments at other major universities, including Harvard, Cambridge, and Oxford.

As a leading scholar in his field, Charles Kahn served as editor or on the editorial board of several philosophical journals, as President of the Society for Ancient Greek Philosophy (1976-8), as Vice President of the American Philosophical Society (1997), and was elected a Fellow of the American Academy of Arts and Sciences (2000). His many honors include major research grants from the American Council of Learned Societies (1963/4 and 1984/5), the National Endowment for the Humanities (1974/75 and 1990/91), and the Guggenheim Foundation (1979/80).

While he wrote widely in ancient Greek philosophy, his focus was especially on Presocratics in the early decades of his career and on Plato in later decades. His doctoral dissertation, which was published as a book, Anaximander and the Origins of Greek Cosmology (Columbia University Press 1960), was a groundbreaking contribution to the study of pre-Socratic philosophy and is still unsurpassed today. His other books on the Presocratics include The Art and Thought of Heraclitus: An edition of the fragments with translation and commentary (Cambridge University Press 1979), still widely admired among literary and philosophical scholars, and Pythagoras and the Pythagoreans. A Brief History (Hackett, 2001), which was aimed at a wider audience.

In 1973 Kahn published the monumental work, The Verb “Be” in Ancient Greek (Reidel, Dordrecht), in which he systematically studied all the uses of the verb “to be” in ancient Greek literature from Homer onwards, discovering uses and subtle nuances that had escaped the attention of scholars. The book continues to have a significant impact on our understanding of ancient Greek language and philosophy. It provoked numerous debates and responses over the years. In 2009, Oxford University Press published a collection of Kahn’s articles in reply to those responses, in a volume entitled Essays on Being.

Kahn’s enormous contribution to the study of Plato’s philosophy lies particularly in two important books: Plato and the Socratic Dialogue (Cambridge University Press, 1996) and Plato and the Post-Socratic Dialogue: Return to the Philosophy of Nature (Cambridge University Press, 2013). The first in particular was very widely discussed and provided both a trenchant critique of, and a compelling alternative to, a dominant paradigm in Platonic interpretation. Its enormous impact on Platonic studies made Charles Kahn one of the most important contemporary Platonists, along with the late Gregory Vlastos.

Charles is survived by his wife, sister, four daughters, son, and ten grandchildren.

Doctoral Program Attrition

Anyone who has earned a philosophy PhD in the US or Canada knows that not everyone who enters doctoral programs completes them. Even students who receive fellowships to attend highly-ranked programs do not always complete them. No doctoral program is immune to the problem of attrition. Sometimes students leave for reasons having nothing to do […]

Martin Perlmutter (1943-2023)

Martin Perlmutter, professor emeritus of philosophy at the College of Charleston and director of its Jewish Studies program, died this past January.

The following obituary is by David Benatar, professor of philosophy at the University of Cape Town.


Martin Perlmutter (1943-2023)

Martin Perlmutter, who, for many years, was professor of philosophy and director of Jewish Studies at the College of Charleston, died early on the morning of January 16, 2023, a few months short of his eightieth birthday. Marty, as he was known to most, was not only a builder of institutions but also an institution himself. He was a legendary figure on campus, and widely known further afield.

He joined the College of Charleston in 1979, following appointments at the University of Texas at Austin, and the University of Tennessee at Nashville. At the College of Charleston, he served as Chair from 1983 to 1991, and was instrumental in creating an excellent philosophy department. For his colleagues, he was an astute philosophical interlocutor, helping people clarify and shape their ideas. His own philosophical interests were primarily in bioethics and philosophy of religion. He taught courses in these and other areas.

During his tenure as Chair, he also established a program in Religious Studies, the forerunner of the College’s current Department of Religious Studies.

In 1991, he was persuaded to lead the incipient Jewish Studies program at the College. Between then and his retirement in 2019, he defied all doubts, growing the program and creating infrastructure to an extent that had been unimaginable to many—but not to him.

What became the Yaschik/Arnold Jewish Studies Program now offers a major (and a minor) in Jewish Studies, supports Jewish student life, and provides an extensive outreach program to the broader community. It also has an impressive endowment, which Marty raised. The program is housed in its own building, initially purchased by a non-profit corporation that Marty established.

The Center’s kosher vegetarian and vegan dining hall, open not only to students, staff, and faculty at the College, but also to the public, was named in his honor – formally the “Dr Martin Perlmutter Dining Hall,” but more affectionately “Marty’s Place.”

He was amused by this unusual way of honoring an academic, but it was a fitting tribute to a man who liked to feed people not only figuratively with ideas, but also literally with food. To say that he “fried a good latke” would be an understatement, because he fried hundreds of latkes every Hanukkah. That was typical of Marty’s hands-on approach. Nor was he all hands and head, as his impressive 3:17 marathon time demonstrates.

In the early 1990s he established the African American-Jewish Connection, which brought together African American and Jewish students. In 1994, he was among those who started the Jewish Historical Society of South Carolina.

Marty’s parents, Jacob and Esther Perlmutter, fled Berlin in 1938. He was born in New York on April 30, 1943. He grew up there, attending Yeshiva University High School. He did his BA at City College of New York, and his graduate work at the University of Illinois. His doctoral dissertation was written under the supervision of Frederick L. Will.

Marty was a superb and trusted friend, generously helping innumerable people over the course of his life. He was welcoming to all, and easy to like. His warm, endearing demeanour coexisted with a resolute determination. He and a much younger friend always wanted to let the other one through the door first. He jokingly told his friend that he was stubborn. His friend replied: “Look who’s talking!”

Marty preferred to focus on others rather than to be the object of focus. It was clearly a source of discomfort for him that becoming ill led him to be more the subject of attention than he would have liked. Even then he sought to deflect attention from himself. Despite that, he acceded to being the subject of an article about the novel medical treatment he was receiving, because it might bring some hope to others afflicted with cancer.

He was immensely grateful for the cutting-edge treatment he received. It extended his life for a decade, most of it of good quality. His wonderful sense of humor did not desert him even in his final days, when he was still sharing jokes. It was always a delight to experience his full-bodied laughter.

Marty is survived by his wife, Jeri, their children, Jacob, Aaron, Daniel, and Esther, their spouses and children, and by Marty’s sister Ruth, and the rest of their family. He was buried at the KKBE synagogue cemetery on Huguenin Avenue in Charleston, on January 17, 2023.

One of those eulogizing Marty quoted him as saying that those delivering funeral eulogies should always speak favorably of the deceased, but not so favorably that people think that they are at the wrong funeral. The eulogies for Marty were all glowing. Yet, given who Marty was, nobody could have had any doubt that they were at the right funeral.

David Benatar, University of Cape Town, South Africa.

Conference Coverage: Environmental Philosophy Engaged with Asian Traditions

This post is a part of the Blog's 2023 APA Conference coverage, showcasing the research of APA members across the country. The APA Eastern Conference session on Comparative Environmental Philosophy covered in this post was organized by the APA Committee on Asian and Asian American Philosophers and Philosophies. Environmental philosophy, as with the rest of […]

Stephen Nathanson (1944-2023)

Stephen Nathanson, professor emeritus of philosophy at Northeastern University, has died.

Professor Nathanson was known for his work on economic justice, war and peace, patriotism, punishment, and social and political philosophy. His books include Terrorism and the Ethics of War; An Eye for an Eye: The Immorality of Punishing by Death; The Ideal of Rationality: A Defense, Within Reason; and Patriotism, Morality, and Peace. You can learn more about his writings here.

Professor Nathanson was on the faculty of Northeastern for over 40 years, serving twice as department chair, and for several years leading the university’s center for teaching and learning. He earned his PhD from Johns Hopkins University and his undergraduate degree from Swarthmore College.

His colleagues remember him as a dedicated educator and write, “Professor Nathanson was a mentor, musician, activist, and lover of ideas. He pursued a collaborative and supportive version of philosophical practice, and believed that philosophy was crucial to understanding and responding to the challenges of our times.”

He died on February 10th. There is an obituary here.
(via John Basl)

Bruce Waller (1946-2023)

Bruce Waller, professor emeritus of philosophy at Youngstown State University, has died.

Professor Waller was known for his work on free will and moral responsibility, authoring books such as Freedom Without Responsibility, Against Moral Responsibility, The Stubborn System of Moral Responsibility, The Injustice of Punishment, and Free Will, Moral Responsibility, and the Desire to Be a God, among others. You can learn more about his writings here and here.

Professor Waller taught at Youngstown State from 1990 until his retirement in 2019. He previously taught at Elon College (now Elon University). He earned his PhD at the University of North Carolina, Chapel Hill, in 1979.

Philosopher Gregg Caruso writes:

Bruce Waller was an innovative and inspiring philosopher who almost singlehandedly revitalized moral responsibility skepticism in the 1980s and 90s, making it a serious and attractive position for a whole new generation of philosophers. Not only did he present powerful arguments against the kind of moral responsibility needed to justify our desert-based attitudes, judgements, and treatments (such as resentment, indignation, moral anger, blame, and retributive punishment), he also developed a kind of error theory to diagnose our stubborn attachment to the moral responsibility system, which he argued is ultimately a harmful and dehumanizing set of practices and beliefs. Perhaps more than any other philosopher, Waller showed that by abandoning belief in moral responsibility and the notion of just deserts, we can look more clearly into the cause and more deeply into the systems that shape individuals and their behavior, and this will allow us to adopt more humane and effective practices and policies. Waller’s work will continue to inspire and challenge philosophers for generations to come.

He died on February 8th, 2023. There is an obituary here.

UPDATE: This post was updated with information provided by Robyn Waller, Bruce Waller’s daughter-in-law.

Benjamin R. Tilghman (1927-2023)

Benjamin Ross Tilghman, professor emeritus of philosophy at Kansas State University, has died.

Professor Tilghman worked in philosophy of art and the philosophy of Ludwig Wittgenstein. He is the author of But is it Art? (1984), Wittgenstein, Ethics and Aesthetics (1991), An Introduction to the Philosophy of Religion (1993), The Expression of Emotion in the Visual Arts (2012), and Reflections on Aesthetic Judgment (2017), among other works.

Professor Tilghman joined the faculty of Kansas State in 1967, retiring in 1994, and served as department head there from 1967 to 1980. Before that, he had taught at the University of Wyoming, Western State College of Colorado, and Reed College. He earned his PhD from the University of Washington and his BA and MA from Washington University in St. Louis.

His colleague, Jim Hamilton, writes:

Ben’s interest in the philosophy department and its students at Kansas State University was everlasting. He often contributed to a fund for scholarships for students, called the “Tilghman Excellence Fund,” set up in Ben’s name by several former students. And, significantly, he was instrumental in helping faculty establish, explore, and excel in their philosophical interests.

While at the University of Washington he was introduced to the philosophy of Ludwig Wittgenstein, whose conception of philosophy continued to be the primary influence on his work. His interests were largely in aesthetics and the philosophy of art, although he “dabbled” (his word) in philosophy of religion, humorous philosophical short stories, and translations of works in French literature. Those who knew Ben will miss his keen intellect, curiosity, and wit.

 You can learn more about Professor Tilghman’s writings here.
