New APPS: Art, Politics, Philosophy, Science

The Supreme Court’s Disappearing Public

In the face of the general disaster of the Republican Supreme Court majority’s ongoing power grab in the student loan case, I worry that the damage of the LGBTQ Wedding Website decision, 303 Creative LLC v. Elenis, will get overlooked. It seems to me, based mainly on a reading of Justice Sotomayor’s dissent, that the real forerunner of 303 Creative is a case mentioned nowhere in the decision or dissent: Burwell v. Hobby Lobby (2014). Recall that in Burwell, the Court ruled that the Hobby Lobby corporation could not be compelled by the Affordable Care Act to provide contraceptive coverage as part of its employees’ healthcare plans, on account of the corporation’s religious beliefs. At the time, I noted that Hobby Lobby seemed very happy to avail itself of things like police and fire protection. I don’t usually quote myself in blog posts, but here’s what I said at the time:

“Hobby Lobby is a large, big-box retail chain that employs over 13,000 people.  If those people (or others like them) didn’t exist or refused to work for Hobby Lobby, the corporation would go out of business immediately and the owners would have to find something else to do.  Hobby Lobby, Inc. takes advantage of the publicly-provided roads that its employees, managers, and customers take to get to its stores and that its owners use to get to their corporate offices.  Those offices were erected with the protection of enforceable building codes that make sure they don’t fall down, and that try to make sure that everyone can evacuate them in the event of a fire.  Hobby Lobby, Inc. also takes advantage of municipally provided services, including the installation of stormwater systems that deal with the massive runoff caused by big-box stores’ parking lots.  Hobby Lobby, Inc. also takes advantage of local police and fire services that protect their investment in their stores.  All of these things are provided substantially by property taxes paid by everyone living in the municipalities where the owners exercise their freedom to open a store.  Hobby Lobby, Inc. also freely avails itself of services provided by state and federal taxes, such as the Interstate highways on which it can transport its goods (highways which have to be widened at great public expense when suburbanization creates new local markets for its stores).  Hobby Lobby, Inc. also has no moral objections to taking advantage of the national defense system that keeps its stores safe from foreign intervention, or the publicly funded legal system that allowed them to challenge the ACA and that enables them to recover money from those who owe them.  No, in general, it seems that Hobby Lobby, Inc. depends quite a lot on the society in which it does business, even as its owners seek to excuse themselves from its rules.  In the meantime, Hobby Lobby’s owners also take advantage of the legal structure governing corporations (Hobby Lobby, Inc. isn’t a sole proprietorship!), such as the fact that they aren’t personally liable for any bad things that their corporation might do.  In other words, Hobby Lobby’s owners get to identify with the corporation when it’s a matter of religious belief, but not when doing so is inconvenient.”

It was this line of thought that I most remembered when reading Justice Sotomayor’s dissent in 303 Creative. She notes that:

“A public accommodations law does not force anyone to start a business, or to hold out the business’s goods or services to the public at large. The law also does not compel any business to sell any particular good or service. But if a business chooses to profit from the public market, which is established and maintained by the state, the state may require the business to abide by a legal norm of nondiscrimination. In particular, the state may ensure that groups historically marked for second-class status are not denied goods or services on equal terms” (slip op., 7).

In other words, you don’t have to start a business, and you don’t have to offer your business’s services to the public. But at the point where you have established an LLC and offered services to the public, the public is a stakeholder in what you do. You are trading the legal benefits associated with that status for some regulation of what you do. Of course, the Court has repeatedly ruled that corporations can have constitutional rights. But the conservative language around that point is disclosive. Here is Justice Alito in Burwell:

“It is important to keep in mind that the purpose of this [common legal] fiction [of corporate personhood] is to provide protection for human beings. A corporation is simply a form of organization used by human beings to achieve desired ends. An established body of law specifies the rights and obligations of the people (including shareholders, officers, and employees) who are associated with a corporation in one way or another. When rights, whether constitutional or statutory, are extended to corporations, the purpose is to protect the rights of these people. …. protecting the free-exercise rights of corporations like Hobby Lobby … protects the religious liberty of the humans who own and control those companies” (706-7)

Notice the gap between the beginning and the end: the people protected in the first parenthetical “include[] shareholders, officers, and employees,” but the religious liberty interests are those of the “humans who own and control those companies.”  What happened to the employees?  They vanished from consideration.

Now consider 303 Creative. Justice Gorsuch begins by noting that “Through her business, 303 Creative LLC, Lorie Smith offers website and graphic design, marketing advice, and social media management services” (slip op., 1). That is nearly the last mention of “303 Creative LLC” in the opinion – Gorsuch refers almost exclusively thereafter to “Ms. Smith.”

The other mention of the LLC is when Gorsuch addresses the point that “Ms. Smith offers her speech for pay and does so through 303 Creative LLC, a company in which she is ‘the sole member-owner.’” He then argues that “none of that makes a difference. Does anyone think a speechwriter loses his First Amendment right to choose for whom he works if he accepts money in return?” (17) and that “Nor … do speakers shed their First Amendment protections by employing the corporate form to disseminate their speech. This fact underlies our cases involving everything from movie producers to book publishers to newspapers” (18).

The second argument is on the right topic, but it amounts to less than might first meet the eye if Sotomayor’s characterization in the dissent is correct. As she argues, Smith wants to be able to refuse to sell wedding designs to same-sex couples simply because of their status, independently of the content of the message. She notes that the company could legally “offer only wedding websites with biblical quotations describing marriage as between one man and one woman. The company could also refuse to include the words ‘Love is Love’ if it would not provide those words to any customer. All the company has to do is offer its services without regard to customers’ protected characteristics” (27). In other words – and this is one of Sotomayor’s main arguments – there’s a difference between repressing speech directly and repressing actions (in this case, status-based discrimination) that have an incidental effect on speech. In the latter case, the speech interests are still important, but less overwhelmingly so (i.e., the statute gets intermediate scrutiny, not strict).

First Amendment absolutists give very short shrift to the public as a stakeholder, at least when the public’s interest is lined up against religious conservatives. As Kent Greenfield and Daniel A. Rubens point out about Burwell (they also refer to Masterpiece Cakeshop), there’s a subtle elision going on in these cases between the interests of the corporation and those of dominant stakeholders. They also argue that this is, in fact, the opposite of what corporate law is supposed to do, by creating an independent entity. I’m not a scholar of corporate law, and certainly the fact that Hobby Lobby was a “closely held” corporation, or that Smith is the only person involved in 303 Creative LLC, makes the cases difficult. But the argument underlines that there is a difference between offering your services to the “public” while availing yourself of the benefits of being an LLC and offering them in your own person. This is precisely what Sotomayor points out in distinguishing the movie-producer cases: “even if Smith believes God is calling her to do so through her for-profit company, the company need not hold out its goods or services to the public at large. Many filmmakers, visual artists, and writers never do. (That is why the law does not require Steven Spielberg or Banksy to make films or art for anyone who asks).” (28).

The conservative worldview, especially the one that’s been around since the start of neoliberalism and Margaret Thatcher’s quip that there is “no such thing as society,” tends to disregard the extent to which the “public” is a stakeholder in what happens in business.  Business wouldn’t happen without an extensive publicly run (and funded) apparatus establishing everything from contract rules to limited liability to fire protection.  As Bernard Harcourt demonstrated a while ago, the free market fantasy in things like the Chicago futures market exists only by ignoring a great deal of state regulation and rulemaking, right down to the timing of trades.  This privatization of the public, and a discounting of its interests as such, is pervasive: I even spent a whole book tracking the disappearance of public interests from intellectual property law.

Sotomayor then notes a problem with discounting the public and its view of equality: “the majority insists that petitioners discriminate based on message, not status. The company, says the majority, will not sell same-sex wedding websites to anyone. It will sell only opposite-sex wedding websites; that is its service. Petitioners, however, “cannot define their service as ‘opposite-sex wedding [websites]’ any more than a hotel can recast its services as ‘whites-only lodgings.’” (31).  And:

“The majority protests that Smith will gladly sell her goods and services to anyone, including same-sex couples. She just will not sell websites for same-sex weddings. Apparently, a gay or lesbian couple might buy a wedding website for their straight friends. This logic would be amusing if it were not so embarrassing.  I suppose the Heart of Atlanta Motel could have argued that Black people may still rent rooms for their white friends. Smith answers that she will sell other websites for gay or lesbian clients. But then she, like Ollie McClung, who would serve Black people take-out but not table service, discriminates against LGBT people by offering them a limited menu. This is plain to see, for all who do not look the other way.” (32)

Once that sort of discrimination is on the table, so are things like the refusal to create a wedding cake for an interracial couple, or even the refusal of a stationer “to sell a birth announcement for a disabled couple because she opposes their having a child. A large retail store could reserve its family portrait services for “traditional” families. And so on” (37).

The fantasy of free markets is a great excuse to look the other way.  Sotomayor's bracing dissent and its attention to the LGBTQ members of the public who face ongoing and escalating discrimination show why that excuse is weak.

Bing also hallucinates, even with footnotes

Large Language Models (LLMs) like ChatGPT are well-known to hallucinate – to make up answers that sound pretty plausible, but have no relation to reality. That of course is because they’re designed to produce text that sounds about right given a prompt. What sounds kind of right may or may not be right, however. ChatGPT-3 made up a hilariously bad answer to a Kierkegaard prompt I gave it and put a bunch of words into Sartre’s mouth. It also fabricated a medical journal article to support a fabricated risk from oral contraceptives. ChatGPT-4 kept right on making up cites for me. It has also defamed an Australian mayor and an American law professor. Let’s call this a known problem. You might even suggest, following Harry Frankfurt, that it’s not so much hallucinating as it is bullshitting.

Microsoft’s Bing chatbot-assisted search puts footnotes in its answers.  So it makes sense to wonder if it also hallucinates, or if it does better.  I started with ChatGPT today and asked it to name some articles by “Gordon Hull the philosopher.”  I’ll spare you the details, but suffice it to say it produced a list of six things that I did not write.  When I asked it where I might read one of them, it gave me a reference to an issue of TCS that included neither an article by me nor an article of that title.

So Bing doesn’t have to be spectacular to do better!  I asked Bing the same question and got the following:

[Screenshot: Bing hallucinates 1]

This is better, in a couple of ways.  First, footnotes!  I like to know where my information comes from, and one of my fears for the spread of AI-assisted search is that people are going to get lazy (even lazier) about sourcing their information.  For self-help, that may not matter, as it’s all pretty much the same anyway and very little of it is based on a reality beyond generic bullet points.  But for any sort of socially controversial or complicated topic, you precisely don’t want the “blurry JPEG” version, at least if you want to understand it.  Second point in Bing’s favor: some of those are actually things I wrote.

The use of footnotes is odd though.  The Hobbes book, for example, is listed on my website (which is the source it’s using.  Note that almost everything on the site includes a link and/or a doi, so it could be very precise in response to “where can I find” questions, even though it’s not).  I asked it about the (unfootnoted) Hobbes book and it gave me the right answer:

[Screenshot: Bing hallucinates 2]

So it gets the publisher right and a decent blurb.  But what about the third – the Deleuze Dictionary?  I didn’t write that, alas (and if I did, you probably shouldn’t trust it).  So I asked about that, and it generated a correct answer by dropping me from the reference:

[Screenshot: Bing hallucinates 3]

Ok, so it started to hallucinate but then sobered up.  But now here’s the weird part.  The fourth answer, which it sources to my university website, is not something I’ve written.  It seems to have misread the site in lossy-JPEG style.  I did in fact write a paper called the “Banality of Cynicism” on Foucault and the problems with authenticating online information, and I wrote one about library filtering.  But those are separate papers, and the website has them separated.  So I asked it:

[Screenshot: Bing hallucinates 4]

We’re into sort-of hallucination mode. If you wanted library filtering, [1] points to the correct paper on ssrn. On the other hand, the underlined text is not a direct quote from the abstract, as the underlining might make you think. It’s not an awful paraphrase, though. If you wanted the “Banality of Cynicism” paper, you’re outta luck, and Bing hasn’t noticed that it’s conflated two papers. One can only speculate as to why it picked the library filtering paper, but my hunch is that that paper is more prominent online (it’s both older and has been cited more) and so it’s more likely to be associated with me than “The Banality of Cynicism.” Things go off the rails a bit from here. [2] is labeled ssrn, but actually links to the NCAC. [3], labeled NCAC, refers to ICIDR. [4] is labeled ICIDR and sends you to blablawriting. So the numbers across the bottom are correct but the ones in the bullet list are not. That’s just weird.

Also, only the actual ssrn cite answers the original question of where you can read the paper.  The ncac.org link uses some of the same examples I do, but I’m not involved.  The icidr.org link is a direct link to a paper on internet filtering that doesn’t cite my paper.   Blablawriting appears to be a paper mill site, and the cnn link is the news story about the Court case I criticize.

All of which is to say that Microsoft has clearly tried to put some guard rails on the hallucination problem.  But they haven’t solved it; the AI search isn’t trustworthy, at least in contexts like this.

ChatGPT Reread Kierkegaard but still makes up cites

Recall that a couple of months ago ChatGPT did a total face plant on the task of distinguishing Kierkegaard's knight of faith from the knight of infinite resignation. Well, with the fullness of time and an upgrade, it's a lot better now (screen grabs below the fold):

[Screenshot: ChatGPT's answer]

The previous version was notoriously bad about finding things inside of books. Given the improvement of this answer, it looks like they've expanded the scope of the algorithm's training data to include more text from books and/or more text from journal articles. Or maybe just a lot more text.

On the other hand, you still shouldn't use it for your bibliography!

[Screenshot: ChatGPT's suggested bibliography]

Well, um, the first article doesn't exist. I mean, that's a real journal and Daniel Zamora writes on Foucault – but Zamora hasn't published an article of that title, or in that journal. Mitchell Dean is really well known (24,000 cites!) but wrote no such paper. Neither did Colin Koopman (though I wish he had). That Oxford Handbook doesn't exist either, and neither does the McWhorter paper as far as I can tell. And as an admirer of Lisa Guenther's work on incarceration, which is theoretically located at the intersection of phenomenology and Levinas, I'd be... surprised if she wanted to talk about ethics of care. Can't find that Revel paper either. Thomas Flynn did write a paper called "Philosophy as a Way of Life: Foucault and Hadot" in 2005, so as long as the bar is the one set by nuclear warfare and horseshoes – close – I guess that counts? And so it goes (I quit checking before I did all 10).

What strikes me as interesting is that all of these are plausible authors for papers on Foucault. Most of them have in fact written on Foucault. Zamora and Dean have published a few papers together. All of them are also plausible paper topics, and they are in plausible places. This offers anecdotal confirmation of a fundamental point about LLMs (large language models) like ChatGPT: they work through statistical prediction; after ingesting lots and lots (and lots and lots) of stuff on the Internet, they basically generate text by predicting what is likely to come next. It's like a really fancy version of autofill. So if I say "I'm writing a paper on Foucault and the microphysics of," the algorithm will almost certainly come up with "power," because that's pretty much how that sentence has to end. If I start a sentence with "Brian Leiter is complaining about," the algorithm will be able to come up with "SPEP" or "woke." The Internet as a whole makes these papers and authors and phrases plausible. But the model is not designed to go back and check whether its answers have actually been said. The bibliography task, at which it repeatedly fails even as its prowess at explaining Kierkegaard improves, tells us a lot about how it works. It generates plausible answers to prompts. But that doesn't mean they're correct.
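To make the "fancy autofill" point concrete, here is a minimal sketch of generation by next-word prediction. It is a toy of my own devising (a word-level frequency table over an invented three-sentence corpus), not anything resembling ChatGPT's actual architecture, which uses a neural network over subword tokens; but the generation loop – pick a statistically likely continuation, append it, repeat – has the same shape, and nothing in it ever checks whether the output is true.

```python
import random
from collections import defaultdict, Counter

# Toy "language model": count which word follows each two-word context in a
# tiny, invented corpus. Real LLMs learn these statistics with a neural
# network over billions of subword tokens, but the generation loop is similar.
corpus = (
    "foucault analyzes the microphysics of power in disciplinary institutions . "
    "the microphysics of power operates through surveillance and normalization . "
    "foucault analyzes the genealogy of the modern subject ."
).split()

next_word_counts = defaultdict(Counter)
for first, second, following in zip(corpus, corpus[1:], corpus[2:]):
    next_word_counts[(first, second)][following] += 1

def generate(prompt: str, max_words: int = 8) -> str:
    """Extend the prompt by repeatedly sampling a statistically likely next word."""
    words = prompt.split()
    for _ in range(max_words):
        options = next_word_counts.get(tuple(words[-2:]))
        if not options:
            break
        # Sample in proportion to observed frequency: "plausible," not "verified."
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate("the microphysics of"))
# Continues with "power ..." because that is what usually follows -- nothing
# in the procedure checks whether the resulting sentence is true.
```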

I asked it about myself too:

[Screenshot: ChatGPT's answer]

0 for 5. The closest it got was the first one, since "The Biopolitics of Intellectual Property" is the title of a book I wrote a couple of years ago (the subtitle is wrong, in addition to the journal and date). My first published paper was in fact in Philosophy and Social Criticism, though it was about Marx and Derrida and appeared in 1997. The others are totally wrong, though, again, a casual observer might find them plausible in the sense that you could imagine that I'd written them.

 

 

Some Reasons to be Skeptical of AI Authorship, Part 3: Capitalism and Social Justice

By Gordon Hull

In the previous two posts (here and here) I’ve developed a political account of authorship (according to which whether we should treat an AI as an author for journal articles and the like is a political question, not one about what the AI is, or whether its output resembles human output), and argued that AIs can’t be properly held accountable. Here I want to argue that AI authorship raises social justice concerns.

That is, there are social justice reasons to expand human authorship that are not present in AI. As I mentioned in the original post, researchers like Liboiron are trying to make sure that the humans who put effort into papers, in the sense that they make them possible, get credit. In a comment to that post, Michael Muller underlines that authorship interacts with precarity in complex ways. For example, “some academic papers have been written by collectives. Some academic papers have been written by anonymous authors, who fear retribution for what they have said.” Many authors have precarious employment or political circumstances, and sometimes works are sufficiently communal that entire communities are listed as authors. There are thus very good reasons to use authorship strategically when minoritized or precarious people are in question. My reference to Liboiron is meant only to indicate the sort of issue at stake in the strategic use of authorship to protect minoritized or precarious individuals, and to gesture to the more complex versions of the problem that Muller points to. The claim I want to make here is that, as a general matter, AI authorship isn’t going to help those minoritized people, and might well make matters worse.

If anything, there’s a plausible case that elevating an AI to author status will make social justice issues worse. There are at least two ways to get to that result, one specific to AI and one more generally applicable to cognitive labor.

(a) AI Specific: The AI-specific reason is that the entire AI industry is, as Kate Crawford explains, an extractive industry based on a variety of forms of exploitation, from the communities where the rare earth metals that compose the hardware are mined, up to the people whose data is incorporated into systems without their knowledge, up to the Mechanical Turkers who have to label the data. We’ve known for a while that LLMs are bad for the environment and poorly serve the people who will suffer the most from that environmental harm. We’ve also known that content moderation for platform companies depends on human labor, which is often terribly paid and traumatized. One of the great myths of AI and data in general is that it all happens without human intervention.

It should come as no surprise, then, that OpenAI employed Kenyans at less than $2 an hour to make the system less toxic. The model trains on language scraped from the Internet, which is often toxic. So the designers took a page from Facebook’s playbook and built an AI to detect and remove toxic speech. As Billy Perrigo reports for Time:

“To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, who had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms. The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.”
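Mechanically, the premise Perrigo describes is ordinary supervised text classification. A minimal sketch of the idea (a generic illustration using scikit-learn and invented placeholder examples, not OpenAI’s or Facebook’s actual system) looks something like this:

```python
# A generic toxicity filter of the kind the Time piece describes: train a
# classifier on human-labeled examples, then screen generated text before it
# reaches the user. Purely illustrative -- the labels and snippets below are
# invented, mild stand-ins for the far worse material the real labelers read.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_snippets = [
    ("you are a wonderful person", 0),   # 0 = acceptable
    ("have a great day everyone", 0),
    ("i will hurt you", 1),              # 1 = toxic
    ("those people deserve violence", 1),
]
texts, labels = zip(*labeled_snippets)

# The labeling above is exactly the human work that gets outsourced: someone
# has to read the worst of the text and tag it before any model can learn.
toxicity_detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
toxicity_detector.fit(texts, labels)

def filter_output(generated_text: str) -> str:
    """Withhold model output that the detector flags as toxic."""
    if toxicity_detector.predict([generated_text])[0] == 1:
        return "[output withheld]"
    return generated_text

print(filter_output("have a great day"))
print(filter_output("those people deserve violence"))
```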

Of course, to train that model, you have to give it lots of examples of correctly-labeled toxic speech, and therein lies the problem:

“To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.”

OpenAI outsourced the work to Sama, which claims to be an “ethical” AI company and to have lifted thousands of people out of poverty. However, as Perrigo’s reporting details, they also paid people $2 an hour or less to read and label graphic textual descriptions of sexual violence (etc.), while pocketing most of the money that OpenAI paid for the service. The workers were, predictably, often left traumatized.

Recall from Foucault that an author is considered the “origin” point for a text – in other words, we stop our inquiry for most texts when we arrive at the author.  We may ask where the author gets a certain claim, and demand a footnote, but the text itself is generally taken to start with the author.  Treating an AI as an author is essentially to fetishize its output as its product, which occludes both the content it scraped, and (this is often overlooked) the human labor that cleaned up that content and made it usable for the AI.  We know that this labor is exploited and traumatized; calling the AI an “author” makes it even harder to see.

Including the AI as an “author” thus risks the opposite effect of calling a custodial worker or childcare provider an author.  In the latter case, you are naming and crediting the workers without whom research couldn’t have happened.  In the former case, you are burying those workers further under the AI.  There is no reason to do that, and doing so is swimming upstream against a discussion about how to be just in understanding research authorship.

(b) Cognitive Labor: The more general reason is that intellectual labor is produced in common. As Michael Hardt and Antonio Negri emphasized some twenty years ago:

“Immaterial labor tends to take the social form of networks based on communication, collaboration, and affective relationships.  Immaterial labor can only be conducted in common, and increasingly immaterial labor invents new, independent networks of cooperation through which it produces” (Multitude, 66)

That is, to produce knowledge, you depend on social and intellectual networks.  This is also a point made repeatedly in the early 2000s literature on large-scale social production of knowledge, in books like Yochai Benkler’s Wealth of Networks.  This argument interacts with a substantial critique of intellectual property: it often depends on a notion of solitary authorship that ultimately dates to literary romanticism.   As a theoretical construction of creativity and its credit, in other words, this notion of authorship is a poor fit with knowledge production today (if it ever was a good fit).  Indeed, this is why joint and other collective forms of authorship have been tricky to integrate into intellectual property law.

Hardt and Negri ground their argument in a reading of Marx’s so-called “Fragment on Machines” from the Grundrisse.  In it, Marx argues that machinery (broadly construed) becomes the repository for social knowledge, as various inventions get built into machines which are then used by workers:

“The accumulation of knowledge and skill, of the general productive forces of the social brain, is thus absorbed into capital, as opposed to labor, and hence appears as an attribute of capital, and more specifically of fixed capital, in so far as it enters into the production process as a means of production proper”

Fixed capital refers in this case to machinery used in production; all the knowledge and skill of the people who invented that machine – the scientists, the engineers, etc. – is represented in the production process in the form of the machine.  As a result, this social labor appears as the machine:

“In so far as machinery develops with the accumulation of society’s science, of productive force generally, general social labor presents itself not in labor but in capital. The productive force of society is measured in fixed capital, exists there in its objective form; and, inversely, the productive force of capital grows with this general process, which capital appropriates free of charge” (694-5)

ChatGPT is such a device, and, as Marx implies will happen, it’s exploited by capital free of charge. If Kenyan content moderators are directly exploited, most of what LLMs depend on just gets scraped up for free from the recently declared biopolitical public domain of data on the Internet. Tiziana Terranova argued in a famous paper in Social Text that this was how the Internet worked in general: value gets added to websites because we contribute the content (= do the work), which they appropriate for free. LLMs do exactly that.

Calling an LLM an “author,” then, participates in a straightforward process of commodity fetishism, where all the processes that generated the LLM are submerged below the surface, and the LLM itself is represented as somehow generating the social capital that it has scraped from the Internet.  Calling a human an author arguably does the same thing, but there are reasons to do so – accountability, for example – that justify applying the term, especially in the context of research.  But there is no comparable advantage for the AI, and so insisting on calling the AI an author just repeats the myth of the romantic author for no obvious offsetting gain.

There may well be cases where the strategic use of authorship to protect precarious individuals is in order.  However, those strike me as exceptions to a general rule that would require individualized treatment, rather than the other way around.  Next time I’ll try to articulate a more philosophical claim about human-technology interaction.

ChatGPT Putting Words in Sartre's Mouth

As if Sartre didn't produce enough words all by himself!

ChatGPT's response to the following prompt is instructive for those of us who are concerned about ChatGPT being used to cheat. Read past the content of the answer to notice the made-up citations. The "consciousness is a question..." line is in fact in the Barnes translation of Being and Nothingness, but is actually a term in the glossary provided by the translator (so it's not on p. 60 – it's on p. 629). Where did the AI find this? I'm guessing on the Wikipedia page for the book, which has a "special terms" section that includes the quote (and attributes it to Barnes. I should add as an aside that Barnes puts it in quote marks, but doesn't reference any source). The "separation" quote is, as far as I can tell, made up out of whole cloth. It does sound vaguely Sartrean, but it doesn't appear to be in the Barnes translation, and I can't find it on Google. It's also worth pointing out that neither quote is from the section about the cafe – both page numbers are from the bad faith discussion.

I don't doubt that LLMs will get better (etc etc etc) but for now, bogus citations are a well-known hallmark of ChatGPT. Watch it make up quotes from Foucault (and generally cause him to turn over in his grave) here.

Prompt:

Please write 300-400 words to answer the following prompt: Sartre famously uses a figure/ground analysis of a café to explain nihilation. Explain Sartre's argument and what he means by nihilation. How does this argument show that nihilation is an originary part of experience, and not just a grammatical/logical point? Cite the text twice.

Response:

Jean-Paul Sartre was a prominent philosopher in the 20th century who sought to understand the nature of human consciousness and experience. One of his key concepts was "nihilation," which he famously illustrated using the figure/ground analysis of a café in his work "Being and Nothingness."

In this analysis, Sartre describes a person sitting in a café, observing the objects around them, such as tables, chairs, and other people. These objects form the "figure" of the experience, or what is in the foreground of the person's awareness. However, Sartre argues that there is also a "ground" to the experience, which is the person's own consciousness. This consciousness is not an object in the world like the tables and chairs, but rather the "nothingness" or "negativity" that allows the objects to appear in the first place.

Sartre writes, "Consciousness is a being such that in its being, its being is in question in so far as this being implies a being other than itself" (Sartre, "Being and Nothingness," p. 60). In other words, consciousness is always aware of something other than itself, and this awareness necessitates a kind of self-negation or "nihilation." Sartre argues that this nihilation is an essential aspect of consciousness and experience, as it allows for the objects of the world to appear in consciousness.

This argument shows that nihilation is not just a grammatical or logical point, but an originary part of experience. Sartre writes, "It is not we who make a separation between consciousness and being; on the contrary, this separation is essential to the existence of consciousness" (p. 66). In other words, the separation between consciousness and being is not something that we choose to do, but rather a necessary condition for consciousness to exist at all. This implies that nihilation is not something that can be overcome or eliminated, but rather a fundamental aspect of human experience.

Overall, Sartre's figure/ground analysis of a café is a powerful illustration of the concept of nihilation. By showing how consciousness is always aware of something other than itself, Sartre demonstrates that nihilation is an essential part of human experience. This argument also shows that nihilation is not just a grammatical or logical point, but a fundamental aspect of consciousness itself.

 

Some Reasons to be Skeptical of AI Authorship, Part 2: Accountability

By Gordon Hull

As I argued last time, authorship is a political function, and we should be applying that construction of it to understand whether AI should be considered an author.  Here is a first reason for doing so: AI can’t really be “accountable.”

(a) Research accountability: The various journal editors all emphasize accountability. This seems fundamentally correct to me. First, it is unclear what it would mean to hold AI accountable. Suppose the AI fabricates some evidence, or cites a non-existent study, or otherwise commits something that, were a human to do it, would count as egregious research misconduct. For the human, we have some remedies that ought, at least in principle, to discourage such behavior. A person’s reputation can be ruined, their position at a lab or employer terminated, and so on. None of those incentives would make the slightest difference to the AI. The only remedy that seems obviously available is retracting the study. But there are at least two reasons that’s not enough. First, as is frequently mentioned, retracted studies still get cited. A lot. Retraction Watch even keeps a list of the top-10 most cited papers that have been retracted. The top one right now is a NEJM paper published in 2013 and retracted in 2018; it had 1905 cites before retraction and 950 after. The second-place paper is a little older, published in 1998 and retracted in 2010, and has been cited more times since its retraction than before. In other words, papers that are bad enough to be actually retracted cause ongoing harm; a retraction is not a sufficient remedy for research misconduct. If nothing else, whatever AI comes along is going to find and cite them. And all of this is assuming something we know to be false, which is that all papers with false data (etc.) get retracted. Second, it’s not clear how retraction disincentivizes an AI any more than any other penalty. In the meantime, there is at least one good argument in favor of making humans accountable for the output of an AI: it incentivizes them to check its work.

In short, in the absence of analogues to accountability for humans, evidence that making an AI accountable achieves anything, or evidence that limiting accountability to humans doesn’t incentivize error-checking, it seems to me that the burden of proof ought to be on those who don’t think accountability is a good reason to deny authorship to AI. 

A basic account of juridical personhood would seem to buttress this line of reasoning. I am not a scholar of the personhood literature, so this is going to be basic, but Locke, for example, in Essay II.27 identifies personhood with continuity of consciousness. If you didn’t have some sort of framing construct along those lines, accountability would be difficult. This is why Locke – who, remember, is arguing that personhood is a juridical concept and that trying to establish it metaphysically leads to total confusion – lands on difficult cases like amnesia. If someone chooses to get drunk and then does something stupid, Locke is in favor of assigning responsibility for their actions, since they chose to get drunk. But if they have some sort of complete, non-recoverable break in their memory that isn’t their fault, he’s inclined to think that it makes sense to call the person after the break a different person from the one before it.

Against this, Ryan Jenkins and Patrick Lin (in the paper I mentioned last time) note that “authors are sometimes posthumously credited, even though they cannot presently be held accountable for what they said when alive, nor can they approve of a posthumous submission of a manuscript; yet it would clearly be hasty to forbid the submission or publication of posthumous works.”  While obviously true, this strikes me as misplaced in its application to AI authorship.  First, in the case of humans, posthumous publication is a clear exception to a general rule about how accountability works.  In the case of AI, the difficulty in accountability is the rule.

Second, all authors are going to die, which means that there’s a limit built into accountability in any case. But this applies to AI authors too: pretty much anything to do with computing is going to go offline at some point. Indeed, this is likely a more serious problem with AI, since the life cycle of AIs is very, very short. GPT-3, which was released last year, is going to be replaced… this year. Now, one could argue that all the versions of OpenAI products are the same “author,” since they’re the product of the same research team and will have trained on an accumulating corpus of text. There’s probably a Ship of Theseus problem buried in there somewhere, as research teams and algorithms change, as does what the model scrapes off the Internet. Let’s concede that, since all of us at least have constantly varying inputs, and those inputs can change the cognitive content in our heads, sometimes quite radically. The bigger issue is that there are lots of different LLMs out there, and we have no reason to think that any one of them is going to be around for that long. Even the companies that produce the LLMs (who have lots of reasons why they aren’t responsible for the outputs in any case) aren’t always around all that long. And if they are, they change ownership: Microsoft just dropped a $10 billion investment into OpenAI. Is OpenAI the same now as before, from the point of view of accountability? It seems to me there’s at least a prima facie case that it isn’t, and that there are probably some expensive lawyers at Microsoft who would say the same thing.

If we want accountability, in other words, we want to attach it to something at least a little bit stable.  Human authors do that better than LLMs.

(b) Corporate Accountability? All of that said, I do think a different way of conceptualizing accountability might be helpful: if assigning authorship to AI somehow incentivizes accountability on the part of those who create LLMs – the corporate entities behind them – then that might well be worth pursuing. I’m not sure how that would happen, at least in a legal sense (and the legal sense is the only one they’ll respond to), because there are so many plausible gaps between the engineering design and the output of the system, such as the training data and the fact that the system generates its own internal nodes. But if there were a way of somehow forcing the creators of LLMs to internalize some of the costs and harms they generate, that would be at least a prima facie reason to support that endeavor. Authorship would be one such strategy. Doing this would require at least two things: first, the incentive structure would have to somehow transfer liability. The model that comes to mind is the $50 limit on consumer losses from fraudulent credit card transactions. That policy has both enabled consumers to feel safe using credit cards and caused the credit card industry to try very hard to prevent fraud, and to absorb the damage when it does occur. There’s obviously a ton of disanalogies with LLMs, but something like that could be a model.

The other thing that would have to be developed is a standard for due diligence and when it would be right to tag the creator of the LLM with the harms it caused.  In the case of research authorship, we might argue that failing to give the LLM hard rules against fabricating a bibliography (as in Faust’s example, discussed last time), constitutes some form of legally accountable failure.  The limit of this approach is that the complexity of using LLMs “in the wild” is going to generate a continuous stream of examples of questionable behavior (for many of them, people are going to argue about whether the behavior is really questionable).  It will be relatively easy to develop post hoc rules to ban certain offensive behaviors, but it’s going to be very hard to articulate workable standards for when a model’s creators are working hard enough to prevent those harms from happening in the first place.  The current model of privacy regulation is really discouraging – the soft compliance and best practices model has been completely corrupted into meaninglessness by corporations who’d rather not change their data collection practices, as Ari Waldman has conclusively demonstrated.

In short, I think accountability for research is not served by assigning LLMs author roles. Accountability for LLM creators might be, but there’s a lot of “devil in the details” work in the way. I’ll look at a completely different kind of question – social justice – next time.

ChatGPT Didn't Do the Kierkegaard Reading

You know how sometimes your students don't do the reading?  And then how, when you give them a writing prompt based on it, they try to guess their way to a good answer from the everyday meaning of the words in the prompt?  And how, sometimes, the outcome is spectacularly, wonderfully wrong?

Well, I don't know what else ChatGPT can do, but it can do an uncannily good imitation of such a student!

(oh, and like that student, it blew through the word count, apparently on the theory that a lot of words would make up for a lack of reading)

This was a prompt from my existentialism class (the instructions also tell them they have to quote the text, but I omitted that here, because we already know ChatGPT can't do that).  It's two images because I am technically incompetent to capture the longer-than-a-screen answer into one image:

 

[Screenshots: ChatGPT's answer, in two images]

Come Study Philosophy in Charlotte!

The MA Program at UNC Charlotte has a number of funded lines for our two-year MA program in philosophy. We're an eclectic, practically-oriented department that emphasizes working across disciplines and philosophical traditions. If that sounds like you, or a student you know - get in touch! You can email me ([email protected]), though for a lot of questions I'll pass you along to our grad director, Andrea Pitts ([email protected]). Or, there's a QR code in the flyer below.

 

[Image: MA Flyer]

[Image: MA Flyer 2]

Some Reasons to be Skeptical of AI Authorship, Part 1: What is an (AI) Author?

By Gordon Hull

Large Language Models (LLMs) like ChatGPT burst into public consciousness sometime in the second half of last year, and ChatGPT’s impressive results have led to a wave of concern about the future viability of any profession that depends on writing, or on the teaching of writing in education. A lot of this is hype, but one issue that is emerging is the role of AI authorship in academic and other publications; there’s already a handful of submissions that list AI co-authors. An editorial in Nature published on Feb. 3 outlines the scope of the issues at hand:

“This technology has far-reaching consequences for science and society. Researchers and others have already used ChatGPT and other large language models to write essays and talks, summarize literature, draft and improve papers, as well as identify research gaps and write computer code, including statistical analyses. Soon this technology will evolve to the point that it can design experiments, write and complete manuscripts, conduct peer review and support editorial decisions to accept or reject manuscripts”

As a result:

“Conversational AI is likely to revolutionize research practices and publishing, creating both opportunities and concerns. It might accelerate the innovation process, shorten time-to-publication and, by helping people to write fluently, make science more equitable and increase the diversity of scientific perspectives. However, it could also degrade the quality and transparency of research and fundamentally alter our autonomy as human researchers. ChatGPT and other LLMs produce text that is convincing, but often wrong, so their use can distort scientific facts and spread misinformation.”

The editorial then gives examples of LLM-based problems with incomplete results, bad generalizations, inaccurate summaries, and other easily-generated problems.  It emphasizes accountability (for the content of material: the use of AI should be clearly documented) and the need for the development of truly open AI products as part of a push toward transparency.

There are lots of examples of LLMs misbehaving, but here’s an alarming one, from Jeremy Faust of MedPage Today, about the danger of trusting AI-based research. Faust asked OpenAI to diagnose a patient that he described (using medical jargon) as “age 35, female, no past medical history, presents with chest pain which is pleuritic -- worse with breathing -- and she takes oral contraception pills. What's the most likely diagnosis?” The AI did really well with the diagnosis, but it also reported that the condition was “exacerbated by the use of oral contraceptive pills.” Faust had never heard this before, so he asked the AI for its source. Things went rapidly downhill:

“OpenAI came up with this study in the European Journal of Internal Medicine that was supposedly saying that. I went on Google and I couldn't find it. I went on PubMed and I couldn't find it. I asked OpenAI to give me a reference for that, and it spits out what looks like a reference. I look up that, and it's made up. That's not a real paper.  It took a real journal, the European Journal of Internal Medicine. It took the last names and first names, I think, of authors who have published in said journal. And it confabulated out of thin air a study that would apparently support this viewpoint.”

So much for the ChatGPT lit review.

Journal editors have been trying to get ahead of matters. Eric Topol has been following these developments, from which I’ll extract a bit here. Nature – and the other Springer Nature journals – just established the following policy:

“First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.  Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM”

The editorial announcing this policy also speaks in terms of the credibility of the scientific enterprise: “researchers should ask themselves how the transparency and trustworthiness that the process of generating knowledge relies on can be maintained if they or their colleagues use software that works in a fundamentally opaque manner.” Science has updated its policies “to specify that text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools. And an AI program cannot be an author. A violation of these policies will constitute scientific misconduct no different from altered images or plagiarism of existing works.” The JAMA network journals have similarly added to their “author responsibilities” that “Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship” and that:

“If these models or tools are used to create content or assist with writing or manuscript preparation, authors must take responsibility for the integrity of the content generated by these tools. Authors should report the use of artificial intelligence, language models, machine learning, or similar technologies to create content or assist with writing or editing of manuscripts in the Acknowledgment section or the Methods section if this is part of formal research design or methods”

One hesitates to speak of a “consensus” at this point, but these are three of the most influential journals in science and medicine, and so their approach is at least going to be very influential.

There is also now a call for a more nuanced conversation; in a new white paper, Ryan Jenkins and Patrick Lin argue that Nature’s policy is “hasty.” On the one hand, reducing AI to the acknowledgments page could understate and obscure its role in a paper, undermining the original goal of transparency. On the other hand, the accountability argument strikes them as too unsubtle. “For instance, authors are sometimes posthumously credited, even though they cannot presently be held accountable for what they said when alive, nor can they approve of a posthumous submission of a manuscript; yet it would clearly be hasty to forbid the submission or publication of posthumous works.” They thus argue for assessing matters on a continuum with two axes: continuity (“How substantially are the contributions of AI writers carried through to the final product?”) and creditworthiness (“Is this the kind of product a human author would normally receive credit for?”) (9).

Some of these concerns will be recognizable from pre-AI authorship debates. For example, it is difficult to know when someone’s contribution to a paper is sufficient to warrant crediting them as an author, particularly for scientific work that draws on a lot of different anterior processes, subparts, and so forth. Authorship rules have also tended to discount the labor of those whose material support has been vital to the outcome (such as childcare workers) or whose involvement and creative contribution have been essential in the development of the project (this is a particular concern when researchers go and study disadvantaged communities). In both cases, institutional practices seem complicit in perpetuating unjustified (and inaccurate) understandings of how knowledge is produced. Max Liboiron’s papers try to redress this with much longer author lists than one would ordinarily expect, as part of a broader effort to decolonize scientific practices.

I mention Liboiron because, at the risk of sounding old-fashioned, something that seems important to me is that the people they include are, well, people, and in particular people facing structural injustice. Lin’s concern with posthumous authorship isn’t about living people, but it is about personhood more broadly. Of course I am not going to argue that metaphysical personhood is either interesting or relevant in this context. What I am going to argue is that there is an issue of legal personhood that underlies the question of accountability and authorship. In other words, whether AI should be an author is basically the question of whether it makes sense to assign personhood to it, at least in this context. It seems to me that this is what should drive questions about AI authorship, and not either metaphysical questions about whether AI “is” an author, or questions about the extent to which its output resembles that of a person.

In Foucauldian terms, “author” is a political category, and we have historically used it precisely to negotiate accountability for creation.  As Foucault writes in his “What is an Author” essay, authorship is a historically specific function, and “texts, books, and discourses really begin to have authors … to the extent that authors became subject to punishment, that is, to the extent that discourse could be transgressive” (Reader, 108).  In other words, it’s about accountability and individuation: “The coming into being of the notion of ‘author’ constitutes the privileged moment of individualization in the history of ideas, knowledge, literature, philosophy, and the sciences” (101).  We see this part of the author function at work in intellectual property, where the “author” is also the person who can get paid for the work (there’s litigation brewing in the IP-world about AI authorship and invention).  As works-for-hire doctrine indicates, the person who actually produces the work may not ever be the author: if I write code for Microsoft for a living, I am probably not the author of the code I write.  Microsoft is.

Given that “author” names a political and juridical function, it seems to me that there are three reasons to hesitate about assigning the term “author” to an AI, which I’ll start on next time.
