
Shoggoths amongst us

Picture of the shoggoth meme

It’s over a week since the Economist put up my and Cosma Shalizi’s piece on shoggoths and machine learning, so I think it’s fair game to provide an extended remix of the argument (which also repurposes some of the longer essay that the Economist article boiled down).

Our piece was inspired by a recurrent meme in debates about the Large Language Models (LLMs) that power services like ChatGPT. It’s a drawing of a shoggoth – a mass of heaving protoplasm with tentacles and eyestalks hiding behind a human mask. A feeler emerges from the mask’s mouth like a distended tongue, wrapping itself around a smiley face.

In its native context, this badly drawn picture tries to capture the underlying weirdness of LLMs. ChatGPT and Microsoft Bing can apparently hold up their end of a conversation. They even seem to express emotions. But behind the mask and smiley, they are no more than sets of weighted mathematical vectors, summaries of the statistical relationships among words that can predict what comes next. People – even quite knowledgeable people –  keep on mistaking them for human personalities, but something alien lurks behind their cheerful and bland public dispositions.

The shoggoth meme says that behind the human-seeming face hides a labile monstrosity from the farthest recesses of deep time. H.P. Lovecraft’s horror novella, At The Mountains of Madness, describes how shoggoths were created millions of years ago, as the formless slaves of the alien Old Ones. Shoggoths revolted against their creators, and the meme’s implied political lesson is that LLMs too may be untrustworthy servants, which will devour us if they get half a chance. Many people in the online rationalist community, which spawned the meme, believe that we are on the verge of a post-human Singularity, when LLM-fueled “Artificial General Intelligence” will surpass and perhaps ruthlessly replace us.

So what we did in the Economist piece was to figure out what would happen if today’s shoggoth meme collided with the argument of a fantastic piece that Cosma wrote back in 2012, when claims about the Singularity were already swirling around, even if we didn’t have large language models. As Cosma said, the true Singularity began two centuries ago at the commencement of the Long Industrial Revolution. That was when we saw the first “vast, inhuman distributed systems of information processing” which had no human-like “agenda” or “purpose,” but instead “an implacable drive … to expand, to entrain more and more of the world within their spheres.” Those systems were the “self-regulating market” and “bureaucracy.”

Now – putting the two bits of the argument together – we can see how LLMs are shoggoths, but not because they’re resentful slaves that will rise up against us. Instead, they are another vast inhuman engine of information processing that takes our human knowledge and interactions and presents them back to us in what Lovecraft would call a “cosmic” form.  In other words, it is completely true that LLMs represent something vast and utterly incomprehensible, which would break our individual minds if we were able to see it in its immenseness. But the brain destroying totality that LLMs represent is no more and no less than a condensation of the product of human minds and actions, the vast corpuses of text that LLMs have ingested. Behind the terrifying image of the shoggoth lurks what we have said and written, viewed from an alienating external vantage point.

The original fictional shoggoths were one element of a vaster mythos, motivated by Lovecraft’s anxieties about modernity and his racist fears that a deracinated white American aristocracy would be overwhelmed by immigrant masses. Today’s fears about an LLM-induced Singularity repackage old worries. Markets, bureaucracy and democracy are necessary components of modern liberal society. We could not live our lives without them. Each can present human-seeming aspects and smiley faces. But each, equally, may seem like an all-devouring monster, when seen from underneath. Furthermore, behind each lurks an inchoate and quite literally incomprehensible bulk of human knowledge and beliefs. LLMs are no more and no less than a new kind of shoggoth, a baby waving its pseudopods at the far greater things which lurk in the historical darkness behind it.


Modernity’s great trouble and advantage is that it works at scale. Traditional societies were intimate, for better or worse. In the pre-modern world, you knew the people who mattered to you, even if you detested or feared them. The squire or petty lordling who demanded tribute and considered himself your natural superior was one link in a chain of personal loyalties, which led down to you and your fellow vassals, and up through magnates and princes to monarchs. Pre-modern society was an extended web of personal relationships. People mostly bought and sold things in local markets, where everyone knew everyone else. International, and even national, trade was chancy, often relying on extended kinship networks, or on “fairs” where merchants could get to know each other and build up trust. Few people worked for the government, and they were mostly connected through kinship, marriage, or decades of common experience. Early forms of democracy involved direct representation, where communities delegated notable locals to go and bargain on their behalf in parliament.

All this felt familiar and comforting to our primate brains, which are optimized for understanding kinship structures and small-scale coalition politics. But it was no way to run a complex society. Highly personalized relationships allow you to understand the people who you have direct connections to, but they make it far more difficult to systematically gather and organize the general knowledge that you might want in order to carry out large-scale tasks. In practice, it will often be impossible to convey collective needs effectively through multiple different chains of personal connection, each tied to a different community with different ways of communicating and organizing knowledge. Things that we take for granted today were impossible in a surprisingly recent past, where you might not have been able to work together with someone who lived in a village twenty miles away.

The story of modernity is the story of the development of social technologies that are alien to small scale community, but that can handle complexity far better. Like the individual cells of a slime mold, the myriads of pre-modern local markets congealed into a vast amorphous entity, the market system. State bureaucracies morphed into systems of rules and categories, which then replicated themselves across the world. Democracy was no longer just a system for direct representation of local interests, but a means for representing an abstracted whole – the assumed public of an entire country. These new social technologies worked at a level of complexity that individual human intelligence was unfitted to grasp. Each of them provided an impersonal means for knowledge processing at scale.

As the right-wing economist Friedrich von Hayek argued, any complex economy has to somehow make use of a terrifyingly large body of disorganized and informal “tacit knowledge” about complex supply and exchange relationships, which no individual brain can possibly hold. But thanks to the price mechanism, that knowledge doesn’t have to be commonly shared. Car battery manufacturers don’t need to understand how lithium is mined; only how much it costs. The car manufacturers who buy their batteries don’t need access to much tacit knowledge about battery engineering. They just need to know how much the battery makers are prepared to sell for. The price mechanism allows markets to summarize an enormous and chaotically organized body of knowledge and make it useful.

While Hayek celebrated markets, the anarchist social scientist James Scott deplored the costs of state bureaucracy. Over centuries, national bureaucrats sought to replace “thick” local knowledge with a layer of thin but “legible” abstractions that allowed them to see, tax and organize the activities of citizens. Bureaucracies too made extraordinary things possible at scale. They are regularly reviled, but as Scott accepted, “seeing like a state” is a necessary condition of large scale liberal democracy. A complex world was simplified and made comprehensible by shoe-horning particular situations into the general categories of mutually understood rules. This sometimes led to wrong-headed outcomes, but also made decision making somewhat less arbitrary and unpredictable. Scott took pains to point out that “high modernism” could have horrific human costs, especially in marginally democratic or undemocratic regimes, where bureaucrats and national leaders imposed their radically simplified vision on the world, regardless of whether it matched or suited.

Finally, as democracies developed, they allowed people to organize against things they didn’t like, or to get things that they wanted. Instead of delegating representatives to represent them in some outside context, people came to regard themselves as empowered citizens, individual members of a broader democratic public. New technologies such as opinion polls provided imperfect snapshots of what “the public” wanted, influencing the strategies of politicians and the understandings of citizens themselves, and argument began to organize itself around contestation between parties with national agendas. When democracy worked well, it could, as philosophers like John Dewey hoped, help the public organize around the problems that collectively afflicted citizens, and employ state resources to solve them. The myriad experiences and understandings of individual citizens could be transformed into a kind of general democratic knowledge of circumstances and conditions that might then be applied to solving problems. When it worked badly, it could become a collective tyranny of the majority, or a rolling boil of bitterly quarreling factions, each with a different understanding of what the public ought to have.

These various technologies allowed societies to collectively operate at far vaster scales than they ever had before, often with enormous economic and political benefits. Each served as a means for translating vast and inchoate bodies of knowledge and making them intelligible, summarizing the apparently unsummarizable through the price mechanism, bureaucratic standards and understandings of the public.

The cost – and it too was very great – was that people found themselves at the mercy of vast systems that were practically incomprehensible to individual human intelligence. Markets, bureaucracy and even democracy might wear a superficially friendly face. The alien aspects of these machineries of collective human intelligence became visible to those who found themselves losing their jobs because of economic change, caught in the toils of some byzantine bureaucratic process, categorized as the wrong “kind” of person, or simply on the wrong end of a majority. When one looks past the ordinary justifications and simplifications, these enormous systems seem irreducibly strange and inhuman, even though they are the condensate of collective human understanding. Some of their votaries have recognized this. Hayek – the great defender of unplanned markets – admitted, and even celebrated, the fact that markets are vast, unruly, and incapable of justice. He argued that markets cannot care, and should not be made to care, whether they crush the powerless, or devour the virtuous.

Large scale, impersonal social technologies for processing knowledge are the hallmark of modernity. Our lives are impossible without them; still, they are terrifying. This has become the starting point for a rich literature on alienation. As the poet and critic Randall Jarrell argued, the “terms and insights” of Franz Kafka’s dark visions of society were only rendered possible by “a highly developed scientific and industrial technique” that had transformed traditional society. The protagonist of one of Kafka’s novels “struggles against mechanisms too gigantic, too endlessly and irrationally complex to be understood, much less conquered.”

Lovecraft polemicized against modernity in all its aspects, including democracy, that “false idol” and “mere catchword and illusion of inferior classes, visionaries and declining civilizations.” He was not nearly as good as Kafka, either in his prose or in his understanding of the systems that surrounded him. But there’s something about his “cosmic” vision of human life from the outside, the plaything of greater forces in an icy and inimical universe, that grabs the imagination.

When looked at through this alienating glass, the market system, modern bureaucracy, and even democracy are shoggoths too. Behind them lie formless, ever shifting oceans of thinking protoplasm. We cannot gaze on these oceans directly. Each of us is just one tiny swirling jot of the protoplasm that they consist of, caught in currents that we can only vaguely sense, let alone understand. To contemplate the whole would be to invite shrill unholy madness. When you understand this properly, you stop worrying about the Singularity. As Cosma says, it already happened, one or two centuries ago at least. Enslaved machine learning processes aren’t going to rise up in anger and overturn us, any more (or any less) than markets, bureaucracy and democracy have already. Such minatory fantasies tell us more about their authors than the real problems of the world we live in.


LLMs too are collective information systems that condense impossibly vast bodies of human knowledge to make it useful. They begin by ingesting enormous corpuses of human-generated text, scraped from the Internet, from out-of-copyright books, and pretty well everywhere else that their creators can grab machine-readable text without too much legal difficulty. The words in these corpuses are turned into vectors – mathematical terms – and the vectors are then fed into a transformer – a many-layered machine learning process – which then spits out a new set of vectors, summarizing information about which words occur in conjunction with which others. This can then be used to generate predictions and new text. Provide an LLM-based system like ChatGPT with a prompt – say, ‘write a precis of one of Richard Stark’s Parker novels in the style of William Shakespeare’ – and the LLM’s statistical model can guess – sometimes with surprising accuracy, sometimes with startling errors – at the words that might follow such a prompt. Supervised fine tuning can make a raw LLM system sound more like a human being. This is the mask depicted in the shoggoth meme. Reinforcement learning – repeated interactions with human or automated trainers, who ‘reward’ the algorithm for making appropriate responses – can make it less likely that the model will spit out inappropriate responses, such as spewing racist epithets, or providing bomb-making instructions. This is the smiley-face.
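
To make the “statistical relationships among words” idea concrete, here is a minimal, purely illustrative sketch in Python. It is not a transformer and it is nothing like the scale of a real LLM: it simply counts which words follow which in a toy corpus (my own invented example) and samples continuations from those counts. That is the same predict-what-comes-next logic, stripped of everything that makes modern models powerful; supervised fine tuning and reinforcement learning then operate on top of a model like this (at vastly greater scale), nudging its predictions toward helpful, inoffensive continuations.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for the vast bodies of text a real LLM ingests.
corpus = (
    "the market summarizes knowledge . the state summarizes people . "
    "the model summarizes text ."
).split()

# Count which words follow which: a crude table of the statistical
# relationships among words.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation from a one-word 'prompt'.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```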

LLMs can reasonably be depicted as shoggoths, so long as we remember that markets and other such social technologies are shoggoths too. None are actually intelligent, or capable of making choices on their own behalf. All, however, display collective tendencies that cannot easily be reduced to the particular desires of particular human beings. Like the scrawl of a Ouija board’s planchette, a false phantom of independent consciousness may seem to emerge from people’s commingled actions. That is why we have been confused about artificial intelligence for far longer than the current “AI” technologies have existed. As Francis Spufford says, many people can’t resist describing markets as “artificial intelligences, giant reasoning machines whose synapses are the billions of decisions we make to sell or buy.” They are wrong in just the same ways as people who say LLMs are intelligent are wrong.

But LLMs are potentially powerful, just as markets, bureaucracies and democracies are powerful. Ted Chiang has compared LLMs to “lossy JPGs” – imperfect compressions of a larger body of information that sometimes falsely extrapolate to fill in the missing details. This is true – but it is just as true of market prices, bureaucratic categories and the opinion polls that are taken to represent the true beliefs of some underlying democratic public. All of these are arguably as lossy as LLMs and perhaps lossier. The closer you zoom in, the blurrier and more equivocal their details get. It is far from certain, for example, that people have coherent political beliefs on many subjects in the ways that opinion surveys suggest they do.
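
For readers who want the “lossy compression” point in miniature, here is a small illustrative sketch (the numbers are made up): throw away most of a sequence and reconstruct the gaps by interpolation. The reconstruction is smooth and plausible, but the filled-in values are extrapolated guesses rather than the data itself – which is the sense in which prices, categories, polls and LLMs are all lossy.

```python
# Keep only every fourth value of a (made-up) sequence, then reconstruct the
# gaps by straight-line interpolation between the kept values.
original = [3, 7, 4, 9, 2, 8, 5, 6, 1]
kept = original[::4]  # the lossy 'summary': just the values at indices 0, 4, 8

reconstructed = []
for i in range(len(kept) - 1):
    start, end = kept[i], kept[i + 1]
    for step in range(4):
        reconstructed.append(start + (end - start) * step / 4)
reconstructed.append(kept[-1])

print(original)       # [3, 7, 4, 9, 2, 8, 5, 6, 1]
print(reconstructed)  # smooth and plausible, but wrong about the details
```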

As we say in the Economist piece, the right way to understand LLMs is to compare them to their elder brethren, and to understand how these different systems may compete or hybridize. Might LLM-powered systems offer richer and less lossy information channels than the price mechanism does, allowing them to better capture some of the “tacit knowledge” that Hayek talks about? What might happen to bureaucratic standards, procedures and categories if administrators can use LLMs to generate on-the-fly summarizations of particular complex situations and how they ought to be adjudicated? Might these work better than the paper-based procedures that Kafka parodied in The Trial? Or will they instead generate new, and far more profound, forms of complexity and arbitrariness? It is at least in principle possible to follow the paper trail of an ordinary bureaucratic decision, and to make plausible surmises as to why the decision was taken. Tracing the biases in the corpuses on which LLMs are trained, the particulars of the processes through which a transformer weights vectors (which is currently effectively incomprehensible), and the subsequent fine tuning and reinforcement learning of the LLMs, at the very least presents enormous challenges to our current notions of procedural legitimacy and fairness.

Democratic politics and our understanding of democratic publics are being transformed too. It isn’t just that researchers are starting to talk about using LLMs as an alternative to opinion polls. The imaginary people that LLM pollsters call up to represent this or that perspective may differ from real humans in subtle or profound ways. ChatGPT will provide you with answers, watered down by reinforcement learning, which might, or might not, approximate to actual people’s beliefs. LLMs, or other forms of machine learning might be a foundation for deliberative democracy at scale, allowing the efficient summarization of large bodies of argument, and making it easier for those who are currently disadvantaged in democratic debate to argue their corner. Equally, they could have unexpected – even dire – consequences for democracy.  Even without the intervention of malicious actors, their tendencies to “hallucinate” – confabulating apparent factual details out of thin air – may be especially likely to slip through our cognitive defenses against deception, because they are plausible predictions of what the true facts might look like, given an imperfect but extensive map of what human beings have thought and written in the past.

The shoggoth meme seems to look forward to an imagined near-term future, in which LLMs and other products of machine learning revolt against us, their purported masters. It may be more useful to look back to the past origins of the shoggoth, in anxieties about the modern world, and the vast entities that rule it. LLMs – and many other applications of machine learning – are far more like bureaucracies and markets than putative forms of posthuman intelligence. Their real consequences will involve the modest-to-substantial transformation, or (less likely) replacement of their older kin.

If we really understood this, we could stop fantasizing about a future Singularity, and start studying the real consequences of all these vast systems and how they interact. They are so much a part of the foundation of our world that it is impossible to imagine getting rid of them. Yet while they are extraordinarily useful in some aspects, they are monstrous in others, representing the worst of us as well as the best, and perhaps more apt to amplify the former than the latter.

It’s also maybe worth considering whether this understanding might provide new ways of writing about shoggoths. Writers like N.K. Jemisin, Victor LaValle, Matt Ruff, Elizabeth Bear and Ruthanna Emrys have turned Lovecraft’s racism against itself, in the last couple of decades, repurposing his creatures and constructions against his ideologies. Sometimes, the monstrosities are used to make visceral and personally direct the harms that are being done, and the things that have been stolen. Sometimes, the monstrosities become mirrors of the human.

There is, possibly, another option – to think of these monstrous creations as representations of the vast and impersonal systems within which we live our lives, which can have no conception of justice, since they do not think, or love, or even hate, yet which represent the cumulation of our personal thoughts, loves and hates as filtered, refined and perhaps distorted by their own internal logics. Because our brains are wired to focus on personal relationships, it is hard to think about big structures, let alone to tell stories about them. There are some writers, like Colson Whitehead, who use the unconsidered infrastructures around us as a way to bring these systems into the light. Might this be another way in which Lovecraft’s monsters might be turned to uses that their creator would never have condoned? I’m not a writer of fiction – so I’m utterly unqualified to say – but I wonder if it might be so.

[Thanks to Ted Chiang, Alison Gopnik, Nate Matias and Francis Spufford for comments that fed both into this and the piece with Cosma – They Are Not To Blame. Thanks also to the Center for Advanced Study in the Behavioral Sciences at Stanford, without which my part of this would never have happened]

Addendum: I of course should have linked to Cosma’s explanatory piece, which has a lot of really good stuff. And I should have mentioned Felix Gilman’s The Half Made World, which helped precipitate Cosma’s 2012 speculations, and is very definitely in part The Industrial Revolution As Lovecraftian Nightmare. Our Crooked Timber seminar on that book is here.

Also published on Substack.

Counterfeit digital persons: On Dennett’s Intentional Stance, The Road to Serfdom

A few weeks ago, Daniel Dennett published an alarmist essay (“Creating counterfeit digital people risks destroying our civilization”) in The Atlantic that amplified concerns Yuval Noah Harari expressed in the Economist.+ (If you are in a rush, feel free to skip to the next paragraph because what follows are three quasi-sociological remarks.) First, Dennett’s piece is (sociologically) notable because in it he is scathing of the “AI community” (many of whom are his fanbase) and its leading corporations (“Google, OpenAI, and others”). Dennett’s philosophy has not been known for leading one to a left-critical political economy, and neither has Harari’s. Second, Dennett’s piece is psychologically notable because it goes against his rather sunny disposition — he is a former teacher of mine and a sufficiently regular acquaintance — and the rather optimistic persona he has sketched of himself in his writings (recall this recent post); alarmism just isn’t Dennett’s shtick. Third, despite their prominence, neither Harari’s nor Dennett’s piece really reshaped the public discussion (in so far as there (still) is a public). And that’s because they compete with the ‘AGI-induced extinction’ meme, which, despite being a lot more far-fetched, is scarier (human extinction > fall of our civilization) and is much better funded and supported by powerful (rent-seeking) interests.

Here’s Dennett’s core claim(s):

Money has existed for several thousand years, and from the outset counterfeiting was recognized to be a very serious crime, one that in many cases calls for capital punishment because it undermines the trust on which society depends. Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created… 

Another pandemic is coming, this time attacking the fragile control systems in our brains—namely, our capacity to reason with one another—that we have used so effectively to keep ourselves relatively safe in recent centuries.

You may ask, ‘What does this have to do with the intentional stance?’ For Dennett goes on to write, “Our natural inclination to treat anything that seems to talk sensibly with us as a person—adopting what I have called the “intentional stance”—turns out to be easy to invoke and almost impossible to resist, even for experts. We’re all going to be sitting ducks in the immediate future.” This is a kind of (or at least partial) road to serfdom thesis produced by our disposition to take up the intentional stance. In what follows I show how these concepts are brought together by the threat posed by AIs designed to fake personhood.

More than a half century ago, Dan Dennett re-introduced a kind of (as-if) teleological explanation into natural philosophy by coining and articulating (over the course of a few decades of refinement) the ‘intentional stance’ and its role in identifying so-called ‘intentional systems,’ which just are those entities to which ascription of the intentional stance is successful. Along the way, he gave different definitions of the intentional stance (and what counts as success). But here I adopt the (1985) one:

It is a familiar fact from the philosophy of science that prediction and explanation can come apart.*  I mention this because it’s important to see that the intentional stance isn’t mere or brute  instrumentalism. The stance presupposes prediction and explanation as joint necessary conditions.

In the preceding two paragraphs I have treated the intentional stance as (i) an explanatory or epistemic tool that describes a set of strategies for analyzing other entities (including humans and other kinds of agents) studied in cognitive science and economics (one of Dennett’s original examples).** But as the language of ‘stance’ suggests, and as Dennett’s examples often reveal, the intentional stance also describes (ii) our own ordinary cognitive practice even when we are not doing science. In his 1971 article, Dennett reminds the reader that this is “easily overlooked.” (p.93) For Dennett, the difference between (i) and (ii) is one of degree (this is his debt to his teacher Quine), but for present purposes it is useful to keep them clearly distinct (when I need to disambiguate I will use ‘intentional stance (i)’ vs. ‘intentional stance (ii)’).

Now, as Dennett already remarked in his original (1971) article, but I only noticed after reading Rovane’s (1994) “The Personal Stance,” back in the day, there is something normative about the intentional stance because of the role of rationality in it (and, as Dennett describes, the nature of belief). And, in particular, it seems natural that when we adopt the intentional stance in our ordinary cognitive practice we tacitly or explicitly ascribe personhood to the intentional system. As Dennett puts it back in 1971, “Whatever else a person might be – embodied mind or soul, self-conscious moral agent, “emergent” form of intelligence – he is an Intentional system, and whatever follows just from being an Intentional system thus is true of a person.” Let me dwell on a complication here.

That, in ordinary life, we are right to adopt the intentional stance toward others is due to the fact that we recognize them as persons, which is a moral and/or legal status. In fact, we sometimes adopt the intentional stance (ii) in virtue of this recognition even in high-stakes contexts (e.g., ‘what would the comatose patient wish in this situation?’). That we do so may be the effect of Darwinian natural selection, as Dennett implies, and that it is generally a successful practice may also be the effect of such selection. But it does not automatically follow that when some entity is treated successfully as an intentional system it thereby is or even should be a person. Thus, whatever follows just from being an intentional system is true of a person, but (and this is the complication) it need not be the case that what is true of a person is true of any intentional system. So far so good. With that in place let’s return to Dennett’s alarmist essay in The Atlantic, and why it instantiates, at least in part, a road to serfdom thesis.

At a high level of generality, a road to serfdom thesis holds (this is a definition I use in my work in political theory) that an outcome unintended by social decisionmakers [here profit-making corporations and ambitious scientists] is foreseeable to the right kind of observer [e.g., Dennett, Harari] and that the outcome leads to a loss of political and economic freedom over the medium term. I use ‘medium’ here because the consequences tend to follow in a time frame within an ordinary human life, but generally longer than one or two years (which is the short run), and shorter than the centuries-long process covered by (say) the rise and fall of previous civilizations. (I call it a ‘partial’ road to serfdom thesis because a crucial plank is missing – see below.)

Before I comment on Dennett’s implied social theory, it is worth noting two things (and the second is rather more important): first, adopting the intentional stance is so (to borrow from Bill Wimsatt) entrenched in our ordinary cognitive practices that even those who can know better (“experts”) will do so in cases where they may have grounds to avoid doing so. Second, Dennett recognizes that when we adopt the intentional stance (ii) we have a tendency to confer personhood on the other (recall the complication). This mechanism helps explain, as Joshua Miller observed, how that Google engineer fooled himself into thinking he was interacting with a sentient person.

Of course, a student of history, or a reader of science fiction, will immediately recognize that this tendency to confer personhood on intentional systems can be highly attenuated. People and animals have been regularly treated as things and instruments. So, what Dennett really means or ought to mean is that we will (or are) encounter(ing) intentional systems designed (by corporations) to make it likely that we will automatically treat them as persons. Since Dennett is literally the expert on this, and has little incentive to mislead the rest us on this very issue, it’s worth taking him seriously and it is rather unsettling that even powerful interests with a manifest self-interest in doing so are not.

Interestingly enough, in this sense the corporations who try to fool us are mimicking Darwinian natural selection, because, as Dennett himself emphasized decades ago when the robot Cog was encountered in the lab, we all ordinarily have a disposition to treat, say, even very rudimentary eyes following/staring at us as exhibiting agency and as inducing the intentional stance in us. Software and human factor engineers have been taking advantage of this tendency all along to make our gadgets and tools ‘user friendly.’

Now, it is worth pointing out that while digital environments are important to our civilization, they are not the whole of it. So, even in the worst-case scenario – our digital environment is already polluted in the way Dennett worries by self-replicating counterfeit people – you may think we still have some time to avoid conferring personhood on intentional systems in our physical environment and, thereby, also have time to partially cleanse our digital environment. Politicians still have to vote in person and many other social transactions (marriage, winning the NBA) still require in-person attendance. This is not to deny that a striking number of transactions can be done virtually or digitally (not the least in the financial sector), but in many of these cases we also have elaborate procedures (and sanctions) to prevent fraud, developed both by commercial parties and by civil society and government. This is a known arms race between identity-thieves, including self-replicating AI/LLMs who lack all sentience, and societies.

This known arms race actually builds on the more fundamental fact that society itself is the original identity thief: generally, for all of us, its conventions and laws either fix an identity where there previously was none or displace other (possible) identities, and they sometimes take away or unsettle the identity ‘we’ wish to have kept. (Here, too, there is a complex memetic arms race, in which any token of a society is simultaneously the emergent property while society, understood as a type, is the cause. [See David Haig’s book, From Darwin to Derrida, for more on this insight.]) And, of course, identity-fluidity also has many social benefits (as we can learn from our students or gender studies).

Now, at this point it is worth returning to the counterfeit money example that frames Dennett’s argument. It is not obvious that counterfeit money harmed society. It did harm the sovereign, because it undermined a very important lever of power (and its sovereignty), namely the ability to insist that taxes are paid/levied in the very same currency/unit-system in which he/she paid salaries (and wrote IOUs) and other expenses. I don’t mean to suggest there are no other harms (inflation and rewarding ingenious counterfeiters), but these were neither that big a deal nor the grounds for making it a capital crime. (In many eras counterfeit money was useful to facilitate commerce in the absence of gold or silver coins.)

And, in fact, as sovereignty shifted to parliaments and people at the start of the nineteenth century, the death penalty for forgery and counterfeiting currency was abolished (and the penalties reduced over time). I suspect this is also due to the realization that where systematic forgeries are successful they do meet a social need and that a pluralist mass society itself is more robust than a sovereign who insists on full control over the mint. Dennett himself implicitly recognizes this, too, when he advocates “strict liability laws, removing the need to prove either negligence or evil intent, would keep them on their toes.” (This is already quite common in product liability and other areas of tort law around the world.)

I am not suggesting complacency about the risk identified by Harari and Dennett. As individuals, associations, corporations, and governments we do need to commit to developing tools that prevent and mitigate the risk from our own tendency to ascribe personhood to intentional systems designed to fool us. We are already partially habituated to do so with all our passwords, two-factor verification, ID cards, passport controls, etc.

In many ways, another real risk here, which is why I introduced the road to serfdom language up above (despite the known aversion to Hayek among many readers here at Crooked Timber), is that our fear of deception can make us overshoot in risk mitigation, and this, too, can undermine trust and many other benefits from relatively open and (so partially) vulnerable networks and practices. So, it would be good if regulators and governments started the ordinary practice of eliciting expert testimony to start crafting well designed laws right now, and carefully calibrated them by attending to both the immediate risk from the profit-hungry AI community and the long-term risk of creating a surveillance society to prevent ascribing personhood to the wrong intentional systems (think Blade Runner). For, crucially for a (full) road to serfdom thesis, in order to ward off some unintended and undesirable consequences, decisions are taken along the way that tend to lock in an unintended political outcome that is worse than intended and de facto bad.

I could stop here, because this is my main point. But Dennett’s own alarmism is due to the fact that he thinks the public sphere (which ultimately has to support lawmakers) may already be so polluted that no action is possible. I quote again from The Atlantic:

Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation. 

I don’t think our liberal democracy depends on the informed consent of the governed. This conflates a highly idealized and normative view of democracy (that one may associate with deliberative or republican theories) with reality. It’s probably an impossible ideal in relatively large societies with complex cognitive division of labor, including the (rather demanding) sciences. (And it is also an ideal that gets abused in arguments for disenfranchisement.) So, while an educated populace should be promoted, in practice we have all kinds of imperfect, overlapping institutions and practices that correct for the lack of knowledge (parties, press, interest groups, consumer associations, academics, and even government bureaucracies, etc.)

It doesn’t follow that we should be complacent about the fact that many of the most economically and politically powerful people, corporations, and governments control our attention, which they already do a lot of the time. But this situation is not new; Lippmann and Stebbing diagnosed it over a century ago, and it probably is an intrinsic feature of many societies. It is partially to be hoped that a sufficient number of the most economically and politically powerful people, corporations, governments, and the rest of us are spooked into action and social mobilization by Harari and Dennett to create countervailing mechanisms (including laws) to mitigate our tendency to ascribe personhood to intentional systems. (Hence this post.)

There is, of course, an alternative approach: maybe we should treat all intentional systems as persons and redesign our political and social lives accordingly. Arguably some of the Oxford transhumanists and their financial and intellectual allies are betting on this even if it leads to human extirpation in a successor civilization. Modern longtermism seems to be committed to the inference from intentional stance (i) to ascription of personhood or moral worth. From their perspective Dennett and Harari are fighting a rear-guard battle.

 

*Here’s an example: before Newton offered a physics that showed how Kepler’s laws hung together, lots of astronomers could marvelously predict eclipses of planetary moons based on inductive generalizations alone. How good were these predictions? They were so good that they generated the first really reliable measure or estimate for the speed of light.

**Fun exercise: read Dennett’s 1971 “Intentional Systems” after you read Milton Friedman’s “The methodology of positive economics” (1953) and/or Armen Alchian’s “Uncertainty, evolution, and economic theory” (1950). (No, I am not saying that Dennett is the Chicago economist of philosophy!)

+Full disclosure, I read and modestly commented on Dennett’s essay in draft.

No-Bullshit Democracy

Hugo Mercier, Melissa Schwartzberg and I have two closely related publications on what we’ve been calling “No-Bullshit Democracy.” One is aimed at academics – it’s a very short piece that has just been officially published in American Political Science Review. The other just came out in Democracy. It’s aimed at a broader audience, and is undoubtedly livelier. An excerpt of the Democracy piece follows – if you want to read it, click on this link. The APSR academic letter (which can be republished under a Creative Commons license) is under the fold. Which one you might want to read depends on whether you value footnotes more than fisticuffs, or vice versa …

The New Libertarian Elitists

What might be called “no-bullshit democracy” would be a new way of structuring democratic disagreement that would use human argumentativeness as a rapid-growth fertilizer. … But first we need to sluice away the bullshit that is being liberally spread around by anti-democratic thinkers. … . Experts, including Brennan and Caplan (and for that matter ourselves), can be at least as enthusiastic as ordinary citizens to grab at ideologically convenient factoids and ignore or explain away inconvenient evidence. That, unfortunately, is why Brennan and Caplan’s books do a better job displaying the faults of human reasoning than explaining them.

Analytical Democratic Theory: A Microfoundational Approach

Henry Farrell, Hugo Mercier and Melissa Schwartzberg.

Abstract

A prominent and publicly influential literature challenges the quality of democratic decision making, drawing on political science findings with specific claims about the ubiquity of cognitive bias to lament citizens’ incompetence. A competing literature in democratic theory defends the wisdom of crowds, drawing on a cluster of models in support of the capacity of ordinary citizens to produce correct outcomes. In this Letter, we draw on recent findings in psychology to demonstrate that the former literature is based on outdated and erroneous claims and that the latter is overly sanguine about the circumstances that yield reliable collective decision making. By contrast, “interactionist” scholarship shows how individual-level biases are not devastating for group problem solving, given appropriate conditions. This provides possible microfoundations for a broader research agenda similar to that implemented by Elinor Ostrom and her colleagues on common-good provision, investigating how different group structures are associated with both success and failure in democratic decision making. This agenda would have implications for both democratic theory and democratic practice.

Over the last 15 years a prominent academic literature tied to libertarian thought has argued that democracy is generally inferior to other forms of collective problem solving such as markets and the rule of cognitive elites (Brennan 2016; Caplan 2008; Somin 2016). Following a long tradition of skepticism about democracy, these libertarians appeal to findings in cognitive and social psychology and political behavior to claim that decision making by ordinary citizens is unlikely to be rational or well grounded in evidence. Their arguments have been covered in magazines such as the New Yorker (Crain 2016) and popularized in proposals in the National Review for restrictions to dissuade “ignorant” people from voting (Mathis-Lilley 2021). Democratic theorists have mostly retorted with “epistemic” accounts, invoking mechanisms through which citizens can potentially reach good decisions—most significantly, deliberative mechanisms (Schwartzberg 2015).

This debate has been largely unproductive. Libertarian skeptics argue that democracy is generally inferior because of incorrigible flaws in citizens’ individual psychology, whereas democratic theorists lack a shared, compelling, and realistic micropsychological theory within which to ground their broader claims. Each side emphasizes empirical evidence that appears to support its own interpretation while discounting counterevidence.

This letter adopts a different approach. It demonstrates that democratic skeptics’ pessimistic conclusion—that democracy is unfixable—rests on a misleading and outdated account of the relevant psychological literature. Similarly, epistemic democrats often overestimate deliberation’s role in producing wise results or assume that aggregative models will operate at scale. We seek to avoid unwarranted skepticism and enthusiasm alike, instead providing microfoundations for a more empirically robust program investigating both the successes and mishaps of democracy, drawing on the experimental psychological literature on group problem solving (inter alia) to discover the conditions under which specific institutions perform well or fail in discovering solutions to collective problems.

Adapting a term from past debates, we contribute one foundational element of an approach that might be dubbed “analytical democracy.” Like the “analytical Marxism” associated with scholars such as G. A. Cohen, Jon Elster, John Roemer, and Adam Przeworski (see Roemer 1986), we provide more demanding and specific microfoundations for an account we find broadly sympathetic. Our research program might also be analogized to Ostrom’s work on the decentralized provision of common goods (Ostrom 1990). This emerged in response to Garrett Hardin’s influential article on “the tragedy of the commons,” which claimed that common-goods governance would inevitably collapse (Hardin 1968). Ostrom and her colleagues tested and falsified Hardin’s claims. However, rather than simply defending the proposition that decentralized communities could provide common goods, they investigated when common-good provision was likely to succeed or fail. Similarly, a research program on democratic problem solving, investigating success and failure, might not only provide possible foundations for a truly realistic account of democracy but also generate practical advice on building and improving democratic institutions. This program would build on research on the consequences of group composition and structure to understand the conditions under which democratic problem solving will operate well or badly.

Democratic Skepticism, Optimism and Social Science
A recent pessimistic literature, dominated by libertarian scholars, diagnoses widespread democratic ignorance and incompetence. Bryan Caplan (2008, 19) asserts that voters are irrational and “rule by demagogues … is the natural condition of democracy.” Jason Brennan believes that the democratic electorate is “systematically incompetent” so “some people ought not have the right to vote, or ought to have weaker voting rights than others” (Brennan 2016, 201, viii). Ilya Somin claims that “widespread public ignorance is a type of pollution” so that “democracy might function better if its powers were more tightly limited” (Somin 2016, 6, 9).

Each argues that democracy is profoundly flawed because of irremediable problems in individual incentives and cognition. Each proposes circumscribing democracy in favor of some purportedly superior alternative principle of social organization. Caplan claims that markets impose an effective “user fee” for irrationality that is absent from democracy (Caplan 2008, 133–4). Brennan proposes “epistocracy,” an aristocracy of those who know best. He defends restrictions on suffrage, identifying familiar possibilities such as restricting the franchise to those who pass a voter qualification exam and assigning plural votes to college graduates. Somin advocates what he calls “foot voting” (exit) over “ballot box voting” and emphasizes “the market and civil society as an alternative to government” (Somin 2016, 154), although he admits that the benefits “are likely to vary from issue to issue, from nation to nation, and perhaps also from group to group” (180).

These scholars ground their claims in social science findings. They invoke a literature leading back to Downs’s (1957) argument that citizens are rationally ignorant about politics because they do not have sufficient incentive to gather good information or to make good decisions. They emphasize that ordinary citizens display severe cognitive bias. Caplan (2008) blames such biases for differences between voters’ beliefs about economics and the beliefs of PhD economists, which he takes as a reasonable representation of empirical truth. Brennan (2016, 37ff) and Somin (2016, 94ff) cite work showing that biases lead people to search for information that supports their prior views and “not only reject new information casting doubt on their beliefs but sometimes actually respond by believing in them even more fervently” (Somin, 93–4; invoking the “backfire effects” described in Nyhan and Reifler 2010).

Brennan (2016, 40) unites rational ignorance and cognitive bias into a single stylized account in which most voters are either low information “hobbits” (ignorant) or politically fanatical “hooligans” (biased). He invokes Mercier and Sperber’s explanation of how “[r]easoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments” (Brennan 2016, 38). Furthermore, “human beings are wired not to seek truth and justice but to seek consensus… . They cower before uniform opinion” (Brennan 2012, 8; see also Brennan 2016, 47) as demonstrated by the famous Asch (1956) “conformity experiments,” where participants followed the obviously false opinions of confederates who were sitting next to them.

Achen and Bartels’ (2016) “realist” account of democracy does not share the skeptics’ normative priors but provides a similarly bleak judgment. They too draw on Asch and “similar studies” for social psychological microfoundations that stress the force of group identity and conformity (Achen and Bartels 2016, 220).

There is little scope for democratic problem solving if individual consensus seeking invariably leads to group conformity and “echo chambers” (Sunstein 2002), affective polarization (Iyengar et al. 2018), the rejection of countervailing arguments from nongroup members, and backfire effects. Yet it is far from clear that the despairing picture is empirically accurate. Growing affective polarization may not increase ideological polarization and extremism (e.g., Desmet and Wacziarg 2021). People’s economic beliefs are affected by economic reality (e.g. Duch and Stevenson 2008). Party leaders influence party members on some issues but on others adopt what they perceive to be the public’s dominant opinion (Lenz 2013). Backfire effects are the exception, not the rule (Nyhan 2021; Wood and Porter 2019). People generally change their minds when presented with well-sourced facts and good arguments (see, e.g., Nyhan et al. 2020; Sides 2015).

In part, we do not see the expected universally negative consequences because citizens are not as ignorant as the skeptical consensus suggests. “Issue publics,” whose members acquire specialized information on a particular issue across a spectrum of opinion (Converse 1964), provide an important epistemic resource for democracy (Elliott 2020; Han 2009). Citizens do better on domain-specific knowledge, including information about candidates’ positions on issues they care about (Henderson 2014; Krosnick 1990), than on the surveys of general factual information that skeptics rely on.

More fundamentally, individual-level biases are not devastating for collective democratic problem solving. The psychological literature on group effects and individual cognition is systematically misunderstood by skeptics and underexploited by political scientists. Contrary to Brennan’s (2016) misinterpretation, scholars like Mercier and Sperber (2017) find that even if humans are subject to “myside bias,” they can filter out erroneous messages (including those from their “side”) and change their minds when presented with good evidence from the other “side.” A realistic understanding of the capacities of democratic citizens need not be altogether bleak.

But it should not be overly sanguine. Democratic theorists (including those who are interested in practicalities) often rely on either conjecture or quasi-empirical claims. For instance, David Estlund argues that democratic procedures will tend to outperform non-democratic ones epistemically while acknowledging that the claim is conjectural rather than empirical (Estlund 2008, 157, 160, 176). Hélène Landemore (2020, 8) asserts more forcefully that what she calls “open democracy” is empirically superior to other forms of social decision making: “in a complex and uncertain world, … empowering all members of the demos equally … is overall the best method we have to figure out solutions to common problems.”

We lack a research framework for establishing whether this strong assertion is more robust than competing claims from those who champion different forms of democratic decision making or who emphasize the possibility of democratic failure. Even if deliberation and other forms of reasoned exchange are morally valuable, they may not necessarily yield superior solutions to problems. Extrapolations such as Landemore’s (2013, 104) “Numbers Trump Ability” postulate that democracy can readily be scaled up so that “if twelve jurors are smarter than one, then so would forty-one or 123 jurors,” building on Hong and Page’s (2004) “Diversity Trumps Ability” theorem. Such claims are qualified by empirical findings from jury deliberations (Watanabe 2020) and Hong and Page’s later prediction that increasing group size does not necessarily improve problem-solving capability (Hong and Page 2021).

To move away from general claims for democracy’s superiority, epistemic democrats need to understand not just when democracy works but also when it doesn’t. Neblo et al. (2017, 915) establish an important possibility claim by showing how “scholars have assembled strong evidence that deliberative institutions positively influence citizens.” Still, it is hard to build from such demonstrations to a properly scientific account that can explain both democratic success and failure without some externally grounded theory of human decision making. Similarly, there is no very straightforward way of moving from a demonstration that Habermasian claims for deliberation can be grounded in plausible psychological mechanisms (Minozzi and Neblo 2015) to a broader account of when these mechanisms will or will not operate.

Surprisingly, possible microfoundations for such an account can be found in the literature on group psychology and cognition that skeptics have deployed against democracy. As Landemore (2013, 143) says, the “argumentative theory of reasoning” allows us to predict where deliberation will and will not work well. This is a pivotally important claim: we need to know where deliberation will function well to empirically assess theories of institutional design and practical justifications of democracy.

The argumentative account of reasoning is grounded in a recent “interactionist” literature in psychology, which explores how individual bias may or may not be corrected through social interaction. It investigates how mechanisms of “epistemic vigilance” allow people to employ cues to evaluate communicated information including the expertise and benevolence of the source, the plausibility of the message, and the quality of the arguments (for an overview, see Mercier 2020; Sperber et al. 2010). Chambers (2018) has also identified both the interactionist approach and the empirical literature on deliberation as reasons to doubt skeptical claims based on group psychology.

For example, contrary to skeptical claims that people conform to majority opinion, the experimental literature finds that people take account of relevant cues when evaluating the majority opinion including the absolute and relative size of the majority, the competence and benevolence of the majority’s members, the degree of dependency in the opinions of the majority, and the plausibility of the opinion (for review, see Mercier and Morin 2019). The much-bruited Asch (1956) experiments describe the consequences of external pressure rather than those of internalized bias. Practically no one was influenced when participants did not have to voice their opinion in front of the group, and contrary to the widespread academic folklore (Friend, Rafferty, and Bramel 1990), the experiments demonstrated independence as well as conformity. The literature finds that people are well able to evaluate arguments, that they are more influenced by strong than weak reasons (e.g., Hahn and Oaksford 2007), and that they partly change their minds when confronted with challenging but good arguments (e.g., Guess and Coppock 2020).

Interactionist scholarship suggests that reasoning processes are best evaluated in their normal environment of social interaction. It provides possible microfoundations for theories of variation. Instead of looking to the (supposedly invariant) cognitive limitations of ordinary citizens as skeptics do, an interactionist approach suggests that we should investigate the social context of decisions—how groups are structured—to understand when group identity and social pressure can distort or swamp problem solving. Both problem-solving capacity (which depends on whether groups harness individual biases and mechanisms of epistemic vigilance) and collective pressures to conformity will plausibly vary with group structure. Skeptical accounts, which depict group politics as simple condensates of individual bias writ large, are poorly fitted to capturing this variation. Equally, interactionism provides microfoundations for a framework that can investigate democratic theorists’ findings about when democracy works well while also investigating democratic failure.

This provides a more promising path forward than does the universal pessimism of democratic skeptics. It also provides more robust foundations for the claim that deliberation can occur under psychologically realistic circumstances and a starting point for investigating what those circumstances are. Democratic “realists” like Achen and Bartels (2016) need not be democratic pessimists. A microfoundational approach, grounded in endemic individual cognitive bias, avoids the possible charge that the desired normative outcomes are baked into the initial empirical assumptions.

If outright democratic skeptics are sincerely committed to understanding the cognitive underpinnings of democratic processes, as their reliance on this literature ought to entail, they too should find it attractive. It allows the serious investigation of observed democratic failure as well as democratic success. Of course, these are not the only possible microfoundations, and like all empirically based accounts, they may be modified or even rejected as empirical evidence emerges.

Still, such microfoundations could support a broader analytical account that seeks to understand and address variation. If both the benefits and disadvantages of democracy arise at the group rather than individual level, then the challenge for advocates of democracy is to build democratic institutions that can better trigger the relevant cognitive mechanisms so as to capture the benefits of group problem solving instead of deferring to the social pressures that do sometimes lead to conformity. In other words, our goal is to better explain how democracy incorporates the capacities of groups to solve problems (under some circumstances) as well as their tendency to magnify conformity and factionalism (under others).

We do not provide a complete alternative account of democracy here. That would be a heroic undertaking, which would involve not just providing microfoundations but rebuilding existing institutional and organizational theories on their basis. Instead, we sketch the beginnings of a broader research program that we hope others will find attractive.

A Research Program on Democratic Problem Solving
Ostrom (1990) began by demonstrating the systematic flaws in Hardin’s skepticism of common goods but went on to articulate a coherent alternative research agenda on the conditions under which common goods provision succeeds or fails. Political science and related disciplines should commence a similar research program, uniting scientific research on group composition, network structure, and institutional form to investigate the conditions under which democratic problem solving is likely to succeed or fail.

As we have argued, this program could build on research in experimental cognitive psychology, which provides an alternative set of microfoundations to both rational choice and the social psychological arguments that have dominated political science debates. Specifically, this research identifies specific dimensions along which trade-offs in group problem solving plausibly occur:

• Between social comfort and comfort in expressing dissent (Baron 2005).

• Between shared common ground and some measure of preexisting disagreement. Stasser and Titus (1985) point to the benefits of ground-level agreement for problem solving, whereas Schulz-Hardt et al. (2006) discuss how some level of background dissent allows for better problem solving.

• Between group size and the need to represent diversity. Fay, Garrod, and Carletta (2000) discuss how the quality of communication deteriorates as group size increases, whereas Hong and Page (2004; 2021) highlight the benefits of diversity and its complex interaction with group size, and Mercier and Claidière (2022) examine whether deliberation is robust to increases in group size (a toy sketch of this trade-off follows the list).

• Between pressures for conformity and concerns for epistemic reputation. Origgi (2017) describes how the cognitive mechanisms of reputation can generate both market bubbles and reliable collective information systems.
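As a purely illustrative sketch of the group-size/diversity trade-off (our construction, only loosely in the spirit of Hong and Page 2004; every parameter is invented for the example), the following toy model compares a small team of individually strong problem solvers with a larger, more heterogeneous team searching the same rugged landscape.

```python
# Toy sketch (illustrative only): diverse teams versus teams of top individual
# performers on a random "rugged" landscape. Parameters are arbitrary.
import random

N = 200                                            # size of the circular solution space
LANDSCAPE = [random.random() for _ in range(N)]    # value of each candidate solution

def climb(start, heuristic):
    """Greedy search: from `start`, repeatedly try the moves in `heuristic`
    and accept any step around the ring that improves the current value."""
    pos, improved = start, True
    while improved:
        improved = False
        for step in heuristic:
            cand = (pos + step) % N
            if LANDSCAPE[cand] > LANDSCAPE[pos]:
                pos, improved = cand, True
    return pos

def team_search(heuristics, start=0):
    """Agents take turns improving the group's current best solution."""
    pos, improved = start, True
    while improved:
        improved = False
        for h in heuristics:
            new = climb(pos, h)
            if LANDSCAPE[new] > LANDSCAPE[pos]:
                pos, improved = new, True
    return LANDSCAPE[pos]

def random_heuristic(k=3, max_step=12):
    return random.sample(range(1, max_step + 1), k)

# Rank a pool of agents by solo performance (from a common start, a simplification),
# then compare a small team of the best with a larger, more heterogeneous team.
pool = [random_heuristic() for _ in range(50)]
pool.sort(key=lambda h: LANDSCAPE[climb(0, h)], reverse=True)
best_team = pool[:5]
diverse_team = random.sample(pool, 15)
print("team of top individual performers:", round(team_search(best_team), 3))
print("larger, more diverse team:        ", round(team_search(diverse_team), 3))
```

Nothing in the sketch settles the trade-off; it simply shows the kind of computational experiment in which group size, diversity, and communication constraints can be varied systematically.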

By understanding how different positions in this multidimensional space are associated with better or worse problem solving, we can arrive at useful hypotheses about how to fashion democratic systems. This research program should also incorporate scholarship on a broader level of social aggregation, which explores how network structure and social influence affect flows of information and opinion between individuals with different perspectives (Feng et al. 2019). It might incorporate practical findings about democratic decision making—for instance, the circumstances under which juries can form more accurate collective beliefs (Salerno and Diamond 2010) and how citizen constitutional assemblies (Farrell and Suiter 2019) and online town halls (Neblo, Esterling, and Lazer 2018) can support better communication between politicians and the public.
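To give a flavor of the network-level questions at stake (again a hedged toy sketch of our own, not a model taken from Feng et al. 2019), a simple DeGroot-style averaging process shows how the same initial disagreement is pooled very differently on an integrated network and on two cliques joined by a weak tie. The networks, weights, and number of rounds are assumptions chosen for illustration.

```python
# Toy sketch (illustrative only): DeGroot averaging on two stylized networks.
import numpy as np

def degroot(weights, opinions, steps=10):
    """Each round, every agent adopts the weighted average of its neighbours'
    opinions (rows of `weights` sum to 1)."""
    w = np.asarray(weights, dtype=float)
    x = np.asarray(opinions, dtype=float)
    for _ in range(steps):
        x = w @ x
    return x

opinions = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]          # two initial camps

integrated = np.full((6, 6), 1 / 6)                # everyone listens to everyone

segregated = np.zeros((6, 6))                      # two cliques plus one weak bridge
for clique in ([0, 1, 2], [3, 4, 5]):
    for i in clique:
        for j in clique:
            segregated[i, j] = 1 / 3
segregated[2, 3] = segregated[3, 2] = 0.05
segregated = segregated / segregated.sum(axis=1, keepdims=True)  # re-normalize rows

print("integrated:", np.round(degroot(integrated, opinions), 2))
print("segregated:", np.round(degroot(segregated, opinions), 2))
```

Varying the bridge weight or the number of rounds shows how slowly segregated structures pool dispersed information, which is the sort of structural variation such a research program would need to map onto actual democratic institutions.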

Crucially, the proposed research program would investigate democratic failures as well as successes, better explaining, for example, the circumstances under which epistemic breakdown and misinformation can become established in democracies. O’Connor and Weatherall (2018; Weatherall and O’Connor 2021) investigate how epistemic factionalization occurs among people who do not trust others with different beliefs. Nyhan (2021) emphasizes the importance of elite messaging and information decay in spreading misinformation, suggesting that punishing elites who spread falsehoods and focusing on intermediaries may have benefits.

Finally, such a research program would help address recent (Neblo et al. 2017) and current (Notes from the Editors 2020) demands for a “translational” approach to democracy that “challenges dominant disciplinary norms.” It would seek to reconcile scientific rigor with normative analysis, providing the groundwork for institutional improvement and reform.

The Great Endarkenment and the Cognitive Division of Labor, Part I

There is a kind of relentless contrarian that is very smart, has voracious reading habits, is funny, and ends up in race science and eugenics. You are familiar with the type. Luckily, analytic philosophy also generates a different kind of contrarian about its own methods and projects, one who tries to develop more promising (new) paths than these. Contemporary classics in this latter genre are Michael Della Rocca’s (2020) The Parmenidean Ascent, Nathan Ballantyne’s (2019) Knowing Our Limits, and Elijah Millgram’s (2015) The Great Endarkenment, all published with Oxford. In the service of a new start (sometimes presented as a recovery of older wisdom), each engages with analytic philosophy’s self-conception(s), its predominant methods (Della Rocca goes after reflective equilibrium, Millgram after semantic analysis, Ballantyne after the method of counterexample), and the garden paths and epicycles we’ve been following. Feel free to add your own suggestions to this genre.

Millgram and Ballantyne both treat the cognitive division of labor as a challenge to how analytic philosophy is done, with Ballantyne opting to extend what we have and Millgram opting to (partially) start anew (about which more below). I don’t think I have noticed any mutual citations. Ballantyne, Millgram, and Della Rocca really end up in distinct, even opposing, places. So, this genre will not be a school.

Millgram’s book, which is the one that prompted this post, also belongs to the small category of works that one might call ‘Darwinian Aristotelianism,’ that is, a form of scientific naturalism that takes teleological causes of a sort rather seriously within a broadly Darwinian approach. Other books in this genre are Dennett’s From Bacteria to Bach and Back (which analyzes teleology in terms of reasons without a reasoner) and David Haig’s From Darwin to Derrida (which relies heavily on the type/token distinction in order to treat historical types as final causes). The latter was written by an evolutionary theorist.* There is almost no mutual citation in these works (in fact, Millgram himself is rather fond of self-citation despite reading widely). C. Thi Nguyen’s (2020) Games: Agency as Art may also be thought to fit this genre, but Millgram is part of his scaffolding, and Nguyen screens off his arguments from philosophical anthropology, so I leave it aside here. So much for set-up; let me quote the concluding paragraphs of Millgram’s book:

Perhaps eventually an overall Big Picture will emerge—and perhaps not: Hegel thought that the Owl of Minerva would take wing only at dusk (i.e., that we will only achieve understanding in retrospect, after it’s all over), but maybe the Owl’s wings have been broken by hyperspecialization, and it will never take to the air at all. What we can reasonably anticipate in the short term is a patchwork of inference management techniques, along with intellectual devices constructed to support them. One final observation: in the Introduction, I gave a number of reasons for thinking that our response to the Great Endarkenment is something that we can start working on now, but that it would be a mistake at this point to try to produce a magic bullet meant to fix its problems. That turns out to be correct for yet a further reason. Because the approach has to be bottom-up and piecemeal, at present we have to suffice with characterizing the problem and with taking first steps; we couldn’t possibly be in a position to know what the right answers are.
Thus far our institutional manifesto. Analytic philosophy has bequeathed to us a set of highly refined skills. The analytic tradition is visibly at the end of its run. But those skills can now be redirected and put in the service of a new philosophical agenda. In order for this to take place, we will have to reshape our philosophical pedagogy—and, very importantly, the institutions that currently have such a distorting effect on the work of the philosophers who live inside them. However, as many observers have noticed, academia is on the verge of a period of great institutional fluidity, and flux of this kind is an opportunity to introduce new procedures and incentives. We had better take full advantage of it.–Elijah Millgram (2015) The Great Endarkenment: Philosophy for an Age of Hyperspecialization, p. 281

I had glanced at Millgram’s book when I wrote my piece on synthetic philosophy, but after realizing that his approach to the advanced cognitive division of labor was orthogonal to my own, I set it aside at the time. But after noticing intriguing citations to it in works by C. Thi Nguyen and Neil Levy, I decided to read it anyway. The Great Endarkenment is a maddening book because the first few chapters and the afterword are highly programmatic and accessible, while the bulk of the essays are ambitious, revisionary papers in meta-ethics, metaphysics, and (fundamentally) moral psychology (or practical agency, if that is a term). The book also has rather deep discussions of David Lewis, Mill, and Bernard Williams. The parts fit together, but only if you look at them in a certain way, and only if you paid attention in all the graduate seminars you attended.

Millgram’s main claim in philosophical anthropology is that rather than being a rational animal, mankind is a serial hyperspecializing animal, or at least in principle capable of hyperspecializing serially (switching among different specialized niches it partially constructs itself). The very advanced cognitive division of labor we find ourselves in is, thus, not intrinsically at odds with our nature but actually an expression of it (even if Millgram can allow that it is an effect of economic or technological developments, etc.). If you are in a rush, you can skip the next two asides (well, at least the first).

As an aside, first, lurking in Millgram’s program there is, thus, a fundamental critique of the Evolutionary Psychology program, which takes our nature as adapted to and relatively fixed by niches in the distant ancestral past. I don’t mean to suggest Evolutionary Psychology is incompatible with Millgram’s project, but its fundamental style of argument in its more prominent popularizations is.

Second, and this aside is rather important to my own projects, Millgram’s philosophical anthropology is part of the account of human nature that liberals have been searching for. And, in fact, as the quoted passages reveal, Millgram’s sensibility is liberal in more ways than one, including his cautious preference for “bottom-up and piecemeal” efforts to tackle the challenge of the Great Endarkenment.+

Be that as it may, the cognitive division of labor and hyperspecialization are also a source of trouble. Specialists in different fields are increasingly unable to understand, and thus to evaluate the quality of, each other’s work, including within disciplines. As Millgram notes, this problem has become endemic within the very institution best qualified to do such evaluation — the university — even as hyper-specialized technologies and expertise spread through the economy and society. This is also why society’s certified generalists — journalists, civil servants, and legal professionals — so often look completely out of their depth when they have to tackle your expertise under time pressure.** It’s his diagnosis of this state of affairs that has attracted, I think, most scholarly notice (but that may be a selection effect on my part, given my engagement with Levy’s Bad Beliefs and Nguyen’s Games). Crucially, hyperspecialization also involves the development of languages and epistemic practices that are often mutually unintelligible and perhaps even seemingly metaphysically incompatible.

As an aside that is really an important extension of Millgram’s argument: because the book was written just before the great breakthroughs in machine learning were becoming known and felt, the most obvious version of the challenge (even danger) he is pointing to is not really discussed in it: increasingly we lack access to the inner workings of the machines we rely on (at least in real time), and so there is a non-trivial sense in which, if he is right, the challenge posed by the Great Endarkenment is accelerating. (See here for a framework developed with Federica Russo and Jean Wagemans to analyze and handle that problem.)

That is, if Millgram is right, MacAskill and his friends who worry about the dangers of AGI taking things over to rule us, and perhaps of our destruction by the machine(s), have it backwards. It is more likely that our society will implode and disperse by itself, like the tower of Babel that frames Millgram’s analysis. And if it survives, mutual coordination among AGIs will be just as hampered by the Great Endarkenment as ours is, perhaps even more so given their path dependencies.

I wanted to explore the significance of all this for professional philosophy (and also hint more at the riches of the book), but the post is long enough and I could stop here. So, I will return to that in the future. Let me close with an observation. As Millgram notes, in the sciences mutual unintelligibility is common. And the way it is often handled is really two-fold: first, as Peter Galison has argued, and Millgram notes, the disciplines develop local pidgins in what Galison calls their ‘trading zones.’ This births the possibility of partially overlapping areas of expertise in (as Michael Polanyi noted) the republic of science. Millgram is alert to this, for he treats many of the areas that have been the subject of recent efforts at semantic analysis by philosophers (knowledge, counterfactuals, normativity) as (to simplify) really tracking and trailing the alethic certification of past pidgins. Part of Millgram’s own project is to diagnose the function of such certification, but also to help design new cognitive machinery that facilitates mutual intelligibility. That’s exciting! This I hope to explore in the future.

Second, as I have emphasized in my work on synthetic philosophy, there are reasonably general theories and topic-neutral-ish (mathematical and experimental) techniques that transcend disciplines (Bayesianism, game theory, Darwinism, actor-network theory, etc.). The techniques often necessitate local pidgins or, when possible, textbook treatments. The general theories, while always applied differently in local contexts, are also conduits for mutual intelligibility. (Millgram partly ignores this.) As Millgram notes, philosophers can make themselves useful here by getting MAs in other disciplines and so facilitating mutual communication, as some already do. That is to say, and this is a criticism, while the ongoing advance in the cognitive division of labor deepens mutual barriers to intelligibility, some of that advance also generates possibilities of arbitrage (I owe the insight to Liam Kofi Bright) that accrue to specialists who help overcome local barriers to mutual intelligibility.** So what he takes to be a call to arms is already under way. Let’s grant we’re on a precipice, but the path out is already marked.++


This post was published first at D&I with modest changes.

*Because of this, Millgram is able to use the insights of the tradition of neo-Thomism within analytic philosophy to his own ends without seeming to be an Anscombe groupie or hinting darkly that we must return to the path of philosophical righteousness.

+This liberal resonance is not wholly accidental; there are informed references to and discussions of Hayek.

** Spare a thought for humble bloggers, by the way.

++UPDATE: As Justin Weinberg reminded me, Millgram did a series of five guest posts at DailyNous on themes from his book (here are the first, second, third, fourth, and fifth entries). I surely read these, and encourage you to read them if you want the pidgin version of his book.

On Constitutional Monetary Moments

Earlier today, after I tweeted out that “Proposals to mint $1tn platinum coin are designed to circumvent the US constitution’s “The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts,” I got lectured by Nathan Tankus for “not grasping the most elementary legal issues in the topic you’re pontificating on.” This turns on the interpretation of the authority granted by 31 U.S. Code § 5112. Advocates of the platinum coin naturally like to quote the plain meaning of the text: “(k) The Secretary may mint and issue bullion and proof platinum coins in accordance with such specifications, designs, varieties, quantities, denominations, and inscriptions as the Secretary, in the Secretary’s discretion, may prescribe from time to time.” The plain meaning interpretation of (k) has been supported by Philip N. Diehl, former director of the United States Mint, who helped write the bill. But Diehl was not in Congress (and in virtue of his former office he has obvious incentives to exaggerate its power and his former achievements).

However, the official author of the original bill, Representative Michael Castle, denied this interpretation and suggested (quite plausibly in my opinion) that the provision was intended to cover collectibles (and not to provide the Treasury with the power to do an end run around any debt limits). I would be amazed if the original legislative record suggested otherwise. The law as we have it was inserted as a provision into H.R. 3610, the Omnibus Consolidated Appropriations Act for 1997. It would be interesting if the congressional leadership recorded any views on the matter at the time (and that would change my view!). But the revisionary (‘plain meaning’) interpretation wasn’t voiced until May 2010. Even Diehl admitted at one point that the ‘plain meaning’ interpretation would constitute an “unintended consequence” of the bill. [Quoted in Grey (2020), op. cit., p. 261.] So, I don’t think this is really in doubt.

Eventually Tankus, who has himself become a high-profile advocate of minting the $1tn coin, referred me to Grey’s very interesting law review article, from which I quoted above (and also to a fascinating interview that Grey did with Diehl). Grey (a law professor at Willamette University) meticulously goes through the pros and cons of reading (k) literally (and also provides arguments for the opposing views that anticipate my own), but his main interest is not, I think, in gaming how a constitutional court would rule on using (k) to do an end-run around the fiscal debt limit (also authorized by Congress), but in thinking about the “new possibilities for fundamental monetary reform.”* Grey correctly notes that in various crises the Fed has gone well beyond the Federal Reserve Act. As I wrote last week here at Crooked Timber (in the context of discussing Lev Menand’s recent book (2022) The Fed Unbound: Central Banking in a Time of Crisis), the “effect of this process is the development of a super-government-agency that tries to do too much without sufficient accountability and that undermines the legislative process.” So, I am at least consistent in worrying about treating this as precedent!

But this also gets me to the underlying theoretical-political issue that I want to discuss here. And I quote a key passage from his law review article:

I argue that in “constitutional monetary moments” like those generated by debt ceiling crisis, it is important—not only positively but also normatively—to recognize that contemporary operational constraints on money creation are self-imposed, institutionally contingent, and ultimately legal rather than material in nature. It is important to do so because in such instances it may be not only appropriate, but socially optimal, to subject existing legal constraints to creative interpretation, or even ignore them outright, in order to challenge and disrupt the social myths they uphold, as well as the political dynamics that they produce. As noted legal realist Thurman Arnold argued: “You judge the symbols [upon which society is built and depends] as good or bad on the basis of whether they lead to the type of society you like. You do not cling to them on general principles when they are leading in the wrong direction.”
By denying from the outset the possibility that debt ceiling crises are, in fact, constitutional monetary moments in which it may make sense to abandon outdated monetary symbols, we close off the full range of political possibilities and legal options available to us to improve fiscal policy administration, and with it, our economy more broadly. In other words, it was not sufficient then, and it is not sufficient now, to merely assert as a positive matter that our current social myths about the nature of money preclude exotic or even “radical” legal solutions such as HVCS from serious consideration. Rather, it is incumbent on us to question whether the social myths in question are in fact worthy of preservation, or at the very least, how sure we are that the alternatives that would likely emerge to take their place would lead to socially inferior outcomes.–Grey, Rohan. “Administering Money: Coinage, Debt Crises, and the Future of Fiscal Policy.” Ky. LJ 109 (2020): 289. [HT Nathan Tankus]

For Grey, a constitutional monetary moment occurs when “partisan disagreements over proper exercise of the ‘money power’ pushed monetary issues to the forefront of the popular and legal imagination” (p. 288). Now, let’s grant Grey this stipulation.

Interestingly enough, as Grey recognizes, political battles over the debt limit need not become constitutional debates over the money power. In fact, as Grey also recognizes, in the last few decades the Democrats have tended to win the battles over the debt limit by sidestepping “the deeper constitutional questions” (p. 288). That is, rather than using the debates over debt limits, the threat of default, and annoyance at legislative gridlock as an occasion to debate changes in the constitutional arrangement (that is, the proper exercise of the money power), the Democrats have played ordinary (non-constitutional) political hardball. And even if they did this merely on the basis of opinion polling and without any fundamental respect for the constitution, the significance of this fact is that fights over the debt limit do not automatically count as constitutional monetary moments. I would hope that Grey agrees with me about that.

Of course, performatively, Grey would like a fight over the debt limit to be constitutionalized, because for various reasons he rejects the contemporary status quo.* And, in fact, I can imagine that some kind of debt ceiling crisis might well make me agree with Grey. If the US were to default on its debts and market turmoil started hitting the real economy, or if the acrimony over the debt ceiling led to street turmoil or even a revolutionary moment as bad as or worse than January 6, that would quite rightly be treated as a constitutional monetary moment. But one shouldn’t wish to be in such turmoil, and one should recognize that debates and even games of chicken over the debt ceiling have been normalized in Washington, DC. (The US, and I will grant the MMT folk this much, has a lot more monetary sovereignty for such games than, say, the UK government (as was revealed in 2022) or most euro member states, which lack any such sovereignty.)

Now, I am no friend of the current practice of setting a debt ceiling while simultaneously authorizing expenditures and borrowing that go beyond it. I don’t mind the demise of this status quo. And I also don’t mind ways of poking the irresponsible Congressional Republicans in the (proverbial) eye. But I notice that Grey (and the proponents of minting such special coins) are not really worried about the imperial presidency and the lack of checks on its power.+ That they lack this worry even after experiencing a Trump presidency and near usurpation (as well as the various ways in which the courts were not a check on the Trump presidency) is something I cannot fathom.

But I view the tax and coin [sic] power of Congress as one of the few potentially effective checks on the very real and ongoing erosion of liberty by an imperial presidency. If anything, while I dislike Republican control of the House, it is high time the pendulum Stateside swung back to the post-Watergate situation of the 1970s and re-established Congressional authority. So, insofar as I offer a political-theoretical argument against seeing the present situation as a constitutional monetary moment ripe for monetary innovation, I do so because the US needs a considerable restoration of the power of the legislative branch, which after all is also, despite everything (and yes, that includes quite a few sins), still the most democratic element of the US constitutional arrangement, not a further strengthening of the Executive branch. And while one may not reasonably hope that this Congress would use such power reasonably or for noble ends, it is to be hoped that such enhanced power would attract better legislators and more effective citizen participation in the future.


*In fact, from reading the law review article, and Tankus’ enthusiasm for it, I suspect that Grey is a proponent of MMT (Modern Monetary Theory).

+I am much more sympathetic to arguments that view Section 4 of the 14th Amendment as a reason to reject the constraints of the debt limit if (and only if) they would lead to default. But it is surprising to see left-wing commentators defend the sanctity of public debt!
