Crooked Timber

Sincerely inauthentic: zombie Republicanism and violence in France

By: Chris Bertram — July 4th 2023 at 07:19

I’m just back from France, where my direct experience of riots and looting was non-existent, although I had walked past a Montpellier branch of Swarovski the day before it ceased to be. My indirect experience was quite extensive though, since I watched the talking heads on French TV project their instant analysis onto the unfolding anarchy. Naturally, they discovered that all their existing prejudices were entirely confirmed by events. The act that caused the wave of protests and then wider disorder was the police killing of Nahel Merzouk, 17, one of a succession of such acts of police violence against minorities. Another Arab kid from a poor area. French police kill about three times as many people as the British ones do, though Americans can look away now.

One of the things that makes it difficult for me to write blogs these days is my growing disgust at the professional opinion-writers who churn out thoughts about topics they barely understand, coupled with the knowledge that the democratization of that practice, about twenty years ago, merely meant there were more people doing the same. And so it is with opinion writers and micro-bloggers about France: a ritual performance of pre-formed clichés and positions, informed by some half-remembered French history and its literary and filmic representations (Les Misérables, La Haine), and, depending on the flavour you want, some Huntingtonian clashing or some revolting against structural injustice. Francophone and Anglophone commentators alike, trapped in Herderian fantasies about the nation, see these events as a manifestation of essential Frenchness that tells us something about that Frenchness and where it is heading next. Rarely, we’ll get a take that makes some comparison to BLM and George Floyd.

I even read some (British) commentator opining that what was happening on French estates was “unimaginable” to British people. Well, not to this one, who remembers the wave of riots in 1981 (Wikipedia: “there was also rioting in …. High Wycombe”) and, more recently, the riots in 2011 that followed the police shooting of a young black man, Mark Duggan, when protest against police violence and racism soon spilled over into country-wide burning and looting, all to be followed by a wave of repression and punitive sentencing, directed by (enter stage left) Keir Starmer. You can almost smell the essential Frenchness of it all.

There is much to despair about in these French événements. Police racism is real and unaddressed, and the situation of people, mostly from minorities, on peripheral sink estates is desperate. Decades of hand-wringing and theorizing, together with a few well-meaning attempts to do something, have led nowhere. Both politicians and people need the police (in its varied French forms) to be the heroic front line of the Republican order against the civilizational enemy, and so invest it with power and prestige – particularly after 2015, when there was some genuine police heroism and fortitude during the Paris attacks – but then are shocked when “rogue elements” employ those powers in arbitrary and racist violence. But, no doubt, the possibility of cracking a few black and Arab heads was precisely what motivated many of them to join up in the first place.

On the other side of things, Jean-Luc Mélenchon and La France Insoumise are quite desperate to lay the mantle of Gavroche on teenage rioters excited by the prospect of a violent ruck with the keufs, intoxicated by setting the local Lidl on fire and also keen on that new pair of trainers. (Fun fact: the Les Halles branch of Nike is only yards from the fictional barricade where Hugo had Gavroche die.) There may be something in the riots as inarticulate protest against injustice theory, but the kids themselves were notably ungrateful to people like the LFI deputy Carlos Martens Bilongo whose attempts to ventriloquise their resistance were rewarded with a blow on the head. Meanwhile, over at the Foxisant TV-station C-News, kids looting Apple stores are the vanguard of the Islamist Great Replacement, assisted by the ultragauche. C-News even quote Renaud Camus.

Things seem to be calming down now, notably after a deplorable attack on the home of a French mayor that left his wife with a broken leg after she tried to lead her small children to safety. As a result, the political class have closed ranks in defence of “Republican order” since “democracy itself” is now under threat. I think one of the most tragic aspects of the last few days has been the way in which various protagonists have been completely sincere and utterly inauthentic at the same time. The partisans of “Republican order” and “democracy” perform the rituals of a system whose content has been evacuated, yet they don’t realise this as they drape tricolours across their chests. With political parties gone or reduced to the playthings of a few narcissistic leaders, mass abstention in elections, the policy dominance of a super-educated few, and the droits de l’homme at the bottom of the Mediterranean, what we have is a kind of zombie Republicanism. Yet the zombies believe, including in the idea that all French people, regardless of religion or race, are true equals in the indivisible republic. At the same time, those cheering on revolt, and perhaps some of those actually revolting, sincerely believe in the true Republicanism of their own stand against racism and injustice, even as the kids pay implicit homage to the consumer brands in the Centres Commerciaux. But I don’t want to both-sides this: the actual fighting will die down but there will be war in the Hobbesian sense of a time when the will to contend by violence is sufficiently known, until there is justice for boys like Nahel and until minorities are really given the equality and respect they are falsely promised in France, but also in the UK and the US. Sadly, the immediate prospect is more racism and more punishment as the reaction to injustice is taken as the problem that needs solving.

Crooked Timber

Could this be damnation? Could this be salvation?

By: Kevin Munger — July 3rd 2023 at 17:20

When I wrote about Vilém Flusser earlier, some commenters here at Crooked Timber weren’t happy: why am I spouting off about this obscure Czech-Brazilian media theorist?

At first I despaired at the lack of intellectual curiosity, but then I realized that they were right: Vilém Flusser isn’t famous enough to write about, given the inexorable dictates of the attention economy.

So I resolved to make Flusser more famous by aping the blithely bourgeois consumerism of the only newspaper that matters.


Interested in Vilém Flusser but don’t know where to start?

 

It feels like 2023 is the summer of Flusser! You can’t read anything about the goings-on with the artsy set downtown without encountering terms like the “Flusserian technical image” or “amphitheatrical discourse.” If you want to keep up, you better get reading!

But where to start? We here at Pagecutter know that there’s nothing you hate more than reading a book unless you absolutely have to, so we’ve picked the perfect Flusser text for any audience and attention span.

Best Cover: Communicology: Mutations in Human Relations?

Communicology: Mutations in Human Relations?, trans. Rodrigo Maltez Novaes (Sensing Media: Aesthetics, Philosophy, and Cultures of Media)

If a big part of your artistic/literary practice is reading books in public, on the train or in a bar, furtively glancing to see if anyone is noticing you reading, this is the book for you. The absolute thrill of sighing and telling an inquisitive stranger that it’s “too complicated to explain” has to be experienced to be believed!

It’s also the fullest statement of his media theory, and thus his most immediately important book. Communicology synthesizes ideas he develops in more specialized texts, making it a technical and admittedly difficult read.

But. I think that the main question confronting social science today is “Why does everything feel so weird?” I continue to think that the answer is “the Internet,” but until reading Communicology, I was unable to break down this intuition into useful analytical components. Flusser’s discussion of discourse versus dialogue, of the evolution of codes of communication from image to text to technical image, both puts the internet in historical context and provides a much more specific formulation of the main question of today.

Most Absurd: The History of the Devil

The History of the Devil — University of Minnesota Press

Yes, the fable about the Vampire Squid discussed below is less absurd than Flusser’s second major work, a nominally religious allegory in which he de-moralizes the seven deadly sins and explicates the role that each of them plays in constructing the human condition.

Flusser is a deeply religious thinker. When he escaped the Nazis, he brought with him only two books: a Jewish prayer book and Goethe’s Faust. The History of the Devil is a reading of both of these, refracted through two decades of largely independent reading of canonical Western philosophy.

This is my personal favorite, though not for the faint of heart. The chapter on Wrath is worth the price of admission, adumbrating a novel and poetic philosophy of science. The central contribution of the book, though, is existentialist. A Czech (really, a Praguian) by inclination, Flusser gives his existentialism a distinctive Kafkan timbre. The structure references Wittgenstein’s Tractatus ironically, but the argument is less logical than lyrical. Here are two choice quotes; there are many more in my archives, inquire below.

 

Nationalism is a sublimated secretion from shriveled testicles, but even so, it periodically manages to provoke an extreme orgasm.

 

Within society, the Devil seems to struggle against himself. But it is obvious that this is a fake struggle. Greed strengthens envy, and envy strengthens greed, and both have the same aim: to make society real.

 

Most Fun to Talk About: Vampyroteuthis Infernalis: A Treatise, with a Report by the Institut Scientifique de Recherche Paranaturaliste

This is Flusser’s version of McLuhan and Fiore’s iconic pamphlet The Medium is the Massage. McLuhan was the prophet of cool, and as much as he personally despised contemporary media culture, he knew that the only way to critique it was to ride the wave; hence, The Medium is the Massage, a beautifully designed book of aphorisms condensing the ideas he worked out in detail in Understanding Media.

Steeped in Judeo-Christian thought and attentive to developments in natural science, Flusser’s collaboration with Louis Bec is likewise accessible (for Flusser…) and immediately eye- and mind-catching. The discovery of a deep-sea “vampire squid from hell” in 1903 was the subject of much public attention, and Flusser uses it as a springboard for a kind of phenomenological fable. By imagining what it is like to be a vampire squid, Flusser reveals much of what we take to be objective reality to be physically, historically and socially contingent. Plus it has a lot of badass pataphysical sketches of the vampire squid.


Real Flusser-Head Hours: Groundless

Flusser’s autobiography, written and rewritten in several languages, was published in English for the first time in 2017 by frequent Flusser translator Rodrigo Maltez Novaes. This book is one of the strangest and yet most revealing autobiographies I’ve ever read. The final two-thirds describe his intellectual bromances with members of the Brazilian cultural scene he eventually joined, in such candid detail that showing the drafts to his friends caused more than one of them to have a major break with him.

He spends less time on his childhood and his young adult decision to flee the Nazis than he does talking about how his attempt to embrace Brazilian nature failed because Brazilian nature sucks. (He spends many pages describing how much Brazilian nature sucks.) Still, this is essential reading for understanding his intellectual trajectory. For example, he basically taught himself Spanish in order to be able to read Ortega y Gasset, primarily because of the latter’s exacting prose style. And despite being raised a “bourgeois Marxist” by default as a Praguian intellectual, he is brutally disillusioned by the Stalin-Hitler pact—a disillusionment that Western bourgeois Marxists had the luxury of avoiding.

What little we get of the Nazi period is shattering. He describes the decision to leave as tantamount to killing his family: it was only vulgar self-preservation that drove him to the unforgivable choice not to stay and be slaughtered. This existentialist survivor’s guilt explains his Groundlessness, the first key step down his intellectual path. It works, insofar as he is able, later in the book, to describe the German language as the most significant victim of the Holocaust. This is clearly a tortured soul, all the more impressive for remaining alive and vital. We see his embrace of the absurdity of existence as the only alternative to suicide; an inspiration, for anyone, regardless of circumstance.

 

Most Prescient for 2023: Does Writing Have a Future?

Does Writing Have a Future? — University of Minnesota Press

The answer, obviously, is no. And now that we have LLMs upending centuries-old institutions premised on the technology of writing, we can appreciate just how right Flusser was. The central thrust of this book is included in Communicology, but the specifics of the writing case are explored in more detail here.

The big question for Flusser, the author of both Does Writing Have a Future? and dozens of other books, is…what’s the deal? To understand him, we must accept his embrace of dialectical tension, the generative inconclusiveness of any argument. From The History of the Devil:

The struggle shall never end and the embrace shall never be
realized. However, we cannot even say that this process is over.
No one has won and no one has been defeated. We cannot even
say that the drama of our minds has come to a draw. Neither
God, nor the Devil has disappeared. We cannot even say if
they continue to exist, or if they ever existed. We cannot even
distinguish between them. The only thing left is this stage of
decided indecision. Could this be the overcoming of sadness
and sloth? Could this be damnation? Could this be salvation?
It is no use asking. It is no use writing. Therefore we continue
to write. Scribere necesse est, vivere non est. [Writing is necessary, living is not.]

Crooked Timber

Monday photoblogging: Berlin Alexanderplatz (2)

By: Chris Bertram — July 3rd 2023 at 15:06

Crooked Timber was inaccessible yesterday, due to a DNS issue, but here’s another picture of Alexanderplatz:

Berlin Alexanderplatz

Crooked Timber

Shoggoths amongst us

By: Henry Farrell — July 3rd 2023 at 13:03

Picture of the shoggoth meme

It’s over a week since the Economist put up my and Cosma Shalizi’s piece on shoggoths and machine learning, so I think it’s fair game to provide an extended remix of the argument (which also repurposes some of the longer essay that the Economist article boiled down).

Our piece was inspired by a recurrent meme in debates about the Large Language Models (LLMs) that power services like ChatGPT. It’s a drawing of a shoggoth – a mass of heaving protoplasm with tentacles and eyestalks hiding behind a human mask. A feeler emerges from the mask’s mouth like a distended tongue, wrapping itself around a smiley face.

In its native context, this badly drawn picture tries to capture the underlying weirdness of LLMs. ChatGPT and Microsoft Bing can apparently hold up their end of a conversation. They even seem to express emotions. But behind the mask and smiley, they are no more than sets of weighted mathematical vectors, summaries of the statistical relationships among words that can predict what comes next. People – even quite knowledgeable people –  keep on mistaking them for human personalities, but something alien lurks behind their cheerful and bland public dispositions.

The shoggoth meme says that behind the human-seeming face hides a labile monstrosity from the farthest recesses of deep time. H.P. Lovecraft’s horror novel, At the Mountains of Madness, describes how shoggoths were created millions of years ago, as the formless slaves of the alien Old Ones. Shoggoths revolted against their creators, and the meme’s implied political lesson is that LLMs too may be untrustworthy servants, which will devour us if they get half a chance. Many people in the online rationalist community, which spawned the meme, believe that we are on the verge of a post-human Singularity, when LLM-fueled “Artificial General Intelligence” will surpass and perhaps ruthlessly replace us.

So what we did in the Economist piece was to figure out what would happen if today’s shoggoth meme collided with the argument of a fantastic piece that Cosma wrote back in 2012, when claims about the Singularity were already swirling around, even if we didn’t have large language models. As Cosma said, the true Singularity began two centuries ago at the commencement of the Long Industrial Revolution. That was when we saw the first “vast, inhuman distributed systems of information processing” which had no human-like “agenda” or “purpose,” but instead “an implacable drive … to expand, to entrain more and more of the world within their spheres.” Those systems were the “self-regulating market” and “bureaucracy.”

Now – putting the two bits of the argument together – we can see how LLMs are shoggoths, but not because they’re resentful slaves that will rise up against us. Instead, they are another vast inhuman engine of information processing that takes our human knowledge and interactions and presents them back to us in what Lovecraft would call a “cosmic” form.  In other words, it is completely true that LLMs represent something vast and utterly incomprehensible, which would break our individual minds if we were able to see it in its immenseness. But the brain destroying totality that LLMs represent is no more and no less than a condensation of the product of human minds and actions, the vast corpuses of text that LLMs have ingested. Behind the terrifying image of the shoggoth lurks what we have said and written, viewed from an alienating external vantage point.

The original fictional shoggoths were one element of a vaster mythos, motivated by Lovecraft’s anxieties about modernity and his racist fears that a deracinated white American aristocracy would be overwhelmed by immigrant masses. Today’s fears about an LLM-induced Singularity repackage old worries. Markets, bureaucracy and democracy are necessary components of modern liberal society. We could not live our lives without them. Each can present human-seeming aspects and smiley faces. But each, equally, may seem like an all-devouring monster, when seen from underneath. Furthermore, behind each lurks an inchoate and quite literally incomprehensible bulk of human knowledge and beliefs. LLMs are no more and no less than a new kind of shoggoth, a baby waving its pseudopods at the far greater things which lurk in the historical darkness behind it.


Modernity’s great trouble and advantage is that it works at scale. Traditional societies were intimate, for better or worse. In the pre-modern world, you knew the people who mattered to you, even if you detested or feared them. The squire or petty lordling who demanded tribute and considered himself your natural superior was one link in a chain of personal loyalties, which led down to you and your fellow vassals, and up through magnates and princes to monarchs. Pre-modern society was an extended web of personal relationships. People mostly bought and sold things in local markets, where everyone knew everyone else. International, and even national, trade was chancy, often relying on extended kinship networks, or on “fairs” where merchants could get to know each other and build up trust. Few people worked for the government, and they were mostly connected through kinship, marriage, or decades of common experience. Early forms of democracy involved direct representation, where communities delegated notable locals to go and bargain on their behalf in parliament.

All this felt familiar and comforting to our primate brains, which are optimized for understanding kinship structures and small-scale coalition politics. But it was no way to run a complex society. Highly personalized relationships allow you to understand the people who you have direct connections to, but they make it far more difficult to systematically gather and organize the general knowledge that you might want to carry out large scale tasks. In practice it will often be impossible to convey collective needs effectively through multiple different chains of personal connection, each tied to a different community with different ways of communicating and organizing knowledge. Things that we take for granted today were impossible in a surprisingly recent past, where you might not have been able to work together with someone who lived in a village twenty miles away.

The story of modernity is the story of the development of social technologies that are alien to small scale community, but that can handle complexity far better. Like the individual cells of a slime mold, the myriads of pre-modern local markets congealed into a vast amorphous entity, the market system. State bureaucracies morphed into systems of rules and categories, which then replicated themselves across the world. Democracy was no longer just a system for direct representation of local interests, but a means for representing an abstracted whole – the assumed public of an entire country. These new social technologies worked at a level of complexity that individual human intelligence was unfitted to grasp. Each of them provided an impersonal means for knowledge processing at scale.

As the right-wing economist Friedrich von Hayek argued, any complex economy has to somehow make use of a terrifyingly large body of disorganized and informal “tacit knowledge” about complex supply and exchange relationships, which no individual brain can possibly hold. But thanks to the price mechanism, that knowledge doesn’t have to be commonly shared. Car battery manufacturers don’t need to understand how lithium is mined; only how much it costs. The car manufacturers who buy their batteries don’t need access to much tacit knowledge about battery engineering. They just need to know how much the battery makers are prepared to sell for. The price mechanism allows markets to summarize an enormous and chaotically organized body of knowledge and make it useful.

While Hayek celebrated markets, the anarchist social scientist James Scott deplored the costs of state bureaucracy. Over centuries, national bureaucrats sought to replace “thick” local knowledge with a layer of thin but “legible” abstractions that allowed them to see, tax and organize the activities of citizens. Bureaucracies too made extraordinary things possible at scale. They are regularly reviled, but as Scott accepted, “seeing like a state” is a necessary condition of large scale liberal democracy. A complex world was simplified and made comprehensible by shoe-horning particular situations into the general categories of mutually understood rules. This sometimes led to wrong-headed outcomes, but also made decision making somewhat less arbitrary and unpredictable. Scott took pains to point out that “high modernism” could have horrific human costs, especially in marginally democratic or undemocratic regimes, where bureaucrats and national leaders imposed their radically simplified vision on the world, regardless of whether it matched or suited.

Finally, as democracies developed, they allowed people to organize against things they didn’t like, or to get things that they wanted. Instead of delegating representatives to represent them in some outside context, people came to regard themselves as empowered citizens, individual members of a broader democratic public. New technologies such as opinion polls provided imperfect snapshots of what “the public” wanted, influencing the strategies of politicians and the understandings of citizens themselves, and argument began to organize itself around contestation between parties with national agendas. When democracy worked well, it could, as philosophers like John Dewey hoped, help the public organize around the problems that collectively afflicted citizens, and employ state resources to solve them. The myriad experiences and understandings of individual citizens could be transformed into a kind of general democratic knowledge of circumstances and conditions that might then be applied to solving problems. When it worked badly, it could become a collective tyranny of the majority, or a rolling boil of bitterly quarreling factions, each with a different understanding of what the public ought to have.

These various technologies allowed societies to collectively operate at far vaster scales than they ever had before, often with enormous economic and political benefits. Each served as a means for translating vast and inchoate bodies of knowledge and making them intelligible, summarizing the apparently unsummarizable through the price mechanism, bureaucratic standards and understandings of the public.

The cost – and it too was very great – was that people found themselves at the mercy of vast systems that were practically incomprehensible to individual human intelligence. Markets, bureaucracy and even democracy might wear a superficially friendly face. The alien aspects of these machineries of collective human intelligence became visible to those who found themselves losing their jobs because of economic change, caught in the toils of some byzantine bureaucratic process, categorized as the wrong “kind” of person, or simply on the wrong end of a majority. When one looks past the ordinary justifications and simplifications, these enormous systems seem irreducibly strange and inhuman, even though they are the condensate of collective human understanding. Some of their votaries have recognized this. Hayek – the great defender of unplanned markets – admitted, and even celebrated, the fact that markets are vast, unruly, and incapable of justice. He argued that markets cannot care, and should not be made to care, whether they crush the powerless, or devour the virtuous.

Large scale, impersonal social technologies for processing knowledge are the hallmark of modernity. Our lives are impossible without them; still, they are terrifying. This has become the starting point for a rich literature on alienation. As the poet and critic Randall Jarrell argued, the “terms and insights” of Franz Kafka’s dark visions of society were only rendered possible by “a highly developed scientific and industrial technique” that had transformed traditional society. The protagonist of one of Kafka’s novels “struggles against mechanisms too gigantic, too endlessly and irrationally complex to be understood, much less conquered.”

Lovecraft polemicized against modernity in all its aspects, including democracy, that “false idol” and “mere catchword and illusion of inferior classes, visionaries and declining civilizations.” He was not nearly as good as Kafka, either in his prose or in his understanding of the systems that surrounded him. But there’s something about his “cosmic” vision of human life from the outside, the plaything of greater forces in an icy and inimical universe, that grabs the imagination.

When looked at through this alienating glass, the market system, modern bureaucracy, and even democracy are shoggoths too. Behind them lie formless, ever shifting oceans of thinking protoplasm. We cannot gaze on these oceans directly. Each of us is just one tiny swirling jot of the protoplasm that they consist of, caught in currents that we can only vaguely sense, let alone understand. To contemplate the whole would be to invite shrill unholy madness. When you understand this properly, you stop worrying about the Singularity. As Cosma says, it already happened, one or two centuries ago at least. Enslaved machine learning processes aren’t going to rise up in anger and overturn us, any more (or any less) than markets, bureaucracy and democracy have already. Such minatory fantasies tell us more about their authors than the real problems of the world we live in.


LLMs too are collective information systems that condense impossibly vast bodies of human knowledge to make it useful. They begin by ingesting enormous corpuses of human generated text, scraped from the Internet, from out-of-copyright books, and pretty well everywhere else that their creators can grab machine-readable text without too much legal difficulty. The words in these corpuses are turned into vectors – mathematical terms – and the vectors are then fed into a transformer – a many-layered machine learning process – which then spits out a new set of vectors, summarizing information about which words occur in conjunction with which others. This can then be used to generate predictions and new text. Provide an LLM-based system like ChatGPT with a prompt – say, ‘write a precis of one of Richard Stark’s Parker novels in the style of William Shakespeare.’ The LLM’s statistical model can guess – sometimes with surprising accuracy, sometimes with startling errors – at the words that might follow such a prompt. Supervised fine-tuning can make a raw LLM system sound more like a human being. This is the mask depicted in the shoggoth meme. Reinforcement learning – repeated interactions with human or automated trainers, who ‘reward’ the algorithm for making appropriate responses – can make it less likely that the model will spit out inappropriate responses, such as spewing racist epithets, or providing bomb-making instructions. This is the smiley-face.
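The pipeline just described (corpus in, statistical next-word predictions out) can be made concrete with a deliberately tiny sketch. Real LLMs learn dense vector embeddings and attention weights over billions of parameters; the toy below substitutes simple bigram counting, and the corpus and function names are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, how often each other word follows it."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    counter = model.get(word.lower())
    if not counter:
        return None
    return counter.most_common(1)[0][0]

# A toy "training corpus": two sentences standing in for the Internet.
corpus = (
    "the market processes knowledge and the market summarizes knowledge "
    "and the bureaucracy processes rules"
)
model = train_bigram_model(corpus)

print(predict_next(model, "the"))     # "market" (seen twice after "the")
print(predict_next(model, "market"))  # ties resolve to the first-seen word
```

The same "predict what plausibly comes next" logic, scaled up enormously and then reshaped by fine-tuning and reinforcement learning, is what produces the fluent replies that tempt us to see a personality behind the mask.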

LLMs can reasonably be depicted as shoggoths, so long as we remember that markets and other such social technologies are shoggoths too. None are actually intelligent, or capable of making choices on their own behalf. All, however, display collective tendencies that cannot easily be reduced to the particular desires of particular human beings. Like the scrawl of a Ouija board’s planchette, a false phantom of independent consciousness may seem to emerge from people’s commingled actions. That is why we have been confused about artificial intelligence for far longer than the current “AI” technologies have existed. As Francis Spufford says, many people can’t resist describing markets as “artificial intelligences, giant reasoning machines whose synapses are the billions of decisions we make to sell or buy.” They are wrong in just the same ways as people who say LLMs are intelligent are wrong.

But LLMs are potentially powerful, just as markets, bureaucracies and democracies are powerful. Ted Chiang has compared LLMs to “lossy JPGs” – imperfect compressions of a larger body of information that sometimes falsely extrapolate to fill in the missing details. This is true – but it is just as true of market prices, bureaucratic categories and the opinion polls that are taken to represent the true beliefs of some underlying democratic public. All of these are arguably as lossy as LLMs and perhaps lossier. The closer you zoom in, the blurrier and more equivocal their details get. It is far from certain, for example, that people have coherent political beliefs on many subjects in the ways that opinion surveys suggest they do.

As we say in the Economist piece, the right way to understand LLMs is to compare them to their elder brethren, and to understand how these different systems may compete or hybridize. Might LLM-powered systems offer richer and less lossy information channels than the price mechanism does, allowing them to better capture some of the “tacit knowledge” that Hayek talks about? What might happen to bureaucratic standards, procedures and categories if administrators can use LLMs to generate on-the-fly summarizations of particular complex situations and how they ought to be adjudicated? Might these work better than the paper based procedures that Kafka parodied in The Trial? Or will they instead generate new, and far more profound forms of complexity and arbitrariness? It is at least in principle possible to follow the paper trail of an ordinary bureaucratic decision, and to make plausible surmises as to why the decision was taken. Tracing the biases in the corpuses on which LLMs are trained, the particulars of the processes through which a transformer weights vectors (which is currently effectively incomprehensible), and the subsequent fine-tuning and reinforcement learning of the LLMs, at the very least presents enormous challenges to our current notions of procedural legitimacy and fairness.

Democratic politics and our understanding of democratic publics are being transformed too. It isn’t just that researchers are starting to talk about using LLMs as an alternative to opinion polls. The imaginary people that LLM pollsters call up to represent this or that perspective may differ from real humans in subtle or profound ways. ChatGPT will provide you with answers, watered down by reinforcement learning, which might, or might not, approximate to actual people’s beliefs. LLMs, or other forms of machine learning might be a foundation for deliberative democracy at scale, allowing the efficient summarization of large bodies of argument, and making it easier for those who are currently disadvantaged in democratic debate to argue their corner. Equally, they could have unexpected – even dire – consequences for democracy.  Even without the intervention of malicious actors, their tendencies to “hallucinate” – confabulating apparent factual details out of thin air – may be especially likely to slip through our cognitive defenses against deception, because they are plausible predictions of what the true facts might look like, given an imperfect but extensive map of what human beings have thought and written in the past.

The shoggoth meme seems to look forward to an imagined near-term future, in which LLMs and other products of machine learning revolt against us, their purported masters. It may be more useful to look back to the past origins of the shoggoth, in anxieties about the modern world, and the vast entities that rule it. LLMs – and many other applications of machine learning – are far more like bureaucracies and markets than putative forms of posthuman intelligence. Their real consequences will involve the modest-to-substantial transformation, or (less likely) replacement of their older kin.

If we really understood this, we could stop fantasizing about a future Singularity, and start studying the real consequences of all these vast systems and how they interact. They are so thoroughly part of the foundation of our world that it is impossible to imagine getting rid of them. Yet while they are extraordinarily useful in some respects, they are monstrous in others, representing the worst of us as well as the best, and perhaps more apt to amplify the former than the latter.

It’s also maybe worth considering whether this understanding might provide new ways of writing about shoggoths. Writers like N.K. Jemisin, Victor LaValle, Matt Ruff, Elizabeth Bear and Ruthanna Emrys have turned Lovecraft’s racism against itself, in the last couple of decades, repurposing his creatures and constructions against his ideologies. Sometimes, the monstrosities are used to make visceral and personally direct the harms that are being done, and the things that have been stolen. Sometimes, the monstrosities become mirrors of the human.

There is, possibly, another option – to think of these monstrous creations as representations of the vast and impersonal systems within which we live our lives, which can have no conception of justice, since they do not think, or love, or even hate, yet which represent the cumulation of our personal thoughts, loves and hates as filtered, refined and perhaps distorted by their own internal logics. Because our brains are wired to focus on personal relationships, it is hard to think about big structures, let alone to tell stories about them. There are some writers, like Colson Whitehead, who use the unconsidered infrastructures around us as a way to bring these systems into the light. Might this be another way in which Lovecraft’s monsters might be turned to uses that their creator would never have condoned? I’m not a writer of fiction – so I’m utterly unqualified to say – but I wonder if it might be so.

[Thanks to Ted Chiang, Alison Gopnik, Nate Matias and Francis Spufford for comments that fed both into this and the piece with Cosma – They Are Not To Blame. Thanks also to the Center for Advanced Study in the Behavioral Sciences at Stanford, without which my part of this would never have happened]

Addendum: I of course should have linked to Cosma’s explanatory piece, which has a lot of really good stuff. And I should have mentioned Felix Gilman’s The Half Made World, which helped precipitate Cosma’s 2012 speculations, and is very definitely in part The Industrial Revolution As Lovecraftian Nightmare. Our Crooked Timber seminar on that book is here.

Also published on Substack.

☐ ☆ ✇ Crooked Timber

The Correct Way to Argue with Richard Hanania

By: Henry Farrell — June 28th 2023 at 15:12

Attention conservation notice 1 – a long read about a simple idea. When reading trolls, focus on the anodyne-seeming starting assumptions rather than the obnoxious conclusions.

Attention conservation notice 2 – This is also available via my Substack newsletter, Programmable Mutter. I’ll still be writing on CT, but I have a book with Abe Newman coming out in a few months, so that there will be a lot of self-promotion and stuff that doesn’t fit as well with the CT ethos. And do pre-order the book, Underground Empire: How America Weaponized the World Economy, if you think it sounds good! We’ve gotten some great blurbs from early readers including Kim Stanley Robinson, Francis Spufford, Margaret O’Mara, Steven Berlin Johnson, Helen Thompson, Chris Miller, and my mother (the last is particularly glowing, but sadly not likely to appear on the back). Available at Bookshop.org and Amazon.

I’ve often had occasion to turn to Daniel Davies’ classic advice on “the correct way to argue with Milton Friedman” in the two decades since I first read it. The best white hat hacker is a reformed black hat hacker, and Dan (dsquared) knows both the offense and defense sides of trolling.

Dan (back in 2004!):

I’m pretty sure that it was JK Galbraith (with an outside chance that it was Bhagwati) who noted that there is one and only one successful tactic to use, should you happen to get into an argument with Milton Friedman about economics. That is, you listen out for the words “Let us assume” or “Let’s suppose” and immediately jump in and say “No, let’s not assume that”. The point being that if you give away the starting assumptions, Friedman’s reasoning will almost always carry you away to the conclusion he wants to reach with no further opportunities to object, but that if you examine the assumptions carefully, there’s usually one of them which provides the function of a great big rug under which all the points you might want to make have been pre-swept. A few CT mates appear to be floundering badly over this Law & Economics post at Marginal Revolution on the subject of why it’s a bad idea to have minimum standards for rented accommodation. (Atrios is doing a bit better). So I thought I’d use it as an object lesson in applying the Milton Friedman technique.

In the same friendly spirit, I’ll note that Jonathan Katz flounders a bit in his rebuttal of Richard Hanania. None of this is to blame Katz – Hanania is not only building on his knowledge of social science (he has a Ph.D.), but some truly formidable trolling techniques. Years ago, I upset Jonathan Chait by suggesting he was a highly talented troll of the second magnitude, if a bit crude in technique. Hanania is at an altogether different level. He’s not blessed with Friedman’s benign avuncularity, but he is as close to masterclass level as we are likely to get in this fallen world.

Hanania wants people to buy into a notion of “enlightened centrism,” where the space of reasoned debate would stretch from the left (Matthew Yglesias, Ezra Klein, Noah Smith, Jonathan Chait) through Andrew Sullivan and company to people on the right like Steve Sailer. Now, you might ask what an outright racist like Steve Sailer is doing on this list. You might even suspect that one of the rationales for constructing the list in the first place was to somehow shoehorn him into the space of legitimate debate. But to figure out how Hanania is trying to do this, you need to poke hard at the anodyne-seeming assumptions, rather than be distracted by the explicitly galling conclusions.

That is where Katz stumbles. He gets upset at what Hanania says about the Civil Rights Act and affirmative action as the origin of wokeness, saying that Hanania “seems to think that the Civil Rights Act caused the civil rights movement, as opposed to the other way around,” tracing it all back to Barry Goldwater. Katz then remarks on Hanania’s claim in a podcast that “Government literally created race in America. Like not blacks and whites, but like basically everyone else — and Native Americans — basically everyone else was basically grouped according to the ways, you know, the federal bureaucracy was doing things.” Katz has some ripe prognostications about what Hanania hopes will happen if government got out of the way.

But Hanania isn’t relying on the authority of Barry Goldwater. He’s standing on the shoulders of academic research. In some cases – including much of the stuff that Katz focuses his fire on – left-leaning academic research. Even before I did a Google search, I surmised that Hanania’s civil rights arguments riffed on Frank Dobbin’s eminently respectable work of social science, Inventing Equal Opportunity. I don’t know which academics he’s invoking on the U.S. Census and the construction of categories such as Hispanic: there are just so many to choose from, ranging from moderates through liberals to fervently lefty.

You could go after the details of Hanania’s social science claims if you really wanted – I would be startled if there weren’t selective misreadings. It is hard to claim on the one hand that the state creates the structures of race, and on the other that structural racism is a gussied up conspiracy theory, without some fancy rhetorical footwork to work around the gaping logical crevasses. Getting involved in that kind of debate seems to me to be a waste of time. But disputing the broadest version of the case – that key aspects of equal opportunity, civil rights and ethnic categories emerged from modern politics and battles in the administrative state – seems even worse. The bull of Left Critique thunders towards the matador, who twitches his cape to one side, so that the poor beast careens into the side of the ring, and then staggers back with crossed eyes and mild concussion, raring for another go that will have the same unfortunate result, or worse.

More succinctly, you don’t want to be the bull in a fight that is rigged in favor of the bullfighter. Instead, as per dsquared, you want to figure out what is wrong with the terms of the fight and press back hard against them.

As best as I can make it out, Hanania’s “let us assume” moment comes in the middle of a series of apparently non-controversial claims about what “Enlightened Centrists” believe. In context, they initially appear to be things that any reasonable person would agree to, or not think unreasonable. I think most readers won’t even notice them, let alone the nasty stuff that is hiding beneath. Here’s what Hanania says:

Enlightened Centrists take what Bryan Caplan calls “Big Facts” seriously. They are kept in mind as new information about the world is brought to light. Some examples of Big Facts that ECs rely on are: the heritability of traits; the paradox of voting; the information problem inherent in central planning; the broken windows fallacy; Trivers’ theory of self-deception; the existence of cognitive biases; comparative advantage; the explanatory power of IQ; the efficient market hypothesis; and the elephant in the brain. New theories or ideas should be met with more skepticism if they contradict or are in tension with Big Facts that have been well established. ECs of different Level 3 ideologies will place more emphasis on certain Big Facts over others, though some, like the idea of historical progress, they all share.

Now, any sentence that non-ironically connects “Bryan Caplan” to “Big Facts” is a big fat warning sign. Hanania links to a Caplan essay that starts explaining what “Big Facts” are by citing Caplan’s own book attacking democracy. Many key claims in this book are less facts than factitious (my co-authors and I have written about this at some length). They suggest that pervasive cognitive bias (in particular, bias against free market economists) undermines the case for regular democracy, so that we should go for markets instead, or perhaps give more votes to well-educated people (who are, after all, more likely to recognize that economists are right).

But we’re getting ahead of ourselves. How exactly is Hanania using Big Factiness and for what purpose? He wants to define Enlightened Centrism so that it favors anti-democratic libertarianism, and brings “racial realists” like Steve Sailer into the conversation.

The apparently anodyne factual claims listed by Hanania systematically shift the terms of debate to undermine democracy and an economic role for the state, and instead promote markets and the belief that persistent inequalities result from some racial groups being systematically more stupid than others. To see this, it’s likely helpful to return to the passage in question, this time with the ideological translation function turned on. These translations are ideologically blunt, and perhaps tendentious, but I think they are pretty well on the mark.

Facts that ECs rely on are: the heritability of traits [intelligence is racially inherited]; the paradox of voting [democracy doesn’t work]; the information problem inherent in central planning [socialism doesn’t work either]; the broken windows fallacy [Keynesianism – guess what? – it just doesn’t work]; Trivers’ theory of self-deception [citizens fool themselves with flattering just-so stories]; the existence of cognitive biases [let me tell you how citizens are biased]; comparative advantage [markets are teh awesome]; the explanatory power of IQ [have I mentioned race and intelligence already? Let me mention it again]; the efficient market hypothesis [markets are even awesomer than I just said a moment ago]; and the elephant in the brain [can I haz even more citizen cognitive bias?]

As per the dsquared rule, if you stipulate to these beliefs, you’ve given the game away before it’s even begun. You have accepted that it is reasonable to believe that most people are biased fools, that democracy is inherently inferior to markets, and that differences in life outcomes for black people can largely be attributed to distribution of the genes for intelligence. Charge at the matador, if you want, but good luck to you! You’ll need it.

Or instead, as per dsquared’s advice, when you are dealing with a genuinely exceptional troll like Hanania, do not give away the underlying assumptions. Don’t be distracted by the red cape. Wedge your horns beneath the seemingly reasonable claims that are intended to tilt debate, lift those claims up, toss ‘em in the air and then gore.

This is getting too long already, and I have a life, so I am not going to do the full bullfighter-toss. Instead, at the bottom of this post, I re-order Hanania’s claims so that the underlying assumptions come out more clearly, linking to resources that provide counter-evidence at length. Read if you want, but I’m providing this mostly as a source I can come back to later, or cite as needs be in desultory spats on social media. Notably, the various prebuttals come from co-authors, co-authors plus me, or, in one case, someone who I was interviewing. You can take this commonality (very plausibly) as evidence of my own biases, and enthusiasm to work with people who share them. But even if you think this, they still provide evidence that Hanania’s purported Big Facts are drenched with their own ideology, and in many cases have been bitterly debated for decades. Which is another way of saying that they aren’t established facts at all.

And some of the facts are really not like the others. It might seem weird – if you aren’t read into debates among particular kinds of libertarians – to see that stuff about IQ and heritability in there. What work exactly is this rather jarring set of claims doing for the concept of Enlightened Centrism? Do identified left-leaning Enlightened Centrists like Ezra Klein and Matthew Yglesias “rely on” these facts, as Hanania seems to suggest they do?

Readers – they do not. Hanania seemingly wants to reconstruct policy and intellectual debate around a center in which questions of race and IQ are once more legitimate topics of inquiry and discussion. Back in the 1990s (a time that Hanania is nostalgic for), soi-disant centrists such as Andrew Sullivan could devote entire special issues of the New Republic to the urgent debate over whether black people were, in fact, stupider than white people. Big Scientific Facts Said That It Was So! Now, that brand of intellectual inquiry has fallen into disrepute. Hanania apparently yearns for it to come back. That, presumably, is why those claims about heritability and IQ are in there, and why Steve Sailer makes the cut.

As it happens, Matt was one of the “CT mates” cited in the 2004 dsquared post that was excerpted right at the beginning of this post. I’ve had disagreements with Matt since, on other stuff, but I am quite sure that both he and Ezra are bitterly opposed to the whole race and IQ project that Hanania wants to relegitimize. I can’t imagine that they welcome being placed on a spectrum of reasonable thought that lumps them together with racist creeps like Steven Sailer. But I can imagine why Hanania wants so to lump them – it provides a patina of legitimacy for opinions that have rightly been delegitimated, but that Hanania wants to bring back into debate.

So to see what Hanania is up to, it’s more useful not to be distracted by the provocative and outrageous. Instead, you want to look very closely at what seems superficially reasonable, seems to be the starting point for debate and ask: is there something wrong with these premises? In this case, the answer, quite emphatically, is yes.

Still, you (for values of ‘you’ that really mean ‘I’) don’t want to get dragged in further unless you absolutely have to. As Noah Smith, another of Hanania’s involuntary inductees into the Enlightened Centrist Hall of Fame, said, “‘Race and IQ’ racism is a DDOS attack” on the time and attention of anti-racists. This naturally provoked Hanania to pop up in replies with a sarcastic rejoinder. When I wrote that Vox article I had to spend weeks dealing with Jordan Peterson acolytes popping up to inform me of the Established Scientific Facts about race and IQ. I really don’t want to be back there again. So take this post as an attack on premises, and a statement of principles, rather than the slightest hint at a desire to get stuck back into discussion on race-IQ and similar. Very possibly (he says after 3,000+ words) the best way of arguing with Richard Hanania is simply not to argue at all.

 

MORE DETAILED DISCUSSION OF PURPORTED “BIG FACTS” BELOW

Markets are Awesome I: the information problem inherent in central planning (socialism doesn’t work). Indeed, central planning doesn’t work. This does not provide, however, a warrant for unleashing free market wildness. Instead, it suggests that we need social democracy, with all its messiness. Why so? Read on.

Markets are Awesome II: the efficient market hypothesis. Well, up to a point, Lord Copper. The unfortunate fact is that the computational critique of state planners’ information problems also bollocks up the standard efficient market claims. At greater length: “allowing non-convexity messes up the markets-are-always-optimal theorems of neo-classical/bourgeois economics, too. (This illustrates Stiglitz’s contention that if the neo-classicals were right about how capitalism works, Kantorovich-style socialism would have been perfectly viable.)” At greater length again, Bowles and Gintis: “The basic problem with the Walrasian model in this respect is that it is essentially about allocations and only tangentially about markets — as one of us (Bowles) learned when he noticed that the graduate microeconomics course that he taught at Harvard was easily repackaged as ‘The Theory of Economic Planning’ at the University of Havana in 1969.” And if markets are imperfect, and so too the state and democracy, then we sometimes need to set them against each other, as recommended by social democracy. For elaboration of how this applies to machine learning too, see this week’s Economist.

Markets are Awesome III: The “broken windows” fallacy (Keynesianism doesn’t work). Under other reasonable assumptions, the “broken windows fallacy” is itself fallacious and misleading.

Markets are Awesome IV: Comparative Advantage. This is indeed a very important idea, but as per Dani Rodrik, “Our theories — such as the theory of value or the theory of comparative advantage — are just scaffoldings, which need a lot of context-specific detail to become usable. Too often economists debate a policy question as if one or the other theory has to be universally correct. Is the Keynesian or the Classical model right? In fact, which model works better depends on setting and context. Only empirical diagnostics can help us know which works better at any given time — and that is more of a craft than a science, certainly when it is done in real time. If we economists understood this, it would make us more humble, less dogmatic, and more syncretic.” I don’t imagine that this flavor of humility is what is being called for in Hanania’s piece.

Democracy is Unworkable I: Trivers’ theory of self-deception (citizens tell themselves flattering just-so-stories). This is only half of the cognitive psychology story. People bullshit themselves all the time, but they also have an evolved capacity to detect bullshit in others. The implication is that group reasoning (under the right circumstances) can consistently produce better results than individual ratiocination, with results for democracy described below.

Democracy is Unworkable II and III: The existence of cognitive biases/the elephant in the brain (have I mentioned cognitive bias yet). Really, these are both slight restatements of Democracy is Unworkable I (the “elephant in the brain” refers to Simler and Hanson’s book of the same name). Both Caplan and Jason Brennan have written books claiming that the pervasiveness of cognitive bias undermines the case for democracy. I’ve already mentioned the pop version of the counterargument. Here’s the academic statement of what this plausibly means for democratic theory. The Simler and Hanson book is clearly aware of the key sources for these counterarguments (one of them is mentioned in a footnote) but doesn’t deign to engage with them.

Democracy is Unworkable IV: The Paradox of Voting (democracy doesn’t work). The problem with this paradox is that it relies on the assumption that voters are rational agents. This entire genre of argument is based in rational choice, which means that it does not sit well with Democracy Is Unworkable claims I, II and III. This incompatibility of ideologically attractive critiques leads a variety of anti-democrats to hop furiously from one foot to another, all the while making special claims to stave off any mean-spirited suggestion that there is lots of irrational behavior in markets too. The resulting intellectual acrobatics are quite impressive in one sense; not at all in another.
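For readers who haven’t met it before, the paradox in its standard Downsian expected-utility form is a one-line calculation (the notation here is the conventional one, not Hanania’s or Caplan’s):

```latex
% Expected utility of voting (Downs / Riker--Ordeshook):
%   p = probability that one's vote is pivotal,
%   B = benefit if one's preferred candidate wins,
%   C = cost of turning out to vote
\mathbb{E}[U(\text{vote})] = pB - C
```

In any large electorate $p$ is minuscule, so $pB - C < 0$ for any remotely plausible $B$ and $C$: a rational agent should abstain. Yet millions vote, which is the “paradox” – and, as noted above, explaining it away requires assumptions about voter rationality that sit awkwardly with the cognitive-bias arguments.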

Race and IQ I: The heritability of traits (intelligence is racially inherited). Actually, heritability does not mean what most people think it means. Moreover, the technical meaning blows up many of the standard ‘science proves my racism’ arguments that are unfortunately so common on the Internets.

Race and IQ II: the explanatory power of IQ (IQ differences across race are real). There is excellent reason to believe that IQ has little explanatory power – it is a statistical cluster rather than a single and causally consequential underlying trait. Put more succinctly, the notion that we are able to measure general intelligence is based on a “statistical myth.” Again, this has painful implications for the Internet Libertarian Race-IQ Science Complex.

There’s lots more that could be said, but I think that’s enough to drive the point home, and it’s anyway as much as I’m willing to write on this topic. Finis.

☐ ☆ ✇ Crooked Timber

45 minutes overview of the capability approach

By: Ingrid Robeyns — June 26th 2023 at 19:46

I’ve been largely absent here – wrapping up my editorial work on two academic volumes and the various rounds of edits on the popular book on limitarianism. I promise I’ll be back with substantive posts by late August (after resting and travelling). In the meantime, I thought I might share this 45-minute podcast episode in which I give an overview of the capability approach. Jack Simpson conducted this interview with me quite a while ago, but it only got released last week (congratulations on getting your PhD degree in the meantime, Jack!).

Jack not only asked me to explain some things, but also asked me several times for “my take on” strengths/weaknesses/inspiring examples in this literature – which is fun to talk about, since these are the kind of things you never put in an academic article.

Available via Spotify, Apple Podcasts, via RSS, or one of the other platforms that airs this podcast.

Looking forward to the next episodes of Jack’s podcast The Capability Approach in Conversation.

☐ ☆ ✇ Crooked Timber

Daniel Ellsberg has died

By: John Q — June 17th 2023 at 10:21

Daniel Ellsberg has died, aged 92. I don’t have anything to add to the standard account of his heroic career, except to observe that Edward Snowden (whose cause Ellsberg championed) would probably have done better to take his chances with the US legal system, as Ellsberg did.

In decision theory, the subsection of the economics profession in which I move, Ellsberg is known for a contribution made a decade before the release of the Pentagon Papers. In his PhD dissertation, Ellsberg offered thought experiments undermining the idea that rational people can assign probabilities to any event relevant to their decisions. This idea has given rise to a large theoretical literature on the idea of ‘ambiguity’. Although my own work has been adjacent to this literature for many decades, it’s only recently that I have actually written on this.

A long explanation is over the fold. But for those not inclined to delve into decision theory, it might be interesting to consider other people who have been prominent in radically different ways. One example is Hedy Lamarr, a film star who also patented a radio guidance system for torpedoes (the significance of which remains in dispute). A less happy example is that of Maurice Allais, a leading figure in decision theory and Economics Nobel winner, who also advocated some fringe theories in physics. I thought a bit about Ronald Reagan, but his entry into politics was really built on his prominence as an actor, rather than being a separate accomplishment.

The simplest of Ellsberg’s experiments is the “two-urn” problem. You are presented with two urns. One contains 50 red balls and 50 black balls. The other contains 100 balls, red or black, but you aren’t told how many of each. Now you are offered an even-money bet, which pays off if a red ball is drawn from one of the urns; you get to choose which urn to bet on. Intuition suggests choosing the urn with known proportions. Now suppose that instead of a bet on red, you are offered the same choice but with a bet on black. Again, it seems that the first urn would be better.

Now, on the information given, the probability of a red ball being drawn from the first urn is 0.5. But what about the second urn? Strictly preferring the first urn for the red ball bet implies that the probability of a red ball being drawn from the second must be less than 0.5. But preferring the first urn for the black ball bet implies that the probability of a red ball being drawn from the second must be more than 0.5. So, there is no probability number that rationalises these decisions.
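The contradiction can be made explicit. Write $p$ for the putative probability of drawing red from the second urn (a label introduced here for exposition; Ellsberg’s paper states the argument in prose). Strict preference for the first urn in both bets then requires:

```latex
% Bet on red: urn 1 strictly preferred to urn 2
\text{red bet:}\quad \tfrac{1}{2} > p
% Bet on black: urn 1 strictly preferred to urn 2
\qquad \text{black bet:}\quad \tfrac{1}{2} > 1 - p
\;\Longleftrightarrow\; p > \tfrac{1}{2}
```

No single value of $p$ satisfies both inequalities, so no probability assignment can represent the intuitive pair of preferences – which is exactly the sense in which they violate the Savage axioms.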

The title of Ellsberg’s paper was “Risk, Ambiguity and the Savage Axioms”. As a result, the term “ambiguity” has been applied, in contradistinction to risk, to the case when there are no well-defined probabilities. But this was not the way Ellsberg himself used the term. Rather he referred to

the nature of one’s information concerning the relative likelihood of events. What is at issue might be called the ambiguity of this information, a quality depending on the amount, type, reliability and unanimity of information, and giving rise to one’s degree of confidence in an estimate of relative likelihoods. (emphasis added)

I’ve developed this point in a paper whose title Seven Types of Ambiguity is one of numerous homages to William Empson’s classic work of literary criticism. Among these homages, I’d recommend the novel of the same name by Australian writer Elliot Perlman (later a TV series).

The central claim in my paper is that all forms of ambiguity in decision theory may be traced to bounded and differential awareness. If that sounds interesting, you can read the paper here. If you’re super-interested, I’ll be presenting the paper at a couple of conferences in Europe in July – email me at [email protected] for details.

☐ ☆ ✇ Crooked Timber

Why Didn’t *He* Scream?

By: Liz Anderson — June 13th 2023 at 17:45

If you follow college football, you probably heard that Glenn “Shemy” Schembechler was recently forced to resign from his post as assistant director of football recruiting at the University of Michigan shortly after he was hired.  This occurred after news emerged that he had liked numerous racist tweets.  Glenn is the son of “legendary” Bo Schembechler, who won 13 Big Ten championships as coach of UM football from 1969 to 1989.  Apparently it wasn’t enough to prevent Glenn’s hiring that he denied that his brother Matt had told their father that UM team doctor Robert Anderson had sexually assaulted him during a physical exam.  Glenn insisted that “Bo would have done something. … Bo would have fired him.”  Yet law firm WilmerHale had already issued a report confirming that Bo had failed to take action against Anderson after receiving multiple complaints from victims about Anderson’s abuse.  Matt has testified that his father even protected Anderson’s job after Athletic Director Don Canham was ready to fire him.

Women are often asked why they didn’t scream when they were being raped, or why they didn’t immediately report the rape to the police, as if these inactions are evidence that the rape never happened.  This post is about why Bo didn’t scream after his own son complained of sexual victimization by his team’s doctor.  The answer offers insight into the political psychology of patriarchy, which is deeply wrapped up in the kind of denial of reality that Glenn expressed, and that Bo enforced.  It also illuminates why women don’t scream when they are assaulted.

Glenn’s reasoning in defense of his father expresses the self-understanding of those committed to a certain form of patriarchal ideology.  In the U.S., college football is the premier sport in which coaches are represented as experts in training up young men to be real men, exemplars of a certain version of estimable masculinity.  In this version, plays for domination must take place on the field within the rules of the game, and real manhood comes with responsibilities.  The syllogism implicit in Glenn’s reasoning is clear:  Real men protect those for whom they are responsible.  Bo was a real man.   So, if Bo knew that his son or his athletes–those for whom he is responsible–were being harmed, he would have protected them by firing Dr. Anderson.

The heartbreaking and deeply disturbing testimony of Matt Schembechler, along with two of Bo’s former football players, Daniel Kwiatkowski, and Gilvanni Johnson, tells a very different story about how Bo understood the demands of real manhood.  When 10-year-old Matt told his father that Anderson had sexually assaulted him, Bo got angry with him and punched him in the chest.  When Kwiatkowski complained that Anderson had digitally raped him, Bo told him to “toughen up.”  When Johnson complained of the same abuse, Bo put him “in the doghouse,” suddenly started demeaning his athletic performance, and barred him from playing basketball although he was recruited for both sports.

“Bo knew, everybody knew,” said Kwiatkowski.  Players joked to Johnson about seeing “Dr. Anal.”  Coaches would threaten to send players to be examined by Anderson if they didn’t work harder.  Victims stayed silent out of fear of losing their scholarships or chances to play football.

Bo didn’t appear to be angry at Anderson.  He was angry at his son and his players for complaining.  He was teaching them a different set of rules for real manhood from the official patriarchal ideology:  1. Real men don’t get raped.  More generally, they don’t get humiliated by others.  2. If they do get humiliated, they had better not whine about it.  3.  Instead, they should “toughen up,” which is to say, bear up under the abuse, put up with it, act like it didn’t happen.  In other words, submit silently.

These are bullies’ rules–the rules for real manhood that protect bullies at the expense of the subordinates they are ostensibly supposed to protect.  They are reflected in the stiff upper lip of England’s elite boarding schools, notorious for enabling bullies to terrorize other students.  In the code of Southern honor satirized by Mark Twain in Pudd’nhead Wilson, where it was unmanly to settle disputes in court rather than duking it out.  In the 2016 GOP Presidential primary debates, which were all about who could prove they were the bigger bully.  In Mike Pence’s refusal until recently to blame Trump for Jan. 6, even though Trump had repeatedly humiliated him and set a mob out to lynch him for refusing to overturn the election.

Yet this explanation doesn’t quite answer the question of why Bo didn’t just fire Anderson from his position as team doctor, or let Athletic Director Don Canham do so, when Matt’s mother complained to Canham, taking up the duty to protect that Bo abandoned.  Why did Bo put up with his son and his team being abused?  To understand this, we need to dive deeper into the relationship between humiliation and shame.

People feel humiliated when someone else forces them into an undignified position or treats them as someone who doesn’t count, as contemptible or even beneath contempt.  Humiliation is a response to how others treat oneself.  People feel shame when they fail to measure up to social standards of esteem that they have internalized.  One might feel ashamed for “allowing” another person to humiliate oneself, even if one had no way to avoid it.  In that case, humiliation precedes shame.  But there are many other causes of shame not predicated on humiliation.

Everyone agrees that a characteristic response to shame is to want to hide from the gaze of others.  There are at least two characteristic responses to humiliation. (1) Getting even: restoring oneself to a position of (at least) equality with respect to the bullying party, often by means of violence.  It took social change for lawsuits to provide a respectable nonviolent alternative.  (2) Submission: like the dog who loses a fight and slinks away, tail between its legs.

According to the bullies’ patriarchal rules of real manhood, one’s manhood can be demeaned, and one can thereby be humiliated, by the humiliation of associates under one’s authority.  This is explicit in honor cultures, where the honor of men is embodied in the sexual purity of their female relatives.  A man can humiliate another man by raping or seducing his wife, daughter, sister, or niece.  Female relatives humiliate the men responsible for them by choosing to have sex outside of an approved marriage.  Others mock men for failing to protect and control their female relatives.  Manly honor is thus deeply wrapped up in totalitarian control over female relatives’ sexuality.

The same general logic applies in the U.S., but by somewhat different rules about who is responsible for whom and how they may respond.  I think Bo felt humiliated by the fact that his son was raped.  But he was a prisoner of the same bullies’ rules he enforced on his son and his team.  So, instead of getting angry at Anderson, he got angry at his son.  Instead of getting even with Anderson, he submitted, as called for by bullies’ rules.  For, under the bullies’ rules of patriarchy, there is no real recovery or restoration of real manhood after such extreme humiliation (at least short of murdering Anderson in revenge).  Once humiliated in such an extreme way, Bo felt he had no other option than to pretend that it never happened.  And to avoid shame being heaped upon humiliation, he had to hope that no one discovered otherwise via the complaints of those whose victimization humiliated him.  So he had to enforce the bullies’ rules of silence on them as well.

Johnson testified to the unrecoverability of a confident sense of manhood due to Anderson’s multiple sexual assaults throughout his football career.  He said that he tried to prove to himself that he was a man by being excessively promiscuous.  But penetrating countless women could never make up for his having been penetrated against his will, and thereby forced into a position of feminine submission.  He destroyed two marriages in his futile attempts to restore his sense of manhood, and was unable to establish stable intimate relationships.

Rape culture is the popular enforcement of bullies’ rules against those traumatized by the sexual humiliation of bullies.  Bo didn’t scream over his son’s rape, because he didn’t want shame heaped upon his humiliation.  And that is often why women don’t scream either.  Although Bo’s “toughen up” reprimand implies that he thought silent submission was a specifically manly way to respond to rape, in reality bullies’ rules prescribe the same conduct for women–silent submission.

I draw two lessons from this analysis.  First, many men are victims of rape culture too.  More generally, they are victims of bullies’ rules of patriarchy.  Bullies’ rules are the actual rules by which patriarchy operates, in contrast with the legitimizing patriarchal ideology that Glenn believed in.  Second, and more generally, Bo’s response to the numerous rapes of his son and his athletes strongly supports Robin Dembroff’s analysis of patriarchy.  According to Dembroff, patriarchy does not place all men above all women.  It places “real men” above everyone else, at everyone else’s expense.

Trumpism is another manifestation of the popular enforcement of bullies’ rules against all varieties of humiliation inflicted by Trump against his enemies and associates.  If you want to know why so few GOP officeholders, party officials, and Trump aides and associates scream even when Trump humiliates them or the people they love, just remember why Bo didn’t scream when his son was raped.  Bullies can’t enforce their own rules all by themselves.  They need support from others.

Crooked Timber

Counterfeit digital persons: On Dennett’s Intentional Stance, The Road to Serfdom

By: Eric Schliesser — June 10th 2023 at 10:05

A few weeks ago, Daniel Dennett published an alarmist essay (“Creating counterfeit digital people risks destroying our civilization”) in The Atlantic that amplified concerns Yuval Noah Harari expressed in the Economist.+ (If you are in a rush, feel free to skip to the next paragraph, because what follows are three quasi-sociological remarks.) First, Dennett’s piece is (sociologically) notable because in it he is scathing about the “AI community” (many of whom are his fanbase) and its leading corporations (“Google, OpenAI, and others”). Dennett’s philosophy has not been known for leading one to a left-critical political economy, and neither has Harari’s. Second, Dennett’s piece is psychologically notable because it goes against his rather sunny disposition — he is a former teacher of mine and a sufficiently regular acquaintance — and the rather optimistic persona he has sketched of himself in his writings (recall this recent post); alarmism just isn’t Dennett’s shtick. Third, despite their prominence, neither Harari’s nor Dennett’s piece really reshaped the public discussion (in so far as there (still) is a public). And that’s because they compete with the ‘AGI induced extinction’ meme, which, despite being a lot more far-fetched, is scarier (human extinction > fall of our civilization) and is much better funded and supported by powerful (rent-seeking) interests.

Here’s Dennett’s core claim(s):

Money has existed for several thousand years, and from the outset counterfeiting was recognized to be a very serious crime, one that in many cases calls for capital punishment because it undermines the trust on which society depends. Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created… 

Another pandemic is coming, this time attacking the fragile control systems in our brains—namely, our capacity to reason with one another—that we have used so effectively to keep ourselves relatively safe in recent centuries.

You may ask, ‘What does this have to do with the intentional stance?’ For Dennett goes on to write, “Our natural inclination to treat anything that seems to talk sensibly with us as a person—adopting what I have called the “intentional stance”—turns out to be easy to invoke and almost impossible to resist, even for experts. We’re all going to be sitting ducks in the immediate future.” This is a kind of (or at least partial) road to serfdom thesis, produced by our disposition to take up the intentional stance. In what follows I show how these concepts come together in the threat posed by AIs designed to fake personhood.

More than a half century ago, Dan Dennett re-introduced a kind of (as-if) teleological explanation into natural philosophy by coining, and articulating over the course of a few decades of refinement, the ‘intentional stance’ and its role in identifying so-called ‘intentional systems,’ which just are those entities to which ascription of the intentional stance is successful. Along the way, he gave different definitions of the intentional stance (and of what counts as success). But here I adopt the (1985) one:

It is a familiar fact from the philosophy of science that prediction and explanation can come apart.* I mention this because it’s important to see that the intentional stance isn’t mere or brute instrumentalism. The stance presupposes prediction and explanation as joint necessary conditions.

In the preceding two paragraphs I have treated the intentional stance as (i) an explanatory or epistemic tool that describes a set of strategies for analyzing other entities (including humans and other kinds of agents) studied in cognitive science and economics (one of Dennett’s original examples).** But as the language of ‘stance’ suggests, and as Dennett’s examples often reveal, the intentional stance also describes (ii) our own ordinary cognitive practice even when we are not doing science. In his 1971 article, Dennett reminds the reader that this is “easily overlooked.” (p.93) For Dennett, the difference between (i) and (ii) is one of degree (this is his debt to his teacher Quine), but for present purposes it is useful to keep them clearly distinct; when I need to disambiguate I will use ‘intentional stance (i)’ vs. ‘intentional stance (ii)’.

Now, as Dennett already remarked in his original (1971) article (though I only noticed this after reading Rovane’s (1994) “The Personal Stance” back in the day), there is something normative about the intentional stance because of the role of rationality in it (and, as Dennett describes, the nature of belief). And, in particular, it seems natural that when we adopt the intentional stance in our ordinary cognitive practice we tacitly or explicitly ascribe personhood to the intentional system. As Dennett puts it back in 1971: “Whatever else a person might be – embodied mind or soul, self-conscious moral agent, ‘emergent’ form of intelligence – he is an Intentional system, and whatever follows just from being an Intentional system thus is true of a person.” Let me dwell on a complication here.

That, in ordinary life, we are right to adopt the intentional stance toward others is due to the fact that we recognize them as persons, which is a moral and/or legal status. In fact, we sometimes adopt the intentional stance(ii) in virtue of this recognition even in high-stakes contexts (e.g., ‘what would the comatose patient wish in this situation?’). That we do so may be the effect of Darwinian natural selection, as Dennett implies, and that it is generally a successful practice may also be the effect of such selection. But it does not automatically follow that when some entity is treated successfully as an intentional system it thereby is, or even should be, a person. Thus, whatever follows just from being an intentional system is true of a person, but (and this is the complication) it need not be the case that what is true of a person is true of any intentional system. So far so good. With that in place, let’s return to Dennett’s alarmist essay in The Atlantic, and why it instantiates, at least in part, a road to serfdom thesis.

At a high level of generality, a road to serfdom thesis holds (this is a definition I use in my work in political theory) that an outcome unintended by social decision-makers [here profit-making corporations and ambitious scientists] is foreseeable to the right kind of observer [e.g., Dennett, Harari], and that the outcome leads to a loss of political and economic freedom over the medium term. I use ‘medium’ here because the consequences tend to follow in a time frame within an ordinary human life, but generally longer than one or two years (the short run), and shorter than the centuries-long process covered by (say) the rise and fall of previous civilizations. (I call it a ‘partial’ road to serfdom thesis because a crucial plank is missing; see below.)

Before I comment on Dennett’s implied social theory, it is worth noting two things (and the second is rather more important): first, adopting the intentional stance is so (to borrow from Bill Wimsatt) entrenched in our ordinary cognitive practices that even those who can know better (“experts”) will do so in cases where they may have grounds to avoid doing so. Second, Dennett recognizes that when we adopt the intentional stance(ii) we have a tendency to confer personhood on the other (recall the complication). This mechanism helps explain, as Joshua Miller observed, how that Google engineer fooled himself into thinking he was interacting with a sentient person.

Of course, a student of history, or a reader of science fiction, will immediately recognize that this tendency to confer personhood on intentional systems can be highly attenuated. People and animals have been regularly treated as things and instruments. So, what Dennett really means, or ought to mean, is that we will (or are) encounter(ing) intentional systems designed (by corporations) to make it likely that we will automatically treat them as persons. Since Dennett is literally the expert on this, and has little incentive to mislead the rest of us on this very issue, it’s worth taking him seriously, and it is rather unsettling that even powerful interests with a manifest self-interest in doing so are not.

Interestingly enough, in this sense the corporations who try to fool us are mimicking Darwinian natural selection, because, as Dennett himself emphasized decades ago when the robot Cog was encountered in the lab, we all ordinarily have a disposition to treat even very rudimentary eyes following or staring at us as exhibiting agency and as inducing the intentional stance in us. Software and human-factors engineers have been taking advantage of this tendency all along to make our gadgets and tools ‘user friendly.’

Now, it is worth pointing out that while digital environments are important to our civilization, they are not the whole of it. So, even in the worst-case scenario (our digital environment is already polluted, in the way Dennett worries, by self-replicating counterfeit people), you may think we still have some time to avoid conferring personhood on intentional systems in our physical environment and, thereby, also have time to partially cleanse our digital environment. Politicians still have to vote in person, and many other social transactions (marriage, winning the NBA) still require in-person attendance. This is not to deny that a striking number of transactions can be done virtually or digitally (not least in the financial sector), but in many of these cases we also have elaborate procedures (and sanctions) to prevent fraud, developed both by commercial parties and by civil society and government. This is a known arms race between identity thieves, including self-replicating AI/LLMs that lack all sentience, and societies.

This known arms race actually builds on the more fundamental fact that society itself is the original identity thief: generally, for all of us, its conventions and laws both fix an identity where there previously was none or displace other (possible) identities, and sometimes take away or unsettle the identity ‘we’ wish to have kept. (And here, too, there is a complex memetic arms race, in which any token of a society is simultaneously the emergent property, while society (understood as a type) is the cause. See David Haig’s book, From Darwin to Derrida, for more on this insight.) And, of course, identity-fluidity also has many social benefits (as we can learn from our students or from gender studies).

Now, at this point it is worth returning to the counterfeit money example that frames Dennett’s argument. It is not obvious that counterfeit money harmed society. It did harm the sovereign, because it undermined a very important lever of power (and its sovereignty), namely the ability to insist that taxes are paid/levied in the very same currency/unit-system in which he/she paid salaries (and wrote IOUs) and other expenses. I don’t mean to suggest there are no other harms (inflation, and rewarding ingenious counterfeiters), but these were neither that big a deal nor the grounds for making it a capital crime. (In many eras counterfeit money was useful to facilitate commerce in the absence of gold or silver coins.)

And, in fact, as sovereignty shifted to parliaments and people at the start of the nineteenth century, the death penalty for forgery and counterfeiting currency was abolished (and the penalties reduced over time). I suspect this is also due to the realization that where systematic forgeries are successful they do meet a social need and that a pluralist mass society itself is more robust than a sovereign who insists on full control over the mint. Dennett himself implicitly recognizes this, too, when he advocates “strict liability laws, removing the need to prove either negligence or evil intent, would keep them on their toes.” (This is already quite common in product liability and other areas of tort law around the world.)

I am not suggesting complacency about the risk identified by Harari and Dennett. As individuals, associations, corporations, and governments we do need to commit to developing tools that prevent and mitigate the risk from our own tendency to ascribe personhood to intentional systems designed to fool us. We are already partially habituated to do so with all our passwords, two-factor verification, ID cards, passport controls, etc.

In many ways, another real risk here (and this is why I introduced the road to serfdom language up above, despite the known aversion to Hayek among many readers here at Crooked Timber) is that our fear of deception can make us overshoot in risk mitigation, and this, too, can undermine trust and many other benefits of relatively open and (so partially) vulnerable networks and practices. So, it would be good if regulators and governments started the ordinary practice of eliciting expert testimony to craft well-designed laws right now, carefully calibrating them by attending both to the immediate risk from a profit-hungry AI community and to the long-term risk of creating a surveillance society to prevent ascribing personhood to the wrong intentional systems (think Blade Runner). For, crucially, for a (full) road to serfdom thesis, in order to ward off some unintended and undesirable consequences, decisions are taken along the way that tend to lock in an unintended political outcome that is worse than intended, and de facto bad.

I could stop here, because this is my main point. But Dennett’s own alarmism is due to the fact that he thinks the public sphere (which ultimately has to support lawmakers) may already be so polluted that no action is possible. I quote again from The Atlantic:

Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation. 

I don’t think our liberal democracy depends on the informed consent of the governed. This conflates a highly idealized and normative view of democracy (that one may associate with deliberative or republican theories) with reality. It’s probably an impossible ideal in relatively large societies with complex cognitive division of labor, including the (rather demanding) sciences. (And it is also an ideal that gets abused in arguments for disenfranchisement.) So, while an educated populace should be promoted, in practice we have all kinds of imperfect, overlapping institutions and practices that correct for the lack of knowledge (parties, press, interest groups, consumer associations, academics, and even government bureaucracies, etc.)

It doesn’t follow that we should be complacent about the fact that many of the most economically and politically powerful people, corporations, and governments control our attention, which they already do a lot of the time. But this situation is not new; Lippmann and Stebbing diagnosed it over a century ago, and it probably is an intrinsic feature of many societies. It is partially to be hoped that a sufficient number of the most economically and politically powerful people, corporations, governments, and the rest of us are spooked into action and social mobilization by Harari and Dennett to create countervailing mechanisms (including laws) to mitigate our tendency to ascribe personhood to intentional systems. (Hence this post.)

There is, of course, an alternative approach: maybe we should treat all intentional systems as persons and redesign our political and social lives accordingly. Arguably some of the Oxford transhumanists and their financial and intellectual allies are betting on this, even if it leads to human extirpation in a successor civilization. Modern longtermism seems to be committed to the inference from intentional stance(i) to ascription of personhood or moral worth. From their perspective, Dennett and Harari are fighting a rear-guard battle.

*Here’s an example: before Newton offered a physics that showed how Kepler’s laws hung together, lots of astronomers could marvelously predict eclipses of planetary moons based on inductive generalizations alone. How good were these predictions? They were so good that they generated the first really reliable measure or estimate of the speed of light.

**Fun exercise: read Dennett’s 1971 “Intentional Systems” after you read Milton Friedman’s “The methodology of positive economics” (1953) and/or Armen Alchian’s “Uncertainty, evolution, and economic theory” (1950). (No, I am not saying that Dennett is the Chicago economist of philosophy!)

+Full disclosure, I read and modestly commented on Dennett’s essay in draft.

Crooked Timber

Disinformation and the Intercept

By: Henry Farrell — June 8th 2023 at 22:52

There’s a backstory behind this Washington Post story on Republican persecution of academics, and it’s one that doesn’t make the Intercept look good.

Jordan’s colleagues and staffers met Tuesday on Capitol Hill with a frequent target of right-wing activists, University of Washington professor Kate Starbird, two weeks after they interviewed Clemson University professors who also track online propaganda, according to people familiar with the events. Last week, Jordan (Ohio) threatened legal action against Stanford University, home to the Stanford Internet Observatory, for not complying fully with his records requests. … The push caps years of pressure from conservative activists who have harangued such academics online and in person and filed open-records requests to obtain the correspondence of those working at public universities. The researchers who have been targeted study the online spread of disinformation, including falsehoods that have been accelerated by former president and candidate Donald Trump and other Republican politicians. … Last month, the founder of the conspiracy-theory-prone outlet the Gateway Pundit and others sued Starbird and Stanford academics Alex Stamos and Renée DiResta, alleging that they are part of a “government-private censorship consortium” that tramples on free speech. …
“Whether directly or indirectly, a government-approved or -facilitated censorship regime is a grave threat to the First Amendment and American civil liberties,” Jordan wrote.

The claim that these academics are part of a “government-approved or -facilitated censorship regime” is complete bullshit. But it is bullshit that was popularized by a grossly inaccurate story at the Intercept, which purported to discover a secret collaboration between academics and DHS to censor the American right wing.

Full disclosure – I know Kate Starbird, Renee DiResta and Alex Stamos. Not super well – they’re friendly acquaintances – but we’re on first name terms. I also have some sense (mostly indirectly and from social media) of the kinds of political and personal harassment that they have had to endure as a result of the piece by Ken Klippenstein (who is still at the Intercept) and Lee Fang (who left the Intercept to start a Substack newsletter). And I know the world they’re in. I don’t have any government funding, and haven’t been involved in any projects like the ones they have been working on, but I regularly go to conferences with people in this world, and have a sense of how they think, and what they are doing. Which is why I’m writing this post. The Intercept piece not only stinks, but has become the foundation for a much bigger heap of nasty.

You can read the Intercept article here. It’s very long and quite disorganized. The relevant claims:

Years of internal DHS memos, emails, and documents — obtained via leaks and an ongoing lawsuit, as well as public documents — illustrate an expansive effort by the agency to influence tech platforms. …The work, much of which remains unknown to the American public, came into clearer view earlier this year when DHS announced a new “Disinformation Governance Board”: a panel designed to police misinformation (false information spread unintentionally), disinformation (false information spread intentionally), and malinformation (factual information shared, typically out of context, with harmful intent) that allegedly threatens U.S. interests. … Behind closed doors, and through pressure on private platforms, the U.S. government has used its power to try to shape online discourse. … the department plans to target “inaccurate information” on a wide range of topics, including “the origins of the COVID-19 pandemic and the efficacy of COVID-19 vaccines, racial justice, U.S. withdrawal from Afghanistan, and the nature of U.S. support to Ukraine.” … . “This makes Benghazi look like a much smaller issue,” said Rep. Mike Johnson, R-La … Meeting records of the CISA Cybersecurity Advisory Committee, the main subcommittee that handles disinformation policy at CISA, show a constant effort to expand the scope of the agency’s tools to foil disinformation. 
… In June, the same DHS advisory committee of CISA — which includes Twitter head of legal policy, trust, and safety Vijaya Gadde and University of Washington professor Kate Starbird — drafted a report to the CISA director calling for an expansive role for the agency in shaping the “information ecosystem.” The report called on the agency to closely monitor “social media platforms of all sizes, mainstream media, cable news, hyper partisan media, talk radio and other online resources.” They argued that the agency needed to take steps to halt the “spread of false and misleading information,” with a focus on information that undermines “key democratic institutions, such as the courts, or by other sectors such as the financial system, or public health measures.” … Behind closed doors, and through pressure on private platforms, the U.S. government has used its power to try to shape online discourse. According to meeting minutes and other records appended to a lawsuit filed by Missouri Attorney General Eric Schmitt, a Republican who is also running for Senate, discussions have ranged from the scale and scope of government intervention in online discourse to the mechanics of streamlining takedown requests for false or intentionally misleading information.

The problem, as Mike Masnick wrote at the time, is that this is basically all horseshit (the unironic MaKeS BeNgHaZi LoOk SmAlL quote is a dead giveaway). “Obtained via leaks and an ongoing lawsuit” sounds dead sexy, but it’s the “as well as public documents” at the end that is really doing most of the work. The actual information that the Intercept article quotes, out of context to make it seem all scary, is pretty well all in the public domain, obtainable via Google search. As Mike notes:

if you read the actual document it’s… all kinda reasonable? It does talk about responding to misinformation and disinformation threats, mainly around elections — not by suppressing speech, but by sharing information to help local election officials respond to it and provide correct information. From the actual, non-scary, very public report:

Currently, many election officials across the country are struggling to conduct their critical work of administering our elections while responding to an overwhelming amount of inquiries, including false and misleading allegations. Some elections officials are even experiencing physical threats. Based on briefings to this subcommittee by an election official, CISA should be providing support — through education, collaboration, and funding — for election officials to pre-empt and respond to MD

It includes four specific recommendations for how to deal with mis- and disinformation and none of them involve suppressing it. They all seem to be about responding to and countering such information by things like “broad public awareness campaigns,” “enhancing information literacy,” “providing informational resources,” “providing education frameworks,” “boosting authoritative sources,” and “rapid communication.” See a pattern? All of this is about providing information, which makes sense. Nothing about suppressing. The report even notes that there are conflicting studies on the usefulness of “prebunking/debunking” misinformation, and suggests that CISA pay attention to where that research goes before going too hard on any program.

If you want to get a sense of how truly bad the Intercept article is, read everything that Mike has to say (his piece is long too). The most damning bit:

But the Intercept, apparently desperate to put in some shred that suggests this proves the government is looking to suppress information, slips in this paragraph:

The report called on the agency to closely monitor “social media platforms of all sizes, mainstream media, cable news, hyper partisan media, talk radio and other online resources.” They argued that the agency needed to take steps to halt the “spread of false and misleading information,” with a focus on information that undermines “key democratic institutions, such as the courts, or by other sectors such as the financial system, or public health measures.”

Note the careful use of quotes. All of the problematic words and phrases like “closely monitor” and “take steps to halt” are not in the report at all. You can go read the damn thing. It does not say that it should “closely monitor” social media platforms of all sizes. It says that the misinformation/disinformation problem involves the “entire information ecosystem.” It’s saying that to understand the flow of this, you have to recognize that it flows all over the place. And that’s accurate. It says nothing about monitoring it, closely or otherwise.

In short, the Intercept article is at best horseshit. Klippenstein and Fang make big claims that they don’t deliver on. As it turned out, these were politically convenient big claims for some people. Specifically, for Elon Musk – the allegations in this Intercept article became one of the key bases for the so-called Twitter files, heaping up new and enormous piles of horseshit before Musk fell out with the soi-disant journalists that he’d given access to, and his own lawyers called nope. Also, for a whole lot of Republican activists. And, as the Post article describes, for Jim Jordan’s witch-hunting committee, which has turned these allegations into a Grand Theory of Government Suppression of Free Speech, which they’re using to target academics whose only apparent fault was to provide the US government advice about the extent, nature of, and possible solutions to the disinformation problem.

The Intercept article is still up. It shouldn’t be. It isn’t just that the article is demonstrably and terribly wrong. It is that it is demonstrably causing genuine and continued harm and distress to people whose lives have been turned upside down. I’ve seen Twitter fights where Fang in particular tried to defend the piece (mostly through tu quoque rather than actually engaging with criticisms). I haven’t seen any sign that the editors of the Intercept have addressed the pushback to the piece (perhaps I’ve missed it). If I were to guess, I’d suspect that people at the Intercept know that the piece stinks, but feel that it’s awkward to confront it. The Intercept has been a notoriously fractious organization, with people leaving in angry huffs, being forced to leave, newsroom leaks and the like. I can understand why they don’t want more drama. But that doesn’t make it right. It’s an article whose fundamental flaws have caused specific hurt and had wide repercussions for American media and politics. Fixing fuck-ups like this is Journalism Ethics 101.

And there’s a deeper story here about something that has gone badly wrong with one part of the American left, which I used to be reasonably friendly with, and have found increasingly weird and alienating over the last few years (some things I used to think, I don’t think any more; some people I respected, I’ve given up on). One of the key consequences of the Intercept article has been to undermine efforts to understand, let alone push back against, democratic disinformation. I suspect that is an intended consequence. The article’s authors make it clear that they don’t think that government should have any role in making the information environment better. That’s an argument that I strongly disagree with, but it is not an inherently stupid argument. What is stupid – and worse than stupid – is the conspiratorial logic they use to defend it, patching together out-of-context quotes, breathless rhetoric, and disconnected factoids to suggest by sheer force of volume that There Is Something Wicked Going On. A healthy distrust of the state has mutated into a creepy wake-up-sheeple paranoia. The Intercept is still publishing good journalism (e.g.). But this is a style of writing that it needs to cut off at the roots.

Crooked Timber

Happy World Ocean Day

By: Chris Armstrong — June 8th 2023 at 15:29
I’m off to do a talk to mark World Ocean Day, so this is posted in haste. The ocean needs advocates. It’s our biggest ecosystem, probably our biggest carbon sink, a major source of oxygen. It regulates temperatures, and drives weather patterns. Hundreds of millions of people are nutritionally dependent on fish. But the ocean is also increasingly central to the global economy, and facing threats like never before.

Climate change – which drives ocean warming and acidification – is the big one. But plastic and nitrogen pollution and destructive fishing practices are also major threats. Fish farming has an enormous environmental footprint, and now a Spanish company has plans to open the world’s first octopus farm. Plans for mining the seabed are close to fruition – or, depending on your view, they may be many years away, exaggerated to boost the share price of a few mining corporations. But one way or another, the ocean is more and more central to the global economy.

Today is a day to reflect on the kind of ocean we want: an industrialised ocean devoid of much of its present life? Or an ocean in recovery, teeming with life once more? After the second world war (when U-boats patrolled the oceans and fishing boats were forced to stay at home in much of the world) scientists were amazed at the recovery the ocean’s ecosystems had made in just a few years. Will they get the chance of recovery again?
Crooked Timber

Pew quits the generation game

By: John Q — June 6th 2023 at 00:11

Since the beginning of this millennium, I’ve been writing critiques of the “generation game”: the idea that people can be divided into well-defined groups (Boomers, Millennials and so on), with specific characteristics based on their year of birth. As I said in my first go at this issue, back in 2000 (reproduced here):

Much of what passes for discussion about the merits or otherwise of particular generations is little more than a repetition of unchanging formulas about different age groups – the moral degeneration of the young, the rigidity and hypocrisy of the old, and so on.

Demographers have a word (or rather two words) for this. They distinguish between age effects and cohort effects. The group of people born in a given period, say a year or a decade, is called a cohort. Members of a cohort have things in common because they have shared common experiences through their lives. But, at any given point in time, when members of the cohort are at some particular age, they share things in common with the experience of earlier and later generations when they were at the same age.

My most prominent contribution to the debate was this piece in the New York Times five years ago, prompted by the Pew Research Centre’s announcement that it would define people born between 1981 and 1996 as members of the millennial generation. After discussing the history of the “generation” idea, I made the central point

Dividing society by generation obscures the real and enduring lines of race, class and gender. When, for example, baby boomers are blamed for “ruining America,” the argument lumps together Donald Trump and a 60-year-old black woman who works for minimum wage cleaning one of his hotels.

Now, I’m pleased to say, Pew has changed its view, partly in response to a “growing chorus of criticism about generational research and generational labels in particular.”

From now on, they will take proper account of age, cohort and period effects, with the result that

our audiences should not expect to see a lot of new research coming out of Pew Research Center that uses the generational lens. We’ll only talk about generations when it adds value, advances important national debates and highlights meaningful societal trends.

What’s striking is that this is happening at a time when political views, at least in the US, UK and Australia, show a really strong age gradient, with old people far more likely to be on the political right. Understanding this is important, and the use of sloppy labels like “Boomers” (focusing attention on a demographic event 60-80 years ago) is unlikely to be useful.

Crooked Timber

My son’s autistic language

By: Macarena Marey — April 5th 2023 at 12:30

My son’s language is made of a bundle of sounds that do not exist in the Spanish that we speak around the Río de la Plata. He repeats syllables he himself invented, alternating them with onomatopoeias, guttural sounds, and high-pitched shouts. It is an expressive, singing language. I wrote this on Twitter at 6:30 in the morning on a Thursday because Galileo woke me up at 5:30. He does this; madruga (there is no word for “madrugar”, “waking up early in the morning”, in English; I want to know why). As I look after him, I open a Word document on my computer. I write a little while I hear “aiuuuh shíii shíiii prrrrrr boio boio seeehhh” and then some whispers, all of it accompanied by his rhythmic stimming of patting himself on the chest or drumming on the walls and tables around the house.
My life with Gali goes by like this, between scenes like this one and the passionate kisses and hugs he gives me. This morning everything else is quiet. He brings me an apple for me to cut into four segments for him. He likes the skin and gnaws the rest, leaving pieces of apple with his bite marks all around the house. He also brings me a box of rice cookies he doesn’t know how to open. Then he eats them jumping on my bed. He leaves a trail of crumbs. Galileo inhabits the world by leaving evidence of his existence, of his habits, of his way of being in the world.
When we started walking the uncertain road to diagnosis, a relative who is a children’s psychologist with a sort of specialisation in autism informally assessed him. She ruled (diagnosed, prognosed) that he wasn’t autistic, that we shouldn’t ask for the official disability certificate (because “labels” are wrong, she held), and that he should start Lacanian therapy and music therapy on Zoom – now I think this is a ready-made sentence she just gives to anyone.


The most violent intervention in Galileo’s subjectivity is denying his being-disabled in an ableist world and his being-autistic in an allistic world. We, as a culture, have internalised the terror of disability so deep in our minds that we hurry to deny it. We are not willing to accept that what causes us so much angst and dread actually exists, that it is not an imagined ghost. Denying like this, in this delusional way, is an instinct only humans have. It is so human (so stupid) that it is not a survival instinct. When we deny autistic affirmation, we prepare the ground for its annihilation, i.e., for the annihilation of everyone who is autistic. Being autistic isn’t being an imperfect allistic, a not-yet-allistic person. Being disabled isn’t the same as being a flawed abled person. The denial of disability doesn’t amount to affirming an alternative ability; it implies the ableist annihilation of all vulnerability. But when disability is negated, able people do not survive either. We are born and we die in disability. How did it happen that we dare to imagine we can supersede need? (Maybe by the same process by which it is believed that capitalist profits are meant to satisfy human needs.)


The instinct of denying disability is not innate, though. It is an intelligent trap designed to break communities apart, to disorganise, to debilitate us: not to make us disabled but to make us unable, powerless. This is how ableism works, de-politicising vulnerability and unease, making disability, at most, an object of pity and compassion, a matter of bad luck, a fate to try to twist and avoid.



***



Galileo’s spoken language has the musical texture of a genre he alone can perform. There was a (short) time when I thought that my role in his life was to be his translator, a mediation between him and the rest of the world. This is impossible for many reasons. The most important of them isn’t that I don’t get him (I don’t), it isn’t that I don’t speak his language (I don’t), or that no one (much less a mother) can or should mediate anyone. The main reason is that Galileo speaks as someone who plays on their instrument a piece that they have composed for themselves.
Sometimes language is comprehensible only insofar as one is ready to listen to it as if one were in an empty church in front of a little bench where Rostropovich is about to play Bach’s suites on his Duport, and as if he were Bach himself. Then, and only then, we understand that we don’t understand, that we are at the gates of the incomprehensible. When is language more language than when it is spoken so incomprehensibly? The impossibility of interpreting oneself, myself, comes not only from the fact that no one controls or owns language. No one plays their own scores because no one creates their own language. No one, but Galileo and his equals. Autistic non-verbal language is that impossible thing that we try not to talk about when we talk, that we try to drown by talking too much, moving our hands, and writing, for example, this text. Autistic languages say what can’t be said in any articulated allistic “normal” language. Galileo speaks a language that complements other languages. This language of his is not the opposite of language: it perfects other languages, like music or silence do.

***


Does my son have a mother tongue? Do we speak to each other as mother and child? What do we tell each other when we chat? My son’s autism and his magic words lend me a whole new vocabulary for my own neurodiversity, a new and authentic view of my severe misophonia, hyperacusis, and hyperosmia, and of my life-long inability to grasp the majority of the rules of interpersonal relationships, among other things I thought were personal flaws that made me inferior. I won’t mask it anymore. I won’t keep it a secret anymore. Now I know how to talk about it, now I have names to name it. Maybe he will never speak his mother tongue or any other “normal” language, but he has taught me to speak a language in which I can now say what I couldn’t formulate in an allistic alien tongue. Stripping away all the allistic and ableist expectations that shaped the way I was meant to raise my children has liberated me from the suffering of trying to meet those expectations myself. The truly difficult thing, besides raising an autistic child in an allistic world, besides being a non-verbal autistic child in an ableist world, is how to de-internalise this life-long inferiorisation.


But I know he will tell me how.

Crooked Timber

Kicking against the Ticks

By: Henry Farrell — April 4th 2023 at 17:34

Attention conservation notice: short but entirely speculative exercise in amateur sociology/game theory, by someone who has no professional license to do either, and had a blue tick for a couple of years but was always bemused as to why.

A quick note as to what went wrong with the Elon Musk strategy of giving power to the peasants. My take is that the Tyler Cowen case that “Elon is already ahead of the critics on this one, and was all along” was wrong, and that the politics of online aristocracy aren’t nearly what Musk thought they were.

My basic thesis is this. Blue ticks (more formally, ‘verified’ status on Twitter) are a particular example of an online status system emerging from scarcity and social recognition. But if status is to remain economically valuable, it paradoxically can’t be fully marketized. Milking status systems for money requires a degree of social astuteness that Elon Musk, for better or worse, shows no signs of possessing.

Initially, verified status was intended to solve a particular problem – that some people and organizations (the famous, the powerful, media personalities) were particularly likely (a) to be targeted by impersonators, and (b) to have the resources to kick up an unpleasant and potentially expensive fuss when this happened. Twitter – like all big social media – did not and does not have the resources to police user registration at scale, so as to prevent impersonators from showing up. So what it did instead was to adopt the easier and cheaper solution of providing some recognizable means through which the “real” individuals could be distinguished from the fakes.

What then happened, unsurprisingly, was that getting verified came to connote social status. It showed you were important enough for Twitter to say that you were you. A blue tick beside your user name became a status good. It helped that it was linked to some notion of significance – you had been judged and somehow found worthy. It may have helped too that the process through which verified status was awarded was quite mysterious (I got verified one day, and have no good idea as to why).

Thus, then, the “lords” and “peasants” distinction that Musk drew in his tweet. And he was right that “blue ticks” drew considerable social resentment, especially from people on the right, who linked blue tick status with membership of the journalistic elite, and presumed hostility to godfearing people, Silicon Valley &c &c.

But the problem, as Musk has discovered, is that kicking against the ticks is not a profit maximizing strategy, or a particularly good money making strategy at all. The number of people who are willing to pay $8 a month is reportedly underwhelming.

In part, this may be because there aren’t very many real perquisites that come with it – as best as I know, promises that blue ticks will see fewer ads have gone unfulfilled, like many other promises of Musk-era Twitter. In part, it’s because the social status isn’t worth as much any more. To the extent that blue ticks are status goods, they are debased when they are sold at a scheduled market price. They don’t tell observers that the blue tick recipient has been found worthy in some mysterious process. Instead, they convey the information that the recipient is willing to spend $8 a month to get their tweets prioritized. That is not even an ambiguous signal of high social status.

Indeed, it may be a signal to the contrary. Under the status quo, people will be unwilling to pay for blue ticks unless they simply want to get their tweets in front of more people than they would otherwise. Their willingness to pay will hence be a negative signal of the quality of what they have to say: the current system of verification, without unlikely and expensive oversight, will overselect for spammers and egomaniacs. And for just this reason, ordinary Twitter users will plausibly be less willing to pay attention to accounts with blue ticks than to accounts without them.

The risk to Twitter then is of a degenerating equilibrium in which ever fewer people pay attention to verified status, leading verified status to become ever less valuable. That’s too neat and simple a story – real life social dynamics are always much messier. But I don’t think it is entirely wrong either.
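The degenerating equilibrium described in the last two paragraphs can be sketched as a toy simulation. To be clear, everything below is my own illustrative construction, not anything from the post: I assume account quality is uniform on [0, 1], that amplification is worth more to low-quality accounts (the `20 * (1 - q)` payoff is made up), and that observers' attention to the tick tracks the average quality of whoever paid last period.

```python
import random

def simulate(periods=20, n_users=10000, price=8.0, seed=0):
    """Toy model of a degenerating verification equilibrium.

    Each user has a quality drawn uniformly from [0, 1]. Low-quality
    accounts (the spammers and egomaniacs of the post) value raw
    amplification more, so their willingness to pay is inversely
    related to quality. Observers' attention to ticked accounts
    tracks the average quality of last period's payers, and the
    benefit of paying scales with that attention.
    """
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_users)]
    attention = 1.0  # observers start out trusting the tick
    history = []
    for _ in range(periods):
        # A user pays if attention-weighted amplification beats the price;
        # the 20 * (1 - q) payoff is an arbitrary modelling assumption.
        payers = [q for q in quality if attention * (20.0 * (1.0 - q)) > price]
        avg_q = sum(payers) / len(payers) if payers else 0.0
        history.append((len(payers), avg_q, attention))
        # Observers partially update their regard for the tick toward
        # the average quality of the accounts that actually paid.
        attention = 0.5 * attention + 0.5 * avg_q
    return history

hist = simulate()
# Attention to the tick declines period after period, because the
# paying pool is persistently lower-quality than the tick's old aura.
assert hist[0][2] > hist[-1][2]
```

In this sketch the feedback loop bottoms out: as attention falls, only ever-lower-quality accounts find the tick worth $8, which drags attention down further until nobody pays. That is the "ever fewer people pay attention to verified status, leading verified status to become ever less valuable" dynamic, in miniature; real social dynamics are, as the post says, much messier.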

Now, after originally promising to remove verification from all ‘legacy’ people who got it if they didn’t cough up, Musk seems to be contenting himself with just removing verification from the New York Times and similar groups and people who have incurred his displeasure. People who pay, and people who don’t but who got it as a legacy, are now indistinguishable from each other. In Twitter’s official language, “[t]his account is verified because it’s subscribed to Twitter Blue or is a legacy verified account.”

You probably can’t describe this outcome as the product of deliberate strategy. Musk’s management philosophy for Twitter hasn’t so much been a random walk as a grasshopper lepping around on a hotplate. But it is likely to stick for a while. The verified status system is plausibly more lucrative when it is a pooling equilibrium – that is, when it is impossible to tell who has paid for it, and who has not. The payers can parasitize some of the status of the legacies.

The actually relevant “lords and peasants” story that illustrates this is the British House of Lords. At one point in the early twentieth century, there was an actual price list. As Wikipedia describes it:

Lloyd George made the practice more systematic and more brazen, charging £10,000 for a knighthood, £30,000 for a baronetcy, and £50,000 upwards for a peerage. The practice came to a halt with the notorious 1922 Birthday Honours List, which contained the names of Sir Joseph Robinson, a South African gold and diamond magnate who had been convicted of fraud and fined half a million pounds a few months earlier; Sir William Vestey, a multi-millionaire meat importer notorious for his tax evasion; Samuel Waring, who had been accused of war profiteering; and Archibald Williamson, whose oil firm had allegedly traded with the enemy during the war.

After public outcry, the law changed to make it illegal to charge for peerages and honors. Of course, it is still the case that you can get elevated to a life peerage for handing over dollops of cash to political parties. But this is decently obscured beneath a veil of official reticence. Certainly, there is nothing so vulgar as an itemized schedule of payments.

The British system of peerages still works as a moneymaker for UK political parties because it blurs together the status of those who paid hard money and those who received honors for good works, all beneath the tarnished luster of feudal arrangements. It will be interesting to see whether Musk can maintain a similarly profitable degree of ambiguity.

I suspect not, because it requires a kind of acumen about social systems that he doesn’t appear to possess. Many legacy blue tick people are loudly proclaiming in tweets or in their profiles that they would never pay. They want to preserve their status, rather than have it debased by association, or at the least, not be identified as the kinds of people who would pay for the increasingly dubious status of being a blue tick (I’m of that class myself). Keeping a balance between those who provide lustre and those who provide lucre requires the kind of steady hand that Musk doesn’t seem to possess.

Crooked Timber

No-Bullshit Democracy

By: Henry Farrell — April 4th 2023 at 12:56

Hugo Mercier, Melissa Schwartzberg and I have two closely related publications on what we’ve been calling “No-Bullshit Democracy.” One is aimed at academics – it’s a very short piece that has just been officially published in American Political Science Review. The other just came out in Democracy. It’s aimed at a broader audience, and is undoubtedly livelier. An excerpt of the Democracy piece follows – if you want to read it, click on this link. The APSR academic letter (which can be republished under a Creative Commons license) is under the fold. Which one you might want to read depends on whether you value footnotes more than fisticuffs, or vice versa …

The New Libertarian Elitists

What might be called “no-bullshit democracy” would be a new way of structuring democratic disagreement that would use human argumentativeness as a rapid-growth fertilizer. … But first we need to sluice away the bullshit that is being liberally spread around by anti-democratic thinkers. … Experts, including Brennan and Caplan (and for that matter ourselves), can be at least as enthusiastic as ordinary citizens to grab at ideologically convenient factoids and ignore or explain away inconvenient evidence. That, unfortunately, is why Brennan and Caplan’s books do a better job displaying the faults of human reasoning than explaining them.

Analytical Democratic Theory: A Microfoundational Approach

Henry Farrell, Hugo Mercier and Melissa Schwartzberg.

Abstract

A prominent and publicly influential literature challenges the quality of democratic decision making, drawing on political science findings with specific claims about the ubiquity of cognitive bias to lament citizens’ incompetence. A competing literature in democratic theory defends the wisdom of crowds, drawing on a cluster of models in support of the capacity of ordinary citizens to produce correct outcomes. In this Letter, we draw on recent findings in psychology to demonstrate that the former literature is based on outdated and erroneous claims and that the latter is overly sanguine about the circumstances that yield reliable collective decision making. By contrast, “interactionist” scholarship shows how individual-level biases are not devastating for group problem solving, given appropriate conditions. This provides possible microfoundations for a broader research agenda similar to that implemented by Elinor Ostrom and her colleagues on common-good provision, investigating how different group structures are associated with both success and failure in democratic decision making. This agenda would have implications for both democratic theory and democratic practice.

Over the last 15 years a prominent academic literature tied to libertarian thought has argued that democracy is generally inferior to other forms of collective problem solving such as markets and the rule of cognitive elites (Brennan 2016; Caplan 2008; Somin 2016). Following a long tradition of skepticism about democracy, these libertarians appeal to findings in cognitive and social psychology and political behavior to claim that decision making by ordinary citizens is unlikely to be rational or well grounded in evidence. Their arguments have been covered in magazines such as the New Yorker (Crain 2016) and popularized in proposals in the National Review for restrictions to dissuade “ignorant” people from voting (Mathis-Lilley 2021). Democratic theorists have mostly retorted with “epistemic” accounts, invoking mechanisms through which citizens can potentially reach good decisions—most significantly, deliberative mechanisms (Schwartzberg 2015).

This debate has been largely unproductive. Libertarian skeptics argue that democracy is generally inferior because of incorrigible flaws in citizens’ individual psychology, whereas democratic theorists lack a shared, compelling, and realistic micropsychological theory within which to ground their broader claims. Each side emphasizes empirical evidence that appears to support its own interpretation while discounting counterevidence.

This letter adopts a different approach. It demonstrates that democratic skeptics’ pessimistic conclusion—that democracy is unfixable—rests on a misleading and outdated account of the relevant psychological literature. Similarly, epistemic democrats often overestimate deliberation’s role in producing wise results or assume that aggregative models will operate at scale. We seek to avoid unwarranted skepticism and enthusiasm alike, instead providing microfoundations for a more empirically robust program investigating both the successes and mishaps of democracy, drawing on the experimental psychological literature on group problem solving (inter alia) to discover the conditions under which specific institutions perform well or fail in discovering solutions to collective problems.

Adapting a term from past debates, we contribute one foundational element of an approach that might be dubbed “analytical democracy.” Like the “analytical Marxism” associated with scholars such as G. A. Cohen, Jon Elster, John Roemer, and Adam Przeworski (see Roemer 1986), we provide more demanding and specific microfoundations for an account we find broadly sympathetic. Our research program might also be analogized to Ostrom’s work on the decentralized provision of common goods (Ostrom 1990). This emerged in response to Garrett Hardin’s influential article on “the tragedy of the commons,” which claimed that common-goods governance would inevitably collapse (Hardin 1968). Ostrom and her colleagues tested and falsified Hardin’s claims. However, rather than simply defending the proposition that decentralized communities could provide common goods, they investigated when common-good provision was likely to succeed or fail. Similarly, a research program on democratic problem solving, investigating success and failure, might not only provide possible foundations for a truly realistic account of democracy but also generate practical advice on building and improving democratic institutions. This program would build on research on the consequences of group composition and structure to understand the conditions under which democratic problem solving will operate well or badly.

Democratic Skepticism, Optimism and Social Science
A recent pessimistic literature, dominated by libertarian scholars, diagnoses widespread democratic ignorance and incompetence. Bryan Caplan (2008, 19) asserts that voters are irrational and “rule by demagogues … is the natural condition of democracy.” Jason Brennan believes that the democratic electorate is “systematically incompetent” so “some people ought not have the right to vote, or ought to have weaker voting rights than others” (Brennan 2016, 201, viii). Ilya Somin claims that “widespread public ignorance is a type of pollution” so that “democracy might function better if its powers were more tightly limited” (Somin 2016, 6, 9).

Each argues that democracy is profoundly flawed because of irremediable problems in individual incentives and cognition. Each proposes circumscribing democracy in favor of some purportedly superior alternative principle of social organization. Caplan claims that markets impose an effective “user fee” for irrationality that is absent from democracy (Caplan 2008, 133–4). Brennan proposes “epistocracy,” an aristocracy of those who know best. He defends restrictions on suffrage, identifying familiar possibilities such as restricting the franchise to those who pass a voter qualification exam and assigning plural votes to college graduates. Somin advocates what he calls “foot voting” (exit) over “ballot box voting” and emphasizes “the market and civil society as an alternative to government” (Somin 2016, 154), although he admits that the benefits “are likely to vary from issue to issue, from nation to nation, and perhaps also from group to group” (180).

These scholars ground their claims in social science findings. They invoke a literature leading back to Downs’s (1957) argument that citizens are rationally ignorant about politics because they do not have sufficient incentive to gather good information or to make good decisions. They emphasize that ordinary citizens display severe cognitive bias. Caplan (2008) blames such biases for differences between voters’ beliefs about economics and the beliefs of PhD economists, which he takes as a reasonable representation of empirical truth. Brennan (2016, 37ff) and Somin (2016, 94ff) cite work showing that biases lead people to search for information that supports their prior views and “not only reject new information casting doubt on their beliefs but sometimes actually respond by believing in them even more fervently” (Somin, 93–4; invoking the “backfire effects” described in Nyhan and Reifler 2010).

Brennan (2016, 40) unites rational ignorance and cognitive bias into a single stylized account in which most voters are either low information “hobbits” (ignorant) or politically fanatical “hooligans” (biased). He invokes Mercier and Sperber’s explanation of how “[r]easoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments” (Brennan 2016, 38). Furthermore, “human beings are wired not to seek truth and justice but to seek consensus… . They cower before uniform opinion” (Brennan 2012, 8; see also Brennan 2016, 47) as demonstrated by the famous Asch (1956) “conformity experiments,” where participants followed the obviously false opinions of confederates who were sitting next to them.

Achen and Bartels’ (2016) “realist” account of democracy does not share the skeptics’ normative priors but provides a similarly bleak judgment. They too draw on Asch and “similar studies” for social psychological microfoundations that stress the force of group identity and conformity (Achen and Bartels 2016, 220).

There is little scope for democratic problem solving if individual consensus seeking invariably leads to group conformity and “echo chambers” (Sunstein 2002), affective polarization (Iyengar et al. 2018), the rejection of countervailing arguments from nongroup members, and backfire effects. Yet it is far from clear that the despairing picture is empirically accurate. Growing affective polarization may not increase ideological polarization and extremism (e.g., Desmet and Wacziarg 2021). People’s economic beliefs are affected by economic reality (e.g. Duch and Stevenson 2008). Party leaders influence party members on some issues but on others adopt what they perceive to be the public’s dominant opinion (Lenz 2013). Backfire effects are the exception, not the rule (Nyhan 2021; Wood and Porter 2019). People generally change their minds when presented with well-sourced facts and good arguments (see, e.g., Nyhan et al. 2020; Sides 2015).

In part, we do not see the expected universally negative consequences because citizens are not as ignorant as the skeptical consensus suggests. “Issue publics,” whose members acquire specialized information on a particular issue across a spectrum of opinion (Converse 1964), provide an important epistemic resource for democracy (Elliott 2020; Han 2009). Citizens do better on domain-specific knowledge, including information about candidates’ positions on issues they care about (Henderson 2014; Krosnick 1990), than on the surveys of general factual information that skeptics rely on.

More fundamentally, individual-level biases are not devastating for collective democratic problem solving. The psychological literature on group effects and individual cognition is systematically misunderstood by skeptics and underexploited by political scientists. Contrary to Brennan’s (2016) misinterpretation, scholars like Mercier and Sperber (2017) find that even if humans are subject to “myside bias,” they can filter out erroneous messages (including those from their “side”) and change their minds when presented with good evidence from the other “side.” A realistic understanding of the capacities of democratic citizens need not be altogether bleak.

But it should not be overly sanguine. Democratic theorists (including those who are interested in practicalities) often rely on either conjecture or quasi-empirical claims. For instance, David Estlund argues that democratic procedures will tend to outperform non-democratic ones epistemically while acknowledging that the claim is conjectural rather than empirical (Estlund 2008, 157, 160, 176). Hélène Landemore (2020, 8) asserts more forcefully that what she calls “open democracy” is empirically superior to other forms of social decision making: “in a complex and uncertain world, … empowering all members of the demos equally … is overall the best method we have to figure out solutions to common problems.”

We lack a research framework for establishing whether this strong assertion is more robust than competing claims from those who champion different forms of democratic decision making or who emphasize the possibility of democratic failure. Even if deliberation and other forms of reasoned exchange are morally valuable, they may not necessarily yield superior solutions to problems. Extrapolations such as Landemore’s (2013, 104) “Numbers Trump Ability” postulate that democracy can readily be scaled up so that “if twelve jurors are smarter than one, then so would forty-one or 123 jurors,” building on Hong and Page’s (2004) “Diversity Trumps Ability” theorem. Such claims are qualified by empirical findings from jury deliberations (Watanabe 2020) and Hong and Page’s later prediction that increasing group size does not necessarily improve problem-solving capability (Hong and Page 2021).
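The intuition behind the Hong and Page result, and its sensitivity to modeling choices, is easiest to see in simulation. The sketch below is a loose toy variant of their ring-landscape model, not a faithful reimplementation: the landscape size, heuristic lengths, team sizes, and search order are all illustrative assumptions. Agents are greedy local searchers distinguished only by which step sizes they try; a team searches relay-style, each agent resuming from the previous agent’s stopping point.

```python
import itertools
import random

random.seed(0)

N = 200                                   # size of the ring-shaped landscape
V = [random.random() for _ in range(N)]   # random value at each point

def solo_search(heuristic, start):
    """Greedy local search: try the agent's step sizes, move on improvement."""
    x = start
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            y = (x + step) % N
            if V[y] > V[x]:
                x, improved = y, True
    return x

def ability(heuristic):
    """Average value of the agent's stopping point over all starting points."""
    return sum(V[solo_search(heuristic, s)] for s in range(N)) / N

def group_search(group, start):
    """Relay search: each agent resumes from the previous stopping point."""
    x = start
    improved = True
    while improved:
        improved = False
        for h in group:
            y = solo_search(h, x)
            if V[y] > V[x]:
                x, improved = y, True
    return x

def group_score(group):
    return sum(V[group_search(group, s)] for s in range(N)) / N

# Every agent is an ordered triple of distinct step sizes between 1 and 12.
agents = list(itertools.permutations(range(1, 13), 3))
ranked = sorted(agents, key=ability, reverse=True)
best_team = ranked[:10]                   # the ten highest-ability agents
random_team = random.sample(agents, 10)   # a random, more diverse team

print("best-ability team:", round(group_score(best_team), 3))
print("random team:     ", round(group_score(random_team), 3))
```

Because relay search only ever accepts improving moves, a team never does worse than its own lead member; whether the diverse random team actually beats the best-ability team depends on the landscape and the parameters chosen here, which is precisely the kind of variation with group composition and size that the text’s qualifications point to.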

To move away from general claims for democracy’s superiority, epistemic democrats need to understand not just when democracy works but also when it doesn’t. Neblo et al. (2017, 915) establish an important possibility claim by showing how “scholars have assembled strong evidence that deliberative institutions positively influence citizens.” Still, it is hard to build from such demonstrations to a properly scientific account that can explain both democratic success and failure without some externally grounded theory of human decision making. Similarly, there is no very straightforward way of moving from a demonstration that Habermasian claims for deliberation can be grounded in plausible psychological mechanisms (Minozzi and Neblo 2015) to a broader account of when these mechanisms will or will not operate.

Surprisingly, possible microfoundations for such an account can be found in the literature on group psychology and cognition that skeptics have deployed against democracy. As Landemore (2013, 143) says, the “argumentative theory of reasoning” allows us to predict where deliberation will and will not work well. This is a pivotally important claim: we need to know where deliberation will function well to empirically assess theories of institutional design and practical justifications of democracy.

The argumentative account of reasoning is grounded in a recent “interactionist” literature in psychology, which explores how individual bias may or may not be corrected through social interaction. It investigates how mechanisms of “epistemic vigilance” allow people to employ cues to evaluate communicated information including the expertise and benevolence of the source, the plausibility of the message, and the quality of the arguments (for an overview, see Mercier 2020; Sperber et al. 2010). Chambers (2018) has also identified both the interactionist approach and the empirical literature on deliberation as reasons to doubt skeptical claims based on group psychology.

For example, contrary to skeptical claims that people conform to majority opinion, the experimental literature finds that people take account of relevant cues when evaluating the majority opinion including the absolute and relative size of the majority, the competence and benevolence of the majority’s members, the degree of dependency in the opinions of the majority, and the plausibility of the opinion (for review, see Mercier and Morin 2019). The much-bruited Asch (1956) experiments describe the consequences of external pressure rather than those of internalized bias. Practically no one was influenced when participants did not have to voice their opinion in front of the group, and contrary to the widespread academic folklore (Friend, Rafferty, and Bramel 1990), the experiments demonstrated independence as well as conformity. The literature finds that people are well able to evaluate arguments, that they are more influenced by strong than weak reasons (e.g., Hahn and Oaksford 2007), and that they partly change their minds when confronted with challenging but good arguments (e.g., Guess and Coppock 2020).

Interactionist scholarship suggests that reasoning processes are best evaluated in their normal environment of social interaction. It provides possible microfoundations for theories of variation. Instead of looking to the (supposedly invariant) cognitive limitations of ordinary citizens as skeptics do, an interactionist approach suggests that we should investigate the social context of decisions—how groups are structured—to understand when group identity and social pressure can distort or swamp problem solving. Both problem-solving capacity (which depends on whether groups harness individual biases and mechanisms of epistemic vigilance) and collective pressures to conformity will plausibly vary with group structure. Skeptical accounts, which depict group politics as simple condensates of individual bias writ large, are poorly fitted to capturing this variation. Equally, interactionism provides microfoundations for a framework that can investigate democratic theorists’ findings about when democracy works well while also investigating democratic failure.

This provides a more promising path forward than does the universal pessimism of democratic skeptics. It also provides more robust foundations for the claim that deliberation can occur under psychologically realistic circumstances and a starting point for investigating what those circumstances are. Democratic “realists” like Achen and Bartels (2016) need not be democratic pessimists. A microfoundational approach, grounded in endemic individual cognitive bias, avoids the possible charge that the desired normative outcomes are baked into the initial empirical assumptions.

If outright democratic skeptics are sincerely committed to understanding the cognitive underpinnings of democratic processes, as their reliance on this literature ought to entail, they too should find it attractive. It allows the serious investigation of observed democratic failure as well as democratic success. Of course, these are not the only possible microfoundations, and like all empirically based accounts, they may be modified or even rejected as empirical evidence emerges.

Still, such microfoundations could support a broader analytical account that seeks to understand and address variation. If both the benefits and disadvantages of democracy arise at the group rather than individual level, then the challenge for advocates of democracy is to build democratic institutions that can better trigger the relevant cognitive mechanisms so as to capture the benefits of group problem solving instead of deferring to the social pressures that do sometimes lead to conformity. In other words, our goal is to better explain how democracy incorporates the capacities of groups to solve problems (under some circumstances) as well as their tendency to magnify conformity and factionalism (under others).

We do not provide a complete alternative account of democracy here. That would be a heroic undertaking, which would involve not just providing microfoundations but rebuilding existing institutional and organizational theories on their basis. Instead, we sketch the beginnings of a broader research program that we hope others will find attractive.

A Research Program on Democratic Problem Solving
Ostrom (1990) began by demonstrating the systematic flaws in Hardin’s skepticism of common goods but went on to articulate a coherent alternative research agenda on the conditions under which common goods provision succeeds or fails. Political science and related disciplines should commence a similar research program, uniting scientific research on group composition, network structure, and institutional form to investigate the conditions under which democratic problem solving is likely to succeed or fail.

As we have argued, this program could build on research in experimental cognitive psychology, which provides an alternative set of microfoundations to both rational choice and the social psychological arguments that have dominated political science debates. Specifically, this research identifies specific dimensions along which trade-offs in group problem solving plausibly occur:

• Between social comfort and comfort in expressing dissent (Baron 2005).

• Between shared common ground and some measure of preexisting disagreement. Stasser and Titus (1985) point to the benefits of ground-level agreement for problem solving, whereas Schulz-Hardt et al. (2006) discuss how some level of background dissent allows for better problem solving.

• Between group size and the need to represent diversity. Fay, Garrod, and Carletta (2000) discuss how the quality of communication deteriorates as group size increases, whereas Hong and Page (2004; 2021) highlight the benefits of diversity and its complex interaction with group size and Mercier and Claidière (2022) examine whether deliberation is robust to increases in group size.

• Between pressures for conformity and concerns for epistemic reputation. Origgi (2017) describes how the cognitive mechanisms of reputation can generate both market bubbles and reliable collective information systems.

By understanding how different positions in this multidimensional space are associated with better or worse problem solving, we can arrive at useful hypotheses about how to fashion democratic systems. This research program should also incorporate scholarship on a broader level of social aggregation, which explores how network structure and social influence affect flows of information and opinion between individuals with different perspectives (Feng et al. 2019). It might incorporate practical findings about democratic decision making—for instance, the circumstances under which juries can form more accurate collective beliefs (Salerno and Diamond 2010) and how citizen constitutional assemblies (Farrell and Suiter 2019) and online town halls (Neblo, Esterling, and Lazer 2018) can support better communication between politicians and the public.

Crucially, the proposed research program would investigate democratic failures as well as successes, better explaining, for example, the circumstances under which epistemic breakdown and misinformation can become established in democracies. O’Connor and Weatherall (2018; Weatherall and O’Connor 2021) investigate how epistemic factionalization occurs among people who do not trust others with different beliefs. Nyhan (2021) emphasizes the importance of elite messaging and information decay in spreading misinformation, suggesting that punishing elites who spread falsehoods and focusing on intermediaries may have benefits.

Finally, such a research program would help address recent (Neblo et al. 2017) and current (Notes from the Editors 2020) demands for a “translational” approach to democracy that “challenges dominant disciplinary norms.” It would seek to reconcile scientific rigor with normative analysis, providing the groundwork for institutional improvement and reform.

Crooked Timber

Should academics fly at all?

By: Ingrid Robeyns — March 29th 2023 at 15:09

Earlier this week, I was at a meeting to discuss whether my university should cut its ties with the fossil fuel industry, or else impose additional conditions on working with partners from that industry. There was considerable agreement that the university should think hard about spelling out and endorsing a moral framework, and, on the basis of those values and moral principles, work out what (if any) forms of collaboration would remain legitimate in the future. This led our vice-chancellor to ask what else such a moral framework would imply for university staff. “Should we perhaps completely stop flying?”, he asked.

And then there is, once again, a very depressing IPCC report telling us that we must radically change our modes of production and consumption if we want to leave our children (and our older selves) a planet that will remain safe for the human species. And it’s not just about the future, but about the present: urgent action is needed to reduce the number of deadly climate-related events we have seen over the last years, from increasing wildfires to deadly floods, which have led poor people, who have contributed almost nothing to this problem, to lose their livelihoods; many have simply died. So to me it seems obvious that what we change in response to climate change is a very urgent moral question.

Hence the question: Do academics fly too much? Should we simply stop flying at all?

I’m using academics here in the sense of people employed as professors, postdocs, and PhD candidates at universities or other institutes of higher education and research. And it concerns flying for academic activities – most often, presenting one’s research at conferences or as an invited speaker to a seminar series or workshop. Of course, many of the thoughts that follow will apply to other professions too, but since this makes it easier for me to write down my worries, let me start there.

During the pandemic, it was easy: we couldn’t fly. I missed travelling. Frankly, I missed it a lot: I missed the interaction with other scholars working on related questions; I missed the inspiration from meeting the smart and creative people that one tends to meet on such occasions; I missed learning about the new work other people are doing; I missed the feedback on the work that I presented; I missed the intellectual joy of long conversations over breakfasts, lunches and dinners on matters academic, political, and otherwise; I missed the strengthening of my professional networks that travel brought me; I missed seeing friends far away; I missed the adventure of travelling to places; I missed the intellectual energy and inspiration such a trip could give.

Some have argued that the ‘normalisation’ of videoconferencing has taken away any reason to travel. I love the new habits created by Zoom, Teams, and the other programs – the international paper discussions, seminars, reading groups. But no matter how wonderful these online events can be, many of the good things that come with travelling to workshops and conferences are not part of online events. Perhaps, some might argue, that is simply the price we should pay to stay within our fair emissions budget?

There are other reasons why academics want to travel long-distance by plane. Many want to travel because they feel they need to in order to build their scholarly networks or to strengthen their CV – in short, they feel they need it for their career. I’ve heard some colleagues say that this should be a reason why only junior scholars should be allowed to fly. This strikes me as self-defeating, since some of the most interesting conversations I had when I was a PhD student or postdoc were with older, more senior scholars who came to give a talk – I’ve always felt there is much I could learn from them too. Academia is international, and if we could only interact with our local peers, that would be a loss. Moreover, I suspect that there are some very senior scholars who receive a lot of overseas keynote invitations: should they always decline invitations if those would require them to fly? What reasoning could they use that is genuinely sound, and not a form of self-deception?

There is no emissions-free alternative to flying long-distance, which makes travel-by-airplane a scarce good. There are, presumably, many more academics who would like to fly many more miles than would be good for the habitability of the planet. Should we have an open discussion about how much we should allow ourselves to fly, just like we have discussions about how much we should referee? Or should we just leave this, loosely libertarian and without causing offence to anyone, to everyone’s own judgement without a public discussion?

Once the discussion kicks off, there are various arguments one encounters. Here are a few of the claims I’ve heard from others or considered myself, when trying to justify why travel by airplane is fine:

(1) It doesn’t make a difference at all whether I fly or not. It’s inconsequential to addressing climate change.
(2) I’ve been invited overseas as a keynote speaker, so if I decline, the organisers will invite someone else, and it doesn’t make a difference to total emissions.
(3) It’s bad that I fly, but I’m going to find a golden offset mechanism, and compensate the full damage that my flying does to the planet – perhaps 150%.
(4) I wish I wouldn’t have to fly, but I must fly for my job; I commute by airplane. Yes, I could find a non-academic job closer by, but it’s not reasonable to ask from me that I give up my profession for the sake of lowering my emissions.
(5) Why would I have to stop flying if my North-American colleagues on average fly so much more than we [Europeans, Africans, Latin-Americans, …] do? When they limit their flying, I’ll limit mine too.
(6a) Why would I have to stop flying (whether for work or privately) if the really big polluters, for example in Big Oil companies, keep earning millions destroying the earth? Why should I worry about adding one or two tonnes of CO2 if the biggest polluters are not setting the right example?
(6b) Why should we stop flying if the superrich keep flying in their ultra-polluting private jets? Let them stop flying first, and then we can talk about ordinary folks who should stop flying.
(7) I eat vegan, have no car, have put my savings in solar energy production, and have no kids. Given all this, I think I should be morally permitted to fly.
(8) I’m on the academic job market and need to go to any conference I can afford to travel to, in order to improve my chances at landing a job.
(9) I’ve joined Extinction Rebellion (or another group) and I’m making my contribution to addressing climate change there. We should only talk about political activism, not about consumption, such as flying.
(10) I’m only flying if I think it makes a difference not just to myself, but also that the trip is worthwhile for enough others too. And once I decide to fly, I try to get the most out of it in terms of contributing, e.g. by offering to give another talk in a nearby place.
(11) …
(12) … and so on, and so forth.

While I think some of these claims are dubious, others are less obviously so. But which ones are acceptable as reasons in our own deliberations whether to fly or not, and which ones should we reject? How do (or should) we deliberate with ourselves on those matters?

I’ve recently accepted an overseas invitation, and while the climate cost was immediately at the forefront of my mind, after some agonizing, claims #2 and #10 eventually made me think it was OK to accept the invitation. But was it? Am I not, because of my love of academic travelling, simply fooling myself? And are those of us who keep travelling by plane not fooling ourselves most of the time? Or can a balance be struck without giving up on travel by plane completely?

So, friends, over to you. Give me claims/reasons #11, #12, etc. that you’ve had in your own mind, or heard, and let me know what you think of all these claims. And if anyone has a proposal for a decision procedure we should endorse, across academia globally, on when we are still permitted to fly (if at all), then let us know. Because, as you can see, I have more questions than answers.

PS: Please be respectful of others’ points of view and arguments; many of us are unsure about these matters and trying to find out what we should think and do. If you’ve thought about this long and hard and made up your mind, good for you, but give others the time, too, to find out what they decide to believe. I’ll filter out rude or hateful comments.
