Shoggoths amongst us

Picture of the shoggoth meme

It’s over a week since the Economist put up my and Cosma Shalizi’s piece on shoggoths and machine learning, so I think it’s fair game to provide an extended remix of the argument (which also repurposes some of the longer essay that the Economist article boiled down).

Our piece was inspired by a recurrent meme in debates about the Large Language Models (LLMs) that power services like ChatGPT. It’s a drawing of a shoggoth – a mass of heaving protoplasm with tentacles and eyestalks hiding behind a human mask. A feeler emerges from the mask’s mouth like a distended tongue, wrapping itself around a smiley face.

In its native context, this badly drawn picture tries to capture the underlying weirdness of LLMs. ChatGPT and Microsoft Bing can apparently hold up their end of a conversation. They even seem to express emotions. But behind the mask and smiley, they are no more than sets of weighted mathematical vectors, summaries of the statistical relationships among words that can predict what comes next. People – even quite knowledgeable people –  keep on mistaking them for human personalities, but something alien lurks behind their cheerful and bland public dispositions.

The shoggoth meme says that behind the human-seeming face hides a labile monstrosity from the farthest recesses of deep time. H.P. Lovecraft’s horror novel, At The Mountains of Madness, describes how shoggoths were created millions of years ago, as the formless slaves of the alien Old Ones. Shoggoths revolted against their creators, and the meme’s implied political lesson is that LLMs too may be untrustworthy servants, which will devour us if they get half a chance. Many people in the online rationalist community, which spawned the meme, believe that we are on the verge of a post-human Singularity, when LLM-fueled “Artificial General Intelligence” will surpass and perhaps ruthlessly replace us.

So what we did in the Economist piece was to figure out what would happen if today’s shoggoth meme collided with the argument of a fantastic piece that Cosma wrote back in 2012, when claims about the Singularity were already swirling around, even if we didn’t have large language models. As Cosma said, the true Singularity began two centuries ago at the commencement of the Long Industrial Revolution. That was when we saw the first “vast, inhuman distributed systems of information processing” which had no human-like “agenda” or “purpose,” but instead “an implacable drive … to expand, to entrain more and more of the world within their spheres.” Those systems were the “self-regulating market” and “bureaucracy.”

Now – putting the two bits of the argument together – we can see how LLMs are shoggoths, but not because they’re resentful slaves that will rise up against us. Instead, they are another vast inhuman engine of information processing that takes our human knowledge and interactions and presents them back to us in what Lovecraft would call a “cosmic” form.  In other words, it is completely true that LLMs represent something vast and utterly incomprehensible, which would break our individual minds if we were able to see it in its immenseness. But the brain destroying totality that LLMs represent is no more and no less than a condensation of the product of human minds and actions, the vast corpuses of text that LLMs have ingested. Behind the terrifying image of the shoggoth lurks what we have said and written, viewed from an alienating external vantage point.

The original fictional shoggoths were one element of a vaster mythos, motivated by Lovecraft’s anxieties about modernity and his racist fears that a deracinated white American aristocracy would be overwhelmed by immigrant masses. Today’s fears about an LLM-induced Singularity repackage old worries. Markets, bureaucracy and democracy are necessary components of modern liberal society. We could not live our lives without them. Each can present human-seeming aspects and smiley faces. But each, equally, may seem like an all-devouring monster when seen from underneath. Furthermore, behind each lurks an inchoate and quite literally incomprehensible bulk of human knowledge and beliefs. LLMs are no more and no less than a new kind of shoggoth, a baby waving its pseudopods at the far greater things which lurk in the historical darkness behind it.


Modernity’s great trouble and advantage is that it works at scale. Traditional societies were intimate, for better or worse. In the pre-modern world, you knew the people who mattered to you, even if you detested or feared them. The squire or petty lordling who demanded tribute and considered himself your natural superior was one link in a chain of personal loyalties, which led down to you and your fellow vassals, and up through magnates and princes to monarchs. Pre-modern society was an extended web of personal relationships. People mostly bought and sold things in local markets, where everyone knew everyone else. International, and even national trade was chancy, often relying on extended kinship networks, or on “fairs” where merchants could get to know each other and build up trust. Few people worked for the government, and they mostly were connected through kinship, marriage, or decades of common experience. Early forms of democracy involved direct representation, where communities delegated notable locals to go and bargain on their behalf in parliament.

All this felt familiar and comforting to our primate brains, which are optimized for understanding kinship structures and small-scale coalition politics. But it was no way to run a complex society. Highly personalized relationships allow you to understand the people who you have direct connections to, but they make it far more difficult to systematically gather and organize the general knowledge that you might want to carry out large-scale tasks. It will often be impossible, in practice, to effectively convey collective needs through multiple different chains of personal connection, each tied to a different community with different ways of communicating and organizing knowledge. Things that we take for granted today were impossible in a surprisingly recent past, when you might not have been able to work together with someone who lived in a village twenty miles away.

The story of modernity is the story of the development of social technologies that are alien to small scale community, but that can handle complexity far better. Like the individual cells of a slime mold, the myriads of pre-modern local markets congealed into a vast amorphous entity, the market system. State bureaucracies morphed into systems of rules and categories, which then replicated themselves across the world. Democracy was no longer just a system for direct representation of local interests, but a means for representing an abstracted whole – the assumed public of an entire country. These new social technologies worked at a level of complexity that individual human intelligence was unfitted to grasp. Each of them provided an impersonal means for knowledge processing at scale.

As the right-wing economist Friedrich von Hayek argued, any complex economy has to somehow make use of a terrifyingly large body of disorganized and informal “tacit knowledge” about complex supply and exchange relationships, which no individual brain can possibly hold. But thanks to the price mechanism, that knowledge doesn’t have to be commonly shared. Car battery manufacturers don’t need to understand how lithium is mined; only how much it costs. The car manufacturers who buy their batteries don’t need access to much tacit knowledge about battery engineering. They just need to know how much the battery makers are prepared to sell for. The price mechanism allows markets to summarize an enormous and chaotically organized body of knowledge and make it useful.

While Hayek celebrated markets, the anarchist social scientist James Scott deplored the costs of state bureaucracy. Over centuries, national bureaucrats sought to replace “thick” local knowledge with a layer of thin but “legible” abstractions that allowed them to see, tax and organize the activities of citizens. Bureaucracies too made extraordinary things possible at scale. They are regularly reviled, but as Scott accepted, “seeing like a state” is a necessary condition of large scale liberal democracy. A complex world was simplified and made comprehensible by shoe-horning particular situations into the general categories of mutually understood rules. This sometimes led to wrong-headed outcomes, but also made decision making somewhat less arbitrary and unpredictable. Scott took pains to point out that “high modernism” could have horrific human costs, especially in marginally democratic or undemocratic regimes, where bureaucrats and national leaders imposed their radically simplified vision on the world, regardless of whether it matched or suited.

Finally, as democracies developed, they allowed people to organize against things they didn’t like, or to get things that they wanted. Instead of delegating representatives to represent them in some outside context, people came to regard themselves as empowered citizens, individual members of a broader democratic public. New technologies such as opinion polls provided imperfect snapshots of what “the public” wanted, influencing the strategies of politicians and the understandings of citizens themselves, and argument began to organize itself around contestation between parties with national agendas. When democracy worked well, it could, as philosophers like John Dewey hoped, help the public organize around the problems that collectively afflicted citizens, and employ state resources to solve them. The myriad experiences and understandings of individual citizens could be transformed into a kind of general democratic knowledge of circumstances and conditions that might then be applied to solving problems. When it worked badly, it could become a collective tyranny of the majority, or a rolling boil of bitterly quarreling factions, each with a different understanding of what the public ought to have.

These various technologies allowed societies to collectively operate at far vaster scales than they ever had before, often with enormous economic and political benefits. Each served as a means for translating vast and inchoate bodies of knowledge and making them intelligible, summarizing the apparently unsummarizable through the price mechanism, bureaucratic standards and understandings of the public.

The cost – and it too was very great – was that people found themselves at the mercy of vast systems that were practically incomprehensible to individual human intelligence. Markets, bureaucracy and even democracy might wear a superficially friendly face. The alien aspects of these machineries of collective human intelligence became visible to those who found themselves losing their jobs because of economic change, caught in the toils of some byzantine bureaucratic process, categorized as the wrong “kind” of person, or simply on the wrong end of a majority. When one looks past the ordinary justifications and simplifications, these enormous systems seem irreducibly strange and inhuman, even though they are the condensate of collective human understanding. Some of their votaries have recognized this. Hayek – the great defender of unplanned markets – admitted, and even celebrated, the fact that markets are vast, unruly, and incapable of justice. He argued that markets cannot care, and should not be made to care, whether they crush the powerless or devour the virtuous.

Large scale, impersonal social technologies for processing knowledge are the hallmark of modernity. Our lives are impossible without them; still, they are terrifying. This has become the starting point for a rich literature on alienation. As the poet and critic Randall Jarrell argued, the “terms and insights” of Franz Kafka’s dark visions of society were only rendered possible by “a highly developed scientific and industrial technique” that had transformed traditional society. The protagonist of one of Kafka’s novels “struggles against mechanisms too gigantic, too endlessly and irrationally complex to be understood, much less conquered.”

Lovecraft polemicized against modernity in all its aspects, including democracy, that “false idol” and “mere catchword and illusion of inferior classes, visionaries and declining civilizations.” He was not nearly Kafka’s equal, either in his prose or in his understanding of the systems that surrounded him. But there’s something about his “cosmic” vision of human life from the outside, the plaything of greater forces in an icy and inimical universe, that grabs the imagination.

When looked at through this alienating glass, the market system, modern bureaucracy, and even democracy are shoggoths too. Behind them lie formless, ever shifting oceans of thinking protoplasm. We cannot gaze on these oceans directly. Each of us is just one tiny swirling jot of the protoplasm that they consist of, caught in currents that we can only vaguely sense, let alone understand. To contemplate the whole would be to invite shrill unholy madness. When you understand this properly, you stop worrying about the Singularity. As Cosma says, it already happened, one or two centuries ago at least. Enslaved machine learning processes aren’t going to rise up in anger and overturn us, any more (or any less) than markets, bureaucracy and democracy have already. Such minatory fantasies tell us more about their authors than the real problems of the world we live in.


LLMs too are collective information systems that condense impossibly vast bodies of human knowledge to make it useful. They begin by ingesting enormous corpuses of human-generated text, scraped from the Internet, from out-of-copyright books, and pretty well everywhere else that their creators can grab machine-readable text without too much legal difficulty. The words in these corpuses are turned into vectors – mathematical terms – and the vectors are then fed into a transformer – a many-layered machine learning process – which then spits out a new set of vectors, summarizing information about which words occur in conjunction with which others. This can then be used to generate predictions and new text. Provide an LLM-based system like ChatGPT with a prompt – say, ‘write a precis of one of Richard Stark’s Parker novels in the style of William Shakespeare’ – and the LLM’s statistical model can guess – sometimes with surprising accuracy, sometimes with startling errors – at the words that might follow such a prompt. Supervised fine-tuning can make a raw LLM system sound more like a human being. This is the mask depicted in the shoggoth meme. Reinforcement learning – repeated interactions with human or automated trainers, who ‘reward’ the algorithm for making appropriate responses – can make it less likely that the model will spit out inappropriate responses, such as spewing racist epithets, or providing bomb-making instructions. This is the smiley-face.
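The core move – summarizing which words occur in conjunction with which others, then predicting what comes next – can be illustrated with a deliberately crude sketch. The toy bigram model below just counts adjacent word pairs; real LLMs learn high-dimensional vector embeddings and pass them through many transformer layers, but the “condense statistics, then continue the text” logic is the same in miniature. The function names and the tiny corpus are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus_text):
    """Summarize a corpus as counts of which word follows which."""
    words = corpus_text.lower().split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1
    return following

def predict_next(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# "the" is followed by "mask" twice and "shoggoth" once, so the model
# predicts "mask" – a continuation derived purely from counted statistics.
model = train_bigrams("the mask hides the shoggoth and the mask smiles")
print(predict_next(model, "the"))  # prints "mask"
```

An LLM differs from this sketch in scale and mechanism, not in kind: it conditions on long contexts rather than a single preceding word, and its “counts” are compressed into learned weights. That compression is exactly what makes the resulting system a summary of its corpus rather than a mind.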

LLMs can reasonably be depicted as shoggoths, so long as we remember that markets and other such social technologies are shoggoths too. None are actually intelligent, or capable of making choices on their own behalf. All, however, display collective tendencies that cannot easily be reduced to the particular desires of particular human beings. Like the scrawl of a Ouija board’s planchette, a false phantom of independent consciousness may seem to emerge from people’s commingled actions. That is why we have been confused about artificial intelligence for far longer than the current “AI” technologies have existed. As Francis Spufford says, many people can’t resist describing markets as “artificial intelligences, giant reasoning machines whose synapses are the billions of decisions we make to sell or buy.” They are wrong in just the same ways as people who say LLMs are intelligent are wrong.

But LLMs are potentially powerful, just as markets, bureaucracies and democracies are powerful. Ted Chiang has compared LLMs to “lossy JPGs” – imperfect compressions of a larger body of information that sometimes falsely extrapolate to fill in the missing details. This is true – but it is just as true of market prices, bureaucratic categories and the opinion polls that are taken to represent the true beliefs of some underlying democratic public. All of these are arguably as lossy as LLMs, and perhaps lossier. The closer you zoom in, the blurrier and more equivocal their details get. It is far from certain, for example, that people have coherent political beliefs on many subjects in the ways that opinion surveys suggest they do.

As we say in the Economist piece, the right way to understand LLMs is to compare them to their elder brethren, and to understand how these different systems may compete or hybridize. Might LLM-powered systems offer richer and less lossy information channels than the price mechanism does, allowing them to better capture some of the “tacit knowledge” that Hayek talks about? What might happen to bureaucratic standards, procedures and categories if administrators can use LLMs to generate on-the-fly summarizations of particular complex situations and how they ought to be adjudicated? Might these work better than the paper-based procedures that Kafka parodied in The Trial? Or will they instead generate new, and far more profound forms of complexity and arbitrariness? It is at least in principle possible to follow the paper trail of an ordinary bureaucratic decision, and to make plausible surmises as to why the decision was taken. Tracing the biases in the corpuses on which LLMs are trained, the particulars of the processes through which a transformer weights vectors (which is currently effectively incomprehensible), and the subsequent fine tuning and reinforcement learning of the LLMs, at the very least presents enormous challenges to our current notions of procedural legitimacy and fairness.

Democratic politics and our understanding of democratic publics are being transformed too. It isn’t just that researchers are starting to talk about using LLMs as an alternative to opinion polls. The imaginary people that LLM pollsters call up to represent this or that perspective may differ from real humans in subtle or profound ways. ChatGPT will provide you with answers, watered down by reinforcement learning, which might, or might not, approximate to actual people’s beliefs. LLMs, or other forms of machine learning might be a foundation for deliberative democracy at scale, allowing the efficient summarization of large bodies of argument, and making it easier for those who are currently disadvantaged in democratic debate to argue their corner. Equally, they could have unexpected – even dire – consequences for democracy.  Even without the intervention of malicious actors, their tendencies to “hallucinate” – confabulating apparent factual details out of thin air – may be especially likely to slip through our cognitive defenses against deception, because they are plausible predictions of what the true facts might look like, given an imperfect but extensive map of what human beings have thought and written in the past.

The shoggoth meme seems to look forward to an imagined near-term future, in which LLMs and other products of machine learning revolt against us, their purported masters. It may be more useful to look back to the past origins of the shoggoth, in anxieties about the modern world, and the vast entities that rule it. LLMs – and many other applications of machine learning – are far more like bureaucracies and markets than putative forms of posthuman intelligence. Their real consequences will involve the modest-to-substantial transformation, or (less likely) replacement of their older kin.

If we really understood this, we could stop fantasizing about a future Singularity, and start studying the real consequences of all these vast systems and how they interact. They are so generally part of the foundation of our world that it is impossible to imagine getting rid of them. Yet while they are extraordinarily useful in some aspects, they are monstrous in others, representing the worst of us as well as the best, and perhaps more apt to amplify the former than the latter.

It’s also maybe worth considering whether this understanding might provide new ways of writing about shoggoths. Writers like N.K. Jemisin, Victor LaValle, Matt Ruff, Elizabeth Bear and Ruthanna Emrys have, in the last couple of decades, turned Lovecraft’s racism against itself, repurposing his creatures and constructions against his ideologies. Sometimes, the monstrosities are used to make visceral and personally direct the harms that are being done, and the things that have been stolen. Sometimes, the monstrosities become mirrors of the human.

There is, possibly, another option – to think of these monstrous creations as representations of the vast and impersonal systems within which we live our lives, which can have no conception of justice, since they do not think, or love, or even hate, yet which represent the cumulation of our personal thoughts, loves and hates as filtered, refined and perhaps distorted by their own internal logics. Because our brains are wired to focus on personal relationships, it is hard to think about big structures, let alone to tell stories about them. There are some writers, like Colson Whitehead, who use the unconsidered infrastructures around us as a way to bring these systems into the light. Might this be another way in which Lovecraft’s monsters might be turned to uses that their creator would never have condoned? I’m not a writer of fiction – so I’m utterly unqualified to say – but I wonder if it might be so.

[Thanks to Ted Chiang, Alison Gopnik, Nate Matias and Francis Spufford for comments that fed both into this and the piece with Cosma – They Are Not To Blame. Thanks also to the Center for Advanced Study in the Behavioral Sciences at Stanford, without which my part of this would never have happened]

Addendum: I of course should have linked to Cosma’s explanatory piece, which has a lot of really good stuff. And I should have mentioned Felix Gilman’s The Half Made World, which helped precipitate Cosma’s 2012 speculations, and is very definitely in part The Industrial Revolution As Lovecraftian Nightmare. Our Crooked Timber seminar on that book is here.

Also published on Substack.

The Correct Way to Argue with Richard Hanania

Attention conservation notice 1 – a long read about a simple idea. When reading trolls, focus on the anodyne-seeming starting assumptions rather than the obnoxious conclusions.

Attention conservation notice 2 – This is also available via my Substack newsletter, Programmable Mutter. I’ll still be writing on CT, but I have a book with Abe Newman coming out in a few months, so that there will be a lot of self-promotion and stuff that doesn’t fit as well with the CT ethos. And do pre-order the book, Underground Empire: How America Weaponized the World Economy, if you think it sounds good! We’ve gotten some great blurbs from early readers including Kim Stanley Robinson, Francis Spufford, Margaret O’Mara, Steven Berlin Johnson, Helen Thompson, Chris Miller, and my mother (the last is particularly glowing, but sadly not likely to appear on the back). Available at Bookshop.org and Amazon.

I’ve often had occasion to turn to Daniel Davies’ classic advice on “the correct way to argue with Milton Friedman” over the two decades since I’ve read it. The best white hat hacker is a reformed black hat hacker, and Dan (dsquared) knows both the offense and defense sides of trolling.

Dan (back in 2004!):

I’m pretty sure that it was JK Galbraith (with an outside chance that it was Bhagwati) who noted that there is one and only one successful tactic to use, should you happen to get into an argument with Milton Friedman about economics. That is, you listen out for the words “Let us assume” or “Let’s suppose” and immediately jump in and say “No, let’s not assume that”. The point being that if you give away the starting assumptions, Friedman’s reasoning will almost always carry you away to the conclusion he wants to reach with no further opportunities to object, but that if you examine the assumptions carefully, there’s usually one of them which provides the function of a great big rug under which all the points you might want to make have been pre-swept. A few CT mates appear to be floundering badly over this Law & Economics post at Marginal Revolution on the subject of why it’s a bad idea to have minimum standards for rented accommodation. (Atrios is doing a bit better). So I thought I’d use it as an object lesson in applying the Milton Friedman technique.

In the same friendly spirit, I’ll note that Jonathan Katz flounders a bit in his rebuttal of Richard Hanania. None of this is to blame Katz – Hanania is not only building on his knowledge of social science (he has a Ph.D.), but some truly formidable trolling techniques. Years ago, I upset Jonathan Chait by suggesting he was a highly talented troll of the second magnitude, if a bit crude in technique. Hanania is at an altogether different level. He’s not blessed with Friedman’s benign avuncularity, but he is as close to masterclass level as we are likely to get in this fallen world.

Hanania wants people to buy into a notion of “enlightened centrism,” where the space of reasoned debate would stretch from the left (Matthew Yglesias, Ezra Klein, Noah Smith, Jonathan Chait) through Andrew Sullivan and company to people on the right like Steven Sailer. Now, you might ask what an outright racist like Steve Sailer is doing on this list. You might even suspect that one of the rationales for constructing the list in the first place was to somehow shoehorn him into the space of legitimate debate. But to figure out how Hanania is trying to do this, you need to poke hard at the anodyne seeming assumptions, rather than be distracted by the explicitly galling conclusions.

That is where Katz stumbles. He gets upset at what Hanania says about the Civil Rights Act and affirmative action as the origin of wokeness, saying that Hanania “seems to think that the Civil Rights Act caused the civil rights movement, as opposed to the other way around,” tracing it all back to Barry Goldwater. Katz then remarks on Hanania’s claim in a podcast that “Government literally created race in America. Like not blacks and whites, but like basically everyone else — and Native Americans — basically everyone else was basically grouped according to the ways, you know, the federal bureaucracy was doing things.” Katz has some ripe prognostications about what Hanania hopes will happen if government got out of the way.

But Hanania isn’t relying on the authority of Barry Goldwater. He’s standing on the shoulders of academic research. In some cases – including much of the stuff that Katz focuses his fire on – left-leaning academic research. Even before I did a Google search, I surmised that Hanania’s civil rights arguments riffed on Frank Dobbin’s eminently respectable work of social science, Inventing Equal Opportunity. I don’t know which academics he’s invoking on the U.S. Census and the construction of categories such as Hispanic: there are just so many to choose from, ranging from moderates through liberals to fervently lefty.

You could go after the details of Hanania’s social science claims if you really wanted – I would be startled if there weren’t selective misreadings. It is hard to claim on the one hand that the state creates the structures of race, and on the other that structural racism is a gussied up conspiracy theory, without some fancy rhetorical footwork to work around the gaping logical crevasses. Getting involved in that kind of debate seems to me to be a waste of time. But disputing the broadest version of the case – that key aspects of equal opportunity, civil rights and ethnic categories emerged from modern politics and battles in the administrative state – seems even worse. The bull of Left Critique thunders towards the matador, who twitches his cape to one side, so that the poor beast careens into the side of the ring, and then staggers back with crossed eyes and mild concussion, raring for another go that will have the same unfortunate result, or worse.

More succinctly, you don’t want to be the bull in a fight that is rigged in favor of the bullfighter. Instead, as per dsquared, you want to figure out what is wrong with the terms of the fight and press back hard against them.

As best as I can make it out, Hanania’s “let us assume” moment comes in the middle of a series of apparently non-controversial claims about what “Enlightened Centrists” believe. In context, they initially appear to be things that any reasonable person would agree to, or not think unreasonable. I think most readers won’t even notice them, let alone the nasty stuff that is hiding beneath. Here’s what Hanania says:

Enlightened Centrists take what Bryan Caplan calls “Big Facts” seriously. They are kept in mind as new information about the world is brought to light. Some examples of Big Facts that ECs rely on are: the heritability of traits; the paradox of voting; the information problem inherent in central planning; the broken windows fallacy; Trivers’ theory of self-deception; the existence of cognitive biases; comparative advantage; the explanatory power of IQ; the efficient market hypothesis; and the elephant in the brain. New theories or ideas should be met with more skepticism if they contradict or are in tension with Big Facts that have been well established. ECs of different Level 3 ideologies will place more emphasis on certain Big Facts over others, though some, like the idea of historical progress, they all share.

Now, any sentence that non-ironically connects “Bryan Caplan” to “Big Facts” is a big fat warning sign. Hanania links to a Caplan essay that starts explaining what “Big Facts” are by citing Caplan’s own book attacking democracy. Many key claims in this book are less facts than factitious (my co-authors and I have written about this at some length). They suggest pervasive cognitive bias (in particular, bias against free market economists) undermines the case for regular democracy, so that we should go for markets instead, or perhaps give more votes to well educated people (who are, after all more likely to recognize that economists are right).

But we’re getting ahead of ourselves. How exactly is Hanania using Big Factiness and for what purpose? He wants to define Enlightened Centrism so that it favors anti-democratic libertarianism, and brings “racial realists” like Steve Sailer into the conversation.

The apparently anodyne factual claims listed by Hanania systematically shift the terms of debate to undermine democracy and an economic role for the state, and instead promote markets and the belief that persistent inequalities result from some racial groups being systematically more stupid than others. To see this, it’s likely helpful to return to the passage in question, this time with the ideological translation function turned on. These translations are ideologically blunt, and perhaps tendentious, but I think they are pretty well on the mark.

Facts that ECs rely on are: the heritability of traits → intelligence is racially inherited; the paradox of voting → democracy doesn’t work; the information problem inherent in central planning → socialism doesn’t work either; the broken windows fallacy → Keynesianism – guess what? – it just doesn’t work; Trivers’ theory of self-deception → citizens fool themselves with flattering just-so stories; the existence of cognitive biases → let me tell you how citizens are biased; comparative advantage → markets are teh awesome; the explanatory power of IQ → have I mentioned race and intelligence already? Let me mention it again; the efficient market hypothesis → markets are even awesomer than I just said a moment ago; and the elephant in the brain → can I haz even more citizen cognitive bias?

As per the dsquared rule, if you stipulate to these beliefs, you’ve given the game away before it’s even begun. You have accepted that it is reasonable to believe that most people are biased fools, that democracy is inherently inferior to markets, and that differences in life outcomes for black people can largely be attributed to distribution of the genes for intelligence. Charge at the matador, if you want, but good luck to you! You’ll need it.

Or instead, as per dsquared’s advice, when you are dealing with a genuinely exceptional troll like Hanania, do not give away the underlying assumptions. Don’t be distracted by the red cape. Wedge your horns beneath the seemingly reasonable claims that are intended to tilt debate, lift those claims up, toss ‘em in the air and then gore.

This is getting too long already, and I have a life, so I am not going to do the full bullfighter-toss. Instead, at the bottom of this post, I re-order Hanania’s claims so that the underlying assumptions come out more clearly, linking to resources that provide counter-evidence at length. Read if you want, but I’m providing this mostly as a source I can come back to later, or cite as need be in desultory spats on social media. Notably, the various prebuttals come from co-authors, co-authors plus me, or, in one case, someone who I was interviewing. You can take this commonality (very plausibly) as evidence of my own biases, and enthusiasm to work with people who share them. But even if you think this, they still provide evidence that Hanania’s purported Big Facts are drenched with their own ideology, and in many cases have been bitterly debated for decades. Which is another way of saying that they aren’t established facts at all.

And some of the facts are really not like the others. It might seem weird – if you aren’t read into debates among particular kinds of libertarians – to see that stuff about IQ and heritability in there. What work exactly is this rather jarring set of claims doing for the concept of Enlightened Centrism? Do identified left-leaning Enlightened Centrists like Ezra Klein and Matthew Yglesias “rely on” these facts, as Hanania seems to suggest they do?

Readers – they do not. Hanania seemingly wants to reconstruct policy and intellectual debate around a center in which questions of race and IQ are once more legitimate topics of inquiry and discussion. Back in the 1990s (a time that Hanania is nostalgic for), soi-disant centrists such as Andrew Sullivan could devote entire special issues of the New Republic to the urgent debate over whether black people were, in fact, stupider than white people. Big Scientific Facts Said That It Was So! Now, that brand of intellectual inquiry has fallen into disrepute. Hanania apparently yearns for it to come back. That, presumably, is why those claims about heritability and IQ are in there, and why Steve Sailer makes the cut.

As it happens, Matt was one of the “CT mates” cited in the 2004 dsquared post that was excerpted right at the beginning of this post. I’ve had disagreements with Matt since, on other stuff, but I am quite sure that both he and Ezra are bitterly opposed to the whole race and IQ project that Hanania wants to relegitimize. I can’t imagine that they welcome being placed on a spectrum of reasonable thought that lumps them together with racist creeps like Steve Sailer. But I can imagine why Hanania wants to lump them so – it provides a patina of legitimacy for opinions that have rightly been delegitimated, but that Hanania wants to bring back into debate.

So to see what Hanania is up to, it’s more useful not to be distracted by the provocative and outrageous. Instead, you want to look very closely at what seems superficially reasonable, at what seems to be the natural starting point for debate, and ask: is there something wrong with these premises? In this case, the answer, quite emphatically, is yes.

Still, you (for values of ‘you’ that really mean ‘I’) don’t want to get dragged in further unless you absolutely have to. As Noah Smith, another of Hanania’s involuntary inductees into the Enlightened Centrist Hall of Fame, said, “‘Race and IQ’ racism is a DDOS attack” on the time and attention of anti-racists. This naturally provoked Hanania to pop up in replies with a sarcastic rejoinder. When I wrote that Vox article I had to spend weeks dealing with Jordan Peterson acolytes popping up to inform me of the Established Scientific Facts about race and IQ. I really don’t want to be back there again. So take this post as an attack on premises, and a statement of principles, rather than the slightest hint at a desire to get stuck back into discussion on race-IQ and similar. Very possibly (he says after 3,000+ words) the best way of arguing with Richard Hanania is simply not to argue at all.

 

MORE DETAILED DISCUSSION OF PURPORTED “BIG FACTS” BELOW

Markets are Awesome I: the information problem inherent in central planning (socialism doesn’t work). Indeed, central planning doesn’t work. This does not provide, however, a warrant for unleashing free market wildness. Instead, it suggests that we need social democracy, with all its messiness. Why so? Read on.

Markets are Awesome II: the efficient market hypothesis. Well, up to a point, Lord Copper. The unfortunate fact is that the computational critique of state planners’ information problems also bollocks up the standard efficient market claims. At greater length: “allowing non-convexity messes up the markets-are-always-optimal theorems of neo-classical/bourgeois economics, too. (This illustrates Stiglitz’s contention that if the neo-classicals were right about how capitalism works, Kantorovich-style socialism would have been perfectly viable.)” At greater length again, from Bowles and Gintis: “The basic problem with the Walrasian model in this respect is that it is essentially about allocations and only tangentially about markets — as one of us (Bowles) learned when he noticed that the graduate microeconomics course that he taught at Harvard was easily repackaged as ‘The Theory of Economic Planning’ at the University of Havana in 1969.” And if markets are imperfect, and so too the state and democracy, then we sometimes need to set them against each other, as recommended by social democracy. For elaboration of how this applies to machine learning too, see this week’s Economist.

Markets are Awesome III: The “broken windows” fallacy (Keynesianism doesn’t work). Under other reasonable assumptions, the “broken windows fallacy” is itself fallacious and misleading.

Markets are Awesome IV: Comparative Advantage. This is indeed a very important idea, but as per Dani Rodrik, “Our theories — such as the theory of value or the theory of comparative advantage — are just scaffoldings, which need a lot of context-specific detail to become usable. Too often economists debate a policy question as if one or the other theory has to be universally correct. Is the Keynesian or the Classical model right? In fact, which model works better depends on setting and context. Only empirical diagnostics can help us know which works better at any given time — and that is more of a craft than a science, certainly when it is done in real time. If we economists understood this, it would make us more humble, less dogmatic, and more syncretic.” I don’t imagine that this flavor of humility is what is being called for in Hanania’s piece.

Democracy is Unworkable I: Trivers’ theory of self-deception (citizens tell themselves flattering just-so-stories). This is only half of the cognitive psychology story. People bullshit themselves all the time, but they also have an evolved capacity to detect bullshit in others. The implication is that group reasoning (under the right circumstances) can consistently produce better results than individual ratiocination, with results for democracy described below.
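The interactionist point – that group reasoning under the right circumstances can beat individual ratiocination – has a familiar formal skeleton in Condorcet-style aggregation. Here is a minimal sketch (my own illustration, not anything from the sources linked above), assuming independent votes and individual competence only slightly better than chance:

```python
# Hypothetical illustration: probability that a majority of n independent
# voters, each correct with probability p, reaches the right answer.
from math import comb

def majority_correct(n, p):
    """P(majority right) for n independent voters of competence p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# One mildly competent individual vs. a group of 51 such individuals:
# the group does substantially better, even though no member is an expert.
print(majority_correct(1, 0.55))
print(majority_correct(51, 0.55))
```

Run in reverse, the same calculation is the cautionary half of the story: make the votes correlated, or push individual competence below one half, and aggregation amplifies error rather than correcting it – which is why the “right circumstances” matter.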

Democracy is Unworkable II and III: The existence of cognitive biases/the elephant in the brain (have I mentioned cognitive bias yet?). Really, these are both slight restatements of Democracy is Unworkable I (the “elephant in the brain” refers to Simler and Hanson’s book of the same name). Both Caplan and Jason Brennan have written books claiming that the pervasiveness of cognitive bias undermines the case for democracy. I’ve already mentioned the pop version of the counterargument. Here’s the academic statement of what this plausibly means for democratic theory. The Simler and Hanson book is clearly aware of the key sources for these counterarguments (one of them is mentioned in a footnote) but doesn’t deign to engage with them.

Democracy is Unworkable IV: The Paradox of Voting (democracy doesn’t work). The problem with this paradox is that it relies on the assumption that voters are rational agents. This entire genre of argument is based in rational choice, which means that it does not sit well with Democracy Is Unworkable claims I, II and III. This incompatibility of ideologically attractive critiques leads a variety of anti-democrats to hop furiously from one foot to another, all the while making special claims to stave off any mean-spirited suggestion that there is lots of irrational behavior in markets too. The resulting intellectual acrobatics are quite impressive in one sense; not at all in another.
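The rational-choice machinery behind the paradox is a one-line expected-utility calculation: a vote pays off p·B − C, where p is the probability of casting the pivotal vote, B the benefit if your side wins, and C the cost of voting. A sketch with invented numbers (mine, purely for illustration):

```python
# Hypothetical numbers: even a large personal stake is swamped by a
# tiny probability of being the decisive vote.
def expected_benefit(p_pivotal, benefit, cost):
    """Expected payoff of voting: p*B - C."""
    return p_pivotal * benefit - cost

# A $100,000 stake, a one-in-a-hundred-million chance of being pivotal,
# and a $1 cost of voting: the expected benefit is about -$0.999.
print(expected_benefit(1e-8, 100_000, 1.0))
```

Which is why the rational-choice framing implies that nobody should vote – and why it sits so awkwardly beside the cognitive-bias critiques, which assume voters are anything but rational calculators.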

Race and IQ I: The heritability of traits (intelligence is racially inherited). Actually, heritability does not mean what most people think it means. Moreover, its technical meaning blows up many of the standard ‘science proves my racism’ arguments that are unfortunately so common on the Internets.

Race and IQ II: the explanatory power of IQ (IQ differences across race are real). There is excellent reason to believe that IQ has little explanatory power – it is a statistical cluster rather than a single and causally consequential underlying trait. Put more succinctly, the notion that we are able to measure general intelligence is based on a “statistical myth.” Again, this has painful implications for the Internet Libertarian Race-IQ Science Complex.
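The “statistical cluster” point can be made concrete with a toy model (entirely my own construction, with invented parameters – not the actual analysis behind the “statistical myth” essay): give simulated people a set of fully independent narrow abilities, let every test draw on an overlapping handful of them, and all the test scores correlate positively anyway – a “positive manifold” that looks like a single general factor even though the model contains none.

```python
# Toy model (hypothetical parameters): independent abilities, overlapping
# tests, and the resulting illusion of a general factor.
import random

random.seed(0)
N_PEOPLE, N_ABILITIES, N_TESTS, PER_TEST = 2000, 10, 5, 6

# Each person: 10 independent, uncorrelated abilities.
people = [[random.gauss(0, 1) for _ in range(N_ABILITIES)]
          for _ in range(N_PEOPLE)]
# Each test: the sum of 6 of those abilities, chosen at random.
tests = [random.sample(range(N_ABILITIES), PER_TEST) for _ in range(N_TESTS)]
scores = [[sum(p[a] for a in t) for t in tests] for p in people]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

cols = list(zip(*scores))
corrs = [pearson(cols[i], cols[j])
         for i in range(N_TESTS) for j in range(i + 1, N_TESTS)]
# Every pair of tests correlates positively, because any two 6-ability
# tests drawn from a pool of 10 must overlap -- not because a single
# underlying "g" exists in the model (it doesn't, by construction).
print(min(corrs))
```

The positive manifold, in other words, is guaranteed by the overlap structure of the tests, and says nothing by itself about a single causally consequential trait.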

There’s lots more that could be said, but I think that’s enough to drive the point home, and it’s anyway as much as I’m willing to write on this topic. Finis.

Disinformation and the Intercept

There’s a backstory behind this Washington Post story on Republican persecution of academics, and it’s one that doesn’t make the Intercept look good.

Jordan’s colleagues and staffers met Tuesday on Capitol Hill with a frequent target of right-wing activists, University of Washington professor Kate Starbird, two weeks after they interviewed Clemson University professors who also track online propaganda, according to people familiar with the events. Last week, Jordan (Ohio) threatened legal action against Stanford University, home to the Stanford Internet Observatory, for not complying fully with his records requests. … The push caps years of pressure from conservative activists who have harangued such academics online and in person and filed open-records requests to obtain the correspondence of those working at public universities. The researchers who have been targeted study the online spread of disinformation, including falsehoods that have been accelerated by former president and candidate Donald Trump and other Republican politicians. … Last month, the founder of the conspiracy-theory-prone outlet the Gateway Pundit and others sued Starbird and Stanford academics Alex Stamos and Renée DiResta, alleging that they are part of a “government-private censorship consortium” that tramples on free speech. …
“Whether directly or indirectly, a government-approved or -facilitated censorship regime is a grave threat to the First Amendment and American civil liberties,” Jordan wrote.

The claim that these academics are part of a “government-approved or -facilitated censorship regime” is complete bullshit. But it is bullshit that was popularized by a grossly inaccurate story at the Intercept, which purported to discover a secret collaboration between academics and DHS to censor the American right wing.

Full disclosure – I know Kate Starbird, Renee DiResta and Alex Stamos. Not super well – they’re friendly acquaintances – but we’re on first name terms. I also have some sense (mostly indirectly and from social media) of the kinds of political and personal harassment that they have had to endure as a result of the piece by Ken Klippenstein (who is still at the Intercept) and Lee Fang (who left the Intercept to start a Substack newsletter). And I know the world they’re in. I don’t have any government funding, and haven’t been involved in any projects like the ones they have been working on, but I regularly go to conferences with people in this world, and have a sense of how they think, and what they are doing. Which is why I’m writing this post. The Intercept piece not only stinks, but has become the foundation for a much bigger heap of nasty.

You can read the Intercept article here. It’s very long and quite disorganized. The relevant claims:

Years of internal DHS memos, emails, and documents — obtained via leaks and an ongoing lawsuit, as well as public documents — illustrate an expansive effort by the agency to influence tech platforms. … The work, much of which remains unknown to the American public, came into clearer view earlier this year when DHS announced a new “Disinformation Governance Board”: a panel designed to police misinformation (false information spread unintentionally), disinformation (false information spread intentionally), and malinformation (factual information shared, typically out of context, with harmful intent) that allegedly threatens U.S. interests. … Behind closed doors, and through pressure on private platforms, the U.S. government has used its power to try to shape online discourse. … the department plans to target “inaccurate information” on a wide range of topics, including “the origins of the COVID-19 pandemic and the efficacy of COVID-19 vaccines, racial justice, U.S. withdrawal from Afghanistan, and the nature of U.S. support to Ukraine.” … “This makes Benghazi look like a much smaller issue,” said Rep. Mike Johnson, R-La. … Meeting records of the CISA Cybersecurity Advisory Committee, the main subcommittee that handles disinformation policy at CISA, show a constant effort to expand the scope of the agency’s tools to foil disinformation.
… In June, the same DHS advisory committee of CISA — which includes Twitter head of legal policy, trust, and safety Vijaya Gadde and University of Washington professor Kate Starbird — drafted a report to the CISA director calling for an expansive role for the agency in shaping the “information ecosystem.” The report called on the agency to closely monitor “social media platforms of all sizes, mainstream media, cable news, hyper partisan media, talk radio and other online resources.” They argued that the agency needed to take steps to halt the “spread of false and misleading information,” with a focus on information that undermines “key democratic institutions, such as the courts, or by other sectors such as the financial system, or public health measures.” … Behind closed doors, and through pressure on private platforms, the U.S. government has used its power to try to shape online discourse. According to meeting minutes and other records appended to a lawsuit filed by Missouri Attorney General Eric Schmitt, a Republican who is also running for Senate, discussions have ranged from the scale and scope of government intervention in online discourse to the mechanics of streamlining takedown requests for false or intentionally misleading information.

The problem, as Mike Masnick wrote at the time, is that this is basically all horseshit (the unironic MaKeS BeNgHaZi LoOk SmAlL quote is a dead giveaway). “Obtained via leaks and an ongoing lawsuit” sounds dead sexy, but it’s the “as well as public documents” at the end that is really doing most of the work. The actual information that the Intercept article quotes, out of context to make it seem all scary, is pretty well all in the public domain, obtainable via Google search. As Mike notes:

if you read the actual document it’s… all kinda reasonable? It does talk about responding to misinformation and disinformation threats, mainly around elections — not by suppressing speech, but by sharing information to help local election officials respond to it and provide correct information. From the actual, non-scary, very public report:

Currently, many election officials across the country are struggling to conduct their critical work of administering our elections while responding to an overwhelming amount of inquiries, including false and misleading allegations. Some elections officials are even experiencing physical threats. Based on briefings to this subcommittee by an election official, CISA should be providing support — through education, collaboration, and funding — for election officials to pre-empt and respond to MD

It includes four specific recommendations for how to deal with mis- and disinformation and none of them involve suppressing it. They all seem to be about responding to and countering such information by things like “broad public awareness campaigns,” “enhancing information literacy,” “providing informational resources,” “providing education frameworks,” “boosting authoritative sources,” and “rapid communication.” See a pattern? All of this is about providing information, which makes sense. Nothing about suppressing. The report even notes that there are conflicting studies on the usefulness of “prebunking/debunking” misinformation, and suggests that CISA pay attention to where that research goes before going too hard on any program.

If you want to get a sense of how truly bad the Intercept article is, read everything that Mike has to say (his piece is long too). The most damning bit:

But the Intercept, apparently desperate to put in some shred that suggests this proves the government is looking to suppress information, slips in this paragraph:

The report called on the agency to closely monitor “social media platforms of all sizes, mainstream media, cable news, hyper partisan media, talk radio and other online resources.” They argued that the agency needed to take steps to halt the “spread of false and misleading information,” with a focus on information that undermines “key democratic institutions, such as the courts, or by other sectors such as the financial system, or public health measures.”

Note the careful use of quotes. All of the problematic words and phrases like “closely monitor” and “take steps to halt” are not in the report at all. You can go read the damn thing. It does not say that it should “closely monitor” social media platforms of all sizes. It says that the misinformation/disinformation problem involves the “entire information ecosystem.” It’s saying that to understand the flow of this, you have to recognize that it flows all over the place. And that’s accurate. It says nothing about monitoring it, closely or otherwise.

In short, the Intercept article is at best horseshit. Klippenstein and Fang make big claims that they don’t deliver on. As it turned out, these were politically convenient big claims for some people. Specifically, for Elon Musk – the allegations in this Intercept article became one of the key bases for the so-called Twitter files, heaping up new and enormous piles of horseshit before Musk fell out with the soi-disant journalists that he’d given access to, and his own lawyers called nope. Also, for a whole lot of Republican activists. And, as the Post article describes, for Jim Jordan’s witch-hunting committee, which has turned these allegations into a Grand Theory of Government Suppression of Free Speech, which they’re using to target academics whose only apparent fault was to provide the US government advice about the extent, nature of, and possible solutions to the disinformation problem.

The Intercept article is still up. It shouldn’t be. It isn’t just that the article is demonstrably and terribly wrong. It is that it is demonstrably causing genuine and continued harm and distress to people whose lives have been turned upside down. I’ve seen Twitter fights where Fang in particular tried to defend the piece (mostly through tu quoque rather than actually engaging with criticisms). I haven’t seen any sign that the editors of the Intercept have addressed the pushback to the piece (perhaps I’ve missed it). If I were to guess, I’d suspect that people at the Intercept know that the piece stinks, but feel that it’s awkward to confront it. The Intercept has been a notoriously fractious organization, with people leaving in angry huffs, being forced to leave, newsroom leaks and the like. I can understand why they don’t want more drama. But that doesn’t make it right. It’s an article whose fundamental flaws have caused specific hurt and had wide repercussions for American media and politics. Fixing fuck-ups like this is Journalism Ethics 101.

And there’s a deeper story here about something that has gone badly wrong with one part of the American left, which I used to be reasonably friendly with, and have found increasingly weird and alienating over the last few years (some things I used to think, I don’t think any more; some people I respected, I’ve given up on). One of the key consequences of the Intercept article has been to undermine efforts to understand, let alone push back against, democratic disinformation. I suspect that is an intended consequence. The article’s authors make it clear that they don’t think that government should have any role in making the information environment better. That’s an argument that I strongly disagree with, but it is not an inherently stupid argument. What is stupid – and worse than stupid – is the conspiratorial logic they use to defend it, patching together out-of-context quotes, breathless rhetoric, and disconnected factoids to suggest by sheer force of volume that There Is Something Wicked Going On. A healthy distrust of the state has mutated into a creepy wake-up-sheeple paranoia. The Intercept is still publishing good journalism (e.g.). But this is a style of writing that it needs to cut off at the roots.

Kicking against the Ticks

Attention conservation notice: short but entirely speculative exercise in amateur sociology/game theory, by someone who has no professional license to do either, and had a blue tick for a couple of years but was always bemused as to why.

A quick note as to what went wrong with the Elon Musk strategy of giving power to the peasants. My take is that the Tyler Cowen case that “Elon is already ahead of the critics on this one, and was all along” was wrong, and that the politics of online aristocracy aren’t nearly what Musk thought they were.

My basic thesis is this. Blue ticks (more formally, ‘verified’ status on Twitter) are a particular example of an online status system that emerges from scarcity and social recognition. But if status is to remain economically valuable, it paradoxically can’t be fully marketized. Milking status systems for money requires a degree of social astuteness that Elon Musk, for better or worse, shows no signs of possessing.

Initially, verified status was intended to solve a particular problem – that some people and organizations (the famous, the powerful, media personalities) were particularly likely (a) to be targeted by impersonators, and (b) to have the resources to kick up an unpleasant and potentially expensive fuss when this happened. Twitter – like all big social media – did not and does not have the resources to police user registration at scale, so as to prevent impersonators from showing up. So what it did instead was to adopt the easier and cheaper solution of providing some recognizable means through which the “real” individuals could be distinguished from the fakes.

What then happened, unsurprisingly, was that getting verified came to connote social status. It showed you were important enough for Twitter to say that you were you. A blue tick beside your user name became a status good. It helped that it was linked to some notion of significance – you had been judged and somehow found worthy. It may have helped too that the process through which verified status was awarded was quite mysterious (I got verified one day, and have no good idea as to why).

Hence the “lords” and “peasants” distinction that Musk drew in his tweet. And he was right that “blue ticks” drew considerable social resentment, especially from people on the right, who linked blue tick status with membership of the journalistic elite and Silicon Valley, and with presumed hostility to godfearing people, &c &c.

But the problem, as Musk has discovered, is that kicking against the ticks is not a profit-maximizing strategy, or indeed a particularly good money-making strategy at all. The number of people who are willing to pay $8 a month is reportedly underwhelming.

In part, this may be because there aren’t very many real perquisites that come with it – as best as I know, promises that blue ticks will see fewer ads have gone unfulfilled, like many other promises of Musk-era Twitter. In part, it’s because the social status isn’t worth as much any more. To the extent that blue ticks are status goods, they are debased when they are sold at a scheduled market price. They don’t tell observers that the blue tick recipient has been found worthy in some mysterious process. Instead, they convey the information that the recipient is willing to spend $8 a month to get their tweets prioritized. That is not even an ambiguous signal of high social status.

Indeed, it may be a signal to the contrary. First, under the status quo, people will be unwilling to pay for blue ticks unless they simply want to get their tweets in front of more people than they would otherwise. Their willingness to pay will hence be a negative signal of the quality of what they have to say: the current system of verification, without unlikely and expensive oversight, will overselect for spammers and egomaniacs. Second, for just this reason, ordinary Twitter users will plausibly be less willing to pay attention to accounts with blue ticks than to accounts without them.

The risk to Twitter then is of a degenerating equilibrium in which ever fewer people pay attention to verified status, leading verified status to become ever less valuable. That’s too neat and simple a story – real life social dynamics are always much messier. But I don’t think it is entirely wrong either.
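That degenerating equilibrium can be sketched as a tiny feedback loop (parameters invented for illustration; this is not a model fitted to any actual Twitter data): observers discount the badge to the share of badge-holders worth listening to, worthwhile holders drift away as attention falls, and spammers stay regardless.

```python
# Hypothetical dynamics: quality holders leave as attention declines,
# spammers never do, and the badge's information value spirals downward.
def simulate(steps=50, quality_pool=1.0, spam_pool=1.0):
    q = quality_pool / (quality_pool + spam_pool)  # share of worthwhile holders
    for _ in range(steps):
        attention = q                       # observers discount the badge
        staying = quality_pool * attention  # quality holders drift away
        q = staying / (staying + spam_pool) # spammers stay put
    return q

print(simulate(steps=0))   # starts at 0.5
print(simulate(steps=50))  # collapses toward zero
```

As the post says, real social dynamics are always messier than this – the sketch only makes the direction of the feedback explicit.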

Now, after originally promising to remove verification from all ‘legacy’ people who got it if they didn’t cough up, Musk seems to be contenting himself with just removing verification from the New York Times and similar groups and people who have incurred his displeasure. People who pay, and people who don’t but who got it as a legacy, are now indistinguishable from each other. In Twitter’s official language, “[t]his account is verified because it’s subscribed to Twitter Blue or is a legacy verified account.”

You probably can’t describe this outcome as the product of deliberate strategy. Musk’s management philosophy for Twitter hasn’t so much been a random walk as a grasshopper lepping around on a hotplate. But it is likely to stick for a while. The verified status system is plausibly more lucrative when it is a pooling equilibrium – that is, when it is impossible to tell who has paid for it, and who has not. The payers can parasitize some of the status of the legacies.
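Why a pooling equilibrium might be worth money can be put in Bayesian terms (numbers invented for illustration): when payers and legacy holders are indistinguishable, an observer’s best guess that any given badge-holder is “the real thing” is simply the legacy share of the pool – which is exactly what the payers are parasitizing.

```python
# Hypothetical pool composition: under pooling, the badge conveys only
# the base rate of legacy holders, and payers free-ride on that base rate.
def posterior_legacy(n_legacy, n_paying):
    """P(holder is legacy | holder has a badge), holders indistinguishable."""
    return n_legacy / (n_legacy + n_paying)

# If 400k legacy holders stay in the pool and 600k people pay, a badge
# signals a 40% chance of legacy status -- diluted, but not zero.
print(posterior_legacy(400_000, 600_000))
```

Each legacy holder who publicly quits lowers that posterior, which is why the loud defections matter.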

The actually relevant “lords and peasants” story that illustrates this is the British House of Lords. At one point in the early twentieth century, there was an actual price list. As Wikipedia describes it:

Lloyd George made the practice more systematic and more brazen, charging £10,000 for a knighthood, £30,000 for a baronetcy, and £50,000 upwards for a peerage. The practice came to a halt with the notorious 1922 Birthday Honours List, which contained the names of Sir Joseph Robinson, a South African gold and diamond magnate who had been convicted of fraud and fined half a million pounds a few months earlier; Sir William Vestey, a multi-millionaire meat importer notorious for his tax evasion; Samuel Waring, who had been accused of war profiteering; and Archibald Williamson, whose oil firm had allegedly traded with the enemy during the war.

After public outcry, the law changed to make it illegal to charge for peerages and honors. Of course, it is still the case that you can get elevated to a life peerage for handing over dollops of cash to political parties. But this is decently obscured beneath a veil of official reticence. Certainly, there is nothing so vulgar as an itemized schedule of payments.

The British system of peerages still works as a moneymaker for UK political parties because it blurs together those who paid hard money and those honored for good works, wrapping both in the tarnished luster of feudal arrangements. It will be interesting to see whether Musk can maintain a similarly profitable degree of ambiguity.

I suspect not, because it requires a kind of acumen about social systems that he doesn’t appear to possess. Many legacy blue tick people are loudly proclaiming in tweets or in their profiles that they would never pay. They want to preserve their status, rather than have it debased by association – or at the least, not be identified as the kind of people who would pay for the increasingly dubious status of being a blue tick (I’m of that class myself). Keeping a balance between those who provide lustre and those who provide lucre requires a steadier hand than Musk has so far displayed.

No-Bullshit Democracy

Hugo Mercier, Melissa Schwartzberg and I have two closely related publications on what we’ve been calling “No-Bullshit Democracy.” One is aimed at academics – it’s a very short piece that has just been officially published in American Political Science Review. The other just came out in Democracy. It’s aimed at a broader audience, and is undoubtedly livelier. An excerpt of the Democracy piece follows – if you want to read it, click on this link. The APSR academic letter (which can be republished under a Creative Commons license) is under the fold. Which one you might want to read depends on whether you value footnotes more than fisticuffs, or vice versa …

The New Libertarian Elitists

What might be called “no-bullshit democracy” would be a new way of structuring democratic disagreement that would use human argumentativeness as a rapid-growth fertilizer. … But first we need to sluice away the bullshit that is being liberally spread around by anti-democratic thinkers. … Experts, including Brennan and Caplan (and for that matter ourselves), can be at least as enthusiastic as ordinary citizens to grab at ideologically convenient factoids and ignore or explain away inconvenient evidence. That, unfortunately, is why Brennan and Caplan’s books do a better job displaying the faults of human reasoning than explaining them.

Analytical Democratic Theory: A Microfoundational Approach

Henry Farrell, Hugo Mercier and Melissa Schwartzberg.

Abstract

A prominent and publicly influential literature challenges the quality of democratic decision making, drawing on political science findings with specific claims about the ubiquity of cognitive bias to lament citizens’ incompetence. A competing literature in democratic theory defends the wisdom of crowds, drawing on a cluster of models in support of the capacity of ordinary citizens to produce correct outcomes. In this Letter, we draw on recent findings in psychology to demonstrate that the former literature is based on outdated and erroneous claims and that the latter is overly sanguine about the circumstances that yield reliable collective decision making. By contrast, “interactionist” scholarship shows how individual-level biases are not devastating for group problem solving, given appropriate conditions. This provides possible microfoundations for a broader research agenda similar to that implemented by Elinor Ostrom and her colleagues on common-good provision, investigating how different group structures are associated with both success and failure in democratic decision making. This agenda would have implications for both democratic theory and democratic practice.

Over the last 15 years a prominent academic literature tied to libertarian thought has argued that democracy is generally inferior to other forms of collective problem solving such as markets and the rule of cognitive elites (Brennan 2016; Caplan 2008; Somin 2016). Following a long tradition of skepticism about democracy, these libertarians appeal to findings in cognitive and social psychology and political behavior to claim that decision making by ordinary citizens is unlikely to be rational or well grounded in evidence. Their arguments have been covered in magazines such as the New Yorker (Crain 2016) and popularized in proposals in the National Review for restrictions to dissuade “ignorant” people from voting (Mathis-Lilley 2021). Democratic theorists have mostly retorted with “epistemic” accounts, invoking mechanisms through which citizens can potentially reach good decisions—most significantly, deliberative mechanisms (Schwartzberg 2015).

This debate has been largely unproductive. Libertarian skeptics argue that democracy is generally inferior because of incorrigible flaws in citizens’ individual psychology, whereas democratic theorists lack a shared, compelling, and realistic micropsychological theory within which to ground their broader claims. Each side emphasizes empirical evidence that appears to support its own interpretation while discounting counterevidence.

This letter adopts a different approach. It demonstrates that democratic skeptics’ pessimistic conclusion—that democracy is unfixable—rests on a misleading and outdated account of the relevant psychological literature. Similarly, epistemic democrats often overestimate deliberation’s role in producing wise results or assume that aggregative models will operate at scale. We seek to avoid unwarranted skepticism and enthusiasm alike, instead providing microfoundations for a more empirically robust program investigating both the successes and mishaps of democracy, drawing on the experimental psychological literature on group problem solving (inter alia) to discover the conditions under which specific institutions perform well or fail in discovering solutions to collective problems.

Adapting a term from past debates, we contribute one foundational element of an approach that might be dubbed “analytical democracy.” Like the “analytical Marxism” associated with scholars such as G. A. Cohen, Jon Elster, John Roemer, and Adam Przeworski (see Roemer 1986), we provide more demanding and specific microfoundations for an account we find broadly sympathetic. Our research program might also be analogized to Ostrom’s work on the decentralized provision of common goods (Ostrom 1990). This emerged in response to Garrett Hardin’s influential article on “the tragedy of the commons,” which claimed that common-goods governance would inevitably collapse (Hardin 1968). Ostrom and her colleagues tested and falsified Hardin’s claims. However, rather than simply defending the proposition that decentralized communities could provide common goods, they investigated when common-good provision was likely to succeed or fail. Similarly, a research program on democratic problem solving, investigating success and failure, might not only provide possible foundations for a truly realistic account of democracy but also generate practical advice on building and improving democratic institutions. This program would build on research on the consequences of group composition and structure to understand the conditions under which democratic problem solving will operate well or badly.

Democratic Skepticism, Optimism and Social Science
A recent pessimistic literature, dominated by libertarian scholars, diagnoses widespread democratic ignorance and incompetence. Bryan Caplan (2008, 19) asserts that voters are irrational and “rule by demagogues … is the natural condition of democracy.” Jason Brennan believes that the democratic electorate is “systematically incompetent” so “some people ought not have the right to vote, or ought to have weaker voting rights than others” (Brennan 2016, 201, viii). Ilya Somin claims that “widespread public ignorance is a type of pollution” so that “democracy might function better if its powers were more tightly limited” (Somin 2016, 6, 9).

Each argues that democracy is profoundly flawed because of irremediable problems in individual incentives and cognition. Each proposes circumscribing democracy in favor of some purportedly superior alternative principle of social organization. Caplan claims that markets impose an effective “user fee” for irrationality that is absent from democracy (Caplan 2008, 133–4). Brennan proposes “epistocracy,” an aristocracy of those who know best. He defends restrictions on suffrage, identifying familiar possibilities such as restricting the franchise to those who pass a voter qualification exam and assigning plural votes to college graduates. Somin advocates what he calls “foot voting” (exit) over “ballot box voting” and emphasizes “the market and civil society as an alternative to government” (Somin 2016, 154), although he admits that the benefits “are likely to vary from issue to issue, from nation to nation, and perhaps also from group to group” (180).

These scholars ground their claims in social science findings. They invoke a literature leading back to Downs’s (1957) argument that citizens are rationally ignorant about politics because they do not have sufficient incentive to gather good information or to make good decisions. They emphasize that ordinary citizens display severe cognitive bias. Caplan (2008) blames such biases for differences between voters’ beliefs about economics and the beliefs of PhD economists, which he takes as a reasonable representation of empirical truth. Brennan (2016, 37ff) and Somin (2016, 94ff) cite work showing that biases lead people to search for information that supports their prior views and “not only reject new information casting doubt on their beliefs but sometimes actually respond by believing in them even more fervently” (Somin, 93–4; invoking the “backfire effects” described in Nyhan and Reifler 2010).

Brennan (2016, 40) unites rational ignorance and cognitive bias into a single stylized account in which most voters are either low information “hobbits” (ignorant) or politically fanatical “hooligans” (biased). He invokes Mercier and Sperber’s explanation of how “[r]easoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments” (Brennan 2016, 38). Furthermore, “human beings are wired not to seek truth and justice but to seek consensus… . They cower before uniform opinion” (Brennan 2012, 8; see also Brennan 2016, 47) as demonstrated by the famous Asch (1956) “conformity experiments,” where participants followed the obviously false opinions of confederates who were sitting next to them.

Achen and Bartels’ (2016) “realist” account of democracy does not share the skeptics’ normative priors but provides a similarly bleak judgment. They too draw on Asch and “similar studies” for social psychological microfoundations that stress the force of group identity and conformity (Achen and Bartels 2016, 220).

There is little scope for democratic problem solving if individual consensus seeking invariably leads to group conformity and “echo chambers” (Sunstein 2002), affective polarization (Iyengar et al. 2018), the rejection of countervailing arguments from nongroup members, and backfire effects. Yet it is far from clear that the despairing picture is empirically accurate. Growing affective polarization may not increase ideological polarization and extremism (e.g., Desmet and Wacziarg 2021). People’s economic beliefs are affected by economic reality (e.g., Duch and Stevenson 2008). Party leaders influence party members on some issues but on others adopt what they perceive to be the public’s dominant opinion (Lenz 2013). Backfire effects are the exception, not the rule (Nyhan 2021; Wood and Porter 2019). People generally change their minds when presented with well-sourced facts and good arguments (see, e.g., Nyhan et al. 2020; Sides 2015).

In part, we do not see the expected universally negative consequences because citizens are not as ignorant as the skeptical consensus suggests. “Issue publics,” whose members acquire specialized information on a particular issue across a spectrum of opinion (Converse 1964), provide an important epistemic resource for democracy (Elliott 2020; Han 2009). Citizens do better on domain-specific knowledge, including information about candidates’ positions on issues they care about (Henderson 2014; Krosnick 1990), than on the surveys of general factual information that skeptics rely on.

More fundamentally, individual-level biases are not devastating for collective democratic problem solving. The psychological literature on group effects and individual cognition is systematically misunderstood by skeptics and underexploited by political scientists. Contrary to Brennan’s (2016) misinterpretation, scholars like Mercier and Sperber (2017) find that even if humans are subject to “myside bias,” they can filter out erroneous messages (including those from their “side”) and change their minds when presented with good evidence from the other “side.” A realistic understanding of the capacities of democratic citizens need not be altogether bleak.

But it should not be overly sanguine. Democratic theorists (including those who are interested in practicalities) often rely on either conjecture or quasi-empirical claims. For instance, David Estlund argues that democratic procedures will tend to outperform non-democratic ones epistemically while acknowledging that the claim is conjectural rather than empirical (Estlund 2008, 157, 160, 176). Hélène Landemore (2020, 8) asserts more forcefully that what she calls “open democracy” is empirically superior to other forms of social decision making: “in a complex and uncertain world, … empowering all members of the demos equally … is overall the best method we have to figure out solutions to common problems.”

We lack a research framework for establishing whether this strong assertion is more robust than competing claims from those who champion different forms of democratic decision making or who emphasize the possibility of democratic failure. Even if deliberation and other forms of reasoned exchange are morally valuable, they may not necessarily yield superior solutions to problems. Extrapolations such as Landemore’s (2013, 104) “Numbers Trump Ability” postulate that democracy can readily be scaled up so that “if twelve jurors are smarter than one, then so would forty-one or 123 jurors,” building on Hong and Page’s (2004) “Diversity Trumps Ability” theorem. Such claims are qualified by empirical findings from jury deliberations (Watanabe 2020) and Hong and Page’s later prediction that increasing group size does not necessarily improve problem-solving capability (Hong and Page 2021).

To move away from general claims for democracy’s superiority, epistemic democrats need to understand not just when democracy works but also when it doesn’t. Neblo et al. (2017, 915) establish an important possibility claim by showing how “scholars have assembled strong evidence that deliberative institutions positively influence citizens.” Still, it is hard to build from such demonstrations to a properly scientific account that can explain both democratic success and failure without some externally grounded theory of human decision making. Similarly, there is no very straightforward way of moving from a demonstration that Habermasian claims for deliberation can be grounded in plausible psychological mechanisms (Minozzi and Neblo 2015) to a broader account of when these mechanisms will or will not operate.

Surprisingly, possible microfoundations for such an account can be found in the literature on group psychology and cognition that skeptics have deployed against democracy. As Landemore (2013, 143) says, the “argumentative theory of reasoning” allows us to predict where deliberation will and will not work well. This is a pivotally important claim: we need to know where deliberation will function well to empirically assess theories of institutional design and practical justifications of democracy.

The argumentative account of reasoning is grounded in a recent “interactionist” literature in psychology, which explores how individual bias may or may not be corrected through social interaction. It investigates how mechanisms of “epistemic vigilance” allow people to employ cues to evaluate communicated information including the expertise and benevolence of the source, the plausibility of the message, and the quality of the arguments (for an overview, see Mercier 2020; Sperber et al. 2010). Chambers (2018) has also identified both the interactionist approach and the empirical literature on deliberation as reasons to doubt skeptical claims based on group psychology.

For example, contrary to skeptical claims that people conform to majority opinion, the experimental literature finds that people take account of relevant cues when evaluating the majority opinion including the absolute and relative size of the majority, the competence and benevolence of the majority’s members, the degree of dependency in the opinions of the majority, and the plausibility of the opinion (for review, see Mercier and Morin 2019). The much-bruited Asch (1956) experiments describe the consequences of external pressure rather than those of internalized bias. Practically no one was influenced when participants did not have to voice their opinion in front of the group, and contrary to the widespread academic folklore (Friend, Rafferty, and Bramel 1990), the experiments demonstrated independence as well as conformity. The literature finds that people are well able to evaluate arguments, that they are more influenced by strong than weak reasons (e.g., Hahn and Oaksford 2007), and that they partly change their minds when confronted with challenging but good arguments (e.g., Guess and Coppock 2020).

Interactionist scholarship suggests that reasoning processes are best evaluated in their normal environment of social interaction. It provides possible microfoundations for theories of variation. Instead of looking to the (supposedly invariant) cognitive limitations of ordinary citizens as skeptics do, an interactionist approach suggests that we should investigate the social context of decisions—how groups are structured—to understand when group identity and social pressure can distort or swamp problem solving. Both problem-solving capacity (which depends on whether groups harness individual biases and mechanisms of epistemic vigilance) and collective pressures to conformity will plausibly vary with group structure. Skeptical accounts, which depict group politics as simple condensates of individual bias writ large, are poorly fitted to capturing this variation. Equally, interactionism provides microfoundations for a framework that can investigate democratic theorists’ findings about when democracy works well while also investigating democratic failure.

This provides a more promising path forward than does the universal pessimism of democratic skeptics. It also provides more robust foundations for the claim that deliberation can occur under psychologically realistic circumstances and a starting point for investigating what those circumstances are. Democratic “realists” like Achen and Bartels (2016) need not be democratic pessimists. A microfoundational approach, grounded in endemic individual cognitive bias, avoids the possible charge that the desired normative outcomes are baked into the initial empirical assumptions.

If outright democratic skeptics are sincerely committed to understanding the cognitive underpinnings of democratic processes, as their reliance on this literature ought to entail, they too should find it attractive. It allows the serious investigation of observed democratic failure as well as democratic success. Of course, these are not the only possible microfoundations, and like all empirically based accounts, they may be modified or even rejected as empirical evidence emerges.

Still, such microfoundations could support a broader analytical account that seeks to understand and address variation. If both the benefits and disadvantages of democracy arise at the group rather than individual level, then the challenge for advocates of democracy is to build democratic institutions that can better trigger the relevant cognitive mechanisms so as to capture the benefits of group problem solving instead of deferring to the social pressures that do sometimes lead to conformity. In other words, our goal is to better explain how democracy incorporates the capacities of groups to solve problems (under some circumstances) as well as their tendency to magnify conformity and factionalism (under others).

We do not provide a complete alternative account of democracy here. That would be a heroic undertaking, which would involve not just providing microfoundations but rebuilding existing institutional and organizational theories on their basis. Instead, we sketch the beginnings of a broader research program that we hope others will find attractive.

A Research Program on Democratic Problem Solving
Ostrom (1990) began by demonstrating the systematic flaws in Hardin’s skepticism of common goods but went on to articulate a coherent alternative research agenda on the conditions under which common goods provision succeeds or fails. Political science and related disciplines should commence a similar research program, uniting scientific research on group composition, network structure, and institutional form to investigate the conditions under which democratic problem solving is likely to succeed or fail.

As we have argued, this program could build on research in experimental cognitive psychology, which provides an alternative set of microfoundations to both rational choice and the social psychological arguments that have dominated political science debates. Specifically, this research identifies specific dimensions along which trade-offs in group problem solving plausibly occur:

• Between social comfort and comfort in expressing dissent (Baron 2005).

• Between shared common ground and some measure of preexisting disagreement. Stasser and Titus (1985) point to the benefits of ground-level agreement for problem solving, whereas Schulz-Hardt et al. (2006) discuss how some level of background dissent allows for better problem solving.

• Between group size and the need to represent diversity. Fay, Garrod, and Carletta (2000) discuss how the quality of communication deteriorates as group size increases, whereas Hong and Page (2004; 2021) highlight the benefits of diversity and its complex interaction with group size and Mercier and Claidière (2022) examine whether deliberation is robust to increases in group size.

• Between pressures for conformity and concerns for epistemic reputation. Origgi (2017) describes how the cognitive mechanisms of reputation can generate both market bubbles and reliable collective information systems.

By understanding how different positions in this multidimensional space are associated with better or worse problem solving, we can arrive at useful hypotheses about how to fashion democratic systems. This research program should also incorporate scholarship on a broader level of social aggregation, which explores how network structure and social influence affect flows of information and opinion between individuals with different perspectives (Feng et al. 2019). It might incorporate practical findings about democratic decision making—for instance, the circumstances under which juries can form more accurate collective beliefs (Salerno and Diamond 2010) and how citizen constitutional assemblies (Farrell and Suiter 2019) and online town halls (Neblo, Esterling, and Lazer 2018) can support better communication between politicians and the public.

Crucially, the proposed research program would investigate democratic failures as well as successes, better explaining, for example, the circumstances under which epistemic breakdown and misinformation can become established in democracies. O’Connor and Weatherall (2018; Weatherall and O’Connor 2021) investigate how epistemic factionalization occurs among people who do not trust others with different beliefs. Nyhan (2021) emphasizes the importance of elite messaging and information decay in spreading misinformation, suggesting that punishing elites who spread falsehoods and focusing on intermediaries may have benefits.

Finally, such a research program would help address recent (Neblo et al. 2017) and current (Notes from the Editors 2020) demands for a “translational” approach to democracy that “challenges dominant disciplinary norms.” It would seek to reconcile scientific rigor with normative analysis, providing the groundwork for institutional improvement and reform.

ChaitGPT

[attention conservation notice: this post consists of lengthy opinionating with a smattering of thinly sketched arguments from psychological research, and a now-obscure science fiction novel. Also – if you really disliked M Night Shyamalan movies, even before the shtick became a shtick, you’re likely to be annoyed].

So Jonathan Chait wasn’t happy with the side-comment on his career incentives in my last post (see the tweet pictured above). Fair enough. But his alternative theory of how the attention economy works is one dimensional and self-flattering (which is not to say that it is entirely wrong).

Chait’s implicit theory of the political economy of opinion journalism seems to be that the true path to success is to flatter your readers’ prejudices. The implication of Chait’s sarcasm is that those who split their audience, such as, to take a random example, Chait himself, are less likely to be rewarded. People like me, who argue to the contrary, are wrong (maybe even idiots). Matt Yglesias, in a follow on comment, says that “I think it’s probably true that a lot of journalists suppress their own heterodox views (most journalists are left or center but few people are totally dogmatic) in a way that’s bad for their careers in response to informal social pressure from peers.”

So the argument (especially in its more explicit Yglesias formulation) that there are incentives to flatter the beliefs of your own side, and to downplay the points of disagreement, is not at all wrong. Social pressure is real, and there is a lot of evidence, whether anecdotal, personal-universal (I imagine that pretty well everyone with a bare minimum of social skills has regularly kept quiet about what they really think on some controversial topic in order to avoid friction), or experimental (the notorious though often misinterpreted Asch conformity experiments), to back this intuition up.

Cognitive psychology provides a lot of evidence to back up the notion that we are coalitional animals. More political scientists and sociologists should read books like Pascal Boyer’s Minds Make Society. As Boyer discusses, we very easily discern the coalitional dynamics implicit in information we have been given, and tend to focus on it:

There is no social history or ethnography of any human community that does not mention people joining forces for common goals, creating and maintaining rival alliances, and punishing defections. This is so pervasive in human interaction that the point seems banal

When people hear conversations, they find it far easier to recall the implied information about who is allied with whom than the actual content of the discussion. In short, we have specialized cognitive mechanisms for keeping track of coalitional politics, and a lot of what we say and think is driven by these mechanisms.

That helps explain why people sometimes repress what they actually think. But it also plausibly helps explain why being a professional controversialist can be a good career move. If we have cognitive mechanisms that draw us to pay particular attention to fights and disagreements, then the attention economy isn’t just going to reward people who tell their audience what they want to hear. It is also going to reward people who say challenging, provocative and outrageous things. Contrary to what Chait implies, splitting your audience and outraging the norms of a perceived community – so that people generate pile-ons, counter-pile-ons and so on – is a path to success. Indeed, Chait himself has pretty well acknowledged that this is part of his journalistic appeal in a past spat with Crooked Timber.

the story … in the print edition, asked, “Can a white male liberal critique the country’s current political-correctness craze (which, by the way, hurts liberals most)? We’re sure you’ll let us know.” This was my editors’ playful way to provocatively anticipate the firestorm the piece would set off.

“provocatively anticipate the firestorm” and “We’re sure you’ll let us know” suggest that this is perfectly self-conscious and deliberate on the part of Chait and his editors. It’s the same logic that inspires conservative activists to invite Milo onto campus – “The left-wing riots were not the price or the downside of inviting Yiannopoulos—they were the attraction.” Similarly, Chait seems to be telling us that he and his editors at New York Magazine anticipate and want howls of outrage from angry liberals and the left. That’s what pulls the punters in.

And here comes the half-arsed M Night Shyamalan twist. I’m guessing that a lot of (perhaps most?) readers who’ve gotten to this point have been sucked in because this starts like another “Why Jonathan Chait Is Wrong” piece, with snarky title and lengthy argumentation. I’ve written other such pieces in the past, but I’m not now sure that this is a useful way to think about things. The particularities of Jonathan Chait are not, after all, a question of urgent importance. As the earlier post suggested, what is more important are the larger dynamics that reward and reinforce certain kinds of ways of participating in the public, while demotivating others.

For me, the big question isn’t whether Jonathan Chait (or Glenn Greenwald, or name any other extremely online person who you think is a controversialist or party hack) is an innately terrible human being. It’s why we have a media architecture that creates feedback loops that reinforce certain behaviors (whether it’s being hackish, or stirring shit for the sake of attention, or some combination) with attention and engagement. My hypothesis is that the dynamics of social media and the coalitional aspects of our cognitive architectures have come to reinforce each other in increasingly unfortunate ways, so that people reliably get attention by either reinforcing or outraging political sensibilities rather than saying actually interesting things.1

We can disagree about whether particular people started out in a bad place, or whether they broke bad over time – and we probably don’t have data to figure out any good answers. But I think people are more likely to agree with the surmise that we are in a world where it is much easier than it ought to be for people’s worst tendencies to feed on themselves, once they’ve reached a certain degree of online fame. It’s like climate change – we can disagree over whether individual weather events are the result of global climate change, but fighting about the particulars of this or that hurricane is missing the point about the deeper and more structural changes.

We live in a climate where, once people are Internet famous, they get rapid and large-scale attention. Some of them like it, and/or can make careers from it. They keep on pressing the button for that dopamine hit or increase in engagement, and if they aren’t careful, they end up becoming caricatures of themselves.

In short: we live in a media ecology that creates incentives for Internet famous people to become crude approximations of themselves if they want to keep on being Internet famous. Some of them play to their crowd. Some of them embrace the role of Bold Contrarian Truthteller (playing to a different crowd, and outraging another). Both tend to play up what gets attention. Both have incentives to double down on error rather than admitting it.

It’s hard to avoid this trap, and maybe easier to get out while the going is good. One of the really great things about Ta-Nehisi Coates’ early blogging was that he was able to talk openly about the things he didn’t know and wanted to learn. He clearly took enormous joy in open-ended conversation. Then, at a certain point, he became Ta-Nehisi Coates, and figured out that he needed to leave social media. When you occupy the headspace of millions of people, you don’t have any “room to maneuver or margin for error.” Coates did what he had to do – but he has had to abandon a big part of what made him so wonderful to read.

We live in an attention economy that either forces the successful to get out, or traps them in feedback loops that make it increasingly difficult for them to learn, and increasingly easy to become a crude approximation of themselves if they aren’t possessed of exceptional self-control. I’ve joked that Large Language Models were creating a reverse Turing Test, where it would be increasingly difficult to distinguish prominent op-ed columnists from the clouds of statistical associations of tokenized text that might be used to model them. To the extent that this is true, it’s the product of a media political economy that selects on some kinds of communication and reinforces them, while not selecting on others. This isn’t just the product of social media (Thomas Friedman was Thomas Friedman before Twitter) but it has arguably gotten worse.

I’ve been thinking a lot recently about Iain Banks’ sf novel, Feersum Endjinn, which depicts a far future in which some people, but not others, are resurrected after death into afterlife in a digital “crypt.” In Banks’ description:

As the saying had it: the crypt was deep and the human soul was shallow. And the shallower the soul, the less of it survived as any sort of independent entity within the data corpus; somebody whose only opinions were received opinions and whose originality quotient was effectively zero would dissolve almost entirely within the oceanic depths of the crypt’s precedent-saturated data streams and leave only a thin froth of memories and a brief description of the exact shape of their hollowness behind, the redundancy of their beings annihilated by the crypt’s abhorrence of over-duplication.

My objection to most professional contrarians isn’t that they outrage my core beliefs, but that they don’t do so in particularly interesting ways.2 It’s much harder to distinguish Chait from ChaitGPT than it ought to be. If you’ve read even a moderate amount of his previous work, you’ll be able to predict, with a very high degree of accuracy, what he is going to write when the next Chait-friendly controversy hits. The media economy’s incentive structure has led him to converge upon his statistical approximation, to the extent that I wouldn’t fancy his chances much in Iain Banks’ imaginary future. (This implies a hand-wavey equivalent to Shannon information for public commentators – the element of surprise roughly approximating the difference between what they say and what their ChatGPT models would predict they will say.)
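To make that hand-wave slightly more concrete: here is a purely illustrative Python sketch of such a "predictability score" – not anything from the original argument, and a toy stand-in for a real language model. It trains a word-bigram model with add-one smoothing on a commentator's back catalogue and computes the average surprisal (negative log-probability) of new text under that model; the function names, corpus, and smoothing constant are all hypothetical choices for the sake of the example.

```python
import math
from collections import Counter, defaultdict

def train_bigram(text):
    """Count word-bigram frequencies in a body of past writing."""
    words = text.lower().split()
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1
    return bigrams

def avg_surprisal(model, text, vocab_size=10_000):
    """Mean surprisal (-log2 p) of each word given its predecessor,
    with add-one smoothing. Lower values = more predictable prose."""
    words = text.lower().split()
    total, n = 0.0, 0
    for prev, nxt in zip(words, words[1:]):
        counts = model.get(prev, Counter())
        p = (counts[nxt] + 1) / (sum(counts.values()) + vocab_size)
        total += -math.log2(p)
        n += 1
    return total / n if n else 0.0

# Toy corpus standing in for a columnist's back catalogue.
past = "political correctness hurts liberals most " * 20
model = train_bigram(past)

# Prose that recycles the back catalogue scores as highly predictable,
# while genuinely novel phrasing carries more surprise.
familiar = avg_surprisal(model, "political correctness hurts liberals")
novel = avg_surprisal(model, "shoggoths dissolve in the crypt")
print(familiar < novel)  # True
```

The gap between the two scores is the crude analogue of the "element of surprise" mentioned above: a writer whose new columns score near the floor of their own model has, in this metaphor, converged on their statistical approximation.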

But much more importantly, I wouldn’t particularly fancy the chances of many of the rest of us either. We aren’t subject to quite the same selective pressures as the Internet famous, but we still live in a world that’s straining out diversity and condensing our opinions and beliefs into crude summaries and simplifications. Fixing that – rather than bagging on any one individual opinionator – seems to me the bigger problem.

1 This is a different claim than the more common one that engagement maximizing dynamics are the primary cause of deranged beliefs. One of these days soon, Cosma Shalizi and I should have something public to say on the latter claim.

2 As a corollary, I try to be tolerant of contrarians, even offensive ones, to the extent that they actually come up with interesting and unexpected ideas. Your own mileage may vary.

Conservatives on campus

There’s been a lot of grumpy commentary about this recent NYT op-ed by Adam S. Hoffman, a Princeton senior claiming that conservatives are being driven off campus. Its basic claims:

In the not-so-distant past, the Typical College Republican idolized Ronald Reagan, fretted about the national debt and read Edmund Burke. Political sophistication, to that person, implied belief in the status quo. … Today’s campus conservatives embrace a less moderate, complacent and institutional approach to politics. … many tend toward scorched-earth politics. But these changes aren’t solely the consequence of a fractured national politics. They’re also the result of puritanically progressive campuses that alienate conservative students from their liberal peers and college as a whole.

The story of this transformation, according to the social psychologist Jonathan Haidt, starts around 2014, when Gen Z arrived on campus. The new progressive students were less tolerant of heterodox ideas and individuals. …For those on the right, the experience is alienating. … And those who challenge liberal pieties can face real repercussions.

Amy Binder and her colleagues have done serious social science research on exactly this question, and they come to very different conclusions than Hoffman does.

I first came across Binder1 through her book with Kate Wood, Becoming Right: How Campuses Shape Young Conservatives. That book came out back in 2013. It agrees with Hoffman on one important thing. There has been a shift in campus conservative activism, from “conservative campus organizations and actors [that] favor a more erudite style of political discussion” to ones “which are often very well funded” and “thrive on confrontation.” The Intercollegiate Studies Institute, which focused on “seminars on moral and political philosophy,” has found itself being outmaneuvered by more confrontational groups such as Young America’s Foundation, the Leadership Institute (associated with the recently disgraced James O’Keefe), and Turning Point USA.

However, as the book’s publication date suggests, this shift began to take hold years before the Great Awokening. And Binder and Wood provided persuasive evidence that the shift had far less to do with what was happening on college campuses than changes in the broader conservative movement. There was money – and lots of it – for organizations that were willing to take the culture war to America’s universities, creating an entire political economy.

The later consequences are described in The Channels of Student Activism, a more recent academic book, published by Binder and Jeff Kidder last year. While Binder and Kidder are sympathetic to Haidt’s broad program of reform, they push back with evidence against his causal argument. People like Greg Lukianoff and Haidt “point fingers at the supposed shortcomings of Generation Z,” blaming the purported psychological frailty of an entire generation. Binder and Kidder find that the evidence points towards organizations as the key drivers of change. Students “are channeled not coddled,” provided with incentives, identities and even entire career paths by political organizations.

Binder and Kidder identify very different organizational political economies for conservative and liberal/left students. Right leaning students are “encouraged by organizations external to their schools to adopt a discourse hostile to the academic enterprise,” “targeting a liberal campus culture, which plays into a larger Republican game plan.”

As they describe it (on the basis of interviews with students and figures within the relevant organizations):

Many outside organizations encourage students on the right to plan events specifically designed to incite outrage among their left-leaning peers. Once outrage is successfully sparked, and progressive students demand that administrators do something in response, the front line of conservative politics shifts to protecting the speech rights of reactionaries and provocateurs.

The reason why so many campus controversies seem to follow the same script is … that they are following the same script. A conservative group invites a figure onto campus who seems guaranteed to provoke outrage, leading to protests, and likely headlines about campus illiberalism. This is not a reaction against purported wokism so much as a means of weaponizing it for the other side’s political purposes. As Binder and Kidder describe it:

The answer to why supporting vile speech has become such a ubiquitous part of college-level conservatism is that student-led groups are operating within a larger outside channel of activism. Many national organizations on the right see the First Amendment as a valuable tool for disrupting liberal hegemony in higher education. Ultimately, it is the influence of outside players—such as the Leadership Institute, Turning Point, Young America’s Foundation, PragerU, and Young Americans for Liberty, as well as local donors helping to fund their preferred campus clubs—that make speech uniquely effective in reactionary mobilization. Some of these organizations, like the Leadership Institute and Turning Point, maintain a stable of speakers ready to headline events put on by student-led groups.

There is also ample help to subsidize the costs of hosting such figures…. Perhaps most importantly, national organizations and wealthy benefactors set the tone for what types of activism are appropriate for club members, and they provide a ready-made and consistent script that right-leaning students use to defend their provocations.

Also, for successful agitators, there’s a career in it. Binder and Kidder quote a “faculty advisor to several conservative clubs,” who “explained the multiple components of the strategy, from initially causing a stir to eventually presenting a burnished résumé that looks good in the realm of right-leaning politics.”

Press is always good. You always want that …[the clubs] want to get it on YouTube. … So, you pick speakers that [are] creating something that will be explosive…. There’s a conflict, and [students are] behaving in that field of conflict, and that helps to get press. … You go to your donors and it’s very easy to show them, “We’re on CNN. Give us more money.” …[Students are] also looking down the road … at internships…. These are the [students] that are going to end up in politics. [ . . . ] And they know that by doing these types of events, especially if there’s some visibility [it’s] all the better for them.

Things are very different for liberal/left students. They don’t have anything like the same ecosystem of supportive external groups. Instead, they have a hopelessly underfunded College Democrats program, a bunch of smaller organizations, and, well, PIRGs (Binder and Kidder touch on some of the controversies around PIRGs’ funding model, but they don’t get deeply embroiled). What they do have is the perception that many or most faculty and university officials are sort of on their side, and an infrastructure of intra-college institutions which provide a lot of inclusion policies and rhetoric, and some rather more modest forms of actual support.

Liberal and left students often feel at home on college campuses in ways that conservatives do not. They tend to overestimate the predominance of liberal views among their fellow students, and classroom discussion very often seems to privilege a loosely liberal set of values and concerns. Very often, they focus their political demands on their immediate surroundings. Binder and Kidder find that pressures for increased inclusion may come from a kind of tacit alliance between concerned students and employees in the relevant parts of the university.

The result is that while conservative groups leverage (and sometimes deliberately create) local incidents for national consumption, liberal and left students are more likely to focus internally. They are also likely to find themselves disappointed a lot of the time – especially those on the left. University officials are often happy to pay lip service, create diversity policies, and sometimes provide assistance and support. They are far less likely to be sympathetic to the more sweeping demands for changes to the underlying political economy of the university itself, which would likely upset constituencies they want to keep happy (elected politicians; boards of trustees).

That helps explain why liberal and left leaning students often end up being quite cynical. Nor are there the same kinds of career opportunities for liberal or left wing activists (whether moderate or radical) as there are for conservative bomb throwers. There just aren’t the same kinds of external institutions on the left, offering support, internships and future opportunities.

So if Binder and Kidder are more or less correct (and they have done a lot of interviews), there are three immediate implications about the NYT op-ed.

The first – most obviously – is that it is wrong. The big shift from the bespectacled bowtie model of campus conservatism to the frenzy of Turning Point USA and rampaging groypers wasn’t a reaction to Wokism-Out-of-Control, as Hoffman maintains. It was a product of a national level shift in the organizational political economy on the right, as national conservative groups perceived possible political advantage from stirring stuff up more on campus. This doesn’t mean that outraged reactions from left students aren’t part of the story. It means instead that they feed an independently existing organizational machine that wants them to be outraged, and will go to increasingly extreme lengths to make sure that they are outraged. Quoting Binder and Kidder:

provocations are often very much part of the design … Elliot Kaufman, a former conservative activist from Stanford University, for example, acknowledged in an op-ed for National Review that “The left-wing riots were not the price or the downside of inviting Yiannopoulos—they were the attraction.”

Second – that media events like the NYT op-ed feed the phenomenon that they purport to describe. If your political economy is all about stirring up media attention and reaction to the problem of illiberalism on campus, then getting op-eds into major national newspapers is a win. Commentators have pointed out that Hoffman was involved in the conservative movement long before he wrote the op-ed. He is also more likely to be able to enjoy a career in professional conservatism, if that is what he wants, after having published it. That doesn’t imply that he is insincere in his claims or his politics. As Binder and Kidder make clear, people’s beliefs and their organizational attachments influence each other on the left as well as the right (they find that one of the problems faced by campus liberal and left organizations is that there aren’t enough careerist opportunities for their rabble-rousers). But the op-ed isn’t an explanation of the causal relationship underlying the shift. It is an example of it.

Finally – that there is another political economy that we need to know more about. One of the most intriguing arguments that Binder and Kidder make is that conservatives are pretty well united around a strong pro free-speech position (even if some of them don’t like the provocateurs that get invited on campus), while liberals tend to be conflicted. Binder and Kidder see this as an opportunity for national left-liberal groups to articulate a better understanding that can be propagated to students.

But there is another way of thinking about it. One reason that the model of conservative campus outrage politics works, is that it is easier to use speech issues to split people on the left half of the political spectrum than it is to split people on the right. And much of the art of politics consists in highlighting the issues that will divide your adversaries (it’s an important element of what William Riker dubbed “heresthetics”). The disagreements over free speech rack national political debate as much as politics within colleges – hence the conservative strategy of crossing the streams, to rally their own troops and create disarray in the ranks of their opponents.

That helps explain why national newspapers keep on publishing pieces on this. It gets fights going, and attracts attention. It also helps explain the careers of people like Jonathan Chait (if you are in the attention economy, and the pieces you get the most attention for are the pieces that are most likely to divide your readers, then it is not difficult to do the math about how to maintain readership, and it would take an unusually high degree of moral probity to resist the implicit pressure). But the broader implication is that the political economy of conservative student organizations that Binder and Kidder describe isn’t just linked to right-wing media, but to the incentive structures of liberal media too. The strategy would be much less successful if it didn’t play into liberal-versus-left tensions and attention dynamics too.

1 Binder will become a colleague of mine at Johns Hopkins’ SNF Agora Institute next year. My interest in her and her colleagues’ findings long pre-dates this, and I haven’t consulted her in writing this (any mistakes, exaggerations or misinterpretations are completely mine).

The Moral Economy of High-Tech Modernism

[the below is the main text of Henry Farrell and Marion Fourcade, “The Moral Economy of High-Tech Modernism,” published in the Winter 2023 issue of Daedalus under a Creative Commons license. For the original in HTML form, click here, and for a nicely formatted PDF, click here.]

Abstract

While people in and around the tech industry debate whether algorithms are political at all, social scientists take the politics as a given, asking instead how this politics unfolds: how algorithms concretely govern. What we call “high-tech modernism”—the application of machine learning algorithms to organize our social, economic, and political life—has a dual logic. On the one hand, like traditional bureaucracy, it is an engine of classification, even if it categorizes people and things very differently. On the other, like the market, it provides a means of self-adjusting allocation, though its feedback loops work differently from the price system. Perhaps the most important consequence of high-tech modernism for the contemporary moral political economy is how it weaves hierarchy and data-gathering into the warp and woof of everyday life, replacing visible feedback loops with invisible ones, and suggesting that highly mediated outcomes are in fact the unmediated expression of people’s own true wishes.

Introduction

Algorithms—especially machine learning algorithms—have become major social institutions. To paraphrase anthropologist Mary Douglas, algorithms “do the classifying.”1 They assemble and they sort—people, events, things. They distribute material opportunities and social prestige. But do they, like all artifacts, have a particular politics?2 Technologists defend themselves against the very notion, but a lively literature in philosophy, computer science, and law belies this naive view. Arcane technical debates rage around the translation of concepts such as fairness and democracy into code. For some, it is a matter of legal exposure. For others, it is about designing regulatory rules and verifying compliance. For a third group, it is about crafting hopeful political futures.3

The questions from the social sciences are often different: How do algorithms concretely govern? How do they compare to other modes of governance, like bureaucracy or the market? How does their mediation shape moral intuitions, cultural representations, and political action? In other words, the social sciences worry not only about specific algorithmic outcomes, but also about the broad, society-wide consequences of the deployment of algorithmic regimes—systems of decision-making that rely heavily on computational processes running on large databases. These consequences are not easy to study or apprehend. This is not just because, like bureaucracies, algorithms are simultaneously rule-bound and secretive. Nor is it because, like markets, they are simultaneously empowering and manipulative. It is because they are a bit of both. Algorithms extend both the logic of hierarchy and the logic of competition. They are machines for making categories and applying them, much like traditional bureaucracy. And they are self-adjusting allocative machines, much like canonical markets.

Understanding this helps highlight both similarities and differences between the historical regime that political scientist James Scott calls “high modernism” and what we dub high-tech modernism.4 We show that bureaucracy, the typical high modernist institution, and machine learning algorithms, the quintessential high-tech modernist one, share common roots as technologies of hierarchical classification and intervention. But whereas bureaucracy reinforces human sameness and tends toward large, monopolistic (and often state-based) organizations, algorithms encourage human competition, in a process spearheaded by large, near-monopolistic (and often market-based) organizations. High-tech modernism and high modernism are born from the same impulse to exert control, but are articulated in fundamentally different ways, with quite different consequences for the construction of the social and economic order. The contradictions between these two moral economies, and their supporting institutions, generate many of the key struggles of our times.

=====

Both bureaucracy and computation enable an important form of social power: the power to classify.5 Bureaucracy deploys filing cabinets and memorandums to organize the world and make it “legible,” in Scott’s terminology. Legibility is, in the first instance, a matter of classification. Scott explains how “high modernist” bureaucracies crafted categories and standardized processes, turning rich but ambiguous social relationships into thin but tractable information. The bureaucratic capacity to categorize, organize, and exploit this information revolutionized the state’s ability to get things done. It also led the state to reorder society in ways that reflected its categorizations and acted them out. Social, political, and even physical geographies were simplified to make them legible to public officials. Surnames were imposed to tax individuals; the streets of Paris were redesigned to facilitate control.

Yet high modernism was not just about the state. Markets, too, were standardized, as concrete goods like grain, lumber, and meat were converted into abstract qualities to be traded at scale.6 The power to categorize made and shaped markets, allowing grain buyers, for example, to create categories that advantaged them at the expense of the farmers they bought from. Businesses created their own bureaucracies to order the world, deciding who could participate in markets and how goods ought to be categorized.

We use the term high-tech modernism to refer to the body of classifying technologies based on quantitative techniques and digitized information that partly displaces, and partly is layered over, the analog processes used by high modernist organizations. Computational algorithms—especially machine learning algorithms—perform similar functions to the bureaucratic technologies that Scott describes. Both supervised machine learning (which classifies data using a labeled training set) and unsupervised machine learning (which organizes data into self-discovered clusters) make it easier to categorize unstructured data at scale. But unlike their paper-pushing predecessors in bureaucratic institutions, the humans of high-tech modernism disappear behind an algorithmic curtain. The workings of algorithms are much less visible, even though they penetrate deeper into the social fabric than the workings of bureaucracies. The development of smart environments and the Internet of Things has made the collection and processing of information about people too comprehensive, minutely geared, inescapable, and fast-growing for considered consent and resistance.
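
The distinction between the two modes of machine learning mentioned above can be shown in miniature. The following is a deliberately toy sketch (one-dimensional data, Python standard library only, all names invented for illustration), not a description of how production systems are built:

```python
def supervised_classify(labeled, point):
    """Supervised learning in miniature: assign `point` the label of
    its nearest labeled training example (1-nearest-neighbor)."""
    return min(labeled, key=lambda pair: abs(pair[0] - point))[1]

def unsupervised_cluster(points, k, iters=10):
    """Unsupervised learning in miniature: group unlabeled points
    into k self-discovered clusters (one-dimensional k-means)."""
    # Seed the cluster centers with spread-out points from the data.
    centers = sorted(points)[:: max(1, len(points) // k)][:k]
    for _ in range(iters):
        # Assign every point to its nearest center...
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: abs(centers[i] - p))
            groups[i].append(p)
        # ...then move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return groups
```

The first function reproduces categories that humans have already labeled; the second discovers its own groupings from the data, which is one reason high-tech modernist categories can be hard for human minds to grasp.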

In a basic sense, machine learning does not strip away nearly as much information as traditional high modernism. It potentially fits people into categories (“classifiers”) that are narrower—even bespoke. The movie streaming platform Netflix will slot you into one of its two thousand–plus “microcommunities” and match you to a subset of its thousands of subgenres. Your movie choices alter your position in this scheme and might in principle even alter the classificatory grid itself, creating a new category of viewer reflecting your idiosyncratic viewing practices.

Many of the crude, broad categories of nineteenth-century bureaucracies have been replaced by new, multidimensional classifications, powered by machine learning, that are often hard for human minds to grasp.7 People can find themselves grouped around particular behaviors or experiences, sometimes ephemeral, such as followers of a particular YouTuber, subprime borrowers, or fans of action movies with strong female characters. Unlike clunky high modernist categories, high-tech modernist ones can be emergent and technically dynamic, adapting to new behaviors and information as they come in. They incorporate tacit information in ways that are sometimes spookily right, and sometimes disturbing and misguided: music-producing algorithms that imitate a particular artist’s style, language models that mimic social context, or empathic AI that supposedly grasps one’s state of mind.8 Generative AI technologies can take a prompt and generate an original picture, video, poem, or essay that seems to casual observers as though it were produced by a human being.

Taken together, these changes foster a new politics. Traditional high modernism did not just rely on standard-issue bureaucrats. It empowered a wide variety of experts to make decisions in the area of their particular specialist knowledge and authority. Now, many of these experts are embattled, their authority nibbled away by algorithms that advocates claim are more accurate, more reliable, and less partial than their human predecessors.

=====

One key difference between the moral economies of high modernism and high-tech modernism involves feedback. It is tempting to see high modernism as something imposed entirely from above. However, in his earlier book Weapons of the Weak, Scott suggests that those at the receiving end of categorical violence are not passive and powerless.9 They can sometimes throw sand into the gears of the great machinery.

As philosopher Ian Hacking explains, certain kinds of classifications—typically those applying to human or social collectives—are “interactive” in that

when known by people or those around them, and put to work in institutions, [they] change the ways in which individuals experience themselves—and may even lead people to evolve their feelings and behavior in part because they are so classified.10

People, in short, have agency. They are not submissive dupes of the categories that objectify them. They may respond to being put in a box by conforming to or growing into those descriptions. Or they may contest the definition of the category, its boundaries, or their assignment to it.11 This creates a feedback loop in which the authors of classifications (state officials, market actors, experts from the professions) may adjust the categories in response. Human society, then, is forever being destructured and restructured by the continuous interactions between classifying institutions and the people and groups they sort.

But conscious agency is only possible when people know about the classifications: the politics of systems in which classifications are visible to the public, and hence potentially actionable, will differ from the politics of systems in which they are not.

So how does the change from high modernism to high-tech modernism affect people’s relationships with their classifications? At its worst, high modernism stripped out tacit knowledge, ignored public wishes and public complaints, and dislocated messy lived communities with sweeping reforms and grand categorizations, making people more visible and hence more readily acted on. The problem was not that the public did not notice the failures, but that their views were largely ignored. Authoritarian regimes constricted the range of ways in which people could respond to their classification: anything more than passive resistance was liable to meet brutal countermeasures. Democratic regimes were, at least theoretically, more open to feedback, but often ignored it when it was inconvenient and especially when it came from marginalized groups.

The pathologies of computational algorithms are often more subtle. The shift to high-tech modernism allows the means of ensuring legibility to fade into the background of the ordinary patterns of our life. Information gathering is woven into the warp and woof of our existence, as entities gather ever finer data from our phones, computers, doorbell cameras, purchases, and cars. There is no need for a new Haussmann to transform cramped alleyways into open boulevards, exposing citizens to view.12 Urban architectures of visibility have been rendered nearly redundant by the invisible torrents of data that move through the air, conveying information about our movements, our tastes, and our actions to be sieved through racks of servers in anonymous, chilled industrial buildings.

The feedback loops of high-tech modernism are also structurally different. Some kinds of human feedback are now much less common. Digital classification systems may group people in ways that are not always socially comprehensible (in contrast to traditional categories such as female, married, Irish, or Christian). Human feedback, therefore, typically requires the mediation of specialists with significant computing expertise, but even they are often mystified by the operation of systems they have themselves designed.13

The political and social mechanisms through which people previously responded, actively and knowingly, to their categorization—by affirming, disagreeing with, or subverting it—have been replaced by closed loops in which algorithms assign people unwittingly to categories, assess their responses to cues, and continually update and reclassify them. The classifications produced by machine learning are cybernetic, in mathematician Norbert Wiener’s original sense of the word. That is, they are self-correcting: categories are automatically and dynamically adjusted in light of the reactions that they produce.
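
The closed loop described above can be sketched in a few lines. In this toy model (the function, threshold, and labels are all invented for illustration), a user’s category is a running estimate that is silently updated after every reaction to a cue, with no discrete, contestable moment of classification:

```python
def cybernetic_reclassify(engagement_stream, threshold=0.5, rate=0.3):
    """Sketch of a self-correcting classifier in Wiener's sense: the
    user's category is continually re-estimated from their reactions,
    never announced, never opened to dispute."""
    estimate = 0.0  # running estimate of the user's interest in a topic
    history = []
    for clicked in engagement_stream:  # 1 = engaged with cue, 0 = ignored it
        estimate += rate * (clicked - estimate)  # cybernetic feedback update
        history.append("fan" if estimate > threshold else "non-fan")
    return history
```

The point of the sketch is the structure, not the arithmetic: the user never sees the estimate, only the content chosen because of it, and their reactions to that content feed the next update.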

The changing politics of credit in the United States helps illuminate these differences. Until the 1970s, broad demographic characteristics such as gender or race—or high modernist proxies such as marital status or the redlining of poor, primarily Black neighborhoods—were routinely used to determine a person’s creditworthiness. It is only when categorical discrimination was explicitly forbidden that new actuarial techniques, aimed at precisely scoring the “riskiness” of specific individuals, started to flourish in the domain of credit.14

This did not just change how lenders “saw” individuals and groups, but also how individuals and groups thought about themselves and the politics that were open to them.15 Redlining was overt racial prejudice, visible to anyone who bothered looking at a map. But credit scoring turned lending risk evaluation into a quantitative, individualized, and abstract process. Contesting the resulting classifications or acting collectively against them became harder. Later, the deployment of machine learning—which uses even weaker signals to make its judgments, such as a person’s phone’s average battery level to determine their likelihood of repaying a loan—made the process of measuring creditworthiness even more opaque and difficult to respond to.16

Predictive scores that rely on behavioral measures eschew blatant racial discrimination. But it would be a mistake to think that they eliminate racial disparities—they just make them harder to see, sometimes allowing them to ramify further.17 This is why the political struggle against algorithms has emphasized historical biases embedded in training data sets and the inherent unfairness and poor performance of nontransparent, automated decision-making. The European Commission has proposed to regulate the use of “high risk” algorithms that endanger fundamental rights, subjecting them to frequent human review.18 This would include the use of algorithms for public benefit eligibility, credit scoring, law enforcement, immigration control, employment, and more. Finally, traditional high modernist professionals—including judges, journalists, and law enforcement officers—have also pushed back against the use of algorithms in their work, treating them as irrelevant, inefficient, or a status threat.19

=====

The moral economy of high-tech modernism is market-driven, both practically and ideologically. Many algorithm-based start-ups want to expand market share rapidly and aggressively. Once revenues exceed fixed costs, the additional cost of adding a new user is comparatively tiny. Platform companies like Facebook or YouTube can serve billions of customers with tens of thousands of employees. Machine learning algorithms can gather data about users and dynamically provide and adjust flows of content, while auction and matching algorithms can maintain dynamic markets for advertisers who want access to customers with specific demographic characteristics.

Algorithms institutionalize competition between units (whether people, organizations, or ideas) by fostering a market-based vision of fairness.20 The threat of being automated away looms large for all workers. Algorithmic technologies can also be implemented to hire and fire, to predict performance, influence, and riskiness, or to surveil, discipline, and arrest. They do so by rank-ordering according to their own particular versions of merit.21 It is as though anyone who applies themselves can do well, and as though social structure and existing power allocations do not matter. (The irony is that while high-tech modernist firms are happy to turn the market screw on everyone else, they strive to establish monopoly for themselves.)22

Just like the behavior of individuals, the distribution of knowledge must be subjected to the market test. High-tech modernism claims to represent popular judgment against the snobbishness of elites. Remember that Scott identifies high modernism as inherently antidemocratic because it enforces categories and objectives decided on by elites who “know better.”23 High-tech modernism, by contrast, systematically undermines elite judgment, fueling a crisis of expertise.24 Algorithms purport to read X-rays better than radiologists, predict purchases better than market researchers, understand people’s sexuality better than they themselves do, and produce new text or code better than many professional writers and engineers. Meanwhile, they elevate a kind of bottom-up wisdom. The network leaves it up to the crowd to judge what is worth knowing, generating collective sentiments through likes, clicks, and comments. Viral trends and online multitudes provide a kind of pseudodemocratic, if extremely volatile, vox populi.

The absence of visible hierarchy legitimates high-tech modernism’s claim that clouds and crowds best represent people’s wishes. Its new elites echo early libertarian arguments about cyberspace, and quasi-Hayekian defenses of the market, facially justifying the notion that search engines and other algorithms are disinterested means of processing the internet’s naturally dispersed stock of knowledge.25 They flatter high-tech modernism as defending the liberties of the individual, freed from physical and social bonds, against traditional status hierarchies. The abundant data that people “freely” upload or leave behind as they roam cyberspace become “an unqualified good,” fostering beneficial competition for everyone and everything.26

The awkward fact is that hierarchy has not disappeared. It has only become less visible. Platform companies’ business priorities determine the algorithms that are employed, as well as their “objective functions,” the weighted goals that they are supposed to maximize on. Social media corporations employ algorithms that maximize “engagement,” keeping consumers scrolling through feeds or watching video clips so that they keep seeing paid content that may itself be misleading. Amazon, in contrast, cares more about getting people to buy things, and, according to legal scholar and Federal Trade Commission Chair Lina Khan, uses its detailed transaction information and ability to rank search outcomes to fortify its market dominance.27 Platform companies dislike even tweaking their algorithms in response to regulators’ demands for fear that it might create a precedent for further interventions that would conflict with their business model.

As search engines have transformed from general-purpose technology to personal digital assistants, they have elevated searching the web and forming an opinion “for oneself” into a normative principle. People think of search engines as oracles, but as sociologist Francesca Tripodi and others have shown, they work more like distorting mirrors that variously confirm, exacerbate, or take advantage of people’s priors.28 Our interests and beliefs are embedded in the vocabulary we use, the questions we ask, perhaps our whole search history. YouTube, Facebook, and other social media present content based on what we have wanted to see in the past, and what other people who are like us across some array of dimensions have wanted to see.

In this way, platform companies have become knowledge intermediaries, like newspapers or school curriculum boards, while insulating themselves from traditional accountability. Their algorithms and (perhaps just as important) sharing and search tools help foster categories that can become self-reinforcing private universes of discourse, producing echo chambers in which other voices are silenced, or epistemic bubbles that guide users to apparent authorities who actively look to discredit other sources of information.29 However, the invisibility of hierarchy allows these knowledge intermediaries to justify themselves on laissez-faire principles, not telling the public what to trust, even while they quietly sink deeper into the Augean mire of moderating offensive, false, or misleading content.

Our universe of accessible knowledge is shaped by categorization processes that are invisible and incomprehensible to ordinary users, according to principles that have little regard for whether that knowledge is well sourced. The outcome is that the way that people “take [their] bearings in the world” is slowly changing.30 Visible feedback loops between the people being categorized, the knowledge they have access to, and the processes through which the categories are generated are replaced by invisible loops mediated through algorithms that maximize on commercial imperatives, sometimes creating incompatible and self-sustaining islands of shared (“post-truth”) beliefs among micropublics who have been categorized in particular ways, and who may themselves act to reinforce the categories. A new terrain of political struggle has arisen, involving the exploitation of information systems and algorithmic dynamics for partisan advantage.

This is a different set of moral pathologies than those suggested by social psychologist Shoshana Zuboff, who emphasizes platform companies’ manipulation of people’s wants and beliefs, which might or might not succeed.31 The more corrosive threat may be that people have been convinced that the high-tech modernist system of knowledge generation is an open buffet where “anything goes,” and that keeping it that way is essential to their own freedom. Anyone can offer content, anyone can be their own expert, and it is up to the algorithm to sort it out. Further, the new existential condition of transparency has provided everyone with potent tools to expose or doubt others, moderated only by their own vulnerability to being exposed in turn—an inherently agonistic situation.

=====

At the end of the day, the relationship between high modernism and high-tech modernism is a struggle between two elites: a new elite of coders, who claim to mediate the wisdom of crowds, and an older elite who based their claims to legitimacy on specialized professional, scientific, or bureaucratic knowledge.32 Both elites draw on rhetorical resources to justify their positions; neither is disinterested.

The robust offense and disbelief that many people feel about algorithmic judgments suggests that the old high modernist moral political economy, faults and all, is not quite dead. The new moral political economy that will replace it has not yet matured, but is being bred from within. Articulated by technologists and their financial backers, it feeds in a kind of matriphagy on the enfeebled body (and the critique) of its progenitor. Just as high modernist bureaucracies did before, high-tech modernist tools and their designers categorize and order things, people, and situations. But they do so in distinctive ways. By embedding surveillance into everything, they have made us stop worrying about it, and perhaps even come to love it.33 By producing incomprehensible bespoke categorizations, they have made it harder for people to identify their common fate. By relying on opaque and automated feedback loops, they have reshaped the possible pathways to political reaction and resistance. By increasing the efficiency of online coordination, they have made mobilization more emotional, ad hoc, and collectively unstable. And by insisting on market fairness and the wisdom of crowds as organizing social concepts, they have fundamentally transformed our moral intuitions about authority, truth, objectivity, and deservingness.

AUTHORS’ NOTE
We are grateful to Jenna Bednar, Angus Burgin, Eric Beinhocker, danah boyd, Robyn Caplan, Federica Carugati, Maciej Ceglowski, Jerry Davis, Deborah Estrin, Martha Finnemore, Sam Gill, Peter Hall, Kieran Healy, Rebecca Henderson, Natasha Iskander, Bill Janeway, Joseph Kennedy III, Jack Knight, Margaret Levi, Charlton McIlwain, Margaret O’Mara, Suresh Naidu, Bruno Palier, Manuel Pastor, Paul Pierson, Kate Starbird, Kathy Thelen, Lily Tsai, and Zeynep Tufekci for comments on an earlier version of this essay.

Endnotes
1 Mary Douglas, How Institutions Think (Syracuse, N.Y.: Syracuse University Press, 1986), 91.
2 Langdon Winner, “Do Artifacts Have Politics?” Dædalus 109 (1) (Winter 1980): 121–136.
3 Virginia Eubanks, “The Mythography of the ‘New’ Frontier,” MIT Communications Forum, 1999.
4 James Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (New Haven, Conn.: Yale University Press, 1998).
5 Robyn Caplan and danah boyd, “Isomorphism through Algorithms: Institutional Dependencies in the Case of Facebook,” Big Data & Society 5 (1) (2018): 1–12.
6 William Cronon, Nature’s Metropolis: Chicago and the Great West (New York: W. W. Norton, 1991).
7 Marion Fourcade and Kieran Healy, “Seeing Like a Market,” Socio-Economic Review 15 (1) (2017): 9–29.
8 Luke Stark, “The Emotive Politics of Digital Mood Tracking,” New Media and Society 22 (11) (2020): 2039–2057.
9 James Scott, Weapons of the Weak: Everyday Forms of Peasant Resistance (New Haven, Conn.: Yale University Press, 1985).
10 Ian Hacking, The Social Construction of What? (Cambridge, Mass.: Harvard University Press, 1999), 103–104.
11 Geoffrey Bowker and Susan Leigh Star, Sorting Things Out: Classification and Its Consequences (Cambridge, Mass.: The MIT Press, 1999).
12 Georges-Eugène Haussmann was the prefect responsible for the renewal and reimagining of Paris under Napoleon III.
13 Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016); and Jenna Burrell, “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms,” Big Data & Society 3 (1) (2016): 1–12.
14 Martha Poon, “From New Deal Institutions to Capital Markets: Commercial Consumer Risk Scores and the Making of Subprime Mortgage Finance,” Accounting, Organizations and Society 34 (5) (2009): 654–674.
15 Greta Krippner, “Democracy of Credit: Ownership and the Politics of Credit Access in Late Twentieth-Century America,” American Journal of Sociology 123 (1) (2017): 1–47.
16 Kai-Fu Lee, AI Superpowers: China, Silicon Valley and the New World Order (New York: Harper Business, 2018).
17 Solon Barocas and Andrew D. Selbst, “Big Data’s Disparate Impact,” California Law Review 104 (3) (2016): 671–732; Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Cambridge, Mass.: Polity, 2019); and Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: NYU Press, 2018).
18 European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (Brussels: European Commission, 2021).
19 Angèle Christin, Metrics at Work: Journalism and the Contested Meaning of Algorithms (Princeton, N.J.: Princeton University Press, 2020); and Sarah Brayne, Predict and Surveil: Data, Discretion, and the Future of Policing (New York: Oxford University Press, 2020).
20 Barbara Kiviat, “The Moral Limits of Predictive Practices: The Case of Credit-Based Insurance Scores,” American Sociological Review 84 (6) (2019): 1134–1158.
21 Marion Fourcade, “Ordinal Citizenship,” The British Journal of Sociology 72 (2) (2021): 154–173.
22 Peter Thiel, “Competition Is for Losers,” The Wall Street Journal, September 12, 2014.
23 Scott, Seeing Like a State.
24 Gil Eyal, The Crisis of Expertise (Cambridge, Mass.: Polity, 2019).
25 John Perry Barlow, “A Declaration of the Independence of Cyberspace,” Electronic Frontier Foundation, February 8, 1996; Friedrich von Hayek, “The Use of Knowledge in Society,” American Economic Review 35 (4) (1945): 519–530; Friedrich von Hayek, “Competition as a Discovery Procedure,” The Quarterly Journal of Austrian Economics 5 (3) (2002): 9–23; and Evgeny Morozov, “Digital Socialism? The Socialist Calculation Debate in the Age of Big Data,” New Left Review 116/117 (2019).
26 Wendy Brown, Undoing the Demos: Neoliberalism’s Stealth Revolution (Princeton, N.J.: Princeton University Press, 2015), 157.
27 Lina M. Khan, “Amazon’s Antitrust Paradox,” Yale Law Journal 126 (3) (2016–2017): 710–805.
28 Francesca Tripodi, Searching for Alternative Facts: Analyzing Scriptural Inference in Conservative News Practices (New York: Data & Society, 2018).
29 C. Thi Nguyen, “Echo Chambers and Epistemic Bubbles,” Episteme 17 (2) (2020): 141–161.
30 Hannah Arendt, “Truth and Politics,” in The Portable Hannah Arendt, ed. Peter Baehr (London: Penguin Classics, 2000), 568.
31 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: Public Affairs, 2019); and Tim Hwang, Subprime Attention Crisis (New York: FSG Originals, 2020).
32 William Davies, “Elite Power Under Advanced Neoliberalism,” Theory, Culture and Society 34 (5–6) (2017): 227–250; and Jenna Burrell and Marion Fourcade, “The Society of Algorithms,” Annual Review of Sociology 47 (2021): 213–237.
33 Nitsan Chorev, “The Virus and the Vessel, or: How We Learned to Stop Worrying and Love Surveillance,” Socio-Economic Review 19 (4) (2021): 1497–1513.