Counterfeit digital persons: On Dennett’s Intentional Stance, The Road to Serfdom

A few weeks ago, Daniel Dennett published an alarmist essay (“Creating counterfeit digital people risks destroying our civilization”) in The Atlantic that amplified concerns Yuval Noah Harari expressed in The Economist.+ (If you are in a rush, feel free to skip to the next paragraph, because what follows are three quasi-sociological remarks.) First, Dennett’s piece is (sociologically) notable because in it he is scathing about the “AI community” (many of whom are his fanbase) and its leading corporations (“Google, OpenAI, and others”). Dennett’s philosophy has not been known for leading one to a left-critical political economy, and neither has Harari’s. Second, Dennett’s piece is psychologically notable because it goes against his rather sunny disposition — he is a former teacher of mine and a sufficiently regular acquaintance — and the rather optimistic persona he has sketched of himself in his writings (recall this recent post); alarmism just isn’t Dennett’s shtick. Third, despite their prominence, neither Harari’s nor Dennett’s piece really reshaped the public discussion (in so far as there (still) is a public). And that’s because they compete with the ‘AGI induced extinction’ meme, which, despite being a lot more far-fetched, is scarier (human extinction > fall of our civilization) and is much better funded and supported by powerful (rent-seeking) interests.

Here’s Dennett’s core claim(s):

Money has existed for several thousand years, and from the outset counterfeiting was recognized to be a very serious crime, one that in many cases calls for capital punishment because it undermines the trust on which society depends. Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created… 

Another pandemic is coming, this time attacking the fragile control systems in our brains—namely, our capacity to reason with one another—that we have used so effectively to keep ourselves relatively safe in recent centuries.

You may ask, ‘What does this have to do with the intentional stance?’ For Dennett goes on to write, “Our natural inclination to treat anything that seems to talk sensibly with us as a person—adopting what I have called the ‘intentional stance’—turns out to be easy to invoke and almost impossible to resist, even for experts. We’re all going to be sitting ducks in the immediate future.” This is a kind of (or at least partial) road to serfdom thesis produced by our disposition to take up the intentional stance. In what follows, I show how these concepts come together in the threat posed by AIs designed to fake personhood.

More than a half century ago, Dan Dennett re-introduced a kind of (as-if) teleological explanation into natural philosophy by coining, and articulating over the course of a few decades of refinement, the ‘intentional stance’ and its role in identifying so-called ‘intentional systems,’ which just are those entities to which ascription of the intentional stance is successful. Along the way, he gave different definitions of the intentional stance (and of what counts as success). But here I adopt the (1985) one:

It is a familiar fact from the philosophy of science that prediction and explanation can come apart.* I mention this because it’s important to see that the intentional stance isn’t mere or brute instrumentalism. The stance presupposes prediction and explanation as joint necessary conditions.

In the preceding two paragraphs I have treated the intentional stance as (i) an explanatory or epistemic tool that describes a set of strategies for analyzing other entities (including humans and other kinds of agents) studied in cognitive science and economics (one of Dennett’s original examples).** But as the language of ‘stance’ suggests, and as Dennett’s examples often reveal, the intentional stance also describes (ii) our own ordinary cognitive practice even when we are not doing science. In his 1971 article, Dennett reminds the reader that this is “easily overlooked.” (p. 93) For Dennett, the difference between (i) and (ii) is one of degree (this is his debt to his teacher Quine), but for present purposes it is useful to keep them clearly distinct. (When I need to disambiguate, I will use ‘intentional stance (i)’ vs. ‘intentional stance (ii).’)

Now, as Dennett already remarked in his original (1971) article, but as I only noticed after reading Rovane’s (1994) “The Personal Stance” back in the day, there is something normative about the intentional stance because of the role of rationality in it (and, as Dennett describes, the nature of belief). And, in particular, it seems natural that when we adopt the intentional stance in our ordinary cognitive practice we tacitly or explicitly ascribe personhood to the intentional system. As Dennett puts it back in 1971, “Whatever else a person might be—embodied mind or soul, self-conscious moral agent, ‘emergent’ form of intelligence—he is an Intentional system, and whatever follows just from being an Intentional system thus is true of a person.” Let me dwell on a complication here.

That, in ordinary life, we are right to adopt the intentional stance toward others is due to the fact that we recognize them as persons, which is a moral and/or legal status. In fact, we sometimes adopt the intentional stance (ii) in virtue of this recognition even in high-stakes contexts (e.g., ‘what would the comatose patient wish in this situation?’). That we do so may be the effect of Darwinian natural selection, as Dennett implies, and that it is generally a successful practice may also be the effect of such selection. But it does not automatically follow that when some entity is treated successfully as an intentional system it thereby is, or even should be, a person. Thus, whatever follows just from being an intentional system is true of a person, but (and this is the complication) it need not be the case that what is true of a person is true of any intentional system. So far so good. With that in place, let’s return to Dennett’s alarmist essay in The Atlantic, and why it instantiates, at least in part, a road to serfdom thesis.

At a high level of generality, a road to serfdom thesis holds (this is a definition I use in my work in political theory) that an outcome unintended by social decision-makers [here profit-making corporations and ambitious scientists] is foreseeable to the right kind of observer [e.g., Dennett, Harari] and that the outcome leads to a loss of political and economic freedom over the medium term. I use ‘medium’ here because the consequences tend to follow in a time frame within an ordinary human life: generally longer than one or two years (which is the short run), and shorter than the centuries-long process covered by (say) the rise and fall of a previous civilization. (I call it a ‘partial’ road to serfdom thesis because a crucial plank is missing; see below.)

Before I comment on Dennett’s implied social theory, it is worth noting two things (and the second is rather more important): first, adopting the intentional stance is so (to borrow from Bill Wimsatt) entrenched in our ordinary cognitive practices that even those who can know better (“experts”) will do so in cases where they may have grounds to avoid doing so. Second, Dennett recognizes that when we adopt the intentional stance (ii) we have a tendency to confer personhood on the other (recall the complication). This mechanism helps explain, as Joshua Miller observed, how that Google engineer fooled himself into thinking he was interacting with a sentient person.

Of course, a student of history, or a reader of science fiction, will immediately recognize that this tendency to confer personhood on intentional systems can be highly attenuated. People and animals have been regularly treated as things and instruments. So, what Dennett really means, or ought to mean, is that we will encounter (or are already encountering) intentional systems designed (by corporations) to make it likely that we will automatically treat them as persons. Since Dennett is literally the expert on this, and has little incentive to mislead the rest of us on this very issue, it’s worth taking him seriously, and it is rather unsettling that even powerful interests with a manifest self-interest in doing so are not.

Interestingly enough, in this sense the corporations who try to fool us are mimicking Darwinian natural selection, because, as Dennett himself emphasized decades ago when people encountered the robot Cog in the lab, we all ordinarily have a disposition to treat, say, even very rudimentary eyes following or staring at us as exhibiting agency and as inducing the intentional stance in us. Software and human-factors engineers have been taking advantage of this tendency all along to make our gadgets and tools ‘user friendly.’

Now, it is worth pointing out that while digital environments are important to our civilization, they are not the whole of it. So, even in the worst-case scenario — our digital environment is already polluted, in the way Dennett worries, by self-replicating counterfeit people — you may think we still have some time to avoid conferring personhood on intentional systems in our physical environment and, thereby, also have time to partially cleanse our digital environment. Politicians still have to vote in person, and many other social transactions (marriage, winning the NBA) still require in-person attendance. This is not to deny that a striking number of transactions can be done virtually or digitally (not least in the financial sector), but in many of these cases we also have elaborate procedures (and sanctions) to prevent fraud, developed both by commercial parties and by civil society and government. This is a known arms race between identity-thieves, including self-replicating AI/LLMs who lack all sentience, and societies.

This known arms race actually builds on the more fundamental fact that society itself is the original identity thief: generally, for all of us, its conventions and laws fix an identity where there previously was none, displace other (possible) identities, and, sometimes, take away or unsettle the identity ‘we’ wish to have kept. (Here, too, there is a complex memetic arms race, in which any token of a society is simultaneously the emergent property while society (understood as a type) is the cause. [See David Haig’s book, From Darwin to Derrida, for more on this insight.]) And, of course, identity-fluidity also has many social benefits (as we can learn from our students or from gender studies).

Now, at this point it is worth returning to the counterfeit money example that frames Dennett’s argument. It is not obvious that counterfeit money harmed society. It did harm the sovereign, because it undermined a very important lever of power (and its sovereignty), namely the ability to insist that taxes are paid/levied in the very same currency/unit-system in which he/she paid salaries (and wrote IOUs) and other expenses. I don’t mean to suggest there are no other harms (inflation and rewarding ingenious counterfeiters), but these were neither that big a deal nor the grounds for making it a capital crime. (In many eras counterfeit money was useful to facilitate commerce in the absence of gold or silver coins.)

And, in fact, as sovereignty shifted to parliaments and people at the start of the nineteenth century, the death penalty for forgery and counterfeiting currency was abolished (and the penalties were reduced over time). I suspect this is also due to the realization that where systematic forgeries are successful they do meet a social need, and that a pluralist mass society itself is more robust than a sovereign who insists on full control over the mint. Dennett himself implicitly recognizes this, too, when he writes that “strict liability laws, removing the need to prove either negligence or evil intent, would keep them on their toes.” (This is already quite common in product liability and other areas of tort law around the world.)

I am not suggesting complacency about the risk identified by Harari and Dennett. As individuals, associations, corporations, and governments, we do need to commit to developing tools that prevent and mitigate the risk from our own tendency to ascribe personhood to intentional systems designed to fool us. We are already partially habituated to doing so with all our passwords, two-factor verification, ID cards, passport controls, etc.

In many ways, the other real risk here, and the reason I introduced the road to serfdom language up above (despite the known aversion to Hayek among many readers here at Crooked Timber), is that our fear of deception can make us overshoot in risk mitigation, and this, too, can undermine trust and the many other benefits of relatively open and (so partially) vulnerable networks and practices. So, it would be good if regulators and governments started the ordinary practice of eliciting expert testimony to craft well-designed laws right now, carefully calibrating them by attending both to the immediate risk from a profit-hungry AI community and to the long-term risk of creating a surveillance society in order to prevent ascribing personhood to the wrong intentional systems (think Blade Runner). For, crucially, in a (full) road to serfdom thesis, decisions taken along the way to ward off some unintended and undesirable consequences tend to lock in a worse-than-intended and de facto bad political outcome.

I could stop here, because this is my main point. But Dennett’s own alarmism is due to the fact that he thinks the public sphere (which ultimately has to support lawmakers) may already be so polluted that no action is possible. I quote again from The Atlantic:

Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation. 

I don’t think our liberal democracy depends on the informed consent of the governed. This conflates a highly idealized and normative view of democracy (that one may associate with deliberative or republican theories) with reality. It’s probably an impossible ideal in relatively large societies with complex cognitive division of labor, including the (rather demanding) sciences. (And it is also an ideal that gets abused in arguments for disenfranchisement.) So, while an educated populace should be promoted, in practice we have all kinds of imperfect, overlapping institutions and practices that correct for the lack of knowledge (parties, press, interest groups, consumer associations, academics, and even government bureaucracies, etc.)

It doesn’t follow that we should be complacent about the fact that many of the most economically and politically powerful people, corporations, and governments control our attention, as they already do a lot of the time. But this situation is not new; Lippmann and Stebbing diagnosed it a century ago, and it is probably an intrinsic feature of many societies. It is partially to be hoped that a sufficient number of the most economically and politically powerful people, corporations, governments, and the rest of us are spooked into action and social mobilization by Harari and Dennett to create countervailing mechanisms (including laws) that mitigate our tendency to ascribe personhood to intentional systems. (Hence this post.)

There is, of course, an alternative approach: maybe we should treat all intentional systems as persons and redesign our political and social lives accordingly. Arguably some of the Oxford transhumanists and their financial and intellectual allies are betting on this, even if it leads to human extirpation in a successor civilization. Modern longtermism seems to be committed to the inference from intentional stance (i) to ascription of personhood or moral worth. From their perspective, Dennett and Harari are fighting a rear-guard battle.

 

*Here’s an example: before Newton offered a physics that showed how Kepler’s laws hung together, lots of astronomers could marvelously predict the eclipses of planetary moons based on inductive generalizations alone. How good were these predictions? They were so good that they generated the first really reliable measure or estimate of the speed of light.
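To get a feel for the arithmetic (a rough sketch using standard textbook values for Rømer’s seventeenth-century observations, not figures from the post): the eclipses of Jupiter’s moon Io were observed to run roughly 22 minutes late when the Earth was farthest from Jupiter compared with when it was nearest, a lag attributed to light crossing the diameter of the Earth’s orbit, so

\[
c \;\approx\; \frac{2\,\mathrm{AU}}{\Delta t} \;\approx\; \frac{3.0 \times 10^{11}\,\mathrm{m}}{1320\,\mathrm{s}} \;\approx\; 2.3 \times 10^{8}\,\mathrm{m/s},
\]

within roughly 25 percent of the modern value of about 3.0 × 10^8 m/s.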

**Fun exercise: read Dennett’s 1971 “Intentional Systems” after you read Milton Friedman’s “The Methodology of Positive Economics” (1953) and/or Armen Alchian’s “Uncertainty, Evolution, and Economic Theory” (1950). (No, I am not saying that Dennett is the Chicago economist of philosophy!)

+Full disclosure: I read and modestly commented on Dennett’s essay in draft.

Lessons from the Baruch Plan for Nuclear Weapons

The invention of atomic energy posed a novel global challenge: could the technology be controlled to avoid destructive uses and an existentially dangerous arms race while permitting the broad sharing of its benefits? From 1944 onwards, scientists, policymakers, and other technical specialists began to confront this challenge and explored policy options for dealing with the impact of nuclear technology. We focus on the years 1944 to 1951 and review this period for lessons for the governance of powerful technologies, and find the following: Radical schemes for international control can get broad support when confronted by existentially dangerous technologies, but this support can be tenuous and cynical. Secrecy is likely to play an important, and perhaps harmful, role. The public sphere may be an important source of influence, both in general and in particular in favor of cooperation, but also one that is manipulable and poorly informed. Technical experts may play a critical role, but need to be politically savvy. Overall, policymaking may look more like “muddling through” than clear-eyed grand strategy. Cooperation may be risky, and there may be many obstacles to success.

That is by Waqar Zaidi and Allan Dafoe, at the Centre for Governance of AI, exactly the kind of work people should be doing.

The post Lessons from the Baruch Plan for Nuclear Weapons appeared first on Marginal REVOLUTION.

The Nuclear Non-proliferation Treaty and existential AGI risk

The Nuclear Non-Proliferation Treaty, activated in 1970, has been relatively successful in limiting nuclear proliferation.  When it comes to nuclear weapons, it is hard to find good news, but the treaty has acted as one deterrent of many to nation-states acquiring nuclear arms.  Of course the treaty works, in large part, because the United States (working with allies) has lots of nuclear weapons, a powerful non-nuclear military, de facto control of SWIFT, and so on.  We strongly encourage nations not to go acquiring nuclear weapons — just look at the current sanctions on Iran, noting the policy does not always succeed.

One approach to AI risk is to treat it like nuclear weapons and also their delivery systems.  Let the United States get a lead, and then hope the U.S. can (in conjunction with others) enforce “OK enough” norms on the rest of the world.

Another approach to AI risk is to try to enforce a collusive agreement amongst all nations not to proceed with AI development, at least along certain dimensions, or perhaps altogether.

The first of these two options seems obviously better to me.  But I am not here to argue that point, at least not today.  Conditional on accepting the superiority of the first approach, all the arguments for AI safety are arguments for AI continuationism.  (And no, this doesn’t mean building a nuclear submarine without securing the hatch doors.)  At least for the United States.  In fact I do support a six-month AI pause — for China.  Yemen too.

It is a common mode of presentation in AGI circles to present wordy, swirling tomes of multiple concerns about AI risk.  If some outside party cannot sufficiently assuage all of those concerns, the writer is left with the intuition that so much is at stake, indeed the very survival of the world, and so we need to “play it safe,” and thus they are led to measures such as AI pauses and moratoriums.

But that is a non sequitur.  The stronger the safety concerns, the stronger the arguments for the “America First” approach.  Because that is the better way of managing the risk.  Or if somehow you think it is not, that is the main argument you must make and persuade us of.

(Scott Alexander has a new post “Most technologies aren’t races,” but he neither chooses one of the two approaches listed above nor outlines a third alternative.  Fine if you don’t want to call them “races,” but you still have to choose.  As a side point, once you consider delivery systems, nuclear weapons are less of a yes/no thing than he suggests.  And this postulated take is a view that nobody holds, nor did we practice it with nuclear weapons: “But also, we can’t worry about alignment, because that would be an unacceptable delay when we need to ‘win’ the AI ‘race’.”  On the terminology, Rohit is on target.  Furthermore, good points from Erusian.  And this claim of Scott’s shows how far apart we are in how we consider institutional and also physical and experimental constraints: “In a fast takeoff, it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships.”)

Addendum:

As a side note, if the real issue in the safety debate is “America First” vs. “collusive international agreement to halt development,” who are the actual experts?  It is not in general “the AI experts,” rather it is people with experience in and study of:

1. Game theory and collective action (see the stylized example after this list)

2. International agreements and international relations

3. National security issues and understanding of how government works

4. History, and so on.
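To make the game-theoretic point concrete, here is a stylized two-country AI race in normal form (the payoff numbers are my own, purely for illustration; nothing in the post commits to them). Each country chooses to Pause or Race, and payoffs are listed as (Row, Column):

                 Column: Pause    Column: Race
Row: Pause       (3, 3)           (0, 4)
Row: Race        (4, 0)           (1, 1)

Racing strictly dominates pausing for both players, so (Race, Race) is the unique equilibrium even though both prefer (Pause, Pause). A collusive agreement of the second type has to change these payoffs through monitoring and sanctions, which is exactly the kind of enforcement problem that students of collective action and international relations study.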

There is a striking tendency amongst AI experts, EA types, AGI writers, and “rationalists” to think they are the experts in this debate.  But they are experts only on some issues, and many of those issues (“new technologies can be quite risky”) are not so contested. And because these individuals do not frame the problem properly, they are doing relatively little to consult what the actual “all things considered” experts think.

The post The Nuclear Non-proliferation Treaty and existential AGI risk appeared first on Marginal REVOLUTION.

No-Bullshit Democracy

Hugo Mercier, Melissa Schwartzberg and I have two closely related publications on what we’ve been calling “No-Bullshit Democracy.” One is aimed at academics – it’s a very short piece that has just been officially published in American Political Science Review. The other just came out in Democracy. It’s aimed at a broader audience, and is undoubtedly livelier. An excerpt of the Democracy piece follows – if you want to read it, click on this link. The APSR academic letter (which can be republished under a Creative Commons license) is under the fold. Which one you might want to read depends on whether you value footnotes more than fisticuffs, or vice versa …

The New Libertarian Elitists

What might be called “no-bullshit democracy” would be a new way of structuring democratic disagreement that would use human argumentativeness as a rapid-growth fertilizer. … But first we need to sluice away the bullshit that is being liberally spread around by anti-democratic thinkers. … Experts, including Brennan and Caplan (and for that matter ourselves), can be at least as enthusiastic as ordinary citizens to grab at ideologically convenient factoids and ignore or explain away inconvenient evidence. That, unfortunately, is why Brennan and Caplan’s books do a better job displaying the faults of human reasoning than explaining them.

Analytical Democratic Theory: A Microfoundational Approach

Henry Farrell, Hugo Mercier and Melissa Schwartzberg.

Abstract

A prominent and publicly influential literature challenges the quality of democratic decision making, drawing on political science findings with specific claims about the ubiquity of cognitive bias to lament citizens’ incompetence. A competing literature in democratic theory defends the wisdom of crowds, drawing on a cluster of models in support of the capacity of ordinary citizens to produce correct outcomes. In this Letter, we draw on recent findings in psychology to demonstrate that the former literature is based on outdated and erroneous claims and that the latter is overly sanguine about the circumstances that yield reliable collective decision making. By contrast, “interactionist” scholarship shows how individual-level biases are not devastating for group problem solving, given appropriate conditions. This provides possible microfoundations for a broader research agenda similar to that implemented by Elinor Ostrom and her colleagues on common-good provision, investigating how different group structures are associated with both success and failure in democratic decision making. This agenda would have implications for both democratic theory and democratic practice.

Over the last 15 years a prominent academic literature tied to libertarian thought has argued that democracy is generally inferior to other forms of collective problem solving such as markets and the rule of cognitive elites (Brennan 2016; Caplan 2008; Somin 2016). Following a long tradition of skepticism about democracy, these libertarians appeal to findings in cognitive and social psychology and political behavior to claim that decision making by ordinary citizens is unlikely to be rational or well grounded in evidence. Their arguments have been covered in magazines such as the New Yorker (Crain 2016) and popularized in proposals in the National Review for restrictions to dissuade “ignorant” people from voting (Mathis-Lilley 2021). Democratic theorists have mostly retorted with “epistemic” accounts, invoking mechanisms through which citizens can potentially reach good decisions—most significantly, deliberative mechanisms (Schwartzberg 2015).

This debate has been largely unproductive. Libertarian skeptics argue that democracy is generally inferior because of incorrigible flaws in citizens’ individual psychology, whereas democratic theorists lack a shared, compelling, and realistic micropsychological theory within which to ground their broader claims. Each side emphasizes empirical evidence that appears to support its own interpretation while discounting counterevidence.

This letter adopts a different approach. It demonstrates that democratic skeptics’ pessimistic conclusion—that democracy is unfixable—rests on a misleading and outdated account of the relevant psychological literature. Similarly, epistemic democrats often overestimate deliberation’s role in producing wise results or assume that aggregative models will operate at scale. We seek to avoid unwarranted skepticism and enthusiasm alike, instead providing microfoundations for a more empirically robust program investigating both the successes and mishaps of democracy, drawing on the experimental psychological literature on group problem solving (inter alia) to discover the conditions under which specific institutions perform well or fail in discovering solutions to collective problems.

Adapting a term from past debates, we contribute one foundational element of an approach that might be dubbed “analytical democracy.” Like the “analytical Marxism” associated with scholars such as G. A. Cohen, Jon Elster, John Roemer, and Adam Przeworski (see Roemer 1986), we provide more demanding and specific microfoundations for an account we find broadly sympathetic. Our research program might also be analogized to Ostrom’s work on the decentralized provision of common goods (Ostrom 1990). This emerged in response to Garrett Hardin’s influential article on “the tragedy of the commons,” which claimed that common-goods governance would inevitably collapse (Hardin 1968). Ostrom and her colleagues tested and falsified Hardin’s claims. However, rather than simply defending the proposition that decentralized communities could provide common goods, they investigated when common-good provision was likely to succeed or fail. Similarly, a research program on democratic problem solving, investigating success and failure, might not only provide possible foundations for a truly realistic account of democracy but also generate practical advice on building and improving democratic institutions. This program would build on research on the consequences of group composition and structure to understand the conditions under which democratic problem solving will operate well or badly.

Democratic Skepticism, Optimism and Social Science
A recent pessimistic literature, dominated by libertarian scholars, diagnoses widespread democratic ignorance and incompetence. Bryan Caplan (2008, 19) asserts that voters are irrational and “rule by demagogues … is the natural condition of democracy.” Jason Brennan believes that the democratic electorate is “systematically incompetent” so “some people ought not have the right to vote, or ought to have weaker voting rights than others” (Brennan 2016, 201, viii). Ilya Somin claims that “widespread public ignorance is a type of pollution” so that “democracy might function better if its powers were more tightly limited” (Somin 2016, 6, 9).

Each argues that democracy is profoundly flawed because of irremediable problems in individual incentives and cognition. Each proposes circumscribing democracy in favor of some purportedly superior alternative principle of social organization. Caplan claims that markets impose an effective “user fee” for irrationality that is absent from democracy (Caplan 2008, 133–4). Brennan proposes “epistocracy,” an aristocracy of those who know best. He defends restrictions on suffrage, identifying familiar possibilities such as restricting the franchise to those who pass a voter qualification exam and assigning plural votes to college graduates. Somin advocates what he calls “foot voting” (exit) over “ballot box voting” and emphasizes “the market and civil society as an alternative to government” (Somin 2016, 154), although he admits that the benefits “are likely to vary from issue to issue, from nation to nation, and perhaps also from group to group” (180).

These scholars ground their claims in social science findings. They invoke a literature leading back to Downs’s (1957) argument that citizens are rationally ignorant about politics because they do not have sufficient incentive to gather good information or to make good decisions. They emphasize that ordinary citizens display severe cognitive bias. Caplan (2008) blames such biases for differences between voters’ beliefs about economics and the beliefs of PhD economists, which he takes as a reasonable representation of empirical truth. Brennan (2016, 37ff) and Somin (2016, 94ff) cite work showing that biases lead people to search for information that supports their prior views and “not only reject new information casting doubt on their beliefs but sometimes actually respond by believing in them even more fervently” (Somin, 93–4; invoking the “backfire effects” described in Nyhan and Reifler 2010).

Brennan (2016, 40) unites rational ignorance and cognitive bias into a single stylized account in which most voters are either low information “hobbits” (ignorant) or politically fanatical “hooligans” (biased). He invokes Mercier and Sperber’s explanation of how “[r]easoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments” (Brennan 2016, 38). Furthermore, “human beings are wired not to seek truth and justice but to seek consensus… . They cower before uniform opinion” (Brennan 2012, 8; see also Brennan 2016, 47) as demonstrated by the famous Asch (1956) “conformity experiments,” where participants followed the obviously false opinions of confederates who were sitting next to them.

Achen and Bartels’ (2016) “realist” account of democracy does not share the skeptics’ normative priors but provides a similarly bleak judgment. They too draw on Asch and “similar studies” for social psychological microfoundations that stress the force of group identity and conformity (Achen and Bartels 2016, 220).

There is little scope for democratic problem solving if individual consensus seeking invariably leads to group conformity and “echo chambers” (Sunstein 2002), affective polarization (Iyengar et al. 2018), the rejection of countervailing arguments from nongroup members, and backfire effects. Yet it is far from clear that the despairing picture is empirically accurate. Growing affective polarization may not increase ideological polarization and extremism (e.g., Desmet and Wacziarg 2021). People’s economic beliefs are affected by economic reality (e.g. Duch and Stevenson 2008). Party leaders influence party members on some issues but on others adopt what they perceive to be the public’s dominant opinion (Lenz 2013). Backfire effects are the exception, not the rule (Nyhan 2021; Wood and Porter 2019). People generally change their minds when presented with well-sourced facts and good arguments (see, e.g., Nyhan et al. 2020; Sides 2015).

In part, we do not see the expected universally negative consequences because citizens are not as ignorant as the skeptical consensus suggests. “Issue publics,” whose members acquire specialized information on a particular issue across a spectrum of opinion (Converse 1964), provide an important epistemic resource for democracy (Elliott 2020; Han 2009). Citizens do better on domain-specific knowledge, including information about candidates’ positions on issues they care about (Henderson 2014; Krosnick 1990), than on the surveys of general factual information that skeptics rely on.

More fundamentally, individual-level biases are not devastating for collective democratic problem solving. The psychological literature on group effects and individual cognition is systematically misunderstood by skeptics and underexploited by political scientists. Contrary to Brennan’s (2016) misinterpretation, scholars like Mercier and Sperber (2017) find that even if humans are subject to “myside bias,” they can filter out erroneous messages (including those from their “side”) and change their minds when presented with good evidence from the other “side.” A realistic understanding of the capacities of democratic citizens need not be altogether bleak.

But it should not be overly sanguine. Democratic theorists (including those who are interested in practicalities) often rely on either conjecture or quasi-empirical claims. For instance, David Estlund argues that democratic procedures will tend to outperform non-democratic ones epistemically while acknowledging that the claim is conjectural rather than empirical (Estlund 2008, 157, 160, 176). Hélène Landemore (2020, 8) asserts more forcefully that what she calls “open democracy” is empirically superior to other forms of social decision making: “in a complex and uncertain world, … empowering all members of the demos equally … is overall the best method we have to figure out solutions to common problems.”

We lack a research framework for establishing whether this strong assertion is more robust than competing claims from those who champion different forms of democratic decision making or who emphasize the possibility of democratic failure. Even if deliberation and other forms of reasoned exchange are morally valuable, they may not necessarily yield superior solutions to problems. Extrapolations such as Landemore’s (2013, 104) “Numbers Trump Ability” postulate that democracy can readily be scaled up so that “if twelve jurors are smarter than one, then so would forty-one or 123 jurors,” building on Hong and Page’s (2004) “Diversity Trumps Ability” theorem. Such claims are qualified by empirical findings from jury deliberations (Watanabe 2020) and Hong and Page’s later prediction that increasing group size does not necessarily improve problem-solving capability (Hong and Page 2021).
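For readers who have not met the model behind these claims, here is a minimal toy version of the Hong and Page (2004) setup in Python. Every parameter (ring size, heuristic length, step range, team size, random seed) is my own illustrative choice, not the authors’ code; it is a sketch of the mechanism, not a replication.

import itertools
import random

# A toy Hong-Page landscape: N points on a ring, each with a random value.
random.seed(0)
N = 200
values = [random.random() for _ in range(N)]

# An agent is an ordered heuristic: a tuple of distinct step sizes. From its
# current point it tries each step in order, moves to the first point that
# improves the value, and stops when no step helps.
def climb(agent, start):
    pos = start
    while True:
        for step in agent:
            nxt = (pos + step) % N
            if values[nxt] > values[pos]:
                pos = nxt
                break
        else:
            return pos  # no step improved: a local optimum for this agent

# Individual ability: the value of the point an agent ends on,
# averaged over all starting points.
def ability(agent):
    return sum(values[climb(agent, s)] for s in range(N)) / N

# Team search: agents work in relay from the team's current point
# until a full pass produces no further movement.
def team_climb(team, start):
    pos = start
    while True:
        new = pos
        for agent in team:
            new = climb(agent, new)
        if new == pos:
            return pos
        pos = new

def team_ability(team):
    return sum(values[team_climb(team, s)] for s in range(N)) / N

# All ordered heuristics of three distinct steps drawn from 1..12.
agents = list(itertools.permutations(range(1, 13), 3))
ranked = sorted(agents, key=ability, reverse=True)

best_team = ranked[:10]                  # the ten highest-ability agents
random_team = random.sample(agents, 10)  # a randomly drawn (diverse) team

print("best-agent team:", round(team_ability(best_team), 4))
print("random team:   ", round(team_ability(random_team), 4))

On many seeds the randomly drawn team matches or beats the team of top performers, because the best individual heuristics tend to resemble one another and so get stuck at the same local optima. That, and nothing stronger, is the “diversity trumps ability” effect, and the sketch makes visible how it depends on group size and on how diverse the available pool is.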

To move away from general claims for democracy’s superiority, epistemic democrats need to understand not just when democracy works but also when it doesn’t. Neblo et al. (2017, 915) establish an important possibility claim by showing how “scholars have assembled strong evidence that deliberative institutions positively influence citizens.” Still, it is hard to build from such demonstrations to a properly scientific account that can explain both democratic success and failure without some externally grounded theory of human decision making. Similarly, there is no very straightforward way of moving from a demonstration that Habermasian claims for deliberation can be grounded in plausible psychological mechanisms (Minozzi and Neblo 2015) to a broader account of when these mechanisms will or will not operate.

Surprisingly, possible microfoundations for such an account can be found in the literature on group psychology and cognition that skeptics have deployed against democracy. As Landemore (2013, 143) says, the “argumentative theory of reasoning” allows us to predict where deliberation will and will not work well. This is a pivotally important claim: we need to know where deliberation will function well to empirically assess theories of institutional design and practical justifications of democracy.

The argumentative account of reasoning is grounded in a recent “interactionist” literature in psychology, which explores how individual bias may or may not be corrected through social interaction. It investigates how mechanisms of “epistemic vigilance” allow people to employ cues to evaluate communicated information, including the expertise and benevolence of the source, the plausibility of the message, and the quality of the arguments (for an overview, see Mercier 2020; Sperber et al. 2010). Chambers (2018) has also identified both the interactionist approach and the empirical literature on deliberation as reasons to doubt skeptical claims based on group psychology.

For example, contrary to skeptical claims that people conform to majority opinion, the experimental literature finds that people take account of relevant cues when evaluating the majority opinion, including the absolute and relative size of the majority, the competence and benevolence of the majority’s members, the degree of dependency in the opinions of the majority, and the plausibility of the opinion (for review, see Mercier and Morin 2019). The much-bruited Asch (1956) experiments describe the consequences of external pressure rather than those of internalized bias. Practically no one was influenced when participants did not have to voice their opinion in front of the group, and, contrary to widespread academic folklore (Friend, Rafferty, and Bramel 1990), the experiments demonstrated independence as well as conformity. The literature finds that people are well able to evaluate arguments, that they are more influenced by strong than by weak reasons (e.g., Hahn and Oaksford 2007), and that they partly change their minds when confronted with challenging but good arguments (e.g., Guess and Coppock 2020).

Interactionist scholarship suggests that reasoning processes are best evaluated in their normal environment of social interaction. It provides possible microfoundations for theories of variation. Instead of looking to the (supposedly invariant) cognitive limitations of ordinary citizens as skeptics do, an interactionist approach suggests that we should investigate the social context of decisions—how groups are structured—to understand when group identity and social pressure can distort or swamp problem solving. Both problem-solving capacity (which depends on whether groups harness individual biases and mechanisms of epistemic vigilance) and collective pressures to conformity will plausibly vary with group structure. Skeptical accounts, which depict group politics as simple condensates of individual bias writ large, are poorly fitted to capturing this variation. Equally, interactionism provides microfoundations for a framework that can investigate democratic theorists’ findings about when democracy works well while also investigating democratic failure.

This provides a more promising path forward than does the universal pessimism of democratic skeptics. It also provides more robust foundations for the claim that deliberation can occur under psychologically realistic circumstances and a starting point for investigating what those circumstances are. Democratic “realists” like Achen and Bartels (2016) need not be democratic pessimists. A microfoundational approach, grounded in endemic individual cognitive bias, avoids the possible charge that the desired normative outcomes are baked into the initial empirical assumptions.

If outright democratic skeptics are sincerely committed to understanding the cognitive underpinnings of democratic processes, as their reliance on this literature ought to entail, they too should find it attractive. It allows the serious investigation of observed democratic failure as well as democratic success. Of course, these are not the only possible microfoundations, and like all empirically based accounts, they may be modified or even rejected as empirical evidence emerges.

Still, such microfoundations could support a broader analytical account that seeks to understand and address variation. If both the benefits and disadvantages of democracy arise at the group rather than individual level, then the challenge for advocates of democracy is to build democratic institutions that can better trigger the relevant cognitive mechanisms so as to capture the benefits of group problem solving instead of deferring to the social pressures that do sometimes lead to conformity. In other words, our goal is to better explain how democracy incorporates the capacities of groups to solve problems (under some circumstances) as well as their tendency to magnify conformity and factionalism (under others).

We do not provide a complete alternative account of democracy here. That would be a heroic undertaking, which would involve not just providing microfoundations but rebuilding existing institutional and organizational theories on their basis. Instead, we sketch the beginnings of a broader research program that we hope others will find attractive.

A Research Program on Democratic Problem Solving
Ostrom (1990) began by demonstrating the systematic flaws in Hardin’s skepticism of common goods but went on to articulate a coherent alternative research agenda on the conditions under which common goods provision succeeds or fails. Political science and related disciplines should commence a similar research program, uniting scientific research on group composition, network structure, and institutional form to investigate the conditions under which democratic problem solving is likely to succeed or fail.

As we have argued, this program could build on research in experimental cognitive psychology, which provides an alternative set of microfoundations to both rational choice and the social psychological arguments that have dominated political science debates. Specifically, this research identifies specific dimensions along which trade-offs in group problem solving plausibly occur:

• Between social comfort and comfort in expressing dissent (Baron 2005).

• Between shared common ground and some measure of preexisting disagreement. Stasser and Titus (1985) point to the benefits of ground-level agreement for problem solving, whereas Schulz-Hardt et al. (2006) discuss how some level of background dissent allows for better problem solving.

• Between group size and the need to represent diversity. Fay, Garrod, and Carletta (2000) discuss how the quality of communication deteriorates as group size increases, whereas Hong and Page (2004; 2021) highlight the benefits of diversity and its complex interaction with group size, and Mercier and Claidière (2022) examine whether deliberation is robust to increases in group size.

• Between pressures for conformity and concerns for epistemic reputation. Origgi (2017) describes how the cognitive mechanisms of reputation can generate both market bubbles and reliable collective information systems.

By understanding how different positions in this multidimensional space are associated with better or worse problem solving, we can arrive at useful hypotheses about how to fashion democratic systems. This research program should also incorporate scholarship on a broader level of social aggregation, which explores how network structure and social influence affect flows of information and opinion between individuals with different perspectives (Feng et al. 2019). It might incorporate practical findings about democratic decision making—for instance, the circumstances under which juries can form more accurate collective beliefs (Salerno and Diamond 2010) and how citizen constitutional assemblies (Farrell and Suiter 2019) and online town halls (Neblo, Esterling, and Lazer 2018) can support better communication between politicians and the public.

Crucially, the proposed research program would investigate democratic failures as well as successes, better explaining, for example, the circumstances under which epistemic breakdown and misinformation can become established in democracies. O’Connor and Weatherall (2018) and Weatherall and O’Connor (2021) investigate how epistemic factionalization occurs among people who do not trust others with different beliefs. Nyhan (2021) emphasizes the importance of elite messaging and information decay in spreading misinformation, suggesting that punishing elites who spread falsehoods and focusing on intermediaries may have benefits.

Finally, such a research program would help address recent (Neblo et al. 2017) and current (Notes from the Editors 2020) demands for a “translational” approach to democracy that “challenges dominant disciplinary norms.” It would seek to reconcile scientific rigor with normative analysis, providing the groundwork for institutional improvement and reform.

That was then, this is now

From Taylor C. Sherman’s useful Nehru’s India: A History in Seven Myths:

Although Hindu nationalists had gained prominence in the run-up to partition, the new Congress leaders of the Government of India tried to sideline them.  After Gandhi’s assassination on 30 January 1948, members of the Rashtriya Swayamsevak Sangh were arrested, and the Hindu Mahasabha declared it would not take part in politics.  In short, though raging before partition, the flames of Hindu chauvinism were quickly doused after independence, at least according to the old nationalist narrative.  Secondly, the reform of Hinduism was seen as an essential element of secularism.  To this end, a prominent Dalit, Bhimrao Ramji Ambedkar, was put in charge of both writing the Constitution and overseeing reform of Hindu personal law.  Within a short time after independence, so the myth goes, India had a secular state, and was on course to establish a sense of security and belonging for the two groups who had raised the loudest objections to Congress’s nationalism: Muslims and Dalits.

As with so many of the myths that have arisen about this period after independence, the myth of Indian secularism owes a great deal to Jawaharlal Nehru.

The book is both a good focused view of the Nehru era and excellent background for current disputes.

The post That was then, this is now appeared first on Marginal REVOLUTION.

Which political tweets do best?

Analyzing every tweet of all US senators holding office from 2013 to 2021 (861,104 tweets from 140 senators), we identify a psycholinguistic factor, greed communication [TC: basically accusing other people of greed], that robustly predicts increased approval (favorites) and reach (retweets). These effects persist when tested against diverse established psycholinguistic predictors of political content dissemination on social media and various other psycholinguistic variables. We further find that greed communication in the tweets of Democratic senators is associated with greater approval and retweeting compared to greed communication in the tweets of Republican senators, especially when those tweets also mention political outgroups.

That is from new research by Eric J. Mercadante, Jessica L. Tracy, and Friedrich M. Götz.  Via David Lilienfeld.

The post Which political tweets do best? appeared first on Marginal REVOLUTION.

My Conversation with Yasheng Huang

Here is the audio, video, and transcript. Yasheng is a China scholar and a professor at MIT.  Here is part of the episode summary:

Yasheng joined Tyler to discuss China’s lackluster technological innovation, why declining foreign investment is more of a concern than a declining population, why Chinese literacy stagnated in the 19th century, how he believes the imperial exam system deprived China of a thriving civil society, why Chinese succession has been so stable, why the Six Dynasties is his favorite period in Chinese history, why there were so few female emperors, why Chinese and Chinese Americans have fared less well becoming top CEOs of American companies than Indians and Indian Americans, where he’d send someone on a two-week trip to China, what he learned from János Kornai, and more.

And an excerpt:

COWEN: Now, in your book, you write of what you call Tullock’s curse — Gordon Tullock having been my former colleague — namely, embedded succession conflict in an autocracy. Why has Chinese succession been so stable up to now? And will we see Tullock’s curse whenever Xi steps down, passes on, whatever happens there?

HUANG: I do want to modify the word that you use, stable. There are two ways to use that term. One is to describe the succession process itself. If that’s the situation we’re trying to describe, it is not stable at all. If you look at the entire history of the PRC, there have been so many succession plans that failed, and at a catastrophic level. One potential successor was persecuted to death. Another fled and died in a plane crash. Others were unceremoniously dismissed, and one was put under house arrest for almost 15 years, and he died —

COWEN: But no civil war, right?

HUANG: Yes, that’s right.

COWEN: No civil war.

HUANG: That’s right. There’s another way to talk about stability, which is stability at the system level, and that, you are absolutely right. Despite all these problems with these successions, the system as a whole has remained stable. The CCP is in power. There’s no coup, and there were not even demonstrations on the street associated with the succession failures. So, we do need to distinguish between these two kinds of stability. By one criterion, it was not stable. By the other criterion, it is quite stable.

The reason for that is, I think — although it’s a little bit difficult to generalize because we don’t really have many data points — one reason is the charismatic power of individual leaders, Mao and Deng Xiaoping. These were founding fathers of the PRC, of the CCP, and they had the prestige and — using Max Weber’s term — charisma, such that they could do whatever they wanted while being able to contain the spillover effects of their mistakes. The big uncertain issue now is whether Xi Jinping has that kind of charisma to contain future spillover effects of succession failure.

This is a remarkable statistic: Since 1976, there have been six leaders of the CCP. Of these six leaders, five of them were managed either by Mao or by Deng Xiaoping. Essentially, the vast majority of the successions were handled by these two giants who had oversized charisma, oversized prestige, and unshakeable political capital.

Now we have one leader who doesn’t really have that. He relies mostly on formal power, and that’s why he has accumulated so many titles, while he is making succession errors similar to those of the previous two leaders.

Obviously, we don’t know — because he hasn’t chosen a successor — we don’t really know what will happen if he chooses a successor. But my bet is that the ability to contain the spillover effect is going to be less, rather than more, down the road, because Xi Jinping does not match, even in a remote sense, the charisma and the prestige of Mao Zedong and Deng Xiaoping. There’s no match there.

Recommended.  And I am happy to recommend Yasheng Huang’s forthcoming book The Rise and Fall of the East.

Pre-order here: https://www.amazon.com/dp/0300266367?ref_=cm_sw_r_cp_ud_dp_CXCHDSQB8JBKEXM4J5BE

The post My Conversation with Yasheng Huang appeared first on Marginal REVOLUTION.

Competing for residents rather than businesses

Amazon is pulling back from its second headquarters expansion in Crystal City (yes I still call it that), and this will herald a new age of lesser competition for businesses and their main offices:

…the growing difficulty of courting corporations. If Amazon stiffs Northern Virginia, future politicians elsewhere may be less eager to promise tax breaks and infrastructure investments, not to mention spend their reputational capital. Politically speaking, it will be harder for urban and suburban leaders to rise to the top by attracting new major corporate tenants. “Pro-business” local governments may be less common in the years to come.

Another relevant trend is the work-from-home and hybrid models. Why should a major corporation invest in more office space if a lot of that space will be used only part of the time?

It is worth thinking through how remote and hybrid work will affect regional evolution. There have already been “booms” in some relatively small resort areas, such as parts of Maine, Long Island and West Virginia. But there will be a more general impact as well. To the extent corporations give up on clustering their talent in big office buildings, people will spread out where they live. Not everyone will set down stakes in the Hamptons or along the Irish coast. Plenty of people will want to live near family or where they were born, or perhaps a few hours away from the main office as part of a hybrid arrangement.

In this new world, it will be much harder for a well-governed region to rise to the top. Even if its leaders succeed in convincing a company to relocate, for instance, there may be fewer workers who do so. Or perhaps there will be the same number of workers but they will come into the office less frequently and live scattered in many directions, sometimes in other states or metropolitan areas.

There is nothing necessarily wrong with this outcome. But the potential parvenu region just won’t feel that exciting, and the level of activity won’t feed upon itself in terms of attracting more retail and cultural amenities.

And:

Overall, there may be less competition to attract corporations. At the same time, political competition for residents may become more intense, because more people will be able to choose where to live regardless of where they work. This competition could lead to improvements in schools and parks.

Here is the rest of my Bloomberg column.

The post Competing for residents rather than businesses appeared first on Marginal REVOLUTION.

Statement of Commitment to Academic Freedom and to Intellectual Merit

Academic freedom and intellectual merit are under attack in the United States, from both the left and the right. The norms of the university and intellectual life are fragile and need protecting because such norms are always in tension with political and economic power.

The undersigned members of the GMU Department of Economics express their commitment to academic freedom and to intellectual merit.

Addressed to the George Mason University (GMU) community and the public at large

~~~

American universities have professed allegiance to two ideals. First, the ideal of academic freedom – the right of students and faculty to express any idea in speech or writing, without fear of university punishment, and secure in the knowledge that the university will protect dissenters from threats and violence on campus.

Second, the ideal of intellectual merit – the right and duty of academic departments to hire and promote the most brilliant, creative, and productive faculty in their fields, and admit the most intellectually promising students, without pressures from the administration.

These ideals are the cornerstones of liberal education. They protect faculty and students who hold views unpopular on university campuses. Academic freedom protects existing students and faculty who dissent from current dominant academic opinion and ideology. No matter how unpopular their views, they know the university will protect them. As stated in the University of Chicago Statement on freedom of expression and as quoted in GMU’s “Free Speech at Mason” Statement:

[We must hold a fundamental commitment to] the principle that debate or deliberation may not be suppressed because the ideas put forth are thought by some or even by most members of the University community to be offensive, unwise, immoral, or wrong-headed.

Intellectual merit protects prospective students and faculty who speak and write against current dominant viewpoints. No matter how unpopular their views, they know that university administration will not obstruct or prejudice their admission, hiring, or promotion.

Recently, both of these ideals have come under attack. Pressure for conformity has intensified and universities have increasingly interfered with departments’ personnel decisions. For example, one of the more egregious new practices at some universities is requiring written “diversity” statements from prospective students, staff, or faculty, which are then used to discriminate among candidates, often by quarters of the university with interests other than those of the department or unit. Such methods recall arrogations of the past, such as the Levering Act of 1950, used against radicals.

We strongly believe the attacks on academic freedom and intellectual merit are deeply mistaken. The classic rationales in favor of these ideals are sound. To protect them, viewpoint diversity must be celebrated and academic departments must maintain their ability to select, hire, and promote students and personnel based on intellectual merit. We insist that the degree of institutional autonomy that the GMU Department of Economics has traditionally enjoyed is vital to the health of viewpoint diversity not only within the university but within the academy writ large.

It is vital that every department in a university enjoys independence, so it can dare to be different and keep viewpoint diversity alive. George Mason University has excelled in supporting viewpoint diversity with a variety of departments, centers, and organizations. Viewpoint diversity at George Mason has benefited the university, the United States, and the wider intellectual world.

Indeed, some of the Department’s chief contributions have taught that all forms of authority can exert power to excess, and that guarding against such excess calls for the very ideals affirmed here: respect for dissent and intellectual merit.

We, the undersigned members of the GMU Department of Economics, look forward to continuing our independence to do good economics according to our judgment, guided by the ideals of academic freedom and intellectual merit.

Signed by the following GMU Department of Economics faculty (full-time & emeritus):

1. Jonathan P. Beauchamp
2. James T. Bennett
3. Donald J. Boudreaux
4. Bryan D. Caplan
5. Vincent J. Geloso
6. Timothy Groseclose
7. Robin D. Hanson
8. Garett Jones
9. Daniel B. Klein
10. Mark Koyama
11. David M. Levy
12. Cesar A. Martinelli
13. John V.C. Nye
14. Thomas C. Rustici
15. Vernon L. Smith
16. Alex Tabarrok
17. Karen I. Vaughn
18. Richard E. Wagner
19. Lawrence H. White

Matt Yglesias on depression and political ideology

The connection between Left thinking and high neuroticism (as a formal personality trait) is one of the most underdiscussed yet important themes in American politics (to be clear, the Right has its own pathologies as well).  Here is one excerpt from Matt’s latest Substack (gated):

But I want to talk about something Goldberg mentions but doesn’t focus on: a 2021 paper by Catherine Gimbrone, Lisa Bates, Seth Prins, and Katherine Keyes titled “The politics of depression: Diverging trends in internalizing symptoms among US adolescents by political beliefs.” The CDC survey doesn’t ask teens about their political beliefs, but Gimbrone et al. find not only divergence by gender, but divergence by political ideology. Breaking things down by gender and ideology, they find that liberal girls have the highest increase in depressive affect and conservative boys have the least. But liberal boys are more depressed than conservative girls, suggesting an important independent role for political ideology.

And:

Some of it might be selection effect, with progressive politics becoming a more congenial home for people who are miserable. But I think some of it is poor behavior by adult progressives, many of whom now valorize depressive affect as a sign of political commitment. The thing about depression, though, is that it’s bad.

And:

…progressive institutional leaders have specifically taught young progressives that catastrophizing is a good way to get what they want…

One of the best and most important pieces you will read this year.  I would go so far as to say that you cannot understand the American public intellectual sphere without a grasp of the close connection between Left thinking and high neuroticism.  I would add that incorporating gender expectations into Matt’s analysis would give it yet more explanatory power.  Once you see all this, you can never look away again and forget it…

A view that hardly anyone embraces

It is not an airtight view, but it is also not the least plausible view.  Imagine a “basic needs” argument that suggests, à la David Braybrooke, that individuals truly have positive rights to a certain degree of sustenance, health care, shelter, and so on.  Yet above that basic needs level, individuals don’t have positive rights to much of anything at all.  They are left to fend for themselves, though of course they will benefit from social cooperation.  After all, positive rights have to stop somewhere if only because of the scarcity constraint.  Furthermore, perhaps what society owes a person is “enough to construct a meaningful life,” but not so much more.

You may or may not agree, but as a view it is not crazy.

To make this more specific, imagine a health care policy that states individuals have a true right to access any health care technology invented up through say…2004 or so.  Individuals would be guaranteed “2004 value health care lives.”  (In 2004 that certainly seemed pretty good.)  But for subsequent health care developments, a free market will reign.  Is not guaranteeing basic needs an essential part of the egalitarian argument?  Surely not everything needs to be equalized?  Anyway, no one believes in guaranteeing individuals protection against all the rare diseases out there, as that would cost too much.  So a line will be drawn somewhere.

The very title of this post suggests these views are extremely unpopular.  One reason might be their theoretical inadequacy.  But surely another reason is that, at the margin, they imply you just don’t have much room left to complain.  You don’t have too much to say about social arrangements.  You don’t have too many opportunities to express purely negative emotions about how things are.

And perhaps that is a big part of what people find intolerable.

What does that tell you about political views more generally?

What should I ask Simon Johnson?

Other than “why don’t you have a better Wikipedia page?”  Here is one excerpt from that page:

Simon H. Johnson…is the Ronald A. Kurtz Professor of Entrepreneurship at the MIT Sloan School of Management… From March 2007 through the end of August 2008, he was Chief Economist of the International Monetary Fund.  He is the author of the 2010 book 13 Bankers: The Wall Street Takeover and the Next Financial Meltdown along with James Kwak, with whom he has also co-founded and regularly contributes to the economics blog The Baseline Scenario.

He has an extensive publication record in political economy, economic history, and economic growth; he studied the earlier Russian reforms; and he has books on science policy (with Jonathan Gruber) and the national debt (with Kwak).  Most notably, his forthcoming book, written with Daron Acemoglu and titled Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, is due out in May.  He is a Brit, of course.

So what should I ask him?

Update on the New York Times Word Frequency Chart

By David Rozado, who has hit a bunch of home runs lately.  Look at his charts; to my eyes they show woke terminology in the NYT as having peaked and as now declining.  They are very different from the earlier charts (also at the link), which ended in 2019:

Climate change issues, however, continue to receive more coverage.  Not all of the charts “go my way,” but this is hardly what you would expect if Wokeness were simply rising, rising, rising out of control.  Oh, and check out these trends in pronoun usage.  Also here is more from Rozado, mostly on how the positive sides of woke rhetoric are gaining at the expense of the negative sides.

My Conversation with Brad DeLong

Here is the audio and transcript, here is part of the summary:

Tyler and Brad discuss what can really be gleaned from the fragmentary economics statistics of the late 19th century, the remarkable changes that occurred from 1870 to 1920, the astonishing flourishing of German universities in the 19th century, why investment banking allowed America and Germany to pull ahead of Britain economically, what enabled the Royal Society to become a force for progress, what Keynes got wrong, what Hayek got right, whether the middle-income trap persists, his favorite movie and novel, blogging vs. Substack, the Slouching Towards Utopia director’s cut, and much more.

And here is one excerpt:

COWEN: What do you take to be the best understanding of the 17th-century Scientific Revolution, if indeed you view it as a 17th-century revolution?

DELONG: I always think Joel Mokyr is absolutely magnificent on this. I think he understates the role that having printing by movable type played in creating the community of scientific practice and knowledge seeking.

There’s one thing that happens that is extremely unusual. Back before 1870, there’s no possibility at all that humanity is going to be able to bake the economic pie sufficiently large that everyone can have enough. Which means that, principally, politics and governance are going to be some elite constituting itself and elbowing other elites out of the way, and then finding a way to run a force-and-fraud domination and exploitation scheme on society so that they at least can have enough. When Proudhon wrote in the 1840s that property is theft, it was not a metaphor. It was really fact.

What does this elite consist of? Well, it’s a bunch of thugs with spears, the people who have convinced the thugs with spears that they’re their bosses, and their tame accountants, bureaucrats, and propagandists. Which means, most of the time, when you have a powerfully-moving-forward set of people thinking about ideas, whether the idea is true is likely to be secondary to whether the idea is useful to helping me keep my place as a tame propagandist in the force-and-fraud domination and exploitation elite machine.

This is a point I’ve stolen from Ernest Gellner, and I think it is very true. Yet, somehow, the Royal Society decides, no. The Royal Society decides nothing except through experiment — what we are going to demand that nature tell us, or tell one of us, or at least someone writes us a letter saying they’ve done the experiment about what is true. That is a miraculous and completely unexpected transformation, and one to which I think we owe a huge amount.

Many interesting points are discussed.

UK fact of the day

As of 2017 we [Brits] spent about 5.6 per cent of national income on benefits for those in old age against 7.1 per cent in the US, 7.7 per cent across the OECD as a whole, 10 per cent in Germany and more than 13 per cent in France.

And yet the country is still in economic trouble.  In any case, that is from the new and excellent Paul Johnson book Follow the Money: How Much Does Britain Cost?  This book talks you through both the tax and expenditure sides of the British government budget.  It is not quite thrilling, but given the topic area it is remarkably interesting and well-executed.  And while the author is not without his own ideas, the book is more to inform you than to propagandize you.

You can buy it here.  There should be many more books just like this one, but for different topics — take note!

Emergent Ventures winners, 24th cohort

Shakked Noy, MIT economics, to do RCTs on GPTs as teaching and learning tools.

Gabriel Birnbaum, Bay Area, from Fortaleza, Brazil, to investigate lithography as a key technology used in the manufacturing of microchips.

Moritz Wallawitsch, Berkeley.  His company is RemNote, an educational technology product; the grant is to develop a complementary podcast and for general career development.

Katherine Silk, Boston/Cambridge, general career support and to provide advice to early-stage startups.

Benjamin Schneider, Brooklyn.  To write a book on the new urbanism.

Joseph Walker, Sydney, Australia, to run and expand the Jolly Swagman podcast.

Avital Balwit, Bay area, travel grant and general career development.

Benjamin Chang, Cambridge, MA. General career support, “I will develop novel RNA riboswitches for gene therapy control in human cells using machine learning.”

Daniel Kang, Berkeley/Champaign-Urbana, biometrics and crypto.

Aamna Zulfifiqar, Karachi, Pakistan, to attend UK higher education to study economics.

Jeremy Stern, Glendale, CA, Tablet magazine.  To write a book.

James Meech, PhD student, Cambridge, UK, to work on a random number generator for better computer architectures.

Arthur Allshire, University of Toronto, background also in Ireland and Australia, robotics and support to attend conferences.

Jason Hausenloy, 17, Singapore, travel and general career development, issues surrounding artificial intelligence.

Sofia Sanchez, Metepec, Mexico, biology and agricultural productivity, to spend a summer at a Stanford lab.

Ukraine tranche:

Andrey Liscovich, eastern Ukraine, formerly of Harvard, to provide equipment for public transportation, communication and emergency power generation to civilian authorities of frontline-adjacent areas in Ukraine which have lost vital infrastructure.

Chris Nicholson, Bay area, working as a broker to maintain internet connectivity in Ukraine.

Andrii Nikolaiev, Arsenii Nikolaiev, Zarina Kodyrova, Kvanta, to advance Ukrainian mathematics, help and train math Olympiad winners.

As usual, India and Africa/Caribbean tranches will be reported separately.

DEI jobs under fire

As sweeping layoffs plague Big Tech, DEI jobs are taking the brunt of the blow.

According to a Bloomberg report, listings for DEI roles were down 19% last year — a larger downtick than in legal or general human resources departments per data from Textio, a company helping businesses create unbiased job ads.

“I’m cautiously concerned — not that these roles will go to zero but that there will be a spike in ‘Swiss army knife’ type roles,” Textio Chief Executive Officer Kieran Snyder told Bloomberg.

Other sectors besides tech have also cut deeply into their DEI departments after deploying mass layoffs in anticipation of a pending global recession.

Via RD.  To be clear, I don’t think any one of the measures indicating “peak woke” is all that strong.  You can always argue some other factor is driving the change downward, or the slowing of the trend.  But taken together, they do not look very much like a world where woke will just keep on taking more and more ground.  So when it comes to my earlier call about “woke having peaked,” I think that one is looking pretty good.

Chinese charter city in the Marshall Islands?

On a tropical Pacific atoll irradiated by U.S. nuclear testing and twice since evacuated because of the fallout, Cary Yan and Gina Zhou planned to create a unique paradise for Chinese investors.

They wanted to turn Rongelap — an atoll in the Marshall Islands totaling eight square miles of land and 79 people — into a tax-free ministate with its own legal system that, they claimed, would be able to issue passports enabling visa-free travel to the United States.

It would have a port, luxurious beachfront homes, a casino, its own cryptocurrency, and a full suite of services for offshore companies registered in Rongelap. With 420 miles of sea between it and the capital, Majuro, it would be relatively free of oversight.

All the couple had to do to make this a reality was bribe a swath of politicians in the Marshall Islands, once occupied by the United States and now a crucial U.S. ally in the Pacific, to pass laws to enable the creation of a “special administrative region” — the same classification given to the Chinese territories of Hong Kong and Macao.

The venture is not on track to succeed, and the two are now awaiting sentencing.  The entire story reflects one of my broader worries about charter cities.  The most powerful nations in the world, in this case the United States, do not necessarily favor small enclaves that might be turned to favor their rivals.  In other words, the relevant hegemon here did not at all support the charter city plan.

Shooting down intruders?

What are your comments and rigorous analyses (notice the prompt) on the recent shootdowns?  And no, I do not mean the Chinese balloons.  Comments are open…
