
The Fictional Evil Utilitarian N.I.C.E. And The Actual Utilitarian U. K. N.I.C.E. WTF?

I was talking with my partner one morning about the vexing modern problem of putting numbers on amorphous things so you can measure the unmeasurable. And because I've been immersed in a research project on philosophical issues in Cost-Effectiveness Analysis, I brought up the example of quantification in health care resource allocation.

I said something like "They quantify health through QALYs, which evaluate the burdensomeness of health states on a scale from 0 to 1, to prioritize potential treatments according to how many QALYs they produce per dollar. Or sometimes they use a threshold: a specific cost-per-QALY value -- say, $30,000 per QALY. Treatments producing too few QALYs per dollar won't be funded." 

My partner, being an interesting person and not a philosopher, said "Hold on, who is 'they?'"

And I said, "Oh, well in the UK it's N.I.C.E." The National Institute for something something. Care and Excellence. Or something."

"Did you say "N.I.C.E"? Because N.I.C.E. is also the name of the dystopian evil utilitarian organization in C. S. Lewis's That Hideous Strength."

Wait. What? Are you telling me that an actual utilitarian U.K. organization founded in the late 20th century has the same acronymic name as a fictional evil utilitarian organization in a 1945 book by a famous U.K. author? How is that possible?

At the time we had this conversation, I had never read That Hideous Strength, but it had long figured in our family imaginary. For my partner and his daughter, it occupied a space in the overlap zone between "brilliant" and "problematic" -- problematic because sexist, homophobic, and shot through with imperialism. Despite their warnings, of course I had to read the book immediately.


The author, C. S. Lewis, is the British Christian fantasy writer who also produced the Narnia chronicles. So when I say that N.I.C.E. in Lewis's book is "evil," that's not an exaggerating synonym for "committing bad acts." They're literally evil.

In the novel, N.I.C.E. -- "The National Institute of Coordinated Experiments" -- is publicly a scientific and social planning agency weaning us from sentimental attachments to usher in an era of objective social improvements. Behind the scenes, N.I.C.E. is furtively pursuing its evil program for the exploitation of nature and the annihilation of humanity.  

If you've ever encountered the arguments utilitarians use to defend their idea that the right action is the one that rationally brings about the best consequences, passages from (fictional) N.I.C.E.'s representatives will sound eerily familiar. Their aim is "the scientific reconstruction of the human race in the direction of increased efficiency." Other value judgments based on justice, beauty, or love are "essentially subjective and instinctive." Ethical beliefs turn out on inspection "to be simply an expression of emotion."

Readers of Peter Singer's 2005 "Ethics and Intuitions" may be reminded of his idea that common moral judgements conflicting with utilitarian outcomes are a "biological residue of our evolutionary history." We evolved to have "intuitions" about justice only because punishing wrongdoers was an evolutionary success.

As Singer explains, the status of moral judgments -- especially those reflecting justice -- is significant because these judgments have long been used to discredit utilitarianism. "H. J. McCloskey, writing at a time when lynchings in the U.S. South were still a possibility, thought it a decisive objection to utilitarianism that the theory might direct a sheriff to frame an innocent man in order to prevent a white mob lynching half a dozen innocents in revenge for a rape" (Singer 343-345). That is, our judgment is that framing an innocent person is wrong, regardless of the consequences, because it is unjust. But "bringing about the best consequences" seems to entail that framing the innocent man to prevent the lynchings could be the right thing to do. Thus judgments based on justice seem to undermine utilitarianism.

But Singer says not so. Unlike utilitarian judgments like "five deaths is worse than one," which are "rational," our justice-based "intuitions" should have no standing in our figuring out what is right. As with the fictional N.I.C.E., Singer urges that they reflect an "instinctive" sense of reciprocity -- and should be discarded.

In retrospect, it's not surprising that Lewis would put into the mouths of N.I.C.E.'s representatives talking points familiar from utilitarianism. The rough idea Singer is presenting goes back at least to the British philosopher Sidgwick in the late 19th century. And it's obvious why a Christian ethics would be deeply at odds with utilitarian thinking and why Lewis would be tempted to depict utilitarianism as a manifestation of evil.

The U.K.'s actual, real-life N.I.C.E. isn't exactly utilitarian, but it does use the utilitarian principles of Cost-Effectiveness Analysis to decide which treatments should be publicly funded. Proposed treatments are evaluated according to how many QALYs they are likely to produce per unit cost: decisions are thus based on bringing about aggregated good consequences.
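To make the mechanics concrete, here is a minimal sketch of a threshold decision of the kind described above. This is my own toy illustration, not NICE's actual procedure; the treatments, costs, and QALY figures are invented, and the threshold is just the example number from earlier.

```python
# Toy illustration of a cost-per-QALY threshold decision.
# Treatments, costs, and QALY gains are invented; the threshold is
# the example figure from earlier in this post, not NICE's actual value.
THRESHOLD = 30_000  # dollars per QALY

treatments = [
    # (name, total cost in dollars, expected QALYs gained)
    ("Treatment A", 150_000, 6.0),   # 25,000 per QALY
    ("Treatment B", 200_000, 2.5),   # 80,000 per QALY
]

for name, cost, qalys in treatments:
    cost_per_qaly = cost / qalys
    decision = "fund" if cost_per_qaly <= THRESHOLD else "do not fund"
    print(f"{name}: ${cost_per_qaly:,.0f} per QALY -> {decision}")
```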

Like utilitarianism, CEA leads to outcomes conflicting with our moral judgments. The process can lead to discrimination against people with disabilities, as people with disabilities are often judged to have a lower quality of life than non-disabled people; thus interventions extending their lives may be seen as less effective. Since "a QALY is a QALY," the process is insensitive to distribution and equity, with no priority for younger people or the worse off, and no amelioration of existing health inequities. Because of aggregation, low-cost interventions that benefit many people may be more cost-effective than those bringing enormous benefits to small numbers of people: in one famous example, the state of Oregon carried out a large-scale CEA that resulted in part in the conclusion that paying for capping teeth would be more cost-effective than paying for appendectomies.

How to respond to these problems with CEA is part of my current research project on Cost-Benefit Analysis and its offshoots, but this post isn't about that; it's about ACRONYMS. Who thought it was a good idea for an actual utilitarian U.K. organization promoting social progress and rationality to have the same name as a fictional evil utilitarian organization promoting social progress and rationality?

Did no one on the original board of directors pipe up and say "Hey, I know we're not the evil kind of utilitarians. But don't you think it's going to look weird if we say we're N.I.C.E., for progress, science, and rationality, and that other N.I.C.E. also says it's for science, progress, and rationality -- and the other one is EVIL?"

The only discussion of the acronym issue I could find on the internet was from "LifeSite," describing the case of Leslie Burke, a man with degenerative motor neurone disease who sued the UK government for the right not to be denied nutrition and hydration when his illness rendered him unable to swallow or communicate. The U.K. government appealed an initial ruling in his favor, with a representative for the government explaining that N.I.C.E. guidelines combine considerations of efficacy, quality of life judgments, and economics. If the principle that "clinicians should be able to follow NICE guidelines without being obliged to accede to patient demands" were undermined, the government argued, then "there would be considerable risk of inefficient use of NHS resources."

Describing the principles of the fictional, evil N.I.C.E. as "a mechanistic and ultra-utilitarian, anti-life philosophy that regards human beings as merely a disposable means to an end," LifeSite says "it seems beyond a coincidental irony that a real-life, government-funded organization that bases its decisions on the same utilitarian principles, could be known by the same acronym: N.I.C.E." With this last part, I agree. It does seem beyond a coincidental irony.

As to the broader question of the status of moral judgments or "intuitions," in his article Singer doesn't say what he thinks about framing an innocent person to prevent a riot -- whether he thinks there is some way that, contrary to first appearances, framing the innocent person actually doesn't produce the best consequences, or whether he thinks framing an innocent person could be the ethically right action.

As a non-utilitarian, I can say more simply that framing an innocent person is wrong, because it's unjust, and I think that is true partly because moral judgments reflect what we care about, which can include values like justice.

In the spirit of this post you may be wondering: could you modify CEA so that instead of measuring QALYs it quantifies and takes into account these other values and thus becomes a new and improved decision-making method? I am so glad you asked! That is what I am working on. It's complicated, but the short answer seems to be "no."

The Subjectivity Of Everything: Music And Math Edition

Recently from a combination of world-weariness, pandemic-weariness, and me-weariness, I was seized by a desire to think about something abstract, useless, inert, and intellectually challenging. "I know what I'll do," I thought. "I'll buy a book about math."

I went to a small bookstore and asked, "Do you have a math section?" No. But there among the science books I came upon Music By the Numbers by math historian Eli Maor. I thought it would be about music theory and the mathematical underpinnings of classical music structures -- something I vaguely remember being interested in when I read about it in Gödel, Escher, Bach as a teenager forty years ago. But it was more about the fundamental mechanics of western music's organization of sound. Where did all these notes come from?

I knew that with a string, half as long means an octave higher. What I learned first is that the Pythagoreans and their followers based an entire scale on their idea of pleasing fractional intervals. The full story is a bit complicated, but taking fourths and fifths as a starting point, you can create a scale with a version of the "whole tone" interval we're familiar with (C to D, for example) based on a 9/8 ratio of a note to its predecessor (and a half-tone with ratio 256/243).
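For the curious, here's a small sketch of the construction as I understand it from Maor's description. Starting on C is my arbitrary choice; the ratios are the standard Pythagorean diatonic ones, each built from fifths and fourths folded back into a single octave.

```python
from fractions import Fraction

# Pythagorean diatonic scale: every ratio is built from fifths (3/2) and
# fourths (4/3), folded back into one octave. C as the starting point is
# arbitrary; the adjacent steps come out as 9/8 or 256/243.
scale = {
    "C": Fraction(1), "D": Fraction(9, 8), "E": Fraction(81, 64),
    "F": Fraction(4, 3), "G": Fraction(3, 2), "A": Fraction(27, 16),
    "B": Fraction(243, 128), "C'": Fraction(2),
}

names = list(scale)
for low, high in zip(names, names[1:]):
    step = scale[high] / scale[low]
    kind = "whole tone" if step == Fraction(9, 8) else "half-tone"
    print(f"{low} -> {high}: {step} ({kind})")
```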

Maor says that while this way of creating a scale is mathematically elegant, it is out of step with the pitches produced in harmonic overtones -- and thus with the fundamentals of acoustics itself. From a philosophical point of view, Maor argues that their obsession with formal beauty led Pythagorean followers into a self-circular mathematical maze: by insisting on mathematical simplicity, pleasing ratios, and no irrational numbers, they "subject[ed] the laws of nature to their ideals of beauty."

But the more amazing moment for me came later, when I learned about tempering. As a kid, I knew about Bach's Well-Tempered Clavier, and I used to enjoy imagining that the well-tempered clavier had replaced a sour and irritable keyboard instrument known as "the ill-tempered clavier." But in all these years, I never learned what tempering is.

The backdrop for tempering in the Western context emerged around 1550, when a new scale was created that brought in intervals of thirds and their inversions. This "just-intonation" scale has intervals close to the ones we'd find on a modern piano. Maor argues that because it is based on the first six members of the natural harmonic series, and has pitches corresponding to the natural harmonics of musical instruments, it is acoustically and empirically superior to the Pythagorean scale. You could say its formalism is not so much mathematical as musical.

However: a crucial feature of the just-intonation scale is that not all the tone ratios are the same. For example, in a C-major scale, the ratio of C to D would be 9/8 and from D to E, 10/9.

From a musical point of view, Maor argues, this is as it should be. It's from the practical and social point of view that these differences became a problem. For instruments with keyboards and fixed holes, a just-intonation tuning in one key will have notes at slightly different pitches from those in another key. Increasingly, people wanted to play together, with multiple instruments all at once. What if a group wants to play one piece in one key and another in another? The workarounds were complicated: early harpsichords had multiple keyboards, each tuned to a different key, for pieces written in different key signatures.

"Tempering," then, is creating a scale with even divisions, all the same. You just divide the octave into 12 equal semitones. Now all the instruments can play together, and playing in C is the same set of notes as playing in any other key.

Math people may already see that, unlike the just-intonation scale, the ratio of a note to its predecessor in the tempered scale is based on irrational numbers -- numbers that cannot be expressed as fractions. Having even divisions requires dividing the octave into equal parts -- in this case, 12 equal half-tones -- and to be equal, each ratio between a note and its predecessor in the scale would have to be the twelfth root of 2 to 1.
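To see the irrationality concretely, here's a tiny sketch comparing the tempered semitone and whole tone with the just-intonation ratios mentioned above:

```python
# The equal-tempered semitone is the twelfth root of 2: twelve identical
# steps multiply out to one octave (a doubling of frequency).
semitone = 2 ** (1 / 12)
print(semitone)          # about 1.059463..., an irrational number
print(semitone ** 12)    # back to 2 (one octave), up to floating-point error

# Compare the tempered whole tone with the just-intonation whole tones above.
print(semitone ** 2)     # about 1.122462
print(9 / 8, 10 / 9)     # 1.125 and 1.111..., the C-D and D-E ratios
```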

Maor: "This irrational number would have been regarded with horror by the Pythagoreans, as it cannot be written as a ratio of integers." !!!

Leaving aside Pythagorean worries, Maor describes tempering as "an acceptable compromise" between "the dictates of musical harmony" and "the practicality of playing a piece on the keyboard." The difference between the tempered semi-tone and the just-intonation semi-tone is just barely within what human ears can discern, a difference "most musicians were willing to live with." Maor alludes to a suggestion that Bach's Well-Tempered Clavier was partly PR: written to convince his fellow musicians of the benefits of this sociable way of organizing everything.

I found myself astonished by the idea that the structural simplicity I associate with piano keys is the result of an "acceptable compromise" to solve practical problems of musicians playing together.

From a personal point of view, I guess I thought that the Western division of the octave into 12 equal semitones was somehow linked to the fundamental structure of music and sound. Not that it was the only way to exploit that structure -- I've always known that non-Western music was organized differently -- but that it was one way. Now this book says not only is that not so, but it's even weirder: there is a structure to music and sound, and those equal piano divisions are really, deeply, not it.

From a philosophical point of view, the analysis provides an interesting reminder about the subjectivity of concepts like simplicity and elegance. For the Pythagoreans, these concepts translated to rationality. For the just-intonation fans, they related to harmonics and ratios. For the temperers, it's more like brute force to make it all work -- hey, make 12 that are exactly the same, whether they are rational or not.

From a mathematical point of view, it's striking to see how slippery the difference between natural numbers and other numbers can be. The nineteenth-century mathematician Leopold Kronecker is famous for having said "God made the integers; all else is the work of man." But when you can describe the tempered scale in terms of "12 equal divisions" or "12 irrational tone ratios" -- well, things start to seem less clear. Even in math, simplicity and elegance can mean one thing in one context and another in another.

From a sociological point of view, I found myself wondering: did books like this used to be more common, or more talked about, or something? I feel like Gödel, Escher, Bach was a big splash of a book, right around the time of other math books like the Gleick book Chaos. But now my bookstore doesn't even have a math section. Did the world change, or is it something about me that's different?

Advice Of The Pandemic, Ranked!

Among the by-products of the pandemic has been a tsunami of advice, especially about living your best life in lockdown. Was it any good?

Advice: if you're working from home, keep your routines

Early on, the news reports were that people working from home were flailing: cleaning the kitchen when they should be working, bingeing shows when they should be eating, eating when they should be cleaning the kitchen. Was this ever true, or was it just irresistible to imagine people collectively unable to get their shit together?

The advice was predictable and perfectly capitalistic. Keep your routines, they said. If you usually dress up and wear full make-up to work, get up in time to dress up and do your make-up, they said. If you usually commute to work, walk around the apartment for 30 minutes before you sit down at your desk. Take a virtual coffee break with your co-workers.

I thought to myself: who would be so stupid as to add the old burdens of their old life to the new burdens of their new life? Me, I worked in my exercise clothes and took naps over lunch. Did this advice work for anyone?



Verdict: terrible.

Advice: go easy on yourself
This one is complicated because clearly, for some people, "go easy on yourself" was the perfect advice. I'm here to tell you about the rest of us.

As for so many people, in March 2020 my job became an endless series of Zoom-style meetings, unanticipated problems, and difficult conversations. Early on, I thought "I'll go easy on myself" -- instead of trying to do things during weekends or early evenings, I'd spend the time "resting" or "hanging out."

Three weeks of this and I was mired in despair. My always-present sense of my life drifting away and passing me by intensified; thoughts in the "what am I doing and what is the point anyway" category buzzed around my brain like a bunch of gnats. I felt the mold between the bathroom tiles mocking me. With extra free time, my irrational body self-criticism meter ramped up from the blue "Guarded" category into the orange "High Alert!" range.

The advice to go easy often reminds me of some advice my mother once gave me about cleaning. I was complaining that on the occasions when I was on my own and cooking, I hated having to cook *and* having to clean up, which does feel to me like an outrageous burden at the end of a long day. My mom -- whose motto in life was "If you don't want to, you don't have to" -- told me I should clean up the dishes in the morning, and that if I didn't want to wake up to a messy kitchen, I should just "put the dishes in the oven" after dinner. I'm sorry, but dirty dishes in the oven? What kind of a bizarre suggestion is this?

Anyway, to stay sane in the pandemic, I switched into "mental discipline" mode. I scrubbed the floors. I did a zillion live-stream workout, yoga, and ballet classes. I made an elaborate system of calendar alerts. I recommitted to Duolingo.

Obviously, for some people this particular challenge didn't arise, because they were working massive hours or juggling child care or whatever, and obviously "go easy on yourself" makes total sense in that context. For other people, "go easy on yourself" was just what they needed to hear for whatever reason. It just didn't work for me.  

I feel like there's a lot of pointless social aggression out there between the "go easy" people and the "mental discipline" people. Can we please have a détente on this issue?

Verdict: variable.

Advice: create a Zoom social spot

This is the idea that if your job is virtual meetings and your social life is virtual meetings, you should do those two things in two different physical spaces in your home.

I love this advice, because it is geared toward people with lizard brains and I am 100 percent a person with lizard brain.

Virtual interactions give me a special kind of social anxiety. I'm not sure why, but one possibility is that when I'm with people, I'm used to relying a lot on subtle physical and facial cues in figuring out how to interact. Also, I feel like in a room with people, I enjoy the small pauses and silences -- I've always been good at being together and being quiet -- but somehow on a screen, I feel like I have to fill the air with talk, like some kind of demented TV show host. Ugh.

Because I have lizard brain, the layers of Zoom-social-anxiety particular to work-Zoom are queued right up for me when I'm in my work spot -- the chair, the wall behind me appearing on my own screen, the light bouncing off the desk in just such a way. What a relief to take the laptop to the dinner table for the friend-Zoom!

Verdict: the best.
 

Why Do I Care About The Gender Of Elena Ferrante?

When I first encountered the kerfuffle around the true identity of the novelist Elena Ferrante, I had two immediate and diametrically opposed reactions. First, smugness: I never concern myself with the autobiographical details of authors I love. So who cares? Second, freak out: wait, "Elena Ferrante" might not be a woman?

Normally I use mental discipline to avoid finding out about fiction writers. If I love a novel, thinking about which parts might be based on real life leads me down a mental rabbit hole of pointless philosophical questions. "If that part is real, did it really happen like that? Or is the author changing the details to manipulate me into feeling one way or another about a thing?" Then I'm like "What do you mean, 'manipulating,' anyway?" Talk about pointless and dumb.

However, I have also been shaken to learn certain facts about books I love, especially books representing the experience of women, especially books representing the experience of women and sexuality.

As a young person, I fell in love with Colette, the early twentieth-century French writer whose work and life challenged gender norms and all other kinds of conventions. Colette is best known now for the book that inspired the movie Gigi, but if that's all you know about Colette then you have the wrong idea -- as so often happens, the novel is darker, weirder, and more interesting. Colette is also famous for her many short stories. But the books I fell hardest for were the Claudine series.

Over the course of the four Claudine books, Claudine grows from a precocious and rambunctious fifteen-year-old into a comfortably bourgeois but also free-thinking and unconventional adult. The first book, Claudine at School, gives an amusing side-on look at French social life in the provinces and the complex emotions of mid-adolescent feminine life. Among other complexities, Claudine has a massive crush on her young school teacher Aimée; they have a budding sensual romance until Aimée throws her over for the head schoolmistress, mystifying Claudine and giving her minor heartbreak. Aimée's younger sister Luce develops an unreciprocated love for Claudine, who doesn't understand her own desire to hurt Luce and cause her pain.

The way Claudine has romantic feelings for women and men and the way at fifteen she is both highly sensual and also a child both profoundly resonate with me. So I was disturbed to learn that the first version of Claudine at School may have been a more chaste and childlike story, and that Colette's husband -- the writer Willy, who first told her to write up her memories and initially published the Claudine books under his own name (!) -- may have told her to tart them up, to liven them up with sex. I don't have sophisticated thoughts about this, but the idea that my favorite things were in there because some guy wanted them in there made me feel sad and weird.

Then a similar thing happened with the erotic novel Story of O. That's not a book I loved (and the kind of submission/dominance thing in that book is not my kind of thing), but I always appreciated Story of O as a frank and explicit presentation of a certain version of positive sexuality from a woman's point of view. How many stories depict a woman being sexually adventurous and just basically enjoying it and having a great time? It's not a lot.

So, again, I was unsettled to learn late in life that while the author is a woman, the book may have been originally written for a man. That is, the author was in a relationship with a man, and wanted to get his attention, to turn him on, and to flirt with him. Again, I don't have sophisticated thoughts about this, but I find it disappointing to think that this, too, like so many things depicting women's sexuality, was ultimately crafted to appeal to a man's taste.

I loved Ferrante's Neapolitan novels, and I found The Days of Abandonment conveyed vividly what it feels like to be trapped in your life and in despair. Like Claudine at School, The Lying Life of Adults shows a textured experience of girlhood-womanhood adolescence.

It's been suggested that Ferrante may be a woman, or may not be a woman, or may be a woman and a man writing together. I feel like there's some sense in which I shouldn't care: the books are the books whoever the author is, and Ferrante herself has suggested that it's misogyny and sexism that makes people want to know.

But I also can't shake the feeling that the thoughts and feelings of people who aren't women so often affect what gets presented as the truth about women. It happens in pornography, it happens in movies, and it happens in books; given the way social gender norms work, it probably happens whenever there is an audience. While the books are the books whoever the author is, I also don't want the books by Elena Ferrante to be another example where the tastes, attitudes, and feelings of people who are not women are shaping narratives of our woman- and girl-hood experience.

What Compelled Me to Reread Dracula During Lockdown?

I don't know how it started, but I think it was something in the NYT crossword, something to do with Keanu Reeves. My partner said, "Keanu Reeves was in Dracula." Wait, what? From the miracle of modern self-surveillance, I know that I looked it up and learned that Keanu Reeves did not play Dracula (thank god) but rather Jonathan Harker, the lawyer. Whether Gary Oldman is any less WTF as Dracula I leave as an exercise to the reader.

I have a long history with Dracula, a book that obsessed my father when I was a kid. My father was the kind of guy where a lot of things were like 80 percent jokes and 20 percent serious and others were 20 percent jokes and 80 percent serious and it was always a little murky, probably even in his own mind, where he stood on things. After his annual rereading, my father would sleep with garlic under his pillow out of fear. You might think that's the 80 percent joke, but this was a man whose nightmares tended to involve actual devils actually chasing him, so I'm not so sure.

Later, I had the widely shared adolescent girl experience of being "into" vampires, whatever that means. I watched the movie The Hunger over and over, and read a lot of Anne Rice. On my first reading of Dracula, around that time, I had a vivid sense of the erotic in the vampires' ways -- you may not remember this, but they are literally described as "voluptuous" in the book. Poor Jonathan Harker, on meeting the women in the castle who want to drink his blood, describes their voluptuousness as "both thrilling and repulsive." I remembered the book as basically anti-vampire, but not in any particular way.

On this rereading, by contrast, I felt the full weight of the Christian anti-sex moralizing. As Lucy's appearance begins to shift toward the vampiric zone, her friends are disgusted by the new sensuality of her face; after they destroy her body to free her from Dracula's spell, they're thrilled to see her previous sweet, pure expression and physiognomy return. A "diabolical sweetness" allows vampires to express love and desire to seduce new recruits. I guess I'm as against killing people and sucking their blood as anyone else, but this framing struck me as depressing and dumb.

Rereading Dracula during lockdown, I couldn't help but notice that it's partly a travel book. The best part of the story is when Jonathan Harker first goes to Transylvania, ostensibly to help the Count with some clerical matters, and slowly gets caught up in Dracula's web. There are trains, and ships, and transfers to carriages, and rides on horses. I was like, "Oh yeah -- travel!" Of course, so much of that late nineteenth-century mode, of going to truly unknown places and being completely cut off from anything familiar, is totally lost to us now. These days, Jonathan Harker could read on Yelp, "Castle looks interesting but they will kill you and drink your blood. One star."

The most melancholy aspect of my reading experience was the way that it was stupidly mediated by all the ridiculous parodies, take-offs, and remakes of the Dracula story that I've encountered in my time. Chief among these was the 1979 film Love at First Bite, which as a kid I saw on TV multiple times and found hilarious and awesome. How could I not love a vampire movie that featured the classic disco song "I Love the Nightlife"? There's also Young Frankenstein, which takes place at a castle in Transylvania for who knows what reasons. Memories of those films made me see Renfield as an annoying twerp, the horses and wolves under Dracula's command as side-shows, and even the Count himself in the light of a ridiculous show-boater.

Overall, the whole thing was a sadder and less fun affair than I'd hoped it would be, though whether that's because I'm old, or because of lockdown, or because the book isn't really that good is totally unclear.

Enraged by Irritations: Human Nature Or Aristocratic Problem?

I don't know if you've read The Leopard -- the 1958 book by Giuseppe Tomasi di Lampedusa that tells a story of social change and the decline of the nobility in 19th-century Sicily through the narrative of the Salina family and its patriarch Don Fabrizio.

I love The Leopard. I love the sensuality of the narrative and of Don Fabrizio himself. I love the way Don Fabrizio admires and supports his nephew Tancredi, even though Tancredi represents the decline of the Salina family's social class. I love the way that Don Fabrizio's real interest, and true comfort, lies in astronomy and mathematics.

At one point in the story, Father Pirrone -- a priest who lives with the Salina family -- is asked to reflect on what the aristocrats think about the coming shifts in social equality. As he tries to answer, he gets caught up in a long and rambling response because he doesn't know how to explain how the nobility see the world and how different it is.

In trying to express the incomprehensibility of what the nobility care about and what they don't, he says "I've seen Don Fabrizio get quite testy, wise and serious though he is, because of a badly ironed collar to his shirt; and I know for certain that the Prince of Làscari didn't sleep for a whole night from rage because he was wrongly placed at one of the Viceroy's dinners."

Encountering that passage always gives me a shock of recognition in an uncomfortable way. Because I, too, am frequently thrown by small irritations. I went through a phase where if I was chopping vegetables and a small piece of something fell on the floor, I would flip out and feel the world was against me. The problem of price stickers leaving sticky residue on elegant objects sends me into a tailspin. Sometimes I get dressed to go out, and realize my shoes won't work with the weather, and realize my outfit won't work with different shoes, and a complete feeling of despair comes over me. Yesterday morning, my clothes hangers got tangled up and I was like OK, that's it, we're done.

I don't think I'm alone. I've seen friends in a rage because of coffee spilled on a shirt, or a glass dropped on the floor.

I used to buy into the orthodoxy of "underlying mood": that this kind of thing happens because there is an undercurrent of stress and anxiety so intense that the seeming OKness of the surface is a superficial layer, a paint job over roiling chaos. Sometimes that's true. You can always describe it that way if you want to. But often it doesn't feel that way to me. To me, it feels more like a plunge into the essential pointlessness and harassingness of human existence, a plunge caused by the irritation itself, not requiring unusual life stress as a background condition.

At first, I was inclined to draw the conclusion that Father Pirrone's association of this experience with aristocracy was questionable. I'm not an aristocrat, and neither are my friends. But then I started wondering if maybe just being middle-class and white in North America was a kind of experience of aristocracy -- I mean, that the relevant background needs and social comforts are met at such a generally high level that our idea of what is a "problem" would be more similar to that of the Salina family than to the non-aristocrats of 19th-century Sicily.

But then -- "on the third hand," as my mother liked to say -- I got to thinking, maybe Father Pirrone is wrong about it after all -- because what does Giuseppe Tomasi di Lampedusa know about it? He was the last Prince of Lampedusa and owner of a hereditary agricultural estate. He has no special insights into whether being enraged by irritations is a special aristocrat thing or more a general human thing.

I used to have a lot less money than I do now, and for a time I was in difficult circumstances. I've been trying to remember whether I was just enraged by small irritations then as I am now. But I can't quite sort it out. In my mind's eye of that time, I'm just smoking a lot of cigarettes. I do remember that when I worked as a waitress, one of my tasks was to break cold feta cheese into crumbled feta cheese with my hands, and I hated the feeling so badly, I swore I'd never do that by choice. To this day, I use a knife to chop feta into little cubes.

So: enraged by irritations: human nature or aristocratic problem? I'm really not sure. Or -- maybe it's just me?

Trade-Offs Versus Optimization: Is Everything An Optimization Problem?

In my work on ethics, I'm what I think of as a "trade-off" person rather than an "optimize" person. In the informal sense, this means that I see conflicting and competing considerations and values all around, and I think the ethical task is often to figure out how to prioritize among various considerations, instead of thinking that the ethical task is to figure out what is good and then bring about as much of that as possible.

If your first thought is "Wait, how are those really different?" then you are right in lock step with what a lot of other people are thinking.


To me, on the face of it they seem very different. For example: when the pandemic led to conditions where not everyone could be treated because there weren't enough resources, ventilators, and so on, one system of decision-making would be to "maximize overall health by directing care toward those most likely to benefit the most from it." For example, give the resources you have to the people most likely to survive, in ways that maximize the additional healthy years they will live. This is an optimizing strategy as it identifies a good -- healthy years of life to come -- and frames choices as maximizing that good.
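In code-sketch form, the optimizing strategy looks something like the toy example below. The patients and numbers are invented, and this is my own illustration of the logic, not any actual triage protocol.

```python
# Toy illustration of the optimizing strategy: rank patients by expected
# additional healthy life-years and allocate scarce resources greedily.
# Patients and numbers are invented; this is not a real triage protocol.
patients = [
    ("Patient 1", 30.0),   # expected additional healthy years if treated
    ("Patient 2", 4.0),
    ("Patient 3", 12.5),
]
ventilators = 2

ranked = sorted(patients, key=lambda p: p[1], reverse=True)
treated = ranked[:ventilators]
print("Treat:", [name for name, _ in treated])
print("Expected healthy years gained:", sum(years for _, years in treated))
```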

A problem with this optimizing strategy is that it leads to discriminatory effects. "Healthy years" of life is usually understood as meaning years of life without a disability, so other things being equal, a person with a disability would be less likely to be treated than a non-disabled person. Because of social injustice and oppression, Black people in the US often have worse health than white people; if they were therefore less likely to have good outcomes, they would be less prioritized for treatment. Poor people are much more likely to have underlying health conditions and thus would be less likely to be treated.

The "trade-off" perspective, on the other hand, frames the problem as one in which there are a variety of considerations that have to be weighed and balanced. Producing good effects in the sense of future years of life might be one consideration, but fairness and justice would also be a consideration. You might decide to use subjective measures of quality of life in which having a disability does not make a life less good; you might explicitly bring-anti-racism into the picture. You have to come up with a way of proceeding that weighs multiple considerations against one another. It might be complicated, and you might have to use your judgment.

When I have talked about these issues in classes or at conferences, defending a trade-off approach, occasionally someone will say to me: "If you frame it properly, everything is an optimization problem." I take it they mean something like this: while maximizing healthy years is one way of optimizing, it is not the only way; whatever value you think is good, you can run a maximizing strategy on it. For instance, if you think future years of life, fairness, justice, and equality are all important, you can create some concept like "overall goodness" that incorporates all of these. Then you can just maximize that. So there isn't really any difference; trading-off is not a separate and different kind of thing; it's more a question of what you're trying to maximize.

In harmony with this idea, there is a technical result that any set of ethical judgements can be "consequentialized" -- that is, expressed as the result of an optimizing procedure.

So if you were thinking, "Wait, how are those really different?" the answer is that in some deep conceptual sense, maybe they are not really different.

OK. But then I think: what about the other senses -- the ones that are not the deep conceptual senses? Even if you *can* frame your approach in optimizing terms -- should you?

I think the answer to this question is often "No." The details are tricky and probably boring for most people, but here is a short version:

1) Both methods require moral judgment, in the sense of figuring out what is important and how important it is, but "optimizing" has a veneer of objectivity to it, like we're just number-crunching. News flash: we're never just number-crunching. Talking about "trade-offs" reminds us constantly that we're using our human judgment and our values to figure out what to do.

2) "Trade-off" reminds you immediately that no matter what you do, you may have lost something, so that even if you get the right balance something bad happened. The language of "optimizing," however, has unsettling connotation of "everything is all for the best." If you have to prioritize one person over another, and you make a good decision, but the other person dies, do you really want to say "well, that was optimal"? In fact, noticing that it wasn't optimal may prompt you to think or plan differently in the future -- e. g., trying to prevent people from getting sick in the first place.  

3) "Optimization" lends itself to methodologies where the inputs are easily measurable. Yes, you can optimize for things like justice and fairness and anti-oppression, in the sense that you can come to a judgment about what to do that honors those values in the way you think best in the circumstances. But, especially given 1), once you're in the optimization frame of mind, it's natural to start thinking that you're going to be more objective, precise, and accurate if you have numbers to put in -- something like, I don't know, estimates of "healthy future years lived." When those don't reflect the values you wanted to use, you'll end up coming to the wrong answer.

The pandemic and our responses to it are full of massively complex challenging questions: How should we balance protecting our health with the losses that come from lockdowns? How should we balance our valuing of children's schooling against protecting everyone from harm? How far should we go in trying to eliminate COVID as opposed to just flattening the curve?

These questions have no easy answers and that's one reason we're all in dismay and disagreement about them. Talk of optimizing, even if conceptually sound, makes it seem like some of us are right and some of us are stupid, and makes us want to invest in computer science. Talk of trade-offs reminds us: honoring multiple values in complex circumstances is difficult and fraught, and it's values all the way down. 

Anti-Racist Values And Decision-Making On Campus

Like a lot of other universities around North America, my university has been talking over the last few weeks about anti-racism and what universities need to do to do better. Among other things, events included a workshop I attended last week. I've been thinking about an important point that the speaker made, which is that if you say you have anti-racist values (which universities do), then you have to put those values into practice; otherwise it's just talk. Success at putting those values into practice is manifested in practical outcomes, and can thus be seen and measured.

This point got me to thinking about the different ways that university systems work to create the outcomes that we do, in fact, experience and see. One thing that happens a lot in universities, to one degree or another, is that decisions are driven by undergraduate enrolment statistics. Departments and faculties get resources if they attract more students and majors. Departments and faculties die if they fail to attract students and majors. Individual classes run, or don't run, based on whether they attract students. As you can imagine, this can influence big decisions, like who gets hired to do what, and vast numbers of smaller decisions, like what gets on a syllabus.

This way of proceeding has always seemed to me a bit bizarre. Are we really going to let the decisions of a bunch of 18-22 year-olds -- and, at that, the narrow slice of them who happen to go to university -- determine the direction of scholarly research and the ideas that a community invests in? This is nothing against young people -- it's just weird to have this tiny cross-section of society wield this enormous power over something that is quite important and complicated.

And even from an abstract point of view, you can see how this way of proceeding might tend away from, rather than toward, teaching and research focused on anti-racism and anti-oppression. Young white people may not want to confront their place in an unjust system. Almost all young people are pressured to study practical subjects. In universities without breadth requirements, students in STEM majors may feel they don't have time in their course schedule for other things. These pressures don't come just from anxious parents, they also come from the way our world is -- hyper competitive, capitalistic, etc. etc.

If I understand correctly, one way of framing decision-making based on enrolment goes something like this: undergraduate tuition pays the bills, so that is the income; a sensible organization of a system lines up the income and the expenses so that the one pays for the other in some linear kind of way. I've even heard of universities where they say "you eat what you kill": the idea being that self-sufficient, market-based norms coordinating input and output should undergird university decision-making.

There is much to say about this, but what I want to focus on here is the veneer of objectivity and neutrality sometimes placed on this framing. Apportioning resources in a way that seems to line up supply with demand can seem like you are avoiding these problematic value-laden judgments. It may seem like you're taking a step back -- *we* aren't the ones making these decisions. It's just how things shake out when you look at the numbers.

But all ways of making decisions are value-laden and non-neutral. If you do a cost-benefit analysis, you're making judgments about how to weigh everyone's choices and what other values -- like justice -- you're ignoring. If you base everything on consent and individual liberty, you're making judgments that privilege the status quo, and that rule out rectification of historical injustice. The metaphor of the market rests on assumptions that what your customers want and need is what should be created, and that their sense of worth should inform yours.

As racialized people have been saying for a long time, the social structures in place that feel neutral or objective to those in the dominant social group are anything but, and often work to reinforce the injustices of the past.

Of course universities should factor into their decision-making what students are looking for. When they do so, they pay respect to certain values, including respect for student needs and student autonomy. The point here is just that other values matter too -- values that are distinct from these, and may conflict with them. If you say you care about these other values, you have to find a way to make room for them in practical decision-making at various levels, which can mean bringing judgment calls back into the picture.

Science, Judgment, And Authority In The Time Of Pandemic

The Coronavirus moment is reminding us all of the problems with the way we normally do things. Some of these problems have to do with the place of science in our practices, the way we talk to one another, and why we do what we do. These are just a few items that I have found extra personally irritating.

Masks of confusion
I know a lot of people are irritated by the way that we were told not to wear masks, because they were pointless and we'd all fuck it up and wear them wrong and cause mayhem, only to learn later that wearing masks actually works. And sure, I can spare a thought for that annoyance.

But for me, this has been massively eclipsed by my feelings about the bizarre communication style about masks right now. Almost everything I read says something like "Here's what to do about masks" or "Here's where masks are mandatory" or "Here is the updated health policy on masks" -- without explaining the reason people are being asked to wear masks.

Every public communication about masks should include a basic explanation that the use of basic non-fancy masks works because it prevents asymptomatic infected people from spreading the disease around. People do not know whether they are asymptomatic. So if they're going to be near people, they should wear a mask. Sure, it might help you avoid infection yourself, but that is not the main point.

There still seems to be massive basic confusion about this. I keep seeing people in comment sections saying how it's their choice how much they want to protect themselves, or that they're personally not worried about getting sick, or that only infected people should have to wear masks. Are health communicators being deliberately obscure about the collective responsibility angle, because they think people will assume it's self-interest and thus follow the rules? Are they leaving out the explanation because they think people will just follow the rule? Bad news for you, guys.

"Listen to the science"
This one is trickier, because of course, yes, I think we should base our decisions on the best scientific information that we have. But science alone tells you almost nothing about what to do in a pandemic, because everything you do is going to have complex ripple effects and you have to trade those off against one another. BCE (before Coronavirus era), I used to constantly bore people talking about how many people die every year from car crashes -- in 2016 alone, around 1.35 million worldwide and over 37,000 in the US. But no one ever seriously suggests giving up driving.

Please note that I am not saying that the virus is comparable to driving! Clearly, it is much more dangerous. The point is just that structurally, we're always making collective and personal judgments about how much risk is OK for the things we want to do. One thing that's challenging in the Coronavirus case is that different people have different risk tolerance, and yet in the nature of a pandemic, we have to act together. That is a very difficult situation, but it's also one that isn't helped by saying "listen to the science."

Amateur epidemiologists around every corner

These fall into two categories: the data watchers and the microbiology obsessives. The data watchers are checking out the Johns Hopkins site to follow the numbers and see whether their preferred policy response is working and whether countries with leaders they hate are suffering. I'm guilty of this myself, relying on this cool visualization site to compare stats, form hypotheses, and rationalize my existing prejudices. As this Guardian article reminds us, though, there are lies, damned lies, and statistics: massive variation in how cases are counted, and when they are counted, and what counts as "dying from Coronavirus" means it will be years before we have any clear picture of what is happening.

Then there are the people who keep up to date on things like what size of particle travels by aerosol transmission. Whatever floats your boat, I guess -- but, as with most science, a few papers downloaded from a preprint server are probably not enough for a non-expert to form an informed opinion.

While these are my personal irritations, I will say one thing they have in common is that science, while crucial, is never the whole story: the world still needs judgment, communication, shared deliberation, and all those murky things you find over in the Arts and Humanities departments. So please, please don't destroy us and leave everything to the STEM people.

Policing Practices, Law and Economics, And The Values Of Justice And Efficiency


In the full story of how things in American policing became so completely fucked up, I would like to read an analysis that explores connections among 1) theoretical issues in the framework known as "law and economics," 2) local legal structures that appear to use policing to generate revenue, 3) policing practices, and 4) racism.

For those not up on these things, law and economics is a legal framework that understands laws through the lens of efficiency: good laws bring about good consequences. For example, laws related to civil wrongs could be crafted with an eye toward what would work most productively moving forward, rather than thinking about background rights and values like fairness. This framework emerged around the mid-twentieth century out of work by neo-classical economists (many at the University of Chicago) and legal theorists like the influential Richard Posner, and has a wide range of contemporary applications.

"Positive law and economics" is about explaining and predicting laws, with the hypothesis that, other things being equal, laws that produce efficiency will be adopted. "Normative law and economics" says that such laws not only would be adopted but should be adopted -- so that existing laws can be improved by being made more efficient.

What "efficiency" means here can be complex; it can be the maximizing efficiency of utilitarianism, in which the thing to do is the thing that brings about the best consequences overall, but more typically it is "Pareto efficiency" that is used -- a set up is Pareto efficient when there is no way to one person better off without making another person worse off.  (I wrote about various forms of efficiency here and here.)    

You might be thinking that it's odd to have a legal framework based on efficient future consequences rather than justice and fairness. I do too, though we won't have time to get into that here. If you're interested I recommend this excellent book review.

One can apply the theoretical approach of law and economics in a wide range of ways: even when it comes to something like "efficiency" and the "good" in "good consequences," for instance, you might be trying to promote preference-satisfaction or well-being or you might be trying to create, you know, actual money.

This last bit brings us to 2): legal structures that appear to use policing to generate revenue. This book review by the always brilliant Moe Tkacik explains the idea in vivid detail: the sanctions for crimes are set up so the accused have to pay; the state then raises money while leaders claim not to raise taxes. Judges become like tax-collectors whose subjects are in no position to complain.  

The theoretical connections can be a bit complex, but as I understand it, the reasoning goes something like this: if the fine for driving without a license is X dollars and you drive without a license, you must have in some sense preferred to drive over losing X dollars; the state can set the fine in such a way that it reaps more from the fine than it lost from the crime being committed. In this way the crime is disincentivized but the interaction is kind of a win-win, and is efficient all around.
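With invented numbers, that reasoning looks roughly like this (my gloss on the logic, not a model from the law-and-economics literature):

```python
# Invented numbers, just to make the incentive structure visible.
harm_to_state = 50        # estimated social cost of the unlicensed driving
value_to_driver = 120     # what driving was worth to the driver (they drove anyway)
fine = 200                # set above both figures

print(fine - harm_to_state)     # 150: the state nets a surplus on each citation
print(value_to_driver - fine)   # -80: the fine now outweighs the benefit of driving
# Above the driver's valuation, the fine deters the crime; when people offend
# anyway, each citation raises revenue for the state.
```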

And thus to 3): I remember after Michael Brown was killed in Ferguson, I kept seeing references to the ways that the over-policing of the citizens there could be traced partly to policing as a way to raise revenue. This post gives a great overview and explains: "In its 2015 report on policing in Ferguson following the killing of Michael Brown, the Civil Rights Division of the United States Justice Department concluded: “Ferguson’s law enforcement practices are shaped by the City’s focus on revenue rather than by public safety needs. This emphasis on revenue has compromised the institutional character of Ferguson’s police department, contributing to a pattern of unconstitutional policing, and has also shaped its municipal court, leading to procedures that raise due process concerns and inflict unnecessary harm on members of the Ferguson community.”"

And 4) now you add both individual and structural racism into the mix. Because of structural racism, Black people are much more likely to be poor and powerless than white people. The poorer and less powerful people are then over-policed and abused into becoming ATMs for the government's revenue needs. Among other things, modern algorithms for crime prediction and sentencing actually factor in past arrests so that the original injustice is perpetuated further. And, of course, individual racist police then have a framework for their abusive actions.

I don't know how all of these interrelate -- theoretical law and economics is complicated and I don't know how its theoretical development has impacted practices of policing-as-revenue. But I hope to have shown here why I see them as conceptually interconnected and mutually supporting.

Anyway, if you want to read something else on racialized impacts of framing laws in terms of future consequences instead of past actions, I cannot recommend enough this searing personal essay by classicist and political scientist Danielle Allen about her cousin Michael, who enters the criminal justice system as a result of minor crimes at age 15, gets derailed in life, and ends up dead -- murdered at a young age.  

From a theoretical point of view, proponents of efficiency-based reasoning sometimes cast "justice" as a kind of artificial virtue, something to be explained away, something that reflects prejudices of an evolutionary past, where punishments were needed to keep people in line and bring about good consequences. The implication is that once we see this, we can go right to the consequences and skip the justice part altogether. I don't know all the ways that 1)-4) interrelate, but I'm sure the part about skipping justice altogether must be wrong.

In Which I Venture Into The Thickets Of Data Science And Hume's Problem Of Induction

One of the things I started doing in the middle of lockdown was courses at Data Camp. I started with Machine Learning for Everyone, then moved on to Python for Beginners. In case this isn't your universe, Python is a programming language that is often used for data science.

I want to emphasize that I did not do this because I suddenly had "extra time" on my hands or because I was casting around for something to do. There are different lockdown experiences out there, and the "extra time" experience has not been my experience. For one thing, everything to do with my work seems to take four times as long as it did before.

Rather, the way my emotional life works, I often have a background sadness that I keep at bay through doing things. In normal life, the bustle of activity and the feeling of accomplishment are central to that process. With lockdown, there is no "bustle of activity." So accomplishing things -- or feeling like I am accomplishing things -- has become a huge thing. So why not learn something about data science?

The classes are excellent, with lots of examples and exercises. On encountering these, I immediately started thinking about data science and Hume's problem of induction.

One of the first examples that my course used to illustrate machine learning concepts had to do with predicting how much money a movie would make based on input factors like star power, budget, advertising, and so on. And I was like, "Wait, what?" Is the idea supposed to be using the data of the past to predict earnings in the future? But isn't the popularity of works of art always shifting and changing? Isn't art frequently based on novel ideas? Also, I thought the popularity of films was regarded as wildly unpredictable.
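
(For the curious: here's a minimal sketch, in Python, of the kind of model such an exercise has in mind -- fit a regression to past films, then ask it about a new one. The features and every number below are invented for illustration; this is not the course's actual dataset or code.)

    # A toy sketch (not the course's code): predict movie revenue from past films.
    # Feature columns and all numbers are invented for illustration only.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Columns: star_power (0-10), budget ($M), advertising ($M)
    past_films = np.array([
        [8, 150, 90],
        [3,  20, 10],
        [6,  80, 40],
        [9, 200, 120],
        [2,  10,  5],
    ])
    past_revenue = np.array([650, 45, 210, 900, 12])  # box office, in $M

    model = LinearRegression().fit(past_films, past_revenue)

    # The inductive leap: assume the next film will behave like the past ones.
    new_film = np.array([[7, 100, 60]])
    print(f"Predicted revenue: ${model.predict(new_film)[0]:.0f}M")

The whole thing runs on the assumption baked into that last step: the new film is treated as a draw from the same world as the old ones.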

If you've studied philosophy, you won't be surprised to hear that my next thought was, "What about Hume's problem of induction?"

If you haven't: briefly, Hume's problem of induction is that inductive reasoning -- in which we go from past cases to generalities and the future -- always rests implicitly on an assumption that the future is going to be like the past. And yet we have no logical reason to believe that the future is going to be like the past. So inductive reasoning, which is at the core of basically all empirical science, has no justification. You might try saying "Hey, but the future has always been like the past." But to use that to solve the problem would mean applying the past to the future, and so would be induction, and so would be circular.  

You can see right off the bat that these are deep waters we are getting into, and I have to warn you that this is going to be the Phil 101 level version of things because I'm not a specialist in this area, I'm just a person thinking about data science. But I do remember from teaching Phil 101 that the point with Hume isn't just about a lack of certainty. It's no help to say that while we're not sure the future will be exactly like the past, we have reason to believe it will probably be like the past. Because whatever version of "probably" you come to, that judgment relies on thinking that in the future, things will occur with the likelihood that they did in the past. In other words, we're back with the circularity problem.

Anyway, I'd been wondering vaguely for a long time about social science and the problem of induction, and then I started thinking about data science and the problem of induction. In the context of social reality, Hume's problem starts to take on a practical urgency. Because when it comes to people, when is the future ever like the past? Our current moment seems designed to hammer this point home. Ha ha, you thought the future was going to be like the past? Guess again, suckers.

So like anyone else, I then googled "data science," "Hume," and "problem of induction." (This is where I have to admit that my usual searching via Duck Duck Go got me nowhere and so I was forced to recall to mind the superiority of Google as a search engine).

I found this discussion, which gives a good overview, but which ends by saying that "instead of strictly rejecting or accepting, we can use inductive reasoning in a probable manner." But I didn't understand this, as I thought the problem applied to probabilistic reasoning as described above.

I also found this piece, which covers a lot of interesting territory but which concludes that AI works because "the problem of induction can be managed," which again, I didn't understand. 

So then I was like, Do I not know what is going on? So I went to the Stanford Encyclopedia of Philosophy entry on the Problem of Induction. Yes, there are attempts to get around the problem of induction via "Arguing for a Probable Conclusion." Not surprisingly, the matter turns out to be very complicated, though I note that each subsection seems to end with the author of the article basically saying "this is why that doesn't really work."

Noticing that the entry points the reader also to "Philosophy of Statistics," I went there, and was fascinated to see in the first section:  "Arguably, much of the philosophy of statistics is about coping with this challenge [of the problem of induction], by providing a foundation of the procedures that statistics offers, or else by reinterpreting what statistics delivers so as to evade the challenge... It is debatable that philosophers of statistics are ultimately concerned with the delicate, even ethereal issue of the justification of induction. In fact, many philosophers and scientists accept the fallibility of statistics, and find it more important that statistical methods are understood and applied correctly." 

So at this point, I guess figuring out what I think about data science and the problem of induction will require some intense intellectual effort. I think it will be worth it though. The most interesting item I found in my searching argues that the real challenge that the problem of induction poses for data science is that people "change and grow morally and socially in non-transitive, non-linear ways."   

I agree, and I would add that social institutions and practices also change in complicated ways. We now get into the debate over whether there are simple and uniform laws that lie beneath what looks like social chaos, or whether people and their doings create novelty in ways that are inherently impossible to pin down. You may not be surprised to hear I tend toward the latter view, not because I think free will lies outside the laws of the universe, but more because the creativity and complexity of humans isn't susceptible to that kind of generalizing thinking.

The topic is complex. But in support of my side, may I present the wild success of Parasite, surely a film whose budget and star power would never have led anyone to predict its success?

Guest Post: The Ulysses S. Grant Of Lockdown

This guest post is by my former co-blogger at Commonwealth and Commonwealth, Felix Kent.

The first time I ever made waffles was in the beginning of what California calls shelter-in-place. It was the very beginning and my work hadn't yet figured out how we could telework, so I was officially off the hook, although I couldn’t stop checking email and trying to return voicemails, normally tasks that I avoid. But also I was staying up as late as I wanted and one mid-morning still in my pajamas and with that weird milky morning smell still around me I for the first time used the waffle attachment for the Cuisinart griddle I bought my husband for his birthday a few years ago. I made the batter at Martha Stewart's direction in a big white porcelain bowl. At first the waffles didn’t cook at all, but then I figured out the knobs were improperly calibrated, and I deduced the right temperature, and the waffles rose into fluffy piles, and they were good with blackberry jam, at least as good as an Eggo.

The whole thing had that feeling I only normally get when I go to my mother's house for Christmas, which I haven't done in years and years, a kind of relinquishment of moving forward. It was great, I loved it. I took more baths than I took showers. Once a week I had to go into the office and the lack of cars on the road would make me cry, would remind me that people were dying, but then I would come home and I would read in a way that it felt like I hadn’t read since I was a kid on summer vacation, a total abandon, a loss of self. It was so messed up. Last summer something went wrong in my back and for months and months the nerve that goes down my right leg had been shrieking in pain anytime I stood up and I had been kind of desperate to be at home and then all of a sudden the world closed down and I was home all the time and I loved it.

I had this English teacher in junior high who was one of those well-known great teachers except that I hated her and I don’t think she liked me very much, but at one point in the class she asked us how to treat other people well and I, believing myself both smart and good, raised my hand and said that we should think about what we would want in their situation and she whirled around and looked at me and said, no, people are different from each other. And no other teacher that I’ve had has ever told me anything as useful as that. I joked a lot during the beginning of the pandemic about how this was my Winston-Churchill-in-World-War-II moment, my Ulysses-S.-Grant-plucked-from-his-hardware-store. I was made for this historical moment. I like staying at home; I like not seeing people. A decade ago I flew all the way across country to a friend’s wedding and the night before the wedding this group of friends I hadn’t seen in years -- some of my closest friends -- tried to lure me out of my hotel room until finally the one of them that was closest to me said, are you kidding? The hotel room is her favorite thing. And he was right and I went to sleep and I felt a little bad about it but not really.

It was a cozy apocalypse; my bedsheets were clean and I wore my comfortable stretchy clothes around the house and the CalTrans signs on the freeway told me not to go anywhere and mostly I didn't. And I didn’t know an apocalypse could be cozy in that way, but the other thing I didn’t know, even though I should have known, even though that terrible John Cusack movie tried to tell me, is that the apocalypse would be nicer to people with more money.

The apocalypse was cozy for me. Because I had a job where I didn’t have to go in and because I had a house that is comfortable and because I had a car and because I was still getting paid. This was not the human condition; this was my condition. It was messed up. The internet in my home was super-fast; I bought expensive maple syrup to put on the waffles. Probably I was doing the most useful thing I could do at that particular moment. I could have been just as characterologically well-equipped to stay home and if I were poorer my historical moment for greatness would have passed me by. It wouldn’t have been less unfair if I had hated sheltering in place, but maybe it would have been better in another way. I don't know.

In Which My Colleague And I Read Classic Pandemic Literature

When the lockdown started, my colleague Doreen Fraser and I were both seized by the same impulse: to read classic historical pandemic literature.

Doreen, seeking a sense of historical perspective that is absent from panicked news stories, picked up Daniel Defoe's A Journal of the Plague Year. Defoe's book was written in 1721 when the plague was spreading across Europe, but describes the earlier 1665 outbreak. Defoe was only five years old when that happened; he wrote the Journal as fiction intended to be historically accurate -- and as a warning and practical guide to help Londoners prepare.

For myself, I'd happened to read a think-piece that mentioned Alessandro Manzoni's novel The Betrothed, a 19th-century classic of Italian literature that takes place in the early 1600s as the Great Plague of Milan swept through the Lombardy region. I had read The Betrothed years ago, and while I remembered that the plot involved a man and a woman who want to marry and encounter endless obstacles, I had forgotten about the plague. Like Defoe, Manzoni drew on primary sources for accuracy; he also isn't shy about inserting his opinions about the massive failures of information and planning that made things so much worse than they had to be -- and resulted in the death of 25% of the population.

Doreen and I were struck by commonalities of practical advice, fascinated by other parallels, and amused by peculiar philosophical takes. So we wrote this piece together.

Commonalities of practical advice:


While these events took place well before the development of the germ theory of disease, everyone knew that people in proximity spread the plague. Doreen says that we learn from Defoe that the best thing to do to preserve your family from the distemper was to lay in as many provisions as you could and lock yourself up in your abode. Defoe warns his readers that the "necessity of going out of our houses to buy provisions was in a great measure the ruin of the whole city, for the people catched the distemper on these occasions one of another, and even the provisions themselves were often tainted."

In London, self-isolation was initially practiced by merchants from the Netherlands, who had experienced the plague there the previous year. Defoe reports that more than ten thousand people shut themselves up in their ships on the Thames. He describes (presumably based on his uncle's journals) how he himself bought two sacks of meal for baking bread and "laid in a quantity of salt butter and Cheshire cheese." And crucially: "I bought malt and brewed as much beer as all the casks I had would hold, and which seemed enough to serve my house for five or six weeks." No flesh-meat, "for the plague raged so violently among the butchers and slaughterhouses on the other side of our street … that it was not advisable so much as to go over the street among them." (Which has an echo in the current pandemic.)

The Great Plague of 1665-1666 in London from the National Archives

Manzoni also talks about the importance of shutting yourself up. Also, he talks about how rich people went out to their country houses, travelers were required to quarantine themselves on arrival in a new place, and plague survivors went around acting like they had immunity passports. As in our own society, poorer people were much more adversely affected: they could not afford to escape plague-ridden areas, they often had to work and could not self-isolate, and they were seen as dirty and dangerous by wealthier people. The inequality of horrible pandemic effects isn't a new or unforeseen happening.

On a lighter, more practical note, while we moderns tend to think we're so clever, both books describe 17th-century food practices that would be very familiar to us. Defoe talks about how when you bought your joint of meat from the butcher, you would take it directly from the hooks and you would put your money directly in the butcher's jar of vinegar. When Renzo, the hero of The Betrothed, went to a bakery, "The baker signed to him not to come in, and held out a small dish filled with water and vinegar on the blade of a shovel, telling him to drop the money in there. Then he passed the two loaves over to Renzo one after the other, with a pair of tongs." I thought: this is just like when I go for take-out and they have a table set out with the food and a sign saying "no cash please."

I was also touched by many small details. Manzoni talks about how all the men ended up with weird long hair and beards, and how friends would greet one another from across the road to chat. When two old friends met up after a long absence, one said to the other, "Now let's go find an open space, out of doors, where we can talk comfortably, without any danger." This made me feel the Lombardians of 1630 were like my old friends. Plague people: they're just like us!

Other interesting parallels:


Doreen noticed that the people of London were watching the numbers as closely as we are and that statistics were the same source of obsession and uncertainty for citizens of 1665 as they are for us. In particular, everyone carefully attended to the weekly bills of mortality. These bills listed the number of burials in each parish broken down by cause of death. This allowed Londoners to gauge which parts of the city and suburbs were currently worst affected. Increases in the numbers occasioned fear and sometimes "inexpressible confusion"; when the numbers decreased people were greatly relieved.

As we are seeing right now, effective counting became challenging and maybe impossible. Defoe says that initially, deaths due to plague were underreported by individuals due to the stigma attached, as we also see now. However, numbers of dead significantly higher than average indicated the presence of the plague anyway, as we suspect today. At the height of the plague in the city, a prodigious number of deaths were recorded in the bills, but the true extent of the devastation was still underreported. The accounting system broke down under the strain -- drivers of the dead-carts either died or fled before their dead were buried, the drivers did not trouble themselves to keep account of the numbers because they were too busy clearing the streets, the parish statisticians died, or entire households perished and the bodies were not found until later.

Bills of mortality, from History Today


The modern day JHU tracker


Meanwhile, we learn from Manzoni that the citizens of Milan had the same problems that we do with plague-deniers, a desire to assign blame, and bizarre conspiracy theories. I couldn't help but think about modern scapegoating and 5G conspiracy theories when I read about the Lombardians' belief in the "anointers." The belief had taken hold in Milan that the plague was being intentionally spread by evil-doers who would smear surfaces with some substance that caused the illness.

While the theory was completely false and absolutely without evidence, the desire to blame was so strong that gangs of people would attack and kill anyone suspected of being an anointer. Manzoni describes a case of an old man who wiped off a church bench before kneeling on it. And then "all the people in church (in church, I repeat!) dashed at the old man, seized him by the hair, white as it was, and loaded him with blows and kicks. Some pushing, some pulling, they hustled him to the door. If they spared his life for the moment, it was only so that they could drag him in that battered state to prison, to judgement, to the torture." Manzoni describes his amazement that even the most educated and most skeptical people believed in the anointers: the most they could bring themselves to say was that their role was minimal compared to the effect of the actual plague.

Accusing the Anointers in the Great Plague of Milan

Manzoni vividly describes his dismay at the mistreatment of the officials who were acting to prevent the spread of the plague. They had the best information and actual plans to prevent the spread of disease, but because of misinformation and misplaced blame, were seen as the source of the problem itself. As Manzoni puts it, these people who "saw a terrible catastrophe coming nearer and nearer, and did everything they could to avert it; and at the same time encountered obstacles where they looked for help, became the butt of popular indignation and were regarded as enemies of their country -- 'pro patriae hostibus' in the words of Ripamonti."

Peculiar philosophical takes:


Alongside the belief in the anointers, Manzoni describes a deep and widespread reluctance to believe that the plague was the plague. People wanted to think that the illness was some other kind of illness, or that people were exaggerating, or that it was all a big scam. Educated people saw the cause of the sickness in a comet that appeared in 1628, together with a conjunction of Saturn and Jupiter.

Philosophically-minded scholars, always eager to play their part, contributed to these efforts by proving by syllogism that the plague could not exist. The man of letters Don Ferrante goes about it this way: "'In the nature of things,' he would say, 'there are only two kinds of entity -- substances and accidents. If I prove that contagion cannot be either the one or the other, I shall have proved that it does not exist, that it is a mirage. And here I am to do that very thing.'" To his critics, Don Ferrante says "I'd like to hear them deny that fatal conjunction of Saturn with Jupiter!" Insert your own jokes about philosophy making progress or not.

The end of the story:

Overall, the ending of the Journal was satisfying. (The plague ends!) But there is also a disappointing deus ex machina. Throughout the book, Defoe is scathing in his criticism of miraculous cures, quackery, and charlatans. He offers a sustained defense of the plague having natural causes. (With the tremendous effectiveness of transmission by contact and through the air, why would God need to resort to supernatural means?) But Defoe attributes the lifting of the plague to God’s intervention: "when the condition of the city of London was truly calamitous, just then it pleased God, as it were, by His immediate hand." However, it is worth noting that this view was not universally shared. (No prize for guessing who the contrarians were!) According to Defoe, physicians acknowledged that a natural account of the end of the outbreak could not be given, but the philosophers were hung up on searching for natural causes.

As for Manzoni, you won't be surprised to hear that the story has a happy ending. Well -- obviously not for the massive number of people who died of the plague, but definitely for the betrothed themselves, who do finally get to marry and live happily ever after. Interestingly, the chief obstacle that sets the plot in motion is that a rich and corrupt nobleman, Don Rodrigo, is doing everything he can to prevent the marriage because he wants Renzo's bride Lucia for himself. So in addition to being a plague book, The Betrothed is also a #metoo book, showing again how the problems of modern life are often the problems people have always had.

Lockdown, The Singularity, And The Idea of Post-Humanism

Back in 2010, when we were younger and more naive, a bunch of people were excited about something they called The Singularity, a time in the not-so-distant future when humanity would be replaced with ... something else. Post-humanity. A "superior intelligence that will dominate," so that life will take on a radically altered form that we cannot see and predict now.

Here at TKIN, I expressed my skepticism. Sure -- if you want to make an artificial liver or bionic limbs or whatever, awesome, I love it! But that's not post-humanity. It's the human experience, just somewhat improved. Longer, more fun, less painful, whatever. Beyond the human-upgrades interpretation, the suggestions get more radical but also more vague. We're going to meld minds and machines. We're going to upload your consciousness into a computer. You'll live forever, in some unforeseen Venn diagram overlap zone between virtual and real.

When I pondered this in 2010, I was like WTF, and now that we're in lockdown I feel even more like WTF. Aren't most of the best pleasures of life embodied? In 2010 I listed sex, food, wine, sports, music, and dancing as things we like to do that are embodied, seemingly inaccessible to the computer-based post-human. And what's on the other side, on the post-human, singularity playlist? Math? Most people don't even like thinking about math.

I don't think I'm alone when I say that lockdown has made the importance of the embodied life even more vivid to me. We're sick of interacting through screens. We long for the touch of our family and friends. People are flocking to bake bread, grow plants, and acquire pets; the concept of "going for a walk to get some fresh air," at one time a symbol of a life lived quietly and meditatively, is now essential to the happiness of millions of people. I myself have taken care to notice the minute daily progress of leaves coming out on trees in my neighborhood.

I was mentally reviewing all the internet think-pieces I've read about what people are experiencing in lockdown, and the one disembodied activity now flourishing that I could spot was online chess. Touchingly, the New York Times places this news in the Sports section, where they are clearly dying for content.

Anyway, after writing that post in 2010, I expressed my doubts to some guys who were roughly in the robot-biz, and they smiled that guy-smile that comes up when a woman says something they think is stupid. It's not computer-based in that sense, they explained slowly to me. You'll still be able to do all the fun things. It's just that the whole system will be artificial, and therefore more permanent, less flawed, and better.

Fine, but as I've already said, I don't think that's post-humanity -- that's more like keeping your human self while being less susceptible to the world's problems: less vulnerable to injury, less in need of food and medical care, less dependent on others for your well-being. Our desires for life to be less difficult, less painful, less scary and less mortal are very human, and like the embodied pleasures, they have been intensified by the lockdown and the pandemic itself -- as we have all been reminded how vulnerable we are to illness and death, how challenging it is to care for others, and how fragile our little systems are.

Of course, in our radically unequal world, these things bear more heavily on some people than others, and awareness of our shared situation and our interdependence has been a bit of a wake-up call to some people who maybe used to imagine themselves as self-sufficient tech-oriented rich people.

I don't know what those people are dreaming of now, and whether it's still something like The Singularity or whether it's more like a walled city in New Zealand with a stockpile of ventilators. But whatever it is, I hope they'll remember it's not really post-humanism that they're hankering after. It's more like human life made less difficult and scary. And that dream is not only shared by everyone, it's also about the most human thing you could possibly have.

Dance Like No One Is Watching? The Existential Question Of Zoom Video

Before the lockdown I used to go to dance classes. Dance was something I had done as a young person and then didn't do for like ... thirty years or something. Then when I started up again it literally took me years to work up the nerve and motivation to go -- which could, itself, be the topic of another blog post whose conclusion would be something like "just because years have passed doesn't mean you won't do it eventually." I mostly do Contemporary -- but I also supplement that with classes in other styles like ballet and hip-hop.

It's an interesting question why dance class creates anxiety. The most obvious answer is that dancing well can be difficult, and class creates an atmosphere where people are together and naturally you worry that others are judging you. Or maybe you are judging yourself. I feel like one of the weird lingering effects of Anglo-Saxon culture is that there is something about dancing itself that feels especially vulnerable. But I wouldn't know how to connect those dots.

With the lockdown, dance classes are happening online, some through Zoom. It works OK, but I have had a hard time articulating to my friends why dance class via Zoom is such an impoverished experience for me compared to dance class in person. Zoom works OK for me for fitness: the instructor says to do a lunge or a burpee and I do a lunge or a burpee. It's not the same as being in person, but I get a workout and have some echo of the experience of working out with other people.

But with dance? It's just not the same. Part of it is communicative: in class, there is a back-and-forth between the instructor and the students with respect to what students are getting or not getting and how they're feeling about what is happening and what the instructor has in mind for the choreography. In addition to obvious things like "oh you're supposed to turn to the left, do it this way," there is also a complex interplay about the mood and the feeling of the dance. I guess it's not impossible for some of that mood to pass through the internet. But it's not easy.

A more significant factor for me seems to be just the essential pleasure of dancing-with-people -- even for something like Contemporary which is not, on the face of it, what we'd call a "social dance."

And that pleasure brings me to the great question of dance-via-Zoom: video on, or video off? For students, video-on is optional. And I feel like we're still working that out. I've been to classes where almost no one has video on, and I've been to classes where almost everyone has video on. Even though I'm seeing everything through a MacBook Air-sized screen, which means that from the recommended 6-8 feet away I can barely see anything, it is still way better for me when people have their videos on. It's a far cry from being together in a room, but at least there's the feeling of interaction and dancing-with-people.

It's an individual thing of course, and people have all kinds of reasons for choosing video off. I chose it myself recently for a ballet class that I feared was above my level (and it turned out I was right). But I feel like there is definitely a social norm aspect to it. Like, if you log on and everyone has their video off, you think "oh I guess I should too" -- it's embarrassing to be the only one putting yourself out there. But if you log on and everyone has it on you're like "oh ok, we're doing this? I guess I will too."


So I feel like livestream has changed a situation that was necessarily collective -- you're all in a room together, you all have to suck up the vulnerabilities -- to one of those weird individual versus group things where you have to ask yourself whether everyone is doing it and if not whether you're willing to put yourself out there as the only person doing it or what. It's a little sad.

Anyway, unless I have a specific reason not to, I try to always turn the video on for dance class, and I always hope that everyone else will too. Even though it's "optional," I was happy to hear a recent instructor admonishing the class to "clean up your room next time" so you can have the video on. Meetings are something else altogether -- but when we're dancing? I miss seeing y'all when your names appear in those black boxes.

I had never really thought much about that thing people say of "dance like no one is watching." I get it -- you're supposed to go all in or something. But one thing I have learned from lockdown is that, like a lot of inspirational quotes you find on coffee mugs and placards, it's not really solid advice when you take it too literally.

Coronavirus, Capitalism, and the Ones Who Walk Away from Omelas

Content warning: this post gets a little dark in places, so if you're not in the right frame of mind, maybe read it later, or don't read it at all.

The pandemic crisis is prompting a lot of reflection about the dysfunctions of a capitalist system, which I guess is a good thing, though I'm pessimistic about much in the way of positive change. The richer classes are finally like -- "wait -- my food and well-being depends on these people! Are they OK? What if something happens to them?!"

Often when I think about the dysfunction of capitalism, I think about the short story "The Ones Who Walk Away From Omelas," by Ursula Le Guin. The story describes a beautiful well-functioning town in which almost everyone is well and happy, but in which all the good things depend on the utter misery of one small child -- a child who is forever locked in a small basement room in squalid conditions and constant suffering.

People talk about the various justifications and injustices of capitalism in many ways, but one common one has to do with overall prosperity: more wealth is created by capitalism than in alternative systems, it is argued, and this wealth improves people's lives. Here the details get a little fuzzy, about whether that wealth has to improve everyone's lives or what exactly.

From a practical point of view, I think it's fair to say that if actual contemporary western capitalism creates wealth, it is doing so in a system that relies on horrible and exploitative conditions for many. It's never been a secret that agricultural workers, people who work elsewhere in the food supply chain, cleaners, gig economy workers, and others work long, long hours in sometimes brutal conditions for little pay -- often working multiple jobs. One of the wake-up calls of the pandemic has been that carers in nursing homes are working multiple jobs at various homes, just to make ends meet, thus increasing the likely spread of disease. And we've written on this blog of the way that modern electronics production relies on getting minerals in exploitative and violent conditions.

How the denizens of modern capitalism square their faith in overall prosperity with the existence of this suffering and violation of rights has always interested me. The situation is obviously complex, but I think one possibility has to do with the ways that various values create small shifting zones of coherence and acceptability. Inside these little zones things seem to work OK, which makes us forget that things are really not OK at a deeper level.

For example, with respect to the social contract in modern capitalism, I expect that many people who believe in the importance of prosperity/overall GDP/economic growth also endorse some informal version of the following: those who work hard at a full-time job should be able to support themselves or maybe even a family; those at the bottom of the socio-economic ladder should be able to improve their situation with care and hard work; wages should reflect a fair wage for the work involved. Put more abstractly: people care not only about prosperity but also about fairness, equality, and expressions of mutual respect.

One thing that happens in everyday capitalism, I think, is that in certain small zones and in the experience of certain people, the system seems to make these values come together. If your parent came to the US from a life of poverty in Europe in the early 20th century and worked hard to send you to good schools, and you prospered from the contingencies of the 1950s to rise up to the middle-class, then from one perspective it looks like it all comes together: the society's prosperity goes along with the opportunities that the individual takes advantage of; rewards follow.

As is often discussed, this little zone of coherence is just that -- a little zone. For many people, it doesn't work this way at all. They work harder and harder and wages drop; ill health brings catastrophe despite best efforts; racism and discrimination radically diminish options and possibilities.

In this sense, the middle class and those who are even more well-off are somewhat like the denizens of Omelas: we're in a good situation made possible by someone else's horrible situation. Some defenders of capitalism are even kind of explicit about it, arguing against universal basic income on grounds that poor people will then be less motivated to do the crappy jobs that society needs someone to do.

When I talk about capitalism being morally dysfunctional in this way, sometimes my interlocutors are shocked, and they point out to me the horrors of alternative systems. But I think the one doesn't exclude the other. Even if capitalism is the best system out there, it can still be radically unethical. Sometimes in life you have no good options. And even if it is the case that other systems are worse -- then it's even more important for us to talk about the wrongnesses of our system so we can at least try to mitigate them or something.

In the Le Guin story, the residents all learn about the existence of the child in misery during their adolescence, and some go to see the child in person. Most weep and rage but realize they can't do anything about it and go back to normal life. Others, though, do not go back to normal life. What they do is walk away. Le Guin says, "The place they go towards is a place even less imaginable to most of us than the city of happiness. I cannot describe it at all. It is possible that it does not exist. But they seem to know where they are going, the ones who walk away from Omelas."

Like Le Guin, I can't imagine what it would mean to walk away from modern capitalism. In most basic literal terms, an alternative physical space would either be engaging in more capitalism, by buying up land, or infringing on the rights of other people whose land it is. In my darker moments, I fear that to walk away from Omelas would require ending my existence altogether.

But I try to remind myself: we're not in a fictional story land where the fates have decreed that our well-being and our failings are locked together forever, with our happiness inherently resting on others' suffering. We're in the real world, where things are complicated and confusing and can often be changed.

As the pandemic forces people to confront the situation of essential and vulnerable workers, let's remember that there are a lot of people for whom capitalism leads not to opportunity but to misery. Let's keep in mind that caring about overall economic growth and prosperity means facing up to this fact rather than getting to ignore it.

I fear that keeping these things in mind is going to get harder and not easier in the near future, as economies contract, businesses close, and times get tough. But we have to do it -- because unlike the people in the story, we don't have the option of walking away.

What You Want To Do And What You Should Do: Lockdown Edition

I was sitting around at home on a weekend day feeling dispirited and trying to figure out what to do next. Someone suggested that I might think about what I wanted to do as opposed to what I should do, and do that. Don't worry: this suggestion was socially distanced.

It sounded so plausible as advice. But I had trouble putting it into action. I had no problem coming up with things that seemed immediately appealing, but most of them were things like sleeping and snacking that I knew would make me feel more dispirited in about twenty minutes. I certainly had tasks of various kinds waiting to be performed, but those all seemed like things I should do, not things I wanted to do.

As I should have known, I could have looked to this very blog to understand the source of my confusion. I have often written here about the weirdness of the typical desire-decision matrix of western popular culture. I see it was in 2010 -- ten years ago! -- that I wrote about the lost art of desire-management -- the way we treat our desires as if they are immutable and random forces of nature instead of things responsive to habit, social context, and our own previous choices -- and thus, things that can be directed and managed.

One of the most destructive elements of that cultural desire-decision matrix is the bifurcation that splits everything into distinct categories of pleasure-yum-love and those of obligation-task-force-yourself-to-get-on-it. Or -- as I described it in 2014 -- that there are things you want, and there are costs to getting them, and the whole question is how much you're willing to "pay" to get your preference satisfied.  

I don't know how this bifurcated perspective got so entrenched in our way of thinking, whether it's a trickle-down from formal thinking and rational choice theory, or whether it's just the metaphors of capitalism creeping in everywhere, or whether it has something to do with the neoliberal entrepreneurial self, or what. But it is not good.

Among its more disturbing aspects is that it leaves out a huge category of things crucially important: the things that are difficult, voluntary, and worthwhile. Like making art, or writing something, or learning new things, or reading what for lack of a better term I'll call literary novels. You don't have to wake up with a burning desire to practice scales to make learning to play the piano a cool and worthwhile thing to do. No one finishes breakfast and discovers they're just dying to get on with re-editing the draft of the writing thing they've been working on and not getting anywhere with.

We don't even really have a good adjective to describe the positive subjective feeling that comes with engaging with these types of activities. "Satisfying" is about as close as I can think of, but that is such an oatmeal kind of word and doesn't really capture it.

The resulting perplexity is, I think, one reason that "goal" culture has so totally come into its own in modern life. If you set a goal, then somehow you're able to communicate -- to yourself and others -- why you're doing a thing and why it feels good to do it, even when it is difficult and motivationally challenging. But for me, anyway, the idea of a "goal" distorts the whole thing. I don't write because I need a badge celebrating ten years of continuous blog writing. I write because it's -- worth doing, or satisfying, or whatever the word is that we don't have.

I feel like lockdown has raised new challenges for these complicated pleasures of life that we don't have a name for. To me, one of the small sadnesses of the culture of lockdown is that this amorphous category has somehow formed a weird locus of interpersonal antagonism. First there were the people who said "Hey, you could use this time to learn something!" Which is true. Then there were the people who said "Speak for yourself! Some of us have kids and chores and jobs!" Which is also true. Then it was pointed out that even if you don't have other demanding obligations, lockdown is hard, so if you want to eat a cookie, just eat a cookie ffs. Which is also, of course, true.

I'm the kind of person whose general life sadness comes from the Pointlessness of Life demon rather than the Fear and Anxiety demon. So the answer to my weekend conundrum ultimately consisted in remembering that just because I didn't want to do a thing didn't mean it wasn't worth doing, and that even in lockdown, I had to keep making myself do stuff. Just because there's a global pandemic on doesn't mean I'm off the hook for doing stuff I don't feel like doing. So -- it wasn't really what I wanted to do or what I should do but something else altogether.

As always, your mileage may vary.

My 400-Mile Public Transit Experience: A Multifaceted Analysis

Last weekend I went to a conference at Notre Dame in South Bend, Indiana. Because I've been learning about the contribution that flying makes to climate change, I decided to travel to the conference by alternative means. Here is my story.

The trip:

I took a train from Kitchener, ON to Windsor, ON leaving around 7:15 on a Thursday after class, changing in London, and getting to Windsor around midnight. I stayed over in Windsor, then got up Friday and took the cross-border Detroit-Windsor tunnel bus. I got off that bus and walked about 20 minutes to the Detroit bus station, then at 1:05 pm I got on a Greyhound that got into South Bend around 6:30 -- which should have been time for exercise but which I took to be time for wine and food and rereading Lucky Jim at one of the seventeen Irish themed pubs nearby.

The conference was all-day Saturday and in the morning, Sunday. Then I did basically the same thing in reverse, except I took a taxi instead of the tunnel bus and I had to catch the train from Windsor at 5:30 Monday morning.

The transit experience:
The main thing about the transit experience is wtf is with putting train and bus stations way out in the middle of fucking nowhere? To get to the Detroit station you walk past blocks and blocks of concrete and fences and nothing. There's no coffee or food in the station, or visibly near the station (though I did use Apple Maps to make my way eventually to "King Coney" Coney Island Diner which turned out to be awesome). Right, because what long distance bus traveler needs food or coffee?

I thought Detroit was bad, but then the South Bend bus station is at the airport. Literally -- it is inside the airport. Thus forsaking, for no gain, the main benefit of public transportation, which is that you normally don't have to make some special trip in some special expensive taxi or something to get to the station, because in a normal city, normal people put the station in the middle of the city where the people are.


I realize the aim is to optimize. And that the little bus-taking needs of little bus-taking people just don't add up to anything. But come on -- it's supposed to be a station, not a distribution center. We are people, not boxes of Cheerios.

The personal experience
The main thing about the personal experience was the number of small interactions I had with people that involved small exchanges of generosity, and how surprisingly life-affirming that was. On the bus to South Bend, a woman asked me for a mirror, revealed that she didn't have a phone, then ... I don't want to tell her story even anonymously, but let's just say it turned out she was on her way to a different city for one of those things you do when nothing else in your life is remotely working in any way. She needed to call a place, she needed to transfer prescriptions. We chatted and I loaned her my phone. And she called the place and talked to the people. And she called Walgreens. And then she called her friend. And by the end of the trip the planning and the prescriptions were all sorted out.

In South Bend someone let me share a cab at the very last minute, after a hotel concierge said to me slowly "We find Lyft and Uber are fastest. Don't you have Lyft and Uber?" In Detroit, I went back into the station to use the ATM because the taxi driver was like "Uh, yes cards but I could really use the cash so I could get gas today." I don't know how to describe it, but being able to move cooperatively through these small obstacles with other people made some of the world's other awfulness seem momentarily less bad. And I don't know why driving and flying never seem to have that texture for me, but for whatever reason, they don't.

The pointlessness of the gesture experience
When you take a long bus trip that you could easily have avoided with a pleasant flight from the cute Island Airport via straight shot to Chicago, you have a lot of time to ponder the pointlessness of your choice. People are flying to other cities for destination bridal showers, FFS. I know that in context, my individual sacrifice does not contribute to a solution in any measurable way, and that solving the climate crisis is going to require more than individuals making individual choices.

Still, it didn't feel pointless. Generally, I am disturbed and weighed down by the degree to which for middle-class people on up, the lifestyle leading to climate catastrophe feels so easy and seamless. It freaks me out when I'm in the airport and the easiest thing is to buy heavily packaged snacks and use thirteen single-use plastic containers in a row and pay twenty dollars for a sandwich. I know it is capitalism's job to make me feel this ease, and I bristle. So I found weird small comfort in the idea of a 400-mile trip feeling difficult, like yes, it's a 400-mile trip for a person, which is a difficult thing, and it felt difficult. Plus there were water fountains in all the stations for my reusable water bottle.

More relevantly, it feels to me like between now and some hopeful imaginary future in which things work better, things are going to have to change, and we have to start somewhere changing what we think of as normal activities. Some things I'm not so good at, but taking the bus? As we know, that is often within my wheelhouse.

I've talked to some friends and colleagues about the flying thing, and I got a variety of thoughtful responses, including the idea that individual choices aren't the crux of the matter, which is, in some sense, true. But some people also have said "Well, we're doomed anyway. So." and that one I have trouble with. I mean, yes, probably. But even if that is true in the long run, doesn't it matter whether the coastal cities are flooded in 50 years rather than 30? Or whether kids who are five now get a reasonable world when they're 50? Isn't a slightly less bad climate crisis better than a slightly worse one?

I haven't stopped flying, and I'm going to fly to California to see family, a trip that seems non-negotiable to me in terms of having a life because I love them and need to see them. But I'm taking the advice to make each trip count, and I'm going to do more things there and see more people. Partly this will involve a roughly twelve-hour transit trip from the Bay Area to SoCal. I'm checking out the Flix Bus. I'll keep you posted.

Statistics, Randomness, And My Misspent Youth

I've just been reading Nassim Taleb's Fooled by Randomness. It is a fun read, but at the risk of sounding intellectually immodest, I feel like it is full of things I already knew. Rare, unexpected, and often bad things can happen. It is easy to be fooled by statistics. It is easy to be fooled by life in general. You never really know what's going to happen. Plan accordingly.

Taleb's main target audience is traders and others who think they're succeeding and failing because of skill and insight or lack thereof. He says they are often just the victims of survivorship bias and other forms of bad reasoning. Survivorship bias is when you look at the winners and assume that whatever they did made them winners, while failing to look at the losers and what they were doing. Everyone might be throwing darts at the Wall Street Journal stock pages; tautologously, some will end up winners relative to the others. If you focus on the winners and blindly follow their strategy or their stock picks, you'll be SOL.
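
(To see the mechanism in miniature, here's a toy simulation -- my own sketch, not Taleb's -- of a crowd of traders whose yearly returns are pure coin flips. Look only at the top of the final ranking and it looks like skill; by construction, it's all luck.)

    # Toy survivorship-bias simulation (my own sketch, not from the book).
    # 10,000 "traders" each get ten years of coin-flip returns: +30% or -20%.
    import random

    random.seed(1)
    n_traders, n_years = 10_000, 10

    def random_career():
        wealth = 1.0
        for _ in range(n_years):
            wealth *= 1.3 if random.random() < 0.5 else 0.8  # pure luck
        return wealth

    results = sorted(random_career() for _ in range(n_traders))
    print(f"Median final wealth:    {results[n_traders // 2]:.2f}")
    print(f"Best 'track record':    {results[-1]:.2f}")
    print(f"Traders who tripled up: {sum(r > 3 for r in results)}")

Plenty of these purely random traders end up with impressive-looking records; study only them and you'll "learn" a strategy that was never there.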

More broadly, the book is about unpredictability and how unexpected and surprising things can derail your life. This is something I knew from a young age, mostly because it features prominently in both fiction and reality. You might study for years then develop a brain tumor. You might train as an athlete then get hit by a car. You might raise your kids on organic food and still one of them might die -- from food poisoning, or a really bad flu, or ... anything, for god's sake. If I hadn't had the predictability of unpredictable events hammered home to me enough by the time I was fifteen, I definitely absorbed it when my father died suddenly -- and unexpectedly! -- from a heart attack at that time. He was forty-eight years old.

So I absorbed the epistemological lesson early. From a practical point of view, Taleb says that in the face of uncertainty, a reasonable response is to be risk-averse, and I completely agree. As I understand it, for a trader this means seeking smaller, steadier gains, but to be honest I sort of zoned out on that part of the book because I am almost aggressively uninterested in thinking about investment strategy.

At this point you may be wondering: So, Patricia, did you become a risk-averse young person? Why yes -- yes I did! I skipped class, stayed up late drinking with friends, majored in two abstract and relatively useless majors (math and dance), drove around in my mom's ancient and poorly maintained VW Rabbit, and engaged in various activities we won't describe here but that are usually represented as paradigm cases of adolescent devil-may-care behavior.

To me, these were risk-averse activities. My reasoning -- which I stand by today -- was that the pleasures of impulsive behavior have an excellent chance of working out in the immediate moment, while the pleasures of planning and carefulness are distant in time and thus far less likely to actually work out.

The pleasures of hanging out with friends when you should be in class, eating cake for lunch, drinking and smoking, dancing at parties, spending a beautiful sunny day with a romantic interest instead of writing papers, etc. -- when you're a bright-eyed eighteen year-old, these are very dependable pleasures in the short run. Sure, they often lead to unhappiness in the long run. But whatever -- the long run is unpredictable. That is the whole point.

To forego these pleasures in the moment to get better grades, avoid lung cancer at middle-age, or even just try to live a longer period of time seemed to me like a crazy and extremely risky strategy. What if you followed all the rules and got hit by a bus at age 20? FFS. To me, my life strategy was based on getting while the getting was good -- a paradigm example of risk-averseness if ever there was one.

Over time, I stopped being so impulsive. Partly it was because the pleasures of short-term pleasures got less intense for me as I got older (as they do for so many people), and I started getting bored. Partly it was because I started to experience the negative effects of doing whatever I wanted to do in the moment: my health declined, I worked at shitty jobs like waitressing, and I had no health insurance. Partly it was because I formed close relationships with people who wanted me to flourish in the long term, which made me want to flourish in the long term as well.

For whatever it's worth, none of these reasons has to do directly with the kind of caring about the future me that is supposed to characterize rational risk-averse thinking in the standard model of human decision-making.

When people talk about risk-averseness and the rationality of overcoming impulse, I feel like there are assumptions in the background: that what it makes sense to do is plan for the future, store up your chestnuts for the winter, try to live to a ripe old age. But from the unpredictability-of-life perspective, those assumptions are peculiar and it's more the other way around. When you do whatever thing you want to do in the moment, that pleasure -- even if it's just the pleasure of a desire immediately satisfied -- is yours, and no one can take it away from you. It's a sure thing. Whereas the long term? You never really know what's going to happen.

The Pointlessness And Paradoxes Of Responsible Consumerism: Apple Edition

I woke up this morning and one of the first things I saw was this headline "Apple’s New Proprietary Software Locks Will Kill Independent Repair on New MacBook Pros."  

This is bad. It's bad for sustainability. It violates the "right to repair." It puts small repair shops out of business. It's bad for schools, which fix computers at large scale, and for people in rural areas who are not well-served by Apple's own technicians.

I had one of those vaguely admirable but ultimately ignorable impulses toward responsible consumerism, and I started thinking about breaking my Mac habit. I had long been toying with the idea of learning to use Linux, as a small way of bucking the consumer tyranny of the big tech companies. And I thought this could be the moment, the thing that set all that in motion. I felt the momentary warm glow you get when you see some disturbing news item and form some half-baked plan to Do Something.

To learn Linux, I thought, I might buy a used or cheap laptop to learn on. The only computer I have now is my work computer, and for all kinds of reasons I didn't want to use that one. Already my plan was going the way so many responsible consumerism plans go, where you start off wanting to save the planet and you end up buying something. But whatever.

To think about what kind of laptop would make the most sense for this project, I went to iFixit.org. This great site not only has guides for how to fix things, it also rates products with respect to how repairable they are.

I quickly found ratings for phones and tablets, but they didn't have any for laptops. What they did have, though, was a link to Greenpeace's guide to greener electronics.

At the guide, I learned that no consumer electronics company gets an "A" for environmental impact. The only company that gets a "B" or above is Fairphone. As we've discussed before ("I Went Down The Ethical Cell Phone Rabbit Hole,") Fairphone is just phones and they're not even available in North America.

After Fairphone, the best company for environmental impact was ... Apple, with a "B-."

It's a testament to the power of consumer culture how powerfully it hit me that I might be able to satisfy my impulse to responsible consumerism by actually buying something I wanted. I could get a used Mac, and use it to learn Linux. I could even tell myself that this plan was most rational, because if I did ultimately shift over to using Linux all the time, having a Mac would allow me to access the documents I've created in software like Pages, which is Mac only. 

I went to the Apple website to check out what refurbished laptops they had. They have no refurbished MacBook Airs available -- probably because these are affordable and reliable and everyone wants one. What they do have is the MacBook, the fancy, high-end, super-thin, no-ports machine. I had wanted a MacBook when they came out. Hmmm...

I don't know what to say about the ridiculousness of starting with an I'll-show-you-Apple Mac-avoidance plan and ending with MacBook shopping, except that it doesn't feel like a one-off to me. It feels sort of like an experience I have all the time.

I guess the moral of the story is that when you live in a society where people are paid millions to convince you that shopping is the answer to your problems, you tend to think that shopping is the answer to your problems.