Radio Atlantic: What AI Means for Search

With Google and Microsoft releasing new AI tools, it feels like the future is now with artificial intelligence. But how transformative are products like ChatGPT? Should we be worried about their impact? Are they a new Skynet or just a new Clippy?

Staff writers Charlie Warzel and Amanda Mull discuss.


The following is a transcript of the episode:

Charlie Warzel: It feels like we’re moving toward this search paradigm where search is going to get a little more fun, a little more entertaining, and, possibly, not any more accurate.

And instead, the debates that we’re going to have about who gets to decide what are being entrusted to huge complicated machine-learning models that we have no idea how they’re trained or what they’re gonna do. It kind of sounds like a nightmare.

Amanda Mull: Welcome to Radio Atlantic. I’m staff writer Amanda Mull and I’m here with fellow staff writer Charlie Warzel, who writes about technology for us. Lately, Charlie’s been considering the nascent AI arms race between Google and Microsoft and why, as he put it in a recent piece, Clippy has touched the face of God. Charlie, it’s great to have you.

Warzel: Happy to be here. Happy to talk about Clippy.

Mull: Great. Great. So, Charlie, lately you’ve been writing about and reporting on AI now that Bing and Google have mutually decided to make all of us think about AI. But first I want to talk about finding a bookcase.

Warzel: Let’s do it. Let’s let Bing be our search butler here.

Mull: What I would like to find out is if the Ikea Kallax bookcase—a classic—can fit in the back of a 2002 Volkswagen Jetta.

Warzel: Okay. Alright, it’s searching, it’s still kind of creaking and groaning under the weight of your search. Okay.

Hello. This is Bing. The Ikea Kallax bookshelf has a height of 30 inches, or 77 centimeters, a width of 58 inches, or 147 centimeters, and a shelf depth of 15 inches, or 39 centimeters. The 2002 Volkswagen Jetta has a trunk space of 47 inches wide, 42 inches deep, and 22 inches tall. Based on these dimensions, it seems unlikely that the bookshelf can fit inside the trunk of the car. You may need to disassemble the bookshelf or use another vehicle to transport it.

Mull: It seems like we need to ask Bing to use the packaging dimensions and not the assembled dimensions.

Warzel: Ooh. Okay. It’s searching …

Hello, this is Bing, I apologize for my mistake. The Kallax packaging dimensions are not clearly stated on the IKEA website. However, based on some online reviews and forums, it seems the package size is about 60 inches long, 16 inches wide and six inches thick. The weight of the package is about 70 pounds. I hope this helps you plan your transportation better.

Well, I’m gonna say, “So, can it fit in the back of my 2002 Jetta?” Okay.

It might be possible to fit the package in the back of your 2002 Jetta if you fold down the rear seats and slide it in diagonally. However, it might be a tight fit and you should measure the space more carefully before attempting it. Alternatively, you could use a roof rack or trailer to transport the package more safely and easily.

Based off of me reading that back to you, and that being your first, presumably, AI-chatbot experience: Is that underwhelming? Is that overwhelming? Is that whelming?

Mull: Honestly, it did work a little bit better than I expected for that kind of query.

Warzel: But I also didn’t feel like I was living in the future, right? Like I was kind of just like, Okay, cool.

Mull: Right. But I didn’t expect it to quite so efficiently respond to the query. Honestly. Just straight up.

Warzel: Yeah. So you know, points for Bing.

Mull: Yeah, it’s strange, after hearing so much hype about AI and about the promise of AI search and about how this could change the internet and change our experience with the internet, to finally go through the process of using it and end up like, Oh, that was okay.

Warzel: Yeah.

Mull: Can you tell me in basic, straightforward terms, what it is that these services are doing?

Warzel: So it’s kind of a couple of things that are happening at once that give it this feeling of the smart robot who’s returning your every query. You have the GPT-3, GPT-4, whatever style text model, which is predicting, based off of all of this information it’s absorbed, what’s coming next. Then layered on top of that, you have all that Bing can do or that Google can do with its chatbot in search in general, and it’s kind of mashing those two together. So when we say “Can I fit this IKEA bookshelf into my Jetta?” it’s just doing kind of raw Google searches.

Mull: Yeah.

Warzel: And instead of even giving you links, we’re just going to give you the definitive answer. And that to me feels a little like, Yes, this is helpful. Are we also sort of missing the bigger picture of what people are actually looking for when they’re trying to search?

Mull: Right. It seems like Microsoft and Google in their AI-search efforts are perhaps trying to provide us with God’s perfect Redditor. This is the guy who knows the answer and it’s an AI chatbot who has looked at all of Reddit and condensed it for you.

Warzel: This is honestly one of my big concerns about AI-enabled search in general, which is that we’ve been doing this since, I don’t know, like 2016. Having these really furious debates over content moderation and who gets to say what. And in the world of search, there are all these different ideas about what websites Google is going to elevate. You know, “Is it going to be third parties that Google hires to evaluate the quality of different links when it comes to financial information, medical information, news, or is it going to be engineers at Google themselves? Is Google going to be the arbiter of truth here?”

That was in an environment with search where it was still, “We’re going to give you a slew of links,” right? And then it’s choose-your-own-adventure. This is a new paradigm that essentially says, “We’re going to give you a canonical, definitive answer.” The machine is going to scroll through and it’s going to talk to you like a human being, it’s going to speak really confidently, and it’s going to give you the information to the best of its ability and as we know, these machines right now are really good at confidently lying or bullshitting.

Mull: Yeah. It seems like this has enormous capacity to sort of obscure what is happening in a way that seems not helpful. What do you mean by B.S.? Can you give us an example?

Warzel: There are so many examples of people typing stuff in. “Write me a book report about Animal Farm,” [or] some book, and it will come back with very confidently stated false facts. Someone described it as—and I really like it—“spicy autocomplete.” Which is just that these models have scanned billions and billions of pieces of information from books to commentary to literally everything and it’s just predicting, based off of that, what should come next. And so it’s not necessarily trained to give you these canonical definitive answers. The confidence and the sophistication with which it gives you an answer leads you to believe that the machine’s telling you the truth when it’s stating something that’s false.

Mull: We’re going to take a short break and we’ll be right back.

Mull: Are there any good examples that we’ve seen so far in the extremely early days of these tools that demonstrate what some of the potential pitfalls are?

Warzel: A good early example is that in Google’s demo of its own search chatbot tool, it apparently, in one of its promos, surfaced something about the James Webb Space Telescope that was incorrect.

And Google’s stock price slid by like 8 percent on that day. But then online there was this whole series of people—like astrophysicists—who were debating, “Actually like technically … maybe the Google answer was right.”

And so you kind of got this preview almost of what the future of all this is going to be, which is that, you have the machine confidently stating something and then all these different people arguing about whether or not this was true or false and meanwhile, everyone’s gaining and losing fortunes in the stock market based off of it. It was a very kind of dystopian preview.

Mull: Yeah, yeah, absolutely. And I think that when it comes to shopping, recommendations, confident recommendations, [are] a big existential problem in online commerce. Because we have this vast array of options on the internet and they are too many and too confusing for people to make meaningful sense of them and feel confident in purchasing things, often. So the tone of AI and the confidence with which it says things gives it an opportunity to be especially manipulative in a lot of ways, but with commerce in particular, because often what people go to Google for when they’re trying to figure out if they should buy something is to hear someone confidently tell you, “This will work for you.” It seems like this can be exploited by all the same forces making your Google results now absolutely unusable in a lot of ways.

Warzel: Yeah, I mean, to me what’s really tough to wrap my head around is that the whole promise of these large language models that generative AIs are built off of is that it has read the internet and it has read every great book and it has read every … it’s trained off of more information than you could ever even imagine cramming into a million people’s brains.

And if you start monetizing this, what you’re doing then is you’re, in essence, retraining this thing that is supposedly transformative to say, “Yes, we know that like you are the modern Library of Alexandria. But we would really love it if you put this canine chew toy at the top of the stack,” right?

There’s something that’s so corrupt about that premise that you have the totality of the world’s information at your disposal. And this product is actually the thing that you need to show above everything else because somebody paid for it.

And it’s also, to me, we’re adding a whole other level of opacity to this. These models are really hard to understand. And then it’s gonna be very difficult to unravel where the money is coming from. Google right now, it’s just like, “Hey, this box has green around it and it says paid or it says ad.”

You just, even if you click on it, you kind of know, Oh yeah, that’s how the game works. This could be totally different and sort of insidious and maybe not nefarious, but it’s like the virus of capitalism, infecting the host of the AI or something like that, you know?

Mull: Right, right. When some fundamental structure of the internet—and how regular people use the internet—changes, there are always so many unintended and unpredictable consequences. Even people who are not particularly interested in tech can look at social media and go, Yeah, the advent of these platforms fundamentally changed the way that a lot of people interact with their friends, interact with politics, interact with information.

It’s hard to imagine a scenario in which this doesn’t have just sort of all kinds of messy, unintended consequences that we probably can’t predict right now. The signal that I think these companies are picking up on is that in order to solve problems, people generally want to interact with other people. You want to be able to walk into a store and ask a question of a salesperson. You want to be able to get someone on the phone to fix a problem for you. You want to ask a knowledgeable individual about something you’re unsure of and benefit from their expertise. It seems like they’re just fundamentally misdiagnosing the actual solutions that people want, which is to have more of their life mediated by interactions with other people than interactions with layers of technology.

Warzel: This is, I mean really and truly always what you butt up against with people who are pushing technological change forward. It is seen as a challenge, right? To create an artificial general intelligence.

That has been the stuff of computer-science dreams for decades and decades and it’s sort of this quest to do this thing that feels almost unthinkable, right? And then of course it’s like, Oh well we need to commercialize it, and then you get into this kind of territory.

Mull: Yeah.

Warzel: One of the true fears that a lot of people have, that I think is very well-founded if we are actually entering a new epoch or whatever you wanna call it of life dominated by artificial intelligences, is that there’s going to be this stratification, this class stratification, where it’s like, “Congratulations, you will be able to see an AI doctor anywhere, as soon as you want and it’s going to do like a fairly reasonably good job of diagnosing you with whatever,” right? It’s going to be a really competent thing.

Meanwhile, the people who have the means and the power and the money, they’re going to have the in-person doctor experience, right? They’re going to actually have the person lay hands on them and look at them and talk to them and ask them those questions and use that expertise and have the bedside manner and all those things. Whereas the rest of us are going to get …

Mull: Doc bot.

Warzel: Doc bot. Exactly, and so I think that that’s a very real thing that goes just beyond even the experience of online consumption to buy products or to get information. It goes to like the very heart of very important services, right?

I would love to not trust my taxes to an AI accountant no matter how good it gets. Or have an AI financial adviser who is responding faster than any other human could to algorithmic triggers in the stock market to buy, sell, and trade. But also, a great thing about a financial adviser who’s not responding instantaneously to algorithmic shocks is they have … they have the ability to be patient, to wait, to use their creativity or their knowledge, all these things. I think that’s the scariest part about the era, is that it does take so much humanity out of what we’re headed toward.

And the human interaction becomes even more and more of a luxury. The fanciest technology billionaires don’t let their kids have screen time because they’re terrified of what the screens are doing to them. That kind of thing repeats, and with the AI it’s weird and possibly quite consequential.

Mull: Yeah, nobody should be going to an AI doctor that’s not widely used by billionaires themselves. I’ll let the VCs go first.

Charlie, thank you so much. This was fascinating and unsettling. I appreciate you walking me through all this.

Warzel: “Fascinating and unsettling” is my wheelhouse. Thank you for having me. I appreciate it.

Mull: This episode of Radio Atlantic was produced by A.C. Valdez, Theo Balcomb, and Kevin Townsend. Claudine Ebeid is the executive producer of audio at The Atlantic. Thanks for listening.

Radio Atlantic: This Is Not Your Parents’ Cold War

During the Cold War, NATO had nightmares of hundreds of thousands of Moscow’s troops pouring across international borders and igniting a major ground war with a democracy in Europe. Western governments feared that such a move by the Kremlin would lead to escalation—first to a world war and perhaps even to a nuclear conflict.

That was then; this is now.

Russia’s invasion of Ukraine is nearly a year old, and the Ukrainians are holding on. The Russians, so far, not only have been pushed back, but are taking immense casualties and material losses. For many Americans, the war is now just another conflict in the news. Do we need to worry about the nuclear threat of Putin’s war in Europe the way we worried about such things three decades ago?

Our staff writer Tom Nichols, an expert on nuclear weapons and the Cold War, counsels Americans not to be obsessed with nuclear escalation, but to be aware of the possibilities for accidents and miscalculations. You can hear his thoughts here:

The following is a transcript of the episode:

Tom Nichols: It’s been a year since the Russians invaded Ukraine and launched the biggest conventional war in Europe since the Nazis. One of the things that I think we’ve all worried about in that time is the underlying problem of nuclear weapons.

This is a nuclear-armed power at war with hundreds of thousands of people in the middle of Europe. This is the nightmare that American foreign policy has dreaded since the beginning of the nuclear age.

And I think people have kind of put it out of their mind, how potentially dangerous this conflict is, which is understandable, but also, I think, takes us away from thinking about something that is really the most important foreign problem in the world today.

During the Cold War, we would’ve thought about that every day, but these days, people just don’t think about it, and I think they should.

My name is Tom Nichols. I’m a staff writer at The Atlantic. And I’ve spent a lot of years thinking about nuclear weapons and nuclear war. For 25 years, I was a professor of national-security affairs at Naval War College.

For this episode of Radio Atlantic, I want to talk about nuclear weapons and what I think we should have learned from the history of the Cold War about how to think about this conflict today.

I was aware of nuclear weapons at a pretty young age because my hometown, Chicopee, Massachusetts, was home to a giant nuclear-bomber base, Strategic Air Command’s East Coast headquarters, which had the big B-52s that would fly missions with nuclear weapons directly to the Soviet Union.

I had a classic childhood of air-raid sirens, and hiding in the basement, and going under the desks, and doing all of that stuff. My high-school biology teacher had a grim sense of humor and told us, you know, because of the Air Force base, we were slated for instant destruction. He said, Yeah, if anything ever happens, we’re gone. We’re gone in seven or eight minutes. So I guess the idea of nuclear war and nuclear weapons was a little more present in my life at an earlier age than for a lot of other kids.

It’s been a long time since anyone’s really had to worry about global nuclear war. It’s been over 30 years since the fall of the Berlin Wall. I think people who lived through the Cold War were more than happy to forget about it. I know I am glad to have it far in the past. And I think younger people who didn’t experience it have a hard time understanding what it was all about, and what that fear was about, because it’s part of ancient history now.

But I think people really need to understand that Cold War history to understand what’s going on today, and how decision makers in Washington and in Europe, and even in Moscow, are playing out this war—because many of these weapons are still right where we left them.

We have fewer of them, but we still have thousands of these weapons, many of them on a very short trigger. We could go from the beginning of this podcast to the end of the world, that short of [a] time. And it’s easy to forget that. During the Cold War, we were constantly aware of it, because it was the central influence on our foreign policy. But it’s important for us to look back at the history of the Cold War because we survived a long and very tense struggle with a nuclear-armed opponent. Now, some of that was through good and sensible policy. And some of it was just through dumb luck.

Of course, the first big crisis that Americans really faced where they had to think about the existential threat of nuclear weapons was the Cuban missile crisis, in October of 1962.

I was barely 2 years old. But living next to this big, plump nuclear target in Massachusetts, we actually knew people in my hometown who built fallout shelters. But we got through the Cuban missile crisis, in part because President Kennedy and Soviet Premier Nikita Khrushchev realized what was at stake.

The gamble to put missiles in Cuba had failed, and—as Khrushchev put it in one of his messages—we had to stop pulling on the ends of the rope and tightening the knot of war. But we also got incredibly lucky.

There was a moment aboard a Soviet submarine where the sub commander thought they were under attack. And he wanted to use nuclear-tipped torpedoes to take out the American fleet, which would’ve triggered a holocaust.

I mean, it would’ve been an incredible amount of devastation on the world. Tens, hundreds of millions of people dead. And, um, fortunately a senior commander who had to consent to the captain’s idea vetoed the whole thing. He said, I don’t think that’s what’s happening. I don’t think they’re trying to sink us, and I do not consent. And so by this one lucky break with this one Soviet officer, we averted the end of the world. I mean, we averted utter catastrophe.

After the Cuban missile crisis, people are now even more aware of this existential threat of nuclear weapons and it starts cropping up everywhere, especially in our pop culture. I mean, they were always there in the ’50s; there were movies about the communist threat and attacks on America. But after the Cuban missile crisis, that’s when you start getting movies like Dr. Strangelove and Fail Safe.

Both were about an accidental nuclear war, which becomes a theme for most of the Cold War. In Dr. Strangelove, an American general goes nuts and orders an attack on Russia. And in Fail Safe, a piece of machinery goes bad and the same thing happens. And I think this reflected this fear that we now had to live with, this constant threat of something that we and the Soviets didn’t even want to do, but could happen anyway.

Even in the James Bond movies, which were supposed to be kind of campy and fun, nuclear weapons were often the source of danger. You know, bad guys were stealing them; people were trying to track our nuclear submarines. Throughout the ’60s, the ’70s, and the ’80s, nuclear weapons really become just kind of soaked into our popular culture.

We all know the Cuban missile crisis because it’s just part of our common knowledge about the world, even for people that didn’t live through it. I think we don’t realize how dangerous other times were. I always think of 1983 as the year we almost didn’t make it.

1983 was an incredibly tense year. President Ronald Reagan began the year calling the Soviet Union an “evil empire.” And announced that the United States would start pouring billions of dollars into an effort to defend against Soviet missiles, including space-based defenses, which the Soviets found incredibly threatening.

The relationship between the United States and the Soviet Union had just completely broken down. Really, by the fall of 1983, it felt like war was inevitable. It certainly felt to me like war was inevitable. There was kind of that smell of gunpowder in the air. We were all pretty scared. I was pretty scared. I was a graduate student at that point. I was 23 years old, and I was certain that this war, this cataclysmic war, was going to happen not only in my lifetime, but probably before I was 30 years old.

And then a lot of things happened in 1983 that elevated the level of tension between the United States and the Soviet Union to extraordinary levels. I would say really dangerous levels. The Soviets did their best to prove they were an evil empire by shooting down a fully loaded civilian airliner, killing 269 people. Just weeks after the shoot-down of the Korean airliner, Soviet Air Defenses got an erroneous report of an American missile launch against them. And this is another one of those cases where we were just lucky. We were just fortunate.

And in this case, it was a Soviet Air Defense officer, a lieutenant colonel, who saw this warning that the Americans had launched five missiles. And he said, You know, nobody starts World War III with five missiles. That seems wrong.

And he said, I just, I think the system—which still had some bugs—I just don’t think the system’s right. We’re gonna wait that out. We’re gonna ignore that. He was actually later reprimanded.

It was almost like he was reprimanded and congratulated at the same time. Because if he had called Moscow and said, Look, I’m doing my duty. I’m reporting Soviet Air Defenses have seen American birds in the air. They’re coming at us, and over to you, Kremlin. From there, a lot of bad decisions could have cascaded into World War III, especially after a year where we had been in such amazingly high conflict with each other.

Once again, just as after the Cuban missile crisis, the increase in tension in the 1980s really comes through in the popular culture. Music, movies, TV puts this sense of threat into the minds of ordinary Americans in a way that we just don’t have now. So people are going to the movies and they’re seeing movies like WarGames, once again about an accidental nuclear war. They’re seeing movies like Red Dawn, about a very intentional war by the Soviet Union against the United States.

The Soviets thought that Red Dawn was actually part of Reagan’s attempt to use Hollywood to prepare Americans for World War III. In music, Ronald Reagan as a character made appearances in videos by Genesis or by Men at Work. That November, the biggest television event in history was The Day After, which was a cinematic representation of World War III.

I mean, it was everywhere. By 1983, ’84, we were soaked in this fear of World War III, nuclear war, and Armageddon, no matter where you looked. I remember in the fall of 1983 going to see the new James Bond movie, one of the last Roger Moore movies, called Octopussy. And the whole plot amazed me because, of course, I was studying this stuff at the time. I was studying NATO and nuclear weapons.

And here’s this opening scene where a mad Soviet general says, If only we can convince the West to give up its nuclear weapons, we can finally invade and take over the world.

I saw all of these films as either a college student or a young graduate student, and again, it was just kind of woven into my life. Well, of course, this movie is about nuclear war. Of course, this movie is about a Soviet invasion. Of course, this movie is about, you know, the end of the world, because it was always there. It was always in the background. But after the end of the Cold War, that remarkable amount of pop-culture knowledge and just general cultural awareness sort of fades away.

I think one reason that people today don’t look back at the Cold War with the same sense of threat is that it all ended so quickly. We went from [these] terrifying year[s] of 1983, 1984. And then suddenly Gorbachev comes in; Reagan reaches out to him; Gorbachev reaches back. In 1985, they jointly issue a statement that, to this day, is still considered official policy by the Russian Federation and by the United States of America. They jointly declare that a nuclear war cannot be won and must never be fought.

And all of a sudden, by the summer of 1985, 1986, it’s just over, and, like, 40 years of tension just came to an end in the space of 20, 24 months. Something I just didn’t think I would see in my lifetime. And I think that’s really created a false sense of security in later generations.

After the Cold War, in the ’90s we have a Russia that’s basically friendly to the United States but nuclear weapons are still a danger. For example, in 1995, Norway launched a scientific satellite on top of a missile—I think they were gonna study the northern lights—and the scientists gave everybody notice, you know, We’re gonna be launching this satellite. You’re gonna see a missile launch from Norway.

Somebody in Russia just didn’t get the message, and the Russian defense people came to President Boris Yeltsin and they said, This might be a NATO attack. And they gave him the option to activate and launch Russian nuclear weapons. Yeltsin conferred with his people, and fortunately—because our relations were good, and because Boris Yeltsin and Bill Clinton had a good relationship, and because tensions were low in the world—Yeltsin says, Yeah, okay. I don’t buy that. I’m sure it’s nothing.

But imagine again, if that had been somebody else.

And that brings us to today. The first thing to understand is: We are in a better place than we were during the Cold War in many ways. During the Cold War, we had tens of thousands of weapons pointed at each other. Now by treaty, the United States and the Russian Federation each have about 1,500 nuclear weapons deployed and ready to go. Now, that’s a lot of nuclear weapons, but 1,500 is a lot better than 30,000 or 40,000.

Nonetheless, we are dealing with a much more dangerous Russian regime with this mafia state led by Vladimir Putin.

Putin is a mafia boss. There is no one to stop him from doing whatever he wants. And he has really convinced himself that he is some kind of great world historical figure who is going to reestablish this Christian Slavic empire throughout the former Soviet Union and remnants of the old Russian empire. And that makes him uniquely dangerous.

People might wonder why Putin is even bothering with nuclear threats, because we’ve always thought of Russia as this giant conventional power because that’s the legacy of the Cold War. We were outnumbered. NATO at the time was only 16 countries. We were totally outnumbered by the Soviets and the Warsaw Pact in everything—men, tanks, artillery—and of course, the only way we could have repulsed an attack by the Soviet Union into Europe would’ve been to use nuclear weapons.

I know earlier I mentioned the movie Octopussy. We’ve come a long way from the days when that mad Russian general could say, If only we got rid of nuclear weapons and NATO’s nuclear weapons, we could roll our tanks from Czechoslovakia to Poland through Germany and on into France.

What people need to understand is that Russia is now the weaker conventional power. The Russians are now the ones saying, Listen, if things go really badly for us and we’re losing, we reserve the right to use nuclear weapons. The difference between Russia now and NATO then is: NATO was threatening these nuclear weapons if they were invaded and they were being just rolled over by Soviet tanks on their way to the English Channel. The Russians today are saying, We started this war, and if it goes badly for us, we reserve the right to use nuclear weapons to get ourselves out of a jam.

This conventional weakness is actually what makes them more dangerous, because they’re now continually being humiliated in the field. And a country that had gotten by on convincing people that it was a great conventional power, that it had a lot of conventional capability, is now being revealed as a hollow power. It can’t even defeat a country a third of its own size.

And so when they’re running out of options, you can understand at that point where Putin says, Well, the only way to scramble the deck and to get a do-over here is to use some small nuclear weapon in that area to kind of sober everybody up and shock them into coming to the table or giving me what I want.

Now, I think that would be incredibly stupid. And I think a lot of people around the world, including China and other countries, have told Putin that would be a really bad idea. But I think one thing we’ve learned from this war is that Putin is a really lousy strategist who takes dumb chances because he’s just not very competent.

And that comes back to the Cold War lesson—that you don’t worry about someone starting World War III as much as you worry about bumbling into World War III because of a bunch of really dumb decisions by people who thought they were doing something smart and didn’t understand that they were actually doing something really dangerous.

So where does this leave us? This major war is raging through the middle of Europe, the scenario that we always dreaded during the Cold War; thousands and thousands of Moscow’s troops flooding across borders. What’s the right way to think about this? Perhaps the most important thing to understand is that this really is a war to defend democracy against an aggressive, authoritarian imperial state.

The front line of the fight for civilization, really, is in Ukraine now. If Ukraine loses this war, the world will be a very different place. That’s what makes it imperative that Americans think about this problem. I think it’s imperative to support Ukraine in this fight, but we should do that with a prudent understanding of real risks that haven’t gone away.

And so I think the Cold War provides some really good guidance here, which is to be engaged, to be aware, but not to be panicked. Not to become consumed by this fear every day, because that becomes paralyzing, that becomes debilitating. It’s bad for you as a person. And it’s bad for democracies’ ability to make decisions—because then you simply don’t make any decisions at all, out of fear.

I think it’s important not to fall victim to Cold War amnesia and forget everything we learned. But I also don’t think we should become consumed by a new Cold War paranoia where we live every day thinking that we’re on the edge of Armageddon.
