
Elon Musk Really Broke Twitter This Time

Twitter may have just had its worst weekend ever, technically speaking. In response to a series of server emergencies, Elon Musk, the Twitter owner and self-professed free-speech “absolutist,” decided to limit how many tweets people can view, and how they can view them. This was not your average fail whale. It was the social-media equivalent of Costco implementing a 10-items-or-fewer rule, or a 24-hour diner closing at 7 p.m.—a baffling, antithetical business decision for a platform that depends on engaging users (and showing them ads) as much as possible. It costs $44 billion to buy yourself a digital town square. Breaking it, however, is free.

First, Twitter set a policy requiring that web users log in to view tweets—immediately limiting the potential audience for any given post to people who have Twitter—and later, Musk announced limits to how many tweets users can consume in a day, purportedly to counter “extreme levels of data scraping & system manipulation.” Although these measures will supposedly be reversed, as others have been during Musk’s tenure, they amount to a sledgehammering of a platform that’s been quietly wasting away for months: Twitter is now literally unusable if you don’t have an account, and sharply limited if you do have one and use it heavily. It is the clearest sign yet that Musk does not have his platform under control—that he cannot deliver a consistently functional experience for what was once one of the most vibrant and important social networks on the planet.

The extreme, even illogical nature of these interventions led to some speculation: Is Twitter’s so-called rate limit a technical mistake that’s being passed off as an executive decision? Or is it the opposite: a daring gambit of 13-dimensional chess, whereby Musk is trying to plunge the company into bankruptcy and restructuring? The situation has made conspiracy theorists out of onlookers who can’t help but wonder whether Musk’s plan has been to slowly and steadily destroy the platform all along.

Such theories are compelling, but they all share a flaw: they presuppose both a rational actor and a plan. You may not find either here. I’ve reported on Musk for the past five years, speaking with dozens of employees in the process to try to understand his rationales. The takeaway is clear: His motivations are frequently not what they seem, and chaos is a given. His money and power command attention and his actions have far-reaching consequences, but his behavior rarely befits his station.

Of course, many of his acolytes—especially those in Silicon Valley—have tended to believe that he has everything in hand. “It’s remarkable how many people who’ve never run any kind of company think they know how to run a tech company better than someone who’s run Tesla and SpaceX,” the investor Paul Graham tweeted in November, after Musk took over the social network. “In both those companies, people die if the software doesn’t work right. Do you really think he’s not up to managing a social network?” But it has been clear since the moment we got a glimpse into his phone that Musk’s purchase of Twitter was defined by impulse: It appears to have been triggered in part by getting his feelings hurt by the company’s previous CEO. The decision was rash enough that he tried three times to back out of it.

[Read: Twitter’s slow and painful end]

Musk’s management style at the platform has appeared equally unstrategic. After saddling the company with a mountain of debt to complete his acquisition in October, he decided to tweet baseless conspiracy theories and alienate advertisers; days before this incident, the marketing lead in charge of managing Twitter’s brand partnerships had resigned. Musk quickly unbanned Twitter’s most egregious rule breakers; fired most of the employees, including those in charge of technical duties; and bungled the rollout of Twitter’s paid-verification system. Compared with a year earlier, Twitter’s U.S. advertising revenue for the five weeks beginning April 1 was down 59 percent.

Recently, Musk’s public-facing strategy to turn his company around has been to continue tweeting thinly veiled conspiracy theories and sex jokes, cozy up to far-right politicians, hire a CEO who was initially contractually forbidden to negotiate with some of Twitter’s brand partners, and float fighting Mark Zuckerberg in a cage match. To date, Musk’s leadership has degraded the reliability of Twitter’s service, filled the platform with bigots and spam, and alienated many of its power users. But this weekend’s disasters are different. The decision to limit people’s ability to consume content on the platform is the rapid unscheduled disassembly of the never-ending, real-time feed of information that makes Twitter Twitter.

[Read: Elon Musk’s text messages explain everything]

His supporters are confused and, perhaps, starting to feel the cracks of cognitive dissonance. “Surely someone who can figure out how to build spaceships can figure out how to distinguish scrapers from legit users,” Graham—the same one who supported Musk in November—tweeted on Saturday. What reasonable answer could there be for an advertising company to drastically limit the time that potentially hundreds of millions of users can spend on its website? (Maybe this one: On Saturday, outside developers appeared to discover an unfixed bug in Twitter’s web app that was flooding the network’s own servers with self-requests, to the point that the platform couldn’t function—a problem likely compounded by Twitter’s skeleton crew of engineers. When I reached out for clarification, the company auto-responded with an email containing a poop emoji.)
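The failure mode the developers described is a familiar one: a client that retries failed requests instantly and indefinitely ends up hammering the very servers that are struggling. A minimal sketch of that bug class, with a hypothetical endpoint (an illustration of the general pattern, not Twitter's actual code):

```python
import time
import requests

TIMELINE_URL = "https://example.com/api/timeline"  # hypothetical endpoint

def fetch_timeline_naive() -> dict:
    """Retries instantly and forever: each failure spawns another request,
    so a struggling server gets hit harder the worse it fails."""
    while True:
        resp = requests.get(TIMELINE_URL)
        if resp.ok:
            return resp.json()
        # No delay, no retry cap: multiplied across millions of open
        # tabs, the clients effectively flood their own backend.

def fetch_timeline_backoff(max_tries: int = 5) -> dict:
    """The standard fix: exponential backoff with a retry cap."""
    for attempt in range(max_tries):
        resp = requests.get(TIMELINE_URL)
        if resp.ok:
            return resp.json()
        time.sleep(2 ** attempt)  # wait 1, 2, 4, 8, 16 seconds
    raise RuntimeError("giving up after repeated failures")
```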

All the money and trolling can’t hide what’s obvious to anyone who’s been paying attention to his Twitter tenure: Elon Musk is bad at this. His incompetence should unravel his image as a visionary, one whose ambitions extend as far as colonizing Mars. This reputation as a genius, more than his billions, is Musk’s real fortune; it masks the impetuousness he demonstrates so frequently on Twitter. But Musk has spent this currency recklessly. Who in their right mind would explore space with a man who can’t keep a website running?

People Aren’t Falling for AI Trump Photos (Yet)

On Monday, as Americans considered the possibility of a Donald Trump indictment and a presidential perp walk, Eliot Higgins brought the hypothetical to life. Higgins, the founder of Bellingcat, an open-source investigations group, asked the latest version of the generative-AI art tool Midjourney to illustrate the spectacle of a Trump arrest. It pumped out vivid photos of a sea of police officers dragging the 45th president to the ground.

Higgins didn’t stop there. He generated a series of images that became more and more absurd: Donald Trump Jr. and Melania Trump screaming at a throng of arresting officers; Trump weeping in the courtroom, pumping iron with his fellow prisoners, mopping a jailhouse latrine, and eventually breaking out of prison through a sewer on a rainy evening. The story, which Higgins tweeted over the course of two days, ends with Trump crying at a McDonald’s in his orange jumpsuit.

[Embedded tweet with images from Eliot Higgins (@EliotHiggins), March 21, 2023: pic.twitter.com/V6Y8hHnGRN]

All of the tweets are compelling, but only the scene of Trump’s arrest went mega viral, garnering 5.7 million views as of this morning. People immediately started wringing their hands over the possibility of Higgins’s creations duping unsuspecting audiences into thinking that Trump had actually been arrested, or leading to the downfall of our legal system. “Many people have copied Eliot’s AI generated images of Trump getting arrested and some are sharing them as real. Others have generated lots of similar images and new ones keep appearing. Please stop this,” the popular debunking account HoaxEye tweeted. “In 10 years the legal system will not accept any form of first or second hand evidence that isn’t on scene at the time of arrest,” an anonymous Twitter user fretted. “The only trusted word will be of the arresting officer and the polygraph. the legal system will be stifled by forgery/falsified evidence.”

This fear, though understandable, draws on an imagined dystopian future that’s rooted in the concerns of the past rather than the realities of our strange present. People seem eager to ascribe to AI imagery a persuasion power it hasn’t yet demonstrated. Rather than imagine emergent ways that these tools will be disruptive, alarmists draw on misinformation tropes from the earlier days of the social web, when lo-fi hoaxes routinely went viral.

These concerns do not match the reality of the broad response to Higgins’s thread. Some people shared the images simply because they thought they were funny. Others remarked at how much better AI-art tools have gotten in such a short amount of time. As the writer Parker Molloy noted, the first version of Midjourney, which was initially tested in March 2022, could barely render famous faces and was full of surrealist glitches. Version five, which Higgins used, launched in beta just last week and still has trouble with hands and small details, but it was able to re-create a near-photorealistic imagining of the arrest in the style of a press photo.

[Read: The Trump AI deepfakes had an unintended side effect]

But despite those technological leaps, very few people seem to genuinely believe that Higgins’s AI images are real. That may be a consequence, partially, of the sheer volume of fake AI Trump-arrest images that filled Twitter this week. If you examine the quote tweets and comments on these images, what emerges is not a gullible reaction but a skeptical one. In one instance of a junk account trying to pass off the photos as real, a random Twitter user responded by pointing out the image’s flaws and inconsistencies: “Legs, fingers, uniforms, any other intricate details when you look closely. I’d say you people have literal rocks for brains but I’d be insulting the rocks.”

I asked Higgins, who is himself a skilled online investigator and debunker, what he makes of the response. “It seems most people mad about it are people who think other people might think they’re real,” he told me over email. (Higgins also said that his Midjourney access has been revoked, and BuzzFeed News reported that users are no longer able to prompt the art tool using the word arrested. Midjourney did not immediately respond to a request for comment.)

The attitude Higgins described tracks with research published last month by the academic journal New Media & Society, which found that “the strongest, and most reliable, predictor of perceived danger of misinformation was the perception that others are more vulnerable to misinformation than the self”—a phenomenon called the third-person effect. The study found that participants who reported being more worried about misinformation were also more likely to share alarmist narratives and warnings about misinformation. A previous study on the third-person effect also found that increased social-media engagement tends to heighten both the third-person effect and, indirectly, people’s confidence in their own knowledge of a subject.

The Trump-AI-art news cycle seems like the perfect illustration of these phenomena. It is a true pseudo event: A fake image enters the world; concerned people amplify it and decry it as dangerous to a perceived vulnerable audience that may or may not exist; news stories echo these concerns.

There are plenty of real reasons to be worried about the rise of generative AI, which can reliably churn out convincing-sounding text that’s actually riddled with factual errors. AI art, video, and sound tools all have the potential to create basically any mix of “deepfaked” media you can imagine. And these tools are getting better at producing realistic outputs at a near exponential rate. It’s entirely possible that the fears of future reality-blurring misinformation campaigns or impersonation may prove prophetic.

But the Trump-arrest photos also reveal how conversations about the potential threats of synthetic media tend to draw on generalized fears that news consumers can and will fall for anything—tropes that have persisted even as we’ve become used to living in an untrustworthy social-media environment. These tropes aren’t all well founded: Not everyone was exposed to Russian trolls, not all Americans live in filter bubbles, and, as researchers have shown, not all fake-news sites are that influential. There are countless examples of awful, preposterous, and popular conspiracy theories thriving online, but they tend to be less lazy, dashed-off lies than intricate examples of world building. They stem from deep-rooted ideologies or a consensus that forms in one’s political or social circles. When it comes to nascent technologies such as generative AI and large language models, it’s possible that the real concern will be an entirely new set of bad behaviors we haven’t encountered yet.

[Read: The prophecies of Q]

Chris Moran, the head of editorial innovation at The Guardian, offered one such example. Last week, his team was contacted by a researcher asking why the paper had deleted a specific article from its archive. Moran and his team checked and discovered that the article in question hadn’t been deleted, because it had never been written or published: ChatGPT had hallucinated the article entirely. (Moran declined to share any details about the article. My colleague Ian Bogost encountered something similar recently when he asked ChatGPT to find an Atlantic story about tacos: It fabricated the headline “The Enduring Appeal of Tacos,” supposedly by Amanda Mull.)  

The situation was quickly resolved but left Moran unsettled. “Imagine this in an area prone to conspiracy theories,” he later tweeted. “These hallucinations are common. We may see a lot of conspiracies fuelled by ‘deleted’ articles that were never written.”

Moran’s example—of AIs hallucinating, and accidentally birthing conspiracy theories about cover-ups—feels like a plausible future issue, because this is precisely how sticky conspiracy theories work. The strongest conspiracies tend to allege that an event happened. They offer little proof, citing cover-ups from shadowy or powerful people and shifting the burden of proof to the debunkers. No amount of debunking will ever suffice, because it’s often impossible to prove a negative. But the Trump-arrest images are the inverse. The event in question hasn’t happened, and if it had, coverage would blanket the internet; either way, the narrative in the images is instantly disprovable. A small minority of extremely incurious and uninformed consumers might be duped by some AI photos, but chances are that even they will soon learn that the former president has not (yet) been tackled to the ground by a legion of police.

Even though Higgins was allegedly booted from Midjourney for generating the images, one way to look at his experiment is as an exercise in red-teaming: the practice of using a service adversarially in order to imagine and test how it might be exploited. “It’s been educational for people at least,” Higgins told me. “Hopefully make them think twice when they see a photo of a 3-legged Donald Trump being arrested by police with nonsense written on their hats.”

AI tools may indeed complicate and blur our already fractured sense of reality, but we would do well to have a sense of humility about how that might happen. It’s possible that, after decades of living online and across social platforms, many people may be resilient against the manipulations of synthetic media. Perhaps there is a risk that’s yet to fully take shape: It may be more effective to manipulate an existing image or doctor small details rather than invent something wholesale. If, say, Trump were to be arrested out of the view of cameras, well-crafted AI-generated images claiming to be leaked law-enforcement photos may very well dupe even savvy news consumers.

Things may also get much weirder than we can imagine. Yesterday, Trump shared an AI-generated image of himself praying—a minor fabrication with some political aim that’s hard to make sense of, and that hints at the subtler ways that synthetic media might worm its way into our lives and make the process of information gathering even more confusing, exhausting, and strange.

[Image: One of many AI-generated images circulating on Twitter that depict a fabricated scene of former President Donald Trump being arrested]

The Vindication of Ask Jeeves

It was a simpler time. A friend introduced us, pulling up a static yellow webpage using a shaky dial-up modem. A man stood forth, dressed in a dapper black pinstriped suit with a red-accented tie. He held one hand out, as if carrying an imaginary waiter’s tray. He looked regal and confident and eminently at my service. “Have a Question?” he beckoned. “Just type it in and click Ask!” And ask, I did. Over and over.

With his steady hand, Jeeves helped me make sense of the tangled mess of the early, pre-Google internet. He wasn’t perfect—plenty of context got lost between my inquiries and his responses. Still, my 11-year-old brain always delighted in the idea of a well-coiffed man chauffeuring me down the information superhighway. But things changed. Google arrived, with its clean design and almost magic ability to deliver exactly the answers I wanted. Jeeves and I grew apart. Eventually, in 2006, Ask Jeeves disappeared from the internet altogether and was replaced with the more generic Ask.com.

Many years later, it seems I owe Jeeves an apology: He had the right idea all along. Thanks to advances in artificial intelligence and the stunning popularity of generative-text tools such as ChatGPT, today’s search-engine giants are making huge bets on AI search chatbots. In February, Microsoft revealed its Bing Chatbot, which has thrilled and frightened early users for its ability to scour the internet and answer questions (not always correctly) with convincingly human-sounding language. The same week, Google demoed Bard, the company’s forthcoming attempt at an AI-powered chat-search product. But for all the hype, when I stare at these new chatbots, I can’t help but see the faint reflection of my former besuited internet manservant. In a sense, Bing and Bard are finishing what Ask Jeeves started. What people want when they ask a question is for an all-knowing, machine-powered guide to confidently present them with the right answer in plain language, just as a reliable friend would.

[Read: AI search is a disaster]

With this in mind, I decided to go back to the source. More than a decade after parting ways, I found myself on the phone with one of the men behind the machine, getting as close to Asking Jeeves as is humanly possible. These days, Garrett Gruener, Ask Jeeves’s co-creator, is a venture capitalist in the Bay Area. He and his former business partner David Warthen eventually sold Ask Jeeves to Barry Diller and IAC for just under $2 billion. Still, I wondered if Gruener had been unsettled by Jeeves’s demise. Did he, like me, see the new chatbots as the final form of his original idea? Did he feel vindicated or haunted by the fact that his creation may have simply been born far too early?

The original conception for Jeeves, Gruener told me, was remarkably similar to what Microsoft and Google are trying to build today. As a student at UC San Diego in the mid-1970s, Gruener—a sci-fi aficionado—got an early glimpse of ARPANET, the pre-browser predecessor to the commercial internet, and fell in love. Just over a decade later, as the web grew and the beginnings of the internet came into view, Gruener realized that people would need a way to find things in the morass of semiconnected servers and networks. “It became clear that the web needed search but that mere mortals without computer-science degrees needed something easy, even conversational,” he said. Inspired by Eliza, the famous chatbot designed by MIT’s Joseph Weizenbaum, Gruener dreamed of a search engine that could converse with people using natural-language processing. Unfortunately, the technology wasn’t sophisticated enough for Gruener to create his ideal conversational search bot.

So Gruener and Warthen tried a work-around. Their code allowed a user to write a statement in English, which was then matched to a preprogrammed vector, which Gruener explained to me as “a canonical snapshot of answers to what the engine thought you were trying to say.” Essentially, they taught the machine to recognize certain words and provide really broad categorical answers. “If you were looking for population stats for a country, the query would see all your words and associated variables and go, Well, this Boolean search seems close, so it’s probably this.” Jeeves would provide the answer, and then you could clarify whether it worked or not.
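As a rough illustration of what Gruener describes (a toy reconstruction, not actual Ask Jeeves code): score a free-form question against a set of preprogrammed canonical question templates by word overlap, and return the canned answer of the closest match.

```python
# Toy reconstruction of the Ask Jeeves matching idea: compare the
# user's words against "canonical" question templates and return
# the broad, preprogrammed answer of the best-scoring match.

CANONICAL = {
    frozenset({"population", "country", "people"}): "Country population statistics",
    frozenset({"capital", "city", "country"}): "World capitals reference",
    frozenset({"weather", "forecast", "temperature"}): "Weather forecasts",
}

def ask_jeeves(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    best_answer, best_score = None, 0
    for template_words, answer in CANONICAL.items():
        score = len(words & template_words)  # crude word-overlap score
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer or "Could you rephrase the question?"

print(ask_jeeves("What is the population of France?"))  # -> population stats
```

The user's clarifying feedback ("did this answer work?") then tells the engine whether its canonical guess was right, which is the loop Gruener describes.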

“We tried to discern what people were trying to say in search, but without actually doing the natural-recognition part of it,” Gruener said. After some brainstorming, they realized that they were essentially building a butler. One of Gruener’s friends mocked up a drawing of the friendly servant, and Jeeves was born.

Pre-Google, Ask Jeeves exploded in popularity, largely because it allowed people to talk with their search engine like a person. Within just two years, the site was handling more than 1 million queries a day. A massive Jeeves balloon floated down Central Park West during Macy’s 1999 Thanksgiving parade. But not long after the butler achieved buoyancy, the site started to lose ground in the search wars. Google’s web-crawling superiority led to hard times for Ask Jeeves. “None of us were very concerned about monetization in the beginning,” Gruener told me. “Everyone in search early on realized, if you got this right, you’d essentially be in the position of being the oracle. If you could be the company to go to in order to ask questions online, you’re going to be paid handsomely.”

[Read: The open secret of Google Search]

Gruener isn’t bitter about losing out to Google. “If anything, I’m really proud of our Jeeves,” he told me. Listening to Gruener explain the history, it’s not hard to see why. In the mid-2000s, Google began to pivot search away from offering only 10 blue links to images, news, maps, and shopping. Eventually, the company began to fulfill parts of the Jeeves promise of answering questions with answer boxes. One way to look at the evolution of big search engines in the 21st century is that all companies are trying their best to create their own intuitive search butlers. Gruener told me that Ask Jeeves’s master plan had two phases, though the company was sold before it could tackle the second. Gruener had hoped that, eventually, Jeeves could act as a digital concierge for users. He’d hoped to employ the same vector technology to get people to ask questions and allow Jeeves to make educated guesses and help users complete all kinds of tasks. “If you look at Amazon’s Alexa, they’re essentially using the same approach we designed for Jeeves, just with voice,” Gruener said. Yesterday’s butler has been rebranded as today’s virtual assistant, and the technology is ubiquitous in many of our home devices and phones. “We were right for the consumer back then, and maybe we’d be right now. But at some point the consumer evolved,” he said.

I’ve been fixated on what might’ve been if Gruener’s vision had come about now. We might all be Jeevesing about the internet for answers to our mundane questions. Perhaps our Jeevesmail inboxes would be overflowing and we’d be getting turn-by-turn directions from an Oxford-educated man with a stiff English accent. Perhaps we’d all be much better off.

Gruener told me about an encounter he’d had during the search wars with one of Google’s founders at a TED conference (he wouldn’t specify which of the two). “I told him that we’re going to learn an enormous amount about the people who are using our platforms, especially as they become more conversational. And I said that it was a potentially dangerous position,” he said. “But he didn’t seem very receptive to my concerns.”

Near the end of our call, I offered an apology for deserting Jeeves like everyone else did. Gruener just laughed. “I find this future fascinating and, if I’m honest, a little validating,” he said. “It’s like, ultimately, as the tech has come around, the big guys have come around to what we were trying to do.”

Radio Atlantic: What AI Means for Search

With Google and Microsoft releasing new AI tools, it feels like the future of artificial intelligence has arrived. But how transformative are products like ChatGPT? Should we be worried about their impact? Are they a new Skynet or just a new Clippy?

Staff writers Charlie Warzel and Amanda Mull discuss.

Subscribe here: Apple Podcasts | Spotify | Stitcher | Google Podcasts | Pocket Casts

The following is a transcript of the episode:

Charlie Warzel: It feels like we’re moving toward this search paradigm where search is going to get a little more fun, a little more entertaining, and, possibly, not any more accurate.

And instead, the debates that we’re going to have about who gets to decide what are being entrusted to huge complicated machine-learning models that we have no idea how they’re trained or what they’re gonna do. It kind of sounds like a nightmare.

Amanda Mull: Welcome to Radio Atlantic. I’m staff writer Amanda Mull and I’m here with fellow staff writer Charlie Warzel, who writes about technology for us. Lately, Charlie’s been considering the nascent AI arms race between Google and Microsoft and why, as he put it in a recent piece, Clippy has touched the face of God. Charlie, it’s great to have you.

Warzel: Happy to be here. Happy to talk about Clippy.

Mull: Great. Great. So, Charlie, lately you’ve been writing about and reporting on AI now that Bing and Google have mutually decided to make all of us think about AI. But first I want to talk about finding a bookcase.

Warzel: Let’s do it. Let’s let Bing be our search butler here.

Mull: What I would like to find out is if the Ikea Kallax bookcase—a classic—can fit in the back of a 2002 Volkswagen Jetta.

Warzel: Okay. Alright, it’s searching, it’s still kind of creaking and groaning under the weight of your search. Okay.

Hello. This is Bing. The Ikea Kallax bookshelf has a height of 30 inches, or 77 centimeters; a width of 58 inches, or 147 centimeters; and a shelf depth of 15 inches, or 39 centimeters. The 2002 Volkswagen Jetta has a trunk space 47 inches wide, 42 inches deep, and 22 inches tall. Based on these dimensions, it seems unlikely that the bookshelf can fit inside the trunk of the car. You may need to disassemble the bookshelf or use another vehicle to transport it.

Mull: It seems like we need to ask Bing to use the packaging dimensions and not the assembled dimensions.

Warzel: Ooh. Okay. It’s searching …

Hello, this is Bing, I apologize for my mistake. The Kallax packaging dimensions are not clearly stated on the IKEA website. However, based on some online reviews and forums, it seems the package size is about 60 inches long, 16 inches wide and six inches thick. The weight of the package is about 70 pounds. I hope this helps you plan your transportation better.

Well, I’m gonna say, “So, can it fit in the back of my 2002 Jetta?” Okay.

It might be possible to fit the package in the back of your 2002 Jetta if you fold down the rear seats and slide it in diagonally. However, it might be a tight fit and you should measure the space more carefully before attempting it. Alternatively, you could use a roof rack or trailer to transport the package more safely and easily.

Based off of me reading that back to you, and that being, presumably, your first AI-chatbot experience: Is that underwhelming? Is that overwhelming? Is that whelming?

Mull: Honestly, it did work a little bit better than I expected for that kind of query.

Warzel: But I also didn’t feel like I was living in the future, right? Like I was kind of just like, Okay, cool.

Mull: Right. But I didn’t expect it to quite so efficiently respond to the query. Honestly. Just straight up.

Warzel: Yeah. So you know, points for Bing.

Mull: Yeah, it’s strange, after hearing so much hype about AI and about the promise of AI search and about how this could change the internet and change our experience with the internet, to finally go through the process of using it and end up like, Oh, that was okay.

Warzel: Yeah.
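Incidentally, Bing's "slide it in diagonally" suggestion holds up to a quick geometry check using the numbers it quoted (our arithmetic, not part of the episode):

```python
import math

package_length = 60                 # inches, Bing's package estimate
trunk_width, trunk_depth = 47, 42   # inches, Bing's Jetta figures

# Longest straight line across the trunk floor:
diagonal = math.hypot(trunk_width, trunk_depth)
print(f"{diagonal:.1f} in")  # ~63.0 in, just over the 60-in package,
                             # so a diagonal fit is geometrically plausible
```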

Mull: Can you tell me in basic, straightforward terms, what it is that these services are doing?

Warzel: So it’s kind of a couple of things that are happening at once that give it this feeling of the smart robot who’s returning your every query. You have the GPT-3, GPT-4, whatever style text model, which is completing based off of all of this information it’s absorbed, completing what’s coming next. Then layered on top of that, you have all that Bing can do or that Google can do with its chatbot in search in general, and it’s kind of mashing those two together. So when we say “Can I fit this IKEA bookshelf into my Jetta?” it’s just doing kind of raw Google searches.
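What Warzel describes here, a text model layered on top of live search results, is the pattern engineers call retrieval-augmented generation. A schematic sketch (the helpers are hypothetical; Bing's and Google's internals are not public):

```python
def chat_search(query: str) -> str:
    """Retrieval-augmented generation, in miniature."""
    # 1. Run an ordinary web search for candidate sources.
    docs = web_search(query, top_k=5)            # hypothetical helper

    # 2. Paste the results into the model's prompt as context.
    context = "\n\n".join(doc.snippet for doc in docs)
    prompt = (
        "Answer the question using the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

    # 3. Let the language model complete the prompt into an answer.
    return language_model.complete(prompt)       # hypothetical helper
```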

Mull: Yeah.

Warzel: And instead of even giving you links, we’re just going to give you the definitive answer. And that to me feels a little like, Yes, this is helpful. Are we also sort of missing the bigger picture of what people are actually looking for when they’re trying to search?

Mull: Right. It seems like Microsoft and Google in their AI-search efforts are perhaps trying to provide us with God’s perfect Redditor. This is the guy who knows the answer and it’s an AI chatbot who has looked at all of Reddit and condensed it for you.

Warzel: This is honestly one of my big concerns about AI-enabled search in general, which is that we’ve been doing this since, I don’t know, like 2016. Having these really furious debates over content moderation and who gets to say what. And in the world of search, there are all these different ideas about what websites Google is going to elevate. You know, “Is it going to be third parties that Google hires to evaluate the quality of different links when it comes to financial information, medical information, news, or is it going to be engineers at Google themselves? Is Google going to be the arbiter of truth here?”

That was in an environment with search where it was still, “We’re going to give you a slew of links,” right? And then it’s choose-your-own-adventure. This is a new paradigm that essentially says, “We’re going to give you a canonical, definitive answer.” The machine is going to scroll through and it’s going to talk to you like a human being, it’s going to speak really confidently, and it’s going to give you the information to the best of its ability and as we know, these machines right now are really good at confidently lying or bullshitting.

Mull: Yeah. It seems like this has enormous capacity to sort of obscure what is happening in a way that seems not helpful. What do you mean by B.S.? Can you give us an example?

Warzel: There are so many examples of people typing stuff in. “Write me a book report about Animal Farm,” [or] some book, and it will come back with very confidently stated false facts. Someone described it as—and I really like it—“spicy autocomplete.” Which is just that these models have scanned billions and billions of pieces of information from books to commentary to literally everything and it’s just predicting, based off of that, what should come next. And so it’s not necessarily trained to give you these canonical definitive answers. The confidence and the sophistication with which it gives you an answer leads you to believe that the machine’s telling you the truth when it’s stating something that’s false.
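"Spicy autocomplete" is an apt gloss: stripped down, generation is a loop that asks, token after token, "what plausibly comes next?" A schematic sketch (tokenize, detokenize, and the model's distribution function are hypothetical stand-ins):

```python
import random

def generate(prompt: str, max_tokens: int = 50) -> str:
    """Autocomplete on repeat: sample a next token, append, repeat."""
    tokens = tokenize(prompt)                    # hypothetical helper
    for _ in range(max_tokens):
        probs = next_token_distribution(tokens)  # hypothetical: the model's
                                                 # learned P(next token | text so far)
        choice = random.choices(list(probs), weights=list(probs.values()))[0]
        tokens.append(choice)
        # Nothing in this loop checks facts; it optimizes for plausible
        # text, which is why confident errors slip through.
    return detokenize(tokens)                    # hypothetical helper
```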

Mull: We’re going to take a short break and we’ll be right back.

Mull: Are there any good examples that we’ve seen so far in the extremely early days of these tools that demonstrate what some of the potential pitfalls are?

Warzel: A good early example is that in Google’s demo of its own search chatbot tool, it apparently, in one of its promos, surfaced something about the James Webb Space Telescope that was incorrect.

And Google’s stock price slid by like 8 percent on that day. But then online there was this whole series of people—like astrophysicists—who were debating, “Actually like technically … maybe the Google answer was right.”

And so you kind of got this preview almost of what the future of all this is going to be, which is that, you have the machine confidently stating something and then all these different people arguing about whether or not this was true or false and meanwhile, everyone’s gaining and losing fortunes in the stock market based off of it. It was a very kind of dystopian preview.

Mull: Yeah, yeah, absolutely. And I think that when it comes to shopping, recommendations, confident recommendations, [are] a big existential problem in online commerce. Because we have this vast array of options on the internet and they are too many and too confusing for people to make meaningful sense of them and feel confident in purchasing things, often. So the tone of AI and the confidence with which it says things gives it an opportunity to be especially manipulative in a lot of ways, but with commerce in particular, because often what people go to Google for when they’re trying to figure out if they should buy something is to hear someone confidently tell you, “This will work for you.” It seems like this can be exploited by all the same forces making your Google results now absolutely unusable in a lot of ways.

Warzel: Yeah, I mean, to me what’s really tough to wrap my head around is that the whole promise of these large language models that generative AIs are built off of is that it has read the internet and it has read every great book and it has read every … it’s trained off of more information than you could ever even imagine cramming into a million people’s brains.

And if you start monetizing this, what you’re doing then is you’re, in essence, retraining this thing that is supposedly transformative to say, “Yes, we know that like you are the modern Library of Alexandria. But we would really love it if you put this canine chew toy at the top of the stack,” right?

There’s something that’s so corrupt about that premise that you have the totality of the world’s information at your disposal. And this product is actually the thing that you need to show above everything else because somebody paid for it.

And it’s also, to me we’re adding a whole other level of opacity to this. These models are really hard to understand. And then it’s gonna be very difficult to unravel where the money is coming from, and Google right now, it’s just like, “Hey, this thing this box is green around it and it says paid or it says ad.”

You just, even if you click on it, you kind of know, Oh yeah, that’s how the game works. This could be totally different and sort of insidious and maybe not nefarious, but it’s like the virus of capitalism, infecting the host of the AI or something like that, you know?

Mull: Right, right. When some fundamental structure of the internet—and how regular people use the internet—changes, there’s always so many unintended and unpredictable consequences. Even people who are not particularly interested in tech can look at social media and go, Yeah, the advent of these platforms fundamentally changed the way that a lot of people interact with their friends, interact with politics, interact with information.

It’s hard to imagine a scenario in which this doesn’t have just sort of all kinds of messy, unintended consequences that we probably can’t predict right now. The signal that I think these companies are picking up on is that in order to solve problems, people generally want to interact with other people. You want to be able to walk into a store and ask a question of a salesperson. You want to be able to get someone on the phone to fix a problem for you. You want to ask a knowledgeable individual about something you’re unsure of and benefit from their expertise. It seems like they’re just fundamentally misdiagnosing the actual solutions that people want, which is to have more of their life mediated by interactions with other people than interactions with layers of technology.

Warzel: This is, I mean really and truly always what you butt up against with people who are pushing technological change forward. It is seen as a challenge, right? To create an artificial general intelligence.

That has been the stuff of computer-science dreams for decades and decades and it’s sort of this quest to do this thing that feels almost unthinkable, right? And then of course it’s like, Oh well we need to commercialize it, and then you get into this kind of territory.

Mull: Yeah.

Warzel: One of the true fears that a lot of people have, that I think is very well-founded if we are actually entering a new epoch or whatever you wanna call it of life dominated by artificial intelligences, is that there’s going to be this stratification, this class stratification, where it’s like, “Congratulations, you will be able to see an AI doctor anywhere, as soon as you want and it’s going to do like a fairly reasonably good job of diagnosing you with whatever,” right? It’s going to be a really competent thing.

Meanwhile, the people who have the means and the power and the money, they’re going to have the in-person doctor experience, right? They’re going to actually have the person lay hands on them and look at them and talk to them and ask them those questions and use that expertise and have the bedside manner and all those things. Whereas the rest of us are going to get …

Mull: Doc bot.

Warzel: Doc bot. Exactly, and so I think that that’s a very real thing that goes just beyond even the experience of online consumption to buy products or to get information. It goes to like the very heart of very important services, right?

I would love to not trust my taxes to an AI accountant no matter how good it gets. Or have an AI financial adviser who is responding faster than any other human could to algorithmic triggers in the stock market to buy, sell, and trade. But also, a great thing about a financial adviser who’s not responding instantaneously to algorithmic shocks is they have … they have the ability to be patient, to wait, to use their creativity or their knowledge, all these things. I think that’s the scariest part about the era, is that it does take so much humanity out of what we’re headed toward.

And the human interaction becomes even more and more of a luxury. The fanciest technology billionaires don’t let their kids have screen time because they’re terrified of what the screens are doing to them. That kind of thing repeats, and with the AI it’s weird and possibly quite consequential.

Mull: Yeah, nobody should be going to an AI doctor that’s not widely used by billionaires themselves. I’ll let the VCs go first.

Charlie, thank you so much. This was fascinating and unsettling. I appreciate you walking me through all this.

Warzel: “Fascinating and unsettling” is my wheelhouse. Thank you for having me. I appreciate it.

Mull: This episode of Radio Atlantic was produced by A.C. Valdez, Theo Balcomb, and Kevin Townsend. Claudine Ebeid is the executive producer of audio at The Atlantic. Thanks for listening.

Is This the Week AI Changed Everything?

Welcome to the week of AI one-upmanship. On Tuesday, in a surprise announcement, Microsoft unveiled its plans to bring the technology behind OpenAI’s ChatGPT bot to its search engine, Bing. (Remember Bing? Because Bing remembers your jokes.) According to the company, the new tool will be a paradigm shift in the way that humans search the internet. As one early tester demonstrated, the query Find me tickets to a Beyoncé concert in the United States where I won’t need a jacket at night prompts the AI to estimate what constitutes jacket weather, gather tour dates, and then cross-reference those dates with the average temperature in the locations during the time of the show, all to provide a few-sentence answer. In one example from Microsoft’s presentation, Bing helped a user come up with a travel itinerary and then write messages proposing the trip to family members. Clippy, it appears, has touched the face of God.
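One way to see why that query is a step change: it decomposes into a chain of sub-searches plus a cross-reference, something a single keyword search can't do. A rough reconstruction of the steps (our sketch; Microsoft hasn't published how Bing actually plans queries, and both helpers are hypothetical):

```python
JACKET_THRESHOLD_F = 60   # assumption: colder than this at night means jacket weather

def jacket_free_beyonce_shows() -> list:
    """Decompose the natural-language request into searchable steps."""
    shows = find_tour_dates("Beyoncé", country="US")      # hypothetical helper
    warm_shows = []
    for show in shows:
        temp = average_night_temp(show.city, show.month)  # hypothetical helper
        if temp >= JACKET_THRESHOLD_F:   # warm enough to skip the jacket
            warm_shows.append(show)
    return warm_shows   # the final step would be a ticket search per show
```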

On its own, all of that would be a lot to take in. But then, one day after Microsoft’s event, Google gave its own presentation for Bard, another generative-AI-powered chatbot search feature. Unlike Microsoft, which is allowing anyone to join a waitlist for the new Bing, Google is releasing the tool to only a group of “trusted testers” to start. But if you believe the press releases and CEO bluster, navigating the internet and accessing information will look completely different in a mere few months.

All of this news is frankly overwhelming. Microsoft’s and Google’s announcements follow last summer’s public debuts of AI art tools including DALL-E 2, Midjourney, and Stable Diffusion, which demonstrated an uncanny ability to create vivid, original images from a simple string of text. And in late November, OpenAI released ChatGPT, which has upended many conceptions of how machines can interact with humans, passing graduate-school exams, flooding the internet with confident bullshit, writing news articles, and helping people get jobs and cheat on tests. It’s hard not to get a sense that we are just at the beginning of an exciting and incredibly fast-moving technological era. So fast-moving, in fact, that parsing what we should be delighted about, and what we should find absolutely terrifying, feels hopeless. AI has always been a mix of both, but the recent developments have been so dizzying that we are in a whole new era of AI vertigo.

Across the internet, technologists and venture capitalists, sensing fortunes to be made, are suggesting that the world is about to be completely reimagined and that the stuff of science fiction is at arm’s reach. Here’s one representative tweet:

ChatGPT reached 100M users in 2 months, and is expanding at an increasing speed.
Google Bard, if fully rolled out, will reach at least 1B users.

We are witnessing 2 largest deployments of big neural nets in history. A dance of giants. Unfolding in real time.

Drawn to scale 👇 pic.twitter.com/2wDrfLj8zL

— Jim Fan (@DrJimFan) February 7, 2023

At present, the new search tools look like a streamlining of the way we search. Those who’ve had early access to the new, AI-powered Bing have described it as a true change, saying that using it feels akin to the first time they searched something on Google. A product rollout that produces this kind of chatter doesn’t happen often. Sometimes, it signals a generational shift, like the unveiling of Windows 95 or the first iPhone. What these announcements have in common is that they don’t just reimagine a piece of technology (desktop operating systems, phones) but rather create their own gravity, reshaping culture and behaviors around their use.

[Read: The most important job skill of this century]

AI enthusiasts will tell you that the sheer size of these new developments is world-changing. Consider the scale of adoption for products such as ChatGPT, which attracted tens of millions of users in its first two months. Then consider the new scale of AI’s abilities. According to researchers, AI’s computational power is doubling every six to 10 months, well ahead of Moore’s Law. The implication is that, however impressive these tools may feel at present, we’ve barely sniffed what they will be capable of in just weeks’ time. The current hype around OpenAI’s GPT-4 is that it will behave in unrecognizable ways compared with its predecessor, which powers ChatGPT.
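To put those doubling rates in perspective, here is the back-of-the-envelope compounding over a two-year span, using the figures quoted above:

```python
months = 24  # compare growth over a two-year span

ai_fast = 2 ** (months / 6)    # doubling every 6 months  -> 16x
ai_slow = 2 ** (months / 10)   # doubling every 10 months -> ~5.3x
moore   = 2 ** (months / 24)   # Moore's Law's ~2-year doubling -> 2x

print(ai_fast, ai_slow, moore)
```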

That said, everything you’ve read thus far might only be hype. Those who are most vocal about the AI paradigm shift, after all, tend to have a vested interest in the technology’s success. Even the tech industry’s sudden rhetorical pivot from Web3 as the internet’s next savior to AI should raise suspicions about exactly how real all of this is. And from what we can see of the new Microsoft and Google products—which are largely unavailable to the general public as of this writing—they are imperfect. ChatGPT’s current model is already infamous for confidently stating false information. Yesterday, Reuters reported that one of Bard’s demo answers, which concerned space telescopes, included a factual inaccuracy.

But even if the information these tools surface isn’t false, that doesn’t mean the tools won’t cause new problems. If these chatbots usher in a genuine search revolution, how will the billions of dollars wrapped up in search advertising be reallocated? It’s hard to imagine that the clean design of these new tools won’t later be overrun by ads or that companies won’t broker their own deals to get priority placement, just as they have across traditional Google Search. And, if the engines offer up full summaries and answers without requiring users to click links, what happens to the vital influx of traffic that search directs toward websites and publishers?

A paradigm shift in how we navigate the internet would likely upend the countless microeconomies that depend on search, which raises the question: Have the AI’s creators—or anyone, for that matter—planned for this kind of disruption? Despite its relatively subdued entry into the AI arms race, Google has been developing its Language Model for Dialogue Applications (LaMDA) technology for years—perhaps it hasn’t fully integrated its technology into search because doing so threatens to upend its still-lucrative business.

[Read: What happens when AI has read everything?]

Already, Google is facing financial repercussions for its Bard presentation: The report of Bard’s factual error caused the company’s stock to slide as much as 9 percent. It also led to arguments over whether Bard was actually wrong. The Financial Times wrote that the answer was only misinterpreted, whereas an astrophysicist insisted that the error was clear and factual. This confusion is a glimpse into our immediate AI future, one in which humans disagree about whether the machines are telling the truth, while fortunes are gained and lost in the process.

Accuracy isn’t the only thing we’ll be fighting about. If you thought the content-moderation battles of the 2010s and the endless Is X a platform or a publisher? debates were exhausting, whatever is next will be more intense. Fights over censorship on platforms such as Facebook and Twitter and on search engines such as Google pale in comparison to the complexity of arguments over how large language models are trained and who is doing the training. For all their faults, our current platforms still surface information for the consumer to peruse, whereas the AI-powered-chatbot model strives to present fully formed answers with limited footnotes—a kind of post-post-truth search engine. The notion that deep neural networks trained on opaque data sets will soon act as the arbiters of information for millions is sure to raise hackles on both sides of the political aisle. (Indeed, a rudimentary version of that culture war is already brewing over ChatGPT.)

For me, all of this uncertain potential for either progress or disaster manifests as a feeling of stuckness. On the one hand, I’m fascinated by what these tools promise to evolve into and, though it’s early, by what they currently claim to do. There’s an excitement bubbling around this technology that feels genuine, especially compared with crypto and Web3 evangelism, which claimed to be fueling a paradigm shift but offered very few compelling use cases.

On the other hand, the fascination is tempered by the speed with which the field is moving and the potential stakes of this change. There’s a discontinuity in the tenor of the AI discourse: True believers suggest that nothing will be the same and that society might not be emotionally, culturally, or even politically ready for what’s next. But these same people are putting their foot on the gas, our readiness be damned. As Microsoft CEO Satya Nadella told the crowd on Tuesday, “The race starts today, and we’re going to move, and move fast.”

[Read: The dawn of artificial imagination]

AI vertigo comes from trying to balance thorny questions with the excitement posed by a technology that offers to understand us and cater to our whims in unexpected, perhaps unprecedented ways. The idea of generative AI as a new frontier for accessing knowledge, streamlining busywork, and assisting the creative process might exhilarate you. It should also unnerve you. If you’re cynical about technology (and you have every reason to be), it will probably terrify you.

For now, the speed of the change and its destabilizing effects are the most concerning elements of this new era. The possibility of search reorienting itself to privilege computer-generated answers—at a time when users seem more eager than ever to get their answers from real people on places such as Reddit—is nausea-inducing. As the tech critic Michael Sacasas wrote recently, “I’m stuck on the incongruity of populating the world with non-human agents and interfaces that will mediate human experience in an age of mounting loneliness and isolation.”

Feeling AI vertigo doesn’t necessarily mean objecting to the change or the technology, but it does mean acknowledging that the speed feels reckless. Like all transformative technology, AI is evolving without your input. The future is being presented to you whether you consent or not.
