
Counterfeit digital persons: On Dennett’s Intentional Stance, The Road to Serfdom

A few weeks ago, Daniel Dennett published an alarmist essay (“Creating counterfeit digital people risks destroying our civilization”) in The Atlantic that amplified concerns Yuval Noah Harari expressed in The Economist.+ (If you are in a rush, feel free to skip to the next paragraph, because what follows are three quasi-sociological remarks.) First, Dennett’s piece is (sociologically) notable because in it he is scathing about the “AI community” (many of whom are his fanbase) and its leading corporations (“Google, OpenAI, and others”). Dennett’s philosophy has not been known for leading one to a left-critical political economy, and neither has Harari’s. Second, Dennett’s piece is psychologically notable because it goes against his rather sunny disposition — he is a former teacher of mine and a sufficiently regular acquaintance — and the rather optimistic persona he has sketched of himself in his writings (recall this recent post); alarmism just isn’t Dennett’s shtick. Third, despite their prominence, neither Harari’s nor Dennett’s piece really reshaped the public discussion (in so far as there (still) is a public). And that’s because they compete with the ‘AGI induced extinction’ meme, which, despite being a lot more far-fetched, is scarier (human extinction > fall of our civilization) and is much better funded and supported by powerful (rent-seeking) interests.

Here’s Dennett’s core claim(s):

Money has existed for several thousand years, and from the outset counterfeiting was recognized to be a very serious crime, one that in many cases calls for capital punishment because it undermines the trust on which society depends. Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created… 

Another pandemic is coming, this time attacking the fragile control systems in our brains—namely, our capacity to reason with one another—that we have used so effectively to keep ourselves relatively safe in recent centuries.

You may ask, ‘What does this have to do with the intentional stance?’ For Dennett goes on to write, “Our natural inclination to treat anything that seems to talk sensibly with us as a person—adopting what I have called the “intentional stance”—turns out to be easy to invoke and almost impossible to resist, even for experts. We’re all going to be sitting ducks in the immediate future.” This is a kind of (or at least partial) road to serfdom thesis produced by our disposition to take up the intentional stance. In what follows I show how these concepts come together in the threat posed by AIs designed to fake personhood.

More than a half century ago, Dan Dennett re-introduced a kind of (as-if) teleological explanation into natural philosophy by coining and articulating (over the course of a few decades of refinement) the ‘intentional stance’ and its role in identifying so-called ‘intentional systems,’ which just are those entities to which application of the intentional stance is successful. Along the way, he gave different definitions of the intentional stance (and what counts as success). But here I adopt the (1985) one:

It is a familiar fact from the philosophy of science that prediction and explanation can come apart.* I mention this because it’s important to see that the intentional stance isn’t mere or brute instrumentalism. The stance presupposes prediction and explanation as joint necessary conditions.

In the preceding paragraphs I have treated the intentional stance as (i) an explanatory or epistemic tool that describes a set of strategies for analyzing other entities (including humans and other kinds of agents) studied in cognitive science and economics (one of Dennett’s original examples).** But as the language of ‘stance’ suggests, and as Dennett’s examples often reveal, the intentional stance also describes (ii) our own ordinary cognitive practice even when we are not doing science. In his 1971 article, Dennett reminds the reader that this is “easily overlooked.” (p. 93) For Dennett, the difference between (i) and (ii) is one of degree (this is his debt to his teacher Quine), but for present purposes it is useful to keep them clearly distinct (when I need to disambiguate I will use ‘intentional stance (i)’ vs. ‘intentional stance (ii)’).

Now, as Dennett already remarked in his original (1971) article, but I only noticed after reading Rovane’s (1994) “The Personal Stance,” back in the day, there is something normative about the intentional stance because of the role of rationality in it (and, as Dennett describes, the nature of belief). And, in particular, it seems natural that when we adopt the intentional stance in our ordinary cognitive practice we tacitly or explicitly ascribe personhood to the intentional system. As Dennett puts it back in 1971, “Whatever else a person might be-embodied mind or soul, self-conscious moral agent, “emergent” form of intelligence-he is an Intentional system, and whatever follows just from being an Intentional system thus is true of a person.” Let me dwell on a complication here.

That, in ordinary life, we are right to adopt the intentional stance toward others is due to the fact that we recognize them as persons, which is a moral and/or legal status. In fact, we sometimes even adopt the intentional stance (ii) in virtue of this recognition even in high-stakes contexts (e.g., ‘what would the comatose patient wish in this situation?’). That we do so may be the effect of Darwinian natural selection, as Dennett implies, and that it is generally a successful practice may also be the effect of such selection. But it does not automatically follow that when some entity is treated successfully as an intentional system it thereby is or even should be a person. Thus, whatever follows just from being an intentional system is true of a person, but (and this is the complication) it need not be the case that what is true of a person is true of any intentional system. So far so good. With that in place, let’s return to Dennett’s alarmist essay in The Atlantic, and why it instantiates, at least in part, a road to serfdom thesis.

At a high level of generality, a road to serfdom thesis holds (this is a definition I use in my work in political theory) that an outcome unintended by social decisionmakers [here profit-making corporations and ambitious scientists] is foreseeable to the right kind of observer [e.g., Dennett, Harari] and that the outcome leads to a loss of political and economic freedom over the medium term. I use ‘medium’ here because the consequences tend to follow in a time frame within an ordinary human life, but generally longer than one or two years (which is the short run), and shorter than the centuries-long process covered by (say) the rise and fall of previous civilizations. (I call it a ‘partial’ road to serfdom thesis because a crucial plank is missing–see below.)

Before I comment on Dennett’s implied social theory, it is worth noting two things (and the second is rather more important): first, adopting the intentional stance is so (to borrow from Bill Wimsatt) entrenched into our ordinary cognitive practices that even those who can know better (“experts”) will do so in cases where they may have grounds to avoid doing so. Second, Dennett recognizes that when we adopt the intentional stance(ii) we have a tendency to confer personhood on the other (recall the complication.) This mechanism helps explain, as Joshua Miller observed, how that Google engineer fooled himself into thinking he was interacting with a sentient person.

Of course, a student of history, or a reader of science fiction, will immediately recognize that this tendency to confer personhood on intentional systems can be highly attenuated. People and animals have been regularly treated as things and instruments. So, what Dennett really means or ought to mean is that we will (or are) encounter(ing) intentional systems designed (by corporations) to make it likely that we will automatically treat them as persons. Since Dennett is literally the expert on this, and has little incentive to mislead the rest of us on this very issue, it’s worth taking him seriously, and it is rather unsettling that even powerful interests with a manifest self-interest in doing so are not.

Interestingly enough, in this sense the corporations who try to fool us are mimicking Darwinian natural selection because, as Dennett himself emphasized decades ago when the robot Cog was encountered in the lab, we all ordinarily have a disposition to treat even very rudimentary eyes following/staring at us as exhibiting agency and as inducing the intentional stance in us. Software and human-factors engineers have been taking advantage of this tendency all along to make our gadgets and tools ‘user friendly.’

Now, it is worth pointing out that while digital environments are important to our civilization, they are not the whole of it. So, even in the worst-case scenario — our digital environment is already polluted in the way Dennett worries by self-replicating counterfeit people — you may think we still have some time to avoid conferring personhood on intentional systems in our physical environment and, thereby, also have time to partially cleanse our digital environment. Politicians still have to vote in person and many other social transactions (marriage, winning the NBA) still require in-person attendance. This is not to deny that a striking number of transactions can be done virtually or digitally (not least in the financial sector), but in many of these cases we also have elaborate procedures (and sanctions) to prevent fraud, developed both by commercial parties and by civil society and government. This is a known arms race between identity-thieves, including self-replicating AI/LLMs who lack all sentience, and societies.

This known arms race actually builds on the more fundamental fact that society itself is the original identity thief because, generally, for all of us its conventions and laws both fix an identity where either there previously was none or displaces other (possible) identities, as well as, sometimes, takes away or unsettles the identity ‘we’ wish to have kept (and, here, too, there is a complex memetic arms race in which any token of a society is simultaneously the emergent property, but society (understood as a type) is the cause. [See David Haig’s book, From Darwin to Derrida, for more on this insight.]) And, of course, identity-fluidity also has many social benefits (as we can learn from our students or gender studies).

Now, at this point it is worth returning to the counterfeit money example that frames Dennett’s argument. It is not obvious that counterfeit money harmed society. It did harm the sovereign because it undermined a very important lever of power (and its sovereignty), namely the ability to insist that taxes are paid/levied in the very same currency/unit-system in which he/she paid salaries (and wrote IOUs) and other expenses. I don’t mean to suggest there are no other harms (inflation and rewarding ingenious counterfeiters), but these were neither that big a deal nor the grounds for making it a capital crime. (In many eras counterfeit money was useful to facilitate commerce in the absence of gold or silver coins.)

And, in fact, as sovereignty shifted to parliaments and people at the start of the nineteenth century, the death penalty for forgery and counterfeiting currency was abolished (and the penalties reduced over time). I suspect this is also due to the realization that where systematic forgeries are successful they do meet a social need and that a pluralist mass society itself is more robust than a sovereign who insists on full control over the mint. Dennett himself implicitly recognizes this, too, when he advocates “strict liability laws, removing the need to prove either negligence or evil intent, would keep them on their toes.” (This is already quite common in product liability and other areas of tort law around the world.)

I am not suggesting complacency about the risk identified by Harari and Dennett. As individuals, associations, corporations, and governments we do need to commit to developing tools that prevent and mitigate the risk from our own tendency to ascribe personhood to intentional systems designed to fool us. We are already partially habituated to do so with all our passwords, two-factor verification, ID cards, passport controls, etc.

In many ways, another real risk here, and the reason I introduced the road to serfdom language up above (despite the known aversion to Hayek among many readers here at CrookedTimber), is that our fear of deception can make us overshoot in risk mitigation, and this, too, can undermine trust and many other benefits from relatively open and (so partially) vulnerable networks and practices. So, it would be good if regulators and governments started the ordinary practice of eliciting expert testimony to craft well-designed laws right now, carefully calibrating them by attending both to the immediate risk from the profit-hungry AI community and to the long-term risk of creating a surveillance society to prevent ascribing personhood to the wrong intentional systems (think Blade Runner). For, crucially, a (full) road to serfdom thesis requires that, in order to ward off some unintended and undesirable consequences, decisions are taken along the way that tend to lock in a worse-than-intended and de facto bad political outcome.

I could stop here, because this is my main point. But Dennett’s own alarmism is due to the fact that he thinks the public sphere (which ultimately has to support lawmakers) may already be so polluted that no action is possible. I quote again from The Atlantic:

Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation. 

I don’t think our liberal democracy depends on the informed consent of the governed. This conflates a highly idealized and normative view of democracy (that one may associate with deliberative or republican theories) with reality. It’s probably an impossible ideal in relatively large societies with complex cognitive division of labor, including the (rather demanding) sciences. (And it is also an ideal that gets abused in arguments for disenfranchisement.) So, while an educated populace should be promoted, in practice we have all kinds of imperfect, overlapping institutions and practices that correct for the lack of knowledge (parties, press, interest groups, consumer associations, academics, and even government bureaucracies, etc.)

It doesn’t follow that we should be complacent about the fact that many of the most economically and politically powerful people, corporations, and governments control our attention, which they already do a lot of the time. But this situation is not new; Lippmann and Stebbing diagnosed it over a century ago, and it probably is an intrinsic feature of many societies. It is partially to be hoped that a sufficient number of the most economically and politically powerful people, corporations, governments, and the rest of us are spooked into action and social mobilization by Harari and Dennett to create countervailing mechanisms (including laws) to mitigate our tendency to ascribe personhood to intentional systems. (Hence this post.)

There is, of course, an alternative approach: maybe we should treat all intentional systems as persons and redesign our political and social lives accordingly. Arguably some of the Oxford transhumanists and their financial and intellectual allies are betting on this, even if it leads to human extirpation in a successor civilization. Modern longtermism seems to be committed to the inference from intentional stance (i) to ascription of personhood or moral worth. From their perspective, Dennett and Harari are fighting a rear-guard battle.

 

*Here’s an example: before Newton offered a physics that showed how Kepler’s laws hung together, lots of astronomers could marvelously predict eclipses of planetary moons based on inductive generalizations alone. How good were these predictions? They were so good that they generated the first really reliable measure or estimate of the speed of light.

**Fun exercise: read Dennett’s 1971 “Intentional Systems” after you read Milton Friedman’s “The Methodology of Positive Economics” (1953) and/or Armen Alchian’s “Uncertainty, Evolution, and Economic Theory” (1950). (No, I am not saying that Dennett is the Chicago economist of philosophy!)

+Full disclosure, I read and modestly commented on Dennett’s essay in draft.

“Am I the unethical one?” A Philosophy Professor & His Cheating Students

“All I did was go to a website that is designed to facilitate cheating and set up a kind of camera to see who visited it.”

That’s Garret Merriam, associate professor of philosophy at Sacramento State University, who recently caught 40 of the 96 students in his online Introduction to Ethics course cheating on a take-home final exam.

[“Girl with a Pearl Earring” by Johannes Vermeer, 1665, (left) with “The Smiling Girl” by an unknown artist, 1925, (right)]

The story begins with him using Google to see if some of the questions on his final exam were online, and finding a copy of one of his previous final exams on the website Quizlet. Ostensibly a study aid website, Quizlet allows users to upload materials to the site, such as exam questions and answers, and is one of many sites students use to cheat on their assignments. He emailed a request to Quizlet that they take down the exam, which they did. But finding the exam gave Merriam an idea.

I decided to ‘poison the well’ by uploading [to Quizlet] a copy of my final with wrong answers. (The final is 70-80 questions, all multiple choice, 5 options each.) Most of these answers were not just wrong, but obviously​ wrong to anyone who had paid attention in class. My thinking was that anyone who gave a sufficient number of those same answers would be exposing themselves, not only as someone who cheated by looking up the final online, but who didn’t even pay enough attention in class to notice how wrong the answers were.

When the students turned in their finals, and he noticed that many of the students had selected the “obviously wrong” answers from the planted version of the final, he had to decide how to distinguish the cheaters from those who merely made mistakes. He ended up using the following standard: if there was no more than a 1 in 100 chance that the number of matching wrong answers a student gave was a coincidence, he counted them as having cheated, as he explains:

When my students turned in their finals this semester, I compared their answers with the wrong answers from the planted test. A total of 45 questions on this semester’s final were on the planted final. (The exact questions change every semester, depending on a number of factors.) As expected, nearly all students had at least a few wrong answers that matched; statistically speaking this is likely given the number of questions. I ran a binomial analysis and found the likelihood that someone whose answers matched on 19 out of the 45 planted questions had about a 1:100 chance of doing so by coincidence. That was my (admittedly somewhat arbitrary) threshold, and anyone who matched at least that many, I suspected of cheating. (The highest match was 40 out of 45, which has a 1:10-Quintillion chance of being a coincidence.)

To my amazement, that threshold implies that 40 out of 96 students looked at and used the planted final for at least a critical mass of questions. 
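The binomial analysis Merriam describes can be sketched as a tail-probability calculation. His exact model isn’t stated, so the per-question match probability p = 1/5 below (each of the five multiple-choice options equally likely to be picked by chance) is an assumption for illustration; the flagging threshold is simply the smallest match count whose tail probability falls below 1 in 100.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that an honest
    student matches at least k of the planted wrong answers by luck."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

n = 45      # planted questions that appeared on this semester's final
p = 1 / 5   # assumed chance of coincidentally matching one planted answer

# Smallest k whose tail probability drops below 1 in 100.
threshold = next(k for k in range(n + 1) if binom_sf(k, n, p) < 0.01)
print(f"flag students matching >= {threshold} planted answers "
      f"(tail probability {binom_sf(threshold, n, p):.2e})")
```

Under this particular assumption the flag point lands in the mid-teens rather than at Merriam’s 19, which suggests his model used a somewhat higher per-question match probability; the point of the sketch is the shape of the calculation, not his exact numbers.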

When he confronted those students about this, most of them admitted they had cheated; the consequences for their grades are still being determined:

I emailed these students telling them what I had done and what I found. About 2/3rds of them confessed right away or denied it at first and quickly changed their tune. The remaining third either haven’t gotten back to me yet or have insisted on their innocence. (I am considering that possibility for one student who is right ‘on the bubble’, but the rest are upwards of 1:1 billion chance, or more.)

I am in discussion with my Chair about exactly what response is appropriate for these students, but a zero on the final is the bare minimum, and an F in the class is likely for some, if not all of those who cheated.

He adds:

As you can probably imagine, this has been exceptionally stressful for me (I’m neither a forensic mathematician, nor a cop, so this work took a lot of time that I would have preferred to have spent grading final essays.)

Professor Merriam wanted to share what happened on Daily Nous to see what other people in philosophy made of the situation and the actions he took. He had discussed it a little on Twitter, and while some people were, he says, “sympathetic and supportive,” others (for example) expressed the view that what he did was itself unethical. He disagrees:

As far as I can tell, their argument seems to boil down to the claim that my actions were deceptive or dishonest. I was accused of ‘entrapment’ and ‘honey-potting.’ More than a few seemed to think that my transgression was as bad or even worse than my students’. They suggested I should have just taken the copy of my test down and left it at that. As far as I can tell most of these people are not teachers of any kind, and none of them seemed to teach philosophy, ethics, or humanities.

These charges don’t make sense to me. I did not encourage or nudge my students to cheat, I did not do anything to make such cheating more likely or easier. Quite the opposite: I tell all my students what will happen if I catch them cheating, and I gave them a comprehensive study guide for the final.

As far as Quizlet goes, all I did was go to the website that is designed to facilitate cheating and set up a kind of camera to see who visited it. I honestly do not see what is objectionable about that. My University has an academic honesty policy that explicitly says that looking at other tests without the instructor’s permission counts as cheating. (Although had I known it would be this much of an issue I would have been explicit about that in my syllabus as well, rather than just linking to the policy, an oversight I plan to correct going forward.)

Though he disagrees with his critics, he is “open to the possibility that I might be wrong”:

Maybe (as the saying goes) I am the asshole here. But I would take that possibility a lot more seriously if that were the judgment of my immediate peers (philosophers at least, if not specifically ethicists), and even more so still if those peers could articulate an argument beyond simplistic accusations of dishonesty or ‘entrapment.’

So, I thought I would reach out to you and see if you could share this with Daily Nous readers and ask them: Am I the unethical one here?

That’s one question. But it might be more useful to consider more generally: (a) feasible cheat-deterring strategies for professors teaching large classes, (b) what professors should do when they catch their students cheating (when this is not settled by university policy), and (c) the extent to which professors should concern themselves with whether their students are cheating.


The post “Am I the unethical one?” A Philosophy Professor & His Cheating Students first appeared on Daily Nous.

Trafficking in traffic

Ben Smith picked just the right title for his saga of BuzzFeed, Gawker, and The Huffington Post: Traffic (though in the end, he credits the able sensationalist Michael Wolff with the choice). For what Ben chronicles is both the apotheosis and the end of the age of mass media and its obsessive quest for audience attention, for scale, for circulation, ratings, page views, unique users, eyeballs and engagement. 

Most everything I write these days — my upcoming books The Gutenberg Parenthesis in June and a next book, an elegy to the magazine, in November, and another that I’m working on about the internet — is in the end about the death of the mass, a passing I celebrate. I write in The Gutenberg Parenthesis:

The mass is the child and creation of media, a descendant of Gutenberg, the ultimate extension of treating the public as object — as audience rather than participant. It was the mechanization and industrialization of print with the steam-powered press and Linotype — exploding the circulation of daily newspapers from an average of 4,000 in the late nineteenth century to hundreds of thousands and millions in the next — that brought scale to media. With broadcast, the mass became all-encompassing. Mass is the defining business model of pre-internet capitalism: making as many identical widgets to sell to as many identical people as possible. Content becomes a commodity to attract the attention of the audience, who themselves are sold as a commodity. In the mass, everything and everyone is commodified.

Ben and the anti-heroes of his tale — BuzzFeed founder Jonah Peretti, Gawker Media founder Nick Denton, HuffPost founder Arianna Huffington, investor Kenny Lerer, and a complete dramatis personae of the early players in pure-play digital media — were really no different from the Hearsts, Pulitzers, Newhouses, Luces, Greeleys, Bennetts, Sarnoffs, Paleys, and, yes, Murdochs, the moguls of mass media’s mechanized, industrialized, and corporate age who built their empires on traffic. The only difference, really, was that the digital moguls had new ways to hunt their prey: social, SEO, clickbait, data, listicles, and snark.

Ben tells the story so very well; he is an admirable writer and reporter. His narrative whizzes by like a local train on the express tracks. And it rings true. I had a seat myself on this ride. I was a friend of Nick Denton’s and a member of the board of his pre-Gawker company, Moreover; president of the online division of Advance (Condé Nast + Newhouse Newspapers); a board member for another pure-play, Plastic (a mashup of Suck et al.); a proto-blogger; a writer for HuffPost; and a media critic who occasionally got invited to Nick’s parties and argued alongside Elizabeth Spiers at his kitchen table that he needed to open up to comments (maybe it’s all our fault). So I quite enjoyed Traffic. Because memories.

Traffic is worthwhile as a historical document of an as-it-turns-out-brief chapter in media history and as Ben’s own memoir of his rise from Politico blogger to BuzzFeed News editor to New York Times media critic to co-founder of Semafor. I find it interesting that Ben does not try to separate out the work of his newsroom from the click-factory next door. Passing reference is made to the prestige he and Jonah wanted news to bring to the brand, but Ben does not shy away from association with the viral side of the house. 

I saw a much greater separation between the two divisions of BuzzFeed — not just reputationally but also in business models. It took me years to understand the foundation of BuzzFeed’s business. My fellow media blatherers would often scold me: “You don’t understand, Jeff,” one said, “BuzzFeed is the first data-driven newsroom.” So what? Every newsroom and every news organization since the 1850s measured itself by its traffic, whether they called it circulation or reach or MAUs. 

No, what separated BuzzFeed’s business from the rest was that it did not sell space or time or even audience. It sold a skill: We know how to make our stuff viral, they said to advertisers. We can make your stuff viral. As a business, it (like Vice) was an ad agency with a giant proof-of-concept attached.

There were two problems. The first was that BuzzFeed depended for four-fifths of its distribution on other platforms: BuzzFeed’s own audience took its content to the larger audience where they were, mostly on Facebook, also YouTube and Twitter. That worked fine until it didn’t — until other, less talented copykittens ruined it for them. The same thing happened years earlier to About.com, where The New York Times Company brought me in to consult after its purchase. About.com had answers to questions people asked in Google search, so Google sent them to About.com, where Google sold the ads. It was a beautiful thing, until crappy content farms like Demand Media came and ruined it for them. In a first major ranking overhaul, Google had to downgrade everything that looked like a content farm, including About. Oh, well. (After learning the skills of SEO and waiting too long, The Times Company finally sold About.com; its remnants labor on in Barry Diller’s content farm, DotDash, where the last survivors of Time Inc. and Meredith toil, mostly post-print.)

The same phenomenon struck BuzzFeed, as social networks became overwhelmed with viral crap because, to use Silicon Valley argot, there was no barrier to entry to making clickbait. In Traffic, Ben reviews the history of Eli Pariser’s well-intentioned but ultimately corrupting startup Upworthy, which ruined the internet and all of media with its invention, the you-won’t-believe-what-happened-next headline. The experience of being bombarded with manipulative ploys for attention was bad for users and the social networks had to downgrade it. Also, as Ben reports, they discovered that many people were more apt to share screeds filled with hate and lies than cute kittens. Enter Breitbart. 

BuzzFeed’s second problem was that BuzzFeed News had no sustainable business model other than the unsustainable business model of the rest of news. News isn’t, despite the best efforts of headline writers, terribly clickable. In the early days, BuzzFeed didn’t sell banner ads on its own content and even if it had, advertisers don’t much want to be around news because it is not “brand safe.” Therein lies a terrible commentary on marketing and media, but I’ll leave that for another day. 

Ben’s book comes out just as BuzzFeed killed News. In the announcement, Jonah confessed to “overinvesting” in it, which is an admirably candid admission that news didn’t have a business model. Sooner or later, the company’s real bosses — owners of its equity — would demand its death. Ben writes: “I’ve come to regret encouraging Jonah to see our news division as a worthy enterprise that shouldn’t be evaluated solely as a business.” Ain’t that the problem with every newsroom? The truth is that BuzzFeed News was a philanthropic gift to the information ecosystem from Jonah and Ben.

Just as Jonah and company believed that Facebook et al had turned on them, they turned on Facebook and Google and Twitter, joining old, incumbent media in arguing that Silicon Valley somehow owed the news industry. For what? For sending them traffic all these years? Ben tells of meeting with the gray eminence of the true evil empire, News Corp., to discuss strategies to squeeze “protection money” (Ben’s words) from technology companies. That, too, is no business model. 

Thus the death of BuzzFeed News says much about the fate of journalism today. In Traffic, Ben tells the tale of the greatest single traffic driver in BuzzFeed’s history: The Dress. You know, this one:

At every journalism conference where I took the stage after that, I would ask the journalists in attendance how many of their news organizations wrote a story about The Dress. Every single hand would go up. And what does that say about the state of journalism today? As we whine and wail about losing reporters and editors at the hands of greedy capitalists, we nonetheless waste tremendous journalistic resources rewriting each other for traffic: everyone had to have their own story to get their own Googlejuice and likes and links and ad impressions and pennies from them. No one added anything of value to BuzzFeed’s own story. The story, certainly BuzzFeed would acknowledge, had no particular social value; it did nothing to inform public discourse. It was fun. It got people talking. It took their attention. It generated traffic.

The virus Ben writes about is one that BuzzFeed — and every news organization on the internet, and the internet as a whole — caught from old, coughing mass media: the insatiable hunger for traffic for its own sake. In the book, Nick Denton plays the role of inscrutable (oh, I can attest to that) philosopher. According to Ben, Nick believed that traffic was the key expression of value: “Traffic, to Nick … was something pure. It was an art, not a science. Traffic meant that what you were doing was working.” Yet Nick also knew where traffic could lead. Ben quotes him telling a journalist in 2014: “It’s not jonah himself I hate, but this stage of internet media for which he is so perfectly optimized. I see an image of his cynical smirk — made you click! — every time a stupid buzzfeed listicle pops on Facebook.”

Nick also believed that transparency was the only ethic that really mattered, for the sake of democracy. Add these two premises, traffic and transparency, together and the sex tape that was the McGuffin that brought down Gawker and Nick at the hands of Peter Thiel was perhaps an inevitability. Ben also credits (or blames?) Nick for his own decision to release the Trump dossier to the public on BuzzFeed. (I still think Ben has a credible argument for doing so: It was being talked about in government and in media and we, the public, had the right to judge for ourselves. Or rather, it’s not our right to decide; it’s a responsibility, which will fall on all of us more and more as our old institutions of trust and authority — editing and publishing — falter in the face of the abundance of talk the net enables.)

The problem in the end is that traffic is a commodity; commodities have no unique value; and commodities in abundance will always decrease in price, toward zero. “Even as the traffic to BuzzFeed, Gawker Media, and other adept digital publishers grew,” Ben writes, “their operators began to feel that they were running on an accelerating treadmill, needing ever more traffic to keep the same dollars flowing in.” Precisely.

Traffic is not where the value of the internet lies. No, as I write in The Gutenberg Parenthesis (/plug), the real value of the internet is that it begins to reverse the impact print and mass media have had on public discourse. The internet devalues the notions of content, audience, and traffic in favor of speech. Only it is going to take a long time for society to relearn the conversational skills it has lost and — as with Gutenberg and the Reformation, Counter-Reformation, and Thirty Years’ War that followed — things will be messy in between. 

BuzzFeed, Gawker, The Huffington Post, etc. were not new media at all. They were the last gasp of old media, trying to keep the old ways alive with new tricks. What comes next — what is actually new — has yet to be invented. That is what I care about. That is why I teach. 

The post Trafficking in traffic appeared first on BuzzMachine.

The New New Reading Environment


For larger publications, the upside of newsletters is obvious. Email-bound readers can seamlessly swipe over from their Zocdoc appointment notification to their health insurance bill payment notification to their student loan payment notification to their local mass shooting notification to a Washington Post opinion newsletter about the biggest threat facing the nation (still, somehow, cancel culture). Of course, no one has pursued newsletters as zealously as the legaciest legacy-media operation of them all: the New York Times.

What do AI chatbots know about us, and who are they sharing it with?

AI chatbots are relatively old by tech standards, but the newest crop — led by OpenAI's ChatGPT and Google's Bard — are vastly more capable than their ancestors, and not always for positive reasons. The recent explosion in AI development has already created concerns around misinformation, disinformation, plagiarism and machine-generated malware. What problems might generative AI pose for the privacy of the average internet user? The answer, according to experts, is largely a matter of how these bots are trained and how much we plan to interact with them.

In order to replicate human-like interactions, AI chatbots are trained on massive amounts of data, a significant portion of which is derived from repositories like Common Crawl. As the name suggests, Common Crawl has amassed years’ and petabytes’ worth of data simply from crawling and scraping the open web. “These models are training on large data sets of publicly available data on the internet,” Megha Srivastava, PhD student at Stanford's computer science department and former AI resident with Microsoft Research, said. Even though ChatGPT and Bard use what they call a "filtered" portion of Common Crawl's data, the sheer size of the model makes it "impossible for anyone to kind of look through the data and sanitize it,” according to Srivastava.
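To see why sanitizing scraped data at this scale is considered impractical, consider what a naive scrubbing pass might look like. The sketch below is purely illustrative (the pattern set and placeholder names are my own, not anything OpenAI or Google has described): regex filters like these catch only well-formed PII and miss obfuscated, unusual, or context-dependent formats, which is exactly Srivastava's point.

```python
import re

# Hypothetical, naive PII scrubber of the kind a training pipeline might
# run over scraped web text. Order matters: SSN must be tried before the
# looser phone pattern, or SSNs get mislabeled as phone numbers.
PATTERNS = [
    ("EMAIL", re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")),
    ("SSN",   re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    ("PHONE", re.compile(r"\+?\d[\d\s().-]{7,}\d")),
]

def scrub(text):
    """Replace matched PII with typed placeholders like [EMAIL]."""
    for label, pattern in PATTERNS:
        text = pattern.sub("[%s]" % label, text)
    return text

print(scrub("Contact Dave at dave@example.com or +1 415-555-0100."))
```

Even this toy version shows the problem: "dave at example dot com" sails straight through, and at petabyte scale no one can hand-review what the patterns miss.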

Personal information exposed either through your own carelessness or through the poor security practices of a third party could be in some far-flung corner of the internet right now. Even though it might be difficult to access for the average user, it's possible that information was scraped into a training set and could be regurgitated by a chatbot down the line. And a bot spitting out someone's actual contact information is in no way a theoretical concern. Bloomberg columnist Dave Lee posted on Twitter that, when someone asked ChatGPT to chat on encrypted messaging platform Signal, it provided his exact phone number. This sort of interaction is likely an edge case, but the information these learning models have access to is still worth considering. "It’s unlikely that OpenAI would want to collect specific information like healthcare data and attribute it to individuals in order to train its models," David Hoelzer, a fellow at security organization the SANS Institute, told Engadget. “But could it inadvertently be in there? Absolutely.”

OpenAI, the company behind ChatGPT, did not respond when we asked what measures it takes to protect data privacy, or how it handles personally identifiable information that may be scraped into its training sets. So we did the next best thing and asked ChatGPT itself. It told us that it is "programmed to follow ethical and legal standards that protect users’ privacy and personal information" and that it doesn't "have access to personal information unless it is provided to me." Google, for its part, told Engadget it programmed similar guardrails into Bard to prevent the sharing of personally identifiable information during conversations.

Helpfully, ChatGPT brought up the second major vector by which generative AI might pose a privacy risk: usage of the software itself — either via information shared directly in chatlogs or device and user information captured by the service during use. OpenAI’s privacy policy cites several categories of standard information it collects on users, which could be identifiable, and upon starting it up, ChatGPT does caution that conversations may be reviewed by its AI trainers to improve systems. 

Google's Bard, meanwhile, does not have a standalone privacy policy; instead it uses the blanket privacy policy shared by other Google products (which happens to be tremendously broad). Conversations with Bard don't have to be saved to the user's Google account, and users can delete the conversations via Google, the company told Engadget. “In order to build and sustain user trust, they're going to have to be very transparent around privacy policies and data protection procedures at the front end,” Rishi Jaitly, professor and distinguished humanities fellow at Virginia Tech, told Engadget.

Despite ChatGPT having a "clear conversations" action, pressing it does not actually delete your data, according to the service’s FAQ page, nor is OpenAI able to delete specific prompts. While the company discourages users from sharing anything sensitive, seemingly the only way to remove personally identifying information provided to ChatGPT is to delete your account, which the company says will permanently remove all associated data.

Hoelzer told Engadget he’s not worried that ChatGPT is ingesting individual conversations in order to learn. But that conversation data is being stored somewhere, and so its security becomes a reasonable concern. Incidentally, ChatGPT was taken offline briefly in March because a programming error revealed information about users’ chat histories. It's unclear this early in their broad deployment if chat logs from these sorts of AI will become valuable targets for malicious actors.

For the foreseeable future, it's best to treat these sorts of chatbots with the same suspicion users should be treating any other tech product. “A user playing with these models should enter with expectation that any interaction they're having with the model," Srivastava told Engadget, "it's fair game for Open AI or any of these other companies to use for their benefit.”

This article originally appeared on Engadget at https://www.engadget.com/what-do-ai-chatbots-know-about-us-and-who-are-they-sharing-it-with-140013949.html?src=rss


For Lower-Income Students, Big Tech Internships Can Be Hard to Get

Critics say the intern selection process often favors wealthier students, just like the admission process at some elite colleges.

my proposed law

By: ayjay

“Any online platform and/or application that delivers content to users may deliver only content explicitly requested by said users.” 

That’s it. No algorithms, no autoplay, no “You may also like,” no “Up next.” Only what human beings (AKA “consumers”) choose. Now you don’t have to ban TikTok, and you will reduce the power that Facebook, Twitter, and all the other social-media platforms have over the minds and emotions of their users. It will even reduce, though not eliminate, the ability of Spotify and other streaming platforms to ruin music. 

(I’m sure many other people have made this suggestion.) 

Controlled Digital Lending Takes a Blow in Court

A federal judge's ruling offered a stern rebuke of the Internet Archive's National Emergency Library and its controlled digital lending service, providing a significant victory for the four publishers that had filed suit.

The post Controlled Digital Lending Takes a Blow in Court appeared first on The Scholarly Kitchen.

The Internet Archive Loses on Controlled Digital Lending

On Friday, the Internet Archive lost its "controlled digital lending" case on summary judgment. Reactions today from our Chefs Rick Anderson, Joseph Esposito, Lisa Janicke Hinchliffe, Roy Kaufman, Roger C. Schonfeld, and Karin Wulf.

The post The Internet Archive Loses on Controlled Digital Lending appeared first on The Scholarly Kitchen.

By: ayjay

Brewster Kahle, Internet Archive:

The dream of the Internet was to democratize access to knowledge, but if the big publishers have their way, excessive corporate control will be the nightmare of the Internet. That is what is at stake. Will libraries even own and preserve collections that are digital? Will libraries serve our patrons with books as we have done for millennia? A positive ruling that affirms every library’s right to lend the books they own, would build a better Internet and a better society.

By: ayjay

Yair Rosenberg:

In 2013, Google shut down its celebrated RSS client, Google Reader, citing a decline in RSS usage. Today, millions of people still use RSS readers, but many times more use social-media sites and don’t even know that RSS exists. This imbalance means that media outlets and other content providers have greater incentive to invest in social-media infrastructure rather than RSS support, leading some to drop the latter entirely. But though the internet’s creative output deserves our attention, social-media companies do not. When the primary way we read online is filtered through the algorithms of capricious corporations that can change what we see on a whim, both writers and readers suffer. RSS is a reminder that it doesn’t have to be this way.

Long-time readers know that I’ve been preaching this message for years and years (see the “RSS” tag at the bottom of this post). If you don’t believe me maybe you’ll believe Yair.

By: ayjay

Cal Newport:

Imagine if the Supreme Court threw caution to the wind and radically rolled back Section 230 protections; to the point where it became legally unviable to operate any sort of major platform that harvests attention using algorithmic-curation of user-generated content. In this thought experiment, Facebook disappears, along with Twitter, Instagram, Tik Tok, and even YouTube.

This certainly would devastate the tech sector for a while. It would also hurt the portfolios of those invested in these companies. But what would the impact be on the average internet user? It might not actually be so bad.

I would quaver a bit at the loss of YouTube, but … okay. You’ve got a deal. Sign me up. 

Inside the NBA’s Great Generation War

After years of the NBA being the most Extremely Online sports league on the planet, the chickens are coming home to roost. ESPN hot takes, podcasts, halftime shows — no matter the medium or the source, players are feeling some kind of way about the scrutiny they’re under. Sure, Alex Wong’s dispatch might be a little inside-baseball (or rather, inside-basketball) for non-fans, but it’s breezy and distillative enough that you’ll leave with some sense of what the r/nba obsessives among us live with every single day.

Welcome to the NBA’s generational wars, where today’s terminally online athletes are fed up with seeing every detail of their lives analyzed under a microscope. Unlike other, less permissive sports leagues, the NBA has long embraced the amplifying powers of social media—from its early embrace of Instagram recappers like @HouseOfHighlights to its cultivation of NBA Twitter—but now we find ourselves at an inflection point, and its young stars are fed up and lashing out.

Steam now allows you to copy games to Steam Deck and other PCs over a local network

Valve is giving Steam Deck users with slow internet connections or bandwidth caps a new way to install games on their devices. The latest Steam and Steam Deck betas add local network game transfers, a feature that allows you to copy existing files from one PC to another over a local area network. Valve says the tool can reduce internet traffic and lessen the time it takes to install games and updates since you can use it to bypass the need to connect to a Steam content server over the internet.

Hello! We've just shipped a Beta update to Steam and Steam Deck that includes a new feature: Local Network Game Transfers.

This allows Steam users to install games directly from one PC to another over a local network, without having to download and install from the internet. pic.twitter.com/bv9xThZCoS

— Steam Deck (@OnDeck) February 17, 2023

“Local Network Game Transfers are great for Steam Deck owners, multi-user Steam households, dorms, LAN parties, etc,” the company points out. “No more worries about bandwidth or data caps when all the files you need are already nearby.” Once you’ve installed the new software on your devices, Steam will first check if it can transfer a game installation or set of update files over your local network before contacting a public Steam content server. If at any point one of the devices involved in the transfer is disconnected from your local network, Steam will fall back to downloading any necessary files from the internet.

By default, the feature is set to only work between devices logged into the same Steam account, but you can also transfer files between friends on the same local area network. It’s also possible to transfer to any user on the same network, which is something you would do during a LAN tournament. Valve has published a FAQ with more information about local network game transfers, including details on some of the limitations of the feature, over on the Steam website.
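The local-first, internet-fallback flow Valve describes can be sketched in a few lines. Everything here is illustrative — the function names, URL scheme, and `content.example.com` host are my own stand-ins, not Valve's actual client code or endpoints — but it captures the ordering the article describes: try the LAN peer first, and fall back to a public content server if the peer is missing or drops mid-transfer.

```python
from urllib.request import urlopen
from urllib.error import URLError

def candidate_urls(chunk_id, local_peer=None):
    """Order download sources: LAN peer first, public content server last."""
    urls = []
    if local_peer:
        urls.append("http://%s/chunks/%s" % (local_peer, chunk_id))
    urls.append("https://content.example.com/chunks/%s" % chunk_id)
    return urls

def fetch_chunk(chunk_id, local_peer=None):
    """Try each source in order; a dropped LAN peer triggers fallback."""
    for url in candidate_urls(chunk_id, local_peer):
        try:
            with urlopen(url, timeout=5) as resp:
                return resp.read()
        except (URLError, OSError):
            continue  # peer unreachable or disconnected: try next source
    raise RuntimeError("chunk %s unavailable from all sources" % chunk_id)
```

The design point is simply that the fallback is per-transfer: losing the local peer degrades to a normal internet download rather than failing the installation.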


By: ayjay

Cory Doctorow: “In its nearly 25-year history, Google has made one and a half successful products: a once-great search engine and a pretty good Hotmail clone. Everything else it built in-house has crashed and burned.” Ouch.

tradeoffs

By: ayjay

David Sax, from The Future Is Analog

“The ideas that come to our mind are around curiosity, creativity, exploration, which come to you when you’re out and moving around,” said Joseph White, the director of workplace futures and insight at the office furniture company Herman Miller. White is a professional fabric designer (he owns a loom), who moved from Brooklyn to Buffalo in the midst of the pandemic, but the longer he worked remotely, the more White noticed how much physical, sensory information his work was lacking. He missed wandering around the rambling Herman Miller campus in Michigan, moving his body, walking between buildings, touching, seeing, and even smelling the company’s different ideas as they took shape in wood, plastic, metal, and fabric. “I used to work from a dozen different spots throughout the day,” White said. “Now I look at the same piece of art all day. I miss the variety of experience. My mind connects to concepts like embodied cognition — our mind connects to the world around us, and by the process of moving around it, we get information that we’re not consciously aware of, and have meaning. We lose that when we’re stuck in the same place over and over again.” Working from home was pitched as liberating, but as my neighbor Lauren discovered each day, glued to her desk, it can easily become a type of incarceration. “[Remote work] degrades the human experience,” White said. “I worry about sensory atrophy. I worry about curiosity, because as soon as curiosity ends, that is the beginning of death.” 

Hmmm. I have some questions: 

  1. Joseph White says he “used to work from a dozen different spots throughout the day” but at home works at one spot. Has he thought about moving around? Maybe working elsewhere in his house, or going to a coffee shop? 
  2. Does White think that most workers have the freedom to work from a dozen different spots in their workplace? 
  3. Or, to put essentially the same question another way: Where are we more likely to be “glued to a desk,” at the office or at home? 
  4. How has White shaped his home life such that his home afflicts him with “sensory atrophy” and “the end of curiosity”? Maybe he could rearrange his furniture or something. 
  5. If we have families at home, then the more analog and connected our work lives are, the more virtual and disconnected our family lives will be; and vice versa. But is it obvious that it’s more important for us to be connected to our co-workers than to our families? That might be great for Capitalism, but not so great for Humans. 

frictionless ignorance

By: ayjay

Andy Baio:

Google used to take pride in minimizing time we spent there, guiding us to relevant pages as quickly as possible. Over time, they tried to answer everything themselves: longer snippets, inline FAQs, search results full of knowledge panels.

Today’s Bard announcement feels like their natural evolution: extracting all value out of the internet for themselves, burying pages at the bottom of each GPT-generated essay like footnotes. 

Yep. Similarly, Joanna Stern thinks the new AI-powered search at Bing is terrific, but note this: When she asked Bing’s AI a question, “Bing’s chatbot typed out the answer, with a bulleted list of winners and a mention of Beyoncé’s most-Grammys-ever record. The answer also contained clickable citations, noting the source of the listed information.” 

My question: Who’s gonna click through to the links? Almost nobody. People who use such services will simply assume that Bard and Bing, that classic comedy duo, provide the correct answers and thus will never leave the search page. Ease of use and superficial plausibility will leave users in a state of frictionless ignorance; sites that contain genuinely useful information will remain unvisited; and the various AI “services” will comprise a new power/knowledge regime. 

Google Fi warns customers that their data has been compromised

Google has notified customers of its Fi mobile virtual network operator (MVNO) service that hackers were able to access some of their information, according to TechCrunch. The tech giant said the bad actors infiltrated a third-party system used for customer support at Fi's primary network provider. While Google didn't name the provider outright, Fi relies on US Cellular and T-Mobile for connectivity. If you'll recall, the latter admitted in mid-January that hackers had been taking data from its systems since November last year.

T-Mobile said the attackers got away with the information of around 37 million postpaid and prepaid customers before it discovered and contained the issue. Back then, the carrier insisted that no passwords, payment information or Social Security numbers were stolen. Google Fi is saying the same thing, adding that no PINs or text message/call contents were taken, either. The hackers apparently only had access to users' phone numbers, account status, SIM card serial numbers and some service plan information, like international roaming.

Google reportedly told most users that they didn't have to do anything and that it's still working with Fi's network provider to "identify and implement measures to secure the data on that third-party system and notify everyone potentially impacted." That said, at least one customer claimed to have had more serious issues than most because of the breach. They shared a part of Google's supposed email to them on Reddit, telling them that their "mobile phone service was transferred from [their] SIM card to another SIM card" for almost two hours on January 1st.

The customer said they received password reset notifications from Outlook, their crypto wallet account and two-factor authenticator Authy that day. They sent logs to 9to5Google to prove that the attackers had used their number to receive text messages that allowed them to access those accounts. Based on their Fi text history, the bad actors started resetting passwords and requesting two-factor authentication codes via SMS within one minute of transferring their SIM card. The customer was reportedly only able to regain control of their accounts after turning network access on their iPhone off and back on, though it's unclear if that's what solved the issue. We've reached out to Google for a statement regarding the customer's SIM swapping claim and will update this post when we hear back.
