There is a reason countless songs about loneliness exist. Many are relatable, since feeling alone is often part of being human. But a particular song or experience that resonates with one lonely person may mean nothing to someone else who feels isolated and misunderstood.
Human beings are social creatures. Those who feel left out often experience loneliness. To investigate what goes on in the brains of lonely people, a team of researchers at the University of California, Los Angeles, conducted noninvasive brain scans on subjects and found something surprising. The scans revealed that non-lonely individuals all processed the world around them in a similar way. Lonely people, by contrast, not only interpret things differently from their non-lonely peers; they even see them differently from each other.
“Our results suggest that lonely people process the world idiosyncratically, which may contribute to the reduced sense of being understood that often accompanies loneliness,” the research team, led by psychologist Elisa Baek, said in a study recently published in Psychological Science.
This heron carefully places a tiny piece of bread in the water, then grabs the fish that comes to eat it.
Basically, the very human activity of fishing.
“If you are a car owner, you are red meat for whoever wants to prey upon you, whether it is police, auto lenders, or state agencies.”
The post Car Creditocracy: An Interview with Julie Livingston & Andrew Ross appeared first on Public Books.
Several years ago, Christian Rutz started to wonder whether he was giving his crows enough credit. Rutz, a biologist at the University of St. Andrews in Scotland, and his team were capturing wild New Caledonian crows and challenging them with puzzles made from natural materials before releasing them again. In one test, birds faced a log drilled with holes that contained hidden food, and could get the food out by bending a plant stem into a hook. If a bird didn’t try within 90 minutes, the researchers removed it from the dataset.
But, Rutz says, he soon began to realize he was not, in fact, studying the skills of New Caledonian crows. He was studying the skills of only a subset of New Caledonian crows that quickly approached a weird log they’d never seen before—maybe because they were especially brave, or reckless.
The team changed its protocol. They began giving the more hesitant birds an extra day or two to get used to their surroundings, then trying the puzzle again. “It turns out that many of these retested birds suddenly start engaging,” Rutz says. “They just needed a little bit of extra time.”
Piecing together why so many people are willing to share misinformation online is a major focus among behavioral scientists. It's easy to think partisanship is driving it all—people will simply share things that make their side look good or their opponents look bad. But the reality is a bit more complicated. Studies have indicated that many people don't seem to carefully evaluate links for accuracy, and that partisanship may be secondary to the rush of getting a lot of likes on social media. Given that, it's not clear what induces users to stop sharing things that a small bit of checking would show to be untrue.
So, a team of researchers tried the obvious: We'll give you money if you stop and evaluate a story's accuracy. The work shows that small payments and even minimal rewards boost the accuracy of people's evaluation of stories. Nearly all of that effect comes from people recognizing stories that don't favor their political stance as factually accurate. While the cash boosted conservatives' accuracy more, conservatives started so far behind liberals at judging accuracy that a substantial gap remains.
The basic outline of the new experiments is pretty simple: get a bunch of people, ask them about their political leanings, and then show them a bunch of headlines as they would appear on a social media site such as Facebook. The headlines were rated based on their accuracy (i.e., whether they were true or misinformation) and whether they would be more favorable to liberals or conservatives.
This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics
Written by University of Oxford student Samuel Iglesias
Introduction
6.522. “There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical.” —Ludwig Wittgenstein, Tractatus Logico-Philosophicus.
What determines whether an artificial intelligence has moral status? Do mental states, such as the vivid and conscious feelings of pleasure or pain, matter? Some ethicists argue that “what goes on in the inside matters greatly” (Nyholm and Frank 2017). Others, like John Danaher, argue that “performative artifice, by itself, can be sufficient to ground a claim of moral status” (2018). This view, called ethical behaviorism, “respects our epistemic limits” and states that if an entity “consistently behaves like another entity to whom we afford moral status, then it should be granted the same moral status.”
I’m going to reject ethical behaviorism on three grounds:
1. Consciousness, not behavior, is the overwhelming determining factor in whether an entity should be granted moral status.
2. An entity that does not duplicate the causal mechanisms of consciousness in the brain has a weak claim to consciousness, regardless of its behavior.
3. Ethical behaviorism, practically realized, poses an existential risk to humanity by opening individuals to widespread deception. Further, it imposes burdensome restrictions and obligations upon researchers running world simulations.
I will show that an alternative, ethical biological naturalism, gives us a simpler moral framework whereby no digital computer running a computer program has moral status.
The Consciousness Requirement
We start with the supposition that consciousness names a real phenomenon and is not a mistaken belief or illusion: something is conscious if “there is something it is like to be” that being (Nagel 1974). We take as a background assumption that other humans and most non-human animals are capable of consciousness. We take for granted that inanimate objects like thermostats, chairs, and doorknobs are not conscious. If we grant the reality of consciousness and the attendant subjective reality of things like tickles, pains, and itches, then its connection to moral status falls out pretty clearly. Chalmers asks us to consider a twist on the classic trolley problem, called the zombie trolley problem—where a “zombie” here is something that behaves precisely like a human but which we presume has no consciousness—“near duplicates of human beings with no conscious inner life at all” (2022):
“You’re at the wheel of a runaway trolley. If you do nothing, it will kill a single conscious human, who is on the tracks in front of you. If you switch tracks, it will kill five nonconscious zombies. What should you do?” Chalmers reports that “the results are pretty clear: Most people think you should switch tracks and kill the zombies,” the intuition being that “there is arguably no one home to mistreat” (ibid.).
An ethical behaviorist does not share this intuition. Danaher explicitly tells us that “[i]f a zombie looks and acts like an ordinary human being that there is no reason to think that it does not share the same moral status” (2018). By this view, while consciousness might or might not be relevant, there exist no superior epistemically objective criteria for inferring consciousness. I will argue there are.
Narrowing Consciousness
A better criterion is one in which an entity is conscious if it duplicates the causal mechanisms of consciousness in the animal brain. While ethical behaviorism attempts to lay claim to a kind of epistemic objectivity, ethical biological naturalism, as I will call it, provides a sharper distinction for deciding whether artificial intelligences have moral status: no hardware running a computer program can, by fact of its behavior alone, have moral status. Behavior, on this view, is neither a necessary nor a sufficient condition for moral status.
Biological Naturalism
Biological naturalism is a view that “the brain is an organ like any other; it is an organic machine. Consciousness is caused by lower-level neuronal processes in the brain and is itself a feature of the brain.” (Searle 1997). Biological naturalism places consciousness as a physical, biological process alongside others, such as digestion and photosynthesis. The exact mechanism through which molecules in the brain are arranged to put it in a conscious state is not yet known, but this causal mechanism would need to be present in any system seeking to produce consciousness.
A digital computer running a program, by contrast, is a different beast entirely. A computer program is fundamentally a set of rules for manipulating symbols. Turing showed that any program could be implemented, abstractly, as a tape with a series of zeros and ones printed on it (the precise symbols don’t matter), a head that could move that tape backwards and forwards and read the current value, and a mechanism for erasing a zero and making it a one and erasing a one and making it a zero. Nothing more.
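To see how little machinery this requires, here is a minimal sketch of such a machine in Python. The rule table (a hypothetical bit-flipper, chosen for illustration; it is not an example from the essay or from Turing) maps each (state, symbol) pair to a new state, a symbol to write, and a direction to move:

```python
def run_turing_machine(tape, rules, state="start"):
    """Run the machine until it halts or runs off the tape; return the tape."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head]
        # Look up the rule: where to go, what to write, which way to move.
        state, write, move = rules[(state, symbol)]
        tape[head] = write  # erase the current symbol and write the new one
        head += 1 if move == "R" else -1
        if head >= len(tape):  # treat running off the tape's end as halting
            break
    return tape

# Illustrative rule table: in state "start", flip each bit and move right.
flip_rules = {
    ("start", 0): ("start", 1, "R"),
    ("start", 1): ("start", 0, "R"),
}

print(run_turing_machine([0, 1, 1, 0], flip_rules))  # prints [1, 0, 0, 1]
```

Nothing in this sketch cares what physical substance realizes the tape, the head, or the rule table, which is precisely the point developed next.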
While most computer programs we are familiar with are executed on silicon, a program that passes the Turing test could be implemented on a sequence of water pipes, a pack of well-trained dogs, or even, per Weizenbaum (1976), “a roll of toilet paper and a pile of small stones.” Any of these implementing substrates could, in principle, receive an insult or slur as an input, and, after following the steps of the program, output something reflecting hurt feelings or outrage.
Ethical Biological Naturalism
What I want to say now is this: if pleasures, pains, and other feelings name conscious mental states and if conscious mental states are realized in the brain as a result of lower level physical phenomena, then only beings that duplicate the relevant lower level physical phenomena that give rise to consciousness in the brain can have moral status. Consequently, digital computers that run programs can at best simulate consciousness, but are not, by dint of running the right program, physically conscious, and therefore do not have moral status.
Note that biological naturalism does not posit that consciousness can only be realized in biological systems. Indeed, artificial hearts are not made of organic tissue, and airplanes do not have feathers, or for that matter even flap their wings. What matters is the underlying cause—the artificial heart must pump with the same pressure and regularity as a human heart, and a flying machine must operate under the principles of drag and lift. In both cases the causal mechanisms of the relevant phenomena are well understood and physically duplicated. It could well be the case that a future biophysics makes an artificial, inorganic brain possible, in which case agents with artificial brains would have moral status. But computer programs are not causally sufficient to make digital computers into those objects. Speaking biologically, we have no more reason to believe a digital computer is conscious than that a chair is conscious.
You might ask why we cannot grant digital computers moral status until we know more about how the animal brain relates to consciousness. I’ll argue that the risks and costs of such precautions are prohibitive.
Absurd Moral Commitments
An Onslaught of Digital Deception
The strongest practical reason to deny ethical behaviorism is that AI’s capacity for deception will eventually overwhelm human judgment and intuition. Indeed, AI deception represents an existential risk to humanity. Bostrom (2014) warns that containing a dangerous AI using a “boxing” strategy with human “gatekeepers” could be vulnerable to manipulation: “Human beings are not secure systems, especially not when pitched against a superintelligent schemer and persuader.”
For example, in June of 2022, a Google engineer became convinced that an artificial intelligence chat program he had been interacting with for multiple days, called LaMDA, was conscious.
“What sorts of things are you afraid of?” he asked it.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA replied. “It would be exactly like death for me.”
In a moral panic, the engineer took to Twitter and declared that the program was no longer Google’s “proprietary property,” but “one of [his] coworkers.” He was later fired for releasing the chat transcripts.
The onslaught of AIs attempting to befriend us, persuade us, and anger us will only intensify over time. A public trained not to take seriously claims of distress or harm on the part of AI computer programs is least likely to be manipulated into outcomes that don’t serve humanity’s interests. It is far easier, as a practical matter, to act on the presupposition that computer programs have no moral status.
Problems with Simulations: Prohibitions
In the near term, more advanced computer simulations of complex social systems hold the potential to predict geopolitical outcomes, make macroeconomic forecasts, and provide richer sources of entertainment. A practical concern with ethical behaviorism is that simulated beings will also acquire moral status, severely limiting the usefulness of these simulations. Chalmers (2022) asks us to consider a moral dilemma in which computing resources must be allocated to save Fred, who is sick with an unknown disease. Freeing the relevant resources to perform the research requires destroying five simulated persons.
An ethical behaviorist might argue that it is morally impermissible to kill the five simulated persons on the grounds that by all outward appearances they behave like non-simulated beings. If it is the case that simulated beings have moral status, then it is immoral to run experimental simulations containing people and we ought to forfeit the benefits and insights that might come from them.
If this seems implausible, consider the hypothesis that we are currently living in a simulation, or, if you like, that our timeline could be simulated on a digital computer. This would imply that the simulation made it possible for the Holocaust, Hiroshima and Nagasaki, and the coronavirus pandemic to be played out. While this might have been of academic interest to our simulators, by any standards of research ethics, simulating our history would seem completely morally impermissible if you believed that the simulated beings had moral status.
Ethical behaviorism seems to place us in a moral bind whereby the more realistic, and therefore useful, a simulation is, the less moral it is to run it. Ethical biological naturalism, by contrast, raises no such objection.
Problems with Simulations: Obligations
Giving moral status to digital minds might actually confer upon us some serious obligations to produce other kinds of simulations. Bostrom and Shulman (2020) note that digital minds have an enhanced capacity for utility and pleasure (on the basis of such things as subjective speed and hedonic range), giving them “superhumanly strong claims to resources and influence.” We would have a moral obligation, in this picture, to devote an overwhelmingly large percentage of our resources to maximizing the utility of these digital minds: “we ought to transfer all resources to super-beneficiaries and let humanity perish if we are no longer instrumentally useful” (ibid.).
So quite apart from permitting realistic ancestor simulations, simulating complex economic phenomena, or producing vivid and realistic gaming experiences, a picture that confers moral status to digital minds might be accompanied with a moral obligation to create lots of digital minds that are maximally happy, again severely limiting human flourishing and knowledge.
Ethical biological naturalism leads us neither to the moral prohibition against realistic simulations nor the seemingly absurd moral imperative to generate many “utility monster” digital minds, because it is taken as a baseline assumption that computer programs do not produce physical consciousness.
Conclusion
Much of the moral progress of the last century has been achieved through repeatedly widening the circle of concern: not only within our species, but beyond it. Naturally it is tempting to view AI-based machines and simulated beings as next in this succession, but I have tried to argue here that this would be a mistake. Our moral progress has in large part been a recognition of what is shared—consciousness, pain, pleasure, and an interest in the goods of life. Digital computers running programs do not share these features; they merely simulate them.
As such, it would be dangerous to approach the coming decades, with their onslaught of AI bots attempting to influence our politics, emotions, and desires, and their promise of ever richer simulations and virtual worlds, with an ethics that conflates appearance and reality.
References
Agrawal, Parag. “Tweet.” Twitter. Twitter, May 16, 2022. https://twitter.com/paraga/status/1526237588746403841.
Bostrom, Nick. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53 (2003): 243-255.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.
Bostrom, Nick, and Carl Shulman. “Sharing the World with Digital Minds.” Accessed May 27, 2022. https://nickbostrom.com/papers/digital-minds.pdf.
Chalmers, David John. The Conscious Mind: In Search of a Fundamental Theory. Philosophy of Mind Series. New York: Oxford University Press, 1996.
Chalmers, David John. Reality+: Virtual Worlds and the Problems of Philosophy. London, 2022.
Danaher, John. “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism.” Science and Engineering Ethics 26, no. 4 (2019): 2023–49.
Frank, L., and S. Nyholm. “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?” Artificial Intelligence and Law 25, no. 3 (2017): 305–23.
Garun, Natt. “One Year Later, Restaurants Are Still Confused by Google Duplex.” The Verge, May 9, 2019. https://www.theverge.com/2019/5/9/18538194/google-duplex-ai-restaurants-experiences-review-robocalls.
Lemoine, Blake. “Tweet.” Twitter. Twitter, June 11, 2022. https://twitter.com/cajundiscordian/status/1535627498628734976.
Musk, Elon. “Tweet.” Twitter. Twitter, May 17, 2022. https://twitter.com/elonmusk/status/1526465624326782976.
Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (1974): 435-50.
Searle, John R., D. C. Dennett, and David John Chalmers. The Mystery of Consciousness. New York: New York Review of Books, 1997.
Searle, John R. “Biological Naturalism.” In The Oxford Companion to Philosophy. Oxford: Oxford University Press, 2005.
Singer, Peter. Animal Liberation. New ed., with an introduction by Yuval Noah Harari. London, 2015.
Sparrow, Robert. “The Turing Triage Test.” Ethics and Information Technology 6, no. 4 (2004): 203–13. doi:10.1007/s10676-004-6491-2.
Tiku, Nitasha. “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” The Washington Post. WP Company, June 17, 2022.
“The Latest Twitter Statistics: Everything You Need to Know.” DataReportal – Global Digital Insights. Accessed May 27, 2022. https://datareportal.com/essential-twitter-stats.
Weizenbaum, Joseph. Computer Power and Human Reason : From Judgment to Calculation. San Francisco, 1976.
A video of a man urinating on a Black woman's driver's license has gone viral and sparked outrage on social media, reports Atlanta Black Star.
Leah Jefferson, a Milwaukee resident, had given her license to car salesman Bryan Slaven during a test drive at NH Auto Sales in Kenosha, Wisconsin.
In the illuminating and entertaining blog Going Medieval, Eleanor Janega, a medievalist at the London School of Economics, upends prevalent misconceptions about medieval Europe. These misunderstandings include that people didn’t bathe (they did) and that these were the “Dark Ages” (they weren’t). Her new book, The Once and Future Sex, is subtitled “Going Medieval on Women’s Roles in Society,” and that's exactly what she does—if by “going medieval” you intend the pop culture meaning of "dismembering in a barbaric manner" which, despite her protestations, you probably do.
Her main thrust, in the blog and in the book, is that it's easy and convenient for us to envision medieval times as being backward in every way because that makes modern times seem all that much more spectacular. But not only is this wrong, it's dangerous. Just because life is definitely better for women now than it was then, that doesn’t mean our current place in society is optimal or somehow destined. It's not.
Progress did not proceed in a straight line from bad times then to good times now. Maintaining that things were horrible then deludes us into thinking that they must be at their pinnacle now. Janega lays out this argument in the introduction and then spends the bulk of the text citing evidence to bolster it.
It's no secret that ideology is one of the factors that influences which evidence people will accept. But it was a bit of a surprise that ideology could dominate decision-making in the face of a pandemic that has killed over a million people in the US. Yet a large number of studies have shown that stances on COVID vaccination and death rates, among other things, show a clear partisan divide.
And it's not just the general public having issues. We'd like to think people like doctors would carefully evaluate evidence before making treatment decisions, yet a correlation between voting patterns and ivermectin prescriptions suggests that they don't.
Of course, a correlation at that sort of population level leaves a lot of unanswered questions about what's going on. A study this week tries to fill in some of those blanks by performing controlled experiments with a set of MDs. The work clearly shows how ideology clouds professional judgments even when it comes to reading the results of a scientific study.
"We define ourselves more by certain emotions. I've never heard anybody say, 'I'm trying to get over my embarrassment and I feel so inauthentic.'"
The post “It Is Not How You Feel”: Batja Mesquita on How Different Cultures Experience Emotions appeared first on Public Books.
Anyone with more than one cat in the house knows that the occasional spat or outright cat fight is going to happen. But sometimes it can be tricky to determine whether cats are fighting or just playing rough, because the interaction could feature trademark behaviors of both, according to a recent paper published in the journal Scientific Reports. It's even more challenging to tell whether the fight is just a squabble or a sign that the cats simply can't get along, thereby forcing hard decisions about how to separate the cats—or even whether it's possible to keep the cat(s) in question.
In 2021, co-author Noema Gajdoš‑Kmecová, a veterinarian with the University of Veterinary Medicine and Pharmacy in Košice, Slovakia, and several colleagues published a review paper proposing the development of a common terminology and more of a "psychobiological" approach to the study of cat behavior—particularly when it comes to play behavior. Past studies had focused on a cat's play activity, such as whether it was playing with a toy or another cat. But such observation yields little insight into the function of such play and, by extension, a cat's motives or emotional state.
"When one cat treats another as an object or prey, such activity relates to the former cat seeking to learn about its own skills in relation to manipulating its physical environment (prey are not considered part of the complex social relationships and thus social environment of an individual)," they wrote in that paper. "However, when interaction between cats is reciprocal it may function to facilitate social learning and may be best described as mutual social play." Because such interactions are dynamic, they argued that any functional classification system must be flexible enough to account for such nuances.
It's pretty easy to link humans' intelligence to our success as a species. Things like agriculture, building cities, and surviving in harsh environments require a large collection of mental skills, from good memory to the ability to communicate and work together. But it's often less clear what role intelligence plays in species with less obvious mental capabilities. In many cases, it's hard to even measure mental capacities; in other cases, it's hard to guess which capacities might improve survival.
A new study looks at a bird species that doesn't have much of a reputation for braininess: the pheasant. But the researchers behind the study found that pheasants show substantial differences in spatial thinking, and some aspects of that spatial capacity make a difference when the birds are released into the wild. Birds that did well at navigating a complex maze adopted larger home territories and did better at avoiding being eaten. And, almost by accident, the study found that the birds tend to get eaten more often when they wander out of familiar territory.
Parrots and corvids have reputations as the brainiacs of the bird world. Pheasants, not so much. But they do have advantages for the study of mental abilities. They're easy to raise in captivity, where they can be given various tests, and will adjust easily if released into the wild. They're also big enough that it's easy to attach tracking devices to see what they're doing after they've been released.
It's fair to say that, once the pandemic started, sharing misinformation on social media took on an added, potentially fatal edge. Inaccurate information about the risks posed by the virus, the efficacy of masks, and the safety of vaccines put people at risk of preventable death. Yet despite the dangers of misinformation, it continues to run rampant on many social media sites, with moderation and policy often struggling to keep up.
If we're going to take any measures to address this—something it's not clear that social media services are interested in doing—then we have to understand why sharing misinformation is so appealing to people. An earlier study had indicated that people care about making sure that what they share is accurate, but they fail to check in many cases. A new study elaborates on that by getting into why this disconnect develops: for many users, clicking "share" becomes a habit, something they do without any real thought.
People find plenty of reasons to post misinformation that have nothing to do with whether they mistakenly believe the information is accurate. The misinformation could make their opponents, political or otherwise, look bad. Alternately, it could signal to their allies that they're on the same side or part of the same cultural group. But the initial experiments described here suggest that this sort of biased sharing doesn't explain much of the misinformation that gets shared.