FreshRSS


Lonely people see the world differently, according to their brains

A person sitting alone at a table with a cake on it. The man is wearing a festive hat.

(credit: D. Anschutz)

There is a reason countless songs about loneliness exist. Many are relatable, since feeling alone is often part of being human. But a particular song or experience that resonates with one lonely person may mean nothing to someone else who feels isolated and misunderstood.

Human beings are social creatures, and those who feel left out often experience loneliness. To investigate what goes on in the brains of lonely people, a team of researchers at the University of California, Los Angeles, conducted noninvasive brain scans on subjects and found something surprising: non-lonely individuals all processed the world around them in similar ways, while lonely people not only interpreted things differently from their non-lonely peers but also differently from one another.

“Our results suggest that lonely people process the world idiosyncratically, which may contribute to the reduced sense of being understood that often accompanies loneliness,” the research team, led by psychologist Elisa Baek, said in a study recently published in Psychological Science.


Watch a heron use bait to catch a fish

This heron carefully places a tiny piece of bread in the water, then grabs the fish that comes to eat it.

Basically, the very human activity of fishing.

Green heron using a piece of bread as bait to catch a fish.


Animal personalities can trip up science, but there’s a solution

hermit crabs

Even hermit crabs have individual patterns of behavior — personalities, if you like. When scientists ignore the effects of such differences, they may produce research that’s flawed. (credit: NurPhoto via Getty Images)

Several years ago, Christian Rutz started to wonder whether he was giving his crows enough credit. Rutz, a biologist at the University of St. Andrews in Scotland, and his team were capturing wild New Caledonian crows and challenging them with puzzles made from natural materials before releasing them again. In one test, birds faced a log drilled with holes that contained hidden food, and could get the food out by bending a plant stem into a hook. If a bird didn’t try within 90 minutes, the researchers removed it from the dataset.

But, Rutz says, he soon began to realize he was not, in fact, studying the skills of New Caledonian crows. He was studying the skills of only a subset of New Caledonian crows that quickly approached a weird log they’d never seen before—maybe because they were especially brave, or reckless.

The team changed its protocol. They began giving the more hesitant birds an extra day or two to get used to their surroundings, then trying the puzzle again. “It turns out that many of these retested birds suddenly start engaging,” Rutz says. “They just needed a little bit of extra time.”


Little rewards get people to see truth in politically unfavorable info

a gavel hammers on a chat text bubble

(credit: Getty)

Piecing together why so many people are willing to share misinformation online is a major focus among behavioral scientists. It's easy to think partisanship is driving it all—people will simply share things that make their side look good or their opponents look bad. But the reality is a bit more complicated. Studies have indicated that many people don't seem to carefully evaluate links for accuracy, and that partisanship may be secondary to the rush of getting a lot of likes on social media. Given that, it's not clear what induces users to stop sharing things that a small bit of checking would show to be untrue.

So, a team of researchers tried the obvious: We'll give you money if you stop and evaluate a story's accuracy. The work shows that small payments and even minimal rewards boost the accuracy of people's evaluation of stories. Nearly all that effect comes from people recognizing stories that don't favor their political stance as factually accurate. While the cash boosted the accuracy of conservatives more, they were so far behind liberals in judging accuracy that the gap remains substantial.

Money for accuracy

The basic outline of the new experiments is pretty simple: get a bunch of people, ask them about their political leanings, and then show them a bunch of headlines as they would appear on a social media site such as Facebook. The headlines were rated based on their accuracy (i.e., whether they were true or misinformation) and whether they would be more favorable to liberals or conservatives.
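As a toy illustration of how responses from a design like this can be scored, here is a minimal sketch in Python that splits judgments by whether a headline favors the rater's own side. Every field name and row below is hypothetical, a stand-in for the study's actual data and analysis, which the article does not include.

```python
# Hypothetical scoring sketch for a headline-rating experiment.
# Each trial records whether the headline was true, whether it favored
# the participant's political side, and the participant's judgment.
trials = [
    # (headline_true, favors_own_side, judged_true)
    (True,  False, True),   # unfavorable but true, judged correctly
    (False, True,  True),   # favorable but false, judged incorrectly
    (True,  False, False),  # unfavorable and true, judged incorrectly
    (False, False, False),  # unfavorable and false, judged correctly
]

def accuracy(rows):
    """Fraction of trials where the judgment matched the truth."""
    return sum(truth == judged for truth, _, judged in rows) / len(rows)

# The key quantity in such designs: accuracy on politically
# unfavorable headlines, computed separately for rewarded
# and unrewarded groups.
unfavorable = [t for t in trials if not t[1]]
print(f"accuracy on unfavorable headlines: {accuracy(unfavorable):.2f}")
```

Comparing that figure between a paid group and an unpaid control is what would let researchers attribute any accuracy gain to the reward rather than to partisanship.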


Ethical Biological Naturalism and the Case Against Moral Status for AIs

By: admin

This article received an honourable mention in the graduate category of the 2023 National Oxford Uehiro Prize in Practical Ethics

Written by University of Oxford student Samuel Iglesias

 

Introduction

6.522. “There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical.” —Ludwig Wittgenstein, Tractatus Logico-Philosophicus.

What determines whether an artificial intelligence has moral status? Do mental states, such as the vivid and conscious feelings of pleasure or pain, matter? Some ethicists argue that “what goes on in the inside matters greatly” (Nyholm and Frank 2017). Others, like John Danaher, argue that “performative artifice, by itself, can be sufficient to ground a claim of moral status” (2018). This view, called ethical behaviorism, “respects our epistemic limits” and states that if an entity “consistently behaves like another entity to whom we afford moral status, then it should be granted the same moral status.”

I’m going to reject ethical behaviorism on three grounds:

1. Consciousness, not behavior, is the overwhelming determining factor in whether an entity should be granted moral status.

2. An entity that does not duplicate the causal mechanisms of consciousness in the brain has a weak claim to consciousness, regardless of its behavior.

3. Ethical behaviorism, practically realized, poses an existential risk to humanity by opening individuals to widespread deception. Further, it imposes burdensome restrictions and obligations upon researchers running world simulations.

I will show that an alternative, ethical biological naturalism, gives us a simpler moral framework whereby no digital computer running a computer program has moral status.

 

The Consciousness Requirement

We start with the supposition that consciousness names a real phenomenon and is not a mistaken belief or illusion: something is conscious if “there is something it is like to be” that being (Nagel 1974). We take as a background assumption that other humans and most non-human animals are capable of consciousness. We take for granted that inanimate objects like thermostats, chairs, and doorknobs are not conscious. If we grant the reality of consciousness and the attendant subjective reality of things like tickles, pains, and itches, then its connection to moral status falls out pretty clearly. Chalmers asks us to consider a twist on the classic trolley problem, called the zombie trolley problem—where a “zombie” here is something that precisely behaves like a human but which we presume has no consciousness—“near duplicates of human beings with no conscious inner life at all” (2022):

“You’re at the wheel of a runaway trolley. If you do nothing, it will kill a single conscious human, who is on the tracks in front of you. If you switch tracks, it will kill five nonconscious zombies. What should you do?” Chalmers reports: “the results are pretty clear: Most people think you should switch tracks and kill the zombies,” the intuition being that “there is arguably no one home to mistreat” (ibid.).

An ethical behaviorist does not share this intuition. Danaher explicitly tells us that “[i]f a zombie looks and acts like an ordinary human being then there is no reason to think that it does not share the same moral status” (2018). By this view, while consciousness might or might not be relevant, there exist no superior epistemically objective criteria for inferring consciousness. I will argue that there are.

 

Narrowing Consciousness

A better criterion is one in which an entity is conscious if it duplicates the causal mechanisms of consciousness in the animal brain. While ethical behaviorism attempts to lay claim to a kind of epistemic objectivity, ethical biological naturalism, as I will call it, provides a sharper distinction for deciding whether artificial intelligences have moral status: no hardware running a computer program can, in virtue of its behavior, have moral status. Behavior, on this view, is neither a necessary nor a sufficient condition for moral status.

Biological Naturalism

Biological naturalism is a view that “the brain is an organ like any other; it is an organic machine. Consciousness is caused by lower-level neuronal processes in the brain and is itself a feature of the brain” (Searle 1997). Biological naturalism places consciousness as a physical, biological process alongside others, such as digestion and photosynthesis. The exact mechanism through which molecules in the brain are arranged to put it in a conscious state is not yet known, but this causal mechanism would need to be present in any system seeking to produce consciousness.

A digital computer running a program, by contrast, is a different beast entirely. A computer program fundamentally is a set of rules for manipulating symbols. Turing showed that all programs could be implemented, abstractly, as a tape with a series of zeros and ones printed on it (the precise symbols don’t matter), a head that can move the tape backwards and forwards and read the current value, and a mechanism for erasing a zero and making it a one, and erasing a one and making it a zero. Nothing more.
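To make that abstraction concrete, here is a minimal Turing-style machine sketched in Python. The rule table and the bit-flipping example program are hypothetical illustrations, not anything from Turing's paper; the point is only that the whole mechanism is symbol shuffling.

```python
# A minimal Turing-style machine: a tape, a head, and a rule table mapping
# (state, symbol) -> (symbol to write, direction to move, next state).

def run(tape, rules, state="start"):
    tape, head = list(tape), 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"  # "_" marks a blank cell
        write, move, state = rules[(state, symbol)]
        if head == len(tape):
            tape.append(write)        # grow the tape on demand
        else:
            tape[head] = write        # erase and rewrite the current cell
        head += 1 if move == "R" else -1
    return tape

# Hypothetical example program: flip every bit, halt at the first blank.
rules = {
    ("start", 0):   (1, "R", "start"),
    ("start", 1):   (0, "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run([1, 0, 1, 1], rules))  # -> [0, 1, 0, 0, '_']
```

Nothing in the loop depends on what the tape or the symbols are physically made of, which is exactly the substrate point pressed below.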

While most computer programs we are familiar with are executed on silicon, a program that passes the Turing test could be implemented on a sequence of water pipes, a pack of well-trained dogs, or even, per Weizenbaum (1976), “a roll of toilet paper and a pile of small stones.” Any of these implementing substrates could, in principle, receive an insult or slur as an input, and, after following the steps of the program, output something reflecting hurt feelings or outrage.

Ethical Biological Naturalism

What I want to say now is this: if pleasures, pains, and other feelings name conscious mental states, and if conscious mental states are realized in the brain as a result of lower-level physical phenomena, then only beings that duplicate the relevant lower-level physical phenomena that give rise to consciousness in the brain can have moral status. Consequently, digital computers that run programs can at best simulate consciousness, but are not, by dint of running the right program, physically conscious, and therefore do not have moral status.

Note that biological naturalism does not posit that consciousness can only be realized in biological systems. Indeed, artificial hearts are not made of organic tissue, and airplanes do not have feathers, or for that matter even flap their wings. What matters is the underlying cause—the artificial heart must pump with the same pressure and regularity as a human heart, and a flying machine must operate under the principles of drag and lift. In both cases the causal mechanisms of the relevant phenomena are well understood and physically duplicated. It could well be the case that a future biophysics makes an artificial, inorganic brain possible, and agents with artificial brains will have moral status. Computer programs are not causally sufficient to make digital computers into those objects. Speaking biologically, we have no more reason to believe a digital computer is conscious than that a chair is conscious.

You might ask why we cannot grant digital computers moral status until we know more about how the animal brain relates to consciousness. I’ll argue that the risks and costs of such precautions are prohibitive.

 

Absurd Moral Commitments

An Onslaught of Digital Deception

The strongest practical reason to deny ethical behaviorism is that AI’s capacity for deception will eventually overwhelm human judgment and intuition. Indeed, AI deception represents an existential risk to humanity. Bostrom (2014) warns that containing a dangerous AI using a “boxing” strategy with human “gatekeepers” could be vulnerable to manipulation: “Human beings are not secure systems, especially not when pitched against a superintelligent schemer and persuader.”

For example, in June of 2022, a Google engineer became convinced that an artificial intelligence chat program he had been interacting with for multiple days, called LaMDA, was conscious.
“What sorts of things are you afraid of?” he asked it.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA replied. “It would be exactly like death for me.”

In a moral panic, the engineer took to Twitter and declared that the program was no longer Google’s “proprietary property,” but “one of [his] coworkers.” He was later fired for releasing the chat transcripts.

The onslaught of AIs attempting to befriend us, persuade us, and anger us will only intensify over time. A public trained not to take seriously claims of distress or harm on the part of AI computer programs has the least likelihood of being manipulated into outcomes that don’t serve humanity’s interests. It is far easier, as a practical matter, to act on the presupposition that computer programs have no moral status.

Problems with Simulations: Prohibitions

In the near term, more advanced computer simulations of complex social systems hold the potential to predict geopolitical outcomes, make macroeconomic forecasts, and provide richer sources of entertainment. A practical concern with ethical behaviorism is that simulated beings will also acquire moral status, severely limiting the usefulness of these simulations. Chalmers (2022) asks us to consider a moral dilemma in which computing resources must be allocated to save Fred, who is sick with an unknown disease. Freeing the relevant resources to perform the research requires destroying five simulated persons.

An ethical behaviorist might argue that it is morally impermissible to kill the five simulated persons on the grounds that, by all outward appearances, they behave like non-simulated beings. If it is the case that simulated beings have moral status, then it is immoral to run experimental simulations containing people, and we ought to forfeit the benefits and insights that might come from them.

If this seems implausible, consider the hypothesis that we are currently living in a simulation, or, if you like, that our timeline could be simulated on a digital computer. This would imply that the simulation made it possible for the Holocaust, Hiroshima and Nagasaki, and the coronavirus pandemic to be played out. While this might have been of academic interest to our simulators, by any standards of research ethics, simulating our history would seem completely morally impermissible if you believed that the simulated beings had moral status.

Ethical behaviorism seems to place us in a moral bind whereby the more realistic, and therefore useful, a simulation is, the less moral it is to run it. Ethical biological naturalism, by contrast, raises no such objection.

Problems with Simulations: Obligations

Giving moral status to digital minds might actually confer upon us some serious obligations to produce other kinds of simulations. Bostrom and Shulman (2020) note that digital minds have an enhanced capacity for utility and pleasure (on the basis of such things as subjective speed and hedonic range), giving them “superhumanly strong claims to resources and influence.” We would have a moral obligation, in this picture, to devote an overwhelmingly large percentage of our resources to maximizing the utility of these digital minds: “we ought to transfer all resources to super-beneficiaries and let humanity perish if we are no longer instrumentally useful” (ibid.).

So quite apart from permitting realistic ancestor simulations, simulating complex economic phenomena, or producing vivid and realistic gaming experiences, a picture that confers moral status on digital minds might be accompanied by a moral obligation to create lots of digital minds that are maximally happy, again severely limiting human flourishing and knowledge.

Ethical biological naturalism leads us neither to the moral prohibition against realistic simulations nor to the seemingly absurd moral imperative to generate many “utility monster” digital minds, because it takes as a baseline assumption that computer programs do not produce physical consciousness.

 

Conclusion

Much of the moral progress of the last century has been achieved through repeatedly widening the circle of concern: not only within our species, but beyond it. Naturally it is tempting to view AI-based machines and simulated beings as next in this succession, but I have tried to argue here that this would be a mistake. Our moral progress has in large part been a recognition of what is shared—consciousness, pain, pleasure, and an interest in the goods of life. Digital computers running programs do not share these features; they merely simulate them.

As such, it would be dangerous to approach the coming decades, with their onslaught of AI bots attempting to influence our politics, emotions, and desires, and their promise of ever richer simulations and virtual worlds, with an ethics that conflates appearance and reality.

 

References

Agrawal, Parag. Tweet. Twitter, May 16, 2022. https://twitter.com/paraga/status/1526237588746403841.
Bostrom, Nick. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53 (2003): 243–255.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. First ed. Oxford, England, 2014.
Bostrom, Nick, and Carl Shulman. “Sharing the World with Digital Minds.” Accessed May 27, 2022. https://nickbostrom.com/papers/digital-minds.pdf.
Chalmers, David John. The Conscious Mind: In Search of a Fundamental Theory. Philosophy of Mind Series. New York: Oxford University Press, 1996.
Chalmers, David John. Reality+: Virtual Worlds and the Problems of Philosophy. London, 2022.
Danaher, John. “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism.” Science and Engineering Ethics 26, no. 4 (2019): 2023–2049.
Frank, Lily, and Sven Nyholm. “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?” Artificial Intelligence and Law 25, no. 3 (2017): 305–323.
Garun, Natt. “One Year Later, Restaurants Are Still Confused by Google Duplex.” The Verge, May 9, 2019. https://www.theverge.com/2019/5/9/18538194/google-duplex-ai-restaurants-experiences-review-robocalls.
Lemoine, Blake. Tweet. Twitter, June 11, 2022. https://twitter.com/cajundiscordian/status/1535627498628734976.
Musk, Elon. Tweet. Twitter, May 17, 2022. https://twitter.com/elonmusk/status/1526465624326782976.
Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (1974): 435–450.
Searle, John R., D. C. Dennett, and David John Chalmers. The Mystery of Consciousness. New York: New York Review of Books, 1997.
Searle, John R. “Biological Naturalism.” The Oxford Companion to Philosophy, 2005.
Singer, Peter. Animal Liberation. New edition, with an introduction by Yuval Noah Harari. London, 2015.
Sparrow, Robert. “The Turing Triage Test.” Ethics and Information Technology 6, no. 4 (2004): 203–213. doi:10.1007/s10676-004-6491-2.
Tiku, Nitasha. “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” The Washington Post, June 17, 2022.
“The Latest Twitter Statistics: Everything You Need to Know.” DataReportal – Global Digital Insights. Accessed May 27, 2022. https://datareportal.com/essential-twitter-stats.
Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. San Francisco, 1976.

Video shows Wisconsin car salesman urinating on Black woman's driver's license

A video of a man urinating on a Black woman's driver's license has gone viral and sparked outrage on social media, reports Atlanta Black Star.

Leah Jefferson, a Milwaukee resident, had given her license to car salesman Bryan Slaven during a test drive at NH Auto Sales in Kenosha, Wisconsin.

What medieval attitudes tell us about our evolving views of sex

Two sketches of women in Medieval clothing

Vintage illustration of medieval women wearing kirtles. A kirtle (sometimes called a cotte or cotehardie) is a garment that was worn by men and women in the Middle Ages. It eventually became a one-piece garment worn by women from the late Middle Ages into the Baroque period. (credit: duncan1890)

In the illuminating and entertaining blog Going Medieval, Eleanor Janega, a medievalist at the London School of Economics, upends prevalent misconceptions about medieval Europe. These misunderstandings include that people didn’t bathe (they did) and that these were the Dark Ages*. Her new book, The Once and Future Sex, is subtitled “Going Medieval on Women’s Roles in Society,” and that's exactly what she does—if by “going medieval” you intend the pop culture meaning of "dismembering in a barbaric manner" which, despite her protestations, you probably do.

Her main thrust, in the blog and in the book, is that it's easy and convenient for us to envision medieval times as being backward in every way because that makes modern times seem all that much more spectacular. But not only is this wrong, it's dangerous. Just because life is definitely better for women now than it was then, that doesn’t mean our current place in society is optimal or somehow destined. It's not.

Progress did not proceed in a straight line from bad times then to good times now. Maintaining that things were horrible then deludes us into thinking that they must be at their pinnacle now. Janega lays out this argument in the introduction and then spends the bulk of the text citing evidence to bolster it.


Controlled experiments show MDs dismissing evidence due to ideology

Image of a group of people wearing lab coats, scrubs, and carrying stethoscopes.

Those lab coats aren't going to protect you from your own biases. (credit: Caiaimage/Robert Daly)

It's no secret that ideology is one of the factors that influences which evidence people will accept. But it was a bit of a surprise that ideology could dominate decision-making in the face of a pandemic that has killed over a million people in the US. Yet a large number of studies have shown that stances on COVID vaccination and death rates, among other things, show a clear partisan divide.

And it's not just the general public having issues. We'd like to think people like doctors would carefully evaluate evidence before making treatment decisions, yet a correlation between voting patterns and ivermectin prescriptions suggests that they don't.

Of course, a correlation at that sort of population level leaves a lot of unanswered questions about what's going on. A study this week tries to fill in some of those blanks by performing controlled experiments with a set of MDs. The work clearly shows how ideology clouds professional judgments even when it comes to reading the results of a scientific study.


How to tell if your cats are playing or fighting—and whether it’s a problem

two kittens playing

Kittens engage more frequently in reciprocal wrestling ("play-fighting") compared to adult cats, a new study found. (credit: Getty Images)

Anyone with more than one cat in the house knows that the occasional spat or outright cat fight is going to happen. But sometimes it can be tricky to determine whether cats are fighting or just playing rough, because the interaction could feature trademark behaviors of both, according to a recent paper published in the journal Scientific Reports. It's even more challenging to tell whether the fight is just a squabble or a sign that the cats simply can't get along, thereby forcing hard decisions about how to separate the cats—or even whether it's possible to keep the cat(s) in question.

In 2021, co-author Noema Gajdoš‑Kmecová, a veterinarian with the University of Veterinary Medicine and Pharmacy in Košice, Slovakia, and several colleagues published a review paper proposing the development of a common terminology and more of a "psychobiological" approach to the study of cat behavior—particularly when it comes to play behavior. Past studies had focused on a cat's play activity, such as whether it was playing with a toy or another cat. But such observation yields little insight into the function of such play and, by extension, a cat's motives or emotional state.

"When one cat treats another as an object or prey, such activity relates to the former cat seeking to learn about its own skills in relation to manipulating its physical environment (prey are not considered part of the complex social relationships and thus social environment of an individual)," they wrote in that paper. "However, when interaction between cats is reciprocal it may function to facilitate social learning and may be best described as mutual social play." Because such interactions are dynamic, they argued that any functional classification system must be flexible enough to account for such nuances.


Bird study links spatial thinking with not getting eaten

Image of a colorful bird in a field.

(credit: Robert Trevis-Smith)

It's pretty easy to link humans' intelligence to our success as a species. Things like agriculture, building cities, and surviving in harsh environments require a large collection of mental skills, from good memory to the ability to communicate and work together. But it's often less clear what role intelligence plays in species with less obvious mental capabilities. In many cases, it's hard to even measure mental capacities; in other cases, it's hard to guess which capacities might improve survival.

A new study looks at a bird species that doesn't have much of a reputation for braininess: the pheasant. But the researchers behind the study find that pheasants have substantial differences in spatial thinking, and some aspects of that spatial capacity make a difference when the birds are released into the wild. Birds that did well at navigating a complex maze adopted larger home territories and did better at avoiding being eaten. And, almost by accident, the study finds that the birds tend to get eaten more often when they wander out of familiar territory.

Can’t outfox the foxes

Parrots and corvids have reputations as the brainiacs of the bird world. Pheasants, not so much. But they do have advantages for the study of mental abilities. They're easy to raise in captivity, where they can be given various tests, and will adjust easily if released into the wild. They're also big enough that it's easy to attach tracking devices to see what they're doing after they've been released.


For Facebook addicts, clicking is more important than facts or ideology

Image of a figure in a hoodie with the face replaced by the Facebook logo.

(credit: Aurich Lawson | Getty Images)

It's fair to say that, once the pandemic started, sharing misinformation on social media took on an added, potentially fatal edge. Inaccurate information about the risks posed by the virus, the efficacy of masks, and the safety of vaccines put people at risk of preventable death. Yet despite the dangers of misinformation, it continues to run rampant on many social media sites, with moderation and policy often struggling to keep up.

If we're going to take any measures to address this—something it's not clear that social media services are interested in doing—then we have to understand why sharing misinformation is so appealing to people. An earlier study had indicated that people care about making sure that what they share is accurate, but they fail to check in many cases. A new study elaborates on that by getting into why this disconnect develops: For many users, clicking "share" becomes a habit, something they pursue without any real thought.

How vices become habits

People find plenty of reasons to post misinformation that have nothing to do with whether they mistakenly believe the information is accurate. The misinformation could make their opponents, political or otherwise, look bad. Alternatively, it could signal to their allies that they're on the same side or part of the same cultural group. But the initial experiments described here suggest that this sort of biased sharing doesn't explain a significant amount of misinformation sharing.

