
Stay Clear of the Door

An AI door, according to a generative AI

Written by David Lyreskog 

 

In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma. 

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.

The other day a colleague of mine shared a YouTube video of the presentation The AI Dilemma, by Tristan Harris and Aza Raskin. In it, they share with the audience their concerns about the rapid and somewhat wild development of artificial intelligence (AI) in the hands of a few tech giants. I highly recommend it. (The video, that is. Not the rapid and somewhat wild development of AI in the hands of a few tech giants).

 

Much like the thousands of signatories of the March open call to “pause giant AI experiments”, and recently the “Godfather of AI” Geoffrey Hinton, Harris and Raskin warn us that we are on the brink of major (negative, dangerous) social disruption due to the power of new AI technologies.

 

Indeed, there’s a bit of a public buzz about “AI ethics” in recent months.

 

While it is good that there is a general awareness and a public discussion about AI – or any majorly disruptive phenomenon for that matter – there’s a potential problem with the abstraction: AI is portrayed as this big, emerging technological behemoth which we cannot or will not control. But it has been almost three decades since an AI first beat the human world champion at chess. We have been using AI for many things, from medical diagnosis to climate prediction, with little to no concern about it besting us and/or stripping us of agency in these domains. In other domains, such as driving cars and military applications of drones, there has been significantly more controversy.

All this is just to say that AI ethics is not for hedgehogs – it’s not “one big thing”[i] – and I believe that we need to actively avoid a narrative and a line of thinking which paints it to be. In examining the ethical dimensions of a multitude of AI inventions, then, we ought to take care to limit the scope of our inquiry to the domain in question at the very least.

 

So let us, for argument’s sake, return to that door at the Uehiro Centre, and the voice cautioning visitors to stay clear. Now, as far as I’m aware, the voice and the door are not part of an AI system. I also believe that there is no person who is tasked with waiting around for visitors asking for access, warning them of the impending door swing, and then manually opening the door. I believe it is a quite simple contraption, with a voice recording programmed to be played as the door opens. But does it make a difference to me, or other visitors, which of these possibilities is true?

 

We can call these possibilities:

Condition one (C1): AI door, created by humans.

Condition two (C2): Human speaker & door operator.

Condition three (C3): Automatic door & speaker, programmed by humans.

 

In C3, it seems that the outcome of the visitor’s action will always be the same after the buzzer is pushed or the key card is blipped: the voice will automatically say ‘stay clear of the door’, and the door will open. In C1 and C2, the same could be the case. But it could also be the case that the AI/human has been instructed to assess the risk for visitors on a case-by-case basis, and to only advise caution if there is imminent risk of collision or such (were this the case, I evidently stand consistently too close to the door when visiting, but that is beside the point).

 

On the surface, I think there are some key differences between these conditions which could have an ethical or moral impact, where some differences are more interesting than others. In C1 and C2, the door opener makes a real-time assessment, rather than following a predetermined course of action in the way C3’s door opener does. More importantly, C2 is presumed to make this assessment from a place of concern, in a way which is impossible in C1 and C3 because the latter two are not moral agents, and therefore cannot be concerned. They simply do not have the capacity. And our inquiry could perhaps end here.

But it seems it would be a mistake.

 

What if something was to go wrong? Say the door swings open, but no voice warns me to stay clear, and so the door whacks me in the face[ii]. In C2, it seems the human whose job it is to warn me of the imminent danger might have done something morally wrong, assuming they knew what to expect from opening the door without warning me, but failed to do so due to negligence[iii]. In C1 and C3, on the other hand, while we may be upset with the door opener(s), we don’t believe that they did anything morally wrong – they just malfunctioned.

 

My colleague Alberto Giubilini highlighted the tensions in the morality of this landscape in what I thought was an excellent piece arguing that “It is not about AI, it is about humans”: we cannot trust AI, because trust is a relationship between moral agents, and AI does not (yet) have the capacity for moral agency and responsibility. We can, however, rely on AI to behave in a certain way (whether we should is a separate issue).

 

Similarly, while we may believe that a human should show concern for their fellow person, we should not expect the same from AIs, because they cannot be concerned.

 

Yet, if the automatic doors continue to whack visitors in the face, we may start feeling that someone should be responsible for this – not only legally, but morally: someone has a moral duty to ensure these doors are safe to pass through, right?

 

In doing so, we expand the field of inquiry, from the door opener to the programmer/constructor of the door opener, and perhaps to someone in charge of maintenance.

 

A couple of things pop to mind here.

 

First, when we find no immediate moral agent to hold responsible for a harmful event, we may expand the search field until we find one. That search seems to me to follow a systematic structure: if the door is automatic, we turn to call the support line, and if the support fails to fix the problem, but turns out to be an AI, we turn to whoever is in charge of support, and so on, until we find a moral agent.

 

Second, it seems to me that, if the door keeps slamming into visitors’ faces in condition C2, we will not only morally blame the door operator, but also whoever left them in charge of that door. So perhaps the systems-thinking applies not only when there is a lack of moral agents, but also, on a more general level, when we are de facto dealing with complicated and/or complex systems of agents.

 

Third, let us conjure a condition four (C4) like so: the door is automatic, but in charge of maintenance support is an AI system that is usually very reliable, and in charge of the AI support system, in turn, is a (human) person.

 

If the person in charge of an AI support system that failed to provide adequate service to a faulty automatic door is to blame for anything, it is plausibly for not adequately maintaining the AI support system – but not for whacking people in the face with a door (because they didn’t do that). Yet, perhaps there is some form of moral responsibility for the face-whacking to be found within the system as a whole. That is, the compound of door, AI, human, and so on has a moral duty to avoid face-whacking, regardless of any individual moral agent’s ability to whack faces.

 

If this is correct, it seems to me that we again[iv] find that our traditional means of ascribing moral responsibility fails to capture key aspects of moral life: it is not the case that any agent is individually morally responsible for the face-whacking door, nor are there multiple agents who are individually or collectively responsible for the face-whacking door. Yet, there seems to be moral responsibility for face-whacking doors in the system. Where does it come from, and what is its nature and structure (if it has one)?

 

In this way, it seems that not only cognitive processes such as thinking and computing can be distributed throughout systems, but perhaps also moral capacities such as concern, accountability, and responsibility.

And in the end, I do not know to what extent it actually matters, at least in this specific domain. Because at the end of the day, I do not care much whether the door opener is human, an AI, or automatic.

 

I just need to know whether or not I need to stay clear of the door.

Notes & References.

[i] Berlin, I. (2013). The hedgehog and the fox: An essay on Tolstoy’s view of history. Princeton University Press.

[ii] I would like to emphasize that this is a completely hypothetical case, and that I take it to be safe to enter the Uehiro Centre. The risk of face-whacking is, in my experience, minimal.

[iii] Let’s give them the benefit of the doubt here, and assume it wasn’t maleficence.

[iv] Together with Hazem Zohny, Julian Savulescu, and Ilina Singh, I have previously argued this to be the case in the domain of emerging technologies for collective thinking and decision-making, such as brain-to-brain interfaces. See the Open Access paper Merging Minds for more on this argument.

Earth, Air, Fire, Water, and Entropy: Internet and Synthetic Biology Pioneer Randy Rettberg’s Story on How Information Was Forged

Our first encounter with Randy Rettberg was somewhat surreal. Not that the others weren’t—the sui generis atmosphere is always present—but that first meeting was set in a scenario so far from our everyday reality that it felt like we’d been thrown into a science fiction novel. It happened in 2022 and we were a bit disoriented after ten hours of transatlantic travel and two hours riding Bentleys to the British countryside. It was July, and we had left the cold and dry wind of our almost never harsh Brazilian winter to find a pleasant summer sun that gently bathed the English lands. The people there were in a good mood and smiling. Someone told us that it was an atypical moment, that life was not so bright most of the time. We got lucky. At least the weather made us feel a little bit at home, but only that.

We were invited to participate in a workshop named “Safe, Secure, & Responsible Synthetic Biology Beyond Containment,” as part of a group of around 30 people, including biotechnology students, government regulators from around the world, union people, and scholars. We stayed on a 2,400-hectare property called Wilton Park, in a building that reminded us of a castle—in reality, of course, a Victorian mansion named Wiston House. The event was jointly organized by the British Foreign and Commonwealth Office and the International Genetically Engineered Machine (iGEM) Foundation, the independent, non-profit organization of which Randy—who was also attending the workshop—is president and founder. We got to know iGEM while we were undergraduates at the University of São Paulo, where we participated in the student-organized Synthetic Biology Club. Clarissa was carrying out fieldwork as an anthropologist with the club’s participants, and Érico was one of them. Participation in international competitions was one of the club’s main activities, and iGEM was one of those competitions. Created in 2003 as a spin-off of the MIT Registry of Standard Biological Parts, the international iGEM competition aims to promote the international development of synthetic biology, engaging students, young scientists, and established scientists around the world.

At that first meeting, in the impressive Victorian mansion full of old paintings of men dressed in strange clothes and with menacing looks, we had the opportunity to talk with Randy about his participation in the development of the internet and about the connections between this previous experience and his interest in synthetic biology. A few months later, on an October afternoon, we were able to record a conversation lasting more than two hours in Randy’s office at iGEM’s offices in Paris. Both meetings were made possible because Clarissa had been hired as a Human Practices Summer Fellow at the iGEM Foundation, working with a team assigned to develop projects and research on responsible practices and synthetic biology, while Érico actively participated as a volunteer in iGEM activities involving biosafety and biosecurity.

Randy is an enigmatic and extraordinary figure. He worked on a range of exciting and society-changing projects, including important participation in the ARPANET[1] project while working at Bolt, Beranek and Newman (BBN). There he worked on the first internet routers and packet-switching protocols, as well as on parallel and distributed computing. Machines he helped create would be used to coordinate US military satellites and to handle what would become internet routing. He would then move to Apple Computer and to Sun Microsystems—two other leading companies in the personal computer and internet revolution—before joining MIT. Falling in love with synthetic biology through his long-time friend Tom Knight, co-founder of the publicly traded synthetic biology company Ginkgo Bioworks, Randy was invited to direct the MIT Registry of Standard Biological Parts, which would later spin off to create the iGEM Foundation.

Randy’s transition from the development of the internet to becoming a prominent figure in synthetic biology is something that has always caught our attention, as the internet carries with it important constituent elements of synthetic biology itself. We grew up along with the development of the internet. We lived our childhood in a world that no longer exists, nor will it ever exist again. We were formed in a cyberpunk broth, and perhaps due to the savagery of our condition as inhabitants of a forest city,[2] we were never able to ignore the intrusion of nature. Our curiosity to understand more about the roots of our roots—cybernetic and biological—led us to dig into the history of the internet with countercultural tools. We read books like “Neuromancer” and watched movies like “The Net: The Unabomber, LSD and the Internet” and “Wax or the Discovery of Television Among the Bees.” It was from this cyber-bio-punk reference that we approached Randy and formulated our questions for him.

The exercise of listening to the trajectories of scientists is very interesting for an anthropology of science and technology based on the notion of situated knowledges proposed by Donna Haraway. By turning to the memories of scientists from an anthropological perspective, we are able to situate techno-scientific work in a given space and time and in relation to broader historical and social processes. At the same time, working with biographies and memories of scientists also makes us capable of bringing to the surface dimensions that account for the specificities of each trajectory. When questioned by us about the origins of the concept of information, Randy alternates between great historical events, such as the Second World War, memories of his work in laboratories, and intimate family memories. This complexity of the web of scientists’ memories is very interesting as raw material. For us, peripheral researchers from the global south who practice science and technology studies as a way of imagining different possible worlds, opening listening spaces in hegemonic places of knowledge production—especially those linked to what is understood as the “frontier” of science, such as synthetic biology—allows us a certain smuggling between different realities, a true exercise of anthropological alterity.

Randy’s Early Internet Days

“Randomness must be in there, right? And you kind of think this is like earth, air, fire, and water. Those are the elements for a long time. Those were the elements.”

Randy Rettberg was born in 1948. He began the interview telling us that while he was growing up in rural Illinois in the 50s, several things drew his attention towards science and technology. His father, who was very religious (Randy’s grandfather was a Lutheran minister) and had been a prisoner of war in Japan during World War II, came back to the US and, thanks to the GI Bill,[3] obtained a degree in Architecture, working on many urban buildings—schools, hospitals, prisons—after graduation. He says that his childhood and teenage years were lived in a “small world” where the most complicated machines were farm machines, though his world kept expanding in several directions as he came into contact with initiatives fostering curiosity and engagement in science and technology—from Bell Labs[4] films and pictures promoting their own technologies and marvelous inventions, to do-it-yourself science kits that you could buy from magazines. Randy remembers several science-fueled teenage adventures, like building a radio from one of these kits, playing with chemical reagents with a friend whose father had a pharmacy, building a tin-can telephone network in the backyard, and playing with a huge recorder, encased in a suitcase, that he bought with money from selling newspapers door-to-door in the 7th grade. The television, a very “fancy” machine at the time, would bring technological tales as well. A friend’s father was a professor of Physics at the University of Illinois, so Randy and his friend would spend a huge amount of time playing on an electronic prototyping board with switches and lights that could be reassembled to create different combinations of button and light activations. Two other important childhood memories were how computers were beginning to feature in the public imagination at the time—as huge and expensive machines with buttons and flashing lights—and the launch of the Soviet satellite Sputnik[5] in 1957.

In Rettberg’s account, his world definitely expanded widely when he joined MIT in 1965. While during his basic education teachers would often repress his curiosity, at MIT it was the opposite: curiosity was rewarded and was the norm. Suddenly teachers would consider “taking things a level down” when searching for answers on a specific topic. Another thing Randy remembers from this time was his first intense contact with a real computer. This computer was the size of a room and could be used by university staff, who had individual accounts and could reserve computing time slots. He describes the operating interface as “a big big tube and a light pen.”

When Randy graduated, the Vietnam War was raging and he didn’t want to fight in it, so he went back to Illinois to get a Master’s degree in Physics, which he describes as a “really really hard” experience because of the complexity of the math involved. After obtaining his MSc, he contacted Nicholas Negroponte[6] from MIT’s Architecture Machine Group[7] and was hired as a “computer guy.” He operated an Interdata Model 3, a business computer that was already “small,” about the size of a desk. Randy remembers how “slow” it was: only 30 thousand instructions a second.[8] From Negroponte he heard of Bolt, Beranek and Newman (BBN), a government contractor[9] that at the time managed several groups of highly motivated scientists and engineers working on very exciting projects at the edge of science and technology. Randy says BBN was created by three MIT professors who were renowned specialists in acoustics and began working for the Department of Defense in that field, but soon started providing services related to other fields of science and technology, receiving several government contracts, including from the Advanced Research Projects Agency (ARPA).[10] In 1972, Rettberg managed to get interviews with different teams within BBN and was invited to join one of the teams building the ARPANET project together with people from MIT Lincoln Labs.[11]

In Randy’s words, ARPANET at the time was “a four node network. It was the first packet switching network[12] and it was four different nodes connected together by 50 KB links. So we started very slow, with teletype[13] terminals, 10 characters per second.” Randy recalls that the group had very interesting ideas about transforming and transporting information reliably. For example, there was the idea that systems fail often, so there must be ways for interconnected information-processing systems to check the integrity of sent and received information. From this idea the Transmission Control Protocol, one of the backbones of the modern internet, would be born. This in turn needed to be coupled to a decentralized network—so it could withstand and route around problems in individual nodes of the network, such as a power outage or a military attack—and this decentralized network had to be able to be composed of machines from different manufacturers that would follow, in hardware and in software, certain common procedures and standards ensuring compatibility and communicability between any type of device able to follow them.
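
To make that reliability idea concrete, here is a minimal sketch in Python (our own illustration; the function names and the CRC32 checksum are assumptions for clarity, not ARPANET’s or TCP’s actual mechanism): the sender attaches a checksum so the receiver can detect corruption, and keeps retransmitting until the receiver acknowledges an intact copy.

```python
import zlib
from typing import Callable, Optional

def make_frame(payload: bytes) -> bytes:
    """Prepend a CRC32 checksum so the receiver can detect corruption."""
    return zlib.crc32(payload).to_bytes(4, "big") + payload

def verify_frame(frame: bytes) -> Optional[bytes]:
    """Receiver side: return the payload if the checksum matches, otherwise None."""
    checksum, payload = frame[:4], frame[4:]
    return payload if zlib.crc32(payload).to_bytes(4, "big") == checksum else None

def send_reliably(payload: bytes,
                  transmit: Callable[[bytes], None],
                  await_ack: Callable[[], bool],
                  max_tries: int = 5) -> bool:
    """Stop-and-wait style sender: retransmit until the receiver acknowledges."""
    frame = make_frame(payload)
    for _ in range(max_tries):
        transmit(frame)      # hand the frame to the (unreliable) network
        if await_ack():      # receiver verified the checksum and said "got it"
            return True
    return False             # give up after too many failed attempts
```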

According to Randy—building on the idea of bit encoding from Shannon and the early information pioneers[14]—some of the really innovative ideas regarding information transfer were related to packet switching. The use of a network of interconnected nodes (composed of digital computers that collaboratively and decentrally discover possible routes for the information to travel on) and the establishment of protocols designed for the computers to speak a common language (one that could be understood by computers from different manufacturers) was how the ARPANET team chose to solve the problem of moving information from one place to another. Prior to travel, however, the information needed to be encoded and packaged into what would become the “network packet.” The network packet would contain the actual information users wanted to transfer plus an additional “header,” a complementary message containing “control information”—needed for the nodes of the network to find the best routes for the information and to effectively forward the messages from one node to another once the best possible route was found. The combination of all the protocols and ideas above would form a “packet-switching network.”
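
As a rough illustration of the header-plus-payload idea, the sketch below models a generic packet in Python. The field names and sizes are our own choices for clarity, not the actual ARPANET or IP header layout.

```python
from dataclasses import dataclass

@dataclass
class Header:
    """Control information read by the nodes, not by the end user."""
    source: str       # address of the sending node
    destination: str  # address of the receiving node
    sequence: int     # position of this packet within the full message
    total: int        # how many packets make up the whole message

@dataclass
class Packet:
    header: Header    # used to route and reorder the packet
    payload: bytes    # the actual data the user wants to transfer

def split_into_packets(message: bytes, src: str, dst: str, size: int = 512) -> list[Packet]:
    """Break a message into fixed-size chunks, each wrapped in its own packet."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)] or [b""]
    return [Packet(Header(src, dst, seq, len(chunks)), chunk)
            for seq, chunk in enumerate(chunks)]
```

A node along the route only needs to read the header to decide where to forward the packet; the payload can be any digitally encoded content.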

Rettberg emphasizes that prior to ARPANET, data could be sent from one place to another, but the task required specific and expensive equipment. By his account, even in academia and in the telecom industry most people believed that things had their own essences and that, to transfer information, these “essences” had to be transmitted. For example, music was composed of sound waves, so the only way to transfer music was to physically reproduce the sound waves from the transmitter to the receiver—and that would require special equipment for each type of “essential” information. From ARPANET on, everyone with a digital computer, peripheral equipment, and a common phone line could be connected to every other person with a similar setup and transfer any type of information, such as audio, video, or text, in digital format—a format that would subsume the idea of the specific “essences” of each type of information, replacing it with the concept of “digitally encoded” information, where everything that can be represented can also be digitally represented.
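
A small illustration of that last point (our own example, not from the interview): once text, sound samples, or pixels are digitally encoded, they are all just byte sequences, and the same packet-switching machinery can carry any of them.

```python
import struct

text_bytes  = "stay clear of the door".encode("utf-8")   # text as bytes
audio_bytes = struct.pack("<4h", 0, 1200, -1200, 0)       # four 16-bit audio samples as bytes
pixel_bytes = bytes([255, 0, 0,  0, 255, 0,  0, 0, 255])  # three RGB pixels as bytes

# The network does not care which "essence" the data had: it only sees bytes.
for name, data in [("text", text_bytes), ("audio", audio_bytes), ("pixels", pixel_bytes)]:
    print(f"{name}: {len(data)} bytes")
```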

Randy told us two or three times that he and most of his colleagues on the project were against the war in Vietnam and were heavily influenced by the rock and roll movement, which forms a contradictory backdrop to the design of these ideas. Into ARPANET the engineers embedded a diffuse but real resistance to central control and authority, funded by the military itself. With this new technology, the United States military sector would transform itself towards a decentralized informational entity capable of operating anywhere on earth. In a prior conversation, Randy told us that at times the technoscientific problem presented for the team to solve was straightforwardly military, such as the coordination of military satellites and the livestreaming of video and audio between them. In fact, the network transfer of audio and video for the military was one of the first purposes of the computer Rettberg helped create in the ARPANET project, the BBN Butterfly. BBN itself was brought to the ARPANET project because of the company’s renowned acoustical know-how.

The BBN Butterfly is considered a wonder of the early digital computers. It was one of the first of the modern “supercomputers.” It used commercially available digital processors from Motorola, and each machine had up to 512 of these 12–33 MHz processing units. It was first programmed to act as a “router” machine in DARPA’s Wideband Packet Satellite Network in the late 1970s, making possible a continuous 3 Mbit/s broadcast of digital data—mainly audio and video—among multiple US military bases. The machine would later be used in the Terrestrial Wideband Network, a network that physically connected several Department of Defense facilities through high-speed data cables from the late 1980s to 1991. From 1991 onward, the BBN Butterfly was the computer used for the first internet routers, implementing in hardware and in software the first version of the Internet Protocol (IP).

A photo of an electronic chip with gray, black, and pink components

A “die image” (a photograph of the internal parts of an electronic chip) of the Motorola 6800, the processor used by the first ARPANET routers, including the BBN Butterfly mentioned by Randy. (Photo by Birdman86 at commons.wikimedia.org)

It is funny to note that while telling us all of the above, Randy—who had a Lutheran minister grandfather and a “very religious” father—refers multiple times to religion as a way of explaining how, before all this, ideas about information were kind of mystical and quintessential. We had the impression that, for Randy, the cybernetic revolution which he took part in was almost like a new step in the human relationship with the universe. He compares cybernetics, for example, to the role of religion in English literature, where religion formed the backbone of the whole. For Randy, cybernetics is the backbone of our current mode of existence and of understanding the world we live in: in his distinctive atheist mystical language, he likens the development of cybernetics to the addition of entropy[15] to the four “original” elements, earth, fire, air, and water.

In a future blog post, we will describe the second half of the interview, on Randy Rettberg’s transition from early internet pioneer to early synthetic biology pioneer. The next blog post also takes a deeper look at Randy’s view of how cybernetics is connected to synthetic biology and to science and technology in general. Until next time!

Notes

[1] ARPANET was an Advanced Research Projects Agency (ARPA) project aimed at creating a network that would interconnect all US military bases and Department of Defense facilities in a way that information could be securely and effectively shared between them. The ARPANET project created most of the currently used internet protocols, for example, the Transmission Control Protocol (TCP) and the Internet Protocol (IP). ARPANET was the prototype of what would become the internet.

[2] São Paulo is the financial capital of Brazil, a city surrounded and restrained by both the Atlantic Forest and the booming agribusiness.

[3] The GI Bill, formally the Servicemen’s Readjustment Act of 1944, was a US law aimed at rewarding war veterans for their participation in World War II. Through this law, war veterans had a facilitated path to higher and technical education.

[4] Bell Labs, named after Alexander Graham Bell, was one of the first R&D-intensive companies in the world. It became a large and important government contractor, conducting research and development for the US government, especially the US military. Researchers from Bell Labs were responsible for the invention of several technologies that form the backbone of the contemporary industrial mode of living, among them the transistor, laser technology, the UNIX operating system, and photovoltaic cells.

[5] Sputnik was the first man-made satellite to be launched and to successfully orbit the Earth. It was launched by the Soviet Union on October 4, 1957. It was one of the events that started the space race.

[6] Nicholas Negroponte is best known as the founder of the MIT Media Lab (and, prior to that, of MIT’s Architecture Machine Group) and as an early internet evangelist; he was also one of the founders of WIRED magazine.

[7] In 1985, the lab would be reassembled into the now famous MIT Media Lab.

[8] Today a personal computer can run at 1–10 trillion operations per second. A GeForce GTX 1080 graphics card used in gaming today runs at 8.9 trillion floating-point operations per second (FLOPS, a unit used to measure computing speed).

[9] A government contractor is a private company that works under contracts to governments.

[10] The Advanced Research Projects Agency, now the Defense Advanced Research Projects Agency, is one of the most important US government institutions dedicated to the creation of new technologies that could be used for defense purposes. DARPA-funded projects include the modern jet engine, as well as the technologies behind integrated circuits, supercomputers, and the internet.

[11] Lincoln Labs is another R&D laboratory that works under government contracts. Founded in 1950 as the MIT Lincoln Laboratory, it recently spun off from MIT, becoming a private laboratory. Lincoln Labs is historically tied to the US Department of Defense, having developed the SAGE computer network in the 50s (the first military computers to be connected to one another and to have graphical user interfaces, used to coordinate radar sites around the US). Most of what Lincoln Labs does is classified, but we know they are also interested in synthetic biology because their staff participate in iGEM’s events.

[12] Packet switching is one of the theoretical bases of the current internet and of modern telecommunications. The idea behind it is to break information into small packets that travel independently across a shared network and are reassembled at the destination, following procedures that the machines must know and follow in order to ensure that the information has really been transmitted between them, even if problems arise due to an inconsistent connection.

[13] A teletype is an electromechanical device that could be used to send and receive messages to and from other teletypes and, later, to and from computers. Teletypes would then be used as computer interfaces, as Randy mentions here.

[14] Shannon – whose research was also funded by the US military – proposed the idea of encoding information as sequences of zeros and ones, which he called “binary digits,” or bits, in his 1948 paper “A Mathematical Theory of Communication.”

[15] In information theory, entropy measures the average amount of information contained in the possible outcomes of an event or message source.
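
For reference, a compact statement of Shannon’s standard definition (added here for clarity, not part of the original note): the information carried by an outcome with probability p is −log2 p bits, and the entropy of a source is the average of this quantity over all possible outcomes.

```latex
% Self-information of an outcome x and entropy of a source X
I(x) = -\log_2 p(x), \qquad
H(X) = -\sum_{i} p(x_i) \log_2 p(x_i) \ \text{bits}
```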

How Chess.com Became ‘the Wild West of the Streaming World’

During the pandemic, the Netflix series The Queen’s Gambit famously sparked a huge increase in chess interest — but one online chess platform turned what would have been a spike into a groundswell that eclipses even esports. Jessica Lucas traces how the royal game became a digital juggernaut.

By January 2023, Chess.com reported hitting over 10 million active players in a single day—more than the daily average of World of Warcraft, Grand Theft Auto, and Among Us combined—causing the site’s servers to crash. Its online schedule now features a who’s who of chess grandmasters who provide content for users almost 24 hours a day. A new class of chess celebrities, like sisters Alexandra and Andrea Botez, who recently surpassed 1 million Twitch followers on their joint account, and international master Levy Rozman, who has over 3 million YouTube subscribers, regularly appear on Chess.com.

A Vaccine Dispute Turns Deadly

A photo of Lesley Hu and her son Pierce O’Loughlin.

Eric Pape | The Atavist Magazine | March 2023 | 2,005 words (7 minutes)

This is an excerpt from issue no. 137, “Sins of the Father.” 


A small, good-natured boy named Pierce O’Loughlin was growing up between the homes of his divorced parents in San Francisco. Nine-year-old Pierce was accustomed to custody handoffs taking place at Convent and Stuart Hall, the Catholic school he attended. On changeover days, one parent dropped him off in the morning at the hilltop campus overlooking the bay, and the other picked him up in the afternoon. The parents avoided seeing each other. Their split had been ugly.

The Atavist Magazine, our sister site, publishes one deeply reported, elegantly designed story each month. Support The Atavist by becoming a member.

On the afternoon of January 13, 2021, Lesley Hu, Pierce’s mother, arrived at Convent and Stuart Hall for a scheduled pickup. Hu planned to take Pierce to a Coinstar machine to exchange a small bucket of coins for a gift card he could use to buy toys. Then they would go to dinner at a restaurant called House of Prime Rib, because Pierce loved to eat meat.

But Hu’s son wasn’t waiting for her at the school. Staff told her that he had been absent that day. They didn’t know why.

Another mom might have assumed that her child had a cold or that his dad had let him skip school and taken him somewhere fun for the day, but not Hu. She wondered if Pierce had been kidnapped—not by a stranger but by his own father.

Over the course of their marriage, Hu had watched as her now ex-husband, Stephen O’Loughlin, became obsessed with pseudoscience, self-help gurus, and conspiracy theories, spending long nights watching videos online, then sharing the details of fantastical plots with Hu, their friends, and people he barely knew. The COVID-19 pandemic had only made things worse. O’Loughlin huddled for hours at his computer streaming YouTube clips and poring over right-wing websites—what he called “doing research.”

One of O’Loughlin’s fixations was vaccines. He believed that Pierce had been damaged by the routine inoculations he received as a baby. O’Loughlin was adamant that the boy be given no more shots—not for COVID-19, when a vaccine was eventually authorized for kids, nor for any other disease.

In 2020, Hu had filed for the sole legal right to make decisions about her son’s medical care, which would empower her to vaccinate Pierce regardless of what her ex wanted. She felt good about her chances in court. On January 11, as a condition for a continuance he had requested in the medical custody case, O’Loughlin suddenly agreed to let Pierce receive two vaccinations. In retrospect, according to Hu’s attorney, Lorie Nachlis, “it all seemed too easy.”

When Hu discovered that Pierce wasn’t at school, she wondered if O’Loughlin had agreed to the vaccinations only because he was plotting to steal Pierce away before their son could receive them. To Hu it wasn’t improbable—her ex seemed that far gone.

Hu and her boyfriend, Jim Baaden, had recently decided to move in together; Hu was planning to tell Pierce the news that evening at dinner. Now Baaden picked Hu up at Pierce’s school, and together the couple sped to O’Loughlin’s home in San Francisco’s posh Marina District, trying not to dwell on worst-case scenarios.

When they arrived outside O’Loughlin’s Mediterranean-style apartment building, they noticed that the blinds in the living room, which was on the ground floor of the unit, were drawn but disheveled. For a moment, Baaden recoiled. O’Loughlin was a gun owner. What if he’d barricaded himself and Pierce in the apartment? Baaden imagined O’Loughlin aiming the barrel between the blinds, ready to shoot.

Baaden and Hu approached the building’s intercom and buzzed O’Loughlin’s apartment. No one answered. Hu began banging on the door to the building and screaming. She considered breaking in, but Baaden told her to call 911 instead.

Hu could not fathom how someone like O’Loughlin—a man of means and privilege—had come to believe outrageous lies. She knew that various misinformation networks and snake-oil salesmen had facilitated her ex’s paranoia and exploited his psychological fragility. But Hu had always stayed focused on what she considered her most important task: raising and protecting Pierce.

There would be time in the future to consider, almost endlessly, what happened to O’Loughlin. For now, in a panic, all Hu could do was wonder: Where had he taken their son?


A dozen years earlier, Stephen O’Loughlin was a very different man. At least he seemed to be when Hu first met him at an Italian wine bar. O’Loughlin, then in his mid-thirties, with a strong jaw and a slightly crooked smile, started chatting her up. He said that he was in finance and that he worked out. Hu, 28, wasn’t interested in his advances. She considered herself an independent woman. She worked in midlevel management and had served as the executive director of the Hong Kong Association of Northern California, a business group. The child of immigrants, she had aspirations to achieve more, to make her parents proud. Besides, she had gotten out of a long relationship recently, and she wasn’t at the bar looking for a date—she was there to cheer up a friend going through a tough time.

But O’Loughlin was persistent, and after several glasses of champagne, Hu decided that he was funny. He asked her charming if oddly specific questions: What was her favorite kind of wine? What sort of bottled water did she drink? As Hu prepared to leave, O’Loughlin asked for her number. She hesitated but gave it to him.

He texted to ask her out. She had a busy work schedule at her family’s company, which leased shipping containers, but O’Loughlin insisted that they find time to meet as soon as possible. When they did, he picked Hu up in a brand-new car stocked with her favorite water. A bottle of sparkling rosé she liked was waiting at the restaurant where they’d be dining. “He remembered everything I said the night we met,” Hu explained.

They began going out with friends for fun, alcohol-infused nights at clubs around San Francisco. O’Loughlin often brought Hu flowers. He was generous, picking up the tab on club nights and when dining out with Hu and her parents. “He was like that for months,” Hu recalled. “He said that he’d talked to his Asian friend and that he should be generous with my family.” Reaching for his wallet at the end of a meal, O’Loughlin would insist, “No, I’ve got this.” (Hu later learned that he’d been using his professional expense account.)

Early in their relationship, O’Loughlin, who grew up in Ridgefield, Connecticut, painted an incomplete picture of his parents and sister. His mother, he told Hu, was “the greatest person in the world.” He was more reserved when talking about his father. He said that he adored his two nieces, and when he and Hu visited the girls on the East Coast, O’Loughlin took them to Toys “R” Us and bought them whatever they wanted. “They were elated, so surprised,” Hu said. She told O’Loughlin she wanted kids of her own. He said he did, too.

Still, when O’Loughlin proposed after about a year of dating, Hu wasn’t sold on the idea. She didn’t like the way O’Loughlin, an arch conservative, got blustery when talking about politics. Hu, a Democrat, didn’t feel like he listened when she spoke about serious issues. O’Loughlin projected such certainty about their future as a couple, however, that Hu found herself saying yes to marriage.

Almost immediately after the engagement, O’Loughlin changed. The flowers, gifts, and other gestures of affection disappeared. He stopped paying for meals with Hu’s parents. Hu realized that O’Loughlin’s generosity had been transactional. He was a salesman by trade, peddling financial services for the firm Eaton Vance, and he brought the strategy of his job to his personal life: Once he landed a deal, he stopped spending time and energy on it.

Hu’s parents were concerned. Her dad took O’Loughlin out for a drink and suggested the couple at least wait a while to get married. “Steve came back really angry,” Hu said. After that, O’Loughlin attended gatherings of Hu’s family only begrudgingly. He wore what Hu called his “shit face,” looking bored or angry. He urged Hu to quit her job at her family’s company.

The situation became so bad that Hu gave her engagement ring back. “I can’t do this,” she told O’Loughlin. “It’s really hard.” As both of them wept, O’Loughlin promised to do better. Hu wanted to believe him. In return, she agreed to leave her job. “It was the only way it would work,” she said. O’Loughlin couched distancing Hu from her family and their business as an opportunity: He suggested that she could find employment in fashion retail, a field he knew she was interested in.

Figuring out a new career path, however, took a back seat to wedding planning. Hu threw herself into designing a celebration in Italy, until O’Loughlin nixed the idea. Instead, they reserved space at a resort in Santa Barbara. They were married in front of 150 guests on October 10, 2010.

For their honeymoon they traveled to the Maldives, the tropical archipelago in the Indian Ocean. Hu described it as “paradise.” The newlyweds stayed in an elegant cabin suspended over pale blue water alive with stingrays and other aquatic life. They were supposed to be spontaneous, to relish nature, to jump in the water whenever they felt like it. But O’Loughlin was hardly in the moment; he took part in a single activity with his wife each day, then went back to their room to immerse himself in self-help books. He complained to Hu and was rude to hotel staff, especially waiters. When he learned that most of the employees, like nearly all residents of the Maldives, were Muslim, he seemed disturbed.

Hu noticed something else: O’Loughlin wouldn’t walk beside her. He was always a few steps ahead. “Anywhere we went,” Hu said, “I was secondary.”

It was all enough to make her contemplate a quick divorce right after the honeymoon. But when they were back in California, Hu was hit with waves of nausea. A test confirmed that she was pregnant. She decided it was no time to break up the marriage.

Despite what he’d said while courting her, O’Loughlin didn’t seem excited by the prospect of having a child. According to Hu, he acted as if she wasn’t pregnant. He didn’t ask how she was feeling and didn’t want to put his hand on her belly when the baby kicked. He took Hu on a babymoon to Australia, only to reveal that the trip coincided with an installment of Unleash the Power Within, an event organized by self-help guru Tony Robbins. Among other things, O’Loughlin was drawn to Robbins’s idea that nutrition was an essential building block of self-improvement. He started eating dressing-free salads and supplement-filled health shakes that he insisted Hu prepare for him.

O’Loughlin also became convinced that Eaton Vance was swindling him. He talked Hu in circles about how he should have been earning far more money through commissions than he was, and he became argumentative with his bosses. Late in Hu’s third trimester, O’Loughlin sat down with colleagues for what he thought was a regular meeting. Instead, they took his work computers and informed him that he was fired. As Hu’s due date approached, O’Loughlin became preoccupied with the idea of suing the company.

Hu went into labor on July 27, 2011, nine months and 17 days after her marriage to O’Loughlin. It was a difficult birth. Hu, a petite woman, had to deliver an 8.3-pound baby. She was in such tremendous pain that doctors pumped her full of medication. “I couldn’t push the baby out, so they used a vacuum [extractor],” Hu said. Once Pierce arrived, there were more complications—his oxygen levels were dangerously low.

Rather than express concern for the baby or his wife, O’Loughlin seemed put off by everything that was happening. He had expected a cinematic birth. “He kept saying, ‘That wasn’t normal,’ ” Hu recalled. “He was so obsessed with the birth not being right.”

Nothing, it seemed, was ever right for O’Loughlin.

Star-shaped cells may play role in how your brain merges info

Two bike paths merge to become one.

Astrocytes may be a key player in the brain’s ability to process external and internal information simultaneously, according to a new study.

Long thought of as “brain glue,” the star-shaped cells called astrocytes are members of a family of cells found in the central nervous system called glial cells, which help regulate blood flow and synaptic activity, keep neurons healthy, and play an important role in breathing.

Despite this growing appreciation for astrocytes, much remains unknown about the role these cells play in helping neurons and the brain process information.

“We believe astrocytes can add a new dimension to our understanding of how external and internal information is merged in the brain,” says Nathan Smith, associate professor of neuroscience at the Del Monte Institute for Neuroscience at the University of Rochester.

“More research on these cells is necessary to understand their role in the process that allows a person to have an appropriate behavioral response and also the ability to create a relevant memory to guide future behavior.”

The way our body integrates external with internal information is essential to survival. When something goes awry in these processes, behavioral or psychiatric symptoms may emerge.

Smith and coauthors point to evidence that astrocytes may play a key role in this process. Previous research has shown astrocytes sense the moment neurons send a message and can simultaneously sense sensory inputs. These external signals could come from various senses such as sight or smell.

Astrocytes respond to this influx of information by modifying their calcium (Ca2+) signaling directed towards neurons, providing them with the most suitable information to react to the stimuli.

The authors hypothesize that this astrocytic Ca2+ signaling may be an underlying factor in how neurons communicate and what may happen when a signal is disrupted. But much is still unknown about how astrocytes and neuromodulators, the signals sent between neurons, work together.

“Astrocytes are an often-overlooked type of brain cell in systems neuroscience,” Smith says. “We believe dysfunctional astrocytic calcium signaling could be an underlying factor in disorders characterized by disrupted sensory processing, like Alzheimer’s and autism spectrum disorder.”

Smith has spent his career studying astrocytes. As a graduate student at the University of Rochester School of Medicine and Dentistry, Smith was part of the team that discovered an expanded role for astrocytes. Apart from absorbing excess potassium, astrocytes themselves could cause potassium levels around the neuron to drop, halting neuronal signaling. This research showed, for the first time, that astrocytes did more than tend to neurons; they could also influence the actions of neurons.

“I think once we understand how astrocytes integrate external information from these different internal states, we can better understand certain neurological diseases. Understanding their role more fully will help propel the future possibility of targeting astrocytes in neurological disease,” Smith says.

The communication between neurons and astrocytes is far more complicated than previously thought. Evidence suggests that astrocytes can sense and react to change—a process that is important for behavioral shifts and memory formation.

The study authors believe discovering more about astrocytes will lead to a better understanding of cognitive function and lead to advances in treatment and care.

The study appears in Trends in Neurosciences.

Additional coauthors are from the University of Copenhagen.

The National Institutes of Health, the National Science Foundation, the European Union under the Marie Skłodowska-Curie Fellowship, the ONO Rising Star Fellowship, the Lundbeck Foundation Experiment Grant, and the Novo Nordisk Foundation supported the work.

Source: University of Rochester

The post Star-shaped cells may play role in how your brain merges info appeared first on Futurity.

Conference Coverage: Political Epistemology Network

This post is a part of the Blog's 2023 APA Conference coverage, showcasing the research of APA members across the country. The APA Eastern Conference session covered in this post was organized by the Political Epistemology Network. The political world is what William James called “a blooming, buzzing confusion.”  The size and complexity of modern […]

Book Review: Not Exactly Lying: Fake News and Fake Journalism in American History by Andie Tucher

By: Taster
In Not Exactly Lying: Fake News and Fake Journalism in American History, Andie Tucher explores how journalistic practice has often pivoted on disinformation throughout US history. This is a first-rate study that will give readers a greater understanding of the origins, role and impact of fake news in the past and present, writes Jeff Roquen. … Continued

Well-Informed About Misinformation

Misinformation. Disinformation. Fake research. Fake news. Nowadays, these terms are everywhere. Misinformation, in its various guises, seems to threaten every aspect of our lives: political elections, medical advice, and public health policy, to name just a few. By some accounts, the spread of misinformation has acquired epidemic proportions. And, of course, as misinformation has proliferated, […]

Digital hoarding

By: John Q

Yesterday, I dug into the deepest nest of folders on my MacBook Pro to find an article I wrote on a 512K Mac in 1987, for a magazine that no longer exists and isn’t (AFAICT) digitally archived. The file must have made transitions from “hard floppies” to removable 44MB drives (remember them?) to hard drive to SSD and then, when that filled up, to my iCloud backup.

Today, I read about “digital hoarding”. Count me in!

Whatever the psychological causes, it’s hard to imagine negative real-world consequences from storing files. And it’s easier to search for stuff when you need it than to spend a lot of time filing. I used to sort my email, but now I just delete 90 per cent as it comes in, and archive the rest every couple of years.

In the physical world, I’m the opposite. I’m hopelessly untidy, but I follow Marie Kondo in throwing out anything that no longer sparks joy, and in trying to avoid acquiring stuff I don’t need. Being free of paper has been a huge boon in this respect.

Facts Don’t Change Minds – Social Networks, Group Dialogue, and Stories Do

By: Taster
There is often a presumption amongst scientists that communicating the evidence on a given issue is on its own persuasive enough to change minds. Anne H. Toomey argues thinking in this way itself ignores evidence from other fields of research and presents four ways by which researchers can engage with findings from the social sciences … Continued