
The Morning After: Twitter limits unverified users to seeing 600 posts a day

In a baffling move over the weekend, Elon Musk said the company would restrict unverified accounts to reading 600 posts per day – and new accounts to only 300 tweets daily. Meanwhile, Twitter will allow verified accounts to read 6,000 posts each day. For most people, that means, short of paying for Twitter Blue, they can spend about a minute or two on Twitter before encountering a "rate limit exceeded" error. Less than two hours later, Musk said Twitter would "soon" ease the limits to 8,000 for verified accounts and 800 for those without Twitter Blue.

Musk claimed the "temporary" limits were in place to address "extreme levels of data scraping" and "system manipulation." The day before, Twitter began preventing people not logged into the site from viewing tweets. Musk did not say what "new" means in the context of an account, nor did he say how long Twitter plans to restrict users.

It’s a challenging change for Twitter at a time when it’s courting advertisers and paid users: fewer eyes for ads and fewer benefits for subscribers.

– Mat Smith

You can get these reports delivered daily direct to your inbox. Subscribe right here!

The biggest stories you might have missed

AI pioneer Geoffrey Hinton isn't convinced good AI will triumph over bad AI

Europe’s Euclid space telescope launches to map the dark universe

Hitting the Books: How SNAP's digital services became an online quagmire

Apple's next AirPods Pro could feature a built-in temperature sensor

The company is also reportedly working on new hearing-health features.

Bloomberg’s Mark Gurman says Apple is working on bringing new health features to its AirPods family. The first could be a hearing test. Already, AirPods support audiogram profiles, so iOS can tune earbuds to adjust for hearing loss. Some apps, like Mimi, already tap into this feature. Gurman added Apple is exploring how to market the AirPods Pro as a hearing aid since the FDA made it easier for Americans to buy over-the-counter options last year.

Apple is also working on an earbud feature to measure your body temperature. The company already introduced a built-in temperature sensor with the Watch Series 8 and Watch Ultra. But a sensor can get a more accurate temperature reading from the ear canal than the wrist. However, Gurman warns both features are “several months or even years away.”

Continue reading.

Cowboy Cruiser e-bike offers a more upright ride

The new model allows for a Dutch riding style and has a wider saddle.


When Cowboy introduced its Adaptive Power update earlier this year, I wondered when a new model would arrive. The answer is, well, today. The new variant has a more upright geometry for a relaxed Dutch riding position. It also has a wider saddle — a feature at the top of our wish list for the Belgian company’s latest iteration. The e-bike has a wireless charging phone mount, and, like all of Cowboy’s models, its companion app integrates with Google Maps. The Cowboy Cruiser is available for an “introductory price” of £2,690 (roughly $3,400) in black or white.

Continue reading.

Gfycat is shutting down on September 1st

Nothing lasts forever on the internet.

Gfycat is shutting down. Snap, Gfycat’s parent company, said it plans to shut down the GIF database and delete all user content in a few short weeks. “The Gfycat service is being discontinued,” a notification at the top of the website reads. Gfycat’s impending demise won’t come as a surprise to anyone who has tried to use the site in recent months. In May, Motherboard reported Snap had allowed Gfycat’s security certificates to expire. The oversight meant the platform temporarily became inaccessible to most internet users.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-twitter-limits-unverified-users-to-seeing-600-posts-a-day-113153535.html?src=rss


Apple's next AirPods Pro could feature a built-in temperature sensor

In his latest Power On newsletter, Bloomberg’s Mark Gurman reports Apple is working on new health features for its line of audio products. The first of those would allow your AirPods to conduct a hearing test. As The Verge notes, AirPods already support audiogram profiles, which iOS can use to tune the output of the earbuds to adjust for hearing loss. Right now, you can generate those profiles through third-party apps like Mimi, so if Apple were to move forward with its own test, it would be another instance of the company “Sherlocking” a feature from a third-party developer.

Separately, Gurman writes Apple is exploring how to position and market the AirPods Pro as a hearing aid after the FDA made it easier for Americans to buy over-the-counter options last year. He reports the company recently hired engineers with experience working on traditional hearing aids as part of that initiative.

The other new AirPods feature Apple is working on is a way for the earbuds to measure your body temperature. A built-in temperature sensor was one of the features the company introduced with the Watch Series 8 and Watch Ultra. As for why Apple would replicate a feature already found on one of its existing products, a sensor can get a more accurate temperature reading from the ear canal than the wrist. 

Gurman warns both features are “several months or even years away.” In the more immediate future, he reports Apple plans to refresh the AirPods Pro with a USB-C charging case, adding that all of the company’s audio products will eventually move away from Lightning as well. All of that is part of a move Apple must make to comply with the EU’s new common charging standard. “Despite the USB-C shift, I wouldn’t anticipate the imminent launch of all-new AirPods Pro hardware,” Gurman notes, pointing to the fact Apple only just refreshed the earbuds last year.

Before then, AirPods users can at least look forward to the new software features Apple said it would introduce alongside iOS 17. Those include a new button press to mute yourself on calls and improvements to device switching. Gurman speculates Apple could also permanently lower the price of its second-generation AirPods to $99, down from $129, to make them more competitive against low-cost options from Samsung, Sony and Amazon.

This article originally appeared on Engadget at https://www.engadget.com/apples-next-airpods-pro-could-feature-a-built-in-temperature-sensor-162531033.html?src=rss

AirPods Pro (2022) review

Despite the unchanged design, Apple has packed an assortment of updates into the new AirPods Pro. All of the conveniences from the 2019 model are here as well, alongside additions like Adaptive Transparency, Personalized Spatial Audio and a new touch gesture. There’s room to further refine the familiar formula, but Apple has given iPhone owners several reasons to upgrade.

Stay Clear of the Door

An AI door, according to a generative AI

Written by David Lyreskog 

 

In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma. 

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.

The other day a colleague of mine shared a YouTube video of the presentation The AI Dilemma, by Tristan Harris and Aza Raskin. In it, they share with the audience their concerns about the rapid and somewhat wild development of artificial intelligence (AI) in the hands of a few tech giants. I highly recommend it. (The video, that is. Not the rapid and somewhat wild development of AI in the hands of a few tech giants).

 

Much like the thousands of signatories of the March open call to “pause giant AI experiments”, and recently the “Godfather of AI” Geoffrey Hinton, Harris and Raskin warn us that we are on the brink of major (negative, dangerous) social disruption due to the power of new AI technologies.

 

Indeed, there has been a bit of a public buzz about “AI ethics” in recent months.

 

While it is good that there is a general awareness and a public discussion about AI – or any majorly disruptive phenomenon for that matter – there’s a potential problem with the abstraction: AI is portrayed as this big, emerging, technological behemoth which we cannot or will not control. But it has been almost three decades since an AI was first able to beat a human world champion at chess. We have been using AI for many things, from medical diagnosis to climate prediction, with little to no concern about it besting us and/or stripping us of agency in these domains. In other domains, such as driving cars and military applications of drones, there has been significantly more controversy.

All this is just to say that AI ethics is not for hedgehogs – it’s not “one big thing”[i] – and I believe that we need to actively avoid a narrative and a line of thinking which paints it to be. In examining the ethical dimensions of a multitude of AI inventions, then, we ought to take care to limit the scope of our inquiry to the domain in question at the very least.

 

So let us, for argument’s sake, return to that door at the Uehiro Centre, and the voice cautioning visitors to stay clear. Now, as far as I’m aware, the voice and the door are not part of an AI system. I also believe that there is no person who is tasked with waiting around for visitors asking for access, warning them of the impending door swing, and then manually opening the door. I believe it is a quite simple contraption, with a voice recording programmed to be played as the door opens. But does it make a difference to me, or other visitors, which of these possibilities is true?

 

We can call these possibilities:

Condition one (C1): AI door, created by humans.

Condition two (C2): Human speaker & door operator.

Condition three (C3): Automatic door & speaker, programmed by humans.

 

In C3, it seems that the outcome of the visitor’s action will always be the same after the buzzer is pushed or the key card is blipped: the voice will automatically say ‘stay clear of the door’, and the door will open. In C1 and C2, the same could be the case. But it could also be the case that the AI/human has been instructed to assess the risk for visitors on a case-by-case basis, and to only advise caution if there is imminent risk of collision or such (were this the case, I am consistently standing too close to the door when visiting, but that is beside the point).

 

On the surface, I think there are some key differences between these conditions which could have an ethical or moral impact, where some differences are more interesting than others. In C1 and C2, the door opener makes a real-time assessment, rather than following a predetermined course of action in the way C3’s door opener does. More importantly, C2 is presumed to make this assessment from a place of concern, in a way which is impossible in C1 and C3 because the latter two are not moral agents, and therefore cannot be concerned. They simply do not have the capacity. And our inquiry could perhaps end here.

But it seems it would be a mistake.

 

What if something were to go wrong? Say the door swings open, but no voice warns me to stay clear, and so the door whacks me in the face[ii]. In C2, it seems the human whose job it is to warn me of the imminent danger might have done something morally wrong, assuming they knew what to expect from opening the door without warning me, but failed to do so due to negligence[iii]. In C1 and C3, on the other hand, while we may be upset with the door opener(s), we don’t believe that they did anything morally wrong – they just malfunctioned.

 

My colleague Alberto Giubilini highlighted the tensions in the morality of this landscape in what I thought was an excellent piece arguing that “It is not about AI, it is about humans”: we cannot trust AI, because trust is a relationship between moral agents, and AI does not (yet) have the capacity for moral agency and responsibility. We can, however, rely on AI to behave in a certain way (whether we should is a separate issue).

 

Similarly, while we may believe that a human should show concern for their fellow person, we should not expect the same from AIs, because they cannot be concerned.

 

Yet, if the automatic doors continue to whack visitors in the face, we may start feeling that someone should be responsible for this – not only legally, but morally: someone has a moral duty to ensure these doors are safe to pass through, right?

 

In doing so, we expand the field of inquiry, from the door opener to the programmer/constructor of the door opener, and perhaps to someone in charge of maintenance.

 

A couple of things pop to mind here.

 

First, when we find no immediate moral agent to hold responsible for a harmful event, we may expand the search field until we find one. That search seems to me to follow a systematic structure: if the door is automatic, we turn to call the support line, and if the support fails to fix the problem, but turns out to be an AI, we turn to whoever is in charge of support, and so on, until we find a moral agent.

 

Second, it seems to me that, if the door keeps slamming into visitors’ faces in condition C2, we will not only morally blame the door operator, but also whoever left them in charge of that door. So perhaps the systems-thinking does not only apply when there is a lack of moral agents, but also applies on a more general level when we are de facto dealing with complicated and/or complex systems of agents.

 

Third, let us conjure a condition four (C4) like so: the door is automatic, but in charge of maintenance support is an AI system that is usually very reliable, and in charge of the AI support system, in turn, is a (human) person.

 

If the person in charge of an AI support system that failed to provide adequate service to a faulty automatic door is to blame for anything, it is plausibly for not adequately maintaining the AI support system – but not for whacking people in the face with a door (because they didn’t do that). Yet, perhaps there is some form of moral responsibility for the face-whacking to be found within the system as a whole. That is, the compound of door, AI, human, etc. has a moral duty to avoid face-whacking, regardless of any individual moral agent’s ability to whack faces.

 

If this is correct, it seems to me that we again[iv] find that our traditional means of ascribing moral responsibility fails to capture key aspects of moral life: it is not the case that any agent is individually morally responsible for the face-whacking door, nor are there multiple agents who are individually or collectively responsible for the face-whacking door. Yet, there seems to be moral responsibility for face-whacking doors in the system. Where does it come from, and what is its nature and structure (if it has one)?

 

In this way, it seems that not only cognitive processes such as thinking and computing can be distributed throughout systems, but perhaps also moral capacities such as concern, accountability, and responsibility.

And in the end, I do not know to what extent it actually matters, at least in this specific domain. Because at the end of the day, I do not care much whether the door opener is human, an AI, or automatic.

 

I just need to know whether or not I need to stay clear of the door.

Notes & References.

[i] Berlin, I. (2013). The hedgehog and the fox: An essay on Tolstoy’s view of history. Princeton University Press.

[ii] I would like to emphasize that this is a completely hypothetical case, and that I take it to be safe to enter the Uehiro centre. The risk of face-whacking is, in my experience, minimal.

[iii] Let’s give them the benefit of the doubt here, and assume it wasn’t maleficence.

[iv] Together with Hazem Zohny, Julian Savulescu, and Ilina Singh, I have previously argued this to be the case in the domain of emerging technologies for collective thinking and decision-making, such as brain-to-brain interfaces. See the Open Access paper Merging Minds for more on this argument.

Earth, Air, Fire, Water, and Entropy: Internet and Synthetic Biology Pioneer Randy Rettberg’s Story on How Information Was Forged

Our first encounter with Randy Rettberg was somewhat surreal. Not that the others weren’t—the sui generis atmosphere is always present—but that first meeting was set in a scenario so far from our everyday reality that it felt like we’d been thrown into a science fiction novel. It happened in 2022 and we were a bit disoriented after ten hours of transatlantic travel and two hours riding in Bentleys to the British countryside. It was July, and we had left the cold and dry wind of our rarely harsh Brazilian winter to find a pleasant summer sun that gently bathed the English lands. The people there were in a good mood and smiling. Someone told us that it was an atypical moment, that life was not so bright most of the time. We got lucky. At least the weather made us feel a little bit at home, but only that.

We were invited to participate in a workshop named “Safe, Secure, & Responsible Synthetic Biology Beyond Containment,” as part of a group of around 30 people, including biotechnology students, government regulators from around the world, union people, and scholars. We stayed in a 2,400-hectare property called Wilton Park, in a building that reminded us of a castle—of course, in reality a Victorian mansion named Wiston House. This event was jointly organized by the British Foreign and Commonwealth Office and the International Genetically Engineered Machine (iGEM) Foundation, the independent, non-profit organization of which Randy—who was also attending the workshop—is president and founder. We got to know iGEM while we were undergraduates at the University of São Paulo and participating in the student-organized Synthetic Biology Club. Clarissa was carrying out field work as an anthropologist with the club’s participants, and Érico was one of them. Participation in international competitions was one of the club’s main activities, and iGEM was one of those competitions. Created in 2003 as a spin-off of the MIT department Registry of Standard Biological Parts, the international competition iGEM aims to promote the international development of synthetic biology, engaging students, young scientists, and established scientists around the world.

At that first meeting, in the impressive Victorian mansion full of old paintings of men dressed in strange clothes and with menacing looks, we had the opportunity to talk with Randy about his participation in the development of the internet and about the connections of this previous experience with his interest in synthetic biology. A few months later, on an October afternoon, we had the opportunity to record a conversation lasting more than two hours in Randy’s office at iGEM’s Paris headquarters. Both meetings were made possible because Clarissa was hired as a Human Practices Summer Fellow at the iGEM Foundation, working with a team assigned to develop projects and research on responsible practices and synthetic biology, while Érico actively participated as a volunteer in iGEM activities involving biosafety and biosecurity.

Randy is an enigmatic and extraordinary figure. He worked on a range of exciting and society-changing projects, including important work on the ARPANET[1] project while working at Bolt, Beranek and Newman (BBN). There he worked on the first internet routers and packet switching protocols, as well as on parallel and distributed computing. Machines he helped create would be used to coordinate US military satellites and to handle what would become internet routing. He would then move to Apple Computer and to Sun Microsystems—two other leading companies in the personal computer and internet revolution—before joining MIT. Falling in love with synthetic biology through his long-time friend Tom Knight, a co-founder of the NASDAQ-listed synthetic biology company Ginkgo Bioworks, Randy was invited to direct the MIT Registry of Standard Parts, a department that would spin off to create the iGEM Foundation.

Randy’s transition from the development of the internet to becoming a prominent figure in synthetic biology is something that has always caught our attention, as the internet carries with it important constituent elements of synthetic biology itself. We grew up along with the development of the internet. We lived our childhood in a world that no longer exists, nor will it ever exist again. We were formed in a cyberpunk broth, and perhaps due to the savagery of our condition as inhabitants of a forest city,[2] we were never able to ignore the intrusion of nature. Our curiosity to understand more about the roots of our roots—cybernetic and biological—led us to dig into the history of the internet with countercultural tools. We read books like “Neuromancer” and watched movies like “The Net: The Unabomber, LSD and the Internet” and “Wax or the Discovery of Television Among the Bees.” It was from this cyber-bio-punk reference that we approached Randy and formulated our questions for him.

The exercise of listening to the trajectory of scientists is very interesting for an anthropology of science and technology based on a notion of localized knowledge, as proposed by Donna Haraway. By turning to the memories of scientists from an anthropological perspective, we are able to situate techno-scientific work in a given space and time and in relation to broader historical and social processes. At the same time, working with biographies and memories of scientists also makes us capable of bringing to the surface dimensions that account for the specificities of each trajectory. When questioned by us about the origins of the concept of information, Randy alternates between great historical facts, such as the second world war, memories of his work in laboratories, and intimate family memories. This complexity of the web of scientists’ memories is very interesting as raw material. For us, peripheral researchers from the global south who practice science and technology studies as a way of imagining different possible worlds, opening listening spaces in hegemonic places of knowledge production—especially linked to what is understood as the “frontier” of science such as synthetic biology—allows us a certain smuggling between different realities, a true exercise of anthropological alterity.

Randy’s Early Internet Days

“Randomness must be in there, right? And you kind of think this is like earth, air, fire, and water. Those are the elements for a long time. Those were the elements.”

Randy Rettberg was born in 1948. He began the interview telling us that while he was growing up in rural Illinois in the 50s, several things drew his attention towards science and technology. His father, who was very religious (Randy’s grandfather was a Lutheran minister) and had been a prisoner of war in Japan during World War II, came back to the US and, thanks to the GI Bill,[3] obtained a degree in Architecture, working on many urban buildings—schools, hospitals, prisons—after graduation. He says that his childhood and teenage years were lived in a “small world” where complicated machines would be farm machines, though his world kept expanding in several directions as he came in contact with several initiatives fostering curiosity and engagement in science and technology—from Bell Labs[4] films and pictures promoting their own technologies and marvelous inventions, to do-it-yourself science kits that you could buy from magazines. Randy remembers several science-fueled teenage adventures like building a radio from one of these kits, playing with chemical reagents with a friend whose father had a pharmacy, building a tin-can telephone network in the backyard, and playing with a huge recorder that came encased in a suitcase and that he bought by selling newspapers door-to-door in the 7th grade. The television, a very “fancy” machine at the time, would bring technological tales as well. A friend’s father was a professor of Physics at the University of Illinois, so Randy together with his friend would spend a huge amount of time playing with an electronic prototyping board with switches and lights that could be reassembled to create different combinations of button and light activations. Two other important childhood memories were how computers were beginning to feature in the public imagination at the time—as huge and expensive machines with buttons and flashing lights—and the launch of the Soviet satellite Sputnik[5] in 1957.

In Rettberg’s account, his world definitely expanded widely when he joined MIT in 1965. While during his basic education teachers would often repress his curiosity, at MIT it was the opposite: curiosity was rewarded and was the norm. Suddenly teachers would consider “taking things a level down” while searching for answers in a specific topic. Another thing that Randy remembers from this time was his first intense contact with a real computer. This computer was the size of a room and could be used by university staff, who had individual accounts and could reserve computing time slots. He describes the operating interface as “a big big tube and a light pen.”

When Randy graduated, the Vietnam War was raging on and he didn’t want to fight in it, so he went back to Illinois to get a Master’s degree in Physics, describing it as a “really really hard” experience because of the complexity of the math involved. After obtaining his MSc, he contacted Nicholas Negroponte[6] from MIT’s Architecture Machine Group[7] and was hired as a “computer guy.” He operated an Interdata Model 3, a business computer already “small,” the size of a desk table. Randy remembers how “slow” it was: only 30 thousand instructions a second.[8] From Negroponte he heard of Bolt, Beranek and Newman (BBN), a government contractor[9] that at the time managed several groups of highly motivated scientists and engineers working on very exciting projects at the edge of science and technology. Randy says BBN was created by three MIT professors who were renowned specialists in acoustics and began working for the Department of Defense in this field, but soon started providing services related to other fields of science and technology, receiving several government contracts including from the Advanced Research Projects Agency (ARPA).[10] In 1972, Rettberg managed to get interviews with different teams within BBN and was invited to join one of the teams that was building the ARPANET project together with people from MIT Lincoln Labs.[11]

ARPANET was an ARPA project aimed at creating a network that would interconnect all US military bases and Department of Defense facilities in a way that information could be securely and effectively shared between them. The ARPANET project created most of the currently used internet protocols, for example, the Transmission Control Protocol (TCP) and the Internet Protocol (IP). ARPANET was the prototype of what would become the internet. In Randy’s words, ARPANET at the time was “a four node network. It was the first packet switching network[12] and it was four different nodes connected together by 50 KB links. So we started very slow, with teletypes[13] terminals, 10 characters per second.” Randy recalls that the group had very interesting ideas about transforming and transporting information reliably. For example, there was the idea that systems fail often, so there must be ways for interconnected information processing systems to check the integrity of sent and received information. From this idea the Transmission Control Protocol, one of the backbones of the modern internet, would be born. This needed in turn to be coupled to a decentralized network—so it could withstand and route around problems in individual nodes of the network such as a power outage or a military attack—and this decentralized network should be able to be composed of machines from different manufacturers that would follow, in hardware and in software, certain common procedures and standards ensuring compatibility and communicability between any type of device able to follow them.

According to Randy—building from the idea of bit encoding from Shannon and early information pioneers[14]—some of the really innovative ideas regarding information transfer were related to packet switching. The use of a network of interconnected nodes (composed of digital computers for a collaborative and decentralized discovery of possible routes for the information to travel on) and the establishment of protocols designed for the computers to speak a common language (which could be understood by computers of different manufacturers) was how the ARPANET team chose to solve the problem of the ability of information to travel from one place to another. Prior to travel, however, the information needed to be encoded and packaged in what would become the “network packet.” The network packet would contain the actual information users wanted to transfer and an additional “header” of information, a complementary message that contained “control information”—needed for the nodes of the network to find the best routes for the information and for the effective forwarding of the messages from one node to another after the best possible route was found. The combination of all protocols and ideas above would form a “packet-switching network.”
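To make the “payload plus header” idea concrete, here is a minimal TypeScript sketch of packetizing a message. It is purely illustrative: the field names, header layout and checksum are our own simplifications and do not correspond to the actual ARPANET or IP formats discussed above.

```typescript
// Illustrative only: a toy model of the packet idea described above.
// The payload travels together with a small header of control
// information that nodes use to route, order and verify the message.

interface PacketHeader {
  source: string;       // address of the sending node
  destination: string;  // address of the receiving node
  sequence: number;     // position of this packet in the original message
  checksum: number;     // lets the receiver detect corrupted payloads
}

interface Packet {
  header: PacketHeader;
  payload: Uint8Array;  // the actual data the user wants to transfer
}

// Split a message into fixed-size packets, each carrying its own header.
function packetize(
  message: Uint8Array,
  source: string,
  destination: string,
  size = 128
): Packet[] {
  const packets: Packet[] = [];
  for (let offset = 0, seq = 0; offset < message.length; offset += size, seq++) {
    const payload = message.slice(offset, offset + size);
    const checksum = payload.reduce((sum, byte) => (sum + byte) & 0xffff, 0);
    packets.push({ header: { source, destination, sequence: seq, checksum }, payload });
  }
  return packets;
}
```

Each packet can then travel across the network independently; the sequence numbers let the receiving node reassemble the original message even if packets arrive out of order, and the checksum gives it a way to detect packets damaged in transit.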

Rettberg emphasizes that prior to ARPANET, data could be sent from one place to another, but this task would require specific and expensive equipment. By his account, even in academia and in the telecom industry most people believed that things had their own essences and that, when transferring information, these “essences” should be transmitted. For example, music was composed of sound waves, so the only way to transfer music was to physically reproduce the sound waves from the transmitter to the receiver—and that would require special equipment for each type of “essential” information. From the ARPANET on, everyone with a digital computer, peripheral equipment, and a common phone line could be connected to every other person with a similar setup and transfer any type of information such as audio, video or text in digital format—a format that would subsume the idea of the specific “essences” of each type of information, replacing it with the concept of “digitally encoded” information, where everything that can be represented can also be digitally represented.

Randy told us two or three times that he and most of his colleagues on the project were against the war in Vietnam and were heavily influenced by the rock and roll movement, so this forms a contradictory background against which these ideas were designed. In ARPANET the engineers embedded a diffuse but real resistance to central control and authority, funded by the military itself. With this new technology, the United States military sector would transform itself towards a decentralized informational entity capable of operating anywhere on earth. In a prior conversation, Randy told us that at times the technoscientific problem presented for the team to solve was straightforwardly military, such as the coordination of military satellites and the livestream of video and audio between them. In fact, the network transfer of audio and video for the military was one of the first purposes of the computer Rettberg helped create in the ARPANET project, the Butterfly BBN. BBN itself was brought to the ARPANET project because of the renown associated with the acoustical know-how of the company.

The Butterfly BBN is considered a wonder of the early digital computers. It was one of the first of the modern “supercomputers.” It used commercially available digital processors from Motorola, and each machine had up to 512 of these 12-33MHz processing units. It was first programmed to act as a “router” machine in DARPA’s late-70s Wideband Packet Satellite Network, making possible a continuous 3 Mbit/s broadcast of digital data—mainly audio and video—around multiple US military bases. The machine would then be used in the Terrestrial Wide Band Network, a network that physically connected several Department of Defense facilities through high-speed data cables from the late 1980s to 1991. From 1991 onward the Butterfly BBN was the computer used in the first internet routers, implementing in hardware and in software the first version of the Internet Protocol (IP).


A “die image” (a photograph of the internal parts of an electronic chip) of the Motorola 6800, the processor used by the first ARPANET routers, including the Butterfly BBN mentioned by Randy. (Photo by Birdman86 at commons.wikimedia.org)

It is funny to note that while telling us everything above, Randy—who had a Lutheran minister grandfather and a “very religious” father—refers multiple times to religion as a way of explaining how, before all of this, ideas about information were kind of mystical and quintessential. We had the impression that, for Randy, the cybernetic revolution which he took part in was almost like a new step in the human relationship with the universe. He, for example, compares cybernetics to the role religion played in English literature, where it formed the backbone of the works built on top of it. For Randy, cybernetics is the backbone of our current mode of existence and of understanding the world we live in: in his distinctive atheist mystical language, he likens the development of cybernetics to the addition of entropy[15] to the four “original” elements: earth, fire, air, and water.

In a future blog post, we will describe the second half of the interview, on Randy Rettberg’s transition from early internet pioneer to early synthetic biology pioneer. The next blog post also takes a deeper look at Randy’s view of how cybernetics is connected to synthetic biology and to science and technology in general. Until next time!

Notes

[1] ARPANET was an Advanced Projects Research Agency (ARPA) project aimed at creating a network that would interconnect all US military bases and Department of Defense facilities in a  way that information could be securely and effectively shared between them. The ARPANET project created most of the currently used internet protocols, for example, the Transmission Control Protocol (TCP) and the Internet Protocol (IP). ARPANET was the prototype of what would become the internet.

[2] São Paulo is the financial capital of Brazil, a city surrounded and restrained by both the Atlantic Forest and the booming agribusiness.

[3] The GI Bill, formally the Servicemen’s Readjustment Act of 1944, was a US law aimed at rewarding war veterans for their participation in World War II. Through this law, war veterans gained easier access to higher and technical education.

[4] Bell Labs, named after Alexander Graham Bell, was one of the first R&D-intensive companies in the world. It became a large and important government contractor, conducting research and development for the US government, especially the US military. Researchers from Bell Labs were responsible for the invention of several technologies that form the backbone of the contemporary industrial mode of living. Some of these inventions were the transistor, laser technology, the UNIX operating system, photovoltaic cells, and several others.

[5] Sputnik was the first man-made satellite to be launched and successfully orbit the earth. It was launched by the Soviets on the 4th of October in 1957. It was one of the events that started the space race.

[6] Nicholas Negroponte is known as the founder of the MIT Media Lab (and, prior to that, MIT’s Architecture Machine Group) and as an early internet evangelizer, being one of the founders of WIRED magazine.

[7] In 1985, the lab would be reassembled into the now famous MIT Media Lab.

[8] Today a personal computer can run at 1-10 trillion operations per second. A GeForce GTX 1080 graphics card used in gaming today runs at 8.9 trillion floating-point operations per second (FLOPS, a unit used to measure computing speed).

[9] A government contractor is a private company that works under contracts to governments.

[10] The Advanced Research Projects Agency, now the Defense Advanced Research Projects Agency, is one of the most important US government institutions dedicated to the creation of new technologies that could be used for defense purposes. DARPA-funded projects include the modern jet engine, as well as the technologies behind integrated circuits, supercomputers, and the internet.

[11] Lincoln Labs is another R&D laboratory that works under government contracts. Founded in 1950 as the MIT Lincoln Laboratory, it recently spun off from MIT, becoming a private laboratory. Lincoln Labs is historically tied to the US Department of Defense, having developed the computer network called SAGE in the 50s (the first military computers to be connected to others and to have graphical user interfaces, used to coordinate radar sites around the US). Most of what Lincoln Labs does is classified, but we know they are also interested in synthetic biology because their staff participates in iGEM’s events.

[12] Packet switching is one of the theoretical foundations of the current internet and of modern telecommunications. The idea behind it is to create a procedure that two or more computers must follow to securely and reliably exchange information. It involves a series of steps that the machines have to know and follow in order to ensure that the information has really been transmitted between them, even if problems arise due to an inconsistent connection.

[13] A teletype is an electromechanical device that could be used to send and receive messages from other teletypes and later, to and from computers. Teletypes would then be used as computer interfaces as Randy mentions here.

[14] Shannon – whose research was also funded by the US military – proposed the idea of encoding information as sequences of zeros and ones, which he called “binary digits,” or bits, in his 1948 paper “A Mathematical Theory of Communication.”

[15] In information theory, entropy measures the average amount of information that a source of events produces.
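For reference, the standard formula (a textbook addition on our part, not something Randy stated) for the entropy of a source whose possible outcomes occur with probabilities p_1, …, p_n is:

H = -\sum_{i=1}^{n} p_i \log_2 p_i

measured in bits when the logarithm is taken in base 2; a fair coin flip, for example, carries exactly one bit of entropy.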

Asked to Delete References to Racism From Her Book, an Author Refused

The case, involving Scholastic, led to an outcry among authors and became an example of how the culture wars behind a surge in book banning in schools have reached publishers.

Maggie Tokuda-Hall declined Scholastic’s offer to license her book, “Love in the Library,” on the condition that she edit her author’s note to remove a description of past and present instances of racism.

U.S. History Scores for 8th Graders Plunge

The latest test results continue a nearly decade-long decline.

The dip in civics performance was the first decline since the test began being administered in the late 1990s.

How Chess.com Became ‘the Wild West of the Streaming World’

During the pandemic, the Netflix series The Queen’s Gambit famously sparked a huge increase in chess interest — but one online chess platform turned what would have been a spike into a groundswell that eclipses even esports. Jessica Lucas traces how the royal game became a digital juggernaut.

By January 2023, Chess.com reported hitting over 10 million active players in a single day—more than the daily average of World of Warcraft, Grand Theft Auto, and Among Us combined—leading the site’s servers to crash. Its online schedule now features a who’s who of chess grand masters who provide content for users almost 24 hours a day. A new class of chess celebrities, like sisters Alexandra and Andrea Botez, who recently surpassed 1 million Twitch subscribers on their joint account, and international master Levy Rozman, who has over 3 million YouTube subscribers, regularly appear on Chess.com.

The best smartwatches for 2023

Just a few years ago, the case for buying a smartwatch was unclear. The market wasn't as saturated as it is today, and features were more limited. Today, the wearable world is filled with various high-quality options, and a few key players, like the Apple Watch, Samsung Galaxy Watch and Fitbit Versa, have muscled their way to the front of the pack with their smart features. Chances are, if you’re reading this guide, you’ve probably already decided that it’s time to upgrade whatever's on your wrist - be it a standard timepiece or an aging smartwatch. Regardless of which category you fall into, the list of factors you’ll want to consider before deciding which is the best smartwatch for you to buy is a long one, and we'll help you make sense of it.

What to look for in a smartwatch

Google WearOS interface on a smartwatch.
Cherlynn Low

Compatibility

Apple Watches only work with iPhones, while Wear OS devices play nice with both iOS and Android phones. Smartwatches made by Samsung, Garmin, Fitbit and others are also compatible with Android and iOS, but you’ll need to install a companion app.

The smartwatch OS will also dictate the type and number of third-party apps you’ll have access to. Many of these aren’t useful, though, making this factor a fairly minor one in the grand scheme of things.

Price

The best smartwatches generally cost between $300 and $400. Compared to budget smartwatches, which cost between $100 and $250, these pricier devices have advanced communications, music and fitness features. They also often include perks like onboard GPS tracking, music storage and NFC, which budget devices generally don’t.

Some companies make specialized fitness watches: Those can easily run north of $500, and we’d only recommend them to serious athletes. Luxury smartwatches from brands like TAG Heuer and Hublot can also reach sky-high prices, but we wouldn’t endorse any of them. These devices can cost more than $1,000, and you’re usually paying for little more than a brand name and some needlessly exotic selection of build materials.

Battery life

Battery life remains one of our biggest complaints about smartwatches, but there’s hope as of late. You can expect two full days from Apple Watches and most Wear OS devices. Watches using the Snapdragon Wear 3100 processor support extended battery modes that promise up to five days of battery life on a charge — if you’re willing to shut off most features aside from, you know, displaying the time. Snapdragon’s next-gen Wear 4100 and 4100+ processors were announced in 2020, but only a handful of devices – some of which aren’t even available yet – are using them so far. Other models can last five to seven days, but they usually have fewer features and lower-quality displays. Meanwhile, some fitness watches can last weeks on a single charge.

A few smartwatches now support faster charging, too. For example, Apple promises the Series 7 can go from zero to 80 percent power in only 45 minutes, and get to full charge in 75 minutes. The OnePlus Watch is even speedier, powering up from zero to 43 percent in just 10 minutes. (Mind you that turned out to be one of the only good things about that device.)

Communication

Any smartwatch worth considering delivers call, text and app notifications to your wrist. Call and text alerts are self explanatory, but if those mean a lot to you, consider a watch with LTE. They’re more expensive than their WiFi-only counterparts, but data connectivity allows the smartwatch to take and receive phone calls, and do the same with text messages, without your device nearby. As far as app alerts go, getting them delivered to your wrist will let you glance down to the watch face and see if you absolutely need to check your phone right now.

Fitness tracking

Activity tracking is a big reason why people turn to smartwatches. An all-purpose timepiece should function as a fitness tracker, logging your steps, calories and workouts, and most of today’s wearables have a heart rate monitor as well.

Many smartwatches' fitness features include a built-in GPS, which is useful for tracking distance for runs and bike rides. Swimmers will want something water resistant, and thankfully most all-purpose devices now can withstand at least a dunk in the pool. Some smartwatches from companies like Garmin are more fitness focused than others and tend to offer more advanced features like heart-rate-variance tracking, recovery time estimation, onboard maps and more.

Health tracking on smartwatches has also seen advances over the years. Both Apple and Fitbit devices can estimate blood oxygen levels and measure ECGs. But the more affordable the smartwatch, the less likely it is that it has these kinds of in-depth health tracking features; if collecting that type of data is important to you, you’ll have to pay for the privilege.

Samsung Galaxy Watch Active
Engadget

Music

Your watch can not only track your morning runs but also play music while you’re exercising. Many smartwatches let you save your music locally, so you can connect wireless earbuds and listen to tunes without bringing your phone. Those that don’t have onboard storage for music usually have on-watch music controls, so you can control playback without whipping out your phone. And if your watch has LTE, local saving isn’t required — you’ll be able to stream music directly from the watch to your paired earbuds.

Always-on displays

Most flagship smartwatches today have an always-on display — some have it on by default, while others let you enable it in the settings. This smart feature allows you to glance down at your watch to check the time and any other information you’ve set it to show on its watch face without lifting your wrist. This will no doubt affect your device’s battery life, but thankfully most always-on modes dim the display’s brightness so it’s not running at its peak unnecessarily. Cheaper devices won’t have this feature; instead, their screens will automatically turn off to conserve battery life and you’ll have to intentionally check your watch to turn on the display again.

NFC

Many smartwatches have NFC, letting you pay for things without your wallet. After saving your credit or debit card information, you can hold your smartwatch up to an NFC reader to pay for a cup of coffee on your way home from a run. Keep in mind that different watches use different payment systems: Apple Watches use Apple Pay, Wear OS devices use Google Pay, Samsung devices use Samsung Pay and so forth.

Apple Pay is one of the most popular NFC payment systems, with support for multiple banks and credit cards in 72 different countries, while Samsung and Google Pay work in fewer regions. It’s also important to note that NFC payment support varies by device for both Samsung’s and Google’s systems.

Best overall: Apple Watch

The Apple Watch has evolved into one of the best wearables on the market since its debut in 2015. It’s the best smartwatch for iPhone users, and we wouldn’t judge you for switching to an iPhone just to be able to use an Apple Watch. The latest model, Apple Watch Series 8, has solid fitness-tracking features that will satisfy the needs of beginners and serious athletes alike. It also detects if you’ve been in a car crash, can carry out electrocardiogram (ECG) tests and measures blood oxygen levels. Plus, this Apple smartwatch offers NFC, onboard music storage and many useful apps as well as a variety of ways to respond to messages.

There aren't a ton of differences between the Series 8 and the Series 7 that came before it. The design is largely unchanged, and while the Apple Watch Series 8 runs on a newer S8 SiP, it didn't feel dramatically faster in our testing. It lasted a little bit longer, and we were impressed by the new low-power mode, which kept the watch going for an additional two hours after already being down to 20 percent battery life.

There are two other options now at the opposite ends of the spectrum. The new Apple Watch Ultra is probably overkill for most people, but it has a ton of extra features like extra waterproofing to track diving, an even more accurate GPS and the biggest battery of any Apple Watch to date. Apple designed it for the most outdoorsy among us, but for your average person, it likely has more features than they'd ever need.

The $250 Apple Watch SE, on the other hand, is less feature-rich than the Series 8, but it will probably suffice for most people. We actually regard the Watch SE as the best smartwatch option for first-time buyers, or people on stricter budgets. You’ll get all the core Apple Watch features as well as things like fall and crash detection, noise monitoring and emergency SOS, but you’ll have to do without more advanced hardware perks like an always-on display, a blood oxygen sensor, an ECG monitor and a skin temperature sensor.


Best budget: Fitbit Versa 2

Dropping $400 on a smartwatch isn’t feasible for everyone, which is why we recommend the Fitbit Versa 2 as the best sub-$200 option. It’s our favorite budget watch because it offers a bunch of features at a great price. You get all of these essentials: Fitbit’s solid exercise-tracking abilities (including auto-workout detection), sleep tracking, water resistance, connected GPS, blood oxygen (SpO2) tracking and a six-day battery life. It also supports Fitbit Pay using NFC and it has built-in Amazon Alexa for voice commands. While the Versa 2 typically costs $150, we’ve seen it for as low as $100.

Best for Android users: Samsung Galaxy Watch 5

Samsung may not have brought many upgrades to the latest version of its popular Galaxy Watch, but that doesn't mean the Watch 5 isn't still the best smartwatch for Android users. Improvements like a more durable screen and refined curvature don't sound exciting, but they make the Watch 5 more resilient and reliable. Plus, the Galaxy Watch offers the most comprehensive fitness and health tracking, including body composition analysis, on Wear OS, and the company added a sleep coaching feature this year that is meant to help guide you towards better rest.

If you don't mind oversized watches, consider the Galaxy Watch 5 Pro. It's more expensive at $450, but comes with a larger 45mm titanium case, a more durable screen and a larger battery. Though Samsung markets this Android smartwatch as an outdoor-oriented device, you're better off thinking of it as a big timepiece that lasts longer than the standard model. It has all the same smartwatch features as the 40mm and 44mm versions, except it supports the GPX route format for workouts so you can get turn-by-turn directions while you hike and bike.

All three watches are also water-resistant so they can track swims or survive a sudden storm, and last more than a day (without the Always On Display enabled). They also run Wear OS 3.5, which is so similar to Samsung's previous Tizen OS that longtime wearers won't need to worry about adjusting to a new system. Ultimately, the Galaxy Watch 5 series is a capable, well-rounded set of smartwatches that will serve most Android users well.

Fashion-forward options

Michael Kors Access Gen 5e MKGO at CES 2021
Fossil

Yes, there are still companies out there trying to make “fashionable” smartwatches. Back when wearables were novel and generally ugly, brands like Fossil, Michael Kors and Skagen found their niche in stylish smartwatches that took cues from analog timepieces. You also have the option to pick up a “hybrid” smartwatch from companies like Withings and Garmin – these devices look like standard wrist watches but incorporate some limited functionality like activity tracking and heart rate monitoring. They remain good options if you prefer that look, but thankfully, wearables made by Apple, Samsung, Fitbit and others have gotten much more attractive over the past few years.

Ultimately, the only thing you can’t change after you buy a smartwatch is its case design. If you’re not into the Apple Watch’s squared-off corners, all of Samsung’s smartwatches have round cases that look a little more like a traditional watch. Most wearables are offered in a choice of colors and you can pay extra for premium materials like stainless steel. Once you decide on a case, your band options are endless – there are dozens of first- and third-party watch straps available for most major smartwatches, allowing you to change up your look whenever you please.

Cherlynn Low contributed to this guide.

This article originally appeared on Engadget at https://www.engadget.com/best-smartwatches-153013118.html?src=rss


Samsung warns of lower profits amid falling demand for memory chips

Samsung has warned of plummeting profits and plans to cut back on memory chip production in response to falling demand, The Korea Herald has reported. It expects to earn just 600 billion won ($455 million) for the first quarter of 2023, a drop of 96 percent from the same period last year. It blamed falling demand for memory chips, a situation that could be a bad sign for the tech industry as a whole. 

"We’re adjusting to lower memory production to a meaningful level... in addition to optimizing line operations that are already underway,” Samsung said in a statement. It added that it would continue to invest in clean room infrastructure and expand R&D spending, as it sees improved memory chip demand in the mid- to long-term. 

Although it trails Taiwan's TSMC in other areas, Samsung is the global leader in DRAM and NAND flash memory chip production with 40.7 and 31.4 percent shares respectively. Such chips are used in consumer devices of all kinds, ranging from smartwatches to mobile phones and laptops. The oversupply of memory chips is therefore a sign that demand for such products has fallen significantly due to an ongoing global economic slowdown. 

The slowdown comes just a short time after one of the biggest tech industry booms of all time, powered by the COVID-19 pandemic. Since late in 2021, however, memory prices have dropped through the floor, with DRAM and NAND prices down by 20 and 15 percent in just the last quarter alone. One bright spot for Samsung has been sales of its new Galaxy S23 smartphone, which helped bolster profits, the company said. It will reveal more details in its earnings report set to drop at the end of April. 

This article originally appeared on Engadget at https://www.engadget.com/samsung-warns-of-lower-profits-amid-falling-demand-for-memory-chips-113551159.html?src=rss


Apple's Mac Mini M2 models fall to new all-time lows

Apple's Mac Mini M2 computers are the cheapest way to get the company's latest processors, and now Amazon is offering them at the lowest prices we've ever seen. You can buy the 256GB version for just $500 ($100 off the regular price) and the 512GB model for $690, or $110 off — both all-time lows. 

In our Engadget review, we gave the Mac Mini a solid score of 86, lauding the performance and classic aluminum design. It's tiny but mighty, with the M2 processor easily powerful enough for productivity chores and multitasking. The chip combines components like the CPU, GPU, input/output interfaces and memory into one "system on a chip," allowing for faster processing speeds.

On top of that, you get killer connectivity, with two Thunderbolt 4 USB-C connections, HDMI output, two USB-A ports, a headphone jack and gigabit Ethernet (upgradeable to 10 gigabit). The M2 Pro model adds two additional USB-C ports and higher-bandwidth HDMI (with 4K 240Hz and 8K 60Hz output), making it even more useful for creatives with a ton of accessories.

The Mac Mini M2 won't replace your gaming machine, but it can handle nearly everything else you throw at it. We wouldn't recommend the overpriced storage or RAM upgrades either, as the M2 is much more efficient with RAM than typical PCs. Still, if you're looking for a cheap but powerful Mac, this is the way to go.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/apples-mac-mini-m2-models-fall-to-new-all-time-lows-100512675.html?src=rss

Google's WebGPU is coming to Chrome to boost online gaming and graphics

Google will soon start rolling out WebGPU, a new feature that will allow Chrome browsers to use your graphics card to accelerate games, graphics and AI, the company announced. It'll be enabled by default in Chrome 113, set to arrive in a few weeks, on Windows PCs (via Direct3D 12), macOS (Metal) and ChromeOS (Vulkan).

WebGPU will give web apps better access to your graphics card, Google said, allowing developers to achieve the same level of graphics with far less code. That could make for new and interesting Chrome browser-based 3D apps, and without a doubt, better games. 
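For developers curious what that access looks like, here is a minimal sketch of requesting a WebGPU device. It assumes a browser that exposes navigator.gpu (such as Chrome 113 or later) and WebGPU type definitions like @webgpu/types; the "#gfx" canvas selector is a hypothetical example, and real rendering code would follow this setup.

```typescript
// Minimal WebGPU bootstrap sketch. Assumes a browser that exposes
// navigator.gpu (Chrome 113+) and WebGPU type definitions such as
// @webgpu/types; the "#gfx" canvas selector is a hypothetical example.
async function initWebGPU(): Promise<GPUDevice | null> {
  if (!("gpu" in navigator)) {
    console.warn("WebGPU is not supported in this browser.");
    return null;
  }
  // Handle to the underlying graphics stack: Direct3D 12 on Windows,
  // Metal on macOS, Vulkan on ChromeOS.
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) return null;

  // The GPUDevice is what you use to create buffers, shaders and pipelines.
  const device = await adapter.requestDevice();

  // Hook the device up to a canvas so render passes can draw to the page.
  const canvas = document.querySelector<HTMLCanvasElement>("#gfx");
  const context = canvas?.getContext("webgpu");
  context?.configure({
    device,
    format: navigator.gpu.getPreferredCanvasFormat(),
  });
  return device;
}
```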

At the same time, the company added, it delivers more than "three times improvements in machine learning model inferences." That could pave the way for machine learning apps that run locally in the browser, like NVIDIA Broadcast's strange "eye contact" feature.

Google calls the initial release a "building block for future updates and enhancements," as developers begin to dig into it and create new applications. The API has been in development for over six years; it should come to Firefox and Safari down the road (Edge often gets features at the same time as Chrome) and expand to more operating systems, such as Android. You can try the feature for yourself on the Chrome Beta channel using a demo built with Babylon.js, a 3D engine for the web that already offers full WebGPU support.

This article originally appeared on Engadget at https://www.engadget.com/googles-webgpu-is-coming-to-chrome-to-boost-online-gaming-and-graphics-085305456.html?src=rss

The Chrome logo displayed at a Google event in New York. (AP Photo/Mark Lennihan, File)

Researchers built sonar glasses that track facial movements for silent communication

A Cornell University researcher has developed sonar glasses that "hear" you even when you don't speak aloud. The eyeglass attachment uses tiny microphones and speakers to read the words you silently mouth, letting you pause or skip a music track, enter a passcode without touching your phone or work on CAD models without a keyboard.

Cornell Ph.D. student Ruidong Zhang developed the system, which builds off a similar project the team created using a wireless earbud — and models before that which relied on cameras. The glasses form factor removes the need to face a camera or put something in your ear. “Most technology in silent-speech recognition is limited to a select set of predetermined commands and requires the user to face or wear a camera, which is neither practical nor feasible,” said Cheng Zhang, Cornell assistant professor of information science. “We’re moving sonar onto the body.”

The researchers say the system only requires a few minutes of training data (for example, reading a series of numbers) to learn a user’s speech patterns. Then, once it’s ready to work, it sends and receives sound waves across your face, sensing mouth movements while using a deep learning algorithm to analyze echo profiles in real time “with about 95 percent accuracy.”
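The team's deep learning model itself isn't published in this article, but the general idea of matching a live echo profile against a small set of trained commands can be sketched with a toy nearest-neighbor comparison. Everything below (the EchoProfile type, cosine similarity as the metric, the command labels) is an illustrative assumption, not the researchers' actual pipeline.

```typescript
// Toy illustration only: the real system uses a deep learning model,
// not nearest-neighbor matching. An "echo profile" is represented here
// as a plain array of numbers sampled from the reflected sound waves.
type EchoProfile = number[];

interface TrainedCommand {
  label: string;         // e.g. "pause", "skip", "digit-3"
  template: EchoProfile; // profile captured during the short training phase
}

function cosineSimilarity(a: EchoProfile, b: EchoProfile): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < Math.min(a.length, b.length); i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

// Pick the trained command whose template best matches the live profile.
function classify(profile: EchoProfile, commands: TrainedCommand[]): string {
  let best = { label: "unknown", score: -Infinity };
  for (const cmd of commands) {
    const score = cosineSimilarity(profile, cmd.template);
    if (score > best.score) best = { label: cmd.label, score };
  }
  return best.label;
}
```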

A pair of black-plastic-framed eyeglasses sitting upside down on a white surface. The frames have small circuit boards with tiny speakers and microphones attached below the lenses.
Ruidong Zhang / Cornell University

The system does this while offloading data processing (wirelessly) to your smartphone, allowing the accessory to remain small and unobtrusive. The current version offers around 10 hours of battery life for acoustic sensing. Additionally, no data leaves your phone, eliminating privacy concerns. “We’re very excited about this system because it really pushes the field forward on performance and privacy,” said Cheng Zhang. “It’s small, low-power and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world.”

Privacy also comes into play when looking at potential real-world uses. For example, Ruidong Zhang suggests using it to control music playback controls (hands- and eyes-free) in a quiet library or dictating a message at a loud concert where standard options would fail. Perhaps its most exciting prospect is people with some types of speech disabilities using it to silently feed dialogue into a voice synthesizer, which would then speak the words aloud.

If things go as planned, you can get your hands on one someday. The team at Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab is exploring commercializing the tech using a Cornell funding program. They’re also looking into smart-glasses applications to track facial, eye and upper body movements. “We think glass will be an important personal computing platform to understand human activities in everyday settings,” said Cheng Zhang.

This article originally appeared on Engadget at https://www.engadget.com/researchers-built-sonar-glasses-that-track-facial-movements-for-silent-communication-171508573.html?src=rss

Sonar glasses: Cornell University researcher Ruidong Zhang modeling a prototype glasses add-on, which includes microphones and speakers below the lenses.

A Vaccine Dispute Turns Deadly

A photo of Lesley Hu and her son Pierce O’Loughlin.

Eric Pape | The Atavist Magazine | March 2023 | 2,005 words (7 minutes)

This is an excerpt from issue no. 137, “Sins of the Father.” 


A small, good-natured boy named Pierce O’Loughlin was growing up between the homes of his divorced parents in San Francisco. Nine-year-old Pierce was accustomed to custody handoffs taking place at Convent and Stuart Hall, the Catholic school he attended. On changeover days, one parent dropped him off in the morning at the hilltop campus overlooking the bay, and the other picked him up in the afternoon. The parents avoided seeing each other. Their split had been ugly.

The Atavist Magazine, our sister site, publishes one deeply reported, elegantly designed story each month. Support The Atavist by becoming a member.

On the afternoon of January 13, 2021, Lesley Hu, Pierce’s mother, arrived at Convent and Stuart Hall for a scheduled pickup. Hu planned to take Pierce to a Coinstar machine to exchange a small bucket of coins for a gift card he could use to buy toys. Then they would go to dinner at a restaurant called House of Prime Rib, because Pierce loved to eat meat.

But Hu’s son wasn’t waiting for her at the school. Staff told her that he had been absent that day. They didn’t know why.

Another mom might have assumed that her child had a cold or that his dad had let him skip school and taken him somewhere fun for the day, but not Hu. She wondered if Pierce had been kidnapped—not by a stranger but by his own father.

Over the course of their marriage, Hu had watched as her now ex-husband, Stephen O’Loughlin, became obsessed with pseudoscience, self-help gurus, and conspiracy theories, spending long nights watching videos online, then sharing the details of fantastical plots with Hu, their friends, and people he barely knew. The COVID-19 pandemic had only made things worse. O’Loughlin huddled for hours at his computer streaming YouTube clips and poring over right-wing websites—what he called “doing research.”

One of O’Loughlin’s fixations was vaccines. He believed that Pierce had been damaged by the routine inoculations he received as a baby. O’Loughlin was adamant that the boy be given no more shots—not for COVID-19, when a vaccine was eventually authorized for kids, nor for any other disease.

In 2020, Hu had filed for the sole legal right to make decisions about her son’s medical care, which would empower her to vaccinate Pierce regardless of what her ex wanted. She felt good about her chances in court. On January 11, as a condition for a continuance he had requested in the medical custody case, O’Loughlin suddenly agreed to let Pierce receive two vaccinations. In retrospect, according to Hu’s attorney, Lorie Nachlis, “it all seemed too easy.”

When Hu discovered that Pierce wasn’t at school, she wondered if O’Loughlin had agreed to the vaccinations only because he was plotting to steal Pierce away before their son could receive them. To Hu it wasn’t improbable—her ex seemed that far gone.

Hu and her boyfriend, Jim Baaden, had recently decided to move in together; Hu was planning to tell Pierce the news that evening at dinner. Now Baaden picked Hu up at Pierce’s school, and together the couple sped to O’Loughlin’s home in San Francisco’s posh Marina District, trying not to dwell on worst-case scenarios.

When they arrived outside O’Loughlin’s Mediterranean-style apartment building, they noticed that the blinds in the living room, which was on the ground floor of the unit, were drawn but disheveled. For a moment, Baaden recoiled. O’Loughlin was a gun owner. What if he’d barricaded himself and Pierce in the apartment? Baaden imagined O’Loughlin aiming the barrel between the blinds, ready to shoot.

Baaden and Hu approached the building’s intercom and buzzed O’Loughlin’s apartment. No one answered. Hu began banging on the door to the building and screaming. She considered breaking in, but Baaden told her to call 911 instead.

Hu could not fathom how someone like O’Loughlin—a man of means and privilege—had come to believe outrageous lies. She knew that various misinformation networks and snake-oil salesmen had facilitated her ex’s paranoia and exploited his psychological fragility. But Hu had always stayed focused on what she considered her most important task: raising and protecting Pierce.

There would be time in the future to consider, almost endlessly, what happened to O’Loughlin. For now, in a panic, all Hu could do was wonder: Where had he taken their son?


A dozen years earlier, Stephen O’Loughlin was a very different man. At least he seemed to be when Hu first met him at an Italian wine bar. O’Loughlin, then in his mid-thirties, with a strong jaw and a slightly crooked smile, started chatting her up. He said that he was in finance and that he worked out. Hu, 28, wasn’t interested in his advances. She considered herself an independent woman. She worked in midlevel management and had served as the executive director of the Hong Kong Association of Northern California, a business group. The child of immigrants, she had aspirations to achieve more, to make her parents proud. Besides, she had gotten out of a long relationship recently, and she wasn’t at the bar looking for a date—she was there to cheer up a friend going through a tough time.

But O’Loughlin was persistent, and after several glasses of champagne, Hu decided that he was funny. He asked her charming if oddly specific questions: What was her favorite kind of wine? What sort of bottled water did she drink? As Hu prepared to leave, O’Loughlin asked for her number. She hesitated but gave it to him.

He texted to ask her out. She had a busy work schedule at her family’s company, which leased shipping containers, but O’Loughlin insisted that they find time to meet as soon as possible. When they did, he picked Hu up in a brand-new car stocked with her favorite water. A bottle of sparkling rosé she liked was waiting at the restaurant where they’d be dining. “He remembered everything I said the night we met,” Hu explained.

They began going out with friends for fun, alcohol-infused nights at clubs around San Francisco. O’Loughlin often brought Hu flowers. He was generous, picking up the tab on club nights and when dining out with Hu and her parents. “He was like that for months,” Hu recalled. “He said that he’d talked to his Asian friend and that he should be generous with my family.” Reaching for his wallet at the end of a meal, O’Loughlin would insist, “No, I’ve got this.” (Hu later learned that he’d been using his professional expense account.)

Early in their relationship, O’Loughlin, who grew up in Ridgefield, Connecticut, painted an incomplete picture of his parents and sister. His mother, he told Hu, was “the greatest person in the world.” He was more reserved when talking about his father. He said that he adored his two nieces, and when he and Hu visited the girls on the East Coast, O’Loughlin took them to Toys “R” Us and bought them whatever they wanted. “They were elated, so surprised,” Hu said. She told O’Loughlin she wanted kids of her own. He said he did, too.

Still, when O’Loughlin proposed after about a year of dating, Hu wasn’t sold on the idea. She didn’t like the way O’Loughlin, an arch conservative, got blustery when talking about politics. Hu, a Democrat, didn’t feel like he listened when she spoke about serious issues. O’Loughlin projected such certainty about their future as a couple, however, that Hu found herself saying yes to marriage.

Almost immediately after the engagement, O’Loughlin changed. The flowers, gifts, and other gestures of affection disappeared. He stopped paying for meals with Hu’s parents. Hu realized that O’Loughlin’s generosity had been transactional. He was a salesman by trade, peddling financial services for the firm Eaton Vance, and he brought the strategy of his job to his personal life: Once he landed a deal, he stopped spending time and energy on it.

Hu’s parents were concerned. Her dad took O’Loughlin out for a drink and suggested the couple at least wait a while to get married. “Steve came back really angry,” Hu said. After that, O’Loughlin attended gatherings of Hu’s family only begrudgingly. He wore what Hu called his “shit face,” looking bored or angry. He urged Hu to quit her job at her family’s company.

The situation became so bad that Hu gave her engagement ring back. “I can’t do this,” she told O’Loughlin. “It’s really hard.” As both of them wept, O’Loughlin promised to do better. Hu wanted to believe him. In return, she agreed to leave her job. “It was the only way it would work,” she said. O’Loughlin couched distancing Hu from her family and their business as an opportunity: He suggested that she could find employment in fashion retail, a field he knew she was interested in.

Figuring out a new career path, however, took a back seat to wedding planning. Hu threw herself into designing a celebration in Italy, until O’Loughlin nixed the idea. Instead, they reserved space at a resort in Santa Barbara. They were married in front of 150 guests on October 10, 2010.

For their honeymoon they traveled to the Maldives, the tropical archipelago in the Indian Ocean. Hu described it as “paradise.” The newlyweds stayed in an elegant cabin suspended over pale blue water alive with stingrays and other aquatic life. They were supposed to be spontaneous, to relish nature, to jump in the water whenever they felt like it. But O’Loughlin was hardly in the moment; he took part in a single activity with his wife each day, then went back to their room to immerse himself in self-help books. He complained to Hu and was rude to hotel staff, especially waiters. When he learned that most of the employees, like nearly all residents of the Maldives, were Muslim, he seemed disturbed.

Hu noticed something else: O’Loughlin wouldn’t walk beside her. He was always a few steps ahead. “Anywhere we went,” Hu said, “I was secondary.”

It was all enough to make her contemplate a quick divorce right after the honeymoon. But when they were back in California, Hu was hit with waves of nausea. A test confirmed that she was pregnant. She decided it was no time to break up the marriage.

Despite what he’d said while courting her, O’Loughlin didn’t seem excited by the prospect of having a child. According to Hu, he acted as if she wasn’t pregnant. He didn’t ask how she was feeling and didn’t want to put his hand on her belly when the baby kicked. He took Hu on a babymoon to Australia, only to reveal that the trip coincided with an installment of Unleash the Power Within, an event organized by self-help guru Tony Robbins. Among other things, O’Loughlin was drawn to Robbins’s idea that nutrition was an essential building block of self-improvement. He started eating dressing-free salads and supplement-filled health shakes that he insisted Hu prepare for him.

O’Loughlin also became convinced that Eaton Vance was swindling him. He talked Hu in circles about how he should have been earning far more money through commissions than he was, and he became argumentative with his bosses. Late in Hu’s third trimester, O’Loughlin sat down with colleagues for what he thought was a regular meeting. Instead, they took his work computers and informed him that he was fired. As Hu’s due date approached, O’Loughlin became preoccupied with the idea of suing the company.

Hu went into labor on July 27, 2011, nine months and 17 days after her marriage to O’Loughlin. It was a difficult birth. Hu, a petite woman, had to deliver an 8.3-pound baby. She was in such tremendous pain that doctors pumped her full of medication. “I couldn’t push the baby out, so they used a vacuum [extractor],” Hu said. Once Pierce arrived, there were more complications—his oxygen levels were dangerously low.

Rather than express concern for the baby or his wife, O’Loughlin seemed put off by everything that was happening. He had expected a cinematic birth. “He kept saying, ‘That wasn’t normal,’ ” Hu recalled. “He was so obsessed with the birth not being right.”

Nothing, it seemed, was ever right for O’Loughlin.

Star-shaped cells may play role in how your brain merges info

Two bike paths merge to become one.

Astrocytes may be a key player in the brain’s ability to process external and internal information simultaneously, according to a new study.

Long thought of as “brain glue,” the star-shaped cells called astrocytes belong to a family of central nervous system cells known as glia, which help regulate blood flow and synaptic activity, keep neurons healthy, and play an important role in breathing.

Despite this growing appreciation for astrocytes, much remains unknown about the role these cells play in helping neurons and the brain process information.

“We believe astrocytes can add a new dimension to our understanding of how external and internal information is merged in the brain,” says Nathan Smith, associate professor of neuroscience at the Del Monte Institute for Neuroscience at the University of Rochester.

“More research on these cells is necessary to understand their role in the process that allows a person to have an appropriate behavioral response and also the ability to create a relevant memory to guide future behavior.”

The way our body integrates external with internal information is essential to survival. When something goes awry in these processes, behavioral or psychiatric symptoms may emerge.

Smith and coauthors point to evidence that astrocytes may play a key role in this process. Previous research has shown astrocytes sense the moment neurons send a message and can simultaneously sense sensory inputs. These external signals could come from various senses such as sight or smell.

Astrocytes respond to this influx of information by modifying their calcium (Ca2+) signaling directed towards neurons, providing them with the most suitable information to react to the stimuli.

The authors hypothesize that this astrocytic Ca2+ signaling may be an underlying factor in how neurons communicate and in what happens when a signal is disrupted. But much is still unknown about how astrocytes and neuromodulators, the chemical signals sent between neurons, work together.

“Astrocytes are an often-overlooked type of brain cell in systems neuroscience,” Smith says. “We believe dysfunctional astrocytic calcium signaling could be an underlying factor in disorders characterized by disrupted sensory processing, like Alzheimer’s and autism spectrum disorder.”

Smith has spent his career studying astrocytes. As a graduate student at the University of Rochester School of Medicine and Dentistry, Smith was part of the team that discovered an expanded role for astrocytes. Apart from absorbing excess potassium, astrocytes themselves could cause potassium levels around the neuron to drop, halting neuronal signaling. This research showed, for the first time, that astrocytes did more than tend to neurons; they could also influence the actions of neurons.

“I think once we understand how astrocytes integrate external information from these different internal states, we can better understand certain neurological diseases. Understanding their role more fully will help propel the future possibility of targeting astrocytes in neurological disease,” Smith says.

The communication between neurons and astrocytes is far more complicated than previously thought. Evidence suggests that astrocytes can sense and react to change—a process that is important for behavioral shifts and memory formation.

The study authors believe that learning more about astrocytes will lead to a better understanding of cognitive function and to advances in treatment and care.

The study appears in Trends in Neurosciences.

Additional coauthors are from the University of Copenhagen.

The National Institutes of Health, the National Science Foundation, the European Union under the Marie Skłodowska-Curie Fellowship, the ONO Rising Star Fellowship, the Lundbeck Foundation Experiment Grant, and the Novo Nordisk Foundation supported the work.

Source: University of Rochester

The post Star-shaped cells may play role in how your brain merges info appeared first on Futurity.

UK watchdog no longer thinks Microsoft’s Activision Blizzard merger is a threat to console competition (updated)

The UK is softening its objections to Microsoft's buyout of Activision Blizzard. The Competition and Markets Authority (CMA) has revised its findings and determined that the merger won't lead to significantly reduced competition in the console space. While the evidence remains the same, the watchdog now finds that Microsoft's ownership of franchises like Call of Duty would "not materially affect" Sony's ability to compete with its PlayStation systems. Microsoft could "degrade" PlayStation's appeal, the CMA says, but Sony's platform has a strong enough catalog that the damage would be limited.

The Authority also reconsidered its view that Microsoft might benefit from shutting out Call of Duty's PlayStation audience. More recent Microsoft data suggests the company could suffer substantial losses in "any plausible scenario" by making the game franchise Xbox-only or offering exclusive perks, according to the CMA. Not enough gamers would switch from PlayStation to Xbox to make that strategy work, in other words.

The CMA makes clear that the updated findings don't change its concerns about cloud gaming services. In February, the regulator said Microsoft represented up to 70 percent of the worldwide cloud gaming market, and completing the Activision Blizzard purchase could harm gamers who can't afford an expensive console or PC.

We've asked Microsoft and Sony for comment. Activision Blizzard tells Engadget in a statement that the CMA now has an "improved understanding" of the console market, and that Microsoft already has solutions in place for remaining issues. Activision maintains that Sony is only trying to "protect its dominance" by contesting the acquisition.

Sony has long opposed the deal and calls it a "threat to our industry" that could hurt the quality of Call of Duty on PlayStation. Microsoft has made a number of concessions in hopes of clinching the Activision Blizzard takeover, including 10-year multi-platform promises for Call of Duty as well as arrangements to bring games to rival cloud platforms.

The UK overseer's change of heart doesn't overcome resistance to the merger in other countries. The US Federal Trade Commission is still suing to block the union, and points out that Microsoft made future titles like Redfall Xbox-exclusive on consoles despite assurances to European Union officials. Objectors can also comment on the findings before the end of March. However, the new stance still improves Microsoft's chances of completing the acquisition — a major regulatory body just dropped its main point of contention.

Update 3/24 4:10PM ET: Microsoft President Brad Smith says his company "appreciates" the CMA's review, and believes it "underscores a growing consensus" that the Activision Blizzard pact will create more competition.  

This article originally appeared on Engadget at https://www.engadget.com/uk-watchdog-no-longer-thinks-microsofts-activision-blizzard-merger-is-a-threat-to-console-competition-140206114.html?src=rss

ACTIVISION-M&A/MICROSOFT

Microsoft logo is seen on a smartphone placed on displayed Activision Blizzard's games characters in this illustration taken January 18, 2022. REUTERS/Dado Ruvic/Illustration

PayPal launches passkey logins for Android in the US

PayPal is expanding passkey logins to Android users in the US, so long as they access the website through the Chrome browser. The payment processor first introduced passkey logins for Apple's computers and tablets running macOS Ventura and iPadOS 16 in October last year. Google had yet to release stable passkey support for Android and Chrome at the time, but PayPal promised to make the password alternative available on other platforms and in other countries in the future.

By December last year, passkey support had rolled out to stable Chrome. Now PayPal is making good on its promise, with some limitations. The login option isn't available in the payment processor's Android app yet, and users can only activate it if they're using Chrome on a device running Android 9 or later.

The new authentication technology allows users to access websites and services that support it without having to type in usernames and passwords. While it can use biometric authentication to verify a user's identity, it's not quite the same as existing login tech that auto-populates login boxes using facial or fingerprint recognition. The technology creates a cryptographic key pair — one public and one private — that becomes associated with a user's account. Apps and services that support passkeys store the public key and use it to verify a signature created with the private key, which is kept on the user's device. As The Verge notes, some password managers can now sync passkeys between devices, as well.
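As a rough illustration of that key-pair flow, here is a hedged sketch of the browser side of passkey registration using the standard WebAuthn API that passkeys build on. The relying-party name, user details and challenge handling are placeholder assumptions; this is not PayPal's actual implementation.

```typescript
// Sketch of browser-side passkey registration via the WebAuthn API.
// The challenge and user details normally come from the server; the
// values below are placeholders for illustration only.
async function registerPasskey(): Promise<void> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-provided in practice
    rp: { name: "Example Payments", id: "example.com" },   // hypothetical relying party
    user: {
      id: new TextEncoder().encode("user-1234"),
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],    // -7 = ES256
    authenticatorSelection: {
      residentKey: "required",      // store a discoverable credential on the device
      userVerification: "required", // prompt for biometrics or the device passcode
    },
  };
  // The private key never leaves the authenticator; only the public key
  // and credential ID in this response are sent back to the server.
  const credential = await navigator.credentials.create({ publicKey });
  console.log("Created passkey credential:", credential?.id);
}
```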

To activate passkeys for PayPal on Android, eligible users have to log in the traditional way in a Chrome browser first. Then the option to "create a passkey" will appear, and they'll be prompted to verify their identity using biometrics or their phone's passcode. Once setup is complete, they'll no longer have to type anything in to quickly check out purchases with PayPal on Chrome. Passkeys also provide stronger security, since they're resistant to phishing. And because the private half of the key pair stays on the user's device, people's login information won't be compromised in the event of an app or service data breach.
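Subsequent sign-ins use the matching request call from the same WebAuthn API; again, the values below are generic placeholders rather than PayPal's actual code.

```typescript
// Sketch of signing in with an existing passkey. The server verifies the
// returned assertion against the public key it stored at registration.
async function signInWithPasskey(): Promise<void> {
  const publicKey: PublicKeyCredentialRequestOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-provided in practice
    rpId: "example.com",            // hypothetical relying party
    userVerification: "required",   // triggers the biometric or passcode prompt
  };
  const assertion = await navigator.credentials.get({ publicKey });
  // The assertion is signed with the on-device private key; the server
  // checks that signature with the stored public key to log the user in.
  console.log("Got passkey assertion:", assertion?.id);
}
```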

This article originally appeared on Engadget at https://www.engadget.com/paypal-launches-passkey-logins-for-android-web-in-the-us-070157855.html?src=rss

Photo of a person activating a PayPal passkey on an Android phone.