You see it everywhere. On the Kardashian sisters, supermodels Bella Hadid and Emily Ratajkowski, influencers, and celebrities. It’s the “perfect” face of an ethnically ambiguous woman, composed of a chiseled nose, filled lips, a Botoxed forehead, and other cosmetic work. For Tablet, Grazie Sophia Christie examines our culture’s obsession with Instagram Face; the path toward “doomed, globalized sameness” in which women are just copies of one another; and how wealthy women can easily reverse what they’ve done to their face, discarding enhancements like just another fashion trend.
Instagram Face has replicated outward, with trendsetters giving up competing with one another in favor of looking eerily alike. And obviously it has replicated down.
But the more rapidly it replicates, and the clearer our manuals for quick imitation become, the closer we get to singularity—that moment Kim Kardashian fears unlike any other: the moment when it becomes unclear whether we’re copying her, or whether she is copying us.
Among the layoffs at Meta, the company formerly known as and dependent upon Facebook: the staff whose job it was to manage and support influencers. If one is tempted not to care (the cuts suggest that these influencers' fame, and Meta's participation in it, were marketing campaigns that have now ended), there are broader consequences: scammers, imposters, and harassers are running riot in the comments, and once they're done with these uncelebrities, everyone else is next.
Written by David Lyreskog
A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing, and hybrid and artificial intelligence, promises to transform how we think about collective and collaborative cognitive tasks. With implementations ranging from research to entertainment, and from therapeutics to military applications, as these tools continue to improve, we need to anticipate and monitor their impacts – how they may affect our society, but also how they may reshape our fundamental understanding of agency, responsibility, and other concepts which ground our moral landscapes.
In a new paper, I, together with Dr. Hazem Zohny, Prof. Julian Savulescu, and Prof. Ilina Singh, show how these new technologies may reshape fundamental components of widely accepted concepts pertaining to moral behaviour. The paper, titled ‘Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds’, was just published in Neuroethics, and is freely available as an Open Access article through the link above.
In the paper, we argue that the received views on how we (should) ascribe responsibility to individuals and collectives map poorly onto networks of these ‘Collective Minds’. The intimately collective nature of direct multiple-brain interfaces, for instance, where human minds can collaborate on and complete complex tasks without necessarily being in the same room – or even on the same continent! – seems to suggest a collectivist moral framework for ascribing agency and responsibility. However, the technologies we are seeing in R&D do not meet the criteria we would normally turn to when applying such frameworks; they do not, for instance, seem to require that participants have shared goals, know what the goals of other participants are, or even know whether they are collaborating with another person or a computer.
In anticipating and assessing the ethical impacts of Collective Minds, we propose that we move beyond binary approaches to thinking about agency and responsibility (i.e. that they are either individual or collective), and that relevant frameworks for now focus on other aspects of significance to ethical analysis, such as (a) technical specifications of the Collective Mind, (b) the domain in which the technology is deployed, and (c) the reversibility of its physical and mental impacts. However, in the future, we will arguably need to find other ways to assess agency constellations and responsibility distribution, lest we abandon these concepts completely in this domain.
Read more of this story at Slashdot.
If you're in the market for a Microsoft Surface computer or tablet, now's a good time to grab some deals on the lineup.
The Surface Pro 9 is the best tablet-laptop on the market. It succeeds in mixing the best of the tablet and laptop experience, being a tablet PC that runs Windows 11, uses a backlit detachable keyboard cover, and is optimized for use with a digital pen (the Surface Pen). It's lightweight and has a smaller footprint than most 2-in-1 laptops while packing enough power to edit media and handle light gaming at higher configurations.
The entry-level Surface Pro 9 (Intel Core i5, 8GB RAM, 128GB SSD) is currently on sale for $900 at Best Buy, down from $1,000, and it comes with a free Surface Keyboard cover, which typically runs you an extra $180. That's $300 less than the entry point for Apple's 12.9-inch iPad Pro, which can't run full programs (only apps) and doesn't include its separately sold $350 Magic Keyboard. This 13-inch configuration is great for those who simply want a device to edit documents and surf the web without sacrificing the ability to use full programs.
How, in a bid to get off Facebook as a platform, Buy Nothing founders Liesl Clark and Rebecca Rockefeller have struggled to find a viable business model for a gifting movement conceived to flout capitalism while building community.
At the one-year anniversary of its launch, the Buy Nothing app had been downloaded 600,000 times, but only 91,000 people were regularly using it, not many more than at the beginning. Meanwhile, the Facebook groups from which the founders had disengaged were thriving without them. Global membership had surpassed 7 million. When I asked what Rockefeller and Clark thought would happen to Buy Nothing Inc. if they couldn’t come up with additional funding, they said they weren’t interested in thinking in such fatalistic terms.
The metaverse heralds an age in which hardly anyone still believes that tech firms can actually solve our problems.
The US Department of Transportation is investigating allegations that Elon Musk's brain-computer interface company, Neuralink, violated federal transportation regulations when it shipped contaminated implants removed from the brains of deceased research monkeys infected with multiple types of dangerous pathogens. The alleged violations could have put humans at risk of exposure to hazardous germs, including drug-resistant bacteria and a potentially life-threatening herpes virus.
Reuters was the first to report the department's investigation, which was sparked by allegations brought Thursday by the Physicians Committee for Responsible Medicine (PCRM), a medical group that advocates for animal welfare in medical research. The Department of Transportation confirmed to Ars on Friday that it has opened a standard investigation of Neuralink in response to PCRM's allegations.
In a letter addressed to Transportation Secretary Pete Buttigieg and William Schoonover, associate administrator of the department's Pipeline and Hazardous Materials Safety Administration, the PCRM laid out its evidence for possible violations of hazardous material transportation regulations based on a trove of documents and emails obtained through public record requests. The advocacy group says the evidence shows Neuralink's contaminated hardware was not properly packaged to prevent exposure to humans and that Neuralink employees who transported the material had failed to undergo legally required training on how to safely transport such material.
Babies who are 6 to 9 months old can form memories of masked faces and recognize those faces when they’re unmasked.
The new study should allay concerns of many parents and childhood experts who worry about possible developmental harm from widespread face-masking during the pandemic.
For the study Michaela DeBolt, a doctoral candidate in cognitive psychology, and Lisa Oakes, a professor in the psychology department and at the Center for Mind and Brain at the University of California, Davis, used eye tracking to study how masks influence infants’ facial recognition.
They showed 58 babies, each seated on a parent’s lap or in a highchair, pairs of masked and unmasked women’s faces on a computer screen, while cameras recorded where they looked. Because babies linger longer over unfamiliar images, the researchers could derive which faces they recognized, DeBolt says.
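The looking-time logic described above can be sketched as a simple novelty-preference score. Note that the function name and numbers below are invented for illustration; the study's actual eye-tracking analysis is more involved (gaze regions, trial-level statistics, and so on).

```python
def novelty_preference(time_on_novel: float, time_on_familiar: float) -> float:
    """Proportion of total looking time spent on the novel face."""
    total = time_on_novel + time_on_familiar
    if total == 0:
        raise ValueError("no looking time recorded")
    return time_on_novel / total

# Because infants linger longer over unfamiliar images, a score reliably
# above 0.5 suggests the infant treats the other face as familiar, i.e.
# recognizes it; a score near 0.5 suggests no recognition.
score = novelty_preference(time_on_novel=6.2, time_on_familiar=3.8)
print(round(score, 2))  # 0.62
```

In this hypothetical trial, the infant spent 62 percent of its looking time on the novel face, which would be read as evidence of recognizing the familiar one.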
The testing took place at the Infant Cognition Lab at the Center for Mind and Brain in Davis, California, from late December 2021 to late March 2022, during a statewide mask mandate and the arrival of the coronavirus Omicron variant.
“When babies learned a masked face, and then they saw that face again unmasked, they recognized it,” DeBolt says.
However, when the order was reversed, babies did not show strong recognition of masked faces that they first saw unmasked. DeBolt says that was similar to her own experience of not instantly recognizing a friend who was wearing a face mask.
Learning faces is central to how babies learn to talk, perceive emotions, develop relationships with their caregivers, and explore their environment, Oakes says. “So people were very worried about face masks and the effect they would have on how infants are learning about human faces.”
Oakes, an expert on cognitive development in infancy, says the study highlights a remarkable ability of babies to adapt. “I think that it should be very reassuring to parents in general,” she says. “Babies all over the world develop and thrive.
“There are so many variations in babies’ everyday lived experience,” she adds. “As long as they are well cared for and fed and they get love and attention, they thrive. We can get into a mode where we think the way we do things is the best way to do things and that anything different is going to be a problem. And that’s clearly not the case.”
The study appears in a special issue of the journal Infancy, which focused on the impact of COVID-19 on infant development.
Source: UC Davis
The post Mask or no mask, babies remember a face appeared first on Futurity.
Like any social media platform, Truth Social relies on advertising to drive revenue, but as Twitter’s highly publicized struggle to retain advertisers has shown, it’s hard to attract major brands when a company’s content moderation capabilities appear undependable. That’s likely why Truth Social—which prides itself on sparking an “open, free, and honest global conversation” by largely avoiding content moderation altogether—has seemingly attracted no major advertisers.
A New York Times analysis of hundreds of Truth Social ads showed that the social media platform’s strategy for scraping by is taking ads from just about anyone. Currently, the platform, which was founded by former president Donald Trump, is attracting ad dollars from “hucksters and fringe marketers” who are peddling products like Trump tchotchkes, gun accessories, and diet pills, the Times reported.
In addition to Truth Social’s apparently struggling ad business, SFGate reported in November that Truth Social’s user base also seems to be dwindling. According to The Righting, a group monitoring conservative media, Truth Social traffic peaked last August at 4 million unique visitors but dropped to 2.8 million by October.
It's fair to say that, once the pandemic started, sharing misinformation on social media took on an added, potentially fatal edge. Inaccurate information about the risks posed by the virus, the efficacy of masks, and the safety of vaccines put people at risk of preventable death. Yet despite the dangers of misinformation, it continues to run rampant on many social media sites, with moderation and policy often struggling to keep up.
If we're going to take any measures to address this (something it's not clear that social media services are interested in doing), then we have to understand why sharing misinformation is so appealing to people. An earlier study indicated that people care about making sure that what they share is accurate, but they fail to check in many cases. A new study elaborates on that by getting into why this disconnect develops: for many users, clicking "share" becomes a habit, something they pursue without any real thought.
People find plenty of reasons to post misinformation that have nothing to do with whether they mistakenly believe the information is accurate. The misinformation could make their opponents, political or otherwise, look bad. Alternately, it could signal to their allies that they're on the same side or part of the same cultural group. But the initial experiments described here suggest that this sort of biased sharing doesn't explain a significant amount of misinformation sharing.
This week on my podcast, I read my latest Locus column, “Social Quitting,” about the enshittification lifecycle of social media platforms.
But as Facebook and Twitter cemented their dominance, they steadily changed their services to capture more and more of the value that their users generated for them. At first, the companies shifted value from users to advertisers: engaging in more surveillance to enable finer-grained targeting and offering more intrusive forms of advertising that would fetch high prices from advertisers.

This enshittification was made possible by high switching costs. The vast communities who’d been brought in by network effects were so valuable that users couldn’t afford to quit, because that would mean giving up on important personal, professional, commercial, and romantic ties. And just to make sure that users didn’t sneak away, Facebook aggressively litigated against upstarts that made it possible to stay in touch with your friends without using its services. Twitter consistently whittled away at its API support, neutering it in ways that made it harder and harder to leave Twitter without giving up the value it gave you.
“Unearth all the rats that have seized power and shoot them,” read an ad approved by Facebook just days after a mob violently stormed government buildings in Brazil’s capital.
That violence was fueled by false election interference claims, mirroring attacks in the United States on January 6, 2021. Previously, Facebook-owner Meta said it was dedicated to blocking content designed to incite more post-election violence in Brazil. Yet today, the human rights organization Global Witness published results of a test that shows Meta is seemingly still accepting ads that do exactly that.
Global Witness submitted 16 ads to Facebook, with some calling on people to storm government buildings, others describing the election as stolen, and some even calling for the deaths of children whose parents voted for Brazil’s new president, Luiz Inácio Lula da Silva. Facebook approved all but two ads, which Global Witness digital threats campaigner Rosie Sharpe said proved that Facebook is not doing enough to enforce its ad policies restricting such violent content.
Official correspondence from America's diplomats is getting a bit of a spruce-up next month. From February 6, the US Department of State will adopt Microsoft's sans-serif Calibri in 14-point size "for all paper submitted to the Executive Secretariat," according to The Washington Post's diplomacy reporter John Hudson.
big news for font freaks: Times New Roman is being phased out at the State Department & replaced by Calibri. Secretary Blinken sent a cable to all embassies today directing staff not to send him any more papers with Times New Roman. Subject: "The Times (New Roman) are a-Changin"
— John Hudson (@John_Hudson) January 17, 2023
The move sparked a somewhat tendentious discussion in the Ars virtual office earlier today. In the cable, the State Department refers to Times New Roman and Calibri as fonts. But teeeeeeechnically, it should have referred to Times New Roman and Calibri as typefaces. A font, rather, is a particular instantiation of a typeface: a specific size, weight, character spacing, or style (italic, for example).
"If we’re being pedantic (AND I AM!), a font is a clade of a typeface, I think? And yes, while switching typefaces might mean you are also switching the style of text you’re using, it’s not a semantically meaningful phrase," said a rather pedantic colleague.