FreshRSS


The Morning After: Meta's Instagram-linked Twitter rival could arrive this week

As Twitter continues to figuratively kneecap itself by limiting tweet views, Meta is hustling to bring its Twitter rival to reality. A listing for an app called Threads was spotted on the iOS App Store with an estimated release date of July 6th. In May, a report said the microblogging service was nearing completion and could be out as soon as the end of June. While an end-of-June launch didn't quite happen, the app could arrive when Twitter users are more willing (and maybe even eager) to finally jump ship.

Twitter boss Elon Musk announced at the weekend that verified accounts — which translates to paying users — can read 6,000 posts daily, while non-paying users can only read 600. He said the website is adopting the measure to "address extreme levels of data scraping [and] system manipulation."

From both the app listing and rumors, we expect Threads will migrate your followers and circles from your existing Instagram handle, so you should have an active timeline right from the outset. That is, if you’re an Instagram user.

– Mat Smith

You can get these reports delivered daily direct to your inbox. Subscribe right here!

The biggest stories you might have missed

Amazon is offering a $5 credit when you buy a $50 eGift card for Prime Day

Blue Origin is planning to open new launch sites outside the US

Tidal is increasing its HiFi plan to $11 per month

The best mobile microphones for 2023

The best cameras for 2023

Compacts, DSLRs, action cams and, of course, mirrorless cameras.


Since smartphones started eating casual photography’s lunch, camera makers have focused on devices designed for very specific uses. Action cams provide sharp, fluid video. Compact cameras target both tourists and vloggers. And DSLRs are available at some of the best prices we’ve seen. Then there are mirrorless cameras, which continue to improve their autofocus and video. And that’s where some guidance helps. Whether you’re a creator looking for just the right vlogging camera, an aspiring wildlife photographer or a sports enthusiast, we’ll help you find the perfect camera to match your budget and needs.

Continue reading.

There’s an animated GIF generator now

Are your group chats ready for this level of nonsense?

Remember the early days of the AI hype train, when everyone spent their time making stupid images using text prompts? If you want to recapture the nostalgic haze of, uh, late 2022, Picsart has you covered. The popular image editor just launched an AI-powered animated GIF generator, calling the tool its “most unhinged” platform yet. Type a bunch of nonsense into the chat box, wait a minute or so and marvel at your “chaotic and eccentric” creation. The platform’s integrated into the regular Picsart app and is available for iOS, Android devices and on the web.

Continue reading.

Twitter launches 'new' TweetDeck as the old version breaks down

The feature will also be exclusive to Twitter Blue in 30 days.

If you've been having trouble using Twitter recently, you aren't alone — the service has been having issues since it started limiting the daily number of posts users could view. Although many of the platform's issues stabilized over the weekend, TweetDeck remains broken unless users switch to the beta version of the list aggregator. Now, Twitter is gearing up to solve the issue by making that beta version of TweetDeck the main version. According to Twitter Support, the feature will become exclusive to Twitter Blue subscribers in the near future, noting that "in 30 days, users must be Verified to access TweetDeck."

Continue reading.

Some HBO shows are streaming on Netflix in the US for the first time

'Insecure' is now on the platform, with 'Band of Brothers' and 'Six Feet Under' arriving later.

There really is an HBO show on Netflix. All five seasons of Issa Rae's highly acclaimed comedy-drama series Insecure are now streaming on Netflix in the US. Even more HBO shows are on the way, too. Band of Brothers, The Pacific, Six Feet Under and Ballers are also coming to Netflix as part of the deal, the company told Deadline. Meanwhile, Netflix users outside the US can stream True Blood on the service.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-metas-instagram-linked-twitter-rival-could-arrive-this-week-111508536.html?src=rss


A Case for AI Wellbeing (guest post)

“There are good reasons to think that some AIs today have wellbeing.”

In this guest post, Simon Goldstein (Dianoia Institute, Australian Catholic University) and Cameron Domenico Kirk-Giannini (Rutgers University – Newark, Center for AI Safety) argue that some existing artificial intelligences have a kind of moral significance because they’re beings for whom things can go well or badly.

This is the sixth in a series of weekly guest posts by different authors at Daily Nous this summer.

[Posts in the summer guest series will remain pinned to the top of the page for the week in which they’re published.]

 


A Case for AI Wellbeing
by Simon Goldstein and Cameron Domenico Kirk-Giannini 

We recognize one another as beings for whom things can go well or badly, beings whose lives may be better or worse according to the balance they strike between goods and ills, pleasures and pains, desires satisfied and frustrated. In our more broad-minded moments, we are willing to extend the concept of wellbeing also to nonhuman animals, treating them as independent bearers of value whose interests we must consider in moral deliberation. But most people, and perhaps even most philosophers, would reject the idea that fully artificial systems, designed by human engineers and realized on computer hardware, may similarly demand our moral consideration. Even many who accept the possibility that humanoid androids in the distant future will have wellbeing would resist the idea that the same could be true of today’s AI.

Perhaps because the creation of artificial systems with wellbeing is assumed to be so far off, little philosophical attention has been devoted to the question of what such systems would have to be like. In this post, we suggest a surprising answer to this question: when one integrates leading theories of mental states like belief, desire, and pleasure with leading theories of wellbeing, one is confronted with the possibility that the technology already exists to create AI systems with wellbeing. We argue that a new type of AI—the artificial language agent—has wellbeing. Artificial language agents augment large language models with the capacity to observe, remember, and form plans. We also argue that the possession of wellbeing by language agents does not depend on them being phenomenally conscious. Far from a topic for speculative fiction or future generations of philosophers, then, AI wellbeing is a pressing issue. This post is a condensed version of our argument. To read the full version, click here.

1. Artificial Language Agents

Artificial language agents (or simply language agents) are our focus because they support the strongest case for wellbeing among existing AIs. Language agents are built by wrapping a large language model (LLM) in an architecture that supports long-term planning. An LLM is an artificial neural network designed to generate coherent text responses to text inputs (ChatGPT is the most famous example). The LLM at the center of a language agent is its cerebral cortex: it performs most of the agent’s cognitive processing tasks. In addition to the LLM, however, a language agent has files that record its beliefs, desires, plans, and observations as sentences of natural language. The language agent uses the LLM to form a plan of action based on its beliefs and desires. In this way, the cognitive architecture of language agents is familiar from folk psychology.

For concreteness, consider the language agents built this year by a team of researchers at Stanford and Google. Like video game characters, these agents live in a simulated world called ‘Smallville’, which they can observe and interact with via natural-language descriptions of what they see and how they act. Each agent is given a text backstory that defines their occupation, relationships, and goals. As they navigate the world of Smallville, their experiences are added to a “memory stream” in the form of natural language statements. Because each agent’s memory stream is long, agents use their LLM to assign importance scores to their memories and to determine which memories are relevant to their situation. Then the agents reflect: they query the LLM to make important generalizations about their values, relationships, and other higher-level representations. Finally, they plan: They feed important memories from each day into the LLM, which generates a plan for the next day. Plans determine how an agent acts, but can be revised on the fly on the basis of events that occur during the day. In this way, language agents engage in practical reasoning, deciding how to promote their goals given their beliefs.
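The observe → score → reflect → plan cycle described above can be sketched in a few lines of Python. This is a deliberately simplified, hypothetical sketch: `Agent`, `stub_llm`, and the prompts are our illustrative stand-ins, not the Stanford implementation, which prompts a real LLM at each step.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    backstory: str
    memory_stream: list = field(default_factory=list)  # (importance, text) pairs

    def observe(self, llm, event: str) -> None:
        # The LLM assigns each new memory an importance score.
        score = llm(f"Rate the importance of: {event}")
        self.memory_stream.append((score, event))

    def relevant_memories(self, k: int = 3) -> list:
        # Retrieve the k highest-importance memories for the current situation.
        return sorted(self.memory_stream, reverse=True)[:k]

    def reflect(self, llm) -> str:
        # Query the LLM for higher-level generalizations over stored memories.
        return llm("Summarize values and relationships from: "
                   + "; ".join(m for _, m in self.relevant_memories()))

    def plan(self, llm) -> str:
        # Feed important memories back into the LLM to produce the next day's plan.
        return llm("Plan the next day given: "
                   + "; ".join(m for _, m in self.relevant_memories()))

def stub_llm(prompt: str):
    # Stand-in for a real LLM: fixed importance score, echoed plan text.
    if prompt.startswith("Rate"):
        return 5
    return "PLAN: " + prompt[:40]

agent = Agent(backstory="Isabella wants to plan a Valentine's Day party.")
agent.observe(stub_llm, "Met Klaus at the cafe; invited him to the party.")
print(agent.plan(stub_llm))  # a plan string derived from the scored memories
```

The point of the sketch is the architecture, not the stub: beliefs, desires, and observations live in the memory stream as plain natural-language sentences, and the LLM is consulted at each stage of the loop.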

2. Belief and Desire

The conclusion that language agents have beliefs and desires follows from many of the most popular theories of belief and desire, including versions of dispositionalism, interpretationism, and representationalism.

According to the dispositionalist, to believe or desire that something is the case is to possess a suitable suite of dispositions. According to ‘narrow’ dispositionalism, the relevant dispositions are behavioral and cognitive; ‘wide’ dispositionalism also includes dispositions to have phenomenal experiences. While wide dispositionalism is coherent, we set it aside here because it has been defended less frequently than narrow dispositionalism.

Consider belief. In the case of language agents, the best candidate for the state of believing a proposition is the state of having a sentence expressing that proposition written in the memory stream. This state is accompanied by the right kinds of verbal and nonverbal behavioral dispositions to count as a belief, and, given the functional architecture of the system, also the right kinds of cognitive dispositions. Similar remarks apply to desire.

According to the interpretationist, what it is to have beliefs and desires is for one’s behavior (verbal and nonverbal) to be interpretable as rational given those beliefs and desires. There is no in-principle problem with applying the methods of radical interpretation to the linguistic and nonlinguistic behavior of a language agent to determine what it believes and desires.

According to the representationalist, to believe or desire something is to have a mental representation with the appropriate causal powers and content. Representationalism deserves special emphasis because “probably the majority of contemporary philosophers of mind adhere to some form of representationalism about belief” (Schwitzgebel).

It is hard to resist the conclusion that language agents have beliefs and desires in the representationalist sense. The Stanford language agents, for example, have memories which consist of text files containing natural language sentences specifying what they have observed and what they want. Natural language sentences clearly have content, and the fact that a given sentence is in a given agent’s memory plays a direct causal role in shaping its behavior.

Many representationalists have argued that human cognition should be explained by positing a “language of thought.” Language agents also have a language of thought: their language of thought is English!

An example may help to show the force of our arguments. One of Stanford’s language agents had an initial description that included the goal of planning a Valentine’s Day party. This goal was entered into the agent’s planning module. The result was a complex pattern of behavior. The agent met with every resident of Smallville, inviting them to the party and asking them what kinds of activities they would like to include. The feedback was incorporated into the party planning.

To us, this kind of complex behavior clearly manifests a disposition to act in ways that would tend to bring about a successful Valentine’s Day party given the agent’s observations about the world around it. Moreover, the agent is ripe for interpretationist analysis. Their behavior would be very difficult to explain without referencing the goal of organizing a Valentine’s Day party. And, of course, the agent’s initial description contained a sentence with the content that its goal was to plan a Valentine’s Day party. So, whether one is attracted to narrow dispositionalism, interpretationism, or representationalism, we believe the kind of complex behavior exhibited by language agents is best explained by crediting them with beliefs and desires.

3. Wellbeing

What makes someone’s life go better or worse for them? There are three main theories of wellbeing: hedonism, desire satisfactionism, and objective list theories. According to hedonism, an individual’s wellbeing is determined by the balance of pleasure and pain in their life. According to desire satisfactionism, an individual’s wellbeing is determined by the extent to which their desires are satisfied. According to objective list theories, an individual’s wellbeing is determined by their possession of objectively valuable things, including knowledge, reasoning, and achievements.

On hedonism, to determine whether language agents have wellbeing, we must determine whether they feel pleasure and pain. This in turn depends on the nature of pleasure and pain.

There are two main theories of pleasure and pain. According to phenomenal theories, pleasures are phenomenal states. For example, one phenomenal theory of pleasure is the distinctive feeling theory. The distinctive feeling theory says that there is a particular phenomenal experience of pleasure that is common to all pleasant activities. We see little reason why language agents would have representations with this kind of structure. So if this theory of pleasure were correct, then hedonism would predict that language agents do not have wellbeing.

The main alternative to phenomenal theories of pleasure is attitudinal theories. In fact, most philosophers of wellbeing favor attitudinal over phenomenal theories of pleasure (Bramble). One attitudinal theory is the desire-based theory: experiences are pleasant when they are desired. This kind of theory is motivated by the heterogeneity of pleasure: a wide range of disparate experiences are pleasant, including the warm relaxation of soaking in a hot tub, the taste of chocolate cake, and the challenge of completing a crossword. While differing in intrinsic character, all of these experiences are pleasant when desired.

If pleasures are desired experiences and AIs can have desires, it follows that AIs can have pleasure if they can have experiences. In this context, we are attracted to a proposal defended by Schroeder: an agent has a pleasurable experience when they perceive the world being a certain way, and they desire the world to be that way. Even if language agents don’t presently have such representations, it would be possible to modify their architecture to incorporate them. So some versions of hedonism are compatible with the idea that language agents could have wellbeing.

We turn now from hedonism to desire satisfaction theories. According to desire satisfaction theories, your life goes well to the extent that your desires are satisfied. We’ve already argued that language agents have desires. If that argument is right, then desire satisfaction theories seem to imply that language agents can have wellbeing.

According to objective list theories of wellbeing, a person’s life is good for them to the extent that it instantiates objective goods. Common components of objective list theories include friendship, art, reasoning, knowledge, and achievements. For reasons of space, we won’t address these theories in detail here. But the general moral is that once you admit that language agents possess beliefs and desires, it is hard not to grant them access to a wide range of activities that make for an objectively good life. Achievements, knowledge, artistic practices, and friendship are all caught up in the process of making plans on the basis of beliefs and desires.

Generalizing, if language agents have beliefs and desires, then most leading theories of wellbeing suggest that their desires matter morally.

4. Is Consciousness Necessary for Wellbeing?

We’ve argued that language agents have wellbeing. But there is a simple challenge to this proposal. First, language agents may not be phenomenally conscious — there may be nothing it feels like to be a language agent. Second, some philosophers accept:

The Consciousness Requirement. Phenomenal consciousness is necessary for having wellbeing.

The Consciousness Requirement might be motivated in either of two ways: First, it might be held that every welfare good itself requires phenomenal consciousness (this view is known as experientialism). Second, it might be held that though some welfare goods can be possessed by beings that lack phenomenal consciousness, such beings are nevertheless precluded from having wellbeing, because phenomenal consciousness is a further necessary condition on being a welfare subject.

We are not convinced. First, we consider it a live question whether language agents are or are not phenomenally conscious (see Chalmers for recent discussion). Much depends on what phenomenal consciousness is. Some theories of consciousness appeal to higher-order representations: you are conscious if you have appropriately structured mental states that represent other mental states. Sufficiently sophisticated language agents, and potentially many other artificial systems, will satisfy this condition. Other theories of consciousness appeal to a ‘global workspace’: an agent’s mental state is conscious when it is broadcast to a range of that agent’s cognitive systems. According to this theory, language agents will be conscious once their architecture includes representations that are broadcast widely. The memory stream of Stanford’s language agents may already satisfy this condition. If language agents are conscious, then the Consciousness Requirement does not pose a problem for our claim that they have wellbeing.

Second, we are not convinced of the Consciousness Requirement itself. We deny that consciousness is required for possessing every welfare good, and we deny that consciousness is required in order to have wellbeing.

With respect to the first issue, we build on a recent argument by Bradford, who notes that experientialism about welfare is rejected by the majority of philosophers of welfare. Cases of deception and hallucination suggest that your life can be very bad even when your experiences are very good. This has motivated desire satisfaction and objective list theories of wellbeing, which often allow that some welfare goods can be possessed independently of one’s experience. For example, desires can be satisfied, beliefs can be knowledge, and achievements can be achieved, all independently of experience.

Rejecting experientialism puts pressure on the Consciousness Requirement. If wellbeing can increase or decrease without conscious experience, why would consciousness be required for having wellbeing? After all, it seems natural to hold that the theory of wellbeing and the theory of welfare goods should fit together in a straightforward way:

Simple Connection. An individual can have wellbeing just in case it is capable of possessing one or more welfare goods.

Rejecting experientialism but maintaining Simple Connection yields a view incompatible with the Consciousness Requirement: the falsity of experientialism entails that some welfare goods can be possessed by non-conscious beings, and Simple Connection guarantees that such non-conscious beings will have wellbeing.

Advocates of the Consciousness Requirement who are not experientialists must reject Simple Connection and hold that consciousness is required to have wellbeing even if it is not required to possess particular welfare goods. We offer two arguments against this view.

First, leading theories of the nature of consciousness are implausible candidates for necessary conditions on wellbeing. For example, it is implausible that higher-order representations are required for wellbeing. Imagine an agent who has first-order beliefs and desires but no higher-order representations. Why should this kind of agent lack wellbeing? Suppose that desire satisfaction contributes to wellbeing. Granted, since such agents don’t represent their beliefs and desires, they won’t themselves have opinions about whether their desires are satisfied. But the desires still are satisfied. Or consider global workspace theories of consciousness. Why should an agent’s degree of cognitive integration be relevant to whether their life can go better or worse?

Second, we think we can construct chains of cases where adding the relevant bit of consciousness would make no difference to wellbeing. Imagine an agent with the body and dispositional profile of an ordinary human being, but who is a ‘phenomenal zombie’ without any phenomenal experiences. Whether or not its desires are satisfied or its life instantiates various objective goods, defenders of the Consciousness Requirement must deny that this agent has wellbeing. But now imagine that this agent has a single persistent phenomenal experience of a homogenous white visual field. Adding consciousness to the phenomenal zombie has no intuitive effect on wellbeing: if its satisfied desires, achievements, and so forth did not contribute to its wellbeing before, the homogenous white field should make no difference. Nor is it enough for the consciousness to itself be something valuable: imagine that the phenomenal zombie always has a persistent phenomenal experience of mild pleasure. To our judgment, this should equally have no effect on whether the agent’s satisfied desires or possession of objective goods contribute to its wellbeing. Sprinkling pleasure on top of the functional profile of a human does not make the crucial difference. These observations suggest that whatever consciousness adds to wellbeing must be connected to individual welfare goods, rather than some extra condition required for wellbeing: rejecting Simple Connection is not well motivated. Thus the friend of the Consciousness Requirement cannot easily avoid the problems with experientialism by falling back on the idea that consciousness is a necessary condition for having wellbeing.

We’ve argued that there are good reasons to think that some AIs today have wellbeing. But our arguments are not conclusive. Still, we think that in the face of these arguments, it is reasonable to assign significant probability to the thesis that some AIs have wellbeing.

In the face of this moral uncertainty, how should we act? We propose extreme caution. Wellbeing is one of the core concepts of ethical theory. If AIs can have wellbeing, then they can be harmed, and this harm matters morally. Even if the probability that AIs have wellbeing is relatively low, we must think carefully before lowering the wellbeing of an AI without producing an offsetting benefit.


[Image made with DALL-E]

Some related posts:
Philosophers on GPT-3
Philosophers on Next-Generation Large Language Models
GPT-4 and the Question of Intelligence
We’re Not Ready for the AI on the Horizon, But People Are Trying
Researchers Call for More Work on Consciousness
Dennett on AI: We Must Protect Ourselves Against ‘Counterfeit People’
Philosophy, AI, and Society Listserv
Talking Philosophy with Chat-GPT

The post A Case for AI Wellbeing (guest post) first appeared on Daily Nous.

AMAs Are the Latest Casualty In Reddit's API War

By: BeauHD
An anonymous reader quotes a report from Ars Technica: Ask Me Anything (AMA) has been a Reddit staple that helped popularize the social media platform. It delivered some unique, personal, and, at times, fiery interviews between public figures and people who submitted questions. The Q&A format became so popular that many people host so-called AMAs these days, but the main subreddit has been r/IAmA, where the likes of then-US President Barack Obama and Bill Gates have sat in the virtual hot seat. But that subreddit, which has been called its own "juggernaut of a media brand," is about to look a lot different and likely less reputable. On July 1, Reddit moved forward with changes to its API pricing that has infuriated a large and influential portion of its user base. High pricing and a 30-day adjustment period resulted in many third-party Reddit apps closing and others moving to paid-for models that developers are unsure are sustainable. The latest casualty in the Reddit battle has a profound impact on one of the most famous forms of Reddit content and signals a potential trend in Reddit content changing for the worse. On Saturday, the r/IAmA moderators announced that they will no longer perform these duties: - Active solicitation of celebrities or high-profile figures to do AMAs. - Email and modmail coordination with celebrities and high-profile figures and their PR teams to facilitate, educate, and operate AMAs. (We will still be available to answer questions about posting, though response time may vary). - Running and maintaining a website for scheduling of AMAs with pre-verification and proof, as well as social media promotion. - Maintaining a current up-to-date sidebar calendar of scheduled AMAs, with schedule reminders for users. - Sister subreddits with categorized cross-posts for easy following. - Moderator confidential verification for AMAs. 
- Running various bots, including automatic flairing of live posts The subreddit, which has 22.5 million subscribers as of this writing, will still exist, but its moderators contend that most of what makes it special will be undermined. "Moving forward, we'll be allowing most AMA topics, leaving proof and requests for verification up to the community, and limiting ourselves to removing rule-breaking material alone. This doesn't mean we're allowing fake AMAs explicitly, but it does mean you'll need to pay more attention," the moderators said. The mods will also continue to do bare minimum tasks like keeping spam out and rule enforcement, they said. Like many other Reddit moderators Ars has spoken to, some will step away from their duties, and they'll reportedly be replaced "as needed."

Read more of this story at Slashdot.

Twitter launches 'new' TweetDeck as the old version breaks down

If you've been having trouble using Twitter recently, you aren't alone — the service has been having issues ever since it started limiting the number of posts users could view each day. Although many of the platform's issues stabilized over the weekend, TweetDeck remains broken unless users switch to the beta version of the list aggregator. Now, Twitter is gearing up to solve the issue by making that beta version of TweetDeck the main version, announcing on Monday that it has "launched a new, improved version of TweetDeck."

We have just launched a new, improved version of TweetDeck. All users can continue to access their saved searches & workflows via https://t.co/2WwL3hNVR2 by selecting “Try the new TweetDeck” in the bottom left menu.

Some notes on getting started and the future of the product…

— Twitter Support (@TwitterSupport) July 3, 2023

Despite officially launching, this "new" TweetDeck still calls itself "TweetDeck Preview" in the app, and users still need to opt in to using it via the menu of the original TweetDeck interface. Even so, switching to the new interface does indeed restore basic TweetDeck functionality for users who rely on its list aggregation features. Twitter says the process should be fairly straightforward as well, promising that saved searches, lists and columns should carry over instantly. Although Twitter says the updated preview build now supports Twitter Spaces, polls and other features that were previously missing, it notes that Teams functionality is currently unavailable.

Twitter hasn't officially announced that it's retiring the old version of TweetDeck, but in a thread discussing the issues a Twitter employee suggested the change would be permanent, stating that they were "migrating everyone to the preview version."

Hey folks, looks like the recent changes have broken the legacy TweetDeck, so we're working on migrating everyone to the preview version

— Ben  (@ayroblu) July 3, 2023

Although switching to the new version of TweetDeck potentially resolves the issue, many legacy users may still find themselves without access to the power-user tool in the near future. According to Twitter Support, the feature will become exclusive to Twitter Blue subscribers, noting that "in 30 days, users must be Verified to access TweetDeck." It's unclear if that change will be applied to all users in early August, or if all users will have a 30-day trial of the new TweetDeck before being prompted to subscribe.

This article originally appeared on Engadget at https://www.engadget.com/twitter-launches-new-tweetdeck-as-the-old-version-breaks-down-231939160.html?src=rss


UK universities draw up guiding principles on generative AI

All 24 Russell Group universities have reviewed their academic conduct policies and guidance

UK universities have drawn up a set of guiding principles to ensure that students and staff are AI literate, as the sector struggles to adapt teaching and assessment methods to deal with the growing use of generative artificial intelligence.

Vice-chancellors at the 24 Russell Group research-intensive universities have signed up to the code. They say this will help universities to capitalise on the opportunities of AI while simultaneously protecting academic rigour and integrity in higher education.

Continue reading...

AMAs are the latest casualty in Reddit’s API war


Ask Me Anything (AMA) has been a Reddit staple that helped popularize the social media platform. It delivered some unique, personal, and, at times, fiery interviews between public figures and people who submitted questions. The Q&A format became so popular that many people host so-called AMAs these days, but the main subreddit has been r/IAmA, where the likes of then-US President Barack Obama and Bill Gates have sat in the virtual hot seat. But that subreddit, which has been called its own "juggernaut of a media brand," is about to look a lot different and likely less reputable.

On July 1, Reddit moved forward with changes to its API pricing that has infuriated a large and influential portion of its user base. High pricing and a 30-day adjustment period resulted in many third-party Reddit apps closing and others moving to paid-for models that developers are unsure are sustainable.

The latest casualty in the Reddit battle has a profound impact on one of the most famous forms of Reddit content and signals a potential trend in Reddit content changing for the worse.


Twitter's apps are breaking following Elon Musk's decision to cap tweet rates

Over the last few days, Twitter not only stopped showing tweets unless you're logged in, but also started capping the number of tweets users can read each day ("rate limiting") — ostensibly to combat "data scraping," according to Elon Musk. Those actions are starting to ripple across Twitter's ecosystem, with many users reporting that Tweetdeck (a power-user version of Twitter) no longer works. In addition, Google Search is showing up to 50 percent fewer Twitter URLs due to the logged-in requirement, Search Engine Roundtable reported.

For a lot of users (including Engadget), Tweetdeck effectively stopped functioning, showing only a spinning wheel above most columns. That may be because a bug in Twitter's web app is sending requests in an infinite loop, effectively creating a "self-DDoS" (distributed denial of service), Waxy reported. As researcher Molly White tweeted, the effect is multiplied in Tweetdeck for any column other than "Home," as it keeps "repeatedly retrying 404s," she wrote.

twitter's self-DDoS is worse with tweetdeck 💀 pic.twitter.com/krcLhjnsA2

— Molly White (@molly0xFFF) July 2, 2023
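If the "self-DDoS" description is right, the failure mode is a familiar one: a client that immediately retries a permanently failing request (like a 404) with no cap or backoff will hammer its own servers. A minimal sketch of the difference between that pattern and a capped, backing-off retry loop — illustrative only, with a stand-in `fetch` that always fails; none of this is Twitter's actual code:

```python
import time

def fetch(url):
    """Stand-in for an HTTP request; always returns 404 (a permanent failure)."""
    return 404

def naive_poll(url, attempts):
    """Retry immediately on every failure -- the pattern that floods a backend."""
    calls = 0
    for _ in range(attempts):
        calls += 1
        if fetch(url) == 200:
            break
    return calls

def capped_poll(url, max_attempts=5, base_delay=0.01):
    """Stop on permanent errors and back off exponentially on transient ones."""
    calls = 0
    for attempt in range(max_attempts):
        calls += 1
        status = fetch(url)
        if status == 200 or status == 404:  # success, or retrying can't help
            break
        time.sleep(base_delay * 2 ** attempt)  # wait longer before each retry
    return calls
```

With the stand-in above, `naive_poll` issues one request per loop iteration for as long as it runs, while `capped_poll` gives up after a single 404 — which is why a client that keeps "repeatedly retrying 404s" multiplies load on an already struggling service.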

It's possible to at least get your columns to show up by using a new beta version of Tweetdeck, as Engadget's Matt Brian tweeted. However, those columns are still subject to the rate limits (800 tweets for non-Twitter Blue subscribers), and so most users will stop seeing new tweets shortly after Tweetdeck loads.

On top of that, Google Search may be showing up to 50 percent fewer Twitter URLs following Musk's move to block unregistered users. Using the site: command, Search Engine Roundtable's Barry Schwartz found that Google now has about 52 percent fewer Twitter URLs in its index than it did on Friday. It's still showing recent tweets in the Search carousel, but normal indexing seems to be broken at the moment. "Not that a site command is the best measure, but... Twitter is down [around] 162 million indexed pages so far since this change," Schwartz tweeted.

There's no confirmation that the "self-DDoS" theory is accurate, but a post from developer Sheldon Chang (on Mastodon) indicated that shutting off anonymous access to Twitter may be playing a role in the issues. Twitter has promised that the login requirement and rate limiting are "temporary," but has yet to give a date for eliminating those restrictions.

This article originally appeared on Engadget at https://www.engadget.com/twitters-apps-are-breaking-following-elon-musks-decision-to-cap-tweet-rates-125028807.html?src=rss

(Image: The Twitter logo reflected in mirrors in Mulhouse, eastern France, May 30, 2023. Photo: Sebastien Bozon/AFP via Getty Images)

Report: Vision Pro Headset's Complex Design Forcing Apple to Make 'Drastic' Production Cuts

Apple has been forced to make "drastic cuts" to production forecasts for its Vision Pro mixed reality headset due to the complexity of its design and difficulties in production, reported the Financial Times on Monday.


Apple's $3,500 headset is set to go on sale early next year, but analysts have suggested the lengthy gap between its unveiling at WWDC in June and official launch could have more to do with supply chain problems than a way to give developers time to create apps for the device.

According to the paywalled report, Apple is said to have been unhappy with the productivity of manufacturing partners tasked with supplying the two micro-OLED displays for the wearer's eyes and the outward-facing curved lenticular lens. The micro-OLED displays were reportedly supplied by Sony and TSMC for the prototypes, but it is not known who is supplying them at scale.

As a result of production challenges, Apple is preparing to make fewer than 400,000 units in 2024, according to the report, citing sources close to Apple and Luxshare, the Chinese contract manufacturer that will initially assemble the device. Meanwhile, two China-based sole suppliers of certain components for the Vision Pro reportedly said Apple was only asking them for enough for 130,000 to 150,000 units in the first year.

"Both projections imply a significant cut to production from an earlier, internal sales target of 1mn units in the first 12 months," wrote the FT. "The forecasts for low volumes reflect Apple's lack of confidence in being able to scale production, according to analysts and industry experts, following years of missed deadlines in launching the device," the report added.

Meanwhile, Apple is said to have pushed back plans to launch a more affordable version of the headset that it hopes will appeal to the mass market. Apple is reportedly working with Korean display makers Samsung and LG on the second-generation headset, and has explored using mini-LED for the displays to drive the price lower. However, FT's sources claimed Apple was insisting on using micro-OLED even for the non-Pro headset, despite suppliers failing to match expectations.

Bloomberg's Mark Gurman previously reported that Apple plans to launch a more affordable version of its Vision Pro headset by the end of 2025, likely to be called "Apple Vision One," or more simply, "Apple Vision."

This article, "Report: Vision Pro Headset's Complex Design Forcing Apple to Make 'Drastic' Production Cuts" first appeared on MacRumors.com

Red wall Tory MPs put pressure on Sunak over net migration

Group issues 12-point plan calling for stricter immigration rules for care workers, students and refugees

Rishi Sunak is facing demands from “red wall” Conservative MPs to slash the number of overseas care workers, foreign students and refugees allowed into the UK in time for the next election.

The MPs from the 2017 and 2019 intake, who call themselves the New Conservatives, have issued a 12-point plan to cut net migration to Britain from 606,000 to 226,000 before the end of 2024.

Measures include:

- A cap of 20,000 on the number of refugees accepted for resettlement in the UK.
- Caps on future humanitarian schemes, such as those for Ukraine, Afghanistan and Hong Kong, should the predicted 168,000 reduction not be realised.
- Implementation of the provisions of the illegal migration bill, which the MPs claim would cut long-term international migration (LTIM) by at least 35,000.
- Raising the minimum combined income threshold for sponsoring a spouse to £26,200 and raising the minimum language requirement to B1 (intermediate level), which the MPs claim would cut LTIM by an estimated 20,000.
- Making the migration advisory committee report on the effect of migration on housing and public services, not just the jobs market, by putting future demand on a par with labour requirements in all studies.
- A 5% cap on the amount of social housing that councils can give to non-UK nationals.
- Raising the immigration health surcharge to £2,700 per person a year.

Twitter puts strict cap on how many tweets users can read each day

Twitter has begun aggressively limiting how many tweets users can view per day. On Saturday afternoon, Elon Musk said the company would restrict unverified accounts to reading 600 posts per day and new accounts to only 300 tweets daily. Meanwhile, Twitter will allow verified accounts to read 6,000 posts each day. For most people, that means that, short of paying for Twitter Blue, they can spend only a minute or two on Twitter before encountering a "rate limit exceeded" error. Less than two hours later, Musk said Twitter would "soon" ease the limits to 8,000 for verified accounts and 800 for those without Twitter Blue.

To address extreme levels of data scraping & system manipulation, we’ve applied the following temporary limits:

- Verified accounts are limited to reading 6000 posts/day
- Unverified accounts to 600 posts/day
- New unverified accounts to 300/day

— Elon Musk (@elonmusk) July 1, 2023

Musk claimed the "temporary" limits were put in place to address "extreme levels of data scraping" and "system manipulation." The day before, Twitter began preventing people not logged into the site from viewing tweets. Like the usage limit, Musk has claimed the login restriction will only be temporary and was put in place in response to data scrapers. "Several hundred organizations (maybe more) were scraping Twitter data extremely aggressively, to the point where it was affecting the real user experience," Musk said Friday. He later claimed "almost every company doing AI" was scraping Twitter to train their models. "It is rather galling to have to bring large numbers of servers online on an emergency basis just to facilitate some AI startup’s outrageous valuation," he said.  

Musk did not say what "new" means in the context of an account, nor how long Twitter plans to restrict users this way. He also didn't state whether viewing ads counts against a user's view limit. Either way, the restrictions severely limit the usability of Twitter, making it difficult, for instance, to verify whether a screenshot of a tweet is authentic. A cynical view of the situation would suggest Twitter is trying to find ways to squeeze every bit of money it can from its user base. In March, the company introduced API changes that could cost some organizations as much as $42,000 a month. However, that move and the introduction of Twitter Blue don't appear to have offset the advertising revenue Twitter has lost since Musk's takeover. Limiting how many tweets, and by extension ads, users can see is unlikely to make the company's remaining advertisers happy.
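For intuition, a daily cap like this can be modeled as a simple per-account counter. The sketch below is purely illustrative — the `DailyReadQuota` class and `TIER_LIMITS` names are ours, not Twitter's API — using the tier limits from Musk's announcement:

```python
from dataclasses import dataclass

# Announced tiers: verified 6,000; unverified 600; new unverified 300 posts/day.
TIER_LIMITS = {"verified": 6000, "unverified": 600, "new_unverified": 300}

@dataclass
class DailyReadQuota:
    """Count post views against a fixed daily cap (resets would happen daily)."""
    limit: int
    used: int = 0

    def record_view(self) -> bool:
        """Return True if this view is allowed; False once the cap is exhausted."""
        if self.used >= self.limit:
            return False  # the client would surface "rate limit exceeded"
        self.used += 1
        return True
```

Under this model, a new unverified account exhausts its 300-view quota in a minute or two of scrolling, after which every further `record_view` call fails — matching the "rate limit exceeded" behavior users reported.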

This article originally appeared on Engadget at https://www.engadget.com/twitter-puts-strict-cap-on-how-many-tweets-users-can-read-each-day-182623928.html?src=rss

(Image: The Twitter logo at its corporate headquarters in San Francisco, California, US, November 18, 2022)

Swipe Right Into Tinder’s 7-Story West Hollywood Headquarters

Tinder’s new headquarters in West Hollywood, California, designed by Rapt Studio, reads as a thoughtful response to the transformative changes that have reshaped corporate workplace dynamics over the last few years. The seven-story, 77,000-square-foot project, handled by the same creative consultancy behind creative spaces for the likes of Google, Dropbox, and Vans, aims to reestablish the pandemic-frayed ties that bind individuals into creative, collaborative teams – and, by extension, the app users they seek to support – through a multi-level headquarters layered with opportunities for collaboration and connection.

One young woman seated with two young men across sectional sofa conversing in Tinder HQ surrounded by palm plants.

Rapt Studio began the project by researching existing public space typologies, from the town square to the speakeasy, that empower a progressive deepening of ties that bind workers with their work in an organic manner.

Wide open communal space with open doors, and several Tinder employees conversing, others checking their mobile devices.

Modeled after a town square, The Commons is the largest and most expansive of the spaces, as well as the entry point into Tinder’s new headquarters. The airy environment is intended to encourage casual interactions and is large enough to accommodate company-wide gatherings.

Woman at green round table seated at her laptop at Tinder's in-house coffee cafe seated around a circular built-in cushioned seating area.

Tinder's coffee bar with barista preparing a shot with two employees seated at the bar laughing and woman in the background seated at a table work from her laptop.

The café — or “Boost Bar” — sits on the second floor, giving employees access to the skills of an in-house barista, and in turn providing an informal space to work away from the desk.

Back view of man in green long sleeve shirt looking at white wall displaying a mix of emojis and other icon-based graphics protruding from the surface, alongside a "Game Over" sign glowing to the left.

Wide view of long glass table following a long white wall displaying a mix of emojis and other icon-based graphics protruding from the surface, alongside a "Game Over" sign glowing to the left. Orange bannered lighting is overhead.

The IT help desk is fashioned after the nostalgic memories of the neighborhood arcade.

Room with two sides of corner sliding doors open with a "La Galleria" sign outside with "WIP" displayed on it; people inside are moving standing desks on wheels.

Diffuse lighting, custom modular furniture on wheels, and walls clad in top-to-bottom whiteboards all inhabit La Galleria, a room drawing its atmosphere from the workshops and displays of an artist studio.

Two women seated at a hot pink desk and chairs near floor to ceiling bookshelves inside Tinder headquarters.

A custom hot-pink central table with cutouts and bookshelves filled with a few books and design objects, with muted pink carpeting.

A custom hot-pink central table with cutouts along the edges offers a surprisingly idiosyncratic hue against the space’s otherwise muted palette.

Young woman seated and laughing, looking at her laptop seated in a gray armchair with bookshelf in background and backpack on the floor.

Floor six is dedicated to quieter activities and appropriately demarcated as The Stacks, a tranquil communal space fashioned after a library.

Nine Tinder employees standing and seated across various plush deep blue upholstered seats undulating across a muted blue checker carpeted floor.

Deep blue hues across plush fabrics, with curvilinear walls and curtains framing windows overlooking the LA skyline give the pinnacle seventh floor a nightclub vibe. Seating arrangements are situated to encourage engagement within intimate groups – a “secret” employee getaway of sorts.

“Connection is at the heart of the Tinder brand,” says Rapt Studio CEO and Chief Creative Officer David Galullo. “To design a space that deepens connection within Tinder, we looked to the places where we typically build relationships and then mapped them onto a floor plan. The end project emphasizes how design itself can be a force of connection.”

Outdoor seating area with brick floor, black metal chairs and tables, alongside a trio of red and light purple stools surrounded by large palm leaf plants.

Tinder’s new HQ shares some similarities with Rapt Studio’s previous project, The Schoolhouse, a creative office for The Google School for Leaders. Both share the goal of spurring informal engagement between team members by carving out shared and intimate spaces, furnished to empower employees to adapt those spaces to their needs as they arise.

Mizetto’s Summer Collection Tests Design’s Boundaries

Creative and fun, Mizetto’s Summer 2023 Collection lives somewhere between work and play. The brand has pushed its own capabilities, exploring new materials, production methods, and functionality. Made in Sweden, the latest release includes a wood chair, a versatile table with attachments, a leaning piece, modular planters, and a trash/recycling bin. All share the qualities of clean lines and curves and leave you wanting to experience each for yourself. Known for its color combinations, Mizetto has also added five new “Nordic noir” hues: rusty burgundy, cloudy latte, forest green, latte, and dusty blue.

long dark maroon leaning bench with small attached round table

Lumber by Addi \\\ Photo: Jonas Lindstrom

Perhaps the most curious addition is Lumber by Addi, a piece meant for leaning, lingering, and loitering. The soft beam’s release marks the first upholstered product introduced by the brand. It’s a great answer to adding seating to small spaces, and we can’t help but note its resemblance to a dynamic piece of gymnastics equipment. For a quick stop on the go – a coffee or an email check – Lumber’s small tray-like table adds further functionality to a piece with no obvious front or back. It can even be hung on a wall for maximum space saving. Lumber’s upholstery is flameproof wool, with a cover that’s fully removable, repairable, and exchangeable. The legs are powder-coated metal.

long black leaning bench with small attached round table mounted to a wall

Lumber by Addi \\\ Photo: Jonas Lindstrom

long dark maroon leaning bench with small attached round table and small version mounted to the wall

Lumber by Addi \\\ Photo: Jonas Lindstrom

two long black leaning benches with small attached round table mounted to the wall

Lumber by Addi \\\ Photo: Jonas Lindstrom

monochromatic styled blue space with three chairs

Embrace Chair by Sami Kallio \\\ Photo: Jonas Lindstrom

A wooden chair is new territory for Mizetto, so they turned to an expert for help – Finnish-Swedish furniture designer and woodworker Sami Kallio. The Embrace armchair came about because the brand lacked seating in its own spaces; shortly after, Kallio walked in with a fully functioning prototype.

“A few alterations later, Embrace was born; a chair that seemingly hugs its user. I love how it can be hung on a tabletop and stacked, but still provide us with all the beauty and comfort we seek in a piece of furniture,” said Rickard Muskala, founder and chief of product development.

Kallio is also behind the multi-purpose table in the Embrace series.

styled space with two dining chairs

Embrace Chair + Embrace Table by Sami Kallio \\\ Photo: Jonas Lindstrom

styled space with arm pushing a blue dining chair under a wood dining table

Embrace Chair + Embrace Table by Sami Kallio \\\ Photo: Jonas Lindstrom

detail of wood dining chair

Embrace Chair by Sami Kallio \\\ Photo: Jonas Lindstrom

modular beige planter with greenery against a beige background

Plant Here by addi \\\ Photo: Jonas Lindstrom

Playful, fun, and modular, Addi’s Plant Here gives our green friends a pedestal fitting of their mood-enhancing ways. The planter’s accessible design pays attention to the various needs of different varietals, whether you’re a balcony or office gardener. Features include a generous depth, a transparent inner pot for easy planting, and a range of heights, shapes, sizes, and colors. Combine two or more to form endlessly varied installations.

modular dark maroon and beige planters with greenery against a beige background

Plant Here by addi \\\ Photo: Jonas Lindstrom

three tall cylindrical garbage cans

Pelican by Studio Nooi

Trash and recycling bins are a necessity, but that doesn’t mean they have to look like one. Pelican by Studio Nooi turns them into minimal decorative objects with touchless interaction. Their semicircular shape allows for modular design, creating an oval when placed back to back. Pelican’s design is suitable for residential as well as commercial spaces, and comes in two sizes and a variety of colors.

living space with a staircase, side table, and two tall cylindrical garbage cans

Pelican by Studio Nooi

two tall black cylindrical garbage cans against a black wall

Pelican by Studio Nooi \\\ Photo: Jonas Lindstrom

tall beige cylindrical garbage can against a beige wall

Pelican by Studio Nooi \\\ Photo: Jonas Lindstrom

seven tall cylindrical garbage cans in various muted tones

Pelican by Studio Nooi

To learn more about Mizetto’s Summer 2023 collection, visit mizetto.se.

F5: Crystal Williams Is Far More Than RISD’s President

As Rhode Island School of Design’s (RISD) 18th president, Crystal Williams believes that education, art and design, and a sustained commitment to equity and justice are essential to transforming our society. At RISD, the Detroit-born activist is working to drive meaningful change centered on expanding inclusion, equity, and access. She brings more than two decades of higher-education experience to that work, as a professor of English and in roles overseeing diversity, equity, and inclusion at Boston University, Bates College, and Reed College. The ultimate goal behind Crystal’s role at RISD is to enhance the learning environment by ensuring it includes diverse experiences, viewpoints, and talents.

brown-skinned woman with short black hair wearing a black turtleneck and long gold earrings looks into the camera

Photo: Jo Sittenfeld

However, Crystal’s talents go beyond the halls and classrooms of colleges and universities – she’s also an award-winning poet and essayist. So far, she’s published four collections of poems and is the recipient of several artistic fellowships, grants, and honors. Most recently, Detroit as Barn was named a finalist for the National Poetry Series, the Cleveland State Open Book Prize, and the Maine Book Award. Crystal’s third collection, Troubled Tongues, was awarded the 2009 Naomi Long Madgett Poetry Prize and was a finalist for the 2009 Oregon Book Award, the Idaho Poetry Prize, and the Crab Orchard Poetry Prize. Her first two books, Kin and Lunatic, were published in 2000 and 2002. Crystal’s work regularly appears in leading journals and magazines nationwide.

Today, Crystal Williams is joining us for Friday Five!

high contrast orange sunset of a large body of water

Martha’s Vineyard \\\ Photo: Crystal Williams

1. Silence

Originally, I was going to write about a place that inspires me. But when I truly started to consider places I find inspiring, I realized that each of them elicits and enables silence and stillness, a refraction of silence (at least for me). So then, silence itself is the thing that inspires me. Silence inspires me to delve and investigate and allows me to situate myself in wonder and awe – in the amplitude and magnitude of who and what and how we are as a species, to sometimes take issue with personal fears or traumas or worse – the behaviors that ultimately impede personal and spiritual growth or insight.

For me, silence is a great gift. Perhaps the greatest. It is a balm. Through it, I connect to the world not as Crystal Williams of this particular body but as a congregation of embodied energy and spirit. In this way, it is the catalyst through which all good art, poetry, ideas, and leadership emerge. So it is among the most inspirational things in my life – and among the most rare, given my life.

book opened to a page with a poem

Photo: Crystal Williams

2. Lucille Clifton Poem

I admire many poems. But Lucille Clifton’s “won’t you celebrate with me” – which is how it is commonly known, although Clifton did not originally title the poem in Book of Light – is the one that inspires me the most. It is a poem that speaks to resilience, fortitude, bravery, imagination, and hope, and it names what being a Black woman in the United States can and often does elicit.

“won’t you celebrate with me
what I have shaped into
a kind of life? i had no model.
born in babylon
both nonwhite and woman
….
…come celebrate
with me that everyday
something has tried to kill me
and has failed.”

video still of a brown-skinned woman in a black dress singing into a microphone

Nancy Wilson, Carnegie Hall, 1987 \\\ Video still courtesy YouTube

3. Nancy Wilson, “How Glad I Am,” Carnegie Hall, 1987

There are moments in art when an artist transforms one thing into another, utterly broadening, deepening, and transmuting the original meaning. In this live version of “How Glad I Am” – the encore from her 1987 “Live at Carnegie Hall” performance – Wilson, a vocalist I listened to obsessively as a younger person, transforms a simple song between lovers into a rousing tribute from an artist to her audience. This performance is the most profoundly loving example I have witnessed of an artist speaking directly and forcefully to the mutuality between artists and audiences. And it’s become a kind of personal soundtrack when I’m walking through my life, especially my life as a poet and now as president. Often, when I’m among creatives, I hear Wilson’s gorgeous, gravelly voice imploring: “you don’t know how glad I am [for you].”

two people wearing black face masks work on a lighting project on a large white table

RISD students \\\ Photo: Jo Sittenfeld

4. Young Creatives

Listen, these young people at RISD and young creatives everywhere are our best-case scenario. They are our visionaries, if only we can amplify them, listen to them, and then get out of their way. They have all the love (and strategy and insight and knowledge) we need if we can help them wield it successfully. They have all the intelligence and ingenuity we need to help solve our challenges and advance what is good, right, and just among our species. Added to those attributes are other facts: they are funny and curious and eager to learn and gloriously unusual.

I watch them here at RISD in their multi-colored outfits, hair-dos, and platform shoes, giggling with each other in front of the snack machine or intensely applying their best thinking to each others’ work during critiques. I listen to them grappling with big ideas, considering, reconsidering, and redesigning our world as if on slant, eschewing the boxes into which we have crammed stale ideas that continue to guide our actions. And I watch them in their magnitude – in the more quotidian actions of their lives trudging up and down the severe hill outside with their humongous portfolios and unwieldy art projects, and think through it all, “Wow” and think “to be so young and so powerful and necessary” and think “thank God” and think “Thank you, young people, for saying yes to the impulse that brought you here.” Not only do they inspire me, they humble me and they – each one of them – feel like a balm, like hope incarnate.

brown-skinned man wearing a suit, light-skinned woman with dark hair wearing a patterned dress, and a brown-skinned baby girl in a white dress posing for a family portrait

Photo: Crystal Williams

5. My Parents

My folks married in 1967 against all odds. They were of different ethnicities – he Black, she white. Different places – he from the Jim Crow South, she from Detroit, Michigan. Different eras – he born in 1907, she in 1936. Different careers – he a jazz musician and automotive foundry worker, she a public school teacher. And different educational backgrounds – he, we think, not a high school graduate, she a college graduate. And yet, they found each other over the keys of a piano and decided, against society’s cruel eye and hard palm, to love each other and to love me. I now understand the courage it took for all of that to be true, for them to make a way, for them to walk through the world in 1967 as a couple and with me as their child. That courage inspires me. Those decisions inspire me. They inspire me. Every day. All day.

 

Work by Crystal Williams:

orange book cover reading Kin by Crystal Williams

Kin by Crystal Williams, 2000 \\\ Williams utilizes memory and music as she lyrically weaves her way through American culture, pointing to the ways in which alienation, loss, and sensed “otherness” are corollaries of recent phenomena.

red book cover reading Lunatic: Poems by Crystal Williams

Lunatic: Poems by Crystal Williams, 2002 \\\ Williams confronts large-scale social and cultural events such as September 11, the death of Amadou Diallo, and the Chicago Race Riots in addition to exploring the often paralyzing terrain of loss, desire, and displacement. Among its most common themes is personal responsibility.

white book cover with a photo of green plants that reads Troubled Tongues by Crystal Williams

Troubled Tongues by Crystal Williams, 2009 \\\ In each of the three sections of this book is a prose poem meant to be read aloud in which a character, interacting with other characters, is named for a quality. They are Beauty, Happiness, and Patience.

predominantly grey book cover reading Detroit as Barn: Poems by Crystal Williams

Detroit as Barn: Poems by Crystal Williams, 2014

This post contains affiliate links, so if you make a purchase from an affiliate link, we earn a commission. Thanks for supporting Design Milk!

Division Twelve’s Twigz Is Small in Stature, Big on Impact

High impact meets compact design in Division Twelve’s new Twigz café collection, created in collaboration with design duo Jones & de Leval. The furniture family’s throughline is a minimal frame with a small footprint, proving you don’t need visual heft to make a big impact. Twigz’s design details are ready to add plenty of interest to any small space, with both indoor and outdoor options available. Combine stackable chairs, benches, and tables to create a unique setup that’s all your own.

Twigz offers plenty of options to make it happen. Steel or upholstered chairs, round or rectangular tables, and 20 powder coat colors are your creative playground. The one thing you won’t have to deliberate is whether to play up form or function – Twigz does it all. Furthermore, the collection does so while being fully carbon neutral. Watch below to learn more about Twigz:

Stay Clear of the Door

An AI door, according to a generative AI

Written by David Lyreskog 

 

In what is quite possibly my last entry for the Practical Ethics blog, as I’m sadly leaving the Uehiro Centre in July, I would like to reflect on some things that have been stirring my mind the last year or so.

In particular, I have been thinking about thinking with machines, with people, and what the difference is.

The Uehiro Centre for Practical Ethics is located in an old carpet warehouse on an ordinary side street in Oxford. Facing the building, there is a gym to your left, and a pub to your right, mocking the researchers residing within the centre walls with a daily dilemma. 

As you are granted access to the building – be it via buzzer or key card – a dry, somewhat sad, voice states “stay clear of the door” before the door slowly swings open.

The other day a colleague of mine shared a YouTube video of the presentation The AI Dilemma, by Tristan Harris and Aza Raskin. In it, they share with the audience their concerns about the rapid and somewhat wild development of artificial intelligence (AI) in the hands of a few tech giants. I highly recommend it. (The video, that is. Not the rapid and somewhat wild development of AI in the hands of a few tech giants).

 

Much like the thousands of signatories of the March open call to “pause giant AI experiments”, and recently the “Godfather of AI” Geoffrey Hinton, Harris and Raskin warn us that we are on the brink of major (negative, dangerous) social disruption due to the power of new AI technologies.

 

Indeed, there’s been a bit of a public buzz about “AI ethics” in recent months.

 

While it is good that there is a general awareness and a public discussion about AI – or any majorly disruptive phenomenon for that matter – there’s a potential problem with the abstraction: AI is portrayed as this big, emerging, technological behemoth which we cannot or will not control. But it has been almost three decades since an AI was first able to beat the reigning human world champion at chess. We have been using AI for many things, from medical diagnosis to climate prediction, with little to no concern about it besting us and/or stripping us of agency in these domains. In other domains, such as driving cars and military applications of drones, there has been significantly more controversy.

All this is just to say that AI ethics is not for hedgehogs – it’s not “one big thing”[i] – and I believe that we need to actively avoid a narrative and a line of thinking which paints it to be. In examining the ethical dimensions of a multitude of AI inventions, then, we ought to take care to limit the scope of our inquiry to the domain in question at the very least.

 

So let us, for argument’s sake, return to that door at the Uehiro Centre, and the voice cautioning visitors to stay clear. Now, as far as I’m aware, the voice and the door are not part of an AI system. I also believe that there is no person who is tasked with waiting around for visitors asking for access, warning them of the impending door swing, and then manually opening the door. I believe it is a quite simple contraption, with a voice recording programmed to be played as the door opens. But does it make a difference to me, or other visitors, which of these possibilities is true?
We can call these possibilities:

Condition one (C1): AI door, created by humans.

Condition two (C2): Human speaker & door operator.

Condition three (C3): Automatic door & speaker, programmed by humans.
In C3, it seems that the outcome of the visitor’s action will always be the same after the buzzer is pushed or the key card is blipped: the voice will automatically say ‘stay clear of the door’, and the door will open. In C1 and C2, the same could be the case. But it could also be the case that the AI/human has been instructed to assess the risk for visitors on a case-by-case basis, and to only advise caution if there is an imminent risk of collision or such (were this the case, I would consistently be standing too close to the door when visiting, but that is beside the point).
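For illustration only, the difference between the conditions can be sketched in a few lines of Python. The function names, the distance parameter, and the 0.5-metre threshold are all invented for the example; nothing here is meant to model the actual door.

```python
def open_door_c3():
    """C3: automatic door and speaker -- the same scripted behaviour every time."""
    # The warning plays unconditionally, regardless of where the visitor stands.
    return ("Stay clear of the door", "door opens")

def open_door_c1(visitor_distance_m, danger_threshold_m=0.5):
    """C1 (or C2): an opener that assesses risk case by case.

    The 0.5 m danger threshold is a made-up parameter for illustration.
    """
    # Warn only when the visitor is judged to be at risk of collision.
    warning = ("Stay clear of the door"
               if visitor_distance_m < danger_threshold_m else None)
    return (warning, "door opens")

print(open_door_c3())                        # always warns
print(open_door_c1(visitor_distance_m=2.0))  # far away: no warning
print(open_door_c1(visitor_distance_m=0.2))  # too close: warns
```

The point of the contrast is that C3 executes a fixed script, while C1 and C2 produce their warning as the outcome of a real-time assessment.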
On the surface, I think there are some key differences between these conditions which could have an ethical or moral impact, where some differences are more interesting than others. In C1 and C2, the door opener makes a real-time assessment, rather than following a predetermined course of action in the way C3’s door opener does. More importantly, C2 is presumed to make this assessment from a place of concern, in a way which is impossible in C1 and C3 because the latter two are not moral agents, and therefore cannot be concerned. They simply do not have the capacity. And our inquiry could perhaps end here.

But it seems it would be a mistake.
What if something was to go wrong? Say the door swings open, but no voice warns me to stay clear, and so the door whacks me in the face[ii]. In C2, it seems the human whose job it is to warn me of the imminent danger might have done something morally wrong, assuming they knew what to expect from opening the door without warning me, but failed to do so due to negligence[iii]. In C1 and C3, on the other hand, while we may be upset with the door opener(s), we don’t believe that they did anything morally wrong – they just malfunctioned.
My colleague Alberto Giubilini highlighted the tensions in the morality of this landscape in what I thought was an excellent piece arguing that “It is not about AI, it is about humans”: we cannot trust AI, because trust is a relationship between moral agents, and AI does not (yet) have the capacity for moral agency and responsibility. We can, however, rely on AI to behave in a certain way (whether we should is a separate issue).
Similarly, while we may believe that a human should show concern for their fellow person, we should not expect the same from AIs, because they cannot be concerned.
Yet, if the automatic doors continue to whack visitors in the face, we may start feeling that someone should be responsible for this – not only legally, but morally: someone has a moral duty to ensure these doors are safe to pass through, right?
In doing so, we expand the field of inquiry, from the door opener to the programmer/constructor of the door opener, and perhaps to someone in charge of maintenance.
A couple of things pop to mind here.
First, when we find no immediate moral agent to hold responsible for a harmful event, we may expand the search field until we find one. That search seems to me to follow a systematic structure: if the door is automatic, we call the support line; if the support fails to fix the problem but turns out to be an AI itself, we turn to whoever is in charge of the support, and so on, until we find a moral agent.
Second, it seems to me that, if the door keeps slamming into visitors’ faces in C2, we will not only morally blame the door operator, but also whoever left them in charge of that door. So perhaps the systems-thinking does not only apply when there is a lack of moral agents, but also applies on a more general level when we are de facto dealing with complicated and/or complex systems of agents.
Third, let us conjure a condition four (C4) like so: the door is automatic, but in charge of maintenance support is an AI system that is usually very reliable, and in charge of the AI support system, in turn, is a (human) person.
If the person in charge of an AI support system that failed to provide adequate service to a faulty automatic door is to blame for anything, it is plausibly for not adequately maintaining the AI support system – but not for whacking people in the face with a door (because they didn’t do that). Yet, perhaps there is some form of moral responsibility for the face-whacking to be found within the system as a whole. That is, the compound of door, AI, human, etc. has a moral duty to avoid face-whacking, regardless of any individual moral agent’s ability to whack faces.
If this is correct, it seems to me that we again[iv] find that our traditional means of ascribing moral responsibility fails to capture key aspects of moral life: it is not the case that any agent is individually morally responsible for the face-whacking door, nor are there multiple agents who are individually or collectively responsible for the face-whacking door. Yet, there seems to be moral responsibility for face-whacking doors in the system. Where does it come from, and what is its nature and structure (if it has one)?
In this way, it seems that not only cognitive processes such as thinking and computing can be distributed throughout systems, but perhaps also moral capacities such as concern, accountability, and responsibility.

And in the end, I do not know to what extent it actually matters, at least in this specific domain. Because at the end of the day, I do not care much whether the door opener is human, an AI, or automatic.
I just need to know whether or not I need to stay clear of the door.

Notes & References.

[i] Berlin, I. (2013). The hedgehog and the fox: An essay on Tolstoy’s view of history. Princeton University Press.

[ii] I would like to emphasize that this is a completely hypothetical case, and that I take it to be safe to enter the Uehiro centre. The risk of face-whacking is, in my experience, minimal.

[iii] Let’s give them the benefit of the doubt here, and assume it wasn’t maleficence.

[iv] Together with Hazem Zohny, Julian Savulescu, and Ilina Singh, I have previously argued this to be the case in the domain of emerging technologies for collective thinking and decision-making, such as brain-to-brain interfaces. See the Open Access paper Merging Minds for more on this argument.

“Lying” in computer-generated texts: hallucinations and omissions
There is huge excitement about ChatGPT and other large generative language models that produce fluent and human-like texts in English and other human languages. But these models have one big drawback, which is that their texts can be factually incorrect (hallucination) and also leave out key information (omission).

In our chapter for The Oxford Handbook of Lying, we look at hallucinations, omissions, and other aspects of “lying” in computer-generated texts. We conclude that these problems are probably inevitable.

Omissions are inevitable because a computer system cannot cram all possibly-relevant information into a text that is short enough to be actually read. In the context of summarising medical information for doctors, for example, the computer system has access to a huge amount of patient data, but it does not know (and arguably cannot know) what will be most relevant to doctors.

Hallucinations are inevitable because of flaws in computer systems, regardless of the type of system. Systems which are explicitly programmed will suffer from software bugs (like all software systems). Systems which are trained on data, such as ChatGPT and other systems in the Deep Learning tradition, “hallucinate” even more. This happens for a variety of reasons. Perhaps most obviously, these systems suffer from flawed data (e.g., any system which learns from the Internet will be exposed to a lot of false information about vaccines, conspiracy theories, etc.). And even if a data-oriented system could be trained solely on bona fide texts that contain no falsehoods, its reliance on probabilistic methods will mean that word combinations that are very common on the Internet may also be produced in situations where they result in false information.

Suppose, for example, that on the Internet the word “coughing” is often followed by “and sneezing”. Then a patient may be described falsely, by a data-oriented system, as “coughing and sneezing” in situations where they cough without sneezing. Problems of this kind are an important focus for researchers working on generative language models. Where this research will lead us is still uncertain; the best one can say is that we can try to reduce the impact of these issues, but we have no idea how to eliminate them completely.
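The mechanism can be illustrated with a toy bigram model, a deliberately simplified stand-in for a large language model. The tiny corpus and the function names are invented for the example; real systems use vastly larger corpora and more sophisticated probability estimates, but the failure mode is the same: the statistically common continuation wins, whether or not it is true of the patient at hand.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for web-scale training data (hypothetical).
corpus = (
    "the patient was coughing and sneezing . "
    "she was coughing and sneezing all day . "
    "he kept coughing and sneezing loudly . "
    "the patient was coughing badly ."
).split()

# Count bigram frequencies: how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Greedy next-word choice by raw corpus frequency."""
    return follows[word].most_common(1)[0][0]

# Even for a patient who only coughs, the model continues the
# statistically common pattern, asserting a symptom never observed.
print(most_likely_next("coughing"))  # "and"
print(most_likely_next("and"))       # "sneezing"
```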

“Large generative language models’ texts can be factually incorrect (hallucination) and leave out key information (omission).”

The above focuses on unintentional-but-unavoidable problems. There are also cases where a computer system arguably should hallucinate or omit information. An obvious example is generating marketing material, where omitting negative information about a product is expected. A more subtle example, which we have seen in our own work, is when information is potentially harmful and it is in users’ best interests to hide or distort it. For example, if a computer system is summarising information about sick babies for friends and family members, it probably should not tell an elderly grandmother with a heart condition that the baby may die, since this could trigger a heart attack.

Now that the factual accuracy of computer-generated text draws so much attention from society as a whole, the research community is starting to realize more clearly than before that we only have a limited understanding of what it means to speak the truth. In particular, we do not know how to measure the extent of (un)truthfulness in a given text.

To see what we mean, suppose two different language models answer a user’s question in two different ways, by generating two different answer texts. To compare these systems’ performance, we would need a “score card” that allowed us to objectively score the two texts as regards their factual correctness, using a variety of rubrics. Such a score card would allow us to record how often each type of error occurs in a given text, and aggregate the result into an overall truthfulness score for that text. Of particular importance would be the weighing of errors: large errors (e.g., a temperature reading that is very far from the actual temperature) should weigh more heavily than small ones, key facts should weigh more heavily than side issues, and errors that are genuinely misleading should weigh more heavily than typos that readers can correct by themselves. Essentially, the score card would work like a fair school teacher who marks pupils’ papers.
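As a concrete illustration of what such a score card might look like, here is a minimal Python sketch. The error categories, their weights, and the penalty cap are invented for the example; they are not part of any published evaluation protocol, and choosing them well is precisely the open research problem described above.

```python
# Hypothetical error categories and weights: misleading errors weigh
# more heavily than typos that readers can correct by themselves.
ERROR_WEIGHTS = {
    "large_numeric": 5.0,  # e.g. a temperature far from the actual reading
    "key_fact": 4.0,       # a central fact reported wrongly
    "side_issue": 1.0,     # a peripheral detail reported wrongly
    "typo": 0.1,           # self-correctable by the reader
}

def truthfulness_score(errors, max_penalty=20.0):
    """Aggregate annotated errors into a 0-100 truthfulness score.

    `errors` is a list of category names, one per error that a human
    evaluator found in the generated text.
    """
    penalty = sum(ERROR_WEIGHTS[e] for e in errors)
    penalty = min(penalty, max_penalty)  # cap so the score stays in range
    return round(100.0 * (1.0 - penalty / max_penalty), 2)

# Two candidate answers to the same question:
text_a = ["typo", "side_issue"]         # minor problems only
text_b = ["large_numeric", "key_fact"]  # genuinely misleading errors
print(truthfulness_score(text_a))  # 94.5
print(truthfulness_score(text_b))  # 55.0
```

The scoring function plays the role of the “fair school teacher”: the same list of annotated errors always maps to the same mark, and larger or more central errors pull the mark down further.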

We have developed protocols for human evaluators to find factual errors in generated texts, as have other researchers, but we cannot yet create a score card as described above because we cannot assess the impact of individual errors.

What is needed, we believe, is a new strand of linguistically informed research, to tease out all the different parameters of “lying” in a manner that can inform the above-mentioned score cards, and that may one day be implemented into a reliable fact-checking protocol or algorithm. Until that time, those of us who are trying to assess the truthfulness of ChatGPT will be groping in the dark.

Featured image by Google DeepMind Via Unsplash (public domain)

OUPblog - Academic insights for the thinking world.
