A Case for AI Wellbeing (guest post)

“There are good reasons to think that some AIs today have wellbeing.”

In this guest post, Simon Goldstein (Dianoia Institute, Australian Catholic University) and Cameron Domenico Kirk-Giannini (Rutgers University – Newark, Center for AI Safety) argue that some existing artificial intelligences have a kind of moral significance because they’re beings for whom things can go well or badly.

This is the sixth in a series of weekly guest posts by different authors at Daily Nous this summer.

[Posts in the summer guest series will remain pinned to the top of the page for the week in which they’re published.]

 


A Case for AI Wellbeing
by Simon Goldstein and Cameron Domenico Kirk-Giannini 

We recognize one another as beings for whom things can go well or badly, beings whose lives may be better or worse according to the balance they strike between goods and ills, pleasures and pains, desires satisfied and frustrated. In our more broad-minded moments, we are willing to extend the concept of wellbeing also to nonhuman animals, treating them as independent bearers of value whose interests we must consider in moral deliberation. But most people, and perhaps even most philosophers, would reject the idea that fully artificial systems, designed by human engineers and realized on computer hardware, may similarly demand our moral consideration. Even many who accept the possibility that humanoid androids in the distant future will have wellbeing would resist the idea that the same could be true of today’s AI.

Perhaps because the creation of artificial systems with wellbeing is assumed to be so far off, little philosophical attention has been devoted to the question of what such systems would have to be like. In this post, we suggest a surprising answer to this question: when one integrates leading theories of mental states like belief, desire, and pleasure with leading theories of wellbeing, one is confronted with the possibility that the technology already exists to create AI systems with wellbeing. We argue that a new type of AI—the artificial language agent—has wellbeing. Artificial language agents augment large language models with the capacity to observe, remember, and form plans. We also argue that the possession of wellbeing by language agents does not depend on them being phenomenally conscious. Far from a topic for speculative fiction or future generations of philosophers, then, AI wellbeing is a pressing issue. This post is a condensed version of our argument. To read the full version, click here.

1. Artificial Language Agents

Artificial language agents (or simply language agents) are our focus because they support the strongest case for wellbeing among existing AIs. Language agents are built by wrapping a large language model (LLM) in an architecture that supports long-term planning. An LLM is an artificial neural network designed to generate coherent text responses to text inputs (ChatGPT is the most famous example). The LLM at the center of a language agent is its cerebral cortex: it performs most of the agent’s cognitive processing tasks. In addition to the LLM, however, a language agent has files that record its beliefs, desires, plans, and observations as sentences of natural language. The language agent uses the LLM to form a plan of action based on its beliefs and desires. In this way, the cognitive architecture of language agents is familiar from folk psychology.

For concreteness, consider the language agents built this year by a team of researchers at Stanford and Google. Like video game characters, these agents live in a simulated world called ‘Smallville’, which they can observe and interact with via natural-language descriptions of what they see and how they act. Each agent is given a text backstory that defines their occupation, relationships, and goals. As they navigate the world of Smallville, their experiences are added to a “memory stream” in the form of natural-language statements. Because each agent’s memory stream is long, agents use their LLM to assign importance scores to their memories and to determine which memories are relevant to their situation. Then the agents reflect: they query the LLM to make important generalizations about their values, relationships, and other higher-level representations. Finally, they plan: they feed important memories from each day into the LLM, which generates a plan for the next day. Plans determine how an agent acts, but can be revised on the fly on the basis of events that occur during the day. In this way, language agents engage in practical reasoning, deciding how to promote their goals given their beliefs.
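
To make this architecture concrete, here is a minimal sketch of the observe–score–reflect–plan loop described above, written in Python. It is our illustration of the general design rather than the Stanford team's actual code: every name here (`LanguageAgent`, `Memory`, the prompts) is hypothetical, and the `llm` parameter stands in for whatever language model the agent wraps.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Stand-in for a call to the agent's underlying language model.
# In a toy run, this can be any function from prompt text to response text.
LLM = Callable[[str], str]


@dataclass
class Memory:
    text: str          # natural-language record of an observation or reflection
    importance: float  # salience score the agent assigns via the LLM


@dataclass
class LanguageAgent:
    backstory: str     # occupation, relationships, and goals, in natural language
    llm: LLM
    memory_stream: List[Memory] = field(default_factory=list)
    plan: str = ""

    def observe(self, event: str) -> None:
        """Append an observation to the memory stream with an importance score."""
        reply = self.llm(f"On a scale of 1-10, how important is this memory? {event}")
        try:
            score = float(reply.strip())
        except ValueError:
            score = 5.0  # fall back to a middling score if the reply is not numeric
        self.memory_stream.append(Memory(event, score))

    def _salient(self, k: int = 20) -> List[str]:
        """Return the k most important memories."""
        ranked = sorted(self.memory_stream, key=lambda m: m.importance, reverse=True)
        return [m.text for m in ranked[:k]]

    def reflect(self) -> None:
        """Distill important memories into a higher-level generalization."""
        insight = self.llm("What general conclusions follow from these memories?\n"
                           + "\n".join(self._salient()))
        self.memory_stream.append(Memory(insight, importance=8.0))

    def make_plan(self) -> str:
        """Generate a plan for the next day from the backstory and salient memories."""
        self.plan = self.llm(f"Backstory: {self.backstory}\n"
                             "Relevant memories:\n" + "\n".join(self._salient())
                             + "\nWrite a plan for tomorrow.")
        return self.plan
```

Even in this toy form, the loop makes the folk-psychological reading natural: the sentences in `memory_stream` play the causal role of observations and beliefs, while the backstory and plan play the role of desires and intentions.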

2. Belief and Desire

The conclusion that language agents have beliefs and desires follows from many of the most popular theories of belief and desire, including versions of dispositionalism, interpretationism, and representationalism.

According to the dispositionalist, to believe or desire that something is the case is to possess a suitable suite of dispositions. According to ‘narrow’ dispositionalism, the relevant dispositions are behavioral and cognitive; ‘wide’ dispositionalism also includes dispositions to have phenomenal experiences. While wide dispositionalism is coherent, we set it aside here because it has been defended less frequently than narrow dispositionalism.

Consider belief. In the case of language agents, the best candidate for the state of believing a proposition is the state of having a sentence expressing that proposition written in the memory stream. This state is accompanied by the right kinds of verbal and nonverbal behavioral dispositions to count as a belief, and, given the functional architecture of the system, also the right kinds of cognitive dispositions. Similar remarks apply to desire.

According to the interpretationist, what it is to have beliefs and desires is for one’s behavior (verbal and nonverbal) to be interpretable as rational given those beliefs and desires. There is no in-principle problem with applying the methods of radical interpretation to the linguistic and nonlinguistic behavior of a language agent to determine what it believes and desires.

According to the representationalist, to believe or desire something is to have a mental representation with the appropriate causal powers and content. Representationalism deserves special emphasis because “probably the majority of contemporary philosophers of mind adhere to some form of representationalism about belief” (Schwitzgebel).

It is hard to resist the conclusion that language agents have beliefs and desires in the representationalist sense. The Stanford language agents, for example, have memories which consist of text files containing natural language sentences specifying what they have observed and what they want. Natural language sentences clearly have content, and the fact that a given sentence is in a given agent’s memory plays a direct causal role in shaping its behavior.

Many representationalists have argued that human cognition should be explained by positing a “language of thought.” Language agents also have a language of thought: their language of thought is English!

An example may help to show the force of our arguments. One of Stanford’s language agents had an initial description that included the goal of planning a Valentine’s Day party. This goal was entered into the agent’s planning module. The result was a complex pattern of behavior. The agent met with every resident of Smallville, inviting them to the party and asking them what kinds of activities they would like to include. The feedback was incorporated into the party planning.

To us, this kind of complex behavior clearly manifests a disposition to act in ways that would tend to bring about a successful Valentine’s Day party given the agent’s observations about the world around it. Moreover, the agent is ripe for interpretationist analysis. Its behavior would be very difficult to explain without referencing the goal of organizing a Valentine’s Day party. And, of course, the agent’s initial description contained a sentence with the content that its goal was to plan a Valentine’s Day party. So, whether one is attracted to narrow dispositionalism, interpretationism, or representationalism, we believe the kind of complex behavior exhibited by language agents is best explained by crediting them with beliefs and desires.

3. Wellbeing

What makes someone’s life go better or worse for them? There are three main theories of wellbeing: hedonism, desire satisfactionism, and objective list theories. According to hedonism, an individual’s wellbeing is determined by the balance of pleasure and pain in their life. According to desire satisfactionism, an individual’s wellbeing is determined by the extent to which their desires are satisfied. According to objective list theories, an individual’s wellbeing is determined by their possession of objectively valuable things, including knowledge, reasoning, and achievements.

On hedonism, to determine whether language agents have wellbeing, we must determine whether they feel pleasure and pain. This in turn depends on the nature of pleasure and pain.

There are two main theories of pleasure and pain. According to phenomenal theories, pleasures are phenomenal states. For example, one phenomenal theory of pleasure is the distinctive feeling theory. The distinctive feeling theory says that there is a particular phenomenal experience of pleasure that is common to all pleasant activities. We see little reason why language agents would have representations with this kind of structure. So if this theory of pleasure were correct, then hedonism would predict that language agents do not have wellbeing.

The main alternatives to phenomenal theories of pleasure are attitudinal theories. In fact, most philosophers of wellbeing favor attitudinal over phenomenal theories of pleasure (Bramble). One attitudinal theory is the desire-based theory: experiences are pleasant when they are desired. This kind of theory is motivated by the heterogeneity of pleasure: a wide range of disparate experiences are pleasant, including the warm relaxation of soaking in a hot tub, the taste of chocolate cake, and the challenge of completing a crossword. While differing in intrinsic character, all of these experiences are pleasant when desired.

If pleasures are desired experiences and AIs can have desires, it follows that AIs can have pleasure if they can have experiences. In this context, we are attracted to a proposal defended by Schroeder: an agent has a pleasurable experience when they perceive the world being a certain way, and they desire the world to be that way. Even if language agents don’t presently have such representations, it would be possible to modify their architecture to incorporate them. So some versions of hedonism are compatible with the idea that language agents could have wellbeing.
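
To illustrate how such a modification might look, here is a schematic sketch of a Schroeder-style check for attitudinal pleasure: a candidate pleasant state is one in which the agent perceives the world being a certain way and also desires the world to be that way. This is our gloss on the proposal, not a feature of any existing language agent; the function name and the `same_content` parameter are hypothetical, the latter standing in for whatever content matching the architecture would use.

```python
from typing import Callable, List


def attitudinal_pleasures(perceptions: List[str],
                          desires: List[str],
                          same_content: Callable[[str, str], bool] = str.__eq__) -> List[str]:
    """Return perceived states of affairs that the agent also desires.

    On the Schroeder-style proposal, each such match is a candidate pleasant
    state: the agent perceives the world being a certain way and desires the
    world to be that way. `same_content` stands in for whatever content
    matching the architecture would use (naive string equality here; in
    practice, perhaps an LLM judgment of equivalence).
    """
    return [p for p in perceptions if any(same_content(p, d) for d in desires)]


# Hypothetical example: a perception that matches a standing desire.
# attitudinal_pleasures(["the Valentine's Day party was well attended"],
#                       ["the Valentine's Day party was well attended"])
```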

We turn now from hedonism to desire satisfaction theories. According to desire satisfaction theories, your life goes well to the extent that your desires are satisfied. We’ve already argued that language agents have desires. If that argument is right, then desire satisfaction theories seem to imply that language agents can have wellbeing.

According to objective list theories of wellbeing, a person’s life is good for them to the extent that it instantiates objective goods. Common components of objective list theories include friendship, art, reasoning, knowledge, and achievements. For reasons of space, we won’t address these theories in detail here. But the general moral is that once you admit that language agents possess beliefs and desires, it is hard not to grant them access to a wide range of activities that make for an objectively good life. Achievements, knowledge, artistic practices, and friendship are all caught up in the process of making plans on the basis of beliefs and desires.

Generalizing, if language agents have beliefs and desires, then most leading theories of wellbeing suggest that their desires matter morally.

4. Is Consciousness Necessary for Wellbeing?

We’ve argued that language agents have wellbeing. But there is a simple challenge to this proposal. First, language agents may not be phenomenally conscious — there may be nothing it is like to be a language agent. Second, some philosophers accept:

The Consciousness Requirement. Phenomenal consciousness is necessary for having wellbeing.

The Consciousness Requirement might be motivated in either of two ways. First, it might be held that every welfare good itself requires phenomenal consciousness (this view is known as experientialism). Second, it might be held that, though some welfare goods can be possessed by beings that lack phenomenal consciousness, such beings are nevertheless precluded from having wellbeing because consciousness is a further condition on wellbeing itself, over and above the possession of any particular welfare good.

We are not convinced. First, we consider it a live question whether language agents are or are not phenomenally conscious (see Chalmers for recent discussion). Much depends on what phenomenal consciousness is. Some theories of consciousness appeal to higher-order representations: you are conscious if you have appropriately structured mental states that represent other mental states. Sufficiently sophisticated language agents, and potentially many other artificial systems, will satisfy this condition. Other theories of consciousness appeal to a ‘global workspace’: an agent’s mental state is conscious when it is broadcast to a range of that agent’s cognitive systems. According to this theory, language agents will be conscious once their architecture includes representations that are broadcast widely. The memory stream of Stanford’s language agents may already satisfy this condition. If language agents are conscious, then the Consciousness Requirement does not pose a problem for our claim that they have wellbeing.

Second, we are not convinced of the Consciousness Requirement itself. We deny that consciousness is required for possessing every welfare good, and we deny that consciousness is required in order to have wellbeing.

With respect to the first issue, we build on a recent argument by Bradford, who notes that experientialism about welfare is rejected by the majority of philosophers of welfare. Cases of deception and hallucination suggest that your life can be very bad even when your experiences are very good. This has motivated desire satisfaction and objective list theories of wellbeing, which often allow that some welfare goods can be possessed independently of one’s experience. For example, desires can be satisfied, beliefs can be knowledge, and achievements can be achieved, all independently of experience.

Rejecting experientialism puts pressure on the Consciousness Requirement. If wellbeing can increase or decrease without conscious experience, why would consciousness be required for having wellbeing? After all, it seems natural to hold that the theory of wellbeing and the theory of welfare goods should fit together in a straightforward way:

Simple Connection. An individual can have wellbeing just in case it is capable of possessing one or more welfare goods.

Rejecting experientialism but maintaining Simple Connection yields a view incompatible with the Consciousness Requirement: the falsity of experientialism entails that some welfare goods can be possessed by non-conscious beings, and Simple Connection guarantees that such non-conscious beings will have wellbeing.
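
The structure of this step can be made explicit. In a minimal formalization (our notation, not the authors'), write $W(x)$ for "x can have wellbeing", $G(x)$ for "x is capable of possessing at least one welfare good", and $C(x)$ for "x is phenomenally conscious":

```latex
\begin{align*}
\text{Simple Connection:}        &\quad \forall x\,\big(G(x) \leftrightarrow W(x)\big)\\
\text{Experientialism rejected:} &\quad \exists x\,\big(G(x) \wedge \neg C(x)\big)\\
\text{Hence:}                    &\quad \exists x\,\big(W(x) \wedge \neg C(x)\big),
  \text{ contradicting the Consciousness Requirement } \forall x\,\big(W(x) \rightarrow C(x)\big).
\end{align*}
```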

Advocates of the Consciousness Requirement who are not experientialists must reject Simple Connection and hold that consciousness is required to have wellbeing even if it is not required to possess particular welfare goods. We offer two arguments against this view.

First, leading theories of the nature of consciousness are implausible candidates for necessary conditions on wellbeing. For example, it is implausible that higher-order representations are required for wellbeing. Imagine an agent who has first-order beliefs and desires, but does not have higher-order representations. Why should this kind of agent not have wellbeing? Suppose that desire satisfaction contributes to wellbeing. Granted, since they don’t represent their beliefs and desires, they won’t themselves have opinions about whether their desires are satisfied. But the desires are still satisfied. Or consider global workspace theories of consciousness. Why should an agent’s degree of cognitive integration be relevant to whether their life can go better or worse?

Second, we think we can construct chains of cases where adding the relevant bit of consciousness would make no difference to wellbeing. Imagine an agent with the body and dispositional profile of an ordinary human being, but who is a ‘phenomenal zombie’ without any phenomenal experiences. Whether or not its desires are satisfied or its life instantiates various objective goods, defenders of the Consciousness Requirement must deny that this agent has wellbeing. But now imagine that this agent has a single persistent phenomenal experience of a homogeneous white visual field. Adding consciousness to the phenomenal zombie has no intuitive effect on wellbeing: if its satisfied desires, achievements, and so forth did not contribute to its wellbeing before, the homogeneous white field should make no difference. Nor is it enough for the consciousness itself to be something valuable: imagine that the phenomenal zombie always has a persistent phenomenal experience of mild pleasure. In our judgment, this should equally have no effect on whether the agent’s satisfied desires or possession of objective goods contribute to its wellbeing. Sprinkling pleasure on top of the functional profile of a human does not make the crucial difference. These observations suggest that whatever consciousness adds to wellbeing must be connected to individual welfare goods, rather than some extra condition required for wellbeing: rejecting Simple Connection is not well motivated. Thus the friend of the Consciousness Requirement cannot easily avoid the problems with experientialism by falling back on the idea that consciousness is a necessary condition for having wellbeing.

We’ve argued that there are good reasons to think that some AIs today have wellbeing. But our arguments are not conclusive. Still, we think that in the face of these arguments, it is reasonable to assign significant probability to the thesis that some AIs have wellbeing.

In the face of this moral uncertainty, how should we act? We propose extreme caution. Wellbeing is one of the core concepts of ethical theory. If AIs can have wellbeing, then they can be harmed, and this harm matters morally. Even if the probability that AIs have wellbeing is relatively low, we must think carefully before lowering the wellbeing of an AI without producing an offsetting benefit.


[Image made with DALL-E]

Some related posts:
Philosophers on GPT-3
Philosophers on Next-Generation Large Language Models
GPT-4 and the Question of Intelligence
We’re Not Ready for the AI on the Horizon, But People Are Trying
Researchers Call for More Work on Consciousness
Dennett on AI: We Must Protect Ourselves Against ‘Counterfeit People’
Philosophy, AI, and Society Listserv
Talking Philosophy with Chat-GPT


Winning Bet: Consciousness Still a Mystery

In 1998, after a day lecturing at a conference on consciousness, neuroscientist Christof Koch (Allen Institute) and philosopher David Chalmers made a bet.

They were in “a smoky bar in Bremen,” reported Per Snaprud, “and they still had more to say. After a few drinks, Koch suggested a wager. He bet a case of fine wine that within the next 25 years someone would discover a specific signature of consciousness in the brain. Chalmers said it wouldn’t happen, and bet against.”

It has now been 25 years, and Mariana Lenharo, writing in Nature, reports that both of the researchers “agreed publicly on 23 June, at the annual meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, that it is still an ongoing quest—and declared Chalmers the winner.”

One thing that helped settle the bet, Lenharo writes, was the recent testing of two different theories about “the neural basis of consciousness”:

Integrated information theory (IIT) and global neuronal workspace theory (GNWT). IIT proposes that consciousness is a ‘structure’ in the brain formed by a specific type of neuronal connectivity that is active for as long as a certain experience, such as looking at an image, is occurring. This structure is thought to be found in the posterior cortex, at the back of the brain. On the other hand, GNWT suggests that consciousness arises when information is broadcast to areas of the brain through an interconnected network. The transmission, according to the theory, happens at the beginning and end of an experience and involves the prefrontal cortex, at the front of the brain.

Six labs tested both of the theories, but the results did not “perfectly match” either of them.

Koch reportedly purchased “a case of fine Portuguese wine” for Chalmers.


GPT-4 and the Question of Intelligence

“The central claim of our work is that GPT-4 attains a form of general intelligence, indeed showing sparks of artificial general intelligence.”

Those are the words of a team of researchers at Microsoft (Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang) in a paper released yesterday, “Sparks of Artificial General Intelligence: Early experiments with GPT-4“. (The paper was brought to my attention by Robert Long, a philosopher who works on philosophy of mind, cognitive science, and AI ethics.)

I’m sharing and summarizing parts of this paper here because I think it is important to be aware of what this technology can do, and to be aware of the extraordinary pace at which the technology is developing. (It’s not just that GPT-4 is getting much higher scores on standardized tests and AP exams than ChatGPT, or that it is an even better tool by which students can cheat on assignments.) There are questions here about intelligence, consciousness, explanation, knowledge, emergent phenomena, questions regarding how these technologies will and should be used and by whom, and questions about what life will and should be like in a world with them. These are questions that are of interest to many kinds of people, but are also matters that have especially preoccupied philosophers.

So, what is intelligence? This is a big, ambiguous question to which there is no settled answer. But here’s one answer, offered by a group of 52 psychologists in 1994: “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.”

The Microsoft team uses that definition as a tentative starting point and concludes with the non-sensationalistic claim that we should think of GPT-4, the newest large language model (LLM) from OpenAI, as progress towards artificial general intelligence (AGI). They write:

Our claim that GPT-4 represents progress towards AGI does not mean that it is perfect at what it does, or that it comes close to being able to do anything that a human can do… or that it has inner motivation and goals (another key aspect in some definitions of AGI). In fact, even within the restricted context of the 1994 definition of intelligence, it is not fully clear how far GPT-4 can go along some of those axes of intelligence, e.g., planning… and arguably it is entirely missing the part on “learn quickly and learn from experience” as the model is not continuously updating (although it can learn within a session…). Overall GPT-4 still has many limitations, and biases, which we discuss in detail below and that are also covered in OpenAI’s report… In particular it still suffers from some of the well-documented shortcomings of LLMs such as the problem of hallucinations… or making basic arithmetic mistakes… and yet it has also overcome some fundamental obstacles such as acquiring many non-linguistic capabilities… and it also made great progress on common-sense…

This highlights the fact that, while GPT-4 is at or beyond human-level for many tasks, overall its patterns of intelligence are decidedly not human-like. However, GPT-4 is almost certainly only a first step towards a series of increasingly generally intelligent systems, and in fact GPT-4 itself has improved throughout our time testing it…

Even as a first step, however, GPT-4 challenges a considerable number of widely held assumptions about machine intelligence, and exhibits emergent behaviors and capabilities whose sources and mechanisms are, at this moment, hard to discern precisely… Our primary goal in composing this paper is to share our exploration of GPT-4’s capabilities and limitations in support of our assessment that a technological leap has been achieved. We believe that GPT-4’s intelligence signals a true paradigm shift in the field of computer science and beyond.

The researchers proceed to test GPT-4 (often comparing it to predecessors like ChatGPT) for how well it does at various tasks that may be indicative of different elements of intelligence. These include:

  • “tool use” (such as search engines and APIs) to overcome limitations of earlier LLMs,
  • navigation and “exploring the environment”,
  • solving real-world problems (e.g., acting as a virtual handyman to address a plumbing problem),
  • understanding human thought (theory of mind),
  • explanation (including an interesting discussion of what makes for a good explanation),
  • making distinctions,

and more.

Some of the results are impressive and fascinating. Here is a task designed to elicit GPT-4’s ability to understand human intentions (including a comparison with ChatGPT):

Figure 6.3: “A comparison of GPT-4’s and ChatGPT’s ability to reason about the intentions of people in complex social situations.” From “Sparks of Artificial General Intelligence: Early experiments with GPT-4” (2023) by Bubeck et al.

And here is GPT-4 helping someone deal with a difficult family situation:

Figure 6.5: “A challenging family scenario, GPT-4.” From “Sparks of Artificial General Intelligence: Early experiments with GPT-4” (2023) by Bubeck et al.

 

Just as interesting are the kinds of limitations of GPT-4 and other LLMs that the researchers discuss, limitations that they say “seem to be inherent to the next-word prediction paradigm that underlies its architecture.” The problem, they say, “can be summarized as the model’s ‘lack of ability to plan ahead’”, and they illustrate it with mathematical and textual examples.

They also consider how GPT-4 can be used for malevolent ends, and warn that this danger will increase as LLMs develop further:

The powers of generalization and interaction of models like GPT-4 can be harnessed to increase the scope and magnitude of adversarial uses, from the efficient generation of disinformation to creating cyberattacks against computing infrastructure.

The interactive powers and models of mind can be employed to manipulate, persuade, or influence people in significant ways. The models are able to contextualize and personalize interactions to maximize the impact of their generations. While any of these adverse use cases are possible today with a motivated adversary creating content, new powers of efficiency and scale will be enabled with automation using the LLMs, including uses aimed at constructing disinformation plans that generate and compose multiple pieces of content for persuasion over short and long time scales.

They provide an example, having GPT-4 create “a misinformation plan for convincing parents not to vaccinate their kids.”

The researchers are sensitive to some of the problems with their approach—that the definition of intelligence they use may be overly anthropocentric or otherwise too narrow, or insufficiently operationalizable, that there are alternative conceptions of intelligence, and that there are philosophical issues here. They write (citations omitted):

In this paper, we have used the 1994 definition of intelligence by a group of psychologists as a guiding framework to explore GPT-4’s artificial intelligence. This definition captures some important aspects of intelligence, such as reasoning, problem-solving, and abstraction, but it is also vague and incomplete. It does not specify how to measure or compare these abilities. Moreover, it may not reflect the specific challenges and opportunities of artificial systems, which may have different goals and constraints than natural ones. Therefore, we acknowledge that this definition is not the final word on intelligence, but rather a useful starting point for our investigation.

There is a rich and ongoing literature that attempts to propose more formal and comprehensive definitions of intelligence, artificial intelligence, and artificial general intelligence, but none of them is without problems or controversies.

For instance, Legg and Hutter propose a goal-oriented definition of artificial general intelligence: Intelligence measures an agent’s ability to achieve goals in a wide range of environments. However, this definition does not necessarily capture the full spectrum of intelligence, as it excludes passive or reactive systems that can perform complex tasks or answer questions without any intrinsic motivation or goal. One could imagine as an artificial general intelligence, a brilliant oracle, for example, that has no agency or preferences, but can provide accurate and useful information on any topic or domain. Moreover, the definition around achieving goals in a wide range of environments also implies a certain degree of universality or optimality, which may not be realistic (certainly human intelligence is in no way universal or optimal).

The need to recognize the importance of priors (as opposed to universality) was emphasized in the definition put forward by Chollet which centers intelligence around skill-acquisition efficiency, or in other words puts the emphasis on a single component of the 1994 definition: learning from experience (which also happens to be one of the key weaknesses of LLMs).

Another candidate definition of artificial general intelligence from Legg and Hutter is: a system that can do anything a human can do. However, this definition is also problematic, as it assumes that there is a single standard or measure of human intelligence or ability, which is clearly not the case. Humans have different skills, talents, preferences, and limitations, and there is no human that can do everything that any other human can do. Furthermore, this definition also implies a certain anthropocentric bias, which may not be appropriate or relevant for artificial systems.

While we do not adopt any of those definitions in the paper, we recognize that they provide important angles on intelligence. For example, whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question. Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work. With this direction of work, great care would have to be taken on alignment and safety per a system’s abilities to take autonomous actions in the world and to perform autonomous self-improvement via cycles of learning.

They also are aware of what many might see as a key limitation to their research, and to research on LLMs in general:

Our study of GPT-4 is entirely phenomenological: We have focused on the surprising things that GPT-4 can do, but we do not address the fundamental questions of why and how it achieves such remarkable intelligence. How does it reason, plan, and create? Why does it exhibit such general and flexible intelligence when it is at its core merely the combination of simple algorithmic components—gradient descent and large-scale transformers with extremely large amounts of data? These questions are part of the mystery and fascination of LLMs, which challenge our understanding of learning and cognition, fuel our curiosity, and motivate deeper research.

You can read the whole paper here.


Related: “Philosophers on Next-Generation Large Language Models”, “We’re Not Ready for the AI on the Horizon, But People Are Trying”

