ChatGPT gets “eyes and ears” with plugins that can interface AI with the world

An illustration of an eyeball

(credit: Aurich Lawson | Getty Images)

On Thursday, OpenAI announced a plugin system for its ChatGPT AI assistant. The plugins give ChatGPT the ability to interact with the wider world through the Internet, including booking flights, ordering groceries, browsing the web, and more. Plugins are bits of code that tell ChatGPT how to use an external resource on the Internet.

Basically, if a developer wants to give ChatGPT the ability to access any network service (for example: "looking up current stock prices") or perform any task controlled by a network service (for example: "ordering pizza through the Internet"), it is now possible, provided it doesn't go against OpenAI's rules.
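
To make the stock-price example concrete, here is a minimal sketch in Python (using Flask) of what such a plugin's developer-side service might look like: a small web API plus a manifest that describes the resource to ChatGPT in machine-readable form. The endpoint paths, manifest fields, and hard-coded prices below are assumptions for illustration only, not OpenAI's published plugin schema.

# Hypothetical sketch of a plugin backend: a tiny HTTP service that
# returns stock quotes, plus a machine-readable manifest the assistant
# could fetch to learn how to call it. All names and fields here are
# illustrative assumptions, not OpenAI's exact plugin specification.
from flask import Flask, jsonify

app = Flask(__name__)

# Assumed manifest describing the service to the model.
PLUGIN_MANIFEST = {
    "name_for_model": "stock_quotes",
    "description_for_model": "Look up the latest price for a ticker symbol.",
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
}

@app.get("/.well-known/ai-plugin.json")
def manifest():
    # The assistant reads this to discover what the plugin can do.
    return jsonify(PLUGIN_MANIFEST)

@app.get("/quote/<symbol>")
def quote(symbol: str):
    # Placeholder data; a real plugin would query a market-data source.
    prices = {"AAPL": 160.25, "MSFT": 280.10}
    return jsonify({"symbol": symbol.upper(), "price": prices.get(symbol.upper())})

if __name__ == "__main__":
    app.run(port=5003)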

Conventionally, most large language models (LLMs) like ChatGPT have been constrained in a bubble, so to speak, only able to interact with the world through text conversations with a user. As OpenAI writes in its introductory blog post on ChatGPT plugins, "The only thing language models can do out-of-the-box is emit text."

AI-powered Bing Chat gains three distinct personalities

Three different-colored robot heads.

(credit: Benj Edwards / Ars Technica)

On Wednesday, Microsoft employee Mike Davidson announced that the company has rolled out three distinct personality styles for its experimental AI-powered Bing Chat bot: Creative, Balanced, or Precise. Microsoft has been testing the feature since February 24 with a limited set of users. Switching between modes shifts Bing Chat's balance between accuracy and creativity, producing noticeably different results.

Bing Chat is an AI-powered assistant based on an advanced large language model (LLM) developed by OpenAI. A key feature of Bing Chat is that it can search the web and incorporate the results into its answers.

Microsoft announced Bing Chat on February 7, and shortly after the service went live, adversarial attacks regularly drove an early version of it to simulated insanity, and users discovered the bot could be convinced to threaten them. Not long after, Microsoft dramatically dialed back Bing Chat's outbursts by imposing strict limits on how long conversations could last.

Microsoft “lobotomized” AI-powered Bing Chat, and its fans aren’t happy

(credit: Aurich Lawson | Getty Images)

Microsoft's new AI-powered Bing Chat service, still in private testing, has been in the headlines for its wild and erratic outputs. But that era has apparently come to an end. At some point during the past two days, Microsoft significantly curtailed Bing's ability to threaten its users, have existential meltdowns, or declare its love for them.

During Bing Chat's first week, test users noticed that Bing (also known by its code name, Sydney) began to act significantly unhinged when conversations got too long. As a result, Microsoft limited users to 50 messages per day and five inputs per conversation. In addition, Bing Chat will no longer tell you how it feels or talk about itself.

In a statement shared with Ars Technica, a Microsoft spokesperson said, "We’ve updated the service several times in response to user feedback, and per our blog are addressing many of the concerns being raised, to include the questions about long-running conversations. Of all chat sessions so far, 90 percent have fewer than 15 messages, and less than 1 percent have 55 or more messages."

โŒ