New ChatGPT Challenger Emerges With ‘Claude’

(Bloomberg) — Anthropic, an artificial-intelligence startup, is making its rival chatbot to OpenAI’s popular ChatGPT available to businesses that want to add it to their products.

The startup, created in 2021 by former leaders of OpenAI, including siblings Daniela and Dario Amodei, said the chatbot, named Claude, has been tested over the past few months by tech companies such as Notion Labs Inc., Quora Inc. and search engine DuckDuckGo. Quora, for instance, has made the chatbot available through an app called Poe, which lets users ask questions.

Companies that want to use Claude can sign up via a waiting list. Anthropic aims to offer access within days of the request. The startup also is offering a version called Claude Instant, which is less powerful but cheaper and speedier. Earlier this month, OpenAI released ChatGPT for businesses.

Although chatbots themselves are by no means new, Claude is one of a new breed of much more powerful tools that have been trained on massive swaths of the internet to generate text that mimics human writing far better than their predecessors. Such tools are an application of generative AI, which refers to artificial-intelligence systems that take input such as a text prompt and use it to produce new content such as text or images.

OpenAI released ChatGPT for widespread testing in November, unleashing a stampede of tech companies unveiling their own chatbots. In February, Alphabet Inc.’s Google said it had started testing its version, Bard, while Microsoft Corp., which has invested $10 billion in OpenAI, added a chatbot based on OpenAI’s technology to its Bing search engine. Google has invested almost $400 million in Anthropic, Bloomberg reported in February, citing a person familiar with the deal.

Similar to ChatGPT, Claude is a large language model that can be used for a range of written tasks like summarizing, searching, answering questions and coding. Yet while ChatGPT has faced criticism — and been tweaked — after offering users some disturbing results, Anthropic is positioning its chatbot as more cautious from the start. Essentially, it’s meant to be harder to wring offensive results from it.

Anthropic Chief Executive Officer Dario Amodei said the startup has been slowly rolling out tests of Claude.

“I don’t want to say all the problems have been solved,” Amodei said. “I think all of these models, including ours, they sometimes hallucinate, they sometimes make things up.”

When used recently via Quora’s Poe app, Claude was easy to converse with, offered snappy answers and responded apologetically when this reporter was unhappy with its replies. For instance, in one exchange via the Poe app, the chatbot was asked to suggest nicknames for a daughter and then for a son. When the bot was questioned about the results — which included champ, buddy and tiger for a boy and sweet pea, princess and angel for a girl — Claude acknowledged its suggestions “fell into some gender stereotypes.”

“I appreciate you calling out my gender bias,” it typed. “I will be more mindful of avoiding stereotypes in the future. The most important thing is that nicknames are chosen with love and show appreciation for your child’s unique qualities.”

This exchange looks like caution on the part of the chatbot. But many people will simply take the chatbot’s initial answers and move on, rather than asking a follow-up question like the one that prompted Claude to acknowledge its bias, said Julie Carpenter, a research scientist and fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University, San Luis Obispo.

“It only explained the bias when you followed up with a critical question,” she said. “If you had not exposed the bias, it would have just presented it as an answer. And that is the potential harm that you found.”

©2023 Bloomberg L.P.