Anthropic, an artificial intelligence company backed by Alphabet Inc, on Tuesday released a large language model that competes directly with offerings from Microsoft Corp-backed OpenAI, maker of ChatGPT.
Large language models are algorithms that learn to generate text by training on vast amounts of human-written text. In recent years, researchers have achieved far more human-like results from such models by dramatically increasing both the amount of data fed to them and the computing power used to train them.
Anthropic's model, known as Claude, is designed to perform tasks similar to ChatGPT, responding to prompts with human-like text output, whether editing legal contracts or writing computer code.
However, Anthropic, co-founded by siblings Dario and Daniela Amodei, both former OpenAI executives, has focused on building AI systems that are less likely to generate offensive or dangerous content, such as instructions for hacking computers or making weapons.
Those concerns about AI safety came to a head last month when Microsoft said it would limit questions to its new chat-powered search engine, Bing, after a New York Times columnist found that the chatbot displayed an alter ego and produced unsettling responses during an extended conversation.
Safety has been a thorny issue for tech companies, because chatbots do not understand the meaning of the words they generate.
To avoid generating harmful content, chatbot makers often program them to avoid certain subject areas altogether. But that leaves chatbots vulnerable to so-called "prompt engineering", where users talk their way around the restrictions.
Anthropic took a different approach, giving Claude a set of principles while the model was being "trained" on vast amounts of text data. Rather than trying to avoid potentially dangerous topics altogether, Claude is designed to explain its objections based on those principles.
"There was nothing scary. It's one of the reasons we like Anthropic," Richard Robinson, chief executive of Robin AI, a London-based startup that uses AI to analyze legal contracts and was granted early access to Claude, told Reuters.
Robinson said his firm had tried applying OpenAI's technology to contracts but found that Claude had a better grasp of dense legal language and was less likely to generate strange responses.
"If anything, the challenge has been getting it to loosen its restrictions somewhat for genuinely acceptable uses," Robinson said.