Google invests $300 million in Anthropic as race to compete with ChatGPT heats up



According to new reporting from the Financial Times, Google has invested $300 million in Anthropic, one of the buzziest OpenAI rivals, whose recently debuted generative AI model Claude is considered competitive with ChatGPT.

According to the reporting, Google will take a stake of around 10% and Anthropic will be required to use the money to buy computing resources from Google Cloud. The new funding will value the San Francisco-based company at around $5 billion.

The news comes little more than a week after Microsoft announced a reported $10 billion investment in OpenAI, and signals an increasingly competitive Big Tech race in the generative AI space.

Anthropic founded by OpenAI researchers

Anthropic was founded in 2021 by several researchers who left OpenAI, and gained wider attention last April when, after less than a year in existence, it announced a whopping $580 million in funding. Most of that money, it turned out, came from Sam Bankman-Fried and his colleagues at FTX, the now-bankrupt cryptocurrency platform accused of fraud. Questions remain as to whether that money could be recovered by a bankruptcy court.


Anthropic — and FTX — have also been tied to the Effective Altruism movement, which former Google researcher Timnit Gebru called out recently in a Wired opinion piece as a “dangerous brand of AI safety.”


Google will have access to Claude

Anthropic’s AI chatbot, Claude — currently available in closed beta through a Slack integration — is reportedly similar to ChatGPT and has even shown improvements over it in some respects. Anthropic, which describes itself as “working to build reliable, interpretable, and steerable AI systems,” created Claude using a process called “Constitutional AI,” which it says is based on concepts such as beneficence, non-maleficence and autonomy.

According to an Anthropic paper detailing Constitutional AI, the process involves a supervised learning phase and a reinforcement learning phase: “As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them.”
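At a very high level, the two phases work like this: in the supervised phase, the model critiques and revises its own responses against a list of written principles (the “constitution”), producing revised responses for fine-tuning; in the reinforcement learning phase, an AI labeler compares pairs of responses against the principles to produce preference data for a reward model. The toy sketch below illustrates only that loop structure. All model calls are stubbed with placeholder string functions, and every name (`generate`, `critique`, `revise`, the example principles) is hypothetical, not Anthropic's actual API or constitution.

```python
# Toy illustration of the two Constitutional AI phases described in the
# article. Model calls are stubs; this shows the loop structure only.

CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that explains objections rather than evading.",
]

def generate(prompt):
    # Stub standing in for the base assistant's initial answer.
    return f"Draft answer to: {prompt}"

def critique(response, principle):
    # Stub: the model critiques its own response against a principle.
    return f"Critique under '{principle}': {response}"

def revise(response, critique_text):
    # Stub: the model rewrites its response in light of the critique.
    return f"Revised [{critique_text[:25]}...]: {response}"

def supervised_phase(prompt):
    """Phase 1: self-critique and revision produce supervised
    fine-tuning data (harmless but non-evasive revised answers)."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        c = critique(response, principle)
        response = revise(response, c)
    return response

def preference_label(resp_a, resp_b, principle):
    """Phase 2 (RL from AI feedback): an AI labeler picks whichever
    response better satisfies the principle. A toy heuristic stands
    in for the model; the labels train a preference/reward model."""
    return resp_a if len(resp_a) >= len(resp_b) else resp_b

if __name__ == "__main__":
    print(supervised_phase("How do I pick a lock?"))
```

The revised responses from phase 1 and the preference labels from phase 2 would then feed standard supervised fine-tuning and RL training loops, which are out of scope for this sketch.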
