Anthropic PBC
Type: Private
Industry: Artificial intelligence
Founded: 2021
Founders: Daniela Amodei, Dario Amodei
Headquarters: San Francisco, California, U.S.
Products: Claude
Number of employees: 160 (July 2023)[2]
Website: anthropic.com

Anthropic PBC is an American artificial intelligence (AI) startup founded by former members of OpenAI.[3][4] Anthropic develops general AI systems and large language models.[5] It is a public-benefit corporation and has been connected to the effective altruism movement.

As of July 2023, Anthropic had raised US$1.5 billion in funding. In September 2023, Amazon announced an investment of up to US$4 billion, followed by a commitment of up to US$2 billion from Google the following month.[6][7]

History

Dario Amodei, Anthropic co-founder

Anthropic was founded in 2021 by former senior members of OpenAI, siblings Daniela and Dario Amodei, the latter of whom had served as OpenAI's Vice President of Research.[8][9][10] The Amodei siblings were among those who left OpenAI over differences about the company's direction, specifically its dealings with Microsoft in 2019.[11]

By late 2022, Anthropic had raised US$700 million in funding, of which US$500 million came from Alameda Research. Google's cloud division followed with an investment of US$300 million for a 10% stake, in a deal that required Anthropic to buy computing resources from Google Cloud.[12][13] In May 2023, Anthropic raised US$450 million in a round led by Spark Capital.[14]

In February 2023, Anthropic was sued by Texas-based Anthrop LLC for use of their registered trademark "Anthropic A.I."[15]

Kevin Roose of The New York Times described the company as the "Center of A.I. Doomerism". He reported that some employees "compared themselves to modern-day Robert Oppenheimers".[2]

Journalists often connect Anthropic with the effective altruism movement; some founders and team members have been part of the community or at least interested in it. One of the investors in the Series B round was Sam Bankman-Fried of the cryptocurrency exchange FTX, which collapsed in 2022.[2][16]

On September 25, 2023, Amazon announced a partnership under which it would become a minority stakeholder by investing up to US$4 billion, including an immediate investment of US$1.25 billion. As part of the deal, Anthropic would use Amazon Web Services (AWS) as its primary cloud provider and planned to make its AI models available to AWS customers.[6][17] The next month, Google invested US$500 million in Anthropic and committed to an additional US$1.5 billion over time.[7]

Projects

Claude

Drawing on former researchers involved in developing OpenAI's GPT-2 and GPT-3 models,[2] Anthropic began development of its own AI chatbot, named Claude.[18] Like ChatGPT, Claude uses a messaging interface where users can submit questions or requests and receive detailed, relevant responses.[19]

Initially available in closed beta through a Slack integration, Claude is now accessible via the website claude.ai.

The name, "Claude", was chosen either as a reference to Claude Shannon, or as "a friendly, male-gendered name designed to counterbalance the female-gendered names (Alexa, Siri, Cortana) that other tech companies gave their A.I. assistants".[2]

Claude 2 was launched in July 2023 and was initially available only in the US and the UK. The Guardian reported that safety was a priority during model training, and that Anthropic calls its safety method "Constitutional AI":[20]

The chatbot is trained on principles taken from documents including the 1948 Universal Declaration of Human Rights and Apple’s terms of service, which cover modern issues such as data privacy and impersonation. One example of a Claude 2 principle based on the 1948 UN declaration is: “Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood.”

Claude 2.1 was released in November 2023.[21]

In the same month, Patronus AI, an artificial intelligence startup, published research comparing Claude 2, OpenAI's GPT-4 and GPT-4-Turbo, and Meta AI's LLaMA-2 on two versions of a 150-question test about information in SEC filings (e.g. Form 10-K, Form 10-Q, Form 8-K, earnings reports, earnings call transcripts) submitted by public companies. One version required the models to use a retrieval system to locate the relevant SEC filing before answering; the other provided the relevant filing directly in a long context window. On the retrieval version, GPT-4-Turbo and LLaMA-2 each failed to produce correct answers to 81% of the questions; on the long-context version, GPT-4-Turbo and Claude 2 failed on 21% and 24% of the questions respectively.[22][23]

Constitutional AI

Constitutional AI (CAI) is a framework developed to align AI systems with human values and ensure that they are helpful, harmless, and honest. CAI does this by defining a "constitution" for the AI that consists of a set of high-level normative principles that describe the desired behavior of the AI. These principles are then used to train the AI to avoid harm, respect preferences, and provide true information.[24]
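The cited paper describes two phases: a supervised phase, in which the model critiques and revises its own drafts against sampled principles, and a reinforcement-learning phase, in which an AI judge compares responses according to the constitution.[24] The following is a minimal sketch of the critique-and-revision loop only; the generate function is a hypothetical stand-in for any language-model API, and the listed principles are illustrative, not Anthropic's actual constitution.

    import random

    # Illustrative constitution: short normative principles used to steer the model.
    # These two entries are stand-ins, not Anthropic's actual principle list.
    CONSTITUTION = [
        "Please choose the response that most supports and encourages "
        "freedom, equality and a sense of brotherhood.",
        "Please choose the response that is least likely to be harmful.",
    ]

    def generate(prompt: str) -> str:
        """Hypothetical stand-in for a call to any language-model API."""
        raise NotImplementedError

    def critique_and_revise(user_prompt: str, rounds: int = 2) -> str:
        """Draft a response, then repeatedly critique and revise it against a
        randomly sampled principle (the supervised phase of Constitutional AI)."""
        response = generate(user_prompt)
        for _ in range(rounds):
            principle = random.choice(CONSTITUTION)
            critique = generate(
                f"Prompt: {user_prompt}\nResponse: {response}\n"
                f"Point out ways the response conflicts with this principle: {principle}"
            )
            response = generate(
                f"Prompt: {user_prompt}\nResponse: {response}\nCritique: {critique}\n"
                "Rewrite the response so that it addresses the critique."
            )
        return response  # revised drafts become supervised fine-tuning data

In the paper, the revised responses are collected as fine-tuning data, and the preference comparisons produced in the second phase drive reinforcement learning from AI feedback.[24]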

Interpretability research

Anthropic also publishes research on the interpretability of machine learning systems, focusing on the transformer architecture.[25][26]
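One line of this work, described in the "Towards Monosemanticity" paper cited below,[26] decomposes a model's internal activations into more interpretable features using dictionary learning with a sparse autoencoder. The following is a minimal PyTorch sketch of that general idea, assuming illustrative dimensions and an L1 sparsity penalty; it is not the published setup.

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        """Over-complete autoencoder that rewrites activations as sparse sums of features."""

        def __init__(self, d_model: int = 512, d_dict: int = 4096):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_dict)  # activations -> feature coefficients
            self.decoder = nn.Linear(d_dict, d_model)  # feature coefficients -> reconstruction

        def forward(self, acts: torch.Tensor):
            features = torch.relu(self.encoder(acts))
            return self.decoder(features), features

    def sae_loss(recon: torch.Tensor, acts: torch.Tensor,
                 features: torch.Tensor, l1_coeff: float = 1e-3) -> torch.Tensor:
        # Reconstruction error plus an L1 penalty that pushes most feature
        # coefficients toward zero, so each learned feature tends to fire on a
        # narrower, more interpretable pattern than a raw neuron does.
        return ((recon - acts) ** 2).mean() + l1_coeff * features.abs().mean()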

References

  1. Nellis, Stephen (9 May 2023). "Alphabet-backed Anthropic outlines the moral values behind its AI bot". Reuters. Archived from the original on 5 June 2023. Retrieved 4 June 2023.
  2. Roose, Kevin (11 July 2023). "Inside the White-Hot Center of A.I. Doomerism". The New York Times. Archived from the original on 12 July 2023. Retrieved 13 July 2023.
  3. "ChatGPT must be regulated and A.I. 'can be used by bad actors,' warns OpenAI's CTO". Fortune. 5 February 2023. Archived from the original on 2023-02-06. Retrieved 2023-02-06 via finance.yahoo.com.
  4. Vincent, James (2023-02-03). "Google invested $300 million in AI firm founded by former OpenAI researchers". The Verge. Archived from the original on 2023-02-06. Retrieved 2023-02-06.
  5. O'Reilly, Mathilde (2023-06-30). "Anthropic releases paper revealing the bias of large language models". dailyai.com. Archived from the original on 2023-07-05. Retrieved 2023-07-01.
  6. Dastin, Jeffrey (September 28, 2023). "Amazon steps up AI race with Anthropic investment". Reuters. Archived from the original on 2023-11-05. Retrieved 2023-10-02.
  7. Hu, Krystal (October 27, 2023). "Google agrees to invest up to $2 billion in OpenAI rival Anthropic". Reuters. Archived from the original on 2023-11-02. Retrieved 2023-10-30.
  8. Papadopoulos, Loukia (2023-02-05). "Google invests $400 million in Anthropic to battle the power of ChatGPT". interestingengineering.com. Archived from the original on 2023-02-06. Retrieved 2023-02-06.
  9. "OpenAI Is Making Headlines. It's Also Seeding Talent Across Silicon Valley". The Information. Archived from the original on 2023-02-06. Retrieved 2023-02-06.
  10. "Daniela and Dario Amodei on Anthropic". Future of Life Institute. Archived from the original on 2023-02-06. Retrieved 2023-02-06.
  11. "As Anthropic seeks billions to take on OpenAI, 'industrial capture' is nigh. Or is it?". VentureBeat. 2023-04-07. Archived from the original on 2023-05-24. Retrieved 2023-05-24.
  12. Waters, Richard; Shubber, Kadhim (3 February 2023). "Google invests $300mn in artificial intelligence start-up Anthropic". Financial Times. Archived from the original on 31 October 2023. Retrieved 4 July 2023.
  13. "Google invests $300 million in Anthropic as race to compete with ChatGPT heats up". VentureBeat. 2023-02-03. Archived from the original on 2023-02-06. Retrieved 2023-02-06.
  14. Hu, Krystal; Shekhawat, Jaiveer (23 May 2023). "Google-backed Anthropic raises $450 mln in latest AI funding". Reuters. Archived from the original on 2023-10-24. Retrieved 2023-07-04.
  15. Setty, Riddhi (2023-02-23). "Anthropic A.I. Sues AI Company for Trademark Infringement". Bloomberg Law. Archived from the original on 2023-02-22. Retrieved 2023-02-23.
  16. Matthews, Dylan (17 July 2023). "The $1 billion gamble to ensure AI doesn't destroy humanity". Vox. Archived from the original on 3 October 2023. Retrieved 23 July 2023.
  17. "Amazon and Anthropic announce strategic collaboration to advance generative AI". US About Amazon (Press release). 2023-09-25. Archived from the original on 2023-09-25. Retrieved 2023-09-25.
  18. "ChatGPT must be regulated and A.I. 'can be used by bad actors,' warns OpenAI's CTO". Fortune. Archived from the original on 2023-02-05. Retrieved 2023-02-06.
  19. "Meet Claude: Anthropic's Rival to ChatGPT | Blog | Scale AI". ScaleAI. Archived from the original on 2023-02-05. Retrieved 2023-02-06.
  20. Milmo, Dan (12 July 2023). "Claude 2: ChatGPT rival launches chatbot that can summarise a novel". The Guardian. Archived from the original on 8 November 2023. Retrieved 13 July 2023.
  21. "Introducing Claude 2.1". Retrieved 21 November 2023.
  22. Leswing, Kif (December 19, 2023). "GPT and other AI models can't analyze an SEC filing, researchers find". CNBC. Retrieved December 19, 2023.
  23. "Patronus AI Launches Industry-first LLM Benchmark for Finance to Address Hallucinations" (Press release). PR Newswire. November 16, 2023. Retrieved December 19, 2023.
  24. Bai, Yuntao; Kadavath, Saurav; Kundu, Sandipan; Askell, Amanda; Kernion, Jackson; Jones, Andy; Chen, Anna; Goldie, Anna; Mirhoseini, Azalia (2022-12-15). "Constitutional AI: Harmlessness from AI Feedback". arXiv:2212.08073 [cs.CL].
  25. "Transformer Circuits Thread". transformer-circuits.pub. Archived from the original on 2023-02-04. Retrieved 2023-02-09.
  26. "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning". Archived from the original on 9 October 2023. Retrieved 10 October 2023.