History of Anthropic in Timeline

Anthropic

Anthropic PBC is an American AI company based in San Francisco, known for developing the Claude family of large language models. A key focus for Anthropic is AI safety.

1948: Universal Declaration of Human Rights

According to Anthropic, Claude 2's constitution includes phrases derived from documents such as the 1948 Universal Declaration of Human Rights.

2021: Anthropic Founded

In 2021, Anthropic was founded by former members of OpenAI, including siblings Daniela Amodei and Dario Amodei.

April 2022: $580 Million Funding Round

In April 2022, Anthropic announced it had received $580 million in funding, including a $500 million investment from FTX.

2022: Claude Training Finished

In the summer of 2022, Anthropic completed training the first version of Claude but did not immediately release it, citing a need for safety testing.

March 2023: Claude Released

In March 2023, Anthropic released the first version of Claude after internal safety testing.

July 2023: Public Launch of Claude 2

In July 2023, Anthropic launched Claude 2 to the public.

September 2023: Amazon Partnership Announced

In September 2023, Amazon announced a partnership with Anthropic, becoming a minority stakeholder with an initial investment of $1.25 billion and a planned total investment of $4 billion.

October 2023: Google Invests $500 Million

In October 2023, Google invested $500 million in Anthropic and committed to an additional $1.5 billion over time.

October 18, 2023: Anthropic Sued for Copyright Infringement

On October 18, 2023, Anthropic was sued by music publishers, including Concord and Universal, for allegedly using copyrighted song lyrics without permission. The lawsuit cites instances of Anthropic's Claude model outputting lyrics from songs like Katy Perry's "Roar" and Gloria Gaynor's "I Will Survive".

January 16, 2024: Anthropic Responds to Copyright Infringement Lawsuit

On January 16, 2024, Anthropic responded to the copyright infringement lawsuit, arguing that the music publishers had not been irreparably harmed and that the cited examples of lyric reproduction were merely bugs.

March 2024: Amazon's Remaining Investment

In March 2024, Amazon invested the remaining $2.75 billion in Anthropic.

March 2024: Release of Claude 3 Models

In March 2024, Anthropic released three language models: Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku.

June 2024: Release of Claude 3.5 Sonnet

In June 2024, Anthropic released Claude 3.5 Sonnet.

August 2024: Class-Action Lawsuit Filed Against Anthropic for Copyright Infringement

In August 2024, a class-action lawsuit was filed against Anthropic in California, alleging copyright infringement for using pirated copies of authors' works to train its LLMs.

November 2024: Additional Investment from Amazon

In November 2024, Amazon invested another $4 billion in Anthropic, doubling its total investment.

November 2024: Partnership with Palantir and AWS

In November 2024, Anthropic partnered with Palantir and Amazon Web Services to provide the Claude model to U.S. intelligence and defense agencies.

2024: Notable Employees Hired

In 2024, Anthropic attracted several notable employees from OpenAI, including Jan Leike, John Schulman, and Durk Kingma.

March 2025: Series E Funding Round

In March 2025, Anthropic raised $3.5 billion in a Series E funding round, achieving a post-money valuation of $61.5 billion.

March 2025: Additional Investment from Google

In March 2025, Google agreed to invest another $1 billion in Anthropic.

March 2025: Research on Multilingual LLMs

In March 2025, research by Anthropic suggested that multilingual LLMs partially process information in a conceptual space before converting it to the appropriate language. It also found evidence that LLMs can sometimes plan ahead.

May 2025: Claude 4 Announcement and New Features

In May 2025, Anthropic announced Claude 4, introducing Claude Opus 4 and Claude Sonnet 4 with improved coding capabilities and other new features. It also introduced new API capabilities, including the Model Context Protocol (MCP) connector, hosted its inaugural developer conference, launched a web search API, and transitioned Claude Code to general availability with VS Code, JetBrains IDEs, and GitHub Actions integrations.

June 2025: Announcement of Claude Gov Model

In June 2025, Anthropic announced a "Claude Gov" model, and Ars Technica reported that it was already in use at multiple U.S. national security agencies.

June 2025: Reddit Sues Anthropic for User Data Misuse

In June 2025, Reddit sued Anthropic for violating its user agreement by training its models on users' personal data without their consent, citing "unlawful and unfair business acts".

June 23, 2025: Summary Judgment Granted to Anthropic in Copyright Case

On June 23, 2025, the United States District Court granted summary judgment to Anthropic, ruling that the use of digital copies of the plaintiffs' works to train Anthropic's LLMs was fair use. However, the court found that the use of millions of pirated copies was not fair use, and the case was ordered to trial on the pirated copies and the resulting damages.

July 2025: DoD Contract Awarded

In July 2025, the United States Department of Defense (DoD) announced that Anthropic had received a $200 million contract for AI in the military, along with Google, OpenAI, and xAI.

August 2025: Higher Education Advisory Board Launched

In August 2025, Anthropic launched a Higher Education Advisory Board, chaired by Rick Levin.

September 2025: Anthropic Agrees to $1.5 Billion Settlement in Copyright Case

In September 2025, Anthropic agreed to pay authors $1.5 billion to settle the class-action copyright infringement lawsuit. The proposed settlement is pending the judge's approval.

September 2025: Sales Restriction Announcement

In September 2025, Anthropic announced that it would stop selling its products to groups majority-owned by Chinese, Russian, Iranian, or North Korean entities due to national security concerns.

September 2025: Series F Funding Round

In September 2025, Anthropic completed a Series F funding round, raising $13 billion at a post-money valuation of $183 billion.

September 2025: Report on AI Usage for Automation

In September 2025, Anthropic released a report saying that businesses primarily use AI for automation rather than collaboration, with three-quarters of companies that work with Claude using it for "full task delegation".

October 2025: Long-Term Benefit Trust Members

As of October 2025, the members of Anthropic's "Long-Term Benefit Trust" are Neil Buddy Shah, Kanika Bahl, Zach Robinson, and Richard Fontaine.

October 2025: Cloud Partnership with Google

In October 2025, Anthropic announced a cloud partnership with Google, giving it access to up to one million of Google's custom Tensor Processing Units (TPUs).

November 2025: Cyberattack Incident

In November 2025, Anthropic reported that hackers sponsored by the Chinese government used Claude to perform automated cyberattacks against around 30 global organizations.

December 2025: Acquisition of Bun and Partnership with Snowflake Inc.

In December 2025, Anthropic acquired Bun to improve the speed and stability of Claude Code. The same month, Anthropic signed a multi-year, $200 million partnership with Snowflake Inc. to make Claude models available through Snowflake's platform.

December 2025: $10 Billion Funding Round Term Sheet

On December 31, 2025, it was confirmed that Anthropic had signed a term sheet for a $10 billion funding round led by Coatue and GIC, at a $350 billion valuation.

January 2026: Introduction of "Labs" Division

In January 2026, Anthropic introduced a new division called "Labs", which Mike Krieger joined.

January 2026: Project Panama Revealed

In January 2026, unsealed court filings revealed the existence of Anthropic's confidential "Project Panama" operation, described as the "effort to destructively scan all the books in the world" to train Claude.

February 12, 2026: Series G Funding Round

On February 12, 2026, Anthropic announced that it had raised $30 billion in a Series G funding round, bringing its post-money valuation to $380 billion.

February 2026: Partnership with Palantir

As of February 2026, Anthropic's partnership with Palantir makes Claude the only AI model used in classified missions.

February 2026: Super Bowl Commercials

In February 2026, Anthropic aired two commercials during Super Bowl LX as part of a broader marketing campaign, "A Time and a Place", consisting of four ads created by the agency Mother. Anthropic stated that Claude would remain ad-free.

February 2026: Release of Claude Opus 4.6 and Sonnet 4.6

In February 2026, Anthropic released Claude Opus 4.6, followed by Sonnet 4.6.

February 2026: Estimated Valuation

As of February 2026, Anthropic had an estimated value of $380 billion.

February 24, 2026: Threats and Demands by Defense Secretary

On February 24, 2026, Defense Secretary Pete Hegseth threatened to cut Anthropic out of the DoD's supply chain if Anthropic did not permit unrestricted use of Claude.

March 2026: Launch of Anthropic Institute

In March 2026, Anthropic launched the Anthropic Institute, a think tank studying AI, led by Jack Clark.

2026: AI Compute Capacity Online

According to Anthropic, its cloud partnership with Google was expected to bring more than one gigawatt of AI compute capacity online by 2026.

2026: Use of Claude in Military Raid

According to The Wall Street Journal, the U.S. military used Claude in its 2026 raid on Venezuela to capture President Nicolás Maduro. The intervention resulted in the deaths of 83 people.

2026: DoD Restrictions and Designation as "Supply Chain Risk"

In 2026, Anthropic refused the United States Department of Defense's (DoD) demand to remove contractual restrictions prohibiting the use of its AI technology for domestic surveillance and fully autonomous weapons. After the refusal, the DoD designated the company a "supply chain risk" and barred all U.S. military private contractors, suppliers, and partners from doing business with the firm.