Talan.tech

Claude

by Anthropic · San Francisco, CA

Claude is a safety-focused AI assistant developed by Anthropic.

Relevant industries: Engineering, Government

Risk Score: 32/100 (Moderate) · 18+ incidents · Legal 62 · Safety 25 · Privacy 36 · Regulatory 20 · Security 0

Risk Score

32/100
Moderate Risk

Apr 27, 2026

Risk Score Breakdown

Legal Risk

Court cases & lawsuits

62/100

Safety Risk

Incidents & harm events

25/100

Privacy Risk

Breaches & GDPR actions

36/100

Regulatory Risk

FTC, EU enforcement

20/100

Security Risk

CVEs & vulnerabilities

0/100

Incident Timeline

18 total incidents · showing 5 most recent

Apr 2026

LOW · Data Breach · ACTIVE
The Hacker News: Mythos Changed the Math on Vulnerability Discovery. Most Teams Aren't Ready for the Remediation Side

Anthropic’s Claude Mythos Preview was described as a cybersecurity-focused AI system that can identify software vulnerabilities at scale. The available information does not describe a confirmed breach, affected users, or exposed data.

#hackernews #security #breach

Apr 2026

LOW · Data Breach · ACTIVE
The Hacker News: Project Glasswing Proved AI Can Find the Bugs. Who's Going to Fix Them?

Anthropic developed an AI model called Project Glasswing that can discover software vulnerabilities and delayed public release while providing access to major tech companies. Limited public details are available on any specific breach or affected users.

#hackernews #security #breach

Apr 2026

MEDIUM · Court Case · ACTIVE · 2:26-cv-00039
Court Case: Comstock v. Microsoft Corporation

A RICO racketeering lawsuit (2:26-cv-00039) was filed in the U.S. District Court for the District of Montana naming multiple parties, including Betts Patterson & Mines P.S. and King County Superior Court Administration. Limited public details are available about any connection to Claude/Anthropic.

Court: District Court, D. Montana · #courtlistener #lawsuit #court-case

Apr 2026

CRITICAL · Data Breach · ACTIVE
The Hacker News: Anthropic MCP Design Vulnerability Enables RCE, Threatening AI Supply Chain

Researchers identified a critical design vulnerability in Anthropic’s Model Context Protocol (MCP) that could allow remote code execution, potentially impacting users and downstream systems in the AI supply chain.

#hackernews #security #breach

Apr 2026

MEDIUM · Court Case · ACTIVE · 4:26-cv-03299
Court Case: Veleber v. Alphabet Inc.

A RICO racketeering lawsuit (4:26-cv-03299) was filed in the U.S. District Court for the Northern District of California naming SoftBank Group Corp. as plaintiff and the NSA, CIA, and Verizon Communications as defendants. Limited public details are available on the claims and impact.

Court: District Court, N.D. California · #courtlistener #lawsuit #court-case

Frequently Asked Questions

What is Claude's AI risk score?

Claude has an AI Risk Score of 32/100 (Moderate Risk). This score is calculated from 18+ documented public incidents across legal, safety, privacy, regulatory, and security categories.

Is Claude safe to use?

Claude by Anthropic has a moderate risk profile based on public data. Organizations should review the full incident list and conduct their own due diligence. This score does not constitute legal advice.

Does Claude have lawsuits?

Yes. Our public records show 2 court cases for Claude: Comstock v. Microsoft Corporation and Veleber v. Alphabet Inc.

How is the AI Risk Score calculated?

Scores are weighted across 5 categories: Legal (25%), Safety (25%), Privacy (20%), Regulatory (15%), Security (15%). Each incident is scored by severity and type, then decayed based on age. Active lawsuits and fatal incidents do not decay.
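As a rough illustration, assuming the five category weights above combine as a plain weighted average (the site's exact formula is not published), the headline score can be reproduced from the category values shown on this page. The age-decay function below is likewise a hypothetical sketch using an assumed exponential half-life:

```python
# Sketch of the risk-score math, assuming a simple weighted average of the
# five category scores. The actual Talan.tech pipeline is not public.
WEIGHTS = {
    "legal": 0.25,
    "safety": 0.25,
    "privacy": 0.20,
    "regulatory": 0.15,
    "security": 0.15,
}

# Category scores shown on this page for Claude.
category_scores = {
    "legal": 62,
    "safety": 25,
    "privacy": 36,
    "regulatory": 20,
    "security": 0,
}

overall = sum(WEIGHTS[k] * category_scores[k] for k in WEIGHTS)
print(round(overall))  # 31.95 rounds to 32, matching the 32/100 headline


def decayed_score(score: float, age_years: float,
                  decays: bool = True, half_life: float = 2.0) -> float:
    """Hypothetical per-incident age decay (assumed exponential half-life).

    Active lawsuits and fatal incidents are exempt, per the FAQ text,
    so they pass decays=False and keep their full score.
    """
    if not decays:
        return score
    return score * 0.5 ** (age_years / half_life)
```

With the assumed two-year half-life, a two-year-old incident scored 50 would contribute 25, while an active lawsuit scored 50 would still contribute the full 50.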

Stay ahead of AI risk

Get alerts when Claude risk score changes

New lawsuits, breaches, and regulatory actions — delivered to your inbox.