Why “High-Risk” Classification Matters Under the EU AI Act
If your software uses AI (or AI-like techniques such as machine learning, deep learning, or certain forms of automated decision-making), the EU AI Act may impose extra obligations when the system is classified as a high-risk AI system. High-risk classification affects what you must do before placing the system on the market or putting it into service in the EU—typically involving governance, documentation, testing, monitoring, and in some cases conformity assessment.
This guide helps you self-diagnose whether your AI system is likely to fall into the high-risk category and what to do next.
Step 1: Confirm You’re Looking at an “AI System” (Not Just Regular Software)
Start with a simple check: does your product take in data and generate outputs (predictions, recommendations, classifications, decisions, content) that can influence real-world outcomes?
Your system is more likely to be treated as an AI system if it:
- Learns patterns from data (training) and applies them in production
- Produces probabilistic outputs (scores, ranks, likelihoods)
- Adapts over time (including periodic retraining)
- Automates decisions or strongly steers human decisions
If your software is purely deterministic (fixed rules only), you may still have compliance duties in other frameworks, but high-risk AI Act obligations may be less likely—unless you are embedding an AI component from a vendor.
Step 2: Identify the “Intended Purpose” and Who Uses It
High-risk classification hinges on intended purpose and context of use, not just model type.
Write a one-paragraph “intended purpose” statement:
- Who are the users? (HR teams, doctors, teachers, police, banks, consumers)
- Who are the affected persons? (employees, students, patients, customers)
- What decisions does it support or automate?
- What happens if it is wrong? (denial of access, injury, legal impact, loss of livelihood)
Tip: Marketing claims matter. If you advertise that your system “screens candidates,” “detects fraud,” “assesses creditworthiness,” or “supports diagnosis,” you are signaling a regulated use case.
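The intended-purpose statement above can also be captured as a structured record so it can be versioned alongside the product. A minimal sketch; the class and field names are illustrative assumptions, not terms defined by the AI Act:

```python
from dataclasses import dataclass

@dataclass
class IntendedPurpose:
    # Field names are assumptions for an internal record, not legal categories.
    users: list              # e.g. ["HR teams"]
    affected_persons: list   # e.g. ["job applicants"]
    decisions_supported: list
    failure_impact: list     # what happens if the system is wrong

    def summary(self) -> str:
        """Render the one-paragraph intended-purpose statement."""
        return (
            f"Used by {', '.join(self.users)} to support "
            f"{', '.join(self.decisions_supported)}; affects "
            f"{', '.join(self.affected_persons)}; failure impact: "
            f"{', '.join(self.failure_impact)}."
        )

purpose = IntendedPurpose(
    users=["HR teams"],
    affected_persons=["job applicants"],
    decisions_supported=["candidate screening"],
    failure_impact=["denial of employment opportunity"],
)
```

Keeping this record in version control makes later reclassification decisions traceable.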
Step 3: Check Whether You’re in a High-Risk Area (Fast Screening)
High-risk AI systems are commonly those used in sensitive domains where errors can meaningfully harm health, safety, or fundamental rights. Use this quick screen:
You are more likely high-risk if your system is used for:
- Employment and workforce management (hiring, firing, promotions, scheduling, performance scoring)
- Education and vocational training (admissions, grading, exam proctoring, student assessment)
- Access to essential services (credit, insurance eligibility, housing, certain social services)
- Healthcare (diagnosis support, triage, treatment recommendations, patient risk scoring)
- Law enforcement or public safety (risk assessment, profiling, detecting or predicting offenses)
- Migration, asylum, and border management (risk assessment, identity verification in border contexts)
- Justice and democratic processes (supporting judicial decisions, legal risk scoring, influencing voting behavior)
If any of these apply, proceed assuming you may be in high-risk territory and move to the next steps.
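The quick screen above can be run as a simple set intersection against your declared use cases. A sketch only: the domain keys paraphrase this guide's list and are not legal category names.

```python
# Paraphrased domain keys from the quick screen above (assumptions, not
# statutory categories).
HIGH_RISK_DOMAINS = {
    "employment", "education", "essential_services", "healthcare",
    "law_enforcement", "migration_border", "justice_democracy",
}

def screen_domains(declared_uses: set) -> dict:
    """Return which declared uses hit a sensitive domain, and whether to
    proceed as a high-risk candidate."""
    hits = declared_uses & HIGH_RISK_DOMAINS
    return {"hits": sorted(hits), "proceed_as_high_risk_candidate": bool(hits)}

result = screen_domains({"employment", "internal_analytics"})
```

A single hit is enough to continue to Steps 4 and 5 under the assumption that you may be in high-risk territory.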
Step 4: Determine If Your System Is High-Risk Because It’s a “Safety Component” (Product-Linked High Risk)
Some AI systems are high-risk because they are part of a regulated product or act as a safety component—meaning failure could endanger health or safety.
Ask:
- Is the AI embedded in a product already subject to EU safety rules (for example, certain medical devices, machinery, vehicles, or other regulated equipment)?
- Does the AI perform a safety function (detect hazards, prevent collisions, control critical operations, trigger alarms)?
If yes, your system may be high-risk even if it doesn’t touch employment, education, finance, or policing. In these cases, your compliance work often must align with existing product conformity processes.
Step 5: Determine If Your System Is High-Risk Because of the Use Case (Standalone High Risk)
If you’re not clearly “product safety component” high-risk, check whether you are high-risk because of what the system does in a sensitive domain.
Use this practical checklist. You are more likely high-risk if your system:
- Evaluates, scores, or ranks people in a way that impacts access to opportunities. Examples: candidate scoring, student ranking, credit scoring, benefit eligibility scoring
- Makes or strongly drives decisions with legal or similarly significant effects. Examples: approving/denying services, setting insurance premiums, terminating employment, disciplinary action
- Performs identity-related functions in sensitive contexts. Examples: biometric identification/authentication used for access control in high-stakes environments
- Supports public authority decisions about individuals. Examples: risk assessments used by public bodies for enforcement, allocation of resources, or eligibility decisions
If you check any of these boxes, treat the system as “high-risk candidate” and validate carefully.
Step 6: Watch for Common “False Negatives” (Where Teams Misclassify)
Many systems look “low-stakes” until you examine deployment realities. Red flags include:
- “Decision support only” that becomes de facto automation. If humans rarely override the AI, regulators may view it as functionally automated.
- Downstream use you “don’t intend” but enable. If you sell a general scoring engine and your customers use it for hiring or credit decisions, you may still be pulled into high-risk obligations through foreseeable use.
- Bundled features. A harmless chatbot becomes high-risk if integrated into workflows that decide admissions, benefits, or employment actions.
- Third-party models. Using a foundation model or vendor API doesn’t remove your duties. You still need to assess your system’s risk and controls for your use case.
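The "de facto automation" red flag can be measured from your own decision logs: if the human override rate is very low, the system may be functionally automated. A sketch under assumptions; the 5% threshold is an illustrative internal policy value, not a legal rule.

```python
def override_rate(decisions: list) -> float:
    """decisions: (ai_recommendation, human_final_decision) pairs.
    Returns the fraction of cases where the human diverged from the AI."""
    if not decisions:
        return 0.0
    overrides = sum(1 for ai, human in decisions if ai != human)
    return overrides / len(decisions)

# Hypothetical log: 97 cases where the human followed the AI, 3 overrides.
log = [("reject", "reject")] * 97 + [("reject", "accept")] * 3
rate = override_rate(log)

# Flag for internal review, not a legal conclusion; 0.05 is an assumed cutoff.
de_facto_automated = rate < 0.05
```

Tracking this metric over time also documents whether your human-oversight design works in practice.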
Step 7: Classify Your Role: Provider, Deployer, Importer, Distributor
Your obligations depend on your role in the AI supply chain:
- Provider: develops the AI system or has it developed and places it on the market under its name
- Deployer: uses the AI system in its organization (for example, HR department using a screening tool)
- Importer/Distributor: brings a system into the EU market or makes it available
If you are the provider of a high-risk system, you typically face the heaviest requirements (risk management, technical documentation, quality management, monitoring, and more). Deployers also have important duties (proper use, oversight, incident reporting in certain cases, and governance).
Step 8: Do a Practical “High-Risk Triage” in 30 Minutes
Run this short internal workshop with product, legal/compliance, security, and a domain expert:
- Map the decision chain
  - What input data is used?
  - What output is produced?
  - Who acts on the output?
  - What is the final decision and impact?
- List affected rights and harms
  - Discrimination, exclusion, loss of income, denial of services, health impact, due process concerns
- Assess autonomy
  - Is the AI advisory, or does it determine the outcome in practice?
- Identify regulated domains
  - Employment, education, essential services, healthcare, public services, law enforcement, border, justice
- Decide a preliminary classification
  - High-risk likely / uncertain / unlikely
Document the reasoning. Even if you later reclassify, having a traceable rationale is valuable.
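The workshop's outputs can be combined into a preliminary label with a recorded rationale. A minimal sketch; the decision rule and labels are illustrative assumptions for an internal process, not classification criteria from the Act.

```python
def triage(in_regulated_domain: bool, significant_harms: bool,
           determines_outcome: bool) -> dict:
    """Combine workshop answers into a preliminary, documented classification.
    The rule below is an assumed internal heuristic, not a legal test."""
    if in_regulated_domain and (significant_harms or determines_outcome):
        label = "high-risk likely"
    elif in_regulated_domain or significant_harms:
        label = "uncertain"
    else:
        label = "high-risk unlikely"
    return {
        "classification": label,
        "rationale": {  # traceable inputs, so reclassification is auditable
            "regulated_domain": in_regulated_domain,
            "significant_harms": significant_harms,
            "determines_outcome": determines_outcome,
        },
    }

record = triage(in_regulated_domain=True, significant_harms=True,
                determines_outcome=False)
```

Storing the full `rationale` dict, not just the label, is what makes the reasoning traceable later.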
Step 9: If You’re Likely High-Risk, Start These Actions Immediately
You don’t need to wait for perfect certainty to begin foundational work. These steps help regardless:
- Define and freeze the intended purpose
  - Prevent scope creep into high-risk uses without controls.
- Implement a risk management process
  - Identify foreseeable misuse, evaluate severity/likelihood, and define mitigations.
- Strengthen data governance
  - Track data sources, labeling quality, representativeness, bias checks, and data leakage controls.
- Build technical documentation discipline
  - Model versioning, training runs, evaluation results, limitations, and known failure modes.
- Design human oversight that actually works
  - Ensure users can understand outputs, challenge results, and override when needed.
- Plan monitoring and incident handling
  - Logging, performance drift checks, complaint handling, and escalation paths.
- Clarify roles with suppliers
  - Get model documentation, usage constraints, and update policies from vendors.
Step 10: If You’re “Uncertain,” Use a Conservative Approach
When classification isn’t clear, treat the system as potentially high-risk until proven otherwise. Practical tactics:
- Limit deployment scope (pilot with constraints, no high-stakes decisions at first)
- Add procedural safeguards (second review, appeal mechanisms, sampling audits)
- Avoid sensitive use claims in marketing until your compliance posture matches
- Seek alignment internally between product, sales, and legal on permitted use cases
A Simple Self-Diagnosis Summary
Use this quick conclusion rule set:
- High-risk is likely if your AI influences decisions in employment, education, essential services, healthcare, law enforcement, border/migration, or justice—or if it is a safety component in a regulated product.
- High-risk is possible if it ranks/scores people, affects access to opportunities, or becomes de facto automated decision-making.
- High-risk is less likely if it is purely internal analytics with no significant impact on individuals, or consumer features with low-stakes outcomes—provided it doesn’t drift into sensitive domains.
Final Checklist: What to Put in Your File Today
Create a single internal record containing:
- Intended purpose statement and non-intended uses
- Domain mapping (which high-risk areas might apply)
- Decision chain map and impact analysis
- Preliminary classification and rationale
- Mitigation plan and ownership (who does what by when)
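The single internal record above could start as a simple JSON document. A sketch only: the keys mirror this checklist and the example values are hypothetical; adapt both to your own documentation format.

```python
import json

# Hypothetical example values; keys follow the checklist above.
classification_file = {
    "intended_purpose": "Screens job applications for HR teams",
    "non_intended_uses": ["credit decisions", "law enforcement"],
    "domain_mapping": ["employment"],
    "decision_chain": "CV -> score -> recruiter review -> interview decision",
    "preliminary_classification": "high-risk likely",
    "rationale": "Ranks people and affects access to employment",
    "mitigations": [
        {"action": "human review of all rejections",
         "owner": "HR lead", "due": "2025-Q3"},
    ],
}

# Serialize for storage in version control alongside the product docs.
record_json = json.dumps(classification_file, indent=2)
```

A machine-readable record like this is easy to diff when the intended purpose or classification changes.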
If your software is anywhere near the high-risk list, treating classification as a product requirement—not a legal afterthought—will save time, reduce rework, and make your rollout far smoother.