AI Incident Reporting (Article 73)
Providers of high-risk AI systems must report serious incidents to the market surveillance authority within strict deadlines. Article 73 defines what constitutes a serious incident, who must report, and the notification timeline.
Why incident reporting is a critical compliance obligation
Serious incidents involving high-risk AI systems must be reported to national market surveillance authorities. This is not optional — it is a legal duty for providers under the AI Act.
The reporting obligation complements the post-market monitoring requirement (Article 72) and the risk management system (Article 9). Together, they form a continuous governance loop.
Failure to report a serious incident is a Tier 2 violation: fines of up to €15 million or 3% of global annual turnover, whichever is higher. Late reporting or incomplete notifications can also trigger enforcement action.
What is a "serious incident"?
Article 3(49) defines a serious incident as an incident or malfunctioning of a high-risk AI system that directly or indirectly leads to any of the following:
- Death of a person, or serious harm to a person's health
- Serious and irreversible disruption of the management or operation of critical infrastructure (energy, transport, water, healthcare networks)
- Infringement of obligations under Union law intended to protect fundamental rights
- Serious harm to property or the environment
The threshold is actual or potential serious harm. A near-miss that could have caused one of the outcomes above should also be assessed for reporting.
Notification deadlines
Initial incident notification
Report the serious incident to the market surveillance authority of the Member State(s) where the incident occurred: immediately after establishing a causal link between the AI system and the incident (or a reasonable likelihood of one), and no later than 15 days after becoming aware of it. Shorter windows apply in two cases: 10 days where a person has died, and 2 days for a widespread infringement or a serious incident involving disruption of critical infrastructure. Include the system identity, the nature of the incident, and the initial corrective measures.
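The Act prescribes the content of the notification, not a schema. As a minimal sketch, the required elements can be captured as a structured record; the field names below are our own, illustrative choices:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class InitialIncidentNotification:
    """Illustrative record for an Article 73 initial notification.

    Field names are assumptions; the AI Act does not define a schema.
    """
    provider_name: str
    system_identity: str            # name, version, and unique ID of the high-risk AI system
    nature_of_incident: str         # what happened, and the observed or potential harm
    occurred_at: datetime
    became_aware_at: datetime       # this timestamp starts the notification clock
    member_states_affected: list[str] = field(default_factory=list)
    initial_corrective_measures: list[str] = field(default_factory=list)
```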
Detailed follow-up report
Submit a comprehensive report with root cause analysis (if available), detailed description of harm, corrective and preventive measures taken or planned, and supporting evidence.
Final report and closure
Provide a final report once root cause analysis is complete and all corrective measures are implemented. Include lessons learned and systemic changes to prevent recurrence.
Who must report?
The provider of the high-risk AI system bears the primary reporting obligation. However, the deployer also has a duty to inform the provider when they become aware of a serious incident (Article 26(5)).
If the provider and deployer are in different Member States, the provider must notify the authority in each Member State where the incident occurred or had effects. Distributors and importers who become aware of incidents must also inform the provider.
Incident reports must reference the Annex IV technical documentation and may need to be cross-referenced with the FRIA report if the incident affects fundamental rights. Staff involved in incident response should have completed AI literacy training to understand their obligations.
How to handle an AI incident: step by step
1. Detect and classify the incident
Monitor AI system performance continuously. When an anomaly, malfunction, or adverse outcome is detected, assess whether it meets the "serious incident" threshold under Article 3(49). Not every bug is a serious incident — the test is actual or potential serious harm.
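As a minimal triage sketch, the Article 3(49) outcomes can be encoded as an enum and checked against what the incident caused, or could plausibly have caused. The names and the decision function are illustrative, not a prescribed method:

```python
from enum import Enum, auto

class HarmCategory(Enum):
    """Harm outcomes of Article 3(49), paraphrased."""
    DEATH_OR_SERIOUS_HEALTH_HARM = auto()
    CRITICAL_INFRASTRUCTURE_DISRUPTION = auto()
    FUNDAMENTAL_RIGHTS_INFRINGEMENT = auto()
    PROPERTY_OR_ENVIRONMENT_HARM = auto()

def triage(actual: set[HarmCategory], potential: set[HarmCategory]) -> str:
    """The threshold is actual *or* potential serious harm."""
    if actual:
        return "serious incident: start the Article 73 notification process"
    if potential:
        return "near-miss: assess for reporting and document the assessment"
    return "ordinary defect: fix and log, no Article 73 duty"
```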
2. Contain and mitigate immediate harm
Take immediate measures to stop or limit the harm: suspend the AI system, activate fallback procedures, deploy human override, notify affected persons. Document every action and its timestamp.
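One way to make "document every action and its timestamp" concrete is an append-only action log, written at the moment each containment step is taken. A sketch that assumes nothing about your tooling:

```python
from datetime import datetime, timezone

incident_log: list[dict] = []

def record_action(actor: str, action: str) -> None:
    """Append a timestamped entry; never edit or delete past entries."""
    incident_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
    })

record_action("on-call engineer", "suspended model endpoint, activated human fallback")
record_action("compliance lead", "notified affected persons per incident runbook")
```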
3. Notify the market surveillance authority
Submit the initial notification to the competent authority of the Member State(s) where the incident occurred: immediately after establishing a causal link (or a reasonable likelihood of one), and no later than 15 days after becoming aware of the incident. The window shrinks to 10 days where a person has died, and to 2 days for widespread infringements or incidents involving critical infrastructure. If the incident occurred in multiple Member States, notify each relevant authority.
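The deadline arithmetic is simple but worth automating so the clock is never miscounted. A sketch using the Article 73 windows described above:

```python
from datetime import date, timedelta

def notification_deadline(aware_on: date,
                          involves_death: bool = False,
                          widespread_or_critical_infra: bool = False) -> date:
    """Latest date for the initial notification; report immediately regardless.

    The clock runs from awareness of the incident, not from the end of
    the investigation.
    """
    if widespread_or_critical_infra:
        days = 2    # widespread infringement or critical-infrastructure incident
    elif involves_death:
        days = 10   # incident involving the death of a person
    else:
        days = 15   # general case
    return aware_on + timedelta(days=days)

print(notification_deadline(date(2026, 3, 1)))  # 2026-03-16
```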
4. Conduct root cause analysis
Investigate why the incident occurred: data quality issues, model drift, adversarial inputs, system integration failures, human factor errors. Involve technical and compliance teams. Document the methodology and findings.
5. Implement corrective measures
Based on the root cause analysis, implement technical fixes (model retraining, guardrails, input validation), process improvements (human oversight, testing), and governance changes (policy updates, training).
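Of the technical fixes, an input-validation guardrail is often the quickest to ship. A minimal sketch, where the validation rule and routing are illustrative placeholders to be replaced with checks matching your system's validated operating range:

```python
def validate_input(record: dict) -> bool:
    """Reject inputs outside the range the model was validated on."""
    age = record.get("age")
    return isinstance(age, (int, float)) and 0 <= age <= 120

def guarded_predict(model, record: dict) -> dict:
    """Route out-of-range inputs to human review instead of the model."""
    if not validate_input(record):
        return {"decision": None, "route": "human_review",
                "reason": "input failed validation"}
    return {"decision": model.predict(record), "route": "automated"}
```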
6. Submit follow-up and final reports
Provide the detailed follow-up report to the authority. Once all corrective measures are implemented and validated, submit the final closure report with lessons learned and preventive measures for similar incidents.
7. Update risk management and documentation
Feed incident learnings into the risk management system (Article 9), update the technical documentation (Annex IV), and adjust the post-market monitoring plan. Review whether the incident changes the system's risk level.
Common mistakes
- Waiting until the root cause is known before notifying — the clock starts from when you become aware of the incident, not when you finish investigating, and it can be as short as 2 days.
- Reporting only to your own national authority when the incident affected persons in other Member States.
- Treating the incident as an isolated IT issue instead of triggering the full regulatory response process.
- Not documenting the timeline of events, decisions, and actions taken — this evidence is critical if enforcement follows.
- Failing to update the post-market monitoring plan and risk management system after incident resolution.
- Not linking the incident back to the conformity assessment and technical documentation updates.
How ActLoom automates incident management
- Incident logging — structured forms to capture incident details, severity classification, and affected systems with timestamps for every action.
- Deadline tracking — automated countdowns for the 2-, 10-, and 15-day notification windows with escalation alerts to compliance owners and management.
- Root cause analysis templates — guided investigation workflows that capture the information regulators expect: causality chain, harm assessment, and corrective measures.
- Export-ready reports — generate initial notifications, follow-up reports, and closure documents in formats ready for market surveillance authority submission.
- Integrated feedback loop — incident findings automatically flag updates needed in the risk management system, technical documentation, and post-market monitoring plan.
Frequently asked questions
- What qualifies as a serious incident under the EU AI Act?
- Article 3(49) defines a serious incident as an incident or malfunctioning of an AI system that directly or indirectly leads to: the death of a person or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, infringement of obligations under Union law intended to protect fundamental rights, or serious harm to property or the environment.
- Who must report a serious AI incident?
- The provider of the high-risk AI system bears the primary reporting obligation. Deployers must inform the provider, and distributors/importers must also pass on what they learn.
- What is the reporting deadline for AI incidents?
- Initial notification is due immediately after establishing a causal link (or a reasonable likelihood of one), and no later than 15 days after becoming aware of the incident; the window shortens to 10 days where a person has died and to 2 days for widespread infringements or critical-infrastructure incidents. A follow-up report is due within 15 days of the initial report, and a final closure report once corrective measures are implemented.
- Is a software bug the same as a serious incident?
- Not necessarily. A serious incident requires actual or potential serious harm. A bug is reportable only if it results in or could result in one of the harmful outcomes defined in Article 3(49). Near-misses should also be assessed.
- What is the penalty for not reporting?
- Failure to report is a Tier 2 violation: up to €15 million or 3% of global annual turnover, whichever is higher. Late or incomplete reporting can also trigger enforcement.
Manage AI incidents with full audit trails
ActLoom automates incident logging, deadline tracking, root cause analysis, and authority reporting — so you never miss a deadline.
Start free assessment