EU AI Act · Articles 9 to 15

The articles GuardianAI scans

GuardianAI analyzes your documents and contracts against the 7 core articles of the EU AI Act. Every finding is automatically mapped to the corresponding article so you know exactly where your gaps are and what to fix.

Art. 9

Risk Management System

Establishes the obligation to implement and maintain an iterative risk management system throughout the entire lifecycle of the AI system.

What the article requires

  1. Identify and analyze all known and foreseeable risks to health, safety and fundamental rights.
  2. Adopt adequate and proportionate risk management measures, prioritizing the most effective ones.
  3. Test the AI system under real-world conditions to detect unforeseen risks before deployment.
  4. Review and update the risk management system throughout the entire lifecycle.

What GuardianAI verifies

Automatic analysis against Art. 9

  • Risk management system established and documented
  • Known and foreseeable risks identified and analyzed
  • Risk mitigation measures adopted and documented
  • Testing procedures for risk management implemented

Art. 10

Data Governance

Training, validation and test datasets must meet strict quality criteria and be free from biases that could generate discriminatory results.

What the article requires

  1. Implement documented data governance and management practices.
  2. Ensure that data is relevant, sufficiently representative and, to the best extent possible, free of errors and complete.
  3. Establish active bias detection and mitigation procedures.
  4. Document the datasets: origin, scope, characteristics and possible limitations.

Read full guide: Data governance and sub-processors
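To make step 3 concrete: one common starting point for bias detection is comparing positive-outcome rates across demographic groups (demographic parity). A self-contained sketch, offered as an illustration rather than GuardianAI's method; function names are hypothetical:

```python
def selection_rate(outcomes, groups, target_group):
    """Share of positive outcomes (1s) within one demographic group."""
    rows = [o for o, g in zip(outcomes, groups) if g == target_group]
    return sum(rows) / len(rows) if rows else 0.0


def demographic_parity_gap(outcomes, groups):
    """Largest difference in positive-outcome rate between any two groups.

    A gap near 0 suggests parity; a large gap flags a dataset or model
    for closer review, which is where documented mitigation starts.
    """
    rates = {g: selection_rate(outcomes, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())
```

In practice this would run over training, validation and test splits separately, and the results would feed into the dataset documentation required by step 4.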

What GuardianAI verifies

Automatic analysis against Art. 10

  • Data governance and management practices implemented
  • Training data quality criteria defined and met
  • Active bias detection and mitigation procedures in place
  • Datasets documented (source, scope, characteristics)

Art. 11

Technical Documentation

The technical documentation of the system must be drawn up before placing it on the market and kept up to date throughout its entire lifecycle.

What the article requires

  1. Prepare comprehensive technical documentation in accordance with Annex IV of the Regulation.
  2. Include a detailed description of the system, intended purpose and prior versions.
  3. Document performance metrics, known limitations and test results.
  4. Update documentation upon significant changes to the system.

Read full guide: Practical guide to technical documentation (Annex IV)

What GuardianAI verifies

Automatic analysis against Art. 11

  • Complete technical documentation prepared (Annex IV compliant)
  • Detailed description of the system and purpose documented
  • Performance metrics and limitations documented

Art. 12

Record-Keeping (Logs)

High-risk systems must be capable of automatically recording relevant events during their operation to ensure traceability.

What the article requires

  1. Implement automatic logging capabilities in accordance with recognized standards.
  2. Ensure traceability of the system's outputs throughout its entire lifecycle.
  3. Record relevant security events, including those that may pose a risk.
  4. Retain logs in a way that facilitates subsequent audits by competent authorities.
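One widely used way to satisfy the "auditable" part of these requirements is structured (JSON-lines) logging, where every event is a machine-readable record an auditor can query. A minimal sketch using Python's standard logging module; the field names and the in-memory buffer are illustrative assumptions (a real system would write to durable, tamper-evident storage):

```python
import io
import json
import logging


class JsonFormatter(logging.Formatter):
    """Serialize each record as one JSON line for later audit queries."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "event": record.getMessage(),
            # Attached via `extra=` at the call site; None if absent
            "model_version": getattr(record, "model_version", None),
        })


buffer = io.StringIO()                    # stand-in for a durable log store
handler = logging.StreamHandler(buffer)
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("ai_system.audit")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Every inference becomes one traceable, machine-readable event
logger.info("prediction served", extra={"model_version": "1.4.2"})
```

Because each line is valid JSON carrying a timestamp and a model version, outputs can be traced back to the exact system version that produced them.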

What GuardianAI verifies

Automatic analysis against Art. 12

  • Automatic logging capabilities implemented
  • Traceability of events throughout the lifecycle ensured
  • Relevant security events correctly recorded

Art. 13

Transparency and Information

Systems must be designed so that users can reliably interpret their outputs. Providers are required to supply clear instructions for use.

What the article requires

  1. Provide sufficient transparency information so that users can understand the system.
  2. Draft clear instructions for use: capabilities, limitations and necessary oversight.
  3. Explicitly disclose known risks and the conditions under which the system may fail.
  4. Specify the level of accuracy, including possible variations across groups of persons.

Read full guide: Transparency in SaaS: Article 13 explained
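As a rough illustration of what those four items can look like in one artifact, here is a skeleton "instructions for use" record. The field names and example values are assumptions for illustration, not a legal template:

```python
# Illustrative skeleton loosely following Art. 13's required contents:
# capabilities, limitations, known risks, accuracy (incl. per-group
# variation), and the human oversight expected of the operator.
instructions_for_use = {
    "provider": "Example Corp",
    "intended_purpose": "Rank loan applications for manual review",
    "capabilities": ["scores applications on a 0-100 scale"],
    "limitations": ["not validated for applicants under 18"],
    "known_risks": ["scores degrade on incomplete credit histories"],
    "accuracy": {
        "overall": 0.91,
        "by_group": {"18-30": 0.89, "31-65": 0.93},
    },
    "human_oversight": "All scores below 40 require manual review",
}
```

The per-group accuracy figures address item 4 directly: variation across groups of persons must be stated, not just an aggregate number.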

What GuardianAI verifies

Automatic analysis against Art. 13

  • Transparency information provided to operator users
  • Instructions for use prepared and delivered with the system
  • Known limitations and risks disclosed to users

Art. 14

Human Oversight

High-risk systems must be designed so that natural persons can effectively oversee them, detect anomalies and override their decisions.

What the article requires

  1. Design the system so that persons can understand its capabilities and detect malfunctions.
  2. Ensure technical capacity for human intervention or override at any time.
  3. Allow operators to stop the system via a stop button or other mechanism.
  4. Ensure that those exercising oversight receive adequate training to interpret and control it.
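Requirements 2 and 3 are architectural: the override and stop paths must exist in the design, not just in policy. A minimal sketch of one way to wrap a model so an operator decision always wins and a stop signal halts output; class and method names are hypothetical:

```python
import threading


class OverridableSystem:
    """Wraps a model so a human operator can override or halt it."""

    def __init__(self, model):
        self.model = model
        self._stopped = threading.Event()  # the "stop button"
        self.overrides = {}                # input id -> operator decision

    def stop(self) -> None:
        self._stopped.set()

    def resume(self) -> None:
        self._stopped.clear()

    def override(self, input_id, decision) -> None:
        self.overrides[input_id] = decision

    def decide(self, input_id, features):
        if self._stopped.is_set():
            raise RuntimeError("system halted by human operator")
        # An operator decision always takes precedence over the model
        if input_id in self.overrides:
            return self.overrides[input_id]
        return self.model(features)
```

The key design choice is that the override check sits in front of every model call, so human intervention is possible "at any time" rather than only before deployment.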

What GuardianAI verifies

Automatic analysis against Art. 14

  • System designed for effective human oversight
  • Human intervention / override capacity ensured
  • Persons exercising oversight adequately trained

Art. 15

Accuracy, Robustness and Cybersecurity

High-risk AI systems must achieve an appropriate level of accuracy, be robust against errors and be protected against cyber attacks.

What the article requires

  1. Declare and achieve accuracy levels appropriate to the system's purpose.
  2. Test the robustness of the system against errors, faults and inconsistencies in input data.
  3. Implement technical resilience and cybersecurity measures against adversarial attacks.
  4. Mitigate the risk of biased outputs feeding back into future operations (feedback loops) in systems that continue learning after deployment.

What GuardianAI verifies

Automatic analysis against Art. 15

  • Accuracy levels declared and appropriate to the purpose
  • Robustness against errors and inconsistencies tested and documented
  • Cybersecurity measures implemented and documented

GuardianAI

Analyze all articles in a single pass

Upload a document or paste a URL. GuardianAI maps every gap against all 7 articles simultaneously and generates a compliance report ready for audits.