
CEN/CLC JTC 21

Date: 2025-10

prEN 18286:2025

Secretariat: Danish Standards

Artificial intelligence — Quality management system for EU AI Act regulatory purposes

Künstliche Intelligenz - Qualitätsmanagementsystem für regulatorische Zwecke im Rahmen der Verordnung über künstliche Intelligenz der EU

Intelligence artificielle - Système de management de la qualité pour le règlement européen sur l'IA

CCMC will prepare and attach the official title page.

Contents

European foreword

Introduction

1 Scope

2 Normative references

3 Terms and definitions

3.1 Terms relating to management systems

3.2 Terms relating to the AI Act

3.3 Terms relating to AI systems

3.4 Terms related to risk management

4 Quality management system

4.1 General

4.2 Identifying regulatory requirements

4.3 Determining the scope of the quality management system

4.4 Strategy for regulatory compliance

4.5 Documented information

5 Management responsibility

5.1 General

5.2 Quality policy

5.3 Roles, responsibility, and authorities

6 Planning

6.1 Actions to address risks related to the functioning of the quality management system

6.2 Quality objectives and planning to achieve them

7 Support

7.1 Resources

7.2 Competence

7.3 Communication

8 Product realization

8.1 Actions to address risks

8.2 Determining the stages of the life cycle

8.3 Inception, design and development

8.4 Verification and validation

8.5 Data management

8.6 Environmental sustainability

8.7 Product documentation

9 Operation and control

9.1 Deployment, operation and monitoring

9.2 Supply chain

9.3 Changes to AI systems

9.4 Post-market monitoring

9.5 Reporting serious incidents

9.6 Nonconformities

10 Performance evaluation

10.1 General

10.2 Review

10.3 Improvement

10.4 Planning of changes

Annex A (informative) Consultation with interested parties regarding fundamental rights

A.1 General

A.2 Estimating risk and impact on affected persons

Annex B (informative) Relationship between this document and other harmonized standards

B.1 Introduction

B.2 Selection of technical specifications

B.3 Harmonized standard interactions

B.4 Supporting harmonized standards

Annex C (informative) Correspondence between this document and ISO 9001:2015

Annex D (informative) Correspondence between this document and ISO/IEC 42001:2023

Annex ZA (informative) Relationship between this European Standard and the essential requirements of Regulation (EU) 2024/1689 aimed to be covered

Bibliography

European foreword

This document (prEN 18286:2025) has been prepared by Technical Committee CEN/CLC/JTC 21 “Artificial Intelligence”, the secretariat of which is held by DS.

This document is currently submitted to the CEN Enquiry.

This document has been prepared under a standardization request addressed to CEN by the European Commission. The Standing Committee of the EFTA States subsequently approves these requests for its Member States.

For the relationship with EU Legislation, see informative Annex ZA, which is an integral part of this document.

Introduction

0.1 General

The EU’s Artificial Intelligence (AI) Act [1] regulates AI systems through the product safety system established under the New Legislative Framework. An AI system subject to the EU AI Act can be a product or a component of a product.

AI systems must be in compliance with applicable regulatory requirements at the moment that the AI system is placed on the market or put into service. An AI system is placed on the market when it is supplied for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge. An AI system is put into service when it is supplied for the first use directly to the deployer or for own use in the Union for its intended purpose.

EXAMPLE 1 An in-house developed AI system is deployed for internal use.

EXAMPLE 2 An AI system is placed on the market when it is offered for sale on a website.

Although a quality management system is implemented by a provider, it can be directly associated with one or more AI systems that are intended to be put into service or placed on the market. Quality, in this context, can be understood as compliance with all of the regulatory requirements of the EU AI Act that apply to providers.

Depending on the context of the AI system, the provider can be required to show conformity with industry-specific quality management system requirements under sector-specific legislation. This document does not require the provider to maintain a separate quality management system; it can be used as a complement to existing requirements, depending on the applicable regulatory requirements for the AI system.

EXAMPLE 3 Medical devices commonly conform to the quality management system requirements of ISO 13485. Incorporating the requirements of this document within those existing processes is desirable when seeking conformity with this document.

This document specifies requirements for a quality management system that complies with applicable regulatory requirements (as described in Annex ZA) throughout the entire life cycle of the AI system. These requirements apply to a broad range of AI systems and include explicit requirements to address risks to health, safety and fundamental rights that can arise.

This document is intended for use by providers of AI systems irrespective of size, nature or location. The requirements and guidance in this document are, however, specifically tailored to support providers that operate inside the European Union, and those located outside the Union that are active in the European Union market or intend to enter that market.

The quality management system in this document is described in a way that the implementation can take into account the size of the provider, while providing the degree of rigour and level of protection required by applicable regulatory requirements.

Annex A describes procedures for consultation with interested parties about fundamental rights, Annex B describes the relationship of this document with other harmonized standards, Annex C contains the correspondence between the clauses of this document and ISO 9001:2015 [2], and Annex D contains the correspondence with ISO/IEC 42001:2023 [3].

0.2 Fundamental rights

Fundamental rights are universal legal guarantees without which individuals and groups cannot secure their fundamental freedoms and human dignity. They apply equally to every human being, without any conditions, regardless of nationality, place of residence, sex, national or ethnic origin, colour, religion, language or any other status under the legal system of a country.

The EU Charter of Fundamental Rights [4] describes the European view on these fundamental rights. Further information about their scope and strength can be found in the Charter and in prEN 18228:— [5], Annex F. Additional EU and country-specific regulatory requirements can apply.

1 Scope

This document specifies the requirements and provides guidance for the definition, implementation, maintenance and improvement of a quality management system for organizations that provide AI systems.

This document is intended to support the organization in meeting applicable regulatory requirements.

2 Normative references

There are no normative references in this document.

3 Terms and definitions

For the purposes of this document, the following terms and definitions apply.

ISO and IEC maintain terminology databases for use in standardization at the following addresses:

— ISO Online browsing platform: available at https://www.iso.org/obp/

— IEC Electropedia: available at https://www.electropedia.org/

3.1 Terms relating to management systems

3.1.1

audit

systematic and independent process (3.1.18) for obtaining evidence and evaluating it objectively to determine the extent to which the audit criteria are fulfilled

Note 1 to entry: An audit can be an internal audit (first party) or an external audit (second party or third party), and it can be a combined audit (combining two or more disciplines).

Note 2 to entry: An internal audit is conducted by the organization (3.1.15) itself, or by an external party on its behalf.

Note 3 to entry: “Audit criteria” is defined in ISO 19011.

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.3, modified – reference to evidence removed from Note 3]

3.1.2

competence

ability to apply knowledge and skills to achieve intended results

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.9]

3.1.3

complaint

statement claiming that an AI system (3.2.1) does not conform to applicable regulatory requirements or has caused or is causing harm (3.4.3)

3.1.4

conformity

fulfilment of a requirement (3.1.21)

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.15]

3.1.5

continual improvement

recurring activity to enhance performance (3.1.26)

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.12]

3.1.6

corrective action

action to eliminate the cause(s) of a nonconformity (3.1.14) and to prevent recurrence

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.17]

3.1.7

documented information

information controlled and maintained by an organization (3.1.15) and the medium on which it is contained

Note 1 to entry: Documented information can be in any format and media and from any source.

Note 2 to entry: Documented information can refer to:

a)   the management system, including related processes (3.1.18);

b)   information created in order for the organization to operate (documentation);

c)   evidence of compliance and results achieved.

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.10, modified – "required to be" removed from the definition; in Note 2, reference to compliance added and reference to records removed]

3.1.8

effectiveness

extent to which planned activities are realized and planned results are achieved

[SOURCE: ISO/IEC Directives Part 1, Annex SL Appendix 2 (rev 4 2024), 3.13]

3.1.9

harmonized standard

European standard adopted on the basis of a request made by the Commission for the application of Union harmonization legislation

Note 1 to entry: A European standardization organization can develop a new standard, adopt an existing international standard or adapt an existing international standard.

[SOURCE: Regulation (EU) 1025/2012, Article 2(1)(c), modified – removed “a”, added Note 1 to entry]

3.1.10

interested party

stakeholder

individual, group or organization (3.1.15) that can affect, be affected by or perceive itself to be affected by a decision or activity

Note 1 to entry: Affected persons (3.4.1) are a subset of interested parties.

Note 2 to entry: Interested party includes relevant regulatory bodies, national public bodies, bodies that enforce the protection of fundamental rights, and market surveillance authorities.

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.2, modified – "person" changed to "individual" and "group" added in the definition, notes added]

3.1.11

life cycle

evolution of a system, product, service, project or other human-made entity, from inception through retirement

[SOURCE: ISO/IEC/IEEE 15288:2023, 4.1.23, modified – "conception" changed to "inception"]

3.1.12

measurement

process (3.1.18) to determine a value

[SOURCE: ISO/IEC Directives Part 1, Annex SL Appendix 2 (rev 4 2024), 3.3]

3.1.13

monitoring

repeatedly determining the status of a system, a process (3.1.18) or an activity using inputs including measurements (3.1.12)

Note 1 to entry: To determine the status, there can be a need to check, supervise or critically observe.

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.3, modified – added to definition “repeatedly” and “using inputs including measurements”]

3.1.14

nonconformity

non-fulfilment of a requirement (3.1.21)

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.16, modified – note 1 added]

3.1.15

organization

person or group of people that has its own functions with responsibilities, authorities and relationships to achieve its objectives

Note 1 to entry: The concept of organization includes, but is not limited to, sole-trader, company, corporation, firm, enterprise, authority, partnership, charity or institution, or part or combination thereof, whether incorporated or not, public or private.

Note 2 to entry: If the organization is part of a larger entity, the term “organization” refers only to the part of the larger entity that is within the scope of the quality management system.

[SOURCE: ISO/IEC Directives Part 1, Annex SL Appendix 2 (rev 4 2024), 3.3, modified – note 3 added]

3.1.16

policy

intentions and direction of an organization (3.1.15) as formally expressed by its top management (3.1.25)

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.1]

3.1.17

procedure

specified way to carry out an activity or a process (3.1.18)

Note 1 to entry: Procedures can be documented or not.

[SOURCE: ISO 9000:2015, 3.4.5]

3.1.18

process

set of interrelated or interacting activities that uses or transforms inputs to deliver a result

Note 1 to entry: Whether the result of a process is called an output, a product or a service depends on the context of the reference.

Note 2 to entry: Process may achieve an immediate result but can also consist of information-sharing or other activities.

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.8, modified – note 2 added]

3.1.19

quality

set of characteristics of an object (3.1.30) that fulfils regulatory requirements (3.1.24)

Note 1 to entry: Quality includes the protection required by applicable regulatory requirements aimed at ensuring and maintaining the protection of health, safety and fundamental rights (3.4.2).

Note 2 to entry: In the context of this document, quality pertains to regulatory compliance to the EU AI Act. It differs from the concept of quality in ISO 9001 which includes expectations of customers.

[SOURCE: ISO 9000:2015, 3.6.2, modified – a note, "degree of" and "inherent" removed, "regulatory" added, notes added]

3.1.20

quality policy

policy (3.1.16) related to quality (3.1.19)

[SOURCE: ISO 9000:2015, 3.5.9, modified – notes to entry removed]

3.1.21

requirement

need or expectation that is stated and obligatory

Note 1 to entry: A specified requirement is one that is stated, e.g. in a document.

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.14, modified – removed “generally implied” from the definition]

3.1.22

top management

person or group of people who directs and controls an organization (3.1.15) at the highest level

Note 1 to entry: Top management has the power to delegate authority and provide resources within the organization.

Note 2 to entry: If the scope of the management system covers only part of an organization, then top management refers to those who direct and control that part of the organization.

Note 3 to entry: In different organizational contexts, top management can be referred to with different terms.

Note 4 to entry: Where the organization is a person, top management is that person.

[SOURCE: ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024), 3.3, modified – notes 3 and 4 added]

3.1.23

performance

<quality management system> measurable result

Note 1 to entry: Performance can relate either to quantitative or qualitative findings.

Note 2 to entry: Performance can relate to the management of activities, processes (3.1.18), products, services, systems or organizations (3.1.15).

[SOURCE: ISO 9000:2015, 3.7.8]

3.1.24

regulatory requirement

requirement (3.1.21) that is necessary to be met for the purposes of complying with the content of applicable regulation

Note 1 to entry: Applicable regulation includes at least Regulation (EU) 2024/1689 (AI Act) [1].

3.1.25

quality objective

measurable goal established to ensure that regulatory requirements (3.1.24) are consistently met throughout the life cycle (3.1.11)

Note 1 to entry: In the context of quality (3.1.19) management systems, quality objectives are set by the provider (3.2.5), consistent with the quality policy (3.1.20), to achieve specific results.

3.1.26

systematic

pursuing defined objective(s) in a planned, step-by-step manner

[SOURCE: ISO/TR 18307:2001, 3.140]

3.1.27

scope

<quality management system> set of AI system(s) (3.2.1) that are covered under a quality (3.1.19) management system and the boundaries that define where the quality management system applies

EXAMPLE Boundaries can include physical (geographic), organizational, functional, process, product, service and interface boundaries.

3.1.28

verification

confirmation, through the provision of objective evidence, that specified requirements (3.1.21) have been fulfilled

Note 1 to entry: The objective evidence needed for a verification can be the result of an inspection or of other forms of determination such as performing alternative calculations or reviewing documents.

Note 2 to entry: The word “verified” is used to designate the corresponding status.

Note 3 to entry: Verification can rely on testing activities and results to provide objective evidence.

Note 4 to entry: Verification activities pertaining to the identification, analysis, evaluation and control of risks arising from fundamental rights hazard can include:

—   consultation with potentially affected stakeholders (or their proxies, including civil society organizations) identified through stakeholder mapping, taking due account of local variation across the region and the domains in which the AI system is intended to operate;

—   real-world conditions testing to evaluate the effectiveness of risk controls, conducted in accordance with applicable ethical and regulatory requirements;

—   review and evaluation by a cross-functional team of independent experts with appropriate knowledge, skill, experience and professional expertise;

—   consultation with national, European or international bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights.

[SOURCE: prEN 18228:—, 3.30]

3.1.29

validation

verification (3.1.28) where the specified requirements (3.1.21) are adequate for an intended purpose (3.2.3)

EXAMPLE A measurement procedure, ordinarily used for the measurement of mass concentration of nitrogen in water, can be validated also for measurement of mass concentration of nitrogen in human serum.

Note 1 to entry: The concept of validation as a procedure is not directly related to validation datasets used in machine learning.

[SOURCE: ISO/IEC Guide 99:2007, 2.45, modified — Note 1 to entry added]

3.1.30

object

object of conformity assessment

entity to which specified requirements (3.1.21) apply

EXAMPLE Product, process, service, system, installation, project, data, design, material, claim, person, body or organization, or any combination thereof.

[SOURCE: ISO/IEC 17000:2020, modified – preferred and admitted terms switched]

3.2 Terms relating to the AI Act

3.2.1

AI system

machine-based system that is designed to operate with varying levels of autonomy and that can exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments

Note 1 to entry: The verb "can" expresses a possibility; not all AI systems that fit the above definition have the ability to adapt after deployment.

[SOURCE: EU AI Act (Article 3(1)), modified – "a" removed, "may" replaced with "can" in line with the verbal forms used in standards, Note 1 to entry added]

3.2.2

deployer

natural or legal person, public authority, agency or other body using an AI system (3.2.1) under its authority except where the AI system is used in the course of a personal non-professional activity

[SOURCE: EU AI Act 2024/1689 (Article 3(4)), modified – "a" removed]

3.2.3

intended purpose

intended use

use for which an AI system (3.2.1) is intended by the organization (3.1.15), including the specific context and conditions of use, as specified in the information supplied by the organization in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation

[SOURCE: EU AI Act 2024/1689, (Article 3(12)), modified – "a" removed]

3.2.4

performance

<AI system> ability of an AI system (3.2.1) to achieve its intended purpose (3.2.3)

[SOURCE: EU AI Act 2024/1689, (Article 3(18)), modified – "the" removed]

3.2.5

provider

natural or legal person, public authority, agency or other body that develops an AI system (3.2.1) or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge

Note 1 to entry: A distributor, importer, deployer or other third party can be considered a provider of an AI system in certain circumstances.

[SOURCE: EU AI Act 2024/1689, (Article 3(3)), modified — “a” removed]

3.2.6

reasonably foreseeable misuse

use of an AI system (3.2.1) in a way that is not in accordance with its intended purpose (3.2.3), but which can result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems

[SOURCE: EU AI Act 2024/1689, (Article 3(13)), modified – "a" removed, "may" replaced with "can" in line with the verbal forms used in standards]

3.2.7

substantial modification

change to an AI system (3.2.1) after its placing on the market or putting into service which is not foreseen or planned in the initial conformity (3.1.4) assessment carried out by the provider (3.2.5) and as a result of which the compliance of the AI system with the applicable regulatory requirements (3.1.24) is affected or results in a modification to the intended purpose (3.2.3) for which the AI system has been assessed

Note 1 to entry: Rephrased from the AI Act Article (3)(23) to reference applicable regulatory requirements instead of a specific reference to the AI Act Chapter III, Section 2.

[SOURCE: EU AI Act 2024/1689, (Article 3(23)), modified as described in Note 1 to entry]

3.2.8

essential requirement

definition of the results to be attained, or the hazards (3.4.4) to be dealt with, without specifying the technical solutions for doing so

Note 1 to entry: Essential requirements in relation to the EU AI Act [1] are specified in Chapter III, Section 2 of that Regulation.

[SOURCE: The Blue Guide on the implementation of EU product rules 2022 (2022/C 247/01), modified – "but do not specify" changed to "without specifying"]

3.2.9

serious incident

incident or malfunctioning of an AI system (3.2.1) that directly or indirectly leads to any of the following:

a) the death of a person, or serious harm (3.4.3) to a person’s health;

b) a serious and irreversible disruption of the management or operation of critical infrastructure;

c) the infringement of obligations under applicable regulatory requirements (3.1.24) intended to protect fundamental rights (3.4.2);

d) serious harm (3.4.3) to property or the environment.

[SOURCE: EU AI Act 2024/1689, (Article 3(49))]

3.3 Terms relating to AI systems

3.3.1

inference

reasoning by which conclusions are derived from known premises

Note 1 to entry: In AI, a premise is either a fact, a rule, a model, a feature or raw data.

Note 2 to entry: The term “inference” refers both to the process and its result.

[SOURCE: ISO/IEC 22989:2022, 3.1.17]

3.3.2

machine learning algorithm

algorithm to determine parameters of a machine learning model (3.3.3) from data according to given criteria

EXAMPLE Consider a univariate linear function y = θ0 + θ1x where y is an output or result, x is an input, θ0 is an intercept (the value of y where x = 0) and θ1 is a weight. In machine learning, the process of determining the intercept and weights for a linear function is known as linear regression.

[SOURCE: ISO/IEC 22989:2022, 3.3.6]

3.3.3

machine learning model

mathematical construct that generates an inference (3.3.1) or prediction (3.3.4) based on input data or information

EXAMPLE If a univariate linear function (y = θ0 + θ1x) has been trained using linear regression, the resulting model can be y = 3 + 7x.

Note 1 to entry: A machine learning model results from training based on a machine learning algorithm.

[SOURCE: ISO/IEC 22989:2022, 3.3.7]
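The worked examples in 3.3.2 and 3.3.3 can be illustrated in code. The following minimal sketch (Python; illustrative only, with hypothetical data points) determines the intercept θ0 and weight θ1 by closed-form least squares, i.e. the machine learning algorithm, and then uses the resulting machine learning model to generate a prediction:

    # Illustrative sketch only; data points are hypothetical and chosen so that y = 3 + 7x.
    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [3.0, 10.0, 17.0, 24.0]

    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n

    # Machine learning algorithm (3.3.2): determine the parameters from data
    # by closed-form least squares (linear regression).
    theta1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
              / sum((x - mean_x) ** 2 for x in xs))
    theta0 = mean_y - theta1 * mean_x

    # Machine learning model (3.3.3): the resulting construct generates predictions.
    def model(x):
        return theta0 + theta1 * x

    print(theta0, theta1)  # 3.0 7.0
    print(model(5.0))      # 38.0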

3.3.4

prediction

primary output of an AI system (3.2.1) when provided with input data or information

Note 1 to entry: Predictions can be followed by additional outputs, such as recommendations, decisions and actions.

Note 2 to entry: Prediction does not necessarily refer to predicting something in the future.

Note 3 to entry: Predictions can refer to various kinds of data analysis or production applied to new data or historical data (including translating text, creating synthetic images or diagnosing a previous power failure).

[SOURCE: ISO/IEC 22989:2022, 3.1.27]

3.3.5

training

model training

process (3.1.18) to determine or to improve the parameters of a machine learning model (3.3.3), based on a machine learning algorithm (3.3.2), by using training data

[SOURCE: ISO/IEC 22989:2022, 3.3.15]

3.3.6

traceability

ability to trace the history of the AI system (3.2.1)

Note 1 to entry: Traceability includes information on how AI systems have been specified, developed, verified, validated, operated, monitored and retired.

3.3.7

AI system requirements

functional and non-functional requirements (3.1.21) derived from regulatory requirements (3.1.24)

3.3.8

test procedure

sequence of test cases in execution order, with any associated actions required to set up preconditions and perform wrap up activities post execution

[SOURCE: ISO/IEC 29119‑1:2022, 3.1.20]

3.4 Terms related to risk management

3.4.1

affected person

individual, or group of individuals, who is or are directly or indirectly impacted by an AI system (3.2.1) when used in accordance with its intended purpose (3.2.3) or in the frame of reasonably foreseeable misuse (3.2.6)

[SOURCE: prEN 18228:—, 3.41]

3.4.2

fundamental rights

basic right(s) and freedom(s) held by every human being irrespective of birth, religion, belief, age, race, ethnicity, sex, gender or any other status

Note 1 to entry: For the purposes of this document, fundamental rights and their applicability are those protected by EU law, including the Charter of Fundamental Rights of the EU (EU Charter) and the European Convention on Human Rights.

Note 2 to entry: prEN 18228, Annex D and Annex F provide information about other sources of applicable law governing fundamental rights.

[SOURCE: prEN 18228:—, 3.36]

3.4.3

harm

injury or damage to health or interference with the fundamental rights (3.4.2) of a person or group of persons, or damage to property or the environment

Note 1 to entry: Harm can be material or immaterial, including physical, psychological, societal or economic harm.

[SOURCE: prEN 18228:—, 3.3, modified to add note]

3.4.4

hazard

potential source of harm (3.4.3)

[SOURCE: ISO/IEC Guide 51:2014, 3.2]

3.4.5

risk

combination of the probability of an occurrence of harm (3.4.3) and the severity (3.4.8) of that event

Note 1 to entry: The probability of occurrence includes the exposure to a hazardous situation and the possibility to avoid or limit the harm.

Note 2 to entry: Risk includes harm to the health and safety, and interference with the fundamental rights (3.4.2), of persons directly or indirectly impacted by hazardous situations created where an AI system (3.2.1) is involved.

[SOURCE: prEN 18228:—, 3.19, modified to remove note 3]
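As an illustration only (an assumption about one common estimation scheme, not a requirement of this document or of prEN 18228), the "combination" in this definition is often operationalized as a scoring function over its two components:

    R = f(P, S), for example R = P × S

where P denotes the probability of occurrence of the harm and S its severity. Qualitative realizations, such as a risk matrix over probability and severity classes, are equally consistent with the definition.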

3.4.6

risk control

process (3.1.18) in which decisions are made and measures implemented by which risks (3.4.5) are reduced to, or maintained within, specified levels

[SOURCE: ISO/IEC Guide 63:2019, 3.12]

3.4.7

risk management

systematic (3.1.26) and continuous application of management policies (3.1.16), procedures (3.1.17) and practices to the tasks of analysing, evaluating, controlling and monitoring (3.1.13) risk (3.4.5) throughout the entire life cycle (3.1.11) of an AI system (3.2.1)

[SOURCE: prEN 18228:—, 3.25]

3.4.8

severity

measure of the possible consequences of harm (3.4.3)

Note 1 to entry: The definition does not imply numerical measure of severity.

Note 2 to entry: For any risk (3.4.5) to fundamental rights (3.4.2), the severity of risk includes consideration of the nature of the harm (3.4.3), the strength of the harm, the significance and scale of the harm in terms of the number of individuals and groups of individuals whose rights are placed at risk, the irremediability of the harm and whether the rights at risk are those of persons under the age of 18 and others who are disproportionately at risk from the use of the system.

[SOURCE: prEN 18228:—, 3.27]

4 Quality management system

4.1 General

The provider shall establish, maintain and continually improve the quality management system in accordance with the requirements of this document, and in order to protect health, safety and fundamental rights.

The provider shall establish, document, implement and maintain any process, procedure and activity necessary to maintain the quality management system and its effectiveness in meeting applicable regulatory requirements throughout the applicable stages of the life cycle.

4.2 Identifying regulatory requirements

The provider shall determine and systematically review the regulatory requirements with which the AI systems covered by the quality management system must comply, at any point of their life cycle.

NOTE This includes at least the essential requirements, as explained in 4.4.2.

The regulatory requirements identified shall be integrated into the strategy for regulatory compliance referred to in 4.4.

4.3 Determining the scope of the quality management system

The provider shall determine the scope of the quality management system, by:

a) determining the set of AI system(s) that are covered under the quality management system;

b) defining the boundaries, taking into account:

1) the regulatory requirements referred to in 4.2;

2) the intended purpose of the AI system(s).

4.4 Strategy for regulatory compliance

4.4.1 Determining the strategy

The provider shall determine a strategy for compliance with regulatory requirements including at least the following elements:

a) compliance with the regulatory requirements for this quality management system, in accordance with this document;

b) compliance with essential requirements (see 4.4.2);

c) compliance with the regulatory requirements for post-market monitoring, in accordance with 9.4;

d) compliance with the regulatory requirements in relation to serious incidents, in accordance with 9.5;

e) the strategy for data management, in accordance with 8.5.

The strategy shall be available as documented information, in accordance with 4.5.

4.4.2 Essential requirements

Applicable Union harmonization legislation defines the essential requirements of products. They are written in a way that supports conformity assessment, even in the absence of harmonized standards.

The essential requirements are those for:

a) the risk management system;

b) data and data governance;

c) technical documentation;

d) record-keeping;

e) transparency and provision of information to deployers;

f) human oversight;

g) accuracy, robustness and cybersecurity.

NOTE These essential requirements are found in Chapter III, Section 2 of the AI Act [1].

4.4.3 Selecting and documenting measures to demonstrate compliance

4.4.3.1 Selecting approaches

When demonstrating compliance, the provider shall select one of the following approaches, or a combination thereof, that provides compliance with each applicable essential requirement:

a) harmonized standards (see Annex B) that have been cited in the Official Journal;

b) common specifications that have been adopted in an implementing act;

c) other standards;

EXAMPLE 1 EN 62586‑2 [14] specifies what level of aggregated data to log for each type of event versus raw input data measurements.

d) other technical specifications or solutions.

EXAMPLE 2 A provider that produces AI-enabled fall detectors uses a technical solution developed internally (or one recommended by an industry body) based on an in-depth risk assessment of that use case, and that technical approach provides specific requirements for conformity with the regulatory requirements.

4.4.3.2 Selecting measures

4.4.3.2.1 The provider shall document each essential requirement with which compliance is achieved by use of the approaches in 4.4.3.1 a) or b).

4.4.3.2.2 If the provider uses the approaches in 4.4.3.1 c) or d) to comply with any essential requirements, or if restrictions documented in the annexes of the harmonized standards implemented mean that those standards do not fully or adequately cover the essential requirements, then in the technical documentation the provider shall:

a) document the essential requirements that are not covered in full by the implementation of harmonized standards or common specifications;

b) document and justify the measures used for these essential requirements, including objective evidence that each and all essential requirements are met.
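The documentation required by 4.4.3.2.1 and 4.4.3.2.2 can be kept in any form. The following minimal sketch (Python; the entries, standard choices and section names are hypothetical assumptions, not recommendations) illustrates a mapping from essential requirements to the selected approach, with a check that requirements covered by approaches c) or d) carry a documented justification:

    # Illustrative sketch only; entries and names are hypothetical assumptions.
    coverage = {
        "risk management system": {
            "approach": "harmonized standard",   # 4.4.3.1 a)
            "measure": "hypothetical harmonized standard X",
        },
        "record-keeping": {
            "approach": "other standard",        # 4.4.3.1 c)
            "measure": "hypothetical logging standard Y",
            "justification": "objective evidence in technical documentation, section Z",
        },
    }

    # Essential requirements covered via 4.4.3.1 c) or d) need documented
    # justification and objective evidence (4.4.3.2.2 b)).
    gaps = [er for er, entry in coverage.items()
            if entry["approach"] not in ("harmonized standard", "common specification")
            and "justification" not in entry]
    assert not gaps, f"Essential requirements lacking justification: {gaps}"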

4.5 Documented information

4.5.1 Documentation of the quality management system

The provider shall establish and maintain documentation of the quality management system that demonstrates adherence to the requirements of this document.

NOTE Maintenance of the quality management system documentation includes cases where the quality management system design changes following the addition of a new AI system in scope, or in case of substantial modification. See 8.2.4.

The documentation of the quality management system shall:

a) contain detailed information about the measures put in place by the provider to ensure that the AI systems managed under this quality management system meet their applicable regulatory requirements;

b) be common to all AI systems under this quality management system, not specific to a particular AI system;

c) be written for an audience of auditors and kept at the disposal of notified bodies and competent authorities;

d) be presented in a clear, accessible and version-controlled manner that ensures easy retrieval of relevant information, in one of the official languages of the European Union;

e) include:

1) the scope of the quality management system, as defined in 4.3;

2) documented statements of a quality policy and quality objectives, as defined in 5.2 and 6.2;

3) processes and evidence as defined in Clause 7;

4) reference to documented procedures for the quality management system, as defined in 6.2.2, Clause 5, Clause 8, Clause 9 and Clause 10;

5) a description of how the provider ensures the effective planning, operation, maintenance and control of the quality management system processes;

6) a description of the interaction between the processes of the quality management system;

7) written evidence maintained to demonstrate conformance to this document.

4.5.2 Operational documentation

The provider shall ensure that documents, including written evidence, determined by the provider to be necessary to ensure the effective planning, operation and control of the quality management system processes (see 4.1), are written and maintained.

NOTE This concerns the documents that support the application of the processes themselves, for instance traceability documents or documents written for communication purposes. For other documents and written evidence produced as the outcome of these processes, see for instance 9.5 for the results of serious incidents, 9.4 for feedback or other information gathered by the post-market monitoring system, or Clause 10 for the results of the reviews.

4.5.3 Updating documented information

The provider shall ensure that changes to documents are reviewed and approved either by the original approving function or another designated function that has access to pertinent background information upon which to base its decisions.

4.5.4 Control of documented information

4.5.4.1 Documented information required by the quality management system and this document shall be controlled to ensure:

a) it is suitable for use, where and when it is needed (including as regards the choice of the format and media);

NOTE Format includes language and graphics. Media includes paper and electronic.

b) it is adequately protected (e.g. from loss of confidentiality, improper use or loss of integrity);

c) it is stored and preserved, including preservation of legibility;

d) changes to it are controlled (e.g. version control);

e) it is retained and disposed of as required;

f) it is traceable (including documents from external and internal sources).

The provider shall ensure documents are identified and described (e.g. a title, date, author or reference number).

The provider shall retain documented information for a period as specified by applicable regulatory requirements.

The retention period shall ensure that documents related to AI systems which have been developed and tested are available for at least the lifetime of each AI system as defined by the provider, but not less than the retention period of any resulting written evidence (see 4.5.4.3), or as specified by applicable regulatory requirements.

4.5.4.2 Documented information of external origin determined by the provider to be necessary for the planning and operation of the quality management system shall be identified as appropriate and controlled.

4.5.4.3 A documented procedure shall define the controls needed to:

a) review and approve documents for adequacy prior to issue;

b) review, update as necessary and reapprove documents, taking into account written evidence;

c) ensure that the current revision status of and changes to documents are identified;

d) ensure that the outcomes in 4.5.4.1 are achieved.
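The controls listed in 4.5.4.1 can be illustrated with a minimal record structure. In the following sketch (Python; all field names are hypothetical assumptions, not prescribed by this document), each controlled document carries identification, version, approval and retention metadata:

    # Illustrative sketch only; field names are hypothetical assumptions.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ControlledDocument:
        reference_number: str   # identification (e.g. "QMS-PROC-007")
        title: str              # identification
        author: str             # identification
        issue_date: date        # identification
        version: str            # control of changes (4.5.4.1 d))
        approved_by: str        # review and approval prior to issue (4.5.4.3 a))
        retention_until: date   # retention and disposition (4.5.4.1 e))
        source: str             # traceability: "internal" or "external" (4.5.4.1 f), 4.5.4.2)

    doc = ControlledDocument(
        reference_number="QMS-PROC-007",
        title="Post-market monitoring procedure",
        author="Quality manager",
        issue_date=date(2025, 10, 1),
        version="2.1",
        approved_by="Head of compliance",
        retention_until=date(2035, 10, 1),  # per the applicable retention period
        source="internal",
    )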

5 Management responsibility

5.1 General

Top management shall ensure:

a) that the quality policy (see 5.2) and quality objectives (see 6.2) are established;

b) that the resources (see Clause 7) needed for the quality management system are available;

c) that other relevant roles (outside top management) can carry out their roles effectively within their areas of responsibility;

d) that the quality management system requirements are integrated into the provider’s processes;

e) that the quality management system achieves its intended result(s);

f) that the importance of effective quality management is communicated to provider personnel (see 7.3).

NOTE 1 Top management can demonstrate commitment by promoting a culture of responsible use and development of AI systems, and thus aid the success of the quality management system.

NOTE 2 This subclause is a modified version of the requirements of ISO/IEC 42001:2023, 5.1. The quality policy and quality objectives mentioned in this document are based on criteria derived from the regulatory purpose.

5.2 Quality policy

5.2.1 Establishing the quality policy

Top management shall establish a quality policy that:

a) provides a framework for setting quality objectives (see 6.2);

b) includes a commitment to meet applicable requirements;

c) implements the regulatory strategy (see 4.4);

d) includes a commitment to continual improvement of the quality management system;

e) is included in the documentation of the quality management system;

f) is communicated to the provider’s relevant personnel, in accordance with 7.3.

EXAMPLE The quality policy can be communicated during personnel orientation or training, during the design review process, to customer relationship management staff and as a standing item at board meetings.

The quality policy may be embedded in a wider policy document.

NOTE This subclause is a modified version of the requirements of ISO/IEC 42001:2023, 5.2. The quality objectives mentioned here are determined based on different criteria, as a result of the regulatory purpose.

5.3 Roles, responsibility, and authorities

5.3.1 The provider shall assign supervision and responsibility for the quality management system to provider personnel with relevant expertise and experience, including by assigning top management level responsibilities wherever applicable.

Top management shall assign the responsibility and authority for:

a) ensuring that the quality management system conforms to the requirements of this document;

b) reporting on the performance of the quality management system to top management.

Roles, responsibilities and authorities shall be described in the documentation of the quality management system.

The roles and their corresponding responsibilities and authorities shall be assigned and communicated within the provider, including when information is shared across roles.

5.3.2 The assignment of roles and responsibilities shall ensure that:

a) roles are applicable given the context of the provider;

b) roles are traceable to the quality policy (see 5.2) and quality objectives (see 6.2);

c) responsibilities and decision-making authority are defined for all AI systems in scope of the quality management system (see 4.3);

d) for the regulatory requirements identified in 4.2, responsibilities are assigned to monitor and address them;

e) responsibilities are identified for the handling of all processes required by this document, including:

1) the processes required by the regulatory requirements identified in 4.2;

2) the processes across the life cycle of the AI system, including which roles are consulted or informed.

5.3.3 Top management shall assign the responsibility and authority for:

a) ensuring that the risk management system addresses risks to fundamental rights, health and safety;

b) reviewing applicable regulatory requirements (see 10.2);

c) ensuring that threats and vulnerabilities of the AI system necessary to address b) are also addressed when applying the requirements of this document;

d) ensuring ongoing monitoring of the technological and regulatory state of the art relevant to the AI systems covered by the quality management system, if necessary to implement 6.1, 6.2.2, 8.1, Clause 9 or Clause 10.

The accountability and responsibility for overseeing the implementation of the risk management system and the approval of the risk control measures shall be assigned to a specific role.

5.3.4 The provider may outsource roles and responsibilities to external organizations and different types of workers.

The responsibility for ensuring that all outsourced activities comply with the quality management system and other applicable regulatory requirements remains with the provider.

The provider should encourage alignment across functions and parts of the provider on relevant aspects of the quality management system wherever applicable and in proportion to the scale and complexity of the provider.

6 Planning

6.1 Actions to address risks related to the functioning of the quality management system

6.1.1 When planning for the quality management system, the provider shall, based on the requirements referred to in 4.2, determine the risks that need to be addressed to:

a) give assurance that the quality management system can achieve its intended results;

b) prevent, or reduce, undesired effects of the application of the quality management system;

c) achieve continual improvement of the quality management system.

6.1.2 The provider shall plan:

a) actions to address the risks described in 6.1.1;

b) how to:

1) integrate and implement the actions into its quality management system processes;

2) evaluate the effectiveness of these actions.

6.1.3 When determining actions to address risks described in 6.1.1, the provider shall consider at least the following:

a) the regulatory compliance strategy described in 4.4;

b) used AI technologies (e.g. various machine learning approaches, expert systems, logic-based approaches);

c) the need for other parties to provide information and assistance throughout the AI system life cycle that is relevant for fulfilling regulatory requirements;

d) availability of resources and expertise (see Clause 7).

NOTE Addressing risks when planning the quality management system is different from, and is not to be confused with, the risk management process for the AI system in 8.1.

6.2 Quality objectives and planning to achieve them

6.2.1 Quality objectives

The provider shall establish quality objectives at relevant functions, levels and processes needed for the quality management system that are consistent with the provider’s quality policy (see 5.2).

Each AI system’s quality objective shall, as applicable:

a) be verifiable;

b) take into account applicable requirements, in particular regulatory requirements;

c) be monitored, regularly reviewed and updated (see 10.2.1.3);

d) be regularly reviewed and updated to maintain regulatory compliance throughout the AI system life cycle.

The provider shall describe the quality objectives in the documentation of the quality management system.

NOTE This is a modified version of the requirements of ISO/IEC 42001:2023, 6.2, but based on different criteria, as a result of the regulatory purpose.

6.2.2 Planning for the achievement of quality objectives

When planning how to apply and achieve its quality objectives, the provider shall determine:

a) what will be done, including the relevant processes and the applicable quality criteria of these processes, of each AI system and of each stage of the life cycle (see Clause 8);

b) measures to be taken to implement the requirements of this document;

c) who will be responsible, including responsibilities and roles on relevant levels and functions of the provider (see 5.3).

7 Support

7.1 Resources

The provider shall determine and provide the resources needed for the establishment, implementation, maintenance and continual improvement of the quality management system.

When determining the necessary resources the provider shall take at least the following aspects into account:

a) human resources and their competences;

b) organizational, discipline, application and technology specific knowledge;

c) organizational infrastructure and work environment (e.g. for design, development, testing);

d) measures to ensure the security of supply;

e) time.

See also 9.2.

7.2 Competence

7.2.1 The provider shall:

a) determine the necessary competences of personnel doing work under its control that affects its quality objectives;

b) ensure that these personnel are competent on the basis of education, training or experience for carrying out their role(s);

c) where applicable, take actions to acquire the necessary competences, and evaluate the effectiveness of the actions taken;

d) document the processes for:

1) establishing and validating competences;

2) providing needed training;

3) maintaining supervision;

4) ensuring awareness of personnel.

NOTE Achieving this can involve taking actions such as training, mentoring, task reassignment, hiring or contracting.

Documented information shall be available as evidence of competence.

7.2.2 The provider shall:

a) ensure that relevant personnel are familiar with their duties related to quality management and the provider’s quality management processes;

b) ensure that it has (or has access to) the competences necessary to understand:

1) the regulatory requirements identified in 4.2;

2) the intended purpose (see also 7.2.3).

NOTE 1 This includes competences necessary to understand regulatory requirements relating to health, safety and fundamental rights.

NOTE 2 Competences can be obtained through appropriate education, training, work experience of persons within the provider, or through acquiring the expertise and knowledge of persons outside of the provider.

7.2.3 The provider shall evaluate how the following factors influence the competency requirements:

a) each AI system’s intended purpose and how it can be reasonably foreseeably misused;

b) the nature of the AI technologies and the data being processed;

c) the relationship between the intended purpose, how each AI system can be reasonably foreseeably misused and risks that they pose, including significant effects on the affected persons, as identified in the risk management system;

d) the effect of the usability and accessibility of each AI system for diverse users, including persons with disabilities.

NOTE 1 Guidance on accessibility and equitable design of AI systems can be found in [17].

NOTE 2 The effects of the AI system can be downstream from the context in which the AI system is implemented.

7.3 Communication

7.3.1 General

The provider shall determine the internal and external communications relevant to the quality management system including:

a) what it will communicate;

b) when to communicate;

c) with whom to communicate;

d) how to communicate;

NOTE 1 Internal communication can take the form of email, or announcements in dedicated team communication channels or workplace messaging applications.

e) how the communication with the provider can be established.

The provider shall consider the relevant regulatory requirements that reflect the needs and expectations of interested parties in the internal and external communication, and plan how to meet those regulatory requirements.

NOTE 2 Interested parties can include other organizations across the supply chain, deployers, workers or end users.

7.3.2 Awareness

Persons doing work under the provider’s control and within the scope of the quality management system shall be aware of:

a) the quality policy (see 5.2);

b) their contribution to the effectiveness of the quality management system, including their role with respect to the quality management system;

c) the implications of not conforming with the quality management system requirements.

7.3.3 Communication for regulatory purposes

7.3.3.1 The provider shall handle communication with:

a) national competent authorities;

b) other authorities, including those providing or supporting the access to data;

c) notified bodies;

d) other operators;

e) customers;

f) other interested parties, including those identified through the risk management process.

The provider shall define and maintain procedures to communicate with the parties listed in a) and b).

See 9.5 in relation to serious incidents.

7.3.3.2 In the event of nonconformities, the provider shall inform relevant interested parties of these nonconformities and of any actions taken to correct them, including bringing each AI system into conformity, withdrawing it, disabling it or recalling it. The relevant interested parties are, as applicable:

a) market surveillance authorities;

b) notified bodies;

c) importers;

d) distributors;

e) authorized representatives;

f) deployers.

When an AI system presents a risk, the provider shall inform the responsible market surveillance authorities and, if applicable, the relevant notified body of the nonconformity of the AI system and of any actions taken to correct it.

7.3.3.3 When a competent authority issues a reasoned request, the provider shall provide the necessary documentation and information to demonstrate compliance. The provider shall ensure that it has procedures in place to comply with these requests within an appropriate time frame.

7.3.3.4 The provider shall ensure that it has processes in place to identify, collect and transmit or make available the information and documentation necessary to demonstrate the conformity and continuous compliance of each AI system. This includes any information requested by a competent authority, for example automatically generated logs within the control of the provider.
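As an illustration of such processes for automatically generated logs (a sketch under assumed record fields and names; no specific log format is prescribed by this document), a provider could record each inference event with enough context to support later requests from competent authorities:

    # Illustrative sketch only; the record fields are assumptions, not a prescribed log format.
    import json
    from datetime import datetime, timezone

    def log_inference_event(system_id, model_version, input_ref, output_ref, log_file):
        """Append one automatically generated log record for an AI system event."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,          # which AI system produced the event
            "model_version": model_version,  # supports traceability (3.3.6)
            "input_ref": input_ref,          # reference to input data, not the data itself
            "output_ref": output_ref,        # reference to the prediction/output
        }
        with open(log_file, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    log_inference_event("fall-detector-01", "2.3.0", "in/2025-10-01T12:00Z", "out/4711", "events.log")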

8 Product realization

8.1 Actions to address risks

The provider shall establish, implement, document and maintain a risk management system throughout the life cycle of each AI system, in accordance with regulatory requirements, aimed at achieving a high level of protection for health, safety and fundamental rights. prEN 18228 can be used for this, in whole or in part.

8.2 Determining the stages of the life cycle

8.2.1 The provider shall determine the stages of the life cycle.

The provider shall establish, document, implement and maintain processes and procedures that are:

a) appropriate to ensure that the AI system requirements are met and maintained, and applicable across the AI system life cycle;

b) needed to meet the requirements for the provision of each AI system;

c) needed to implement the actions determined in Clause 6.

8.2.2 The processes and procedures in 8.2.1 shall include:

a) techniques, procedures and systematic actions to be used for the design, design control and design verification of each AI system (see 8.3);

b) techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of each AI system (see 8.3 and 8.4);

c) systems and procedures for data management, including data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purpose of the placing on the market or the putting into service of AI systems (see 8.5);

d) examination, test and validation procedures to be carried out before, during and after the development of each AI system, and the frequency with which they have to be carried out (see 8.4);

e) post-market monitoring (see 9.4);

f) support.

8.2.3 In establishing the processes and procedures in 8.2.1 the provider shall:

a) determine the requirements for each AI system;

b) establish criteria for the processes necessary to meet AI system requirements;

c) determine the sequence and interaction of these processes;

d) determine the methods and criteria needed to ensure that both the operation and supervision of these processes are effective;

e) consider the following factors:

1) the requirements for each AI system;

2) the nature, duration and complexity of the life cycle activities of each AI system;

3) the required process stages, including applicable design and development reviews;

4) the required AI system verification and validation activities;

5) the responsibilities and authorities involved in each AI system life cycle process determined according to 5.3;

6) the internal and external resource needs for the life cycle of each AI system;

7) the need to control interfaces between persons involved in the AI system life cycle process;

8) the need for involvement of relevant interested parties (including deployers and affected persons) in relevant processes throughout the life cycle of the AI system, if applicable;

9) the requirements for subsequent provision of each AI system and services, including ongoing maintenance, retraining and updates;

10) the documented information needed to demonstrate that requirements applicable to the AI system throughout its life cycle have been met.

8.2.4 Documented information or written evidence shall be available to the extent necessary to have confidence that the processes have been carried out as planned.

Planning and process control documents shall be maintained and updated as the AI system life cycle progresses for each AI system.

The effectiveness of these measures shall be monitored and corrective actions shall be taken if the intended results are not achieved.

8.2.5 Further information on the AI system life cycle can be found in ISO/IEC 22989:2022, Clause 6, which contains a diagram similar to Figure 1.

Figure 1 — Example AI system life cycle model with AI system-specific processes based on ISO/IEC 22989

8.3 Inception, design and development

8.3.1 Inception

8.3.1.1 The provider shall determine the intended purpose of the AI system.

8.3.1.2 The provider should consider consultation with interested parties regarding fundamental rights, see Annex A.

8.3.2 Design and development

8.3.2.1 AI system requirements

8.3.2.1.1 The provider shall determine AI system requirements for the intended purpose (including reasonably foreseeable misuse) of each AI system that translate the applicable regulatory requirements (identified in 4.2) into definitions of explicit features in a form that can be used during design and development (see the example after 8.3.2.1.2).

AI system requirements shall be determined and documented information maintained (see 4.5). These AI system requirements shall include:

a) accuracy, robustness, cybersecurity, transparency, human oversight, data and data governance and record keeping according to the intended purpose;

b) applicable regulatory requirements;

c) requirements related to applicable risk control measures resulting from the risk management system, see prEN 18228;

d) as appropriate, information derived from previous similar designs;

e) other requirements essential for design and development of the AI system.

8.3.2.1.2 AI system requirements shall be:

a) complete;

b) unambiguous;

c) able to be verified or validated;

d) not in conflict with each other;

e) reviewed for continued appropriateness during the life cycle of the AI system (see 10.2).
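
The following is a minimal, hypothetical Python sketch of one possible machine-readable form for such requirements, linking each requirement to its regulatory source, a metric and a verifiable acceptance limit; all identifiers, sources, metrics and thresholds shown are illustrative assumptions, not requirements of this document.

```python
# Hypothetical sketch only: a machine-readable requirement record.
# All identifiers, sources, metrics and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class AISystemRequirement:
    req_id: str       # unique identifier, enables traceability to tests
    source: str       # regulatory requirement it translates (see 4.2)
    statement: str    # unambiguous, testable formulation (8.3.2.1.2 b))
    metric: str       # how fulfilment is measured
    acceptance: str   # numerical limit or range, so the requirement is verifiable

REQUIREMENTS = [
    AISystemRequirement(
        req_id="REQ-ACC-001",
        source="Accuracy (see 4.2)",
        statement="Classification accuracy on the held-out test set",
        metric="top-1 accuracy",
        acceptance=">= 0.95",
    ),
]

def duplicate_ids(reqs):
    """Requirements shall not conflict (8.3.2.1.2 d)); duplicated
    identifiers are one mechanically detectable conflict."""
    seen, dupes = set(), set()
    for r in reqs:
        (dupes if r.req_id in seen else seen).add(r.req_id)
    return dupes

assert not duplicate_ids(REQUIREMENTS)
```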

8.3.2.2 Review of the AI system requirements

8.3.2.2.1 The AI system requirements shall be reviewed for adequacy and approved in accordance with 8.3.2.2.2. This review shall be:

a) conducted systematically;

b) conducted prior to placing the AI system on the market or putting it into service for the intended purpose.

8.3.2.2.2 The review shall allow the provider to ensure that the AI system requirements (see 8.3.2.1):

a) are defined and documented;

b) cover applicable regulatory requirements;

c) can be met.

The results of the review and actions arising from the review shall be documented.

8.3.2.3 AI system specifications

AI system specifications shall:

a) meet the AI system requirements (see 8.3.2.1);

b) provide information for processes, products and services that are integrated into the AI system that are relevant to maintain the quality of the AI system;

c) be verifiable.

Written evidence of the specifications of each AI system shall be maintained in the technical documentation (see 8.7.1).

8.3.2.4 Design and development controls

The provider shall ensure that:

a) reviews are conducted to ensure design and development objectives are met;

b) verification and validation activities are conducted to ensure that the design and development specifications meet the AI system requirements;

c) any necessary actions are taken to address problems determined during the reviews, or verification and validation activities;

d) documented information of these activities is retained.

NOTE Design and development reviews, verification and validation activities have distinct purposes. They can be conducted separately or in any combination, as is suitable for the AI systems of the provider.

8.4 Verification and validation

8.4.1 AI system verification

Testing and verification shall be performed to ensure that each AI system meets the AI system specifications (see 8.3.2.3). Further information on the testing of accuracy can be found in prEN 18229‑2 [20].

The provider shall define and document testing plans and test procedures that:

a) are appropriate to the specified intended purpose and for identified reasonably foreseeable misuse;

b) include methods and numerical limits, ranges or other suitable and verifiable measures for acceptance of test results;

c) are aligned with best practices and are reproducible, in particular by setting out the conditions for testing.

Written evidence of the results and conclusions of the verification and necessary actions shall be maintained (see 4.5).
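
As an illustration of a documented test plan with numerical acceptance limits and reproducible conditions, the hypothetical Python sketch below fixes a random seed and records pass/fail evidence; the metric, seed and threshold are illustrative assumptions only.

```python
# Hypothetical sketch only: a documented test plan entry with a numerical
# acceptance limit and fixed conditions for reproducibility (8.4.1).
# The metric, seed and threshold are illustrative assumptions.
import random

TEST_PLAN = {
    "test_id": "VER-ACC-001",
    "requirement": "REQ-ACC-001",   # links the test to an AI system requirement
    "method": "hold-out evaluation",
    "random_seed": 42,              # documented condition for reproducibility
    "acceptance": {"metric": "top-1 accuracy", "minimum": 0.95},
}

def run_verification(plan, evaluate):
    """Run an evaluation callable under the plan's conditions and record evidence."""
    random.seed(plan["random_seed"])   # reproduce the documented conditions
    score = evaluate()
    return {
        "test_id": plan["test_id"],
        "score": score,
        "passed": score >= plan["acceptance"]["minimum"],  # written evidence (see 4.5)
    }

print(run_verification(TEST_PLAN, evaluate=lambda: 0.962))
```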

8.4.2 AI system validation

8.4.2.1 Design and development validation shall be:

a) performed in accordance with planned and documented arrangements (e.g. instructions for use, see b)) to ensure that each AI system is capable of meeting the requirements for the specified intended purpose;

b) carried out taking account of the AI system’s instructions for use and technical documentation;

c) carried out during and after the development of each AI system. The provider shall determine the frequency of validation and shall perform a risk evaluation based on the results of validation. See prEN 18228:—, 5.2.4;

d) completed prior to placing the AI system on the market or putting it into service, including for modifications that are not substantial modifications;

e) supported by documented validation plans and test procedures that include methods and numerical limits, ranges or other suitable measures for acceptance of test results.

Written evidence of the results and conclusion of validation and necessary actions shall be maintained.

8.4.2.2 The provider should consider consultation with interested parties regarding fundamental rights, see Annex A.

EXAMPLE When developing an AI system to manage or recruit workers, it is essential to consult workers and workers’ representatives in order to know which potential impacts to investigate.

8.5 Data management

The provider shall put in place a strategy to comply with applicable regulatory requirements relating to data management in accordance with 4.4.

The provider shall define, document and implement data management processes related to the design and development of each AI system. As appropriate and proportionate to the risk of the AI system, the provider shall:

a) establish and maintain systems and procedures for data management, including:

1) data acquisition;

2) data collection;

3) data analysis;

4) data labelling;

5) data storage;

6) data filtration;

7) data mining;

8) data aggregation;

9) data retention;

10) any other operation regarding the data that is performed before and for the purpose of placing on the market or the putting into service of each AI system.

b) define and document processes for:

1) data requirements;

2) data planning;

3) data preparation;

4) data decommissioning.

The provider shall specify a mechanism to ensure that, when each AI system is decommissioned, data no longer in use is destroyed or archived. This mechanism shall detail how such data is destroyed or archived to fulfil regulatory requirements. Data can be reused in certain situations, and destruction of data shall not conflict with the ability of the provider to comply with applicable regulatory requirements.
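
The following hypothetical Python sketch illustrates one way such a decommissioning mechanism can distinguish destruction from archiving based on remaining retention obligations; dataset names and dates are illustrative assumptions.

```python
# Hypothetical sketch only: deciding, at decommissioning time, whether each
# dataset is destroyed or archived. Names and dates are illustrative assumptions.
from datetime import date

DATASETS = [
    {"name": "training-v3", "retention_until": date(2030, 1, 1)},  # retention obligation
    {"name": "scratch-labels", "retention_until": None},           # no obligation
]

def decommission(datasets, today):
    """Return the action recorded for each dataset when the AI system is retired."""
    actions = []
    for ds in datasets:
        keep = ds["retention_until"] is not None and ds["retention_until"] > today
        # Destruction shall not conflict with applicable retention obligations,
        # so datasets under an obligation are archived instead of destroyed.
        actions.append((ds["name"], "archive" if keep else "destroy"))
    return actions

print(decommission(DATASETS, date(2026, 1, 1)))
# [('training-v3', 'archive'), ('scratch-labels', 'destroy')]
```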

8.6 Environmental sustainability

The provider should establish processes to ensure the identification and mitigation of environmental impacts of the AI system due to the intended purpose or reasonably foreseeable misuse, throughout the entire life cycle.

The provider should support relevant interested parties, in particular deployers, in meeting applicable regulatory requirements by providing information on environmental impacts.

NOTE ISO 14040 describes life cycle assessment and life cycle inventory studies which provide additional guidance.

8.7 Product documentation

8.7.1 Technical documentation

For each AI system, the provider shall establish and maintain technical documentation.

The technical documentation shall contain comprehensive, detailed, technical and specific information about each AI system and its elements to demonstrate compliance to auditors, notified bodies and competent authorities.

The technical documentation can include references to written evidence maintained to support the demonstration of compliance.

When the specifications for or characteristics of an AI system are changed, the provider shall ensure that the technical documentation is updated accordingly and that the changes are communicated to interested parties, as applicable.

NOTE Interested parties can include internal personnel, competent authorities, other public authorities and notified bodies.

8.7.2 Instructions for use

For each AI system, the provider shall establish and maintain instructions for use with information on how to use each AI system and its outputs. The instructions for use shall:

a) be written in a clear and accessible manner for the intended deployers of AI systems;

NOTE The intended audience can include persons who do not necessarily have a technical background, such as professional end-users.

b) contain information, specifications and procedures for deploying and using each AI system (including integration, installation, deployment and servicing) to ensure that it can operate in a manner that is fit for its intended purpose;

c) where applicable, include specific information prescribing organizational measures and procedures that are needed during deployment to ensure that affected persons are provided with opportunities to provide input to post-market monitoring (see 9.4). Such measures and procedures can be related to human oversight, logging and other traceability measures;

d) include requirements for the maintenance activities, including the frequency and scope of performing these activities, to ensure AI system quality is maintained.

9 Operation and control

9.1 Deployment, operation and monitoring

9.1.1 Deployment

The provider shall put into place procedures to ensure that the version of each AI system:

a) can be clearly identified, enabling its traceability as a product on the market or in service and its linking to its instructions for use and technical documentation (see 8.7.1);

NOTE 1 Traceability is enabled by written evidence and documented information from the provider (e.g. a software bill of materials).

NOTE 2 Once an AI system is put into service or placed on the market, record-keeping (see ISO/IEC 24970 [19]) provides traceability of changes to the version of the AI system and relevant components.

b) can be linked to all written evidence and documentation required by this document;

c) can be traced to all economic operators to whom the provider has supplied the AI system.

The AI system version should be linked to:

a) technical versions of AI components, such as software or specific AI models;

b) other relevant information, including datasets.
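
A minimal, hypothetical sketch of such a linkage is shown below as a version manifest; all field names, versions and identifiers are illustrative assumptions rather than a prescribed format.

```python
# Hypothetical sketch only: a version manifest linking one AI system version to
# its components, datasets, documentation and recipients. All field names,
# versions and identifiers are illustrative assumptions, not a prescribed format.
MANIFEST = {
    "ai_system_version": "2.4.0",
    "components": {                                  # a) technical versions
        "model": "classifier-weights sha256:ab12cd34",   # illustrative digest
        "inference_runtime": "1.7.3",
    },
    "datasets": ["training-v3", "eval-v3"],          # b) other relevant information
    "documents": {
        "technical_documentation": "TD-2.4.0",       # see 8.7.1
        "instructions_for_use": "IFU-2.4.0",
    },
    "supplied_to": ["importer-A", "distributor-B"],  # economic operators (see c) above)
}

def recipients_of(manifest, version):
    """Trace a marketed version to the economic operators who received it."""
    return manifest["supplied_to"] if manifest["ai_system_version"] == version else []

print(recipients_of(MANIFEST, "2.4.0"))  # ['importer-A', 'distributor-B']
```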

9.1.2 Operation and monitoring

Subject to the necessity of support to ensure adequate protection of health, safety and fundamental rights in accordance with applicable regulatory requirements, support services should be identified, specified and provided, considering as applicable:

a) entities that are expected to require support (e.g. deployers or end-users);

b) support channels;

c) expected types of problem and appropriate responses;

d) diagnostic tools;

e) a mechanism to ensure that deployers can communicate feedback received regarding potential risks to health, safety and fundamental rights to the provider.

9.2 Supply chain

9.2.1 General

The provider shall define and document procedures to ensure that products, components, data and services that are supplied externally conform to specified requirements, applicable regulatory requirements and standards. These products, components, data and services can come from outside or inside the provider.

The provider shall determine the measures when:

a) products and components (e.g. software, hardware) are supplied externally;

b) model training and test data for AI systems are supplied externally;

c) services for certain life cycle activities (e.g. design and development, model training, data annotation, evaluations and testing) are supplied externally.

9.2.2 Evaluation and selection

The provider shall establish and document criteria for evaluation and selection of external suppliers. The criteria shall be:

a) based on the suppliers’:

1) ability to provide products, components, data and services that meet the provider’s requirements;

2) history of reliability, adherence to agreed-upon specifications, and ability to meet contractual obligations, including quality and applicable standards;

b) based on the likely effect of the supplied products, components, data and services on the quality of AI systems;

c) proportionate to the risk associated with each AI system and its intended purpose, as determined by the risk management system.

9.2.3 Monitoring and re-evaluation

The provider shall plan the monitoring and re-evaluation of suppliers.

The performance of suppliers in meeting requirements for the acquired products, components, data and services shall be monitored based on their ability to meet regulatory requirements and the requirements of this document.

The results of the monitoring shall provide an input into the supplier re-evaluation process.

The provider shall retain documented information of these activities and any necessary actions arising from the evaluations.

9.2.4 Requirements and specifications

The provider should communicate to suppliers requirements and specifications as applicable for:

a) the products, components, data and services to be supplied;

b) the acceptance procedures for the supplied products, components, data and services;

c) the supplier’s:

1) quality management system;

2) competences, including any required qualification of persons;

3) interactions with the provider;

4) use of security by design principles (see prEN 18282 [18]).

d) control and monitoring of the suppliers’ performance to be applied by the provider;

e) the absence of known vulnerabilities and disclosure of future vulnerabilities;

f) verification or validation activities that the provider intends to perform at the external suppliers’ premises.

9.2.5 Extent of control

In determining the extent of control over the supplied products, components, data and services, the provider shall:

a) ensure and document that the supplied products, components, data and services remain within the control of its quality management system;

b) define and document both the controls that it intends to apply to a supplier and those it intends to apply to the supplied products, components, data and services;

c) take into consideration:

1) the potential impact of the supplied products, components, data and services on the provider’s ability to consistently meet user requirements, applicable standards and applicable regulatory requirements;

2) the effectiveness of the controls applied by the supplier.

d) determine the verification, product acceptance or other activities necessary to ensure that the supplied products, components, data and services meet requirements. Records of the verification shall be maintained.

9.3 Changes to AI systems

9.3.1 Planning

The provider shall implement a change management process to control planned changes and review the consequences of unintended changes to AI systems that can result in a substantial modification of the AI system.

The provider can make use of relevant processes and procedures established as part of its risk management framework to comply with this requirement.

9.3.2 Review of changes

The provider shall review the consequences of both its planned changes and unintended changes in accordance with the risk management system that meets applicable regulatory requirements. See prEN 18228.

The provider shall specify the procedures required to identify, document and review modifications to each AI system, whether intended or unintended. Those procedures shall include processes, methods and mechanisms to ensure that:

a) the AI system is kept under recurrent review to ensure that the risks to health, safety and fundamental rights arising from the AI system continue to be acceptable as determined by the provider’s AI risk management system;

b) any changes to the risks produced by the AI system are promptly identified and any necessary action is undertaken as set out by the organizational procedures for managing changes to the AI system.

AI systems on the market or in service that are modified shall result in a reviewed and updated set of documentation required for the quality management system (see 4.5). The technical documentation shall reflect all versions of the product, including pre-determined changes.

9.3.3 Changes triggering action

Once any changes to an AI system have been identified, the provider shall review them and if needed, take necessary action to address:

a) adverse impacts on the quality of each AI system;

b) any risk that has not been documented and accepted in accordance with the risk management system (see prEN 18228, 5.2) at the time of the previous conformity assessment;

c) gaps in monitoring and detection measures.

9.3.4 Pre-determined changes

9.3.4.1 This subclause can apply to AI systems using continuous learning, where pre-determined changes can be considered planned maintenance activities, as well as to other situations.

Providers can conduct verification and validation activities on pre-determined changes, to ensure that they do not affect the intended purpose of AI systems, affect the quality management system, or increase risks to health, safety and fundamental rights.

EXAMPLE A provider who provides a machine learning model that can be subject to continuous learning can verify and validate the AI system and its learning capability in a conformity assessment activity. This can lead to pre-determined changes.

If the provider intends to rely on such pre-determined changes, they can document them in the technical documentation and instructions for use.

9.3.4.2 The technical documentation can include:

a) a description of the pre-determined changes, including a specification of expected changes to performance;

b) how various versions of the AI system can be identified;

NOTE This is to avoid situations where a regulator is faced with a previous version of the AI system to which the version of the technical documentation presented is not applicable.

c) a step-by-step modification procedure, including:

1) appropriate data, test methods and numerical limits, ranges or other suitable measures for acceptance of test results used to develop, verify, validate and implement all proposed modifications;

2) the update process and any communication or training requirements;

d) an impact assessment, including:

1) any impact on quality objectives;

2) identifying risks introduced by the pre-determined change;

3) how those risks and impacts have been mitigated by verification and validation;

4) how implementation of one change affects implementation of another and the cumulative impact of all pre-determined changes.

9.3.4.3 The existence of the pre-determined change procedure can be included in the instructions for use. The instructions for use should then include:

a) a description of the implemented modifications, including a:

1) summary of current AI system performance;

2) description of the relevant data (model training, tuning and test data), as applicable, used to implement a modification;

3) associated inputs/outputs;

4) validation requirements and related evidence.

b) a description of how the modifications were implemented;

c) a description of how users will be informed of implemented modifications.

9.4 Post-market monitoring

9.4.1 General

The provider shall establish and document a post-market monitoring system that:

a) applies from when each AI system is placed on the market or put into service and until it is no longer in use;

b) allows the provider to evaluate continuous compliance of each AI system in scope of the quality management system;

c) is proportionate to the nature of the AI technologies and the risks of the AI system in the context of its intended purpose, particularly residual risk present after the risk management process has been applied;

d) provides processes to collect and review experience gained from the use of AI systems to identify needs for immediate and necessary corrective or preventive actions.

9.4.2 Monitoring scope

The provider shall identify the scope of the post-market monitoring system including:

a) each AI system in scope;

b) the quality objectives connected to the AI systems in scope;

c) the objectives of the post-market monitoring system.

9.4.3 Monitoring approach

The provider shall determine an effective approach for monitoring the continued achievement of in-scope quality objectives. This shall be planned and documented and include consideration of:

a) potential negative impacts of the operation of each AI system;

b) applicable regulatory requirements, including data privacy requirements and fundamental rights;

NOTE This is particularly important for persons who are disproportionately at risk of inequality and unlawful discrimination from the use of the AI system.

c) the potential reliance on other organizations, including distributors, importers and deployers, as well as third parties supplying tools, services, components or processes;

d) the intended purpose, including reasonably foreseeable misuse of each AI system;

e) technical constraints that need to be addressed to facilitate effective post-market monitoring;

f) the performance of the AI system;

g) where relevant, interaction with other AI systems, including not meeting performance objectives as they relate to the AI systems in scope of the post-market monitoring.

NOTE 1 Not meeting performance objectives can result from feedback loops and resilience failures.

NOTE 2 Interactions with other AI systems can include dependencies between the AI systems.

The monitoring approach shall track the effectiveness of the risk management prevention and mitigation measures through qualitative or quantitative indicators, and by drawing on feedback from both internal and external sources, including affected persons.

In order to be effective, the monitoring approach shall be active and systematic, address nonconformities promptly and feed into the continual improvement process.
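
By way of illustration, the hypothetical Python sketch below implements one simple quantitative indicator: a rolling accuracy estimate compared against a quality objective; the window size and objective value are illustrative assumptions.

```python
# Hypothetical sketch only: a rolling quantitative indicator compared against a
# quality objective. The window size and objective value are illustrative.
from collections import deque

class RollingIndicator:
    def __init__(self, objective=0.95, window=1000):
        self.objective = objective
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = incorrect

    def record(self, correct: bool):
        self.outcomes.append(1 if correct else 0)

    def breached(self) -> bool:
        """True when the rolling rate falls below the quality objective."""
        if not self.outcomes:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.objective

indicator = RollingIndicator()
for outcome in [True] * 90 + [False] * 10:
    indicator.record(outcome)
print(indicator.breached())  # True: rolling accuracy 0.90 is below the 0.95 objective
```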

9.4.4 Information provided

The provider shall determine policies and procedures for systematically gathering and storing the information that is gained from the use of each AI system and that is necessary to support post-market monitoring. As applicable this shall include information provided by:

a) the deployer, end-users or other interested parties;

b) monitoring the AI system or its logs;

c) regulatory authorities;

d) feedback and complaint mechanisms, and serious incidents.

The provider shall implement AI system logging to capture relevant data about the AI system, as appropriate. Further information on AI system logging can be found in ISO/IEC 24970.
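
The following hypothetical Python sketch shows one minimal form of structured, append-only logging that links each record to the AI system version; field names are illustrative assumptions, and ISO/IEC 24970 specifies AI system logging in detail.

```python
# Hypothetical sketch only: appending one structured, timestamped record per
# event so post-market monitoring can review use of the AI system. Field names
# are illustrative; ISO/IEC 24970 specifies AI system logging in detail.
import json
import time

def log_event(path, system_version, event_type, payload):
    record = {
        "timestamp": time.time(),
        "ai_system_version": system_version,  # links the record to a version (9.1.1)
        "event_type": event_type,             # e.g. "inference" or "human_override"
        "payload": payload,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")    # append-only JSON lines

log_event("ai_system.log", "2.4.0", "inference", {"confidence": 0.97})
```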

9.4.5 New and emerging risks

The provider shall implement procedures to identify and act upon new and emerging risks from each AI system when monitoring (see 9.4.3) and information provided (see 9.4.4) indicate that risks are not currently being managed and reduced to an acceptable level (e.g. complaints, incident reports). See prEN 18228:—, 5.6 for further information about risk management review based on new information acquired.

9.4.6 Interaction with deployers

Where the provider is not able to monitor an AI system directly and detect nonconformities without the involvement of the deployer, then appropriate requirements for monitoring shall be included in the instructions for use, if necessary to mitigate risk. In this event the provider shall consider inclusion of the following in the instructions for use:

a) technical monitoring requirements of the AI systems in line with the post-market monitoring plan, and recommended tools for this monitoring if not integrated into the AI system;

b) recommendations on technical competency requirements to monitor the AI system.

9.4.7 Nonconformities identified by post-market monitoring

Nonconformities identified by post-market monitoring shall be managed in accordance with a documented procedure for each AI system, consistent with 9.6. As applicable, procedures shall define what constitutes a breach of quality objectives (see the example after this list), including:

a) single events;

b) a collection of events over a defined time period;

c) time-based performance deviations and shifts;

d) tolerances or threshold ranges within which exceeding a threshold is considered acceptable.
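
The hypothetical Python sketch below illustrates how such breach definitions can be made operational, treating a single critical event or a collection of events over a defined period as a breach; the severities, window and limit are illustrative assumptions.

```python
# Hypothetical sketch only: operationalizing breach definitions. A single
# critical event, or more than a set number of events within a defined period,
# is treated as a breach. Severities, window and limit are illustrative.
def is_breach(events, window_start, window_end, max_events=5):
    """events: (timestamp, severity) tuples already filtered to one quality objective."""
    in_window = [e for e in events if window_start <= e[0] <= window_end]
    if any(severity == "critical" for _, severity in in_window):
        return True                      # a) a single event can constitute a breach
    return len(in_window) > max_events   # b) a collection of events over a period

events = [(1, "minor"), (2, "minor"), (3, "critical")]
print(is_breach(events, window_start=0, window_end=10))  # True: one critical event
```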

9.5 Reporting serious incidents

9.5.1 General

The provider shall implement a process for investigating serious incidents to determine if there is a causal link between the AI system and the serious incident.

The provider shall ensure that the serious incident is reported to the competent authorities after it has established a causal link between the AI system and the serious incident or considered that there is a reasonably plausible link.

The provider shall ensure that serious incident reports are filed with the competent authorities within the following time scales:

a) for serious incidents involving critical infrastructure, the serious incident shall be reported immediately or at the latest within two days;

b) for serious incidents involving the death of a person, the serious incident shall be reported immediately or at the latest within ten days;

c) for all other serious incidents, the serious incident shall be reported to the competent authorities immediately or, at the latest, within 15 days.

The report of the serious incident may be submitted in a provisional version that is incomplete, followed by a complete version.
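
As a worked illustration of these timescales, the hypothetical Python sketch below computes the latest permissible filing date from the incident category, mirroring a), b) and c) above; the category names and date arithmetic are illustrative assumptions.

```python
# Hypothetical sketch only: computing the latest filing date for a serious
# incident report from its category, mirroring a), b) and c) above.
from datetime import date, timedelta

DEADLINE_DAYS = {
    "critical_infrastructure": 2,   # immediately, at the latest within two days
    "death_of_a_person": 10,        # immediately, at the latest within ten days
    "other_serious_incident": 15,   # immediately, at the latest within 15 days
}

def latest_report_date(awareness_date: date, category: str) -> date:
    """The report is due immediately; this is only the latest permissible date."""
    return awareness_date + timedelta(days=DEADLINE_DAYS[category])

print(latest_report_date(date(2026, 3, 1), "death_of_a_person"))  # 2026-03-11
```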

9.5.2 Specific procedures

9.5.2.1 The provider shall document, implement and maintain procedures related to the reporting of serious incidents to report such incidents within the timelines in 9.5.1. This includes procedures for deployers to report serious incidents to the provider and to suspend use of the AI system.

9.5.2.2 The procedures should include:

a) establishing the key internal contacts responsible and internal escalation process for the reporting of serious incidents in accordance with 5.3;

b) promoting awareness of the risks of serious incidents and the relevant internal escalation process to relevant provider personnel;

c) implementing and maintaining processes that will enable the provider to meet any applicable regulatory timescales for reporting serious incidents;

d) ensuring that the provider can allocate adequate resources, including competent personnel and necessary tools, to support an investigation into the serious incident and respond to any enquiries from relevant authorities;

e) maintaining detailed written evidence of all serious incidents and any associated investigations and their findings (e.g. root cause analysis) and action taken by the provider to ensure continued compliance, or bringing the AI system into compliance, with the regulatory requirements identified in 4.2;

f) procedures and obligations between provider and deployer to enable reporting from deployer to provider.

NOTE Some serious incidents need to be reported by the deployer to the provider first before the provider can be aware of the situation and apply the relevant procedures.

9.6 Nonconformities

9.6.1 Nonconformity and corrective action

The provider shall manage nonconformities in relation to the quality management system. See 7.3.3 regarding communication of nonconformities for regulatory purposes.

When a nonconformity occurs, the provider shall:

a) react to the nonconformity and as applicable:

1) take action to control and correct it;

2) deal with the consequences.

b) determine the extent and severity of the nonconformity;

EXAMPLE If a nonconformity is detected in one AI system covered by this quality management system, the provider determines whether the nonconformity affects any other AI systems covered by the quality management system.

c) evaluate the need for action to eliminate the cause(s) of the nonconformity, so that it does not recur or occur elsewhere, by:

1) reviewing the nonconformity;

2) determining the causes of the nonconformity;

3) determining if similar nonconformities exist or can potentially occur.

d) implement any corrective action needed, ensuring that those actions are proportionate to the severity of the nonconformity, are applied to the full extent of the nonconformity, and appropriate to the type of nonconformity identified;

e) review the effectiveness of any corrective action taken;

f) make changes to the quality management system, if necessary.

9.6.2 Documentation

Documented information shall be available as evidence of:

a) the nature of the nonconformities and any subsequent actions taken;

b) the results of any corrective action;

NOTE 9.6.1 is modified from, and 9.6.2 is identical to ISO/IEC 42001:2023, 10.2.

10 Performance evaluation

10.1 General

The quality management system shall be considered effective when it and the AI systems within its scope align with the applicable requirements of this document, including:

a) protection of health, safety and fundamental rights;

b) quality objectives.

10.2 Review

10.2.1 General

10.2.1.1 The effectiveness of the quality management system as a whole shall be reviewed using clear and measurable criteria of a quantitative or qualitative nature.

The provider shall establish and document procedures for review of the provider’s quality management system at planned intervals to ensure its continuing suitability, adequacy and effectiveness and to identify the need for changes, including:

a) the quality policy (see 5.2);

b) the quality objectives (see 6.2);

c) adherence to policies and procedures;

d) monitoring the effectiveness of risk control measures;

e) the interested parties, particularly affected persons;

f) opportunities for improvement.

In addition to planned reviews, the provider shall ensure that a review of its quality management system is conducted when an investigation of a serious incident finds the quality management system or its measures to be inadequate.

The provider shall periodically review the applicable regulatory requirements for changes.

NOTE Depending on resources and expertise available, the provider can consider facilitating such reviews through means such as meetings, training sessions or newsletters.

The provider shall maintain review documentation (see 4.5), including recommendations and written evidence.

10.2.1.2 The periodic review process should be proportionate to the risks potentially presented by each AI system provided that the degree of rigour and the level of protection to health, safety and fundamental rights is maintained and ensured.

10.2.1.3 The provider can define and put in place a process to report concerns about the provider’s role with respect to an AI system throughout its life cycle.

The reporting mechanism should:

a) offer options for confidentiality or anonymity or both;

b) be available and promoted to employed and contracted persons;

c) be staffed with qualified persons;

d) stipulate appropriate investigation and resolution powers for the persons referred to in c);

e) provide for mechanisms to report and to escalate to management in a timely manner;

f) provide for effective protection from reprisals for both the persons concerned with reporting and investigation (e.g. by allowing reports to be made anonymously and confidentially);

g) provide reports in accordance with e), if appropriate, while maintaining the confidentiality and anonymity options in a) and respecting general business confidentiality considerations;

h) provide response mechanisms within an appropriate time frame.

NOTE This subclause is a modified version of ISO/IEC 42001:2023, B.3.3.

10.2.2 Management review input

The review should include information arising from:

a) interested party feedback;

b) concerns and complaints, complaints-handling and investigation reports;

c) reporting to regulatory authorities;

d) internal and external audits;

e) monitoring and measurement of quality management system processes;

f) monitoring and measurement of the performance of the AI system in operation;

g) corrective action;

h) follow-up actions from previous management reviews;

i) changes that can affect the quality management system;

j) recommendations for improvement;

k) applicable new or revised regulatory requirements;

l) monitoring of new or revised harmonized standards related to applicable regulatory requirements.

10.2.3 Review output

The output from reviews shall be recorded and shall include the following as part of this written evidence:

a) any improvement needed to maintain the suitability, adequacy and effectiveness of the quality management system and its processes;

b) any improvement of the AI system related to interested party requirements;

c) any changes needed to ensure compliance with applicable new or revised regulatory requirements;

d) any changes to resource needs.

10.3 Improvement

The provider should continually improve the suitability, adequacy and effectiveness of the quality management system.

10.4 Planning of changes

10.4.1 General

When the provider determines the need for changes to the quality management system, the provider shall:

a) specify and document the procedures required to manage the changes to the quality management system;

b) carry out the changes in a planned and controlled manner;

c) systematically keep written evidence on implemented changes.

10.4.2 Changes to scope

Whenever a new AI system becomes covered by this quality management system or is substantially modified, the provider shall assess the need to review the processes of the quality management system determined according to 4.1, based on the characteristics of that AI system.

If review is needed, the review shall be conducted, and if it concludes that changes to the processes are needed then those processes shall be revised accordingly.

10.4.3 Changes to process

Changes to be made to the quality management system processes shall be:

a) evaluated for their impact on the quality management system;

b) evaluated for their impact on each AI system under the quality management system;

c) controlled in accordance with the requirements of this document.


Annex A (informative)

Consultation with interested parties regarding fundamental rights

A.1 General

In respect of fundamental rights, the provider should seek to understand the concerns of potentially affected persons by consulting them directly in a manner that takes into account differences and similarities between European citizens, and other potential barriers to effective engagement. In situations where such consultation is not possible, the provider should consider reasonable alternatives such as consulting credible, independent expert resources, including human rights organizations and others from civil society.

The provider should structure a consultation process comprising the following steps:

a) planning for both material and human resources to ensure that affected persons or groups of persons (or their representatives) and other interested parties who can foreseeably be adversely affected by the operation of the AI system are properly consulted;

b) identification and mapping of individuals and groups that can be negatively impacted, with a focus on disadvantaged, under-represented groups or persons in situations of vulnerability;

c) establishing clear objectives for the consultation such as identification of fundamental rights risks, defining risk acceptability criteria, mitigation of fundamental rights risks, investigation of serious incidents and post-market monitoring;

d) determination of the consultation method and sharing of relevant and meaningful information about the AI system to give a comprehensive understanding of potential implications and fundamental rights impacts. The consultation method should:

1) take into account considerations of age-appropriateness, accessibility needs as well as the need for capacity building of persons and groups to ensure their meaningful involvement;

2) provide opportunities to obtain meaningful feedback on concerns about the risks which the AI system poses.

e) implementation of consultation activity, documentation of findings and communication to affected parties about the outcomes of the consultation process.

Consultations should begin at the inception stage, prior to the commencement of design and development, and continue throughout the examination, testing and validation process, in order to gain a comprehensive understanding of the AI system’s risks to the health, safety and fundamental rights of persons. Consultation can be of added value at every stage of the AI system life cycle.

Testing and validation should be conducted in consultation with affected persons and groups of persons, and any others whose health, safety and fundamental rights are likely to be adversely affected by the AI system (i.e. AI subjects).

The outcomes of these consultations can result in the provider modifying the intended purpose of the proposed AI system and introducing additional safety by design measures to reduce the risks of the AI system.

A.2 Estimating risk and impact on affected persons

After potential impacts are identified, processes can be designed to observe the magnitude of the impacts on affected persons and groups, provided that such observation is not undertaken unless and until those affected are properly informed of any material risks associated with the testing and validation of the AI system and have given express consent to such observation and measurement activities.


Annex B (informative)

Relationship between this document and other harmonized standards

B.1 Introduction

This Annex provides an overview of how this quality management system document relates to other standards. The quality management system fulfils a central and overarching role by providing systematic processes to ensure AI systems meet applicable regulatory requirements throughout their life cycle. This Annex helps providers understand its relationship with other harmonized standards addressing specific aspects of AI systems. This relationship is particularly important when selecting appropriate technical specifications as described in 4.4.3 and Article 17(1)(e) of the AI Act [1].

NOTE Technical specifications in this context are not limited to standardization deliverables.

Figure B.1 illustrates the relationships between this quality management system document and the other primary harmonized standards expected to have Annex ZAs supporting the EU AI Act, while the standards listed in Clause B.4 provide further support. Dotted lines represent reference or support relationships where standards are complementary.

Figure B.1 — Relationship between harmonized standards

B.2 Selection of technical specifications

Following 4.4.3, providers are required to select appropriate technical specifications to demonstrate compliance with essential requirements. Table B.1 provides an overview of harmonized standards related to various aspects of AI system requirements.

Table B.1 — Mapping of primary standards and AI Act articles

Aspect | Standard reference | Related to
Risk management | prEN 18228 | Risk management system requirements (Article 9)
Data quality | prEN 18284 | Quality and governance of datasets in AI (Article 10)
Trustworthiness Part 1 | prEN 18229‑1 | Framework for AI systems trustworthiness (Articles 12–14)
Trustworthiness Part 2 | prEN 18229‑2 | Framework for AI systems trustworthiness (Article 15)
Cybersecurity | prEN 18282 | Cybersecurity specifications for AI Systems (Article 15)

B.3 Harmonized standard interactions

B.3.1 Risk management integration

There is an explicit connection in the AI Act between the quality management system and the risk management system. Article 17(1)(g) directly mandates that the quality management system must include “the risk management system referred to in Article 9”. prEN 18228 can be used to establish the risk management system.

B.3.2 Data management and bias mitigation

Subclause 8.5 implements Article 17(1)(f) of the AI Act, which requires “systems and procedures for data management, including data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data”.

Subclause 8.5 directly connects to two related standards:

— prEN 18283 [15];

— prEN 18284 [16].

These standards provide detailed technical specifications for data quality and bias management that can be implemented through the quality management system.

It is important to note the relationship between the Dataset and Bias standards (see Clause B.4): the Dataset Quality and Governance standard contains requirements that build upon and make normative references to the Bias Management standard in order to provide a means of conforming to Article 10(2)(f-g) of the AI Act.

B.3.3 Cybersecurity integration

This document defines cybersecurity as one of the essential requirements for regulatory compliance for the AI system (see 4.4.2 g)), and as part of the overall quality management approach for AI systems.

B.3.4 Trustworthiness framework implementation

This document provides systematic processes to establish and maintain the methods and measures needed to comply with the essential requirements of the AI Act. These essential requirements are listed for reference in 4.4.2, and 8.3.2.1 mandates that the chosen compliance methods are translated into concrete AI system requirements.

The AI trustworthiness framework standards (prEN 18229 Parts 1 and 2) are related harmonized standards that establish a conceptual framework and provide specific technical methods which can be used to meet these essential requirements. A direct link between the standards exists in 8.4.1 on AI system verification, which references prEN 18229‑2 for methods related to the testing of accuracy.

B.3.5 Conformity assessment

Subclause 4.4.3 requires providers to select measures to demonstrate compliance with essential requirements, including harmonized standards or common specifications. This directly interacts with the conformity assessment framework, which:

— uses quality management system documentation as evidence during conformity assessment;

— evaluates whether the quality management system effectively implements all required processes;

— verifies that the quality management system includes all elements required by Article 17 of the AI Act.

The technical documentation maintained under the quality management system (see 8.7.1) forms the basis for conformity assessment activities, creating a bidirectional relationship between these standards.

B.4 Supporting harmonized standards

In addition to the primary standards shown in Clause B.3, the AI Act work programme includes several supporting harmonized standards that provide crucial technical methodologies. These are necessary to implement some of the requirements established by the harmonized standards listed in Clause B.2. These include:

— Bias management (prEN 18283): Provides foundational concepts and requirements for addressing bias in datasets and AI systems, and is referenced by the Dataset quality and governance standard;

— AI system logging (prEN ISO/IEC 24970): Provides specifications for logging events during an AI system's operation, essential for traceability and post-market monitoring (see 9.4);

— Evaluation methods for accurate natural language processing systems (prEN ISO/IEC 23282): Provides frameworks and methods for evaluating NLP systems, which is critical for demonstrating accuracy and robustness (see 8.4.1) as well as for prEN 18229‑2;

— Evaluation methods for accurate computer vision systems (prEN 18281): Provides frameworks and methods for evaluating computer vision systems.

NOTE 1 The standards listed represent key documents in the AI Act standardization work programme at the time of writing. The ecosystem is continually evolving, and other technical standards intended to provide further support for specific articles are also under development. Providers can consult the latest list of harmonized standards published in the Official Journal of the European Union.

NOTE 2 Harmonized standards do not all have an Annex ZA, and are not all cited in the Official Journal.


Annex C (informative)

Correspondence between this document and ISO 9001:2015

Table C.1 shows the correspondence between this document and ISO 9001:2015.

Table C.1 — Correspondence between this document and ISO 9001:2015

Clause in this document | Clause in ISO 9001:2015
1 Scope | 1 Scope
4 Quality management system | 4 Context of the organization
4.1 General | 4.4 Quality management system and its processes
4.2 Identifying regulatory requirements | 4.2 Understanding the needs and expectations of interested parties
4.3 Determining the scope of the quality management system | 4.3 Determining the scope of the quality management system
4.4 Strategy for regulatory compliance | (no corresponding clause)
4.5 Documented information | 7.5 Documented information
5 Management responsibility | 5 Leadership
6 Planning | 6 Planning
7 Support | 7 Support
7.1 Resources | 7.1 Resources
7.2 Competence | 7.2 Competence
7.3 Communication | 8.2.1 Customer communication
8 Product realization | 8 Operation
9 Operation and control | (no corresponding clause)
10 Performance evaluation | 9 Performance evaluation
10.3 Improvement | 10 Improvement


Annex D (informative)

Correspondence between this document and ISO/IEC 42001:2023

Table D.1 shows the correspondence between this document and ISO/IEC 42001:2023.

Table D.1 — Correspondence between this document and ISO/IEC 42001:2023

Clause in this document | Clause in ISO/IEC 42001:2023
1 Scope | 1 Scope
4 Quality management system | 4 Context of the organization
4.1 General | 4.4 AI management system
4.2 Identifying regulatory requirements | 4.2 Understanding the needs and expectations of interested parties
4.3 Determining the scope of the quality management system | 4.3 Determining the scope of the AI management system
4.4 Strategy for regulatory compliance | (no corresponding clause)
4.5 Documented information | 7.5 Documented information
5 Management responsibility | 5 Leadership
6 Planning | 6 Planning
7 Support | 7 Support
8 Product realization | 8 Operation
9 Operation and control | (no corresponding clause)
10 Performance evaluation | 9 Performance evaluation
10.3 Improvement | 10 Improvement


Annex ZA (informative)

Relationship between this European Standard and the essential requirements of Regulation (EU) 2024/1689 aimed to be covered

This European Standard has been prepared under the Commission’s standardization request M/613 C(2023) 3215 to provide one voluntary means of conforming to the essential requirements of Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (OJ L, 2024/1689, 12.7.2024).

Once this standard is cited in the Official Journal of the European Union under that Regulation, compliance with the normative clauses of this standard given in Table ZA.1 confers, within the limits of the scope of this standard, a presumption of conformity with the corresponding essential requirements of that Regulation, and associated EFTA regulations.

Table ZA.1 — Correspondence between this European Standard and Regulation (EU) 2024/1689

Essential Requirements of Regulation (EU) 2024/1689 | Clause(s)/subclause(s) of this EN | Remarks/Notes
Article 11(1) – first sentence | 8.7.1 |
Article 17(1) – first sentence | 4.1, 4.2, 4.3, 4.4, 5.1, 5.3.1, 5.3.2, 5.3.3 | Covered to the extent the obligations are covered by Article 17.
Article 17(1)(a) | 4.4, 9.3.1, 9.3.2, 9.3.3 |
Article 17(1)(b) | 8.3.1, 8.3.2 |
Article 17(1)(c) | 8.4 |
Article 17(1)(d) | 8.4 |
Article 17(1)(e) | 4.4 |
Article 17(1)(f) | 8.5 |
Article 17(1)(g) | 8.1 | Subject to use of a risk management system compliant with Article 9
Article 17(1)(h) | 9.4 |
Article 17(1)(i) | 9.5 |
Article 17(1)(j) | 7.3 |
Article 17(1)(k) | 4.5, 8.7 |
Article 17(1)(l) | 7.1, 9.2 |
Article 17(1)(m) | 5.3.1, 5.3.2, 5.3.3 |
Article 17(2) | Not covered |
Article 72 | 9.4 |

WARNING 1 — Presumption of conformity stays valid only as long as a reference to this European Standard is maintained in the list published in the Official Journal of the European Union. Users of this standard should frequently consult the latest list published in the Official Journal of the European Union.

WARNING 2 — Other Union legislation may be applicable to the products falling within the scope of this standard.

Bibliography

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)

[2] ISO 9001:2015, Quality management systems — Requirements

[3] ISO/IEC 42001:2023, Information technology — Artificial intelligence — Management system

[4] Charter of Fundamental Rights of the European Union, 2012/C 326/02

[5] prEN 18228:—, AI Risk Management

[6] ISO/IEC Directives Part 1, Consolidated ISO Supplement, Annex SL Appendix 2 (rev 4 2024)

[7] ISO 9000:2015, Quality management systems — Fundamentals and vocabulary

[8] ISO/IEC Guide 99:2007, International vocabulary of metrology — Basic and general concepts and associated terms (VIM)

[9] ISO/IEC 17025:2017, General requirements for the competence of testing and calibration laboratories

[10] ISO/IEC 22989:2022, Information technology — Artificial intelligence — Artificial intelligence concepts and terminology

[11] ISO/IEC Guide 51:2014, Safety aspects — Guidelines for their inclusion in standards

[12] ISO/IEC Guide 63:2019, Guide to the development and inclusion of aspects of safety in International Standards for medical devices

[13] ISO/IEC/IEEE 15288:2023, Systems and software engineering — System life cycle processes

[14] EN 62586‑2, Power quality measurement in power supply systems - Part 2: Functional tests and uncertainty requirements

[15] prEN 18283:—, Artificial Intelligence — Concepts, measures and requirements for managing bias in AI systems

[16] prEN 18284:—, Artificial Intelligence — Quality and governance of datasets in AI

[17] ASC 6.2 Accessible and Equitable Artificial Intelligence Systems (Accessible Canada Standards Development Organization) [Accessed 16th September 2025]. Available from: https://accessible.canada.ca/creating-accessibility-standards/overview-asc-62-accessible-equitable-artificial-intelligence-systems.

[18] prEN 18282:—, Artificial Intelligence — Cybersecurity specifications for AI Systems

[19] ISO/IEC 24970:—, Artificial intelligence — AI system logging

[20] prEN 18229‑2:—, AI trustworthiness framework – Part 2: Accuracy and robustness

[21] prEN 18229‑1:—, AI trustworthiness framework – Part 1: Logging, transparency and human oversight

1) Under preparation. Stage at time of publication: Working Draft.
