ISO/IEC DIS 25023
ISO/IEC DIS 25023: Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Measurement of product quality

ISO/IEC DIS 25023:2026(en)

ISO/IEC JTC 1/SC 7/WG 6

Secretariat: BIS

Date: 2026-01-17

Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Measurement of product quality

© ISO/IEC 2026

All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below or ISO’s member body in the country of the requester.

ISO copyright office

CP 401 • Ch. de Blandonnet 8

CH-1214 Vernier, Geneva

Phone: +41 22 749 01 11

Fax: +41 22 749 09 47

Email: copyright@iso.org

Website: www.iso.org

Published in Switzerland

Contents

Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
3.1 ICT Product
3.2 measure, noun
3.3 measure, verb
3.4 measurement
3.5 measurement function
3.6 product
3.7 property to quantify
3.8 quality measure
3.9 quality measure element (QME)
3.10 quality model
3.11 quality property
3.12 quality characteristic
3.13 product quality
4 Abbreviated terms
5 Use of product quality measures
5.1 Requirement of this document to be applied
5.2 Product quality measurement concepts
5.3 Approach to quality measurement
6 Format used for documenting the quality measures
7 Product quality measures
7.1 General
7.2 Functional suitability measures
7.2.1 General
7.2.2 Functional completeness measures
7.2.3 Functional correctness measures
7.2.4 Functional appropriateness measures
7.3 Performance efficiency measures
7.3.1 General
7.3.2 Time behaviour measures
7.3.3 Resource utilization measures
7.3.4 Capacity measures
7.4 Compatibility measures
7.4.1 General
7.4.2 Co-existence measures
7.4.3 Interoperability measures
7.5 Interaction capability measures
7.5.1 General
7.5.2 Appropriateness recognizability measures
7.5.3 Learnability measures
7.5.4 Operability measures
7.5.5 User error protection measures
7.5.6 User engagement measures
7.5.7 Inclusivity measures
7.5.8 User assistance measures
7.5.9 Self-descriptiveness measures
7.6 Reliability measures
7.6.1 General
7.6.2 Faultlessness measures
7.6.3 Availability measures
7.6.4 Fault tolerance measures
7.6.5 Recoverability measures
7.7 Security measures
7.7.1 General
7.7.2 Confidentiality measures
7.7.3 Integrity measures
7.7.4 Non-repudiation measures
7.7.5 Accountability measures
7.7.6 Authenticity measures
7.7.7 Resistance measures
7.8 Maintainability measures
7.8.1 General
7.8.2 Modularity measures
7.8.3 Reusability measures
7.8.4 Analysability measures
7.8.5 Modifiability measures
7.8.6 Testability measures
7.9 Flexibility measures
7.9.1 General
7.9.2 Adaptability measures
7.9.3 Scalability measures
7.9.4 Installability measures
7.9.5 Replaceability measures
7.10 Safety measures
7.10.1 General
7.10.2 Operational constraint measures
7.10.3 Risk identification measures
7.10.4 Fail safe measures
7.10.5 Hazard warning measures
7.10.6 Safe integration measures
Annex A (informative) Considerations for the use of quality measures
Annex B (informative) QMEs used to define product or system quality measures
Annex C (informative) Detailed explanation of measurement types
Annex D (informative) Application of Quality Measures at Different Stages
Bibliography

Foreword

ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission) form the specialized system for worldwide standardization. National bodies that are members of ISO or IEC participate in the development of International Standards through technical committees established by the respective organization to deal with particular fields of technical activity. ISO and IEC technical committees collaborate in fields of mutual interest. Other international organizations, governmental and non-governmental, in liaison with ISO and IEC, also take part in the work.

The procedures used to develop this document and those intended for its further maintenance are described in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for the different types of documents should be noted. This document was drafted following the editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives or www.iec.ch/members_experts/refdocs).

ISO and IEC draw attention to the possibility that the implementation of this document may involve the use of (a) patent(s). ISO and IEC take no position concerning the evidence, validity, or applicability of any claimed patent rights in respect thereof. As of the date of publication of this document, ISO and IEC had not received notice of (a) patent(s) that may be required to implement this document. However, implementers are cautioned that this may not represent the latest information, which may be obtained from the patent database available at www.iso.org/patents and https://patents.iec.ch. ISO and IEC shall not be held responsible for identifying any or all such patent rights.

Any trade name used in this document is information given for the convenience of users and does not constitute an endorsement.

For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and expressions related to conformity assessment, as well as information about ISO's adherence to the World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT) see www.iso.org/iso/foreword.html. In the IEC, see www.iec.ch/understanding-standards.

This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology, Subcommittee SC 7, Software and systems engineering.

Any feedback or questions on this document should be directed to the user’s national standards body. A complete listing of these bodies can be found at www.iso.org/members.html and www.iec.ch/national-committees.

Introduction

This document is a part of the SQuaRE family of International Standards. It provides a set of quality measures for the characteristics of ICT products, including software products, that can be used for specifying requirements and for measuring and evaluating product quality, in conjunction with the other International Standards in the SQuaRE family, especially ISO/IEC 25010, ISO/IEC 25030, ISO/IEC 25040 and ISO/IEC 25041. Throughout this document, the term “product” is used for ICT products, which can include software, data, hardware, communication facilities, and other ICT products.

The second edition of ISO/IEC 25010 replaces ISO/IEC 25010:2011 and has been technically revised: the quality model overview and usage have been moved to ISO/IEC 25002, and the quality-in-use model has been moved to ISO/IEC 25019.

The major changes in the quality model defined in the second edition of ISO/IEC 25010 are as follows:

a) The target of the product quality model has been extended to include various types of ICT products and information systems.

b) Safety has been added as a quality characteristic with the subcharacteristics operational constraint, risk identification, fail safe, hazard warning, and safe integration.

c) Usability and portability have been replaced with interaction capability and flexibility respectively.

d) Inclusivity and self-descriptiveness, resistance, and scalability have been added as subcharacteristics of interaction capability, security, and flexibility respectively.

e) User interface aesthetics and maturity have been replaced with user engagement and faultlessness respectively.

f) Accessibility has been split into inclusivity and user assistance.

NOTE Accessibility and “inclusivity and user assistance” have different meanings. Strictly speaking, accessibility is not a product quality subcharacteristic, just as usability is not. However, inclusivity and user assistance have been defined because subcharacteristics of interaction capability need to cover terms related to diversity.

g) Several characteristics and subcharacteristics have been given more accurate names and definitions.

The set of quality measures in this document were selected based on their practical value. They are not intended to be exhaustive and users of this document are encouraged to refine them if necessary.

This document is a part of the ISO/IEC 2502n series that currently consists of the following International Standards:

— ISO/IEC 25020 — Quality measurement framework: provides a reference model and guide for measuring the quality characteristics defined in ISO/IEC 2501n quality model division.

— ISO/IEC 25021 — Quality measure elements: provides a format for specifying quality measure elements and some examples of quality measure elements (QMEs) that can be used to construct software quality measures.

— ISO/IEC 25022 — Measurement of quality-in-use: provides measures including associated measurement functions for the quality characteristics in the quality-in-use model.

— ISO/IEC 25023 — Measurement of product quality: provides measures including associated measurement functions for the quality characteristics in the product quality model.

— ISO/IEC 25024 — Measurement of data quality: provides measures including associated measurement functions for the quality characteristics in the data quality model.

— ISO/IEC TS 25025 — Measurement of IT service quality: provides measures including associated measurement functions for the quality characteristics in the IT service quality model.

Figure 1 depicts the relationship between this document and the other International Standards in the ISO/IEC 2502n division. Developers, evaluators, quality managers, acquirers, suppliers, maintainers, and users of target system/software products can select measures from these International Standards for the measurement of quality characteristics of interest. This could be for defining requirements, evaluating system/software products, performing quality management activities, or for other purposes.

Figure 1 — Structure of the Quality Measurement Division

The divisions within the SQuaRE family are:

ISO/IEC 2500n - Quality Management Division. The International Standards that form this division define all common models, terms, and definitions further referred to by all other International Standards from the SQuaRE family. This division also provides requirements and guidance for a supporting function that is responsible for the management of the requirements, specification, and evaluation of software product quality. Practical guidance on the use of the quality models is also provided.

ISO/IEC 2501n - Quality Model Division. The International Standards that form this division present detailed quality models for computer systems and software products, data, IT services, and quality-in-use. Practical guidance on the use of the quality models is provided by ISO/IEC 25002 in the Quality Management Division.

ISO/IEC 2502n - Quality Measurement Division. The International Standards that form this division include quality measurement framework, mathematical definitions of quality measures, and practical guidance for their application. Examples are given of quality measures for the internal and external properties of product, data, IT services, and quality-in-use. Quality Measure Elements (QME) forming foundations for quality measures for the internal and external properties of product are defined and presented.

ISO/IEC 2503n - Quality Requirements Division. The International Standards that form this division help specify quality requirements, based on quality models and quality measures. These quality requirements can be used in the process of quality requirements elicitation for information systems and IT services to be developed or as input for an evaluation process.

ISO/IEC 2504n - Quality Evaluation Division. The International Standards that form this division provide requirements, recommendations, and guidelines for software product evaluation, whether performed by evaluators, acquirers, or developers. The guideline for documenting a measure as an Evaluation Module is also provided.

ISO/IEC 25050 to 25099 - SQuaRE Extension Division. These International Standards currently include requirements for the quality of ready-to-use software products (RUSP), instructions for testing, requirements for the quality of Commercial Off-The-Shelf software, Common Industry Formats for usability reports, and quality models and measures for new technologies such as cloud services and artificial intelligence.

Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Measurement of product quality

1 Scope

This document defines quality measures for quantitatively evaluating product quality in terms of characteristics and subcharacteristics defined in ISO/IEC 25010 and is intended to be used together with ISO/IEC 25010. It can be used in conjunction with the ISO/IEC 2503n and the ISO/IEC 2504n standards or to more generally meet user needs concerning ICT products and software products quality.

This document contains a basic set of quality measures for each characteristic and subcharacteristics. It includes, as informative annexes, considerations for the use of quality measures (Annex A), QMEs used to define product or system quality measures (Annex B), and a detailed explanation of measurement types (Annex C).

This document does not assign specific ranges of values of the measures to rated levels or grades of compliance, because these values are determined for each system, product, or part of a product, depending on factors such as the category of the software, the integrity level, and users’ needs. Some attributes can have a desirable range of values that does not depend on specific user needs but on generic factors.

The proposed quality measures are primarily intended to be used for quality assurance and improvement of products during and after the development life cycle process.

The main users of this document are people carrying out quality requirement specification and evaluation activities as part of the following:

— development: including requirements analysis, design specification, coding, and testing through acceptance during the life cycle process;

— quality management: systematic examination of the software product or computer system, for example, when evaluating system or software product quality as part of quality assurance, quality control, and quality certification;

— supply: a contract with the acquirer for the supply of a system, software product, or software service under the terms of a contract, for example, when validating quality at qualification test;

— acquisition: including product selection and acceptance testing, when acquiring or procuring a system, software product, or software service from a supplier;

— maintenance: improvement of the software product or system based on quality measurement.

2 Normative references

The following documents, in whole or in part, are normatively referenced in this document and are indispensable for its application. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.

ISO/IEC 25000, Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Guide to SQuaRE

ISO/IEC 25002, Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality model overview and usage

ISO/IEC 25010, Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Product quality model

ISO/IEC 25020, Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality measurement framework

3 Terms and definitions

For the purposes of this document, the terms and definitions given in ISO/IEC 25000 and the following apply.

ISO and IEC maintain terminological databases for use in standardization at the following addresses:

— ISO Online browsing platform: available at https://www.iso.org/obp

— IEC Electropedia: available at https://www.electropedia.org/

NOTE The essential definitions from ISO/IEC 25000 and other ISO standards are reproduced here.

3.1

ICT Product

product that uses Information and Communication Technologies, and can be a part of an information system

[SOURCE: ISO/IEC 25030:2019]

Note 1 to entry: ICT products can comprise other ICT products (sub-products), and sometimes a component of an ICT product can also be considered an ICT product by itself. Examples of ICT products include computer hardware, software products, software components, and data.

Note 2 to entry: ICT product refers to the combination of one or more technology components (e.g., cloud, internet, data, multimedia, communication, hardware, firmware, software, and middleware) that enables modern computing and allows people and organizations to interact and operate in the digital world.

Note 3 to entry: ICT products do not include people, machines, infrastructure, and other facilities which are independent of communication and data. ICT Product includes hardware with embedded computers, such as sensors and communicators, but not the users.

Note 4 to entry: While many artifacts like data sheets, user manuals, installation manuals, operations guides, and configuration guides contribute to the quality of an ICT Product and the information system that constitutes it, they are not ICT products by themselves.

3.2

measure, noun

variable to which a value is assigned as the result of measurement

Note 1 to entry: The plural form “measures” is used to refer collectively to base measures, derived measures, and indicators.

[SOURCE: ISO/IEC/IEEE 15939:2017, 3.15]

3.3

measure, verb

make a measurement

[SOURCE: ISO/IEC 25000:2014, 4.19]

3.4

measurement

set of operations having the objective of determining the value of a measure

Note 1 to entry: Measurement can include assigning a qualitative category such as the language of a source program (ADA, C, COBOL, etc.).

[SOURCE: ISO/IEC 25000:2014, 4.20]

3.5

measurement function

algorithm or calculation performed to combine two or more quality measure elements

[SOURCE: ISO/IEC 25021:2012, 4.7, modified]

3.6

product

artifact that is produced, is quantifiable and is deliverable to the user as either an end item in itself or a component item

[SOURCE: ISO/IEC 25030:2019]

Note 1 to entry: In this document, product refers to an ICT product that is part of an information system. ICT product components include subsystems, software, firmware, hardware, data, communication infrastructure, and other elements that are part of the ICT Product.

3.7

property to quantify

property of a target entity that is related to a quality measure element and which can be quantified by a measurement method

Note 1 to entry: A software artifact is an example of a target entity.

[SOURCE: ISO/IEC 25021:2012, 4.11, modified]

3.8

quality measure

measure that is defined as a measurement function of two or more values of quality measure elements

[SOURCE: ISO/IEC 25010:2023]

Note 1 to entry: Quality measures can be considered as derived properties of an ICT product or information system.

Note 2 to entry: Inherent (structural) quality measures quantify structural properties of the ICT product or information system, while behavioural quality measures quantify properties that can be identified and measured on the ICT product or information system as a whole and its behaviour in a context of use.

3.9

quality measure element (QME)

measure defined in terms of a property and the measurement method for quantifying it, including optionally the transformation by a mathematical function

[SOURCE: ISO/IEC 25021:2012, 4.14]

3.10

quality model

defined set of characteristics and relationships between them, which provides a framework for specifying quality requirements and evaluating the quality

[SOURCE: ISO/IEC 25000:2014, 4.27]

3.11

quality property

property of a target entity that is related to a quality measure element, and which can be quantified by a measurement method

[SOURCE: ISO/IEC 25020:2019, 3.11, modified]

Note 1 to entry: Quality properties can be used either in the measurement of quality or just for providing qualitative feedback.

3.12

quality characteristic

category of quality attributes that bears on the quality of the ICT product or information system

Note 1 to entry: Quality characteristics can be further divided into quality subcharacteristics. While characteristics typically represent one aspect of quality that is of interest to stakeholders, quality subcharacteristics can help subdivide quality characteristics into individual aspects that help map them to quality properties.

[SOURCE: ISO/IEC 25000:2014, 4.34, formerly software quality characteristic, adapted to apply to a larger scope of products and systems, Note 1 to entry added]

3.13

product quality

capability of an ICT product or its components to satisfy stated and implied quality needs when used under specific conditions

[SOURCE: ISO/IEC 25010:2023]

Note 1 to entry: This definition differs from the ISO 9000:2015 quality definition mainly because the software quality definition refers to the satisfaction of stated and implied needs, while the ISO 9000 quality definition refers to the satisfaction of requirements.

Note 2 to entry: Typically, users do not consider systems that only satisfy requirements as high-quality systems. Quality is related to satisfying and even surpassing expectations with associated constraints and conditions.

4 Abbreviated terms

The following abbreviated terms are used in this document.

QM

quality measure

QME

quality measure element

5 Use of product quality measures

5.1 Requirement of this document to be applied

Any quality requirement specification or quality evaluation that conforms to this document shall:

a) select the quality characteristics and/or subcharacteristics to be specified or evaluated as defined in ISO/IEC 25010;

b) for each selected characteristic or subcharacteristic, use all the Generic (G) quality measures defined in Clause 7 as a recommended baseline set; these may be adapted or excluded with appropriate justification;

c) optionally select any Specific (S) quality measures in Clause 7 that are relevant;

d) if any quality measure is modified, provide the rationale for the changes;

e) define, as per ISO/IEC 25020 and ISO/IEC 25021, any additional QMs (quality measures) and QMEs (quality measure elements) that are not included in this document.

5.2 Product quality measurement concepts

The quality of a product is the degree to which it satisfies the stated and implied needs of its various stakeholders; by meeting these needs, the product provides value to them. These stated and implied needs are represented in the SQuaRE family of standards by quality models that categorize product quality into characteristics, which in some cases are further subdivided into subcharacteristics, as described in ISO/IEC 25002.

The measurable quality-related properties of a product are called properties to quantify and can be associated with quality measures. These properties are measured by applying a measurement method, a logical sequence of operations used to quantify properties with respect to a specified scale. The result of applying a measurement method is called a quality measure element. The quality characteristics and subcharacteristics can be quantified by applying measurement functions, algorithms used to combine quality measure elements. The result of applying a measurement function is called a quality measure. In this way, quality measures quantify the quality characteristics and subcharacteristics. More than one quality measure can be used for the measurement of a quality characteristic or subcharacteristic (see Figure 2).
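The chain from property to quantify, through QMEs and a measurement function, to a quality measure can be sketched in code. This is an illustrative sketch only, not part of the standard; the functional-completeness example and all names and data are hypothetical:

```python
def count_implemented_functions(spec):
    """QME A: number of functions implemented as specified (measurement method)."""
    return sum(1 for f in spec if f["implemented"])

def count_specified_functions(spec):
    """QME B: number of functions described in the specification."""
    return len(spec)

def functional_completeness(spec):
    """Quality measure X = A / B, normalized to the range 0.0 to 1.0."""
    a = count_implemented_functions(spec)
    b = count_specified_functions(spec)
    return a / b if b else None  # undefined when B = 0

# hypothetical specification data
spec = [{"name": "login", "implemented": True},
        {"name": "export", "implemented": True},
        {"name": "audit", "implemented": False}]
print(functional_completeness(spec))  # 2 of 3 functions -> ~0.667
```

Here the two counting functions play the role of measurement methods producing QMEs, and `functional_completeness` is the measurement function combining them.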

<The following figure including NOTE will be modified according to current situations>

Figure 2 — Relationship among quality model, QM, QME, property to quantify, target entity

NOTE This figure is modified from the quality measurement reference model in ISO/IEC 25020 Quality measurement framework, Figure 2. Target entity can be a system, a software product, data, or IT service, and those in use. Quality models for them are provided in ISO/IEC 25010, ISO/IEC 25011, ISO/IEC 25012, and ISO/IEC 25019, respectively.

5.3 Approach to quality measurement

User needs for quality include requirements for product quality-in-use in specific contexts of use. These identified needs can be considered when specifying behavioural and inherent (structural) measures of quality using product quality characteristics and subcharacteristics. Product quality can be evaluated by measuring inherent properties (typically static measures of intermediate products), by measuring behavioural properties (typically by measuring the behaviour of the code when executed), or by measuring quality-in-use properties (when the product is in real or simulated use). Appropriate inherent properties of the product are a prerequisite for achieving the required behaviour, and appropriate behaviour is a prerequisite for achieving quality-in-use (see Figure 3).

Figure 3 — Relationship between types of quality measures

The inherent measures can be applied to a non-executable product during its development stages (such as a request for proposal, requirements definition, design specification, or source code) which can be verified by review, inspection, simulation, and/or automated tools. Inherent measures provide the users with the ability to measure the quality of the intermediate deliverables and thereby predict the quality of the final product. This allows the user to identify quality issues and initiate corrective action as early as possible in the development life cycle. For example, complexity measures and the number, severity, and failure frequency of faults found in a walk-through are inherent measures of software quality made on the product itself.

The behavioural measures can be used to measure the quality of the product by measuring the behaviour of the system of which it is a part. The behavioural measures can only be used during the testing stages of the life cycle process and any operational stages. The measurement is performed when executing the product in the system environment in which it is tested and/or intended to operate. For example, the number of failures found during testing is a behavioural measure of software quality related to the number of faults present in the computer system. It is recommended, where possible, to use inherent measures that have a strong relationship with the target behavioural measures so that they can be used to predict the values of behavioural measures. This document provides a suggested set of product quality measures (behavioural and inherent measures) to be used with the ISO/IEC 25010 product quality model. The user of this document can modify the quality measures defined and can also define and use quality measures not identified or defined in this document.

NOTE 1 For example, the specific measurement of quality characteristics, such as safety or security, can be found in the International Standards provided by IEC 65 and ISO/IEC JTC 1/SC 27.

NOTE 2 When applying contemporary development practices (e.g., Agile methodologies, DevOps), it is suggested to refine quality measures in response to evolving requirements.

When using a modified or a new quality measure not identified in this document, the user should specify how the measure relates to the ISO/IEC 25010 product quality model or any other substitute quality model that is being used. Most quality measures use a measurement function, which normalizes the result value within a range from 0.0 to 1.0. Closer to 1.0 is better. When this is not true, the interpretation is described in a NOTE.

Some quality measures produce a result that is relative to a target value that needs to be established as part of the requirements.

NOTE 3 Some measurements are normalized against a target value specified in a requirements specification, a design specification, or user documentation. Such a target value can be determined and required as a threshold by developers, maintainers, or testers to improve the architecture, design, implementation, assemblies, operational procedures, user interface, or performance of the software product or system. The target value can also be specified as one of the agreed requirements by acquirers and suppliers, to specify quality requirements or to examine conformance for acquisition. A requirements specification is usually changed and revised during development, which affects the quality measures based on it. Some of the requirements to be specified might be missing or inconsistent, or some of the target values might be insufficient and need to be changed, because it is very difficult to specify completely both the stated and the implied needs derived from stakeholders or system requirements at the beginning of development. Accordingly, users of quality measures are expected to take account of evolving requirements specifications and to apply quality measures not once but iteratively during development and/or evaluation.
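Normalization against a target value, as described in NOTE 3, can be sketched as follows; this is a hypothetical illustration, and the function name and values are invented:

```python
def normalized_against_target(measured, target):
    """Return measured/target, clamped to 1.0 (closer to 1.0 is better)."""
    if target <= 0:
        raise ValueError("target value must be positive")
    return min(measured / target, 1.0)

# e.g. 45 of a required 50 transactions met their response-time target
print(normalized_against_target(45, 50))  # -> 0.9
```

When the requirements specification is revised, only the target value changes; the measure itself can then be recomputed iteratively during development, as the note suggests.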

NOTE 4 Some quality measures (such as mean response time) can be difficult to interpret in isolation. The following are ways that quality measures can be applied so that they are easier to understand and interpret:

a) conformance: comparing measures with specific business or usage requirements (e.g. the maximum acceptable response time is 0.5 seconds);

b) benchmarks: comparing measures with a benchmark for the same or a similar product or system used for the same purpose (e.g. the mean response time of the new system is no more than the mean response time of the old system);

c) time series: comparing trends over time (e.g. how does the mean response time change during the day).

NOTE 5 These applications support decision-making in quality management, enabling performance tracking, comparative analysis, and compliance verification.
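The three interpretation approaches in NOTE 4 can be illustrated with a short sketch; the thresholds and response-time data below are invented for illustration:

```python
# hypothetical mean response times (seconds) sampled over one day
response_times = {"09:00": 0.31, "12:00": 0.48, "18:00": 0.62}

# a) conformance: compare each value with a stated requirement
requirement_max = 0.5
conforms = {t: v <= requirement_max for t, v in response_times.items()}

# b) benchmark: compare the new system's mean with the old system's mean
old_system_mean = 0.55
new_mean = sum(response_times.values()) / len(response_times)
no_worse_than_old = new_mean <= old_system_mean

# c) time series: inspect the trend over the day
trend = sorted(response_times.items())

print(conforms, no_worse_than_old, trend)
```

Each approach turns the same raw measure into a decision-ready comparison: against a requirement, against a baseline, or against its own history.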

The values of the quality measures defined in this document depend on the quality and extent of the testing process, including the adequacy of test cases, the coverage of normal and exceptional conditions, and the thoroughness of reviews. Consequently, the reported values should be interpreted with this context in mind.

6 Format used for documenting the quality measures

The following information is given for each quality measure in the tables in Clause 7:

a) ID: identification code of the quality measure; each ID consists of the following parts:

— an abbreviated alphabetic code for the characteristic (e.g., “F” for Functional suitability);

— a code for the subcharacteristic (e.g., “FCp” for Functional completeness);

— a serial number (e.g., 1, 2, 3);

— a measure type indicator (“G” for Generic or “S” for Specific);

EXAMPLE FCp-1-G denotes the first Generic measure for Functional completeness.

b) Name: quality measure name;

c) Description: the information provided by the quality measure;

d) Measurement function: mathematical formula showing how the quality measure elements are combined to produce the quality measure.

NOTE Useful QMEs that are frequently used to construct quality measures are specified briefly in Annex B to help users comprehend and apply the measurement functions for the quality measures.
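The ID scheme in item a) can be exercised with a small parser. This is an illustrative sketch only; the function and field names are ours, not defined by this document:

```python
import re

def parse_measure_id(measure_id: str) -> dict:
    """Split a quality-measure ID such as 'FCp-1-G' into its parts.

    The characteristic/subcharacteristic code, serial number, and
    type indicator ('G' for Generic, 'S' for Specific) are separated
    by hyphens.
    """
    m = re.fullmatch(r"([A-Z][A-Za-z]*)-(\d+)-([GS])", measure_id)
    if m is None:
        raise ValueError(f"not a valid quality-measure ID: {measure_id!r}")
    code, serial, kind = m.groups()
    return {
        "subcharacteristic_code": code,  # e.g. 'FCp' (characteristic 'F' + 'Cp')
        "serial": int(serial),
        "type": "Generic" if kind == "G" else "Specific",
    }

print(parse_measure_id("FCp-1-G"))
```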

7 Product quality measures

7.1 General

The quality measures in Clause 7 are listed by quality characteristics and subcharacteristics in the order used in ISO/IEC 25010.

Quality measures can be used with different evaluation techniques, chosen according to the quality characteristics and evaluation rating levels and depending on whether they are used as inherent or behavioural measures. Accordingly, some quality measures listed in Clause 7 can be used at different stages of evaluation, such as static review of a design specification or dynamic analysis of an executable product.

Applicable quality measures are not limited to those listed here. It is recommended to refer to specific measures or measurements from dedicated International Standards or guidelines. For example, functional size measurement is defined in ISO/IEC 14143, and precise time efficiency measurement is addressed in ISO/IEC 14756.

NOTE 1 Measurement functions are typically defined in the form X = A / B, where the value of B is generally determined based on requirements or standards.

NOTE 2 This list of quality measures is not finalized and might be revised in future versions of this document. Readers of this document are invited to provide feedback.

NOTE 3 In this clause, the word measure means quality measure unless otherwise mentioned. For example, “Functional suitability measures” means “Functional suitability quality measures”.

According to ISO/IEC 25020, users can also modify the measures defined in ISO/IEC 25023 or use measures that are not included in it. Further information can be found in 6.3 of ISO/IEC 25020, which includes in Annex C an example of how to document a quality measure. For example, the quality measure element B of Fault resolution rate can be modified from "Number of reliability-related faults detected during development and operation" to "Number of reliability-related faults detected by users during testing and operation" to better reflect measurement conditions where users collect data during the testing and operation stages.

7.2 Functional suitability measures

7.2.1 General

Functional suitability measures are used to assess the capability of a product to provide functions that meet the stated and implied needs of intended users when it is used under specified conditions.

NOTE A function referred to here can be an elementary process as defined in functional user requirements in ISO/IEC 14143.

7.2.2 Functional completeness measures

Functional completeness measures are used to assess the capability of a product to provide a set of functions that covers all the specified tasks and intended users’ objectives.

Table 1 — Functional completeness measures

ID

Name

Description

Measurement function

FCp-1-G

Functional completeness

What proportion of the specified product functions are implemented?

X = 1 - A/B

A

= Number of functions missing

B

= Number of functions specified

NOTE 1 The denominator B is required to be non-zero; if B = 0, this measure is not applicable.

NOTE 2 Functions can be specified in a requirement specification, a design specification, a user manual, or all of these.

NOTE 3 A missing function is detected when the system or software product cannot perform a function that is specified.

NOTE 4 The functions are implemented to meet tasks and intended user objectives. The tasks and intended user objectives can be specified in a requirement specification, a design specification, user documentation, a test document, or all of these.

NOTE 5 ISO/IEC 14143 can be used to identify the functions of a product.

FCp-2-G

Functional

requirement

completeness

What proportion of the specified functional requirements have been implemented?

X = 1 - A/B

A

= Number of functional requirements not implemented

B

= Total number of specified functional requirements
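The FCp-1-G function, including the guard for an empty specification described in NOTE 1, can be sketched as follows. This is a minimal illustration; the function name is ours:

```python
def functional_completeness(missing: int, specified: int) -> float:
    """FCp-1-G: X = 1 - A/B, where A = number of functions missing
    and B = number of functions specified.  Per NOTE 1, the measure
    is not applicable when B = 0."""
    if specified == 0:
        raise ValueError("measure not applicable: no functions specified")
    if not 0 <= missing <= specified:
        raise ValueError("missing count must be between 0 and specified")
    return 1 - missing / specified

# e.g. 3 of 40 specified functions found missing during evaluation
print(functional_completeness(3, 40))
```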

7.2.3 Functional correctness measures

Functional correctness measures are used to assess the capability of a product to provide accurate results when used by intended users.

Table 2 — Functional correctness measures

ID

Name

Description

Measurement function

FCr-1-G

Functional correctness

What proportion of functions provide the correct results?

X = 1 - A/B

A

= Number of incorrect functions

B

= Number of functions considered

NOTE 1 An incorrect function does not provide a reasonable and acceptable outcome to achieve the specific intended objective.

NOTE 2 The functions considered for evaluation can be all the functions of a product or a specific set of functions required for a particular usage.

NOTE 3 The developer or tester can examine an individual function by review or testing and determine whether the function provides suitable outcomes for the specific objectives defined in the requirements specification. In such a case, the degree of correctness is determined per individual function.

NOTE 4 The result of functional accuracy measurement can be included in the result of functional correctness measurement.

FCr-2-G

Functional

accuracy

How accurate are the results provided by a specific function?

X = 1 - A/B

A

= Number of test cases that do not meet the specified threshold

B

= Number of test cases evaluating the specific function

NOTE 1 Higher values of X (e.g., above 0.9) generally indicate high accuracy, depending on the context.

EXAMPLE If 10 out of 100 input images are incorrectly classified by an object recognition function, the functional accuracy is 0.9.

NOTE 2 Functional correctness measures whether the product's functions are provided correctly, while functional accuracy focuses on measuring the accuracy of a specific function.

FCr-3-G

Functional

precision

How precise are the results provided by a specific function?

X = 1 - A/B

A

= Number of test cases that do not meet the required degree of precision

B

= Number of test cases evaluating the specific function

NOTE 1 The precision indicates how well the test results fall within the acceptable range of error. The acceptable range is defined based on specific testing objectives and requirements (e.g., Acceptable range: ±0.5 °C). The closer the measured values are to a central value (i.e., the smaller the standard deviation), the higher the precision.

NOTE 2 If the output values for given input values are sparse rather than dense, the function can be considered to have low precision. For example, a function that uses floating-point values produces imprecise results because floating-point numbers are represented as approximations. When measuring precision, such cases can be taken into account where the output is sparse due to rounding, approximation, or physical representation (e.g., a floor function), despite the expectation of dense or continuous values.

NOTE 3 The value of X ranges from 0 to 1, and higher values are preferable. However, this interpretation depends on how close the measured values are to their mean, i.e., on the actual standard deviation.
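The EXAMPLE under FCr-2-G (10 of 100 input images misclassified) corresponds to the following sketch; the function name is ours:

```python
def functional_accuracy(failing_cases: int, total_cases: int) -> float:
    """FCr-2-G: X = 1 - A/B, where A = test cases that do not meet
    the specified threshold and B = test cases evaluating the function."""
    if total_cases == 0:
        raise ValueError("no test cases evaluated the function")
    return 1 - failing_cases / total_cases

# 10 of 100 input images incorrectly classified -> accuracy 0.9
print(functional_accuracy(10, 100))
```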

7.2.4 Functional appropriateness measures

Functional appropriateness measures are used to assess the capability of a product to provide functions that facilitate the accomplishment of specified tasks and objectives.

Table 3 — Functional appropriateness measures

ID

Name

Description

Measurement function

FAp-1-G

Functional appropriateness

What proportion of functions are traceable to the specified requirements?

X = A/B

A

= The number of functions which are traceable to the specified requirements

B

= Number of functions considered

NOTE 1 The functions considered for evaluation can be all the functions of a product or a specific set of functions required for a particular usage.

NOTE 2 Even if the value of this measure is good, when the requirements specification does not adequately cover the functionality, appropriateness will be reduced by missing or inefficient functions for accomplishing the tasks and objectives during operation. Therefore, when defining requirements, it is helpful to strive to adequately define stakeholder needs and requirements (e.g., by using prototypes and defining requirements iteratively).

7.3 Performance efficiency measures

7.3.1 General

Performance efficiency measures are used to assess the capability of a product to perform its functions within specified time and throughput parameters and be efficient in the use of resources under specified conditions. Resources can include other software products, the software and hardware configuration of the system, and materials (e.g. print paper, storage media).

NOTE 1 Performance efficiency measures are strongly affected by, and fluctuate with, the conditions of use, such as the processing data load, frequency of use, and number of connecting sites. Therefore, performance efficiency measures might include the ratio of an estimated or measured value, with its error variance, to the designed value with the allowed error variance range required by the specification. It is recommended to list and investigate the role played by factors such as CPU and memory used by other software, network traffic, and scheduled background processes. Possible variances and valid ranges for estimated or measured values can be established and compared to the requirement specifications.

NOTE 2 The statistical reliability of performance efficiency measures is greatly influenced by the number of observations. For instance, with the same sample standard deviation, 30 measurements provide a significantly more reliable mean than only 3 measurements.

NOTE 3 It is also good practice to identify and define a task suitable for performance efficiency or capacity measures; for example, a transaction as a task for a business application, a switching operation or data packet sent as a task for a communication application, an event control as a task for a control application, and an output of data produced by a user-callable function as a task for a common user application.
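The point in NOTE 2 can be quantified with the standard error of the mean, s/√n: with the same sample standard deviation, 30 observations give a standard error √10 ≈ 3.2 times smaller than 3 observations. This is an illustrative sketch, not part of the standard:

```python
import math

def standard_error_of_mean(sample_std: float, n: int) -> float:
    """Standard error of the mean, s / sqrt(n): with the same sample
    standard deviation, more observations give a more reliable mean."""
    if n < 1:
        raise ValueError("need at least one observation")
    return sample_std / math.sqrt(n)

s = 12.0                              # same sample standard deviation in both cases
print(standard_error_of_mean(s, 3))   # 3 observations: larger standard error
print(standard_error_of_mean(s, 30))  # 30 observations: sqrt(10) times smaller
```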

7.3.2 Time behaviour measures

Time behaviour measures are used to assess the capability of a product to perform its specified function under specified conditions so that the response time and throughput rates meet the requirements.

Table 4 — Time behaviour measures

ID

Name

Description

Measurement function

PTb-1-G

Mean system wait time

What is the mean time between the arrival of a request and the point when the processing begins?

X = (Σ Ai) / n

Ai

= Time between the arrival of the i-th request and the point when the processing begins

n

= Number of requests measured

PTb-2-G

Mean response time

How long is the mean time for the product to respond to a user request to perform specified tasks under specified conditions?

X = (Σ Ai) / n

Ai

= Time taken by the product to respond to the i-th user request to perform a specific user task

n

= Number of responses measured

NOTE 1 In the case of a pipeline (e.g. a systems chain), the elapsed time in each stage of the pipeline has to be considered and bottlenecks in one stage can affect overall turnaround time.

NOTE 2 It is a good practice to use this measure in conjunction with specified payload and/or workload.

NOTE 3 See Figure 4 below for a visual distinction between response time and turnaround time.

PTb-3-G

Response time adequacy

What proportion of the product response times meet the specified response time requirements?

X = 1 - A/B

A

= Number of response instances that do not meet the specified response time requirements

B

= Number of response time requirements

NOTE 1 Response time is the duration it takes for the initial response to be received after a request has been initiated, whereas turnaround time represents the total duration required for a task to be initiated and completed.

NOTE 2 An alternative to this measure is nth percentile response time under expected load conditions. It is also useful to apply it to individual functions or classes of functions.

PTb-4-G

Mean

turnaround time

What is the mean time taken for completion of a job or an asynchronous process?

X = (Σ (Bi - Ai)) / n

Ai

= Time of job submission or time of starting asynchronous process i

Bi

= Time of completing the job or asynchronous process i

n

= Number of jobs or asynchronous processes

PTb-5-G

Turnaround time adequacy

What proportion of the product turnaround time meets the specified turnaround time requirements?

X = 1 - A/B

A

= Number of turnaround instances that do not meet the specified turnaround time requirements

B

= Number of turnaround times measured

Figure 4 — Comparison Between Response Time and Turnaround Time

PTb-6-G

Mean throughput

What is the mean number of jobs completed per unit of time?

X = (Σ (Ai / Bi)) / n

Ai

= Number of jobs completed during the i-th observation time

Bi

= The length of time in observation period i

n

= Number of observations

NOTE 1 Jobs can be fine-grained operations, like microprocessor operations; coarse-grained transaction processing units, like those defined by the Transaction Processing Performance Council (TPC); or higher-level abstractions, like functions. The results of this measure are therefore interpreted appropriately, taking the context of use into account.

NOTE 2 When a target throughput is specified as one of the requirements, the ratio of the measured mean throughput to the target is expected to be greater than or equal to 1 to satisfy the requirement. For example, if the required throughput is 100 requests per second and the system achieves a mean throughput of 120 requests per second, the ratio is 1.2, which satisfies the requirement.

PTb-7-G

Throughput adequacy

What proportion of the product throughput meets the specified throughput requirements?

X = 1 - A/B

A

= Number of throughput values that do not meet the specified throughput requirements

B

= Number of throughput requirements
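The time behaviour functions above can be sketched as follows: `mean_response_time` follows PTb-2-G, `response_time_adequacy` follows PTb-3-G (with the proportion taken here over measured responses, an assumption of this sketch), `percentile` illustrates the nth-percentile alternative mentioned in NOTE 2 of PTb-3-G, and `throughput_ratio` illustrates the target-ratio interpretation in NOTE 2 of PTb-6-G. All names are ours; this is a sketch, not a normative definition:

```python
import math

def mean_response_time(times):
    """PTb-2-G: mean of the measured response times Ai over n responses."""
    if not times:
        raise ValueError("no responses measured")
    return sum(times) / len(times)

def response_time_adequacy(times, limit):
    """PTb-3-G sketch: X = 1 - A/B with A = responses over the limit
    and B taken here as the number of responses measured."""
    late = sum(1 for t in times if t > limit)
    return 1 - late / len(times)

def percentile(times, p):
    """Nearest-rank nth-percentile response time (see PTb-3-G NOTE 2)."""
    ordered = sorted(times)
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]

def throughput_ratio(measured_mean, required):
    """Ratio of measured mean throughput to the required target;
    a ratio >= 1 satisfies the requirement (see PTb-6-G NOTE 2)."""
    if required <= 0:
        raise ValueError("required throughput must be positive")
    return measured_mean / required

times = [0.2, 0.3, 0.4, 0.9, 0.2]          # response times in seconds
print(mean_response_time(times))            # mean of the five responses
print(response_time_adequacy(times, 0.5))   # 4 of 5 meet the 0.5 s limit
print(percentile(times, 90))                # 90th-percentile response time
print(throughput_ratio(120, 100))           # 120 req/s against a 100 req/s target
```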

7.3.3 Resource utilization measures

Resource utilization measures are used to assess the capability of a product to use no more than the specified amount of resources to perform its function under specified conditions.

Table 5 — Resource utilization measures

ID

Name

Description

Measurement function

PRu-1-G

Peak

processor utilization

How much of the maximum processing utilization is used to execute the specific functions of the product?

X = Max (A1, A2, A3, …, An)

Ai

= Maximum processor utilization rate during each execution of a specific product function i

n

= Number of observations

NOTE 1 The result value ranges from greater than 0 to 1. Usually, smaller values are better.

NOTE 2 For the results to be statistically meaningful, it is generally good practice to repeat the measurement at least five times (i.e., n ≥ 5).

NOTE 3 The interpretation of peak utilization values depends on the system's operational context. For general-purpose systems, sustained usage above 90 % can indicate potential resource strain. However, in performance-optimized environments (e.g., HPC), high utilization is sometimes both intended and desirable. Users are advised to compare observed utilization levels against their specific system requirements and operational goals.

PRu-2-G

Mean processor utilization

What is the mean value of the processor utilization used to execute the specific functions of the product?

X = (Σ Ai) / n

Ai

= Percentage of time a core or processor is occupied, compared to the total time the core or processor is available for use, in the i-th observation

n

= Number of observations

NOTE 1 It is essential to measure not only the maximum processor utilization but also the average processor utilization.

NOTE 2 It is worth considering more robust statistics, such as the median, because mean values are sensitive to outliers.

PRu-3-G

Peak

memory utilization

How much of the maximum memory utilization is used to execute the specific functions of the product?

X = Max (A1, A2, A3, …, An)

Ai

= Maximum size of memory used to execute the specific functions of the product in i-th observation

n

= Number of observations

PRu-4-G

Mean memory utilization

What is the mean value of the memory used to execute the specific functions of the product?

X = (Σ Ai) / n

Ai

= Size of memory used to execute the specific functions of the product in i-th observation

n

= Number of observations

NOTE 1 Some operating systems, such as Linux, do not provide a single unambiguous metric for the actual memory usage of a process (e.g., because of shared pages). As an alternative, available memory can be measured instead of memory usage.

NOTE 2 To understand how much the product utilizes system resources to the maximum, it is essential not only to measure the average memory utilization but also to observe the maximum memory utilization.

PRu-5-G

Mean wait time

What is the mean time between the arrival of a request and the point when the processing begins?

X = (Σ Ai) / n

Ai

= Time between the arrival of a request and the point when the processing begins in i-th observation

n

= Number of requests measured

PRu-6-G

Peak

I/O devices utilization

How much is the maximum I/O device utilization rate used to execute the function of the product?

X = Max (A1/T1, A2/T2, …, An/Tn)

Ai

= Busy time of I/O device during the execution of the specific functions of the product in i-th observation

Ti

= Total observation time in i-th observation

n

= Number of observations

NOTE Utilization rate is measured as (Disk Busy Time / Total Observation Time) × 100 %. Busy time means the period during which a system or a device is working.

PRu-7-G

Mean

I/O devices utilization

What is the mean utilization rate of I/O devices during the operational period of the system or product?

X = (Σ Ai) / n

Ai

= Utilization rate of I/O devices during i-th measurement interval

n

= Number of observations or intervals measured

PRu-8-G

Peak

bandwidth

utilization

How much is the maximum bandwidth utilized to execute the specific functions of the product?

X = Max (A1, A2, A3, …, An)

Ai

= Maximum bandwidth during the execution of the specific functions of the product in i-th observation

n

= Number of observations

NOTE 1 The result of X is expected to be below specified bandwidth capacity.

NOTE 2 The measurer has to consider the possible communication traffic limitations (e.g. dropping or throttling) which can affect the resulting statistical values including average.

PRu-9-G

Mean bandwidth utilization

What is the mean value of the bandwidth utilization used to execute the specific functions of the product?

X = (Σ Ai) / n

Ai

= Size of bandwidth used to execute the specific functions of the product in i-th observation

n

= Number of observations

NOTE Bandwidth utilization is commonly measured in Mbps. Alternatively, it can be expressed as a percentage of maximum capacity to reflect relative usage.

PRu-10-G

Peak

energy

consumption

utilization

How much is the maximum energy consumed to execute the specific functions of the product?

X = Max (A1, A2, A3, …, An)

Ai

= Maximum energy consumed during the execution of the specific functions of the product in the i-th observation

n

= Number of observations

NOTE In physical deployments (e.g., IoT, edge devices, on-premise servers), power meters and smart PDUs can be used to directly measure energy consumption during specific function execution. In cloud environments, direct measurement is not possible. However, major cloud providers such as AWS, Azure, and Google Cloud provide tools or APIs (e.g., Cloud Carbon Footprint, Sustainability Calculator) that enable energy consumption estimation based on resource usage. These allow for approximation of peak and mean energy consumption per function or service.

PRu-11-G

Mean energy consumption utilization

What is the mean value of the energy consumption used to execute the specific functions of the product?

X = (Σ Ai) / n

Ai

= Amount of energy consumed to execute the specific functions of the product in the i-th observation

n

= Number of observations

PRu-12-G

Resource utilization

adequacy

What proportion of the resource utilization meets the specified utilization requirements?

X = 1 - A/B

A

= Number of resource utilizations that do not meet resource utilization requirements

B

= Number of resource utilization requirements
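The resource utilization measures above share two shapes: a peak taken as a maximum over observations (e.g., PRu-1-G, PRu-6-G) and a mean over observations. A sketch of both, using the PRu-6-G busy-time formulation; the function names are ours:

```python
def peak_io_utilization(busy_times, observation_times):
    """PRu-6-G: X = Max(A1/T1, ..., An/Tn), where Ai is the device's
    busy time and Ti the total observation time in the i-th observation."""
    if not busy_times or len(busy_times) != len(observation_times):
        raise ValueError("need matching, non-empty observation lists")
    return max(a / t for a, t in zip(busy_times, observation_times))

def mean_utilization(samples):
    """Generic mean over observations, as used by the 'mean ...' measures."""
    if not samples:
        raise ValueError("no observations")
    return sum(samples) / len(samples)

# three 10 s observation windows with 4 s, 7 s and 5 s of busy time
print(peak_io_utilization([4, 7, 5], [10, 10, 10]))  # -> 0.7
print(mean_utilization([0.4, 0.7, 0.5]))             # mean utilization rate
```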

7.3.4 Capacity measures

Capacity measures are used to assess the capability of a product to meet requirements for the maximum limits of a product parameter.

NOTE 1 Capacity measures are expected to be measured through dynamic analysis, such as volume testing of the system, or can be measured by system integration testing or simulation. Maximum value and distribution of the duration can be investigated for many cases of static analysis, dynamic testing, or operations.

NOTE 2 The maximum limit is expected to be specified as a target value which can theoretically be beyond a possible realistic value.

Table 6 — Capacity measures

ID

Name

Description

Measurement function

PCa-1-G

Maximum capacity used

What is the maximum amount of throughput under the given resources?

X = Max (A1, A2, A3, …, An)

Ai

= Maximum number of transactions completed during observation time under the given resources in i-th observation

n

= Number of observations

NOTE 1 The result value ranges from 0 to the maximum limit. Usually, larger values are better.

NOTE 2 This measure can be useful only if there is sufficient workload to test.

NOTE 3 Task can be used as an alternative to transaction.

NOTE 4 The capacity of a product refers to the maximum amount of throughput that can be achieved under specific conditions within a given period. This capacity can be influenced by various factors such as hardware specifications, software configuration, network bandwidth, and other resources, which collectively determine the specified conditions.

PCa-2-G

Maximum user access capability used

What is the maximum number of users who can simultaneously access the product?

X = Max (A1, A2, A3, …, An)

Ai

= Maximum number of users who can simultaneously access the product at i-th observation

n

= Number of observations

NOTE 1 The result value ranges from 0 to the maximum limit. Usually, larger values are better.

NOTE 2 In terms of performance efficiency, the term ‘users’ typically refers to concurrent users, which means the number of users who access the system simultaneously and request specific tasks.

PCa-3-G

User access

adequacy

What proportion of user access can the product accommodate?

X = A/B

A

= Number of user accesses that the product can accommodate

B

= Number of targeted user accesses that the product should accommodate utilizing available resources

NOTE 1 This measure is designed to measure whether the product can cope with the user access increase during the specific period.

NOTE 2 The result value ranges from 0 to the maximum limit. Usually, larger values are better.

NOTE 3 This measure indicates the degree to which the software or system has enough capacity to accept accesses from many users, even during a rapid increase of users at a given moment; e.g. an extremely large number of users could simultaneously access the system or software through the internet. If the value of X is less than 1, the product does not accommodate the targeted user accesses, indicating insufficient capacity under the given conditions.

PCa-4-G

Capacity adequacy

What proportion of the product capacity meets the specified capacity requirements?

X = 1 - A/B

A

= Number of capacities that do not meet capacity requirements

B

= Number of capacity requirements

NOTE If X is smaller than 1, the product fails to meet some capacity requirements.

7.4 Compatibility measures

7.4.1 General

Compatibility measures are used to assess the capability of a product to exchange information with other products, and/or to perform its required functions while sharing the same common environment and resources.

7.4.2 Co-existence measures

Co-existence measures are used to assess the capability of a product to perform its required functions efficiently while sharing a common environment and resources with other products, without detrimental impact on any other product.

Table 7 — Co-existence measures

ID

Name

Description

Measurement function

CCo-1-G

Co-existence

with other

products

What proportion of the other specified products can share the environment with this product without adverse impact on their quality characteristics or functionality?

X = A/B

A

= Number of other products with which this product can interact without negatively affecting their quality characteristics or functionality in the same environment

B

= Number of other products required to co-exist with this product in the same environment

NOTE A negative impact on quality characteristics or functionality refers to measurable issues such as performance degradation (e.g., slower response time or reduced throughput), increased error rates, or failure to meet functional requirements.

7.4.3 Interoperability measures

Interoperability measures are used to assess the capability of a product to exchange information with other products and mutually use the information that has been exchanged.

Table 8 — Interoperability measures

ID

Name

Description

Measurement function

CIn-1-G

Data formats exchangeability

What proportion of the specified data formats is exchangeable with other products?

X = A/B

A

= Number of data formats exchangeable with other products

B

= Number of data formats specified to be exchangeable

CIn-2-G

Data exchange protocol sufficiency

What proportion of the specified data exchange protocols is supported?

X = A/B

A

= Number of data exchange protocols supported

B

= Number of data exchange protocols specified to be supported

NOTE For the details of data quality, refer to Con-I-1 in ISO/IEC 25024.

CIn-3-G

External interface completeness

What proportion of the specified external interfaces (interfaces with other products) is implemented?

X = A/B

A

= Number of external interfaces that are implemented

B

= Number of external interfaces specified

7.5 Interaction capability measures

7.5.1 General

Interaction capability measures are used to assess a product’s ability to support interaction with specified users by enabling information exchange between the user and the system through the user interface and human interaction processes necessary to complete the intended tasks.

NOTE 1 Inherent measures for interaction capability are used to predict the extent to which the software in question can be understood, learned, and operated and will enable encouraging and satisfying interaction for the user.

NOTE 2 Many behavioural measures of interaction capability are tested by users attempting to use a function. The results will be influenced by the capabilities of the users and the product characteristics. This does not invalidate the measurements, since the evaluated product is run under explicitly specified conditions by a sample of users who are representative of an identified user group. (For general-purpose products, representatives of a range of user groups could be used.) For reliable results, a sample of a large group of committed users is necessary, although useful information can be obtained from smaller groups. Users carry out the test without any hints or external assistance. To enhance the objectivity of behavioural evaluation, standardized user testing protocols (e.g., ISO 9241-11) or structured questionnaires can be used.

NOTE 3 Inherent and behavioural measures for interaction capability compare stated design conventions, specific guidelines, or specifications for interaction capability with the documented design, prototype, or executable system/software. Therefore, it is very important to elicit the end user's requirements and create a well-defined specification for interaction capability by considering the characteristics and measures of quality-in-use as well as user-centred design concepts and human ergonomics. For example, guidelines, templates, or checklists related to interaction capability are necessary to explain in detail what kinds of messages are easy for end users to understand.

NOTE 4 In this document, the target entities of interaction capability measures are limited to ICT products.

NOTE 5 The interaction capability measures would inevitably generate somewhat subjective results. In case of difficulties in measuring with a ratio scale, an ordinal scale can be used as an alternative depending on the situation.

NOTE 6 An inherent and behavioural operability quality measure is used to assess whether users can operate and control the product. Operability measures can be categorized by the following dialogue principles in ISO 9241-110:

— suitability of the software for the task;

— self-descriptiveness of the software;

— controllability of the software;

— conformity of the software with user expectations;

— error tolerance of the software;

— suitability of the software for individualization.

7.5.2 Appropriateness recognizability measures

Users have to be able to select a product that is suitable for their intended use. The quality measures for appropriateness recognizability are used to assess the capability of a product to be recognized by users as appropriate for their needs.

NOTE Appropriateness recognizability measures can be used to assess whether new users can understand:

— whether the product is suitable for their purposes or not;

— how it can be used for particular tasks.

Table 9 — Appropriateness recognizability measures

ID

Name

Description

Measurement function

IAr-1-G

Description completeness

What proportion of usage scenarios is described in the product description or user documentation?

X = A/B

A

= Number of usage scenarios described in the product description or user documentation explicitly

B

= Number of usage scenarios of the product

IAr-2-G

Demonstration coverage

What proportion of tasks have demonstration features for users to recognize the appropriateness?

X = A/B

A

= Number of tasks with demonstration features

B

= Number of tasks that could benefit from demonstration features

7.5.3 Learnability measures

Learnability measures are used to assess the capability of a product to have specified users learn to use specified product functions within a specified amount of time.

Table 10 — Learnability measures

ID

Name

Description

Measurement function

ILe-1-G

User guidance

completeness

What proportion of functions is explained in sufficient detail in user documentation and/or help facility to enable the user to apply the specified functions?

X = A/B

A

= Number of functions described in user documentation and/or help facility as required

B

= Number of functions implemented that are required to be documented

NOTE 1 Learnability is strongly related to appropriateness recognizability, and appropriateness recognizability measurements can be indicators of the learnability potential of the software.

NOTE 2 Help facility includes, for example, online help, operational guide video, operational instruction system, etc.

ILe-2-G

Entry fields default

What proportion of entry fields that could have default values are automatically filled with default values?

X = A/B

A

= Number of entry fields whose default values have been automatically filled in during operation

B

= Number of entry fields that could have default values

NOTE The default values for entry fields are helpful for beginners to learn how to operate the product comprehensively and quickly.

ILe-3-G

Self-explanatory user interface

What proportion of user interfaces presented to the user enables specific tasks to be completed by a first-time user without prior study or training or seeking external assistance?

X = (Σ (Ai / Bi)) / n

Ai

= Number of user interfaces for task i that can be understood without study, training, or external assistance

Bi

= Number of user interfaces used for task i

n

= Total number of tasks

NOTE 1 This measure is particularly relevant for public systems and websites.

NOTE 2 User interfaces can include GUIs (graphical user interfaces), AUIs (auditory user interfaces), tactile and haptic UIs, command lines, and so on.
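Measures of this averaged-ratio form (a per-task ratio Ai/Bi averaged over the n tasks, as in ILe-3-G) can be computed with a short sketch. The function name and sample counts below are illustrative only, not part of this document.

```python
def averaged_ratio(per_task_counts):
    """Compute X = (sum of Ai/Bi over tasks i) / n for measures such as
    ILe-3-G, where Ai is the number of self-explanatory user interfaces
    for task i and Bi is the number of user interfaces used for task i."""
    ratios = [a / b for a, b in per_task_counts if b > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

# Two tasks: 3 of 4 interfaces self-explanatory, and 1 of 2
x = averaged_ratio([(3, 4), (1, 2)])
print(x)  # 0.625
```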

7.4.4 Operability measures

Operability measures are used to assess the capability of a product to have facilities and attributes that make it easy to operate and control.

NOTE Operability measures are expected to be measured through operational testing by representatives of operators or end users, or can be measured through static analysis such as a review of requirements, design specifications, or user manuals.

Table 11 — Operability measures

ID

Name

Description

Measurement function

IOp-1-G

Operational consistency

To what extent do interactive tasks have a behaviour and appearance that is consistent both within the task and across similar tasks?

X = 1 - A/B

A

= Number of interactive tasks with interfaces that exhibit or create inconsistencies in the product's operation to complete the task

B

= Total number of interactive tasks in the product

IOp-2-G

Message clarity

What proportion of messages from a product conveys the right outcome or instructions to the user?

X = A/B

A

= Number of messages that convey the right outcome or instructions to the user

B

= Number of messages implemented

IOp-3-S

Functional customizability

What proportion of functions and operational procedures can users customize for their convenience?

X = A/B

A

= Number of functions and operational procedures that can be customized for the user's convenience

B

= Number of functions and operational procedures for which users could benefit from customization

IOp-4-S

User interface customizability

What proportion of user interface elements can be customized by users?

X = A/B

A

= Number of user interface elements that can be customized by users

B

= Number of user interface elements

IOp-5-S

Monitoring capability

What proportion of functions can be monitored during operation?

X = A/B

A

= Number of functions having state monitoring capability

B

= Number of functions that could benefit from monitoring capability

NOTE 1 Monitoring and management of the operational state of some functions are very important in case of, for example, distributed systems, embedded systems, and so on.

NOTE 2 For a better measurement, it is helpful to identify, through operational scenario reviews or operational testing by users, which functions would benefit from monitoring capability. Such functions can also be specified as requirements.

IOp-6-S

Undo capability

What proportion of tasks that have a significant consequence to users provides an option for re-confirmation or undo capability?

X = A/B

A

= Number of tasks that provide undo capability or prompt for re-confirmation

B

= Number of tasks for which users could benefit from having re-confirmation or undo capability

IOp-7-S

Understandable categorization of information

To what extent does the product organize information in categories that are understandable to the intended users for their tasks?

X = A/B

A

= Number of information structures that are familiar and convenient for the intended users

B

= Number of information structures used

EXAMPLE The online shop of a department store organizes the goods in a similar way to the physical layout of the goods in the store.

IOp-8-S

Appearance consistency

What proportion of user interfaces with similar items has a similar appearance?

X = 1 - A/B

A

= Number of user interfaces with similar items but with different appearances

B

= Number of user interfaces with similar items

EXAMPLE The “OK” and “Cancel” buttons are always located at the same position on the screens.

IOp-9-S

Input device support

To what extent can tasks be initiated by all appropriate input modalities (such as keyboard, mouse, or voice)?

X = A/B

A

= Number of tasks that can be initiated by all appropriate input modalities

B

= Number of tasks supported by the system

EXAMPLE Within a search form, the search button can be activated by using the mouse or by pressing the “Enter” key on the keyboard.

7.4.5 User error protection measures

User error protection measures are used to assess the capability of a product to protect users against operation errors.

NOTE User error protection measures are expected to be measured through operational testing by representatives of operators or end users, or can be measured through reviewing requirements, design specifications, or user manuals.

Table 12 — User error protection measures

ID

Name

Description

Measurement function

IEp-1-G

Avoidance of use error

What proportion of inappropriate user actions and inputs is validated to avoid operation errors that can lead to product malfunctions?

X = A/B

A

= Number of actions and inputs that are validated to avoid operation errors that can lead to product malfunctions

B

= Number of user actions and inputs that are specified to be validated to avoid operation errors that can lead to product malfunctions

NOTE 1 This includes the system requesting confirmation before carrying out an action that cannot be undone and that would have significant consequences.

EXAMPLE When erasing files within an application, the user is required to confirm each deletion.

NOTE 2 For a better measurement, it is helpful to identify, during operational testing, the user actions and inputs where users often make errors. Protections against such erroneous user actions and inputs can also be specified as requirements.

NOTE 3 It is helpful to involve representative users from the early phases of development and to observe their operational behaviour when specifying which user actions and inputs are to be validated for operational error prevention.

IEp-2-G

Error message resolvability

What proportion of the error messages state the reason why the error occurred and how to resolve it?

X = A/B

A

= Number of error messages that state the reason for the occurrence and suggest ways of resolution, where possible

B

= Number of error messages implemented

IEp-3-G

User entry error correction

To what extent does the product correct detected user entry errors or suggest corrections for them?

X = A/B

A

= Number of user entry errors that the product corrects or for which it suggests corrections

B

= Number of entry errors detected

NOTE For the details of related data quality, refer to Cre-I-1 in ISO/IEC 25024.

IEp-4-G

User error recoverability

What proportion of user operation errors can be corrected by the product to resume nominal operation without human intervention?

X = A/B

A

= Number of user operation errors corrected by the product to resume nominal operation without human intervention

B

= Number of user operation errors that were made during the operation

7.4.6 User engagement measures

User engagement measures are used to assess the capability of a product to present its functions in an inviting and motivating manner that encourages ongoing interaction.

Table 13 — User engagement measures

ID

Name

Description

Measurement function

IUe-1-G

Engaging user interfaces

To what extent are the user interfaces, including functions or information, provided in an inviting and motivating manner?

X = A/B

A

= Number of interfaces that motivate user engagement

B

= Number of interfaces required to motivate user engagement

NOTE 1 An inherent or behavioural user engagement quality measure is used to assess the appearance of the user interfaces and is influenced by factors such as screen design and colour. This is particularly important for consumer products.

NOTE 2 Good colour combinations can help users quickly read text or identify images. Therefore, for a better aesthetics measurement, it can be helpful to address poor colour combinations, such as light blue on grey, red on orange, green on blue, and so on.

NOTE 3 This quality measure often depends on individual user preferences. Therefore, either expert usability designers or testers acting on behalf of users, or representatives from target user groups, are expected to be involved in this measurement.

7.4.7 Inclusivity measures

Inclusivity measures are used to assess the capability of a product to be utilized by people of various backgrounds.

Table 14 — Inclusivity measures

ID

Name

Description

Measurement function

IIn-1-G

Language inclusivity for the widest range of users

To what extent is the product accessible to users from specified language backgrounds?

X = A/B

A

= Number of functions successfully usable by users from specified language backgrounds

B

= Number of functions implemented

NOTE 1 For this quality measure, the target users of the product need to be defined in terms of their language backgrounds.

NOTE 2 Inclusivity focuses on how products can be inclusive of all users, while user assistance focuses on how products can be helpful to disabled users. For example, to support inclusivity, a product is provided in multiple languages or content that is appropriate for users from different cultural backgrounds. To support user assistance, assistive technology can allow blind users to hear the content of a product or service, or deaf users to see the content of a product.

IIn-2-G

Culture inclusivity for the widest range of users

To what extent is the product accessible to users from specified cultural groups?

X = A/B

A

= Number of functions successfully usable by users from specified cultural groups

B

= Number of functions implemented

NOTE For this quality measure, the target users of the product need to be defined in terms of their cultural groups.

EXAMPLE The meaning of colors can differ across cultures (e.g., white symbolizes purity in some cultures but mourning in others), and suitable character sizes can vary depending on the language (e.g., alphanumeric vs. Kanji characters).

7.4.8 User assistance measures

User assistance measures are used to assess the capability of a product to be used by people with the widest range of characteristics and capabilities to achieve specified goals in a specified context of use.

NOTE Accessibility has been renamed to User Assistance in ISO/IEC 25010.

Table 15 — User assistance measures

ID

Name

Description

Measurement function

IUa-1-G

Assistance for users with disabilities and diverse users

To what extent can potential users with specific disabilities successfully use the product with assistive technology?

X = A/B

A

= Number of functions that users with specific disabilities can successfully use with assistive technology

B

= Number of functions implemented

NOTE 1 To define criteria for whether the product is 'successfully used', accessibility-related standards, for example, ISO/IEC 40500, ISO 9241-171, and the ISO/IEC/IEEE 2651n series, can be used.

NOTE 2 Specific disabilities include cognitive disability, physical disability, hearing/voice disability, and visual disability.

NOTE 3 The range of capabilities includes disabilities associated with age.

NOTE 4 Any person can become a user with limited cognitive, physical, hearing, or visual ability under specific situations or environments, for example, in darkness, in low atmospheric pressure at high altitude, in water, and so on.

IUa-2-G

Assistance and support for users who require support to use the system or product

To what extent can many kinds of users in a variety of environments successfully use the product with assistive technology or other support?

X = A/B

A

= Number of functions that users can successfully use with assistive technology or other support

B

= Number of functions implemented

7.4.9 Self-descriptiveness measures

Self-descriptiveness measures are used to assess the capability of a product to present appropriate information, where needed by the user, to make its capabilities and use immediately obvious to the user without excessive interactions with a product or other resources.

Self-descriptiveness is addressed in ISO 9241-110, which should be referred to for further details.

Table 16 — Self-descriptiveness measures

ID

Name

Description

Measurement function

ISd-1-G

Presentation of understandable information for user tasks

What proportion of user tasks are provided with understandable information for performing tasks?

X = A/B

A

= Number of user tasks with understandable information for performing the tasks

B

= Number of user tasks

NOTE How understandable the information is can be influenced by the user's background knowledge or proficiency in performing the task; these factors can be taken into account to apply this measure more effectively.

7.5 Reliability measures

7.5.1 General

Reliability measures are used to assess the capability of a product to perform specified functions under specified conditions for a specified period without interruptions and failures.

Inherent reliability measures are used for predicting if the completed product in question will satisfy prescribed reliability needs during the development of the product.

Behavioural reliability quality measures are used to assess attributes related to the behaviour of the system of which the software is a part during execution testing to indicate the extent of reliability of the software in that system during operation. Systems and software are not distinguished from each other in most cases.

NOTE For detailed definitions and examples of reliability measures, refer to IEEE 982:2024.

7.5.2 Faultlessness measures

Faultlessness measures are used to assess the capability of a product to perform specified functions without fault under normal operation and during testing.

NOTE The name “faultlessness” replaces the previous name “maturity” (in ISO/IEC 25010:2023) as a quality subcharacteristic of the product.

Table 17 — Faultlessness measures

ID

Name

Description

Measurement function

RFa-1-G

Fault resolution rate

What proportion of detected reliability-related faults have been resolved during development and operation?

X = A/B

A

= Number of faults resolved during development and operation

B

= Number of faults detected during development and operation

NOTE Reliability-related faults refer to faults where a system or product experiences unexpected interruptions or impairments, such as program abnormal termination/interruption, abnormal data loss, system recovery errors, and data integrity errors.

RFa-2-G

Mean time between failures (MTBF)

What is the MTBF during the product operation?

X = A/B

A

= Operation time

B

= Number of product failures that occurred during the operation time

NOTE 1 The result value varies from 0 to infinity. Usually, larger is better.

NOTE 2 MTBF itself can be used to compare the reliabilities of different systems or software products.

RFa-3-G

MTBF improvement

Has the MTBF improved across operational periods?

X = A/B

A

= MTBF measured during current operation time

B

= MTBF measured during previous operation time

NOTE If X is greater than 1, it indicates that the MTBF is improving. If X equals 1, the MTBF shows no change. If X is less than 1, it indicates that the MTBF is worsening.
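As a worked illustration of RFa-2-G and RFa-3-G (the operation times and failure counts below are invented):

```python
def mtbf(operation_time, failures):
    """RFa-2-G: MTBF = A/B = operation time / number of product failures."""
    return operation_time / failures

def mtbf_improvement(current, previous):
    """RFa-3-G: X = current MTBF / previous MTBF; X > 1 means improving."""
    return current / previous

previous = mtbf(720.0, 6)  # 120.0 hours per failure
current = mtbf(720.0, 4)   # 180.0 hours per failure
print(mtbf_improvement(current, previous))  # 1.5 -> MTBF is improving
```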

RFa-4-G

Failure rate

What proportion of the observation period corresponds to the time in which failures are detected?

X = A/B

A

= Amount of time in which failures are detected

B

= Duration of observation period

NOTE 1 The period used in this measure can differ between testing and operational purposes, referring to testing time or actual usage time, respectively.

NOTE 2 A reliability estimation model can use this measure as an input.

NOTE 3 The usefulness of this quality measure depends on the adequacy of test cases or the extent of system usage during testing, e.g. normal, exceptional, and abnormal cases.

RFa-5-G

Failure rate improvement

Has the failure rate improved across operational periods?

X = A/B

A

= Failure rate measured during current operation time

B

= Failure rate measured during previous operation time

NOTE If X is greater than 1, it indicates that the failure rate is worsening. If X equals 1, the failure rate shows no change. If X is less than 1, it indicates that the failure rate is improving.

7.5.3 Availability measures

Availability measures are used to assess the capability of a product to be operational and accessible when required for use.

Table 18 — Availability measures

ID

Name

Description

Measurement function

RAv-1-G

Product availability

For what proportion of the scheduled product operational time is the product available?

X = A/B

A

= Product operation time provided

B

= Product operation time specified in the operation schedule

NOTE 1 This measure can be extended to special days, such as holidays and weekends, in addition to regular operational days.

NOTE 2 Generally, product availability can be measured during operation. When measuring product availability during the testing phase, B can be defined as the observation time during testing.

RAv-2-G

Mean down time

How long does the product stay unavailable when a failure occurs?

X = A/B

A

= Total down time

B

= Number of breakdowns observed

NOTE 1 The result value varies from 0 to infinity. Usually, smaller is better.

NOTE 2 Externally, availability can be assessed by the proportion of total time during which the system, product, or component is in an up state. Availability is therefore a combination of faultlessness (which governs the frequency of failure), fault tolerance, and recoverability (which governs the length of downtime following each failure).
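A minimal sketch of RAv-1-G and RAv-2-G; the schedule and downtime figures are invented for illustration:

```python
def product_availability(provided_time, scheduled_time):
    """RAv-1-G: X = A/B = operation time provided / scheduled operation time."""
    return provided_time / scheduled_time

def mean_down_time(total_down_time, breakdowns):
    """RAv-2-G: X = A/B = total down time / number of breakdowns observed.
    Smaller is better."""
    return total_down_time / breakdowns

# A 720 h schedule with 2 h of downtime spread over 4 breakdowns
print(product_availability(718.0, 720.0))  # ~0.9972
print(mean_down_time(2.0, 4))              # 0.5 h per breakdown
```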

7.5.4 Fault tolerance measures

Fault tolerance measures are used to assess the capability of a product to operate as intended despite the presence of hardware or software faults.

NOTE An inherent or behavioural fault tolerance measure can be related to the product's capability of maintaining a specified performance level in cases of operation faults or infringement of its specified interface.

Table 19 — Fault tolerance measures

ID

Name

Description

Measurement function

RFt-1-G

Fault avoidance

What proportion of fault patterns has been brought under control to avoid critical and serious failures?

X = A/B

A

= Number of fault patterns under control

B

= Number of fault patterns causing failure during testing

RFt-2-G

Fault identification

What proportion of fault patterns has been identified by performing testing?

X = A/B

A

= Number of test cases that have detected faults during testing or analysis

B

= Number of test cases

RFt-3-G

Redundancy of components

What proportion of product components is duplicated to provide redundancy and avoid product failure?

X = A/B

A

= Number of components duplicated

B

= Number of components required to be duplicated

NOTE 1 For example, in many safety-critical systems, some parts of the control system could be duplicated to increase the reliability of the system.

NOTE 2 To enhance system reliability, redundancy can be implemented for servers, DBMS, storage, processes, and other components. As a result, the unit of components can vary depending on the context.

RFt-4-G

Mean fault notification time

How quickly does the product report the occurrence of faults?

X = Σ(Ai − Bi) / n

Ai

= Time at which the fault i is reported by the product

Bi

= Time at which fault i is detected

n

= Number of faults detected

NOTE The result value varies from 0 to infinity. Usually, closer to 0 is better.
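RFt-4-G averages the report delay Ai − Bi over the n detected faults. A sketch, with invented timestamps:

```python
def mean_fault_notification_time(faults):
    """RFt-4-G: mean of (Ai - Bi) over n faults, where Ai is the time
    fault i is reported by the product and Bi is the time it is detected."""
    delays = [reported - detected for reported, detected in faults]
    return sum(delays) / len(delays)

# (reported, detected) timestamps in seconds since some reference epoch
faults = [(12.0, 10.0), (33.5, 30.0)]
print(mean_fault_notification_time(faults))  # 2.75
```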

7.5.5 Recoverability measures

Recoverability measures are used to assess the capability of a product, in the event of an interruption or a failure, to recover the data directly affected and re-establish the desired state of the system.

Table 20 — Recoverability measures

ID

Name

Description

Measurement function

RRe-1-G

Mean recovery time

How long does it take for the product to recover from failure?

X = ΣAi / n

Ai

= Total time to recover the downed product and re-initiate operation for i-th failure

n

= Number of failures

NOTE 1 The result value varies from 0 to infinity. Usually, smaller is better.

NOTE 2 When this quality measure is compared to a target threshold for mean recovery time, that is specified in agreed requirements by the acquirer and supplier, the measure can be used to examine conformance.

RRe-2-G

Mean recovery time by component recovery level

How long does it take for the component to recover from failure at a specific recovery level?

X = ΣAi / n

Ai

= Total time required to recover the failed component at the specified recovery level and re-initiate its operation for failure i

n

= Number of failures

NOTE 'Mean recovery time' measures the recovery time for the product as a whole, including its components, while 'Mean recovery time by component recovery level' measures the recovery time specifically based on the recovery level of each component within the product.

RRe-3-G

Backup data completeness

What proportion of data items is backed up regularly?

X = A/B

A

= Number of data items backed up regularly

B

= Number of data items requiring backup for error recovery

7.6 Security measures

7.6.1 General

Security measures are used to assess the capability of a product to protect information and data so that persons or other products have the degree of data access appropriate to their types and levels of authorization and to defend against attack patterns by malicious actors.

NOTE 1 Penetration tests can be performed to simulate an attack because such a security attack does not normally occur in the usual testing.

NOTE 2 Security protection requirements vary widely from the case of a stand-alone system to the case of a system connected to the Internet. The determination of the required security functions and the assurance of their effectiveness have been addressed extensively in related International Standards. The user of this document has to determine what kind of security functions need to be used in each case depending on the level of risk.

7.6.2 Confidentiality measures

Confidentiality measures are used to assess the capability of a product to ensure that data are accessible only to those authorized to have access.

Table 21 — Confidentiality measures

ID

Name

Description

Measurement function

SCo-1-G

Access controllability

What proportion of confidential data items are protected from unauthorized access?

X = 1 - A/B

A

= Number of confidential data items that can be accessed without authorization

B

= Number of data items that require access control

SCo-2-G

Access control mechanism sufficiency

To what extent are the access control mechanisms implemented to protect the product from unauthorized access?

X = A/B

A

= Number of access control mechanisms actually implemented

B

= Number of access control mechanisms required to be implemented for the product

NOTE Access control mechanisms to protect the product include user role-based access restrictions, URL direct access restrictions, automatic session termination for web pages, screen lock after a period of inactivity, and access restrictions based on specified IP or MAC addresses.

SCo-3-G

Data encryption correctness

How correctly is the encryption/decryption of data items implemented as stated in the requirement specification?

X = A/B

A

= Number of data items encrypted/decrypted correctly

B

= Number of data items that require encryption/decryption

NOTE For the details of related data quality, refer to Cnf-I-1 in ISO/IEC 25024.

SCo-4-G

Strength of cryptographic algorithms

What proportion of cryptographic algorithms has been well-vetted?

X = 1 - A/B

A

= Number of cryptographic algorithms broken or unacceptably risky in use

B

= Number of cryptographic algorithms used

NOTE 1 It is important to select a well-vetted algorithm that is currently considered strong by experts in the field, and to select well-tested implementations. For some cryptographic mechanisms, the source code needs to be available for analysis. For example, US government systems require FIPS 140-2 certification.

NOTE 2 There are other ways of measuring the strength of cryptographic algorithms, for example, using ethical hacking.

NOTE 3 The term "well-vetted" refers to components that have been thoroughly examined or tested to ensure they are not broken or unacceptably risky for use.

SCo-5-G

One-way encryption algorithm

To what extent has a one-way encryption algorithm been applied to data that could pose security risks if decrypted when leaked?

X = A/B

A

= Number of data items encrypted using a one-way encryption algorithm

B

= Number of data items required to be encrypted using a one-way encryption algorithm

NOTE 1 Encrypted data, such as user passwords and biometric information used for authentication and identification, can pose security risks if decrypted after being leaked. Therefore, such data needs to be encrypted using a one-way encryption algorithm to prevent decryption.

NOTE 2 One-way encryption algorithms generate the same output for identical input values during encryption, and therefore need to be used with a seed value.
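As an illustrative sketch (not a mandated method), a salted one-way transformation can be built from Python's standard library; PBKDF2-HMAC-SHA256 is one widely used construction, and the iteration count below is only an example:

```python
import hashlib
import os

def one_way_hash(secret, salt=None):
    """Salted one-way transformation (cannot be decrypted), as discussed
    for SCo-5-G. The salt plays the role of the 'seed value': identical
    input plus identical salt yields an identical digest."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return salt, digest

salt, d1 = one_way_hash(b"pass123")
_, d2 = one_way_hash(b"pass123", salt)  # same input + same salt
_, d3 = one_way_hash(b"pass123")        # fresh random salt
print(d1 == d2)  # True
print(d1 == d3)  # False (with overwhelming probability)
```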

SCo-6-G

Data transmission protection

To what extent are communication paths between the product's components or with other systems implemented through secure communication channels?

X = A/B

A

= Number of communication paths transmitting data through a secure cryptographic communication channel

B

= Total number of communication paths transmitting data

NOTE 1 Communication paths transmitting data can include paths between servers and clients, agents and servers, or clients and databases.

NOTE 2 Secure cryptographic communication channels are implemented using protocols such as TLS or SSH.
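For illustration only, Python's standard `ssl` module builds a client-side TLS context with certificate verification and hostname checking enabled by default, matching the secure-channel expectation of SCo-6-G (no network connection is made here):

```python
import ssl

# Default client context: verifies the peer certificate chain and the
# hostname, as expected of a secure cryptographic channel such as TLS.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```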

SCo-7-S

Minimization of personal data collection

To what extent does the product avoid collecting unnecessary personal data items?

X = 1 - A/B

A

= Number of unnecessary personal data items among the collected personal data

B

= Total number of collected personal data items

NOTE 1 Personal data items can include name, age, blood type, address, etc.

NOTE 2 If the product requests blood type information during personal data registration but does not utilize or provide it, this is considered unnecessary personal data collection.

7.6.3 Integrity measures

Integrity measures are used to assess the capability of a product to ensure that the state of its system and data are protected from unauthorized modification or deletion either by malicious action or computer error.

Table 22 — Integrity measures

ID

Name

Description

Measurement function

SIn-1-G

Data integrity

To what extent is data corruption, modification, or deletion by unauthorized access prevented?

X = 1 - A/B

A

= Number of data items that are corrupted by unauthorized access, modification or deletion

B

= Number of data items that require protection from corruption, modification or deletion by unauthorized access

SIn-2-G

Internal data corruption prevention

To what extent are the available prevention methods for internal data corruption implemented?

X = A/B

A

= Number of internal data corruption prevention methods implemented

B

= Number of components of the product where an internal data corruption prevention method is necessary or required

NOTE 1 Internal data refers to data that is generated, collected, stored, and managed within a system. Examples of internal data include employee information, customer data, operational data, and so on.

NOTE 2 Examples of methods for data corruption prevention are backing up data frequently, comparing data to reference data periodically, storing data in multiple mirror sites, use of RAID systems, data validation and integrity checks, antivirus and anti-malware software, use of network security programs, regular software updates, access control management, transaction logging and monitoring, and so on.
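One of the prevention methods listed above, a periodic integrity check against a stored reference digest, can be sketched as follows; the data and names are illustrative:

```python
import hashlib

def digest(data):
    """SHA-256 reference digest used for a periodic integrity check,
    one of the data corruption prevention methods mentioned above."""
    return hashlib.sha256(data).hexdigest()

reference = digest(b"critical internal data")

def integrity_ok(data, reference_digest):
    """Compare the current data against the stored reference digest."""
    return digest(data) == reference_digest

print(integrity_ok(b"critical internal data", reference))  # True
print(integrity_ok(b"tampered data", reference))           # False
```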

SIn-3-G

Important executable file integrity

To what extent are important executable files protected from modification or deletion by unauthorized access?

X = 1 - A/B

A

= Number of important executable files that were modified or deleted due to unauthorized access

B

= Total number of important executable files that must be protected from modification or deletion

NOTE Important executable files can be modified or corrupted to bypass genuine license checks or cause data leaks of specific information.

SIn-4-G

Response to integrity corruption

To what extent has the product implemented responses to handle corruption of data or important executable files caused by unauthorized access?

X = A/B

A

= Number of response actions actually implemented when detecting integrity corruption

B

= Number of response actions specified to be performed when detecting integrity corruption

NOTE When integrity corruption is detected, the product can perform actions such as disabling execution, displaying an integrity corruption detection message, or recovering corrupted data.

7.6.4 Non-repudiation measures

Non-repudiation measures are used to assess the capability of a product to prove that actions or events have taken place so that the events or actions cannot be repudiated later.

Table 23 — Non-repudiation measures

ID

Name

Description

Measurement function

SNo-1-G

Non-repudiation assurance

For what proportion of actions or events can it be proven that they have taken place, so that they cannot be repudiated later?

X = A/B

A

= Number of actions or events that have been successfully proven to have occurred using the specified non-repudiation methods

B

= Number of actions or events requiring non-repudiation using the specified methods

NOTE 1 Certificates and security algorithms are also helpful in improving non-repudiation.

NOTE 2 Examples of the specified methods are digital signature, digital certificates, logging, timestamping, blockchain, and so on.

SNo-2-G

Non-repudiation implementation completeness

To what extent are non-repudiation methods (e.g., digital signatures, digital certificates) completely implemented in a secure and reliable manner?

X = A/B

A

= Number of non-repudiation methods completely implemented using secure and reliable methods

B

= Total number of non-repudiation methods required to be implemented

NOTE Complete and secure implementation of non-repudiation methods involves the use of strong cryptographic algorithms (e.g., RSA, ECDSA, SHA-256 or higher), adherence to validity periods, and the issuance of certificates from internationally trusted root and subordinate certification authorities.

SNo-3-G

Utilization of trusted timestamps

To what extent are trusted timestamps utilized in functions that rely on time information within the product?

X = A/B

A

= Number of functions using trusted timestamps

B

= Number of functions requiring trusted timestamps

NOTE 1 Examples of functions using trusted time information include audit logs and digital signatures.

NOTE 2 The source of a trusted timestamp can be a reliable NTP server or an NTP server located in a physically secure environment. Time information that can be freely modified, such as the client PC clock, cannot be considered a trusted timestamp.

7.6.5 Accountability measures

Accountability measures are used to assess the capability of a product to enable the actions of an entity to be traced uniquely to the entity.

Table 24 — Accountability measures

ID

Name

Description

Measurement function

SAc-1-G

User audit trail completeness

How complete is the audit trail concerning user access to, and activities on, the product or data?

X = A/B

A

= Number of user accesses or activities that are recorded in logs

B

= Number of user accesses or activities that are required to be recorded in logs

NOTE For audit tracing, logs can include activities and events, timestamps, errors and warnings, and so on.

SAc-2-G

Audit log retention

What proportion of product log types is retained in stable storage for the required retention period?

X = A/B

A

= Number of product log types (e.g. configuration management logs, login/logout logs, user data access logs, etc.) retained in stable storage for the specified retention period

B

= Number of product log types required to be retained in stable storage for the specified retention period

NOTE 1 A stable storage is a classification of computer data storage technology that guarantees atomicity for any given write operation and allows software to be written that is robust against some hardware and power failures. Most often, stable storage functionality is achieved by mirroring data on separate disks via RAID technology.

NOTE 2 The result value ranges from 0 to 1; a value closer to 1 is better.

SAc-3-G

Mechanism for audit log

To what extent are mechanisms for stable log retention implemented in the product?

X = A/B

A

= Number of implemented mechanisms for secure and stable audit log retention

B

= Number of mechanisms required for secure and stable audit log retention

NOTE Mechanisms for stable audit log retention can include notifying administrators via email, SMS, or alerts when storage space exceeds a critical threshold, deleting old data (only exceeding the specified retention period) when storage space is insufficient, or halting product operation when the storage space is completely full.

7.6.6 Authenticity measures

Authenticity measures are used to assess the capability of a product to prove that the identity of a subject or resource is the one claimed.

Table 25 — Authenticity measures

ID

Name

Description

Measurement function

SAu-1-G

Authentication mechanism sufficiency

How well does the product authenticate the identity of a subject?

X = A/B

A

= Number of authentication mechanisms implemented (e.g., User ID/password, IC card, or biometric authentication)

B

= Number of authentication mechanisms specified

NOTE 1 What is relevant for security is the strength of the authentication model and the ability to provide multi-level, multi-factor authentication and threat detection. The number of factors and the degree of authenticity of the provided protocol can also be used as authenticity measures.

NOTE 2 Examples of biometric authentication include fingerprint scanning, facial recognition, iris scanning, and vein recognition.

SAu-2-G

Authentication rules conformity

What proportion of the required authentication rules is implemented in the product?

X = A/B

A

= Number of authentication rules implemented

B

= Number of authentication rules specified

SAu-3-G

Authentication protection mechanism

To what extent are authentication protection mechanisms implemented to prevent security threats during the authentication process?

X = A/B

A

= Number of implemented authentication protection mechanisms

B

= Number of mechanisms required for secure authentication protection

NOTE Authentication protection mechanisms for secure authentication can include locking an account after consecutive authentication failures (requiring manual unlocking by an administrator) or disabling it for a specified time, enforcing password expiration and mandatory changes upon reaching the expiration period, displaying failure messages that do not reveal the username or password, and blocking duplicate logins if necessary.

7.6.7 Resistance measures

Resistance measures are used to assess the capability of a product to sustain operations while under a malicious attack.

Table 26 — Resistance measures

ID

Name

Description

Measurement function

SRe-1-G

Resistance to hacker attacks

To what extent did the product resist penetration attempts during security testing?

X = A/B

A

= Number of penetration attempts that were unsuccessful (i.e., did not penetrate the product)

B

= Total number of penetration attempts during security testing

SRe-2-G

Use of secure middleware and operating systems

To what extent does the product rely on middleware or operating systems with known security vulnerabilities for its operation?

X = 1 - A/B

A

= Number of middleware components or operating systems with known security vulnerabilities

B

= Total number of middleware components or operating systems used for product operation

NOTE Information about insecure middleware (e.g., web servers, WAS, open source, databases) or operating systems can be found on platforms such as the CVE website.

SRe-3-G

Middleware information disclosure

To what extent does the middleware required for product operation expose important information?

X = 1 - A/B

A

= Number of middleware components exposing important information

B

= Total number of middleware components used for product operation

NOTE If important information such as passwords or encryption keys is stored in plaintext, or if database passwords use well-known default values, these situations can pose security threats to the entire IT system.
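The ratio used by SRe-2-G can be sketched in a few lines. This is an illustration only, not part of the standard: the function name, the component tuples, and the hand-curated vulnerability set are hypothetical. In practice the vulnerable set would be derived from a CVE feed for the exact component versions in use.

```python
# Illustrative sketch of SRe-2-G: X = 1 - A/B.
# `known_vulnerable` is a hypothetical, hand-curated set of
# (component, version) pairs standing in for a CVE lookup.

def secure_platform_measure(components, known_vulnerable):
    """X = 1 - A/B, where A = components with known vulnerabilities
    and B = total middleware/OS components used for product operation."""
    if not components:
        raise ValueError("at least one component is required")
    a = sum(1 for c in components if c in known_vulnerable)
    return 1 - a / len(components)

known_vulnerable = {("apache-httpd", "2.4.49"), ("log4j", "2.14.1")}
used = [("apache-httpd", "2.4.49"), ("postgresql", "15.4"), ("nginx", "1.25.3")]
x = secure_platform_measure(used, known_vulnerable)  # 1 - 1/3
```

A value of X closer to 1 indicates fewer known-vulnerable platform components in the operating environment.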

7.7 Maintainability measures

7.7.1 General

Maintainability measures are used to assess the capability of a product to be modified by the intended maintainers with effectiveness and efficiency.

7.7.2 Modularity measures

Modularity measures are used to assess the capability of a product to limit changes to one component from affecting other components.

Table 27 — Modularity measures

ID

Name

Description

Measurement function

MMo-1-G

Coupling of components

To what extent are the components in a product free from impacts of changes to other components?

X = A/B

A

= Number of components which are implemented with no impact on others

B

= Number of specified components which are required to be independent

NOTE A threshold is helpful to determine whether the degree of impact from changes to other components is minimal, for example, the frequency of changes to the component caused by changes to other components, or the number of externally shared databases that the component directly accesses.

MMo-2-G

Acceptable cyclomatic complexity

How many product modules have acceptable cyclomatic complexity?

X = 1 - A/B

A

= Number of software modules that have a cyclomatic complexity score that exceeds the specified threshold

B

= Number of software modules implemented

NOTE 1 Such a threshold is used to determine whether a value of cyclomatic complexity is acceptable or not for each module. This is defined by each project or organization and is possibly a different value for a programming language, a type of module, or a function.

NOTE 2 The scope of this quality measure is the software rather than the whole product, which differs from the other measures.
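As an illustration only (not part of the standard), MMo-2-G can be computed from a list of per-module cyclomatic complexity scores. The threshold of 10 used below is an assumed example; as the note above states, each project or organization defines its own threshold, possibly per language, module type, or function.

```python
# Illustrative sketch of MMo-2-G: X = 1 - A/B.
# The threshold (10) is an assumed example value, not prescribed here.

def acceptable_complexity_measure(complexities, threshold=10):
    """X = 1 - A/B, where A = modules whose cyclomatic complexity
    exceeds the threshold and B = modules implemented."""
    if not complexities:
        raise ValueError("at least one module is required")
    a = sum(1 for c in complexities if c > threshold)
    return 1 - a / len(complexities)

# Five modules, two of which (12 and 25) exceed the threshold:
x = acceptable_complexity_measure([3, 7, 12, 25, 4])  # 1 - 2/5 = 0.6
```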

7.7.3 Reusability measures

Reusability measures are used to assess the capability of a product to be used as assets in more than one system, or in building other assets.

Table 28 — Reusability measures

ID

Name

Description

Measurement function

MRe-1-G

Reusability of assets

What proportion of the assets in a product is reusable?

X = A/B

A

= Number of assets which are designed and implemented to be reusable

B

= Number of assets required to be reusable in a product

NOTE In this measure, assets could be work products such as requirements documents, source code modules, testing modules, specific hardware, etc.

MRe-2-G

Coding rules conformity

How many modules conform to the required coding rules?

X = A/B

A

= Number of product modules conforming to coding rules for a specific system

B

= Number of product modules implemented

NOTE 1 Coding rules for a specific system might include rules that contribute to, for example, modularity, traceability, and conciseness.

NOTE 2 This quality measure can also be applied to different characteristics and subcharacteristics such as analysability.

7.7.4 Analysability measures

Analysability measures are used to assess the capability of a product to be effectively and efficiently assessed regarding the impact of an intended change to one or more of its parts, to diagnose it for deficiencies or causes of failures, or to identify parts to be modified.

Table 29 — Analysability measures

ID

Name

Description

Measurement function

MAn-1-G

Product log completeness

To what extent does the product record its operations in logs so that they are traceable?

X = A/B

A

= Number of operations that are recorded in logs

B

= Number of operations for which audit trails are required

MAn-2-G

Diagnosis function effectiveness

What proportion of the diagnosis functions meets the requirements of causal analysis?

X = A/B

A

= Number of diagnostic functions useful for causal analysis

B

= Number of diagnostic functions implemented

MAn-3-G

Diagnosis function sufficiency

What proportion of the required diagnosis functions have been implemented?

X = A/B

A

= Number of diagnostic functions implemented

B

= Number of diagnostic functions required

NOTE Analysability measures are used to assess such attributes as the maintainer’s or user’s effort or resources used when trying to diagnose deficiencies or causes of failures or for identifying parts to be modified.

7.7.5 Modifiability measures

Modifiability measures are used to assess the capability of a product to be effectively and efficiently modified without introducing defects or degrading existing product quality.

Table 30 — Modifiability measures

ID

Name

Description

Measurement function

MMd-1-G

Modification efficiency

How efficiently are the modifications made compared to the expected time?

X = (Σi Σj (Aij / Bij)) / (m × n)

Aij

= Actual time taken to perform the j-th modification of type i

Bij

= Expected time required to perform the j-th modification of type i

m

= Number of modification types

n

= Number of modifications for each modification type

NOTE 1 A value of X less than 1 indicates that modifications were completed within the allowed duration. A lower value generally implies more efficient modification, though the level of efficiency can vary depending on the context.

NOTE 2 Expected time for making a specific type of modification can be based on historical data or industry averages.

NOTE 3 Modifications for interaction capability aim to improve the user interface and the overall user experience. In contrast, modifications for maintainability target the system's codebase or architecture, making it easier to update, modify, or extend in the future.
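The modification efficiency measure can be sketched as below. This is an illustration only: the aggregation used, the mean of Aij/Bij over all modification types and modifications, is an assumption, chosen so that X less than 1 indicates modifications completed within the expected time, consistent with NOTE 1.

```python
# Illustrative sketch of MMd-1-G (aggregation assumed: mean of Aij/Bij).
# actual[i][j] = Aij (actual time), expected[i][j] = Bij (expected time).

def modification_efficiency(actual, expected):
    """Mean of Aij/Bij over all modification types i and modifications j."""
    ratios = [a / b
              for row_a, row_b in zip(actual, expected)
              for a, b in zip(row_a, row_b)]
    if not ratios:
        raise ValueError("at least one modification is required")
    return sum(ratios) / len(ratios)

# Two modification types; three modifications in total:
x = modification_efficiency([[2.0, 3.0], [5.0]], [[4.0, 3.0], [5.0]])
# ratios 0.5, 1.0, 1.0 -> x = 2.5/3
```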

MMd-2-G

Modification capability

To what extent are the required modifications made within a specified duration?

X = A/B

A

= Number of modifications successfully completed within the specified duration without causing any incidents or failures

B

= Total number of modifications made

7.7.6 Testability measures

Testability measures are used to assess the capability of a product to enable an objective and feasible test to be designed and performed to determine whether a requirement is met.

NOTE 1 Inherent testability measures indicate a set of attributes for predicting the amount of designed and implemented autonomous test aid functions present in the product.

NOTE 2 Behavioural testability measures are used to assess such attributes as the tester’s or user’s effort by measuring the behaviour of the product, user, or system including software when trying to test the modified or non-modified software.

Table 31 — Testability measures

ID

Name

Description

Measurement function

MTe-1-G

Test function completeness

How completely are test functions implemented?

X = A/B

A

= Number of test functions implemented as specified

B

= Number of test functions specified

MTe-2-G

Code coverage

How completely is the product code tested against a specific test coverage criterion?

X = A/B

A

= Number of unique code structural elements covered by test cases

B

= Number of unique code structural elements in the product’s code

NOTE Code structural elements can include statements, branches, loops, and other relevant constructs, depending on the selected test coverage criterion. For the details of test coverage, refer to ISO/IEC 29119-4.
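As an illustration only (not part of the standard), MTe-2-G reduces to a ratio over sets of unique structural elements. The branch identifiers below are hypothetical; the chosen coverage criterion determines what counts as an element.

```python
# Illustrative sketch of MTe-2-G: X = A/B over unique structural elements.

def code_coverage(covered, total):
    """X = A/B, where A = unique elements covered by test cases and
    B = unique elements in the product's code."""
    total_set = set(total)
    if not total_set:
        raise ValueError("the product must contain at least one element")
    return len(set(covered) & total_set) / len(total_set)

all_branches = ["b1", "b2", "b3", "b4"]
covered_branches = ["b1", "b2", "b4", "b2"]  # duplicates are counted once
x = code_coverage(covered_branches, all_branches)  # 3/4 = 0.75
```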

MTe-3-G

Testable dependency

How independently can a product be tested?

X = A/B

A

= Number of tests, among those that depend on other products, that can be simulated by a stub

B

= Number of tests which depend on other products

NOTE A stub is a skeletal or special-purpose implementation of a software module used to develop or test a module that calls or is otherwise dependent on it.

MTe-4-G

Test restartability

How easy is it to resume the test from the point where it was paused?

X = A/B

A

= Number of cases in which testers can pause and restart executing test runs at desired points to check step-by-step

B

= Number of cases in which an executing test run can be paused

MTe-5-G

Test coverage

To what extent do the test cases cover the product's functions, capabilities, and states?

X = A/B

A

= Number of test cases that actually cover the requirements

B

= Number of test cases required to cover requirements that define the product's functions, capabilities, and states

MTe-6-G

Test case execution coverage

What proportion of the designed test cases are executed?

X = A/B

A

= Number of test cases that are executed

B

= Total number of test cases designed

MTe-7-G

Test case pass rate

What proportion of test cases have passed successfully?

X = A/B

A

= Number of test cases passed

B

= Total number of test cases executed

7.8 Flexibility measures

7.8.1 General

Flexibility measures are used to assess the capability of a product to adapt to changes in its requirements, contexts of use, or system environment.

NOTE The name “flexibility” replaces the previous name “portability” as a quality characteristic of the product.

7.8.2 Adaptability measures

Adaptability measures are used to assess the capability of a product to be effectively and efficiently adapted for or transferred to different hardware, software, or other operational or usage environments.

Table 32 — Adaptability measures

ID

Name

Description

Measurement function

FAd-1-G

Hardware environmental adaptability

What proportion of hardware environments can the product adapt itself to?

X = 1 - A/B

A

= Number of functions that were not completed or whose results failed tests in different hardware environments

B

= Number of functions which were tested in different hardware environments

FAd-2-G

System software environmental adaptability

What is the proportion of system software environments to which the product can adapt itself?

X = 1 - A/B

A

= Number of functions that were not completed or whose results failed tests in different system software environments

B

= Number of functions that were tested in different system software environments

NOTE 1 When a user has to apply an adaptation procedure other than one previously provided by the software for a specific adaptation need, the user's effort required for adapting has to be measured.

NOTE 2 System software is a type of software that serves as an intermediary between computer hardware and application programs. It plays a crucial role in supporting and managing the core functions of a computer system.

NOTE 3 System software can include operating systems, middleware, database management systems, compilers, network management systems, etc.

FAd-3-G

Operational environment adaptability

What proportion of operational environments can the product successfully adapt to?

X = 1 - A/B

A

= Number of functions that were not completed or whose results failed tests in different operational user environments

B

= Number of functions that were tested in different operational user environments

NOTE A testing environment refers to the environment in which the product is installed to perform testing, typically referred to as a testbed. On the other hand, an operational environment is the environment where the product is deployed and used, which can differ from the testing environment.

7.8.3 Scalability measures

Scalability measures are used to assess the capability of a product to handle growing or shrinking workloads or to adapt its capacity to handle variability.

Table 33 — Scalability measures

ID

Name

Description

Measurement function

FSc-1-G

Scalability

scale-out

To what extent does the product support scale-out compared to the configuration setting?

X = A/B

A

= Number of resources that are available automatically and immediately

B

= Number of configured resources

NOTE This measure assumes that the product is designed to handle larger workloads and is capable of accommodating additional resources to support scalability.

FSc-2-G

Scalability

scale-up

To what extent does the product support scale-up compared to the configuration setting?

X = A/B

A

= Amount of resources that are available automatically and immediately

B

= Amount of configured resources

NOTE “Scale-out” refers to handling increased workloads by adding multiple identical servers or nodes, each performing the same tasks. In contrast, “scale-up” refers to enhancing the performance of a single server or node by upgrading its hardware components such as CPU, memory, or storage.

7.8.4 Installability measures

Installability measures are used to assess the capability of a product to be effectively and efficiently installed and/or uninstalled in a specified environment.

Table 34 — Installability measures

ID

Name

Description

Measurement function

FIn-1-G

Installation time efficiency

How efficient is the actual installation time compared to the expected time?

X = (Σi (Ai / Bi)) / n

Ai

= Total work time spent for making an installation i

Bi

= Expected time for making an installation i

n

= Number of installations measured

NOTE 1 X greater than 1 represents inefficient installation, and X less than 1 represents efficient installation.

NOTE 2 Expected time for making an installation can be based on historical data or industry averages.

FIn-2-G

Installation customizability

Can users customize the installation procedure for their convenience?

X = A/B

A

= Number of cases in which a user succeeds in customizing the installation procedure

B

= Number of cases in which a user attempted to customize the installation procedure for the user’s convenience

NOTE Such changes in installation procedure can be recognized as customization of installation by the user.

7.8.5 Replaceability measures

Replaceability measures are used to assess the capability of a product to replace another specified product for the same purpose in the same environment.

Table 35 — Replaceability measures

ID

Name

Description

Measurement function

FRe-1-G

Usage similarity

What proportion of user functions of the replaced product can be performed without any additional learning or workaround?

X = A/B

A

= Number of user functions that can be performed without any additional learning or workaround

B

= Number of user functions in the replaced software product

NOTE User functions are those that users can call and use to perform their intended tasks including user interfaces.

FRe-2-G

Product quality equivalence

What proportion of the quality measures is satisfied after replacing the previously specified product with this one?

X = A/B

A

= Number of quality measures of the new product that are better or equal to the replaced product

B

= Number of quality measures of the replaced specified product that are relevant

NOTE Some of the critical product qualities relevant to replaceability are interoperability, security, and performance efficiency.

FRe-3-G

Functional inclusiveness

Can similar functions easily be used after replacing the previously specified product with this one?

X = A/B

A

= Number of similar functions that can be easily used in the replaced product

B

= Number of similar functions which have to be used in the replaced product

FRe-4-G

Ease of data use

Can the same data be used after replacing the previously specified product with this one?

X = A/B

A

= Number of data items that can be used continuously as before

B

= Number of data items which have to be used continuously in the replaced product

7.9 Safety measures

7.9.1 General

Safety measures are used to assess the capability of a product under defined conditions to avoid a state in which human life, health, property, or the environment is endangered.

NOTE This characteristic is newly added in the product quality model defined in ISO/IEC 25010.

7.9.2 Operational constraint measures

Operational constraint measures are used to assess the capability of a product to constrain its operation within safe parameters or states when encountering operational hazards.

Table 36 — Operational constraint measures

ID

Name

Description

Measurement function

SOp-1-G

Domain hazard

coverage

For what proportion of internal safety hazards did the product operate within its specified safety parameters during test?

X = A/B

A

= Number of internal safety hazards in which the product operated within its specified safety parameters during test

B

= Number of internal safety hazards tested

NOTE 1 To measure this quality measure, it is good practice to identify potential hazards in advance based on IEC 61508. Hazard analysis and risk assessment (HARA) is performed in the context of the operational design domain (ODD).

NOTE 2 Internal safety hazards are unsafe conditions created by the way the product operates or when the product tries to operate outside its safe operational parameters. It is good practice to implement controls to ensure safe operation against each identified internal safety hazard.

SOp-2-G

Coverage of successful behaviours to treat domain hazards

For what proportion of operational safety hazards did the product operate within its specified safety parameters during test?

X = A/B

A

= Number of operational safety hazards in which the product operated within its specified safety parameters during test

B

= Number of operational safety hazards tested

SOp-3-G

Operational domain coverage

Is the product capable of addressing potential hazards in the operational domain?

X = A/B

A

= Number of potential hazards that have been addressed in the operational domain

B

= Number of potential hazards in the operational domain

NOTE Here, "addressed" means that the hazard can be identified during hazard analysis and mitigated in the safety requirements and design. The operational domain is the environment within which a functional unit is expected to operate and perform its required function. For instance, if a self-driving car system is suitable only for urban driving, then the urban environment becomes the operational domain for that system.

SOp-4-G

Coverage of successful behaviours to treat hazards in the operational domain

How successfully does the product treat potential hazards in the operational domain during dynamic testing and operation?

X = 1 - A/B

A

= Number of operation scenarios in which the product failed to behave to treat potential hazards during dynamic testing or operation, although those hazards have been addressed in the operational domain

B

= Number of operation scenarios corresponding to identified potential hazards in the operational domain

NOTE Safety hazards refer to potential risks or dangers to safety, operational scenarios describe specific conditions or events during system operation, and the operational domain defines the broader environment or context in which the system functions.

7.9.3 Risk identification measures

Risk identification measures are used to assess the capability of a product to identify a course of events or operations that can expose life, property, or environment to unacceptable risk.

Table 37 — Risk identification measures

ID

Name

Description

Measurement function

SRi-1-G

Risk identification

coverage

Has the risk associated with specific conditions (hazards, events, etc.) that could occur during product operation been identified?

X = A/B

A

= Number of risk scenarios that are identified as test cases for the product.

B

= Number of specified risk scenarios where risks could occur

NOTE It is desirable to identify risks and hazard scenarios through the process of hazard analysis and risk assessment (e.g., FMEA).

SRi-2-G

Coverage of successful behaviours to handle risks of product operation

Has the risk associated with specific conditions (hazards, events, etc.) that potentially occur during product operation been successfully handled during dynamic testing and operation?

X = 1 - A/B

A

= Number of risk scenarios of operation that failed to be handled during dynamic testing or operation

B

= Number of specified risk scenarios of operation where identified risks potentially occur

SRi-3-G

Level of safety risk

What is the level of safety risk presented by a specific risk scenario?

X = (Σi (Ai × Bi)) / n

Ai

= Impact of the i-th risk

Bi

= Likelihood of the i-th risk

n

= Total number of risks included in the specific scenario

NOTE 1 Mean level of risk can be computed at the scenario level and then aggregated to measure the total risk across scenarios in the operational domain.

NOTE 2 Risk measure scales can be created by each user of these measures, or an existing scale can be used, such as: 1 = no safety risk, 2 = slight safety risk, 3 = moderate safety risk, 4 = significant safety risk, 5 = dangerous safety risk, 6 = extreme safety risk, 7 = extreme harm cannot be avoided.

NOTE 3 In each scenario, if any risk item has a risk score in a specified dangerous zone, the highest score can be automatically assigned to the full risk scenario.
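The scenario-level risk computation can be sketched as below. This is an illustration only: the mean of impact × likelihood per NOTE 1 is assumed as the aggregation, and the dangerous-zone override follows NOTE 3. The impact scale and the threshold value are hypothetical.

```python
# Illustrative sketch of SRi-3-G: level of safety risk for one scenario.
# risks = [(Ai, Bi), ...] with Ai = impact, Bi = likelihood.
# `danger_threshold` is a hypothetical dangerous-zone boundary (NOTE 3).

def scenario_risk_level(risks, danger_threshold=None):
    """Return sum(Ai * Bi) / n, or the maximum single risk score when
    any score falls in the specified dangerous zone."""
    if not risks:
        raise ValueError("at least one risk is required")
    scores = [impact * likelihood for impact, likelihood in risks]
    if danger_threshold is not None and max(scores) >= danger_threshold:
        return max(scores)
    return sum(scores) / len(scores)

# Two risks on an assumed 1-7 impact scale with likelihoods in [0, 1]:
x = scenario_risk_level([(3, 0.5), (5, 0.2)])  # (1.5 + 1.0) / 2 = 1.25
```

Aggregating the per-scenario values across the operational domain, as NOTE 1 suggests, is then a further mean or maximum over scenarios.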

SRi-4-G

Maximum level of safety risk

What is the maximum level of safety risk presented by all risk scenarios?

X = Max(Aik), over all scenarios i and risks k

Aik

= Risk rating for risk k, calculated as the product of probability and impact for each individual risk in risk scenario i

k

= Number of risks in the i-th scenario

n

= Number of scenarios

7.9.4 Fail safe measures

Fail safe measures are used to assess the capability of a product to automatically place itself in a safe operating mode, or to revert to a safe condition in the event of a failure.

Table 38 — Fail safe measures

ID

Name

Description

Measurement function

SFa-1-G

Malfunction

recoverability

Can the product automatically transition to a safe mode in case of malfunction?

X = A/B

A

= Number of risk scenarios in which the product automatically transitions to a safe mode when the scenario leads to malfunctions

B

= Number of specified risk scenarios

NOTE Malfunction means the inability of a functional unit to perform its required function under specified conditions.

7.9.5 Hazard warning measures

Hazard warning measures are used to assess the capability of a product to provide warnings of unacceptable risks to operations or internal controls so that they can react in sufficient time to sustain safe operations.

Table 39 — Hazard warning measures

ID

Name

Description

Measurement function

SHa-1-G

Hazard warning

responsiveness

Can the product provide a hazard warning when risk scenarios occur?

X = A/B

A

= Number of warnings provided within the specified safe time frame

B

= Number of warnings that are expected to be provided within the specified time frame

7.9.6 Safe integration measures

Safe integration measures are used to assess the capability of a product to maintain safety during and after integration with one or more components.

Table 40 — Safe integration measures

ID

Name

Description

Measurement function

SSi-1-G

System safety

integration

What proportion of safety-related functions operate correctly after integration with another product or system?

X = A/B

A

= Number of safety-related functions in specified product that successfully pass the safety related testing when applied to the integrated system

B

= Total number of safety-related functions in the specified product when applied to the integrated system

NOTE 1 Even if the individual systems are considered safe, the integrated system is not always safe due to interactions between them. The component can be a module, subsystem, or another system.

NOTE 2 In general, X is expected to be non-zero, as a zero value means there are system problems.


Annex A
(informative)

Considerations for the use of quality measures

This annex deals with several considerations in the selection and application of quality measures. Each quality measure defined in Clause 7 can be used for measuring inherent properties (typically static measures of intermediate products), behavioural properties (typically obtained by measuring the behaviour of the code when executed), or both.

NOTE 1 When an iterative or incremental model is applied to development or maintenance, both inherent and behavioural measures can be used for each cycle of iteration or increment. The iteratively increased or improved system/software specification, architectural design, detailed design, components, and code can be measured by review with inherent measures, while the iteratively integrated system/software can be measured with behavioural measures by executing it during build testing tasks in each iteration or increment. In cases where executable testing can be conducted very frequently during iterations or increments, behavioural measures (or quality-in-use measures) are possibly employed more than inherent measures. These quality measures can then be used repeatedly to monitor evolving quality trends through multiple iterations or increments. For example, the measured value of functional coverage, one of the quality measures for functional suitability, is possibly lower in the early iterations and is expected to increase in the later iterations.

NOTE 2 Inherent measures for performance efficiency apply to static design documents or source code. These measured values can be obtained by estimating the theoretical computation amount of designed algorithms, the number of function calls, or the steps of executable code. However, applying behavioural measures for performance efficiency to an intermediate executable prototype during design is helpful for understanding actual gaps between inherent and behavioural measures and for calibrating the estimation of inherent measures.

NOTE 3 Inherent measures for usability apply to static mock-ups of screen displays, specifications for usability design, sets of message text files, user manuals, source code for user interfaces, and so on. However, applying behavioural measures for usability to an intermediate executable prototype during development is helpful for understanding actual gaps between inherent and behavioural measures. If available, applying quality-in-use measures to executable prototypes during development is also very helpful.

In addition, the quality measures can be classified according to the recommendation level such as

— HR: highly recommended, which means “use this quality measure always”,

— R: recommended, which means “use this quality measure when appropriate”, and

— UD: used at the user’s discretion, which means “use this quality measure as a reference when developing a new quality measure” because the measure has unknown reliability.

Table A.1 represents these kinds of considerations related to the usage of each quality measure.

Table A.1 — Summary table for the usage of quality measures

| Quality characteristic | Quality subcharacteristic | ID | Quality measure name | Inherent/Behavioural/Both | Recommendation level |
|---|---|---|---|---|---|
| Functional suitability | Functional completeness | FCp-1-G | Functional completeness | Both | HR |
| | | FCp-2-G | Functional requirement completeness | Both | HR |
| | Functional correctness | FCr-1-G | Functional correctness | Both | HR |
| | | FCr-2-G | Functional accuracy | Both | HR |
| | | FCr-3-G | Functional precision | Both | HR |
| | Functional appropriateness | FAp-1-G | Functional appropriateness | Both | HR |
| Performance efficiency | Time behaviour | PTb-1-G | Mean system wait time | Both | HR |
| | | PTb-2-G | Mean response time | Both | HR |
| | | PTb-3-G | Response time adequacy | Both | R |
| | | PTb-4-G | Mean turnaround time | Both | R |
| | | PTb-5-G | Turnaround time adequacy | Both | R |
| | | PTb-6-G | Mean throughput | Both | R |
| | | PTb-7-G | Throughput adequacy | Both | R |
| | Resource utilization | PRu-1-G | Peak processor utilization | Behavioural | HR |
| | | PRu-2-G | Mean processor utilization | Behavioural | R |
| | | PRu-3-G | Peak memory utilization | Behavioural | R |
| | | PRu-4-G | Mean memory utilization | Behavioural | R |
| | | PRu-5-G | Mean wait time | Behavioural | R |
| | | PRu-6-G | Peak I/O device utilization | Behavioural | R |
| | | PRu-7-G | Mean I/O devices utilization | Behavioural | R |
| | | PRu-8-G | Peak bandwidth utilization | Behavioural | UD |
| | | PRu-9-G | Mean bandwidth utilization | Behavioural | UD |
| | | PRu-10-G | Peak energy consumption utilization | Behavioural | R |
| | | PRu-11-G | Mean energy consumption utilization | Behavioural | R |
| | | PRu-12-G | Resource utilization adequacy | Behavioural | R |
| | Capacity | PCa-1-G | Maximum capacity used | Both | R |
| | | PCa-2-G | Maximum user access capacity used | Both | R |
| | | PCa-3-G | User access adequacy | External | UD |
| | | PCa-4-G | Capacity adequacy | Both | R |
| Compatibility | Co-existence | CCo-1-G | Co-existence with other products | External | HR |
| | Interoperability | CIn-1-G | Data formats exchangeability | Both | HR |
| | | CIn-2-G | Data exchange protocol sufficiency | Both | R |
| | | CIn-3-G | External interface completeness | Both | HR |
| Interaction capability | Appropriateness recognizability | IAr-1-G | Description completeness | Both | HR |
| | | IAr-2-G | Demonstration coverage | Both | UD |
| | Learnability | ILe-1-G | User guidance completeness | Both | HR |
| | | ILe-2-G | Entry fields default | Both | R |
| | | ILe-3-G | Error messages resolvability | Both | R |
| | | ILe-4-G | Self-explanatory user interface | Both | UD |
| | Operability | IOp-1-G | Operational consistency | Both | HR |
| | | IOp-2-G | Message clarity | Both | R |
| | | IOp-3-S | Functional customizability | Both | UD |
| | | IOp-4-S | User interface customizability | Both | UD |
| | | IOp-5-S | Monitoring capability | Both | UD |
| | | IOp-6-S | Undo capability | Both | R |
| | | IOp-7-S | Understandable categorization of information | Both | R |
| | | IOp-8-S | Appearance consistency | Both | UD |
| | | IOp-9-S | Input device support | Both | UD |
| | User error protection | IEp-1-G | Avoidance of user operation error | Both | HR |
| | | IEp-2-G | Error message resolvability | Both | HR |
| | | IEp-3-G | User entry error correction | Both | R |
| | | IEp-4-G | User error recoverability | Both | R |
| | User engagement | IUe-1-G | Engaging user interfaces | Both | UD |
| | Inclusivity | IIn-1-G | Language inclusivity for the widest range of users | Both | HR |
| | | IIn-2-G | Culture inclusivity for the widest range of users | Both | HR |
| | User assistance | IUa-1-G | Assistance for users with disabilities and diverse users | Both | R |
| | Self-descriptiveness | ISd-1-G | Presentation of understandable information for user tasks | Both | R |
| Reliability | Faultlessness | RFa-1-G | Fault resolution rate | Both | HR |
| | | RFa-2-G | Mean time between failures (MTBF) | Behavioural | HR |
| | | RFa-3-G | MTBF improvement | Behavioural | R |
| | | RFa-4-G | Failure rate | Behavioural | R |
| | | RFa-5-G | Failure rate improvement | Behavioural | R |
| | Availability | RAv-1-G | Product availability | Behavioural | HR |
| | | RAv-2-G | Mean down time | Behavioural | R |
| | Fault tolerance | RFt-1-G | Fault avoidance | Behavioural | HR |
| | | RFt-2-G | Fault identification | Behavioural | HR |
| | | RFt-3-G | Redundancy of components | Both | R |
| | | RFt-4-G | Mean fault notification time | Behavioural | UD |
| | Recoverability | RRe-1-G | Mean recovery time | Behavioural | HR |
| | | RRe-2-G | Mean recovery time by component recovery level | Behavioural | HR |
| | | RRe-3-G | Backup data completeness | Both | R |
| Security | Confidentiality | SCo-1-G | Access controllability | Both | HR |
| | | SCo-2-G | Access control mechanism sufficiency | Both | HR |
| | | SCo-3-G | Data encryption correctness | Both | R |
| | | SCo-4-G | Strength of cryptographic algorithms | Both | R |
| | | SCo-5-G | One-way encryption algorithm | Both | UD |
| | | SCo-6-G | Data transmission protection | Both | R |
| | | SCo-7-S | Minimization of personal data collection | Both | R |
| | Integrity | SIn-1-G | Data integrity | Both | HR |
| | | SIn-2-G | Internal data corruption prevention | Both | R |
| | | SIn-3-G | Important executable file integrity | Both | R |
| | | SIn-4-G | Response to integrity corruption | Both | R |
| | Non-repudiation | SNo-1-G | Non-repudiation assurance | Both | R |
| | | SNo-2-G | Non-repudiation implementation completeness | Both | R |
| | | SNo-3-G | Utilization of trusted timestamps | Both | R |
| | Accountability | SAc-1-G | User audit trail completeness | Both | HR |
| | | SAc-2-G | Audit log retention | Both | R |
| | | SAc-3-G | Mechanism for audit log | Both | R |
| | Authenticity | SAu-1-G | Authentication mechanism sufficiency | Both | HR |
| | | SAu-2-G | Authentication rules conformity | Both | R |
| | | SAu-3-G | Authentication protection mechanism | Both | R |
| | Resistance | SRe-1-G | Resistance to hacker attacks | Both | HR |
| | | SRe-2-G | Use of secure middleware and operating systems | Both | R |
| | | SRe-3-G | Middleware information disclosure | Both | R |
| Maintainability | Modularity | MMo-1-G | Coupling of components | Both | R |
| | | MMo-2-G | Acceptable cyclomatic complexity | Inherent | UD |
| | Reusability | MRe-1-G | Reusability of assets | Both | HR |
| | | MRe-2-G | Coding rules conformity | Inherent | R |
| | Analysability | MAn-1-G | Product log completeness | Both | HR |
| | | MAn-2-G | Diagnosis function effectiveness | Both | R |
| | | MAn-3-G | Diagnosis function sufficiency | Both | R |
| | Modifiability | MMd-1-G | Modification efficiency | Both | HR |
| | | MMd-2-G | Modification capability | Both | UD |
| | Testability | MTe-1-G | Test function completeness | Both | R |
| | | MTe-2-G | Code coverage | Behavioural | R |
| | | MTe-3-G | Testable dependency | Behavioural | R |
| | | MTe-4-G | Test restartability | Both | UD |
| | | MTe-5-G | Test coverage | Behavioural | R |
| | | MTe-6-G | Test case execution coverage | Behavioural | R |
| | | MTe-7-G | Test case pass rate | Behavioural | R |
| Flexibility | Adaptability | FAd-1-G | Hardware environmental adaptability | Behavioural | HR |
| | | FAd-2-G | System software environmental adaptability | Behavioural | HR |
| | | FAd-3-G | Operational environment adaptability | Behavioural | UD |
| | Scalability | FSc-1-G | Scalability scale-out | Behavioural | HR |
| | | FSc-2-G | Scalability scale-up | Behavioural | HR |
| | Installability | FIn-1-G | Installation time efficiency | Behavioural | R |
| | | FIn-2-G | Installation customizability | Behavioural | R |
| | Replaceability | FRe-1-G | Usage similarity | Both | HR |
| | | FRe-2-G | Product quality equivalence | Both | R |
| | | FRe-3-G | Functional inclusiveness | Behavioural | R |
| | | FRe-4-G | Ease of data use | Behavioural | R |
| Safety | Operational constraint | SOp-1-G | Domain hazard coverage | Behavioural | HR |
| | | SOp-2-G | Coverage of successful behaviours to treat domain hazards | Both | R |
| | | SOp-3-G | Operational domain coverage | Behavioural | HR |
| | | SOp-4-G | Coverage of successful behaviours to treat hazards in the operational domain | Both | R |
| | Risk identification | SRi-1-G | Risk identification coverage | Behavioural | HR |
| | | SRi-2-G | Coverage of successful behaviours to handle risks of product operation | Both | R |
| | | SRi-3-G | Level of safety risk | Both | R |
| | | SRi-4-G | Maximum level of safety risk | Both | R |
| | Fail safe | SFa-1-G | Malfunction recoverability | Behavioural | R |
| | Hazard warning | SHa-1-G | Hazard warning responsiveness | Behavioural | HR |
| | Safe integration | SSi-1-G | System safety integration | Behavioural | R |


Annex B
(informative)

QMEs used to define product or system quality measures

Most of the QMEs used in the measurement functions of the various quality measures are already described in ISO/IEC 25021:2012, Annex A. If needed, new QMEs can be defined according to the procedure and table format provided in ISO/IEC 25021.

The following is a summary of general QMEs which are used frequently in the measurement functions of various quality measures.

NOTE For more information about QMEs, see the QME definitions in ISO/IEC 25021.

B.1 Number of functions

The count of all the functions that satisfy the condition given in the specific QME definitions.

NOTE The functions can be, for example, required, implemented, tested, essential, optional, or any combination of these and more.
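As a non-normative illustration of how function-count QMEs feed a measurement function, the sketch below combines a hypothetical count of implemented functions with a count of required functions into a simple coverage ratio. The function name and the formula are illustrative assumptions, not the normative FCp measure from this document.

```python
# Illustrative (non-normative) sketch: combining two "number of functions"
# QMEs into a ratio-style quality measure. Names and formula are hypothetical.

def functional_completeness(num_implemented: int, num_required: int) -> float:
    """Ratio of implemented functions to required functions (0.0 to 1.0)."""
    if num_required <= 0:
        raise ValueError("number of required functions must be positive")
    return num_implemented / num_required

# e.g. 45 of 50 required functions implemented
print(functional_completeness(45, 50))  # 0.9
```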

B.2 Number of failures

The count of all failures which occur in a given period and which also satisfy the condition given in the specific QME definitions.

Examples of QMEs: number of expected failures, number of detected failures, number of resolved failures, and number of failures of a given severity level.

B.3 Number of faults

The count of software product faults detected (or estimated) in a given software product component that satisfy the condition given in the specific QME definitions, e.g. number of faults of a given category, number of faults of a given severity, or number of faults successfully corrected.

B.4 Number of hazards

The count of all hazards that occur within a given period and satisfy the conditions given in the specific QME definitions, e.g. fall hazards at construction sites, chemical spills occurring during the product manufacturing process, or potential defects in automobiles.

B.5 Product size

The count of software product components according to a desired criterion. This can be lines of code (LOC), function points, modules, classes, or visual structures such as diagrams or their parts.

NOTE Software product components can be counted only if some additional properties are satisfied, e.g. only executable lines of code, lines of code which also contain commenting, only comment lines, declaration or typecasting, bracket/braces only, etc.

B.6 Duration

Refers to the interval between the starting time and the end time of any process described in the specific QME definitions (duration = end time – start time). For example,

— execution time: refers to time measured internally by the computer clock, e.g. CPU time, I/O time, etc., or time measured via inserted code or software tools (e.g. test suites);

— observation time: refers to time measured externally by the observer, using an external clock, e.g. time to finish a transaction or user task;

— set-up time: a fixed time that is independent of the process or observation, but meaningful for a measure that is independent of action, e.g. a required response time.
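The duration QME can be sketched as follows; the `measure_duration` helper, its `task` argument, and the use of a wall-clock timer are illustrative assumptions, not part of any specific QME definition.

```python
import time

# Minimal sketch of the "duration" QME: duration = end time - start time.
# The observed quantity here is wall-clock execution time of a task; the
# task callable stands in for whatever process a QME definition names.

def measure_duration(task) -> float:
    """Return the elapsed time, in seconds, taken to execute `task`."""
    start = time.perf_counter()   # starting time
    task()                        # the process being observed
    end = time.perf_counter()     # end time
    return end - start

elapsed = measure_duration(lambda: sum(range(100_000)))
print(elapsed >= 0.0)  # True: a duration is non-negative
```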

B.7 Number of test cases

Refers to the count of different test input data and scenarios that satisfy the condition given in the specific QME definitions, e.g. test cases designed, required, executed (successfully or failed), etc.

B.8 Number of restarts

Counts the number of attempts for a system to resume computation after a critical failure that satisfy the condition given in the specific QME definitions. A distinction can be made between system restart and recovery.

B.9 Number of I/O

The count of I/O events that satisfy the condition given in the specific QME definitions. I/O events are distinguished from system messages to the observer. A distinction can be made between:

— interaction: an exchange between the observer and the system, e.g. a dialogue;

— transaction: the sequence of interactions between the observer and system that must be executed atomically to accomplish an operation, e.g. a wizard (with options).

B.10 Number of jobs

A job is the set or sequence of activities required to achieve a specific role or function within an organization or project. The number of jobs is the count of these roles or functions that satisfy the conditions specified in the specific QME definitions.

B.11 Number of tasks

A task is the set or sequence of activities required to achieve a given goal. The number of tasks is the count of the tasks that satisfy the condition given in the specific QME definitions. A distinction can be made between:

— user tasks: activities performed by the user (using software product) towards a specified goal;

— system tasks: activities performed by the system to support the user.

B.12 Number of user attempts (trials)

The count of attempts to perform the same operation that satisfy the condition given in the specific QME definitions. These attempts can be:

— evaluation: iterations with the same input and same scenario (e.g. stress testing);

— cases: iterations with different inputs and/or different scenarios.

B.13 Number of data items

The count of different structures, classes, or formats of data that satisfy the condition given in the specific QME definitions.

B.14 Number of records

The count of records of the same structure, class, or format that satisfy the conditions given in the specific QME definitions.

B.15 Number of responses

The count of responses to perform specified tasks under specified conditions that satisfy the conditions given in the specific QME definitions.

B.16 Number of results

The count of results of a specific function that satisfy the conditions given in the specific QME definitions.

B.17 Number of requirements

The count of requirement clauses that satisfy the condition given in the specific QME definitions. For example,

— functional requirements: refer to the requirements that specify what the system is required to do, how it is required to process data, and any constraints that are required to be adhered to when interacting with other systems or performing specific tasks (e.g. time limits).

NOTE The requirements can be, for example, essential, optional, validated, or any combination of these and more.

B.18 Number of user operations

The count of operations performed by the user that satisfy the condition given in the specific QME definitions, where an operation is a sequence of steps required to perform a task.

B.19 Number of system operations

The count of complete operations performed by the system that satisfy the condition given in the specific QME definitions.

NOTE This QME counts the number of complete operations, not the individual steps required in each operation.

B.20 Number of languages

The count of different languages supported by the system or software product to be used to perform the intended user functions.

B.21 Number of software modules

The count of software components that work independently from one another. Conceptually, modules represent a separation of concerns and improve maintainability by enforcing logical boundaries between components.

B.22 Number of interfaces

The count of shared boundaries across which two separate components of computer systems exchange information. The exchange can be between software, hardware, peripheral devices, humans, and combinations of these.


Annex C
(informative)

Detailed explanation of measurement types

C.1 General

To design a procedure for collecting data, interpreting the results fairly, and normalizing measures for comparison, it is good practice for a user of measures to identify and take into account the type of measurement employed by a quality measure.

NOTE For most behavioural measures, testing can be carried out to collect the input data for measurement function. The measurement types explained in this Annex are closely related to the test design techniques and the types of testing defined in ISO/IEC/IEEE 29119-4. The quality-related types of testing and mapping quality characteristics/subcharacteristics to types of testing are described in detail in ISO/IEC/IEEE 29119-4:2015, Annex A.

C.2 Size measure type

C.2.1 General

A measure of this type represents a particular size of software according to what it claims to measure within its definition.

NOTE Software can have many representations of size (just as any entity can be measured in more than one dimension: mass, volume, surface area, etc.).

Normalizing other measures with a size measure can give comparable values in terms of units of size. The size measures described below can be used for software quality measurement.
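For instance, normalizing a fault count by a size measure yields a density that is comparable across products of different sizes. The "faults per KLOC" normalization below is a common but non-normative example; the helper name and input values are illustrative.

```python
# Sketch of normalizing a count measure by a size measure so that products
# of different sizes become comparable. "Faults per KLOC" (thousand lines
# of code) is one widely used, non-normative normalization.

def fault_density_per_kloc(num_faults: int, lines_of_code: int) -> float:
    """Faults per thousand lines of code."""
    if lines_of_code <= 0:
        raise ValueError("size must be positive")
    return num_faults / (lines_of_code / 1000)

# e.g. 30 faults found in a 60 000-line product
print(fault_density_per_kloc(30, 60_000))  # 0.5 faults per KLOC
```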

C.2.2 Functional size type

Functional size is an example of one type of size (one dimension) that software can have. Any one instance of software can have more than one functional size depending on, for example:

— the purpose for measuring the software size (it influences the scope of the software included in the measurement);

— the particular functional sizing method used (it will change the units and scale).

The definition of the concepts and process for applying a functional size measurement method (FSM Method) is provided by ISO/IEC 14143-1.

To use functional size for normalization, it is necessary to ensure that the same functional sizing method is used and that the different software being compared have been measured for the same purpose and consequently have a comparable scope.

Although the following are often claimed to represent functional sizes, it is not guaranteed that they are equivalent to the functional size obtained from applying an FSM method compliant with ISO/IEC 14143-1. However, they are widely used in software development:

— number of spreadsheets;

— number of screens;

— number of files or data sets that are processed;

— number of itemized functional requirements described in user requirements specifications.

C.2.3 Program size type

In this clause, the term “programming” represents the expressions that when executed result in actions, and the term “language” represents the type of expression used.

C.2.3.1 Source program size

The programming language is expected to be stated, together with how non-executable statements, such as comment lines, are treated. The following measure is commonly used.

Non-comment source statements (NCSS) include executable statements and data declaration statements, counted as logical source statements.

NOTE 1 New program size: A developer can use a newly developed program size to represent development and maintenance work product size.

NOTE 2 Changed program size: A developer can use changed program size to represent the size of software containing modified components.

It might be necessary to distinguish the types of statements of source code in more detail, as follows:

— Statement type

— Logical Source Statement (LSS). The LSS measures the number of software instructions. The statements are irrespective of their relationship to lines and independent of the physical format in which they appear.

— Physical Source Statement (PSS). The PSS measures the number of software source lines of code.

— Statement attribute

— Executable statements

— Data declaration statements

— Compiler directive statements

— Comment source statements

— Origin

— Modified source statements

— Added source statements

— Removed source statements

— Newly developed source statements: (= added source statements + modified source statements)

— Reused source statements: (= original - modified - removed source statements)
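The two derived origin counts above reduce to simple arithmetic. In the sketch below, the function names are illustrative and the counts passed in are hypothetical values such as might be obtained from a diff between two source versions.

```python
# Sketch of the origin-based statement arithmetic:
#   newly developed = added + modified
#   reused          = original - modified - removed
# Input counts are hypothetical, e.g. taken from a diff of two versions.

def newly_developed(added: int, modified: int) -> int:
    """Newly developed source statements."""
    return added + modified

def reused(original: int, modified: int, removed: int) -> int:
    """Source statements carried over unchanged from the original."""
    return original - modified - removed

print(newly_developed(120, 80))  # 200
print(reused(1000, 80, 50))      # 870
```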

C.2.3.2 Program word count size

The measurement can be performed in the following manner using the Halstead’s measure:

Program vocabulary = n1 + n2; Observed program length = N1 + N2,

where

n1 is the number of distinct operators (i.e. the number of distinct operator words that are prepared and reserved by the programming language in a program source code);

n2 is the number of distinct operands (i.e. the number of distinct operand words that are defined by the programmer in a program source code);

N1 is the total number of operators (i.e. the number of occurrences of distinct operators in a program source code);

N2 is the total number of operands (i.e. the number of occurrences of distinct operands in a program source code).
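Given source code already tokenized into operators and operands (tokenization is language-specific and assumed to have been done elsewhere), Halstead's vocabulary and observed length follow directly; the token lists below are illustrative.

```python
# Sketch of Halstead's size measures from pre-tokenized source code.
# Splitting a program into operator and operand tokens is language-specific
# and assumed done beforehand; the sample tokens below are illustrative.

def halstead_size(operators, operands):
    n1 = len(set(operators))   # distinct operators
    n2 = len(set(operands))    # distinct operands
    N1 = len(operators)        # total operator occurrences
    N2 = len(operands)         # total operand occurrences
    vocabulary = n1 + n2       # program vocabulary
    observed_length = N1 + N2  # observed program length
    return vocabulary, observed_length

# e.g. tokens of the statement: x = x + 1
ops = ["=", "+"]
opnds = ["x", "x", "1"]
print(halstead_size(ops, opnds))  # (4, 5)
```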

C.2.3.3 Number of modules

The measurement is counting the number of independently executable objects such as modules of a program.

C.2.4 Utilized resource measure type

This type identifies resources utilized by the operation of the software being evaluated. Examples are the following:

a) amount of memory, for example, the amount of disk or memory occupied temporarily or permanently during the software execution;

b) I/O load, for example, the amount of traffic of communication data (meaningful for backup tools on a network);

c) CPU load, for example, the percentage of occupied CPU instruction sets per second (this measure type is meaningful for measuring CPU utilization and efficiency of process distribution in multi-thread software running on concurrent/parallel systems);

d) files and data records, for example, length in bytes of files or records;

e) documents, for example, the number of document pages.

It might be important to take note of peak (maximal), minimum, and average values, as well as the periods and the number of observations made.

C.2.5 Specified operating procedure step type

This type identifies static steps of procedures that are specified in a human-interface design specification or a user manual.

The measured value can differ depending on what kinds of descriptions are used for measurement, such as a diagram or a text representing user operating procedures.

C.3 Time measure type

C.3.1 General

The user of measures of the time measure type is expected to record the periods, how many sites were examined, and how many users took part in the measurements. There are many ways in which time can be measured as a unit, as the following examples show.

a) Real time unit

This is physical time, e.g. a second, minute, or hour. This unit is usually used for describing the task processing time of real-time software.

b) Computer machinery time unit

This is the computer processor's clock time, e.g. a second, minute, or hour of CPU time.

c) Official scheduled time unit

This includes working hours, calendar days, months, or years.

d) Component time unit

When there are multiple sites, component time identifies individual sites and it is an accumulation of individual time for each site. This unit is usually used for describing component reliability, for example, component failure rate.

e) System time unit

When there are multiple sites, system time does not identify individual sites but identifies all the sites running, as a whole in one system. This unit is usually used for describing system reliability, for example, system failure rate.

C.3.2 System operation time type

System operation time type provides a basis for measuring software availability. This is mainly used for reliability evaluation. It is good practice to identify whether the software is under discontinuous or continuous operation. If the software operates discontinuously, it is better to ensure that the time measurement is done during the periods the software is active (this extends to continuous operation).

a) Elapsed time

When the use of software is constant, for example, in systems operating for the same length of time each week.

b) Machine powered-on time

For real time, embedded, or operating system software that is in full use the whole time the system is operational.

c) Normalized machine time

As in “machine powered-on time”, but pooling data from several machines with different powered-on times and applying a correction factor.

C.3.3 Execution time type

Execution time type is the time that is needed to execute software to complete a specified task. It is good practice to analyse the distribution of several attempts and to compute mean, deviation or maximal values. It is also good practice to examine the execution under specific conditions, particularly overloaded conditions. Execution time type is mainly used for efficiency evaluation.

C.3.4 User time type

User time type is measured upon periods spent by individual users on completing tasks by using operations of the software. Some examples are as follows:

a) Session time

Measured between the start and end of a session. Useful, for example, for characterizing the behaviour of users of a home banking system, or for an interactive program where idle time is of no interest or where only interactive usability problems are to be studied.

b) Task time

Time spent by an individual user to accomplish a task by using operations of the software on each attempt. It is good practice to make well-defined start and end points of the measurement.

c) User time

Time spent by an individual user using the software, measured from a defined starting point in time. (Approximately, it is how many hours or days the user has used the software from the beginning.)

C.3.5 Effort type

Effort type is the productive time associated with a specific project task.

a) Individual effort

This is the productive time that is needed for the person who is a developer, maintainer, or operator to work to complete a specified task. Individual effort assumes only a certain number of productive hours per day.

b) Task effort

Task effort is an accumulated value of all the individual project personnel: developer, maintainer, operator, user, or others who worked to complete a specified task.

C.3.6 Time interval of events type

This measure type is the time interval between one event and the next one during an observation period. The frequency of an observation period can be used in place of this measure. This is typically used for describing the time between failures occurring successively.
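A minimal sketch of this measure type, assuming failure timestamps recorded in hours from the start of observation (the timestamp values are illustrative):

```python
# Sketch: inter-event intervals from a list of event timestamps, e.g.
# successive failures. Timestamps are hours from the start of observation.

def intervals(event_times):
    """Time between each event and the next one."""
    return [b - a for a, b in zip(event_times, event_times[1:])]

failure_times = [10.0, 35.0, 95.0]   # illustrative failure timestamps (hours)
gaps = intervals(failure_times)
print(gaps)                          # [25.0, 60.0]
print(sum(gaps) / len(gaps))         # mean time between failures: 42.5
```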

C.4 Count measure type

If attributes of documents of the software product are counted, they are static count types. If events or human actions are counted, they are kinetic count types.

C.4.1 Number of detected fault type

The measurement counts the detected faults during reviewing, testing, correcting, operating, or maintaining. Severity levels can be used to categorize them to take into account the impact of the fault.

C.4.2 Program structural complexity number type

The measurement counts the program's structural complexity. Examples are the number of distinct paths or McCabe’s cyclomatic number.
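McCabe's cyclomatic number can be computed from a control-flow graph as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. The sketch below assumes the edge and node counts have already been extracted from the code.

```python
# Sketch of McCabe's cyclomatic number from a control-flow graph (CFG):
#   V(G) = E - N + 2P
# E = edges, N = nodes, P = connected components (1 for a single routine).

def cyclomatic_number(edges: int, nodes: int, components: int = 1) -> int:
    return edges - nodes + 2 * components

# Illustrative CFG of a single if/else: entry -> then/else -> exit
# gives 4 nodes and 4 edges.
print(cyclomatic_number(edges=4, nodes=4))  # 2 (two independent paths)
```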

C.4.3 Number of detected inconsistency type

This measure counts the detected inconsistent items that are prepared for the investigation.

a) Number of failed conforming items

EXAMPLE  

— conformance to specified items of requirements specifications;

— conformance to rules, regulations, or standards;

— conformance to protocols, data formats, media formats, and character codes.

b) Number of failed instances of user expectation

The measurement is to count satisfied/unsatisfied list items, which describe gaps between the user’s reasonable expectations and software product performance.

The measurement uses questionnaires to be answered by testers, customers, operators, or end users on what deficiencies were discovered.

The following are examples:

— Function available or not;

— Function effectively operable or not;

— Function operable to user’s specific intended use or not;

— Function is expected, needed, or not needed.

C.4.4 Number of changes type

This type identifies software configuration items that are detected to have been changed. An example is the number of changed lines of source code.

C.4.5 Number of detected failures type

The measurement counts the detected number of failures during product development, testing, operation, or maintenance. Severity levels can be used to categorize them to take into account the impact of the failure.

C.4.6 Number of attempts (trial) type

This measure counts the number of attempts at correcting the defect or fault. For example, during reviews, testing, and maintenance.

C.4.7 Stroke of human operating procedure type

This measure counts the number of strokes of user action as kinetic steps of a procedure when a user is interactively operating the software. This measure quantifies ergonomic usability as well as the effort to use the software. Therefore, it is used in usability measurement. Examples are the number of strokes to perform a task, the number of eye movements, etc.

C.4.8 Score type

This type identifies the score or the result of an arithmetic calculation. The score can include counting or calculation of weights checked on/off on checklists. Examples are the score of the checklist, the score of the questionnaire, the Delphi method, etc.
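A weighted checklist score of this type can be sketched as follows; the weights, checked states, and the achieved-over-total formula are illustrative assumptions, not a normative scoring rule.

```python
# Sketch of a weighted checklist score: each item carries a weight and a
# checked-on/off state; the score is the fraction of total weight achieved.
# Weights and states below are illustrative.

def checklist_score(items):
    """items: list of (weight, checked) pairs; returns achieved/total weight."""
    total = sum(w for w, _ in items)
    achieved = sum(w for w, checked in items if checked)
    return achieved / total if total else 0.0

items = [(3, True), (2, False), (5, True)]
print(checklist_score(items))  # 0.8
```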


Annex D
(informative)

Application of quality measures at different stages

The quality measures in this document are designed to be used during the development phase, which consists of the requirement, analysis and design, implementation, and testing stages. Some quality measures are also relevant to the operational phase, even though quality-in-use measures are defined in ISO/IEC 25019. Table D.1 provides guidance on when each quality measure can be used.

Table D.1 — Applicable stages of quality measures

ID

Quality measure

name

Product Quality

Quality-in-Use

Analysis/Design

Implementation

Testing

Operation

FCp-1-G

Functional completeness

 

O

 

 

FCp-2-G

Functional requirement completeness

 

O

 

 

FCr-1-G

Functional correctness

 

 

O

O

FCr-2-G

Functional accuracy

 

 

O

O

FCr-3-G

Functional precision

 

 

O

O

FAp-1-G

Functional appropriateness

 

 

O

O

PTb-1-G

Mean system wait time

 

 

O

O

PTb-2-G

Mean response time

 

 

O

O

PTb-3-G

Response time adequacy

 

 

O

O

PTb-4-G

Mean turnaround time

 

 

O

O

PTb-5-G

Turnaround time adequacy

 

 

O

O

PTb-6-G

Mean throughput

 

 

O

O

PTb-7-G

Throughput adequacy

 

 

O

O

PRu-1-G

Peak processor utilization

 

 

O

O

PRu-2-G

Mean processor utilization

 

 

O

O

PRu-3-G

Peak memory utilization

 

 

O

O

PRu-4-G

Mean memory utilization

 

 

O

O

PRu-5-G

Mean wait time

 

 

O

O

PRu-6-G

Peak I/O device utilization

 

 

O

O

PRu-7-G

Mean I/O devices utilization

 

 

O

O

PRu-8-G

Peak bandwidth utilization

 

 

O

O

PRu-9-G

Mean bandwidth utilization

 

 

O

O

PRu-10-G

Peak energy consumption utilization

 

 

O

O

PRu-11-G

Mean energy consumption utilization

 

 

O

O

PRu-12-G

Resource utilization adequacy

 

 

O

O

PCa-1-G

Maximum capacity used

 

 

O

O

PCa-2-G

Maximum user access capability used

 

 

O

O

PCa-3-G

User access adequacy

 

 

O

O

PCa-4-G

Capacity adequacy

 

 

O

O

CCo-1-G

Co-existence with other products

 

 

O

O

CIn-1-G

Data formats exchangeability

 

 

O

O

CIn-2-G

Data exchange protocol sufficiency

 

 

O

O

CIn-3-G

External

interface completeness

 

O

 

 

IAr-1-G

Description completeness

 

O

O

 

IAr-2-G

Demonstration coverage

 

 

O

O

ILe-1-G

User guidance completeness

 

 

O

O

ILe-2-G

Entry fields default

 

 

O

O

ILe-3-G

Error messages resolvability

 

 

O

O

ILe-4-G

Self-explanatory user interface

 

 

O

O

IOp-1-G

Operational consistency

 

 

O

O

IOp-2-G

Message clarity

 

 

O

O

IOp-3-S

Functional customizability

 

 

O

O

IOp-4-S

User interface customizability

 

 

O

O

IOp-5-S

Monitoring capability

 

 

O

O

IOp-6-S

Undo capability

 

 

O

O

IOp-7-S

Understandable categorization of information

 

 

O

O

IOp-8-S

Appearance consistency

 

 

O

O

IOp-9-S

Input device support

 

 

O

O

IEp-1-G

Avoidance of user operation error

 

 

O

O

IEp-2-G

Error message resolvability

 

 

O

O

IEp-3-G

User entry error correction

 

 

O

O

IEp-4-G

User error recoverability

 

 

O

O

IUe-1-G

Engaging user interfaces

 

 

O

O

IIn-1-G

Language inclusivity for the widest range of users

 

 

O

O

IIn-2-G

Culture inclusivity for the widest range of users

 

 

O

O

IUa-1-G

Assistance for users with disabilities and diverse users

 

 

O

O

ISd-1-G

Presentation of understandable information for user tasks

 

 

O

O

RFa-1-G

Fault resolution rate

O

O

 

 

RFa-2-G

Mean time between failures (MTBF)

 

 

O

O

RFa-3-G

MTBF improvement

 

 

O

O

RFa-4-G

Failure rate

 

 

O

O

RFa-5-G

Failure rate improvement

 

 

O

O

RAv-1-G

Product availability

 

 

O

O

RAv-2-G

Mean down time

 

 

O

O

RFt-1-G

Fault avoidance

 

 

O

O

RFt-2-G

Fault identification

O

O

 

 

RFt-3-G

Redundancy of components

 

 

O

O

RFt-4-G

Mean fault notification time

 

 

O

O

RRe-1-G

Mean recovery time

 

 

O

O

RR3-2-G

Mean recovery time by component recovery level

 

 

O

O

RRe-3-G

Backup data completeness

 

 

O

O

Security measures

SCo-1-G   Access controllability
SCo-2-G   Access control mechanism sufficiency
SCo-3-G   Data encryption correctness
SCo-4-G   Strength of cryptographic algorithms
SCo-5-G   One-way encryption algorithm
SCo-6-G   Data transmission protection
SCo-7-S   Minimization of personal data collection
SIn-1-G   Data integrity
SIn-2-G   Internal data corruption prevention
SIn-3-G   Important executable file integrity
SIn-4-G   Response to integrity corruption
SNo-1-G   Non-repudiation assurance
SNo-2-G   Non-repudiation implementation completeness
SNo-3-G   Utilization of trusted timestamps
SAc-1-G   User audit trail completeness
SAc-2-G   Audit log retention
SAc-3-G   Mechanism for audit log
SAu-1-G   Authentication mechanism sufficiency
SAu-2-G   Authentication rules conformity
SAu-3-G   Authentication protection mechanism
SRe-1-G   Resistance to hacker attacks
SRe-2-G   Use of secure middleware and operating systems
SRe-3-G   Middleware information disclosure

Maintainability measures

MMo-1-G   Coupling of components
MMo-2-G   Acceptable cyclomatic complexity
MRe-1-G   Reusability of assets
MRe-2-G   Coding rules conformity
MAn-1-G   Product log completeness
MAn-2-G   Diagnosis function effectiveness
MAn-3-G   Diagnosis function sufficiency
MMd-1-G   Modification efficiency
MMd-2-G   Modification capability
MTe-1-G   Test function completeness
MTe-2-G   Code coverage
MTe-3-G   Testable dependency
MTe-4-G   Test restartability
MTe-5-G   Test coverage
MTe-6-G   Test case execution coverage
MTe-7-G   Test case pass rate

Flexibility measures

FAd-1-G   Hardware environmental adaptability
FAd-2-G   System software environmental adaptability
FAd-3-G   Operational environment adaptability
FSc-1-G   Scalability scale-out
FSc-2-G   Scalability scale-up
FIn-1-G   Installation time efficiency
FIn-2-G   Installation customizability
FRe-1-G   Usage similarity
FRe-2-G   Product quality equivalence
FRe-3-G   Functional inclusiveness
FRe-4-G   Ease of data use

Safety measures

SOp-1-G   Domain hazard coverage
SOp-2-G   Coverage of successful behaviours to treat domain hazards
SOp-3-G   Operational domain coverage
SOp-4-G   Coverage of successful behaviours to treat hazards in the operational domain
SRi-1-G   Risk identification coverage
SRi-2-G   Coverage of successful behaviours to handle risks of product operation
SRi-3-G   Level of safety risk
SRi-4-G   Maximum level of safety risk
SFa-1-G   Malfunction recoverability
SHa-1-G   Hazard warning responsiveness
SSi-1-G   System safety integration

Bibliography

[1] ISO 9241‑11:2018, Ergonomics of human-system interaction — Part 11: Usability: Definitions and concepts

[2] ISO 9241‑110:2020, Ergonomics of human-system interaction — Part 110: Interaction principles

[3] ISO/IEC 14143, Information technology — Software measurement — Functional size measurement

[4] ISO/IEC 14143‑1, Information technology — Software measurement — Functional size measurement — Part 1: Definition of concepts

[5] ISO/IEC/IEEE 15939:2017, Systems and software engineering — Measurement process

[6] ISO/IEC 25012, Software engineering — Software product Quality Requirements and Evaluation (SQuaRE) — Data quality model

[7] ISO/IEC 25020:2019, Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality measurement framework

[8] ISO/IEC 25022, Systems and software engineering — Systems and software quality requirements and evaluation (SQuaRE) — Measurement of quality in use

[9] ISO/IEC 25024, Systems and software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Measurement of data quality

[10] ISO/IEC 25030, Systems and software engineering — Systems and software quality requirements and evaluation (SQuaRE) — Quality requirements framework

[11] ISO/IEC/IEEE 24765:2017, Systems and software engineering — Vocabulary

[12] ISO/IEC/IEEE 29119‑4:2021, Software and systems engineering — Software testing — Part 4: Test techniques

[13] U.S. Department of Health and Human Services. The Research-Based Web Design & Usability Guidelines, Enlarged/Expanded edition. U.S. Government Printing Office, Washington, 2006

[14] OMG. CISQ Specification for Automated Quality Characteristic Measures, CISQ-TR-2012-01, 2012
