prEN 50600-1:2026
prEN 50600-1:2026: Information technology - Data centre facilities and infrastructures - Part 1: General concepts

CLC/TC 215

Date: 20YY-XX

prEN 50600‑1:2026

Secretariat: XXX

(Title) Introductory element — Main element — Complementary element

Einführendes Element — Haupt-Element — Ergänzendes Element

Élément introductif — Élément central — Élément complémentaire

CCMC will prepare and attach the official title page.

Contents Page

European foreword 4

Introduction 5

1 Scope 7

2 Normative references 7

3 Terms, definitions and abbreviations 7

3.1 Terms and definitions 7

3.2 Abbreviations 12

4 Conformance 12

5 Business risk analysis 13

5.1 General 13

5.2 Business impact analysis 13

5.3 Risk analysis 14

6 Data centre design overview 15

6.1 General 15

6.2 Spaces and facilities 16

7 Classification system for the design of data centre facilities and infrastructures 18

7.1 General 18

7.2 Availability 18

7.2.1 General 18

7.2.2 Single-site data centres 18

7.2.3 Multi-site data centres 21

7.3 Physical security 21

7.3.1 General 21

7.3.2 Protection against unauthorized access 22

7.3.3 Protection against intrusion 22

7.3.4 Protection against environmental events 22

7.4 Resource and energy efficiency enablement 23

7.4.1 General 23

7.4.2 Power distribution system 23

7.4.3 Environmental control 24

7.4.4 Operational processes and KPIs 24

7.4.5 Maturity Level for energy and resource efficiency 24

8 Design and implementation process 24

8.1 General 24

8.2 Design phases 25

8.2.1 Phase 1 - Strategy 25

8.2.2 Phase 2 - Objectives 26

8.2.3 Phase 3 - System specifications 26

8.2.4 Phase 4 - Design proposal 26

8.2.5 Phase 5 - Decision 27

8.2.6 Phase 6 - Functional design 27

8.2.7 Phase 7 - Approval 27

8.2.8 Phase 8 - Final design and project plan 27

8.2.9 Phase 9 - Contract 27

8.2.10 Phase 10 – Construction and acceptance testing 28

8.2.11 Phase 11 - Operation 28

9 Design principles to support energy efficiency, resource efficiency and environmental sustainability 28

9.1 Design reference documentation 28

9.2 Design principles to support energy efficiency, resource efficiency and environmental sustainability 28

9.3 Design principles for EMI 28

9.4 Design principles to support operational excellence 28

9.5 Design principles to improve resilience 29

Annex A (informative) Availability classes, resilience and efficiency 30

Annex B (informative) Availability and resilience criteria 35

Bibliography 37

Tables

Table 1 — Availability Classes and example implementations 20

Table A.1 — Availability and annual downtime 32

Table B.1 — Summary of availability classification for power supply, power distribution and environmental control 35

Table B.2 — Summary of availability classification for telecommunications cabling 36

Figures

Figure 1 — Schematic relationship between EN 50600 series of documents 7

Figure 2 — Example of risk map 16

Figure 3 — Typical schematic diagram of premises containing a data centre 18

Figure 4 — Design phases 26

Figure A.1 — Unplanned disruptions — duration vs. cost 32

European foreword

This document (prEN 50600-1:2026) has been prepared by CLC/TC 215 “Electrotechnical aspects of telecommunication equipment”.

This document is currently submitted to the Enquiry.

The following dates are proposed:

— latest date by which the existence of this document has to be announced at national level (doa): dav + 6 months;

— latest date by which this document has to be implemented at national level by publication of an identical national standard or by endorsement (dop): dav + 12 months;

— latest date by which the national standards conflicting with this document have to be withdrawn (dow): dav + 36 months (to be confirmed or modified when voting).

This document will supersede EN 50600-1:2019.

prEN 50600-1:2026 includes the following significant technical changes with respect to EN 50600-1:2019:

a) the whole document has been revised technically and editorially, aligning with EN 50600-2-2 and EN 50600-2-3;

b) resource enablement aspects included;

c) Clause 3 updated and aligned with terms and definitions of CLC/TS 50600‑4‑31 and implemented throughout the document;

d) Clause 5 updated to include e.g. resilience analysis aspects according to CLC/TS 50600-4-31 and improved terminology (“services” replaced with “functional capability”);

e) Clause 7 revised, in particular regarding the application of the data centre maturity model of CLC/TS 50600-5-1 and 7.2.3 on multi-site data centres;

f) Clause 8 design phases 1 to 4, 6 and 10 revised and aligned with EN 50600-2-5;

g) Clause 9 updated and supplemented with new 9.5 on design principles for improved resilience;

h) Annex A completely revised considering the specifications of CLC/TS 50600-4-31;

i) Annex B contains a new Table B.2 summarizing the availability classification for telecommunications cabling.

Introduction

The unrestricted access to internet-based information demanded by the information society has led to an exponential growth of both internet traffic and the volume of stored/retrieved data. Data centres house and support the information technology and network telecommunications equipment used for data processing, data storage and data transport. They are required both by network operators (delivering those services to customer premises) and by enterprises within those customer premises.

Data centres usually need to provide modular, scalable and flexible facilities and infrastructures to easily accommodate the rapidly changing requirements of the market. In addition, the energy consumption and water/resource usage of data centres have become critical both from an environmental point of view (reduction of environmental footprint) and with respect to economic considerations (cost of energy) for the data centre operator.

The implementation of data centres varies in terms of:

a) purpose (enterprise, co-location, co-hosting or network operator facilities);

b) security level;

c) physical size;

d) accommodation (mobile, temporary and permanent constructions).

The needs of data centres also vary in terms of availability of service, the provision of security and the objectives for energy efficiency. These needs and objectives influence the design of data centres in terms of building construction, power distribution, environmental control, telecommunications cabling and physical security as well as the operation of the data centre. Effective management and operational information is required to monitor achievement of the defined needs and objectives.

Recognizing the substantial resource consumption, particularly of energy, of larger data centres, it is also important to provide tools for the assessment of that consumption both in terms of overall value and of source mix and to provide Key Performance Indicators (KPIs) to evaluate trends and drive performance improvements.

At the time of publication of this document, the EN 50600 series has been designed as a framework of standards, technical specifications and technical reports covering the design, the operation and management, and the key performance indicators for energy-efficient operation of data centres, as well as a maturity model for energy management and environmental sustainability.

This series of documents specifies requirements and recommendations to support the various parties involved in the design, planning, procurement, integration, installation, operation and maintenance of facilities and infrastructures within data centres. These parties include:

1) owners, operators, facility managers, ICT managers, project managers, main contractors;

2) consulting engineers, architects, building designers and builders, system and installation designers, auditors, test and commissioning agents;

3) facility and infrastructure integrators, suppliers of equipment;

4) installers, maintainers.

This document is intended for use by, and collaboration between, all parties involved; it should be used at least by consulting engineers, architects, building designers and builders, and system and installation designers.

The inter-relationship of the documents within the EN 50600 series is shown in Figure 1.

Figure 1 — Schematic relationship between EN 50600 series of documents

EN 50600-1 introduces the general concepts relevant for the design and operation of data centres.

EN 50600-2-X documents define the requirements for the data centre design and specify requirements and recommendations for particular facilities and infrastructures to support the relevant classification for “availability”, “physical security” and “energy efficiency enablement” selected from EN 50600-1.

EN 50600-3-1 specifies requirements and recommendations for data centre operations, processes and management.

EN 50600-4-X documents specify requirements and recommendations for key performance indicators (KPIs) used to assess and improve resource usage efficiency and effectiveness, as well as criteria for the resilience of a data centre.

CLC/TS 50600-5-1 specifies the maturity model for energy management and environmental sustainability and refers amongst others to EN 50600-4-X for KPIs as appropriate.

This document, EN 50600-1, specifies general requirements for all kinds of data centres irrespective of their size and physical construction. It introduces a classification system for availability, physical security and energy efficiency enablement based on business risk/impact analysis outcome.

This series of documents does not address the selection of information technology and network telecommunications equipment, software and associated configuration issues.

1 Scope

This document:

a) describes the general principles for data centres upon which the requirements of the EN 50600 series are based;

b) defines the common aspects of data centres including terminology, parameters and reference models (functional elements and their accommodation) addressing both the size and complexity of their intended purpose;

c) describes general aspects of the facilities and infrastructures required to support data centres;

d) specifies a classification system, based upon the key criteria of “availability”, “security” and “resource and energy efficiency enablement” over the planned lifetime of the data centre, for the provision of effective facilities and infrastructure;

e) details the issues to be addressed in a business risk and operating cost analysis enabling application of the classification of the data centre;

f) provides reference to documentation, operation and management of data centres;

g) introduces the concepts of Key Performance Indicators (KPIs) for resource management and resilience of data centre facilities and infrastructures;

h) defines the use of an environmental sustainability strategy.

The following topics are outside of the scope of this series of documents:

1) the selection of information technology and network telecommunications equipment, software and associated configuration issues;

2) quantitative analysis of overall service availability resulting from multi-site data centres;

3) safety and electromagnetic compatibility (EMC) requirements, which are covered by other standards and regulations (however, information given in this document can be of assistance in meeting those standards and regulations).

2 Normative references

The following documents are referred to in the text in such a way that some or all of their content constitutes requirements of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.

CLC/TS 50600-5-1, Information technology - Data centre facilities and infrastructures - Part 5-1: Maturity Model for Energy Management and Environmental Sustainability

3 Terms, definitions and abbreviations

3.1 Terms and definitions

For the purposes of this document, the following terms and definitions apply.

ISO and IEC maintain terminological databases for use in standardization at the following addresses:

— IEC Electropedia: available at http://www.electropedia.org/

— ISO Online browsing platform: available at http://www.iso.org/obp

3.1.1

availability

ability to be in a state to perform as required

[SOURCE: CLC/TS 50600-4-31:2024, 3.1.1]

3.1.2

building entrance facility

facility that provides all necessary services, and which complies with all relevant regulations for the entry of infrastructures or specific services into a building

[SOURCE: EN 50173-1:2018, 3.1.18, modified — “telecommunication cables” was replaced with “infrastructures or specific services”; “and which can enable transmission from outdoor to indoor cable” was deleted; “mechanical and electrical” was deleted]

3.1.3

cabinet

enclosed construction for housing closures and other information technology equipment

[SOURCE: EN 50174-1:2018, 3.1.7]

3.1.4

co-hosting data centre

data centre in which multiple customers are provided with access to network(s), servers and storage equipment on which they operate their own services/applications

Note 1 to entry: Both the information technology equipment and the support infrastructure of the building are provided as a service by the data centre operator.

[SOURCE: EN 50174-2:2018, 3.1.2]

3.1.5

co-location data centre

data centre in which multiple customers locate their own network(s), servers and storage equipment

Note 1 to entry: The support infrastructure of the building (such as power distribution and environmental control) is provided as a service by the data centre operator.

[SOURCE: EN 50174-2:2018, 3.1.3]

3.1.6

computer room space

area within the data centre that accommodates the data processing, data storage and telecommunication equipment that provides the primary function of the data centre

3.1.7

control room space

area within the data centre used to control the operation of the data centre and to act as a central point for all control and monitoring functions

3.1.8

data centre

structure, or group of structures, dedicated to the centralised accommodation, interconnection and operation of information technology and network telecommunications equipment providing data storage, processing and transport services together with all the facilities and infrastructures for power distribution and environmental control together with the necessary levels of resilience and security required to provide the desired service availability

Note 1 to entry: A structure can consist of multiple buildings and/or spaces with specific functions to support the primary function.

Note 2 to entry: The boundaries of the structure or space considered to be the data centre, which includes the information and communication technology equipment and supporting environmental controls, can be defined within a larger structure or building.

3.1.9

demarcation point

point where the operational control or ownership changes

3.1.10

downtime

duration of the time interval for which the item is in a down state

Note 1 to entry: Item can be a device, a functional element or a system.

3.1.11

electrical distribution space

area used for housing facilities to distribute electrical power between the transformer space and electrical spaces within the data centre or elsewhere within the premises or individual buildings within the premises

3.1.12

electrical space

area within the data centre used for housing facilities to deliver and control electrical power to the data centre spaces (including switchboards, batteries, uninterruptible power systems (UPS) etc.)

3.1.13

enterprise data centre

data centre that is operated by an enterprise which has the sole purpose of the delivery and management of services to its employees and customers

[SOURCE: EN 50174-2:2018, 3.1.8]

3.1.14

energy efficiency enablement

ability to measure the energy consumption and to allow calculation and reporting of energy efficiency of the various facilities and infrastructures

3.1.15

facility

spaces and pathways that accommodate a specific infrastructure

3.1.16

failure

<of an item> loss of ability to perform as required

Note 1 to entry: In this context it is irrelevant if the cause was planned or unplanned.

[SOURCE: CLC/TS 50600-4-31:2024, 3.1.8]

3.1.17

fault

inability to perform as required, due to an internal state

Note 1 to entry: Opposite of success, in the context of the expected resilience level (RL) at a specified operation point (OP).

[SOURCE: CLC/TS 50600-4-31:2024, 3.1.10]

3.1.18

functional capability

ability of the data centre (or system or subsystem) to deliver its intended function

3.1.19

functional element

source of supply, device or path

3.1.20

generator space

area used for housing the installation of electrical power supply generation equipment together with control systems, storage of associated fuels or energy conversion equipment

3.1.21

holding space

area within the data centre used for the holding of equipment prior to being brought into service or having been taken out of service

3.1.22

infrastructure

technical systems providing functional capability of the data centre (e.g. power distribution, environmental control and physical security)

3.1.23

main distributor

distributor used to make connections between the main distribution cabling subsystem, network access cabling subsystem and active equipment

[SOURCE: EN 50173-5:2018, 3.1.10]

3.1.24

mechanical space

area that is used for housing mechanical equipment and infrastructure that provides environmental control for the data centre spaces (including chillers and water treatment, air handling and fire suppression systems)

3.1.25

network operating centre

room or workplace to monitor ICT and network equipment and receive alarms from building management systems

3.1.26

network operator data centre

data centre that has the primary purpose of the delivery and management of broadband services to the operators’ customers

[SOURCE: EN 50174-2:2018, 3.1.18]

3.1.27

physical security

measures (combining physical and technological controls), procedures and responsibilities to maintain the desired level of availability for the facilities and infrastructures of the data centres in relation to access control, intrusion and environmental events

3.1.28

planned downtime

period of time during which a system or subsystem does not provide functional capability whilst it undergoes maintenance or is switched off to test the response of a related system or subsystem

3.1.29

premises entrance facility

space that provides all necessary mechanical and electrical services for the entry of cables into the premises

3.1.30

redundancy

<in a system> provision of more than one means for performing a function

Note 1 to entry: In a data centre, redundancy can be achieved by duplication of devices, functional elements, and/or supply paths.

[SOURCE: CLC/TS 50600-4-31:2024, 3.1.27]

3.1.31

reliability

ability to perform as required, without failure, for a given time interval, under given conditions

[SOURCE: CLC/TS 50600-4-31:2024, 3.1.28]

3.1.32

resilience

ability to withstand and reduce the magnitude and/or duration of disruptive events, including the capability to anticipate, absorb, adapt to, and/or rapidly recover from such an event

[SOURCE: CLC/TS 50600-4-31:2024, 3.1.25]

3.1.33

storage space

area where general goods and/or data centre goods to be used in the premises and data centre are stored

3.1.34

system

set of interrelated functional elements considered in a defined context as a whole and separated from their environment

[SOURCE: IEC 60050-151:2001, 151-11-27, modified – Note 1 to entry to Note 4 to entry were deleted]

3.1.35

telecommunications

technology concerned with the transmission, emission, and reception of signs, signals, writings, images, and sounds, by cable, radio, optical, or other electromagnetic systems

Note 1 to entry: The term telecommunications has no legal meaning when used in this document.

[SOURCE: EN 50173-1:2018, 3.1.49]

3.1.36

telecommunications cabling

infrastructure from the telecommunications space(s) to the premises entrance facility

3.1.37

telecommunication equipment

equipment within the data centre that provides telecommunication services

3.1.38

telecommunications space

area which may house demarcation points and telecommunication equipment associated with the building entrance facility

3.1.39

testing space

area within the data centre used for the testing and configuring of equipment prior to being brought into service

Note 1 to entry: Testing space is sometimes called staging area.

3.1.40

transformer space

area used for housing equipment necessary to convert voltage levels and/or provide necessary isolation for the connection to the equipment within the premises or individual buildings within the premises

3.1.41

uninterruptible power system

combination of convertors, switches and energy storage devices (such as batteries), constituting a power system for maintaining continuity of load power in case of input power failure

Note 1 to entry: Continuity of load power occurs when voltage and frequency are within rated steady-state and transient tolerance bands and with distortion and interruptions within the limits specified for the output port. Input power failure occurs when voltage and frequency are outside rated steady-state and transient tolerance bands or with distortion or interruptions outside the limits specified for the UPS.

[SOURCE: EN IEC 62040-1:2019, 3.101]

3.1.42

unplanned downtime

unexpected time taken, following a failure of functional capability, to repair the relevant infrastructure together with the “re-boot” time necessary to recover functional capability following that repair

3.2 Abbreviations

For the purposes of this document the following abbreviations apply:

CFR	cabinet, frame or rack

CRAC	computer room air conditioning (unit)

CRAH	computer room air handler (unit)

EMI	electromagnetic interference

ffs	for further study

ICT	information and communications technology

ITE	information technology equipment

KPI	key performance indicator

MTBF	mean time between failures

MTTR	mean time to repair

NOC	network operating centre

UPS	uninterruptible power system

4 Conformance

For a data centre design to conform to this document:

a) a business risk analysis according to Clause 5 shall be completed;

b) an appropriate Availability Class in 7.2 shall be selected using a business risk analysis in Clause 5;

c) appropriate Protection Classes for the data centre pathways and spaces shall be selected in accordance with 7.3.1;

d) an appropriate energy efficiency enablement level in 7.4 shall be selected;

e) the design process of Clause 8 (or equivalent) shall be applied;

f) the design principles of Clause 9 shall be applied.

NOTE The application of the design process in Clause 8 is not mandatory for an assessment of existing data centres.

5 Business risk analysis

5.1 General

The overall availability of a data centre is a measure of the continuity of its data processing, storage and transport functions. The acceptable level of the overall availability of a data centre is determined by a number of factors including:

a) a business impact analysis (see 5.2) evaluating the cost and/or other consequences associated with a failure of service provision, which depends upon a number of factors including the function and importance of the data centre;

b) externally applied commercial pressures (e.g. insurance cost, market/customer expectations).

There is a link between the availability of the infrastructures specified in EN 50600-2-X standards and the overall availability but it should be recognized that the recovery of intended data processing, storage, and transport functionality following the repair of an infrastructure failure depends on many factors related to the configuration of the hardware and software providing that functionality.

As a result, the role of the infrastructure is to support overall availability objectives but is not the sole factor in their attainment.

The availability of each of the facilities and infrastructures of the data centre required to support the desired overall availability is described by an availability classification (see 7.2). The design of each of the data centre infrastructures shall take account of their impact on overall availability and the costs associated with the predicted downtime associated with failure or planned downtime for maintenance.

The design and physical security of the facilities and infrastructures of the data centre shall be subjected to a risk analysis (see 5.3) which maps identified risk events against the requirements of the availability classification (see 7.2). The availability classification for each infrastructure is described as providing low, medium, high and very high availability. Clause 7 further describes the situations (risk events) for which each infrastructure is protected against failure. A further optimization approach within the given Availability Classes is an availability analysis of the infrastructures in accordance with CLC/TS 50600-4-31.

A business risk analysis identifies the aspects of the facilities and infrastructures that require investment in terms of design improvements to reduce their impact and/or probability of those risk events. Appropriate Availability Classes, as defined in 7.2, shall be selected for each infrastructure to reduce business risks to an acceptable level. The benefits and effectiveness of individual optimizations within a given Availability Class can be demonstrated by resilience analysis as described in CLC/TS 50600-4-31.
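The relationship between an availability figure and the annual downtime it implies (tabulated in Annex A, Table A.1) can be illustrated with a short calculation. The following is a generic sketch for illustration only; the example availability values are assumptions and are not the normative Availability Class boundaries of the EN 50600 series:

```python
# Illustrative only: convert an availability figure into the annual
# downtime it implies. The example values are generic assumptions and
# are not the Availability Class boundaries of the EN 50600 series.
HOURS_PER_YEAR = 365 * 24  # 8 760 h

def annual_downtime_hours(availability: float) -> float:
    """Expected downtime per year for a given availability (0..1)."""
    return (1.0 - availability) * HOURS_PER_YEAR

for a in (0.99, 0.999, 0.9999):
    print(f"availability {a:.2%}: about "
          f"{annual_downtime_hours(a):.2f} h/year downtime")
```

Such a conversion shows why apparently small differences in availability translate into large differences in tolerable annual downtime, which in turn drives the selection of infrastructure redundancy.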

5.2 Business impact analysis

This document does not define methods of analysis for the cost of downtime. Standards such as EN IEC 31010, ISO/TS 22317 or EN ISO 22301 provide useful guidance.

The parameters to be considered within such an analysis will depend upon the purpose of the data centre. Some organizations can assign a monetary value (or range) to loss of service, which may include the following:

a) immediate financial penalties;

b) consequential losses;

c) an assessment of longer-term damage to business reputation, e.g. for an Internet Service Provider or a financial institution.

Although cost is often considered when analysing downtime, other impacts should also be considered. Data centres containing life safety, legal, medical and criminal information may have individually recognized consequences from unplanned downtime.
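Since this document deliberately does not define a costing method (see the references to EN IEC 31010, ISO/TS 22317 and EN ISO 22301 above), the following is only a minimal sketch of how a monetary value could be assigned to loss of service; the linear model and all parameter names are assumptions for illustration:

```python
# Minimal sketch of a downtime cost estimate for a business impact
# analysis. The linear model and all parameter names are assumptions;
# this document does not define a costing method.
def downtime_cost(hours: float,
                  penalty_per_hour: float,
                  consequential_per_hour: float,
                  reputation_lump_sum: float = 0.0) -> float:
    """Estimate the monetary impact of an outage of the given duration:
    immediate penalties plus consequential losses, both assumed linear
    in time, plus an assumed one-off reputational cost."""
    return hours * (penalty_per_hour + consequential_per_hour) \
        + reputation_lump_sum

# Hypothetical example: a 4 h outage with assumed per-hour figures.
print(downtime_cost(4, penalty_per_hour=10_000,
                    consequential_per_hour=5_000,
                    reputation_lump_sum=50_000))
```

A real analysis would replace the linear assumption where appropriate, e.g. with stepped penalties from service level agreements or duration-dependent reputational damage.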

5.3 Risk analysis

This document does not define methods of risk analysis. Standards such as ISO 31000 and EN IEC 31010 provide useful guidance on this topic.

Risk analysis may be used as a management tool, allowing comparison with the acceptable total risk and showing trends resulting from mitigation activity. For the purposes of this document, the risk associated with an event concerning the facilities and infrastructures of the data centre which disrupts the provision of the ICT service of the data centre is defined as event risk, which is a function of impact and probability, where:

a) impact is the magnitude or severity of the adverse incident, expressed numerically or nominally, e.g. as the expected duration of loss of service (availability) resulting from the event;

b) probability is the likelihood that an event will occur.

The impact of risk may be assessed using different units of measure e.g. expected downtime, cost and/or other consequences, safety etc.

The total risk to the functional capability of the data centre is a function of the event risks associated with each facility and infrastructure provided that those risks are quantified on the same basis. If related to the output of the business impact analysis (see 5.2) the financial value and other consequences of the total risk can be estimated.

The risks considered should include external threats that can affect the facilities and infrastructures, in particular those related to the location, which can be geographical (e.g. air traffic, flooding), political (e.g. wars, trouble spots, terror) or related to the neighbourhood (e.g. fire hazards due to filling stations or chemical storage), and which thus influence the likelihood of potential downtime. In addition, potential risks resulting from internal and external attacks by staff or others should be part of the overall risk evaluation.

Impact can be categorized as:

1) low: e.g. loss of non-critical functional capability;

2) medium: e.g. failure of critical system functional elements but no loss of redundancy;

3) high: e.g. loss of critical system redundancy but no loss of functional capability;

4) critical: e.g. loss of critical functional capability or loss of life (which may be extended to address personal injury).

The probability of an event occurring can be categorized in a similar way, that is:

1) very low: e.g. event expected in more than 100 years;

2) low: e.g. event expected in 25 to 100 years;

3) medium: e.g. event expected in 10 to 25 years;

4) high: e.g. event expected within 10 years.

Each risk can be categorized on a risk map as shown in Figure 2; the categorization can differ from project to project. High-risk events inhabit the top right-hand corner of the figure and low-risk events the bottom left-hand corner.

Figure 2 — Example of risk map

Having identified the risk of the possible events associated with data centre facilities and infrastructures, the downtime cost and/or other consequences associated with each event shall be determined to enable design decisions to be made that reduce the risk (by reducing the impact or the probability of the event).
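The placement of events on a risk map such as Figure 2 can be sketched as follows. The category lists mirror those above, but the combination rule (adding the two category indices) and the example events are assumptions for illustration only, since this document does not prescribe a scoring method:

```python
# Sketch of the risk-map categorization of 5.3: each event is placed
# according to its impact and probability (cf. Figure 2). The rank
# combination rule (sum of the two indices) and the example events are
# illustrative assumptions; this document prescribes no scoring method.
IMPACT = ("low", "medium", "high", "critical")
PROBABILITY = ("very low", "low", "medium", "high")

def risk_rank(impact: str, probability: str) -> int:
    """Higher rank = closer to the top right-hand corner of the map."""
    return IMPACT.index(impact) + PROBABILITY.index(probability)

# Hypothetical example events, listed highest risk first.
events = {
    "flooding of generator space": ("critical", "very low"),
    "single UPS battery failure": ("medium", "high"),
}
for name, (imp, prob) in sorted(events.items(),
                                key=lambda kv: -risk_rank(*kv[1])):
    print(f"{name}: impact={imp}, probability={prob}, "
          f"rank={risk_rank(imp, prob)}")
```

In practice a weighted product, a matrix lookup or a monetary expectation value is often used instead of a simple sum; the essential point is that events with both high impact and high probability demand mitigation first.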

6 Data centre design overview

6.1 General

Data centres differ in terms of their purpose e.g. co-hosting data centre, co-location data centre, enterprise data centre, network operator data centre. Data centres can also differ significantly with respect to their physical size ranging from:

a) a data centre in a building housing a small quantity of storage and server equipment to provide information technology services to the occupants of that building, to

b) a data centre housing a large quantity of such equipment providing information technology services via diverse internal and external telecommunications networks and requiring sophisticated power distribution and environmental control facilities, housed in one or more buildings dedicated to ensuring the operation of the data centre.

This clause provides a general design overview for data centres independent of their purpose and their size.

6.2 Spaces and facilities

Figure 3 shows a schematic representation of the spaces required by a large data centre within a building and within premises containing one or more buildings.

The data centre may share certain spaces with the rest of the building including:

a) building entrance facilities;

b) personnel entrance(s);

c) docking/loading bay(s);

d) generator space(s);

e) transformer space(s);

f) electrical distribution space(s);

g) mechanical space(s) accommodating environmental control systems;

h) telecommunications space(s).

The need for the above spaces and facilities within the building depends upon the purpose of both the building and the data centre. Any sharing of these spaces and facilities and the corresponding pathways will depend not only on the size but also on the defined Availability and Protection Classes of the data centre and the functions of the remainder of the building. For example, in buildings housing large data centres, the facilities and spaces supporting the data centre can be dedicated to the data centre with separate spaces being provided for the remainder of the building.

The area within the building designated as a data centre can contain the following spaces:

1) personnel entrance(s);

2) main distributor space(s);

3) computer room space(s) and associated testing space(s);

4) electrical space(s);

5) mechanical space(s) accommodating e.g. environmental control systems;

6) control room space(s) accommodating e.g. the NOC;

7) office space(s);

8) fuel storage;

9) storage space(s) and holding space(s).

Figure 3 — Typical schematic diagram of premises containing a data centre

Within the area of the building designated as a data centre, the need for, and contents of, the spaces depend upon the purpose of the data centre, its anticipated power consumption and the need for environmental control.

The need for segregation of spaces depends on availability and fire protection considerations, requirements for security and upon the need for environmental control.

As examples, a small enterprise data centre can comprise a single room combining the functions of a computer room space and an electrical space without physical segregation, whereas a large data centre can require one or more segregated spaces of each type identified in Figure 3.

7 Classification system for the design of data centre facilities and infrastructures

7.1 General

For the purposes of the EN 50600 series, data centre facilities and infrastructures are designated with respect to:

a) Availability Classes (see 7.2);

b) Protection Classes (see 7.3);

c) resource and energy efficiency enablement levels (see 7.4).

These designations are used in combination to determine the relevant requirements and recommendations for the following facilities and infrastructures:

1) building construction (see EN 50600-2-1);

2) power distribution (see EN 50600-2-2);

3) environmental control (see EN 50600-2-3);

4) telecommunications cabling infrastructure (see EN 50600-2-4);

5) security systems (see EN 50600-2-5).

7.2 Availability

7.2.1 General

Data centres can be single-site or configured to operate across multiple sites.

7.2.2 describes the availability concepts and requirements for a single-site data centre.

7.2.3 describes the use of a multi-site data centre to improve the overall service availability.

Annex B summarizes the availability classification specified in this document.

7.2.2 Single-site data centres

The required availability of the facilities and infrastructures that support the functionality of the data centre is of the utmost significance. The data centre owner/user shall determine the desired availability of the overall set of facilities and infrastructures using business risk analysis and business impact analysis (Clause 5). It is recognized that availability requirements can vary with time of day, week or month.

Different qualitative Availability Classes for the overall set of data centre facilities and infrastructures are defined as shown in Table 1. The availability of the entire data centre depends on the Availability Classes of its individual infrastructures such as power sourcing and distribution, environmental control and security. The requirements for a specific facility or infrastructure of a given Availability Class are specified in the EN 50600-2-X series.

In order for the set of facilities and infrastructures of a data centre to be considered to be of a given Availability Class, the design of each individual facility and infrastructure listed in Table 1 shall meet or exceed that Availability Class.
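The "meet or exceed" rule above can be sketched as a minimal check: the overall Availability Class of the set is limited by its weakest facility or infrastructure. The class values below are illustrative assumptions, not taken from this document.

```python
# Hypothetical per-infrastructure Availability Classes (illustrative only;
# the infrastructure names follow Table 1).
infrastructure_classes = {
    "power supply and distribution": 3,
    "environmental control": 3,
    "telecommunications cabling": 4,
}

def overall_availability_class(classes):
    """The set of facilities and infrastructures is of a given Availability
    Class only if every member meets or exceeds it, i.e. the overall Class
    is the minimum of the individual Classes."""
    return min(classes.values())

print(overall_availability_class(infrastructure_classes))  # prints 3
```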

The provision of higher Availability Classes generally requires greater investment. Additional details about availability are provided in Annex A.

The EN 50600 series defines four classes of availability. Based on the outcome of the business risk analysis in Clause 5 an Availability Class shall be selected for the following infrastructures:

— power supply and distribution;

— environmental control;

— telecommunications cabling.

The availability of the entire data centre depends on the Availability Classes of its infrastructures.

The selection of the Availability Class shall be made based on the following design objectives (for requirements and recommendations specific to each infrastructure see the appropriate part of EN 50600-2-X).

A Class 1 solution - single path - is appropriate where the outcome of the risk assessment deems it acceptable that:

— a single failure in a functional element can result in loss of functional capability;

— planned maintenance can require the load to be shut down.

A Class 2 solution - single path with redundancy - is appropriate where the outcome of the risk assessment deems it necessary that:

— a single failure in a redundant device shall not result in loss of functional capability of that path;

— routine planned maintenance of a redundant device shall not require the load to be shut down.

NOTE Failure of the path or of a non-redundant device can result in unplanned load shutdown and routine maintenance of non-redundant devices can require planned load shutdown.

A Class 3 solution - multiple paths - providing a concurrent repair and operate solution, is appropriate where the outcome of the risk assessment deems it necessary that:

— a failure of a functional element shall not result in loss of functional capability;

— planned maintenance shall not require the load to be shut down;

— for environmental control: although a failure of a path can result in unplanned load shutdown, maintenance routines shall not require planned load shutdown; the passive path acts as the concurrent maintenance enabler and reduces the service recovery time (minimizing the mean downtime) after the failure of a path.

All paths shall be designed to sustain the maximum load.

A Class 4 solution - fault tolerant solution except during maintenance - is appropriate where the outcome of the risk assessment deems it necessary that:

— a failure of a functional element shall not result in loss of functional capability;

— planned maintenance shall not require the load to be shut down;

— for power supply and distribution:

— a failure of one path shall not result in unplanned load shutdown;

— any single event impacting a functional element shall not result in load shutdown;

— for environmental control:

— a failure of one path shall not result in unplanned load shutdown;

— any single event impacting a functional element shall not result in load shutdown.

All paths shall be designed to sustain the maximum load.

Technical solutions supporting different qualitative Availability Classes for the overall set of data centre facilities and infrastructures are shown in Table 1.

Table 1 — Availability Classes and example implementations

Power supply (see EN 50600-2-2):
— Availability Class 1: single path to primary distribution equipment; single source;
— Availability Class 2: single path to primary distribution equipment; redundant sources;
— Availability Class 3: multiple paths to primary distribution equipment; redundant sources;
— Availability Class 4: multiple paths to primary distribution equipment; multiple sources.

Power distribution (see EN 50600-2-2):
— Availability Class 1: single path;
— Availability Class 2: single path with redundancy;
— Availability Class 3: multiple paths; concurrent repair and operate solution;
— Availability Class 4: multiple paths; fault tolerant except during maintenance.

Environmental control (see EN 50600-2-3):
— Availability Class 1: single path;
— Availability Class 2: single path with redundancy;
— Availability Class 3: multiple paths; concurrent repair and operate solution;
— Availability Class 4: multiple paths; fault tolerant except during maintenance.

Telecommunications cabling (see EN 50600-2-4):
— Availability Class 1: single path; direct connections or fixed infrastructure with single access network connection;
— Availability Class 2: single path; fixed infrastructure with multiple access network connections;
— Availability Class 3: multiple paths; fixed infrastructure with diverse pathways and multiple access network connections;
— Availability Class 4: multiple paths; fixed infrastructure with diverse pathways, redundant distribution zones and multiple access network connections.

NOTE 1 Requirements and recommendations for data centre construction that provide the desired Protection Classes to ensure availability of the facilities and infrastructures are addressed in EN 50600-2–1.

NOTE 2 Requirements and recommendations for physical security of data centre spaces and pathways to ensure availability of the facilities and infrastructures are addressed in EN 50600-2–5.

NOTE 3 Paths provide infrastructure input to cabinets/frames/racks (CFR). CFRs need to provide the required service to the hosted ICT equipment according to the given Availability Class.

More information about availability can be found in Annex A.

Additional attention shall be given to the physical security of the facilities and infrastructures as outlined in 7.3, which describes other factors important to the overall availability of the entire data centre.

In addition to the design and installation of more sophisticated technical solutions, the implementation of higher Availability Classes implies the application of effective organisational structures to manage the operation of those technical solutions including, but not limited to:

1) the availability of trained service personnel;

2) storage of spare parts;

3) the establishment of maintenance contracts and service level agreements;

4) rapid access to precise instructions defining the actions and communications required in any case of failure.

For more information about processes and availability management see EN 50600-3-1.

7.2.3 Multi-site data centres

The outcome of the risk analysis can lead to the conclusion that the required overall availability objectives can be achieved in a comparable or even better way by multi-site data centres instead of a single data centre. Multi-site data centres generally require additional operational and ICT service capabilities which are outside the scope of this document.

However, the EN 50600 series provides methods applicable to single site data centres, with the possibility of drawing conclusions about multi-site data centre structures as well. This includes:

a) analytical considerations of aspects of the resilience of an individual data centre, as well as conclusions on the resilience of the multi-site data centre structure, using methods of CLC/TS 50600-4-31;

b) application of KPIs considering the resource efficiency of an individual data centre and the overall comparison of the multi-site data centre structure using EN 50600-4-1 to EN 50600-4-9. KPI averaging over multi-site data centres for reporting shall not be done;

c) application of the maturity model to the individual data centre and the multi-site data centre structure overall comparison using CLC/TS 50600-5-1.

The results of the application of the mentioned documents can contribute significantly to the holistic consideration, decision-making, and optimization of multi-site data centres.

7.3 Physical security

7.3.1 General

Each of the data centre spaces and pathways, independent of the size or purpose of the data centre, is designated as being of a particular Protection Class. There is no concept of having a single Protection Class for a data centre.

The physical security provided for the data centre has an influence on both the probability and impact of risk events (see 5.3) since the objective of physical security is to protect against:

a) unauthorized access (see 7.3.2);

b) intrusion (see 7.3.3);

c) fire events (see 7.3.4);

d) internal environmental events (see 7.3.4);

e) external environmental events (see 7.3.4).

The required Protection Classes for the data centre spaces shall be selected according to EN 50600-2-5 for each of these objectives. The Protection Classes and their functional options shall be coordinated and documented in a physical security concept. Refer to the results of the business impact analyses in Clause 5 and to EN 50600‑2‑5.

7.3.2 Protection against unauthorized access

The areas of the data centre and its surroundings shall be protected against unauthorized access.

Within the data centre, the access restrictions are dependent on the purpose of the data centre (e.g. enterprise vs. co-location) and on the function of the data centre spaces and pathways. The design criteria are based upon an analysis of needs defining appropriate requirements and recommendations.

EN 50600-2-1 specifies requirements and recommendations for the construction of boundaries between spaces and pathways of a given Protection Class.

EN 50600-2-2 specifies the Protection Classes applicable to spaces accommodating power supply and distribution systems.

EN 50600-2-3 specifies the Protection Classes applicable to spaces accommodating environmental control systems.

EN 50600-2-4 specifies the Protection Classes applicable to spaces accommodating telecommunications infrastructure.

EN 50600-2-5 specifies the requirements of and provides recommendations for active and passive measures in support of the Protection Classes for unauthorized access.

7.3.3 Protection against intrusion

The areas of the data centre and its surroundings shall be protected against intrusion.

Within the data centre, the intrusion measures are dependent on the purpose of the data centre (e.g. enterprise vs. co-location) and on the function of the data centre spaces and pathways. The design criteria are based upon an analysis of needs defining appropriate requirements and recommendations.

For a particular Protection Class, the intrusion delay time provided by an intrusion barrier should be longer than the time it takes to stop the intruder. If the intrusion delay is created by multiple barriers, the total intrusion delay time is the sum of the individual delay times.
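The additive delay rule can be sketched as follows; the barrier delays and response time are illustrative assumptions, not values from this document.

```python
def total_intrusion_delay(barrier_delays_s):
    """Total intrusion delay time: the sum of the delay times of the
    individual barriers an intruder has to defeat."""
    return sum(barrier_delays_s)

# Illustrative (assumed) figures in seconds: perimeter fence, building door,
# room door.
delays_s = [120, 180, 60]
response_time_s = 300  # assumed time needed to stop the intruder

# For a given Protection Class the total delay should exceed the response time.
print(total_intrusion_delay(delays_s) > response_time_s)  # prints True
```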

Intrusion related requirements and recommendations for the construction of data centres are the subject of EN 50600-2-1.

Intrusion related requirements and recommendations for active and passive measures in support of the Protection Classes are the subject of EN 50600-2-5.

7.3.4 Protection against environmental events

The areas of the data centre and its surroundings shall be protected against environmental events.

Protection against internal and external environmental events includes all measures required to ensure the desired Availability Class for the facilities and infrastructures of the data centre including building construction, protection systems and organisational measures.

Internal environmental events include overheating, fire, electrostatic discharge, water etc. impacting the function of the data centre infrastructures.

External environmental events include fire, flood, earthquake, explosion and other forms of natural disaster (lightning and other electromagnetic effects).

Under optimal conditions, the risks posed by external environmental events are mitigated by the selection of the data centre location (see EN 50600-2-1). However, in most situations alternative design solutions need to be applied to the data centre facilities and infrastructures to provide them with an acceptable degree of protection against such events.

EN 50600-2-2 specifies the Protection Classes applicable to spaces accommodating power supply and distribution systems.

EN 50600-2-3 specifies the Protection Classes applicable to spaces accommodating environmental control systems.

EN 50600-2-5 specifies the requirements of and provides recommendations for security and protection systems in support of the Protection Classes. EN 50600-2-1 specifies requirements and recommendations for the:

a) construction of boundaries between spaces and pathways of a given Protection Class to minimize the impact of internal environmental events;

b) location and construction of data centres to mitigate external environmental events.

7.4 Resource and energy efficiency enablement

7.4.1 General

The ability to measure the energy consumption of the various facilities and infrastructures supporting the operation of a data centre, and to allow calculation and reporting of resource management indicators (e.g. energy efficiency, source diversity and mix), is critical to the achievement of any related objectives.

The data centre owner/user shall define the appropriate energy efficiency enablement level prior to the data centre design.

The desired energy efficiency enablement level can be determined by:

a) the application of resource and energy management processes according to EN 50600-3-1;

b) the selection and application of one or more appropriate KPIs for resource management according to the EN 50600-4-X series;

c) the application of the maturity model for energy management and environmental sustainability according to CLC/TS 50600-5-1;

d) external regulatory or legislative requirements;

e) owner and user defined rules;

f) an operating cost analysis.

Three levels of granularity for the measurement are defined:

1) Level 1: a measurement regime providing simple global information for the data centre as a whole;

2) Level 2: a measurement regime providing detailed information for specific facilities and infrastructures within the data centre;

3) Level 3: a measurement regime providing granular data for systems within the spaces and pathways of the data centre.

Moving from one granularity level to a higher level requires an increased level of measurement/monitoring infrastructure.

7.4.2 Power distribution system

EN 50600-2-2 describes the power distribution infrastructure for data centres and defines the requirements and recommendations for the measurement/monitoring infrastructures of the power distribution systems in support of the desired granularity level.

7.4.3 Environmental control

EN 50600-2-3 describes the environmental control infrastructure for data centres and defines the requirements and recommendations for the measurement/monitoring infrastructures of the environmental control systems in support of the desired granularity level.

7.4.4 Operational processes and KPIs

EN 50600-3-1 describes processes and KPIs for resource and energy management which analyse data provided by monitoring of power distribution and environmental control infrastructures. Standards in the EN 50600-4-X series specify the detailed requirements for this type of KPI.

7.4.5 Maturity Level for energy and resource efficiency

The data centre owner/user shall select the appropriate maturity level of CLC/TS 50600-5-1. Design and operation of the data centre shall comply with the requirements of the selected level.

8 Design and implementation process

8.1 General

Effective data centre design and implementation processes require the splitting of the project into phases. Each phase has its own inputs and outputs. The phases follow a sequential timeline resulting in the final project plan and leading to the issuing of a contract for the installation of the data centre, enabling the operational phase to commence. Phases can be executed several times if required to achieve the agreed or defined objectives. Figure 4 lists all phases in their sequential order including phase descriptions and responsibilities.

Data centre owners should be aware of the impact of operational strategy on data centre availability, security concept, data centre management and operation. An operational concept should be discussed and decided to ensure that room layouts and Protection Class boundaries provide the necessary function for protection against unauthorized access.

The operational concept should also describe process interfaces between owner, operator, customers and suppliers. Processes, roles and responsibilities shall be defined prior to the beginning of operation. Operational staff shall be instructed on the technical infrastructure and trained on operational procedures at the latest during the acceptance test phase (see EN 50600-3-1 for more information about acceptance tests).

Figure 4 — Design phases

At appropriate points before final approval (phase 7), one or more assessments shall verify that the design, the operation and management processes and the KPIs meet the project objectives.

8.2 Design phases

8.2.1 Phase 1 - Strategy

This phase is for information collection in order to define the project objectives. The following information is required:

a) business risk analysis;

b) ICT strategy;

c) corporate data centre strategy;

d) environmental sustainability strategy;

e) general customer requirements/expectations;

f) analysis of current load/demand/costs;

g) expected infrastructure technology roadmap;

h) “forecast” of future facility and infrastructure demand (space, power and locations);

i) certifications;

j) operational strategy.

8.2.2 Phase 2 - Objectives

This phase is used by the owner to convert the strategy into objectives. The results are the following:

a) correlation with corporate data centre strategy;

b) design benchmarks (e.g. size/performance levels/budgets/certifications);

c) design objectives for resource and energy efficiency;

d) project risk analysis (internal and external);

e) selection of location options;

f) definition of work flow;

g) timelines and impact of delays;

h) general floor plan and materials catalogue.

8.2.3 Phase 3 - System specifications

This phase defines the target specifications for all infrastructures with the following output:

a) target specification for construction;

b) target specification for power supply (sources);

c) target specification for power distribution;

d) target specification for environmental control;

e) target specifications for telecommunications infrastructure;

f) target specification for physical security;

g) target specification for fire detection and firefighting;

h) target specification for data centre operation and management.

8.2.4 Phase 4 - Design proposal

The designer uses the target specifications and objectives to create a design proposal for all infrastructures offering several options to the owner. The design proposal contains:

a) design proposal for construction;

b) design proposal for power supply (sources);

c) design proposal for power distribution;

d) design proposal for environmental control;

e) design proposal for telecommunications infrastructure;

f) design proposal for physical security;

g) design proposal for fire detection and firefighting;

h) design proposal for infrastructural measuring system for resource and energy efficiency enablement;

i) design proposal for data centre operation and management;

j) cost models and timelines for proposed options;

k) final location selection.

8.2.5 Phase 5 - Decision

The owner selects the design from the available design options and cost models (supported by designer).

8.2.6 Phase 6 - Functional design

The designer converts the owner’s selection into functional design. The functional design contains:

a) functional design for construction;

b) functional design for power supply (sources);

c) functional design for power distribution;

d) functional design for environmental control;

e) functional design for telecommunications infrastructure;

f) functional design for physical security;

g) functional design for fire detection and firefighting;

h) functional design for infrastructural measuring system for resource and energy efficiency enablement;

i) functional design for data centre operation and management;

j) energy efficiency under partial and full load;

k) cost model “fine tuning” for selected option.

8.2.7 Phase 7 - Approval

The owner approves the functional design and cost models taking into account the risks and scheduling constraints of the project.

8.2.8 Phase 8 - Final design and project plan

The designer defines volume and/or pieces for all the infrastructures approved under 8.2.7. Furthermore, the project workflow and all project milestones and timelines are defined and subject to change control, resulting in an overall implementation plan.

8.2.9 Phase 9 - Contract

The owner (with support of the designer/consultant) issues a tender and selects the contractor(s).

8.2.10 Phase 10 – Construction and acceptance testing

The owner and/or the designer supervise(s) the construction over the entire construction time to ensure that the data centre meets the specified design requirements as well as the conditions of applicable public permits. Acceptance verification (testing and commissioning) for all infrastructures and for the entire data centre is executed before the data centre is put into service. Further details on testing and commissioning can be found in EN 50600-3-1.

8.2.11 Phase 11 - Operation

Handover to the owner for operation. For further details see EN 50600-3-1.

9 Design principles to support energy efficiency, resource efficiency and environmental sustainability

9.1 Design reference documentation

The outcome of the steps of Clauses 5 to 8 shall be collected in a design reference document which contains as a minimum:

a) the outcome of business impact analysis in accordance with 5.2;

b) the outcome of risk analysis in accordance with 5.3;

c) the description of a base data centre strategy and the Availability Class selected in accordance with 7.2 using a business risk analysis;

d) the application of physical security concept in accordance with 7.3;

e) the selection of energy efficiency enablement level in accordance with 7.4;

f) the operational concept in accordance with 8.1.

9.2 Design principles to support energy efficiency, resource efficiency and environmental sustainability

The design of data centres shall consider energy efficiency and wider aspects of resource efficiency and environmental sustainability as a principal objective independent of the Availability Class to be applied.

The EN 50600-4-X series provides a series of Key Performance Indicators (KPIs), some of which can be employed at the design stage to assess the predicted energy efficiency. These design-stage KPIs shall be calculated under partial and full load conditions.

Recommended practices about energy efficient data centre design and more information about resource and energy efficiency in data centres can be found in the following documents:

a) CLC/TS 50600-5-1;

b) ETSI EN 305 174-2.

9.3 Design principles for EMI

The design of data centres shall consider EMI as a principal objective. Additional information can be found in EN 50600-2-1 and EN 50600-2-5.

9.4 Design principles to support operational excellence

The design of data centres shall consider operational excellence as a principal objective independent of the Availability Class to be applied.

The design of data centres should enable the provision of management and operational information required by EN 50600-3-1.

9.5 Design principles to improve resilience

The data centre industry places an emphasis on the importance of availability of the ICT applications, ICT systems supporting the applications, and facility systems supporting the ICT systems.

In addition to availability, other aspects of resilience shall be considered. These include reliability, failure rate, fault tolerance, recoverability as well as operation during planned maintenance or unplanned restrictions.

Detailed information on methods for the quantification of resilience criteria using KPIs can be found in CLC/TS 50600-4-31.


Annex A
(informative)

Availability classes, resilience and efficiency

A.1 General

To target the quality of the service experienced by the end user, this document defines four Availability Classes (see Clause 7). The progression from Availability Class 1 (low availability) to Availability Class 4 (very high availability) represents the increasing ability of a data centre to function as intended, i.e. to be in the up state.

In order to meet availability as well as efficiency targets, detailed requirements should be taken into account in all phases of a data centre life cycle. For physical reasons, these requirements are partly divergent.

For example, a data centre of Availability Class 4 can require higher investment costs and higher resource consumption resulting in higher operating costs, compared to a data centre of a lower Availability Class. On the other hand, costs do not increase linearly as a result of unplanned down states, see Clause A.2.

The application of analytical methods during the phases of data centre design, planning and operation provides a more holistic view, especially when optimizing between opposing requirements. Clause A.3 shows why an availability calculation without further qualification is not sufficient. For this reason, the concept of resilience KPIs was introduced in CLC/TS 50600-4-31. Clause A.4 gives an overview of the different criteria of resilience, such as availability, reliability, failure rate, fault tolerance, availability tolerance and service level agreements (SLAs).

With respect to different items of the data centre infrastructure, Annex B provides information on availability and particular resilience criteria considering the Availability Classes of this document.

A.2 Cost of data centre down states

The cost of unplanned data centre down states is not linear with the total duration of the unplanned fault or combined unplanned faults. Figure A.1 shows that each down state has an initial fixed cost, which is independent of the duration of the fault, plus a cost per minute for the duration of the fault.

The costs will include tangible financial costs such as lost revenue or labour and equipment costs to restore services, and intangible costs such as damaged reputation or lost customers. The actual fixed and variable costs will vary significantly depending on the critical infrastructure that experienced an unplanned down state and the magnitude of data centre ICT services that are impacted.

Key

t is the duration;

C is the cost;

a is the initial fixed losses;

b is the time-dependent losses.

Figure A.1 — Unplanned disruptions — duration vs. cost
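The cost model of Figure A.1 can be sketched as follows; the monetary figures are illustrative assumptions, not values from this document.

```python
def down_state_cost(duration_min, fixed_cost, cost_per_min):
    """Cost of one unplanned down state per Figure A.1: C = a + b * t,
    where a is the initial fixed losses and b the time-dependent losses."""
    return fixed_cost + cost_per_min * duration_min

# Illustrative (assumed) figures: the same total downtime of 60 min costs far
# more when split over many events, because the fixed losses 'a' are
# incurred at every event.
one_long = down_state_cost(60, fixed_cost=10_000, cost_per_min=500)
many_short = 60 * down_state_cost(1, fixed_cost=10_000, cost_per_min=500)
print(one_long, many_short)  # prints 40000 630000
```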

A.3 Availability and downtime

The primary drivers of data centre design considerations, operation policies and procedures are to reduce unplanned down states in the data centre’s ICT services. In a perfect scenario there would be no unplanned down states during the entire lifetime of the data centre. Depending on the level of redundancy of the critical infrastructure (compute, storage and network hardware, power, cooling, space, etc.) supporting the data centre’s ICT services, the failure of a single component or system does not necessarily lead to the fault of the ICT services.

Minimum requirements and maximum acceptable downtimes are typically defined using service level agreements (SLAs). The fulfilment of an SLA with regard to availability can be verified by calculating the past availability Ap, as shown in Formula (A.1):

Ap = tup / (tup + tdown)   (A.1)

where

tup is the measured uptime;

tdown is the measured downtime.
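Formula (A.1) and the conversion from an availability figure to annual downtime (cf. Table A.1) can be sketched as follows:

```python
def past_availability(uptime_h, downtime_h):
    """Formula (A.1): Ap = t_up / (t_up + t_down)."""
    return uptime_h / (uptime_h + downtime_h)

def annual_downtime_min(availability):
    """Annual downtime implied by an availability figure,
    based on 8 760 h per year (cf. Table A.1)."""
    return (1.0 - availability) * 8760 * 60

print(round(annual_downtime_min(0.9999), 1))  # prints 52.6 (4-nines)
```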

Availability considerations are, in general, expressed in terms of the “number of 9s” (e.g. 4-nines, 5-nines). Table A.1 shows the relationship between “9s” and annual downtime.

Table A.1 — Availability and annual downtime

Availability A    Common reference    Downtime (based on 8 760 h per year)
90 %              1-nine              36,5 days
99 %              2-nines             3,65 days
99,9 %            3-nines             8,76 h
99,99 %           4-nines             52,6 min
99,999 %          5-nines             5,3 min
99,9999 %         6-nines             31,5 s

Table A.1 in connection with Figure A.1 gives examples that strictly using availability, as the only key metric, is of limited significance:

a) The cost of one 60 min disruption would exceed the cost of 60 disruptions of 1 min duration, see the example illustrated in Figure A.1.

b) The fault of the power supply to ICT equipment of more than 20 ms will typically result in a shutdown of the equipment. Design considerations of the power system (EN 50600-2-2) need to address events within a critical time range to guarantee high availability by preventing against unexpected disruptions. By comparison, the environmental control system (EN 50600-2-3) would typically tolerate failures of a minute (or multi-minute) range without any effect on the availability of the ICT equipment and services within the data centre.

c) In practice, the time interval up to or between unplanned infrastructure faults, particularly in the case of higher Availability Classes 3 and 4, can only be tolerated within accepted limits. Such limits may be defined for longer time spans than one year (e.g. 5 years, 10 years etc.).

d) In the case of an infrastructure fault, the time until recovery could result in an overall downtime of the data centre services of several hours or even days.

The limitations of considering availability alone can be addressed by supplementary resilience considerations.

A.2 Resilience

A.2.1 General

By definition, resilience covers the ability to withstand, and to reduce the magnitude and/or duration of, disruptive events, including the capability to anticipate, absorb, adapt to, and/or rapidly recover from such an event. This definition covers multiple aspects, such as availability, reliability, failure rate, fault tolerance and more.

A brief overview of the KPIs for several criteria of resilience is given below. For the mathematical definitions of the KPIs, the underlying metrics, applicable methods, calculation examples and dependability data, refer to CLC/TS 50600-4-31, which defines KPIs for resilience.

A.2.2 Reliability and failure rate

Reliability is a probability value to quantify the ability of an item to perform as required, without failure, for a given time interval.

For instance, the calculation of the inherent reliability requires a probability distribution (typically the exponential function), a time interval (typically on a yearly basis), and the metric Mean Time Between Failures (MTBF).

The inherent failure rate is the reciprocal of the MTBF. In practical applications, the failure rate is often specified alongside the reliability for better interpretation.
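A minimal sketch of this relationship, assuming the exponential distribution mentioned above and a hypothetical MTBF value:

```python
# Inherent reliability over a time interval from the MTBF, assuming a
# constant failure rate (exponential distribution): R(t) = exp(-t / MTBF).

import math

def reliability(t_h: float, mtbf_h: float) -> float:
    """R(t) for a constant failure rate lambda = 1 / MTBF."""
    return math.exp(-t_h / mtbf_h)

mtbf = 500_000.0   # hypothetical MTBF in hours
lam = 1.0 / mtbf   # inherent failure rate, the reciprocal of the MTBF
r_year = reliability(8760.0, mtbf)
print(f"lambda = {lam:.2e} 1/h, R(1 year) = {r_year:.4f}")
```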

A.2.3 Inherent availability

The inherent availability is the steady-state probability that a component or system is in an up state. It considers only the downtime for the repair of failures under ideal conditions, without logistics time, preventive maintenance, etc.

In addition to other KPIs, the inherent availability is often used to evaluate data centre infrastructures in the planning and design phase. In practical scenarios, an inherent availability of at least 5-nines, i.e. > 99,999 % (or 0,999 99), is typically required for infrastructures of Availability Class 3, including power supply, power distribution and environmental control.

A.2.4 Operational availability

In comparison to the inherent availability, the operational availability considers maintenance preparation, logistics time, procurement of replacement parts, configuration, reprogramming, testing of software, availability of personnel to respond, system restart/reboot and reestablishment of network services and the software stack to return all systems to the operational state they were in prior to the downtime event.

As a consequence, the operational availability is lower than the inherent availability. In practice, the difference from the inherent availability can amount to one order of magnitude or even more.
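The difference can be sketched numerically as follows; the MTBF, repair time (MTTR) and total mean downtime (MDT) values are hypothetical, and the formula A = MTBF / (MTBF + downtime) is the common steady-state approximation:

```python
# Inherent vs. operational availability. The inherent value uses only the
# repair time (MTTR); the operational value uses the full mean downtime
# (MDT) including logistics, restart, service re-establishment, etc.

def availability(mtbf_h: float, downtime_h: float) -> float:
    """Steady-state availability A = MTBF / (MTBF + mean downtime)."""
    return mtbf_h / (mtbf_h + downtime_h)

mtbf = 500_000.0   # hypothetical mean time between failures, hours
mttr = 4.0         # hypothetical repair time under ideal conditions, hours
mdt = 48.0         # hypothetical total downtime per event, hours

a_inherent = availability(mtbf, mttr)
a_operational = availability(mtbf, mdt)
print(f"A_inherent    = {a_inherent:.6f}")
print(f"A_operational = {a_operational:.6f}")
# Here the unavailability (1 - A) grows by roughly an order of magnitude.
```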

A.2.5 Fault tolerance

Fault tolerance quantifies the capability to anticipate, absorb, and adapt to disruptive events. The basic KPI of fault tolerance is the number of Single Points of Failure (SPoF).

Data centres of Availability Class 4 are deemed “fully fault tolerant”, see Annex B. Data centres of Availability Class 3 can be deemed “partly fault tolerant”, because only the power supply is required to be “fault tolerant”.

For a particular part of the infrastructure to be considered fault tolerant, its number of SPoF shall equal zero.

Furthermore, the power supply of data centres of Availability Class 4 is deemed “fault tolerant” also during maintenance. To assess an infrastructural item with respect to two simultaneous failures, such as the power supply of Availability Class 4, the number of Double Points of Failure (DPoF) is crucial.

Note that SPoF and DPoF considerations do not take into account whether the cause of failures is planned or unplanned.
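The SPoF and DPoF counts can be sketched on a simplified, hypothetical power-path model: a component is a SPoF if its failure disconnects every path from the sources to the load, and a DPoF is a pair of components whose joint failure does so. All component names below are illustrative, not taken from the standard:

```python
# Counting SPoF and DPoF in a hypothetical redundant power topology.

from itertools import combinations

# Hypothetical adjacency: component -> downstream components; "LOAD" is the sink.
topology = {
    "GRID": ["UPS-A", "UPS-B"],
    "GEN": ["UPS-A", "UPS-B"],
    "UPS-A": ["PDU-A"],
    "UPS-B": ["PDU-B"],
    "PDU-A": ["LOAD"],
    "PDU-B": ["LOAD"],
}
sources = ["GRID", "GEN"]  # redundant supply sources

def reaches_load(failed: set) -> bool:
    """Depth-first search from the sources to LOAD, skipping failed items."""
    stack = [s for s in sources if s not in failed]
    seen = set()
    while stack:
        node = stack.pop()
        if node == "LOAD":
            return True
        if node in seen or node in failed:
            continue
        seen.add(node)
        stack.extend(topology.get(node, []))
    return False

components = list(topology)
spof = [c for c in components if not reaches_load({c})]
dpof = [p for p in combinations(components, 2) if not reaches_load(set(p))]
print("SPoF:", spof)                    # empty -> fault tolerant to single failures
print("Number of DPoF pairs:", len(dpof))
```

In this sketch every component is duplicated, so the SPoF count is zero, while pairs such as both UPS units remain DPoF.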

A.2.6 Availability tolerance

The KPIs single point of reduced availability (SPoRA) and double point of reduced availability (DPoRA) consider availability and infrastructural characteristics in combination. SPoRA and DPoRA indicate, respectively, the number of items (item pairs) whose failure leads to the availability falling below a certain limit.

SPoRA and DPoRA are helpful for deeper infrastructural design comparison, design optimization as well as for SLA definition.
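The SPoRA idea can be sketched for a hypothetical series system of 1+1 redundant pairs: an item counts towards the SPoRA if, with one unit of its pair failed, the remaining system availability falls below an agreed limit. The group names and availability figures below are illustrative assumptions:

```python
# SPoRA sketch: items whose single failure pushes the steady-state system
# availability below a hypothetical limit.

# Each group is a 1+1 redundant pair of identical, independent units with
# the given (hypothetical) per-unit availability.
groups = {"UPS": 0.9995, "PDU": 0.9999, "CRAC": 0.999}

def pair_availability(a: float) -> float:
    """Availability of a parallel pair of independent units."""
    return 1.0 - (1.0 - a) ** 2

def system_availability(failed_group=None) -> float:
    """Series product over groups; a failed group runs on a single unit."""
    total = 1.0
    for name, a in groups.items():
        total *= a if name == failed_group else pair_availability(a)
    return total

limit = 0.9995  # hypothetical agreed availability limit
spora = [g for g in groups if system_availability(g) < limit]
print("Baseline availability:", round(system_availability(), 7))
print("SPoRA items:", spora)
```

Here the baseline stays above the limit, but losing one UPS or one CRAC unit already drops the system below it, so those items count towards the SPoRA.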

A.2.7 Service level agreements (SLAs)

SLAs serve to define certain requirements for the data centre. With regard to the power supply, power distribution and environmental control infrastructures, the following should be considered:

a) availability requirements;

b) maximum number of single points of failure (SPoF);

c) reporting time interval (years);

d) maximum accepted number of faults per reporting interval;

e) maximum accepted downtime per service violation (hours).

Due to the long lifetime of data centres, SLA definitions should also take into account the need for planned maintenance events of limited duration.
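A minimal sketch of how the SLA parameters a) to e) above interact; all parameter values are hypothetical, and the 8 760 h year matches Table A.1:

```python
# Downtime budget implied by hypothetical SLA parameters.

HOURS_PER_YEAR = 8760.0

def downtime_budget_hours(availability_req: float, reporting_years: int) -> float:
    """Maximum total downtime permitted over the reporting interval."""
    return (1.0 - availability_req) * HOURS_PER_YEAR * reporting_years

availability_req = 0.9999   # a) availability requirement (4-nines)
reporting_years = 5         # c) reporting time interval, years
max_faults = 3              # d) maximum accepted number of faults per interval

budget = downtime_budget_hours(availability_req, reporting_years)
per_fault = budget / max_faults  # e) implied maximum downtime per fault
print(f"Downtime budget over {reporting_years} years: {budget:.2f} h")
print(f"Implied maximum downtime per fault: {per_fault:.2f} h")
```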


Annex B
(informative)

Availability and resilience criteria

Table B.1 and Table B.2 give more details about the availability classification in relation to other resilience criteria of this document.

Table B.1 — Summary of availability classification for power supply, power distribution and environmental control

Infrastructure                               Class 1   Class 2   Class 3   Class 4

EN 50600-2-2 Power supply
  Availability                               Low       Medium    High      Very high
  Redundant sources                          N         Y         Y         Y
  Protected against source failure           N         Y         Y         Y
  Redundant path to primary distribution     N         N         Y         Y
  Protected against path failure             N         N         Y         Y
  Compartmentalization                       N         N         N         Y
  Protected against single device failure    N         Y         Y         Y
  Load operation during maintenance          N         N a       Y         Y
  Fault tolerant                             N         N         Y b       Y

EN 50600-2-2 Power distribution
  Availability                               Low       Medium    High      Very high
  Redundant path                             N         N         Y         Y
  Protected against path failure             N         N         Y         Y
  Compartmentalization                       N         N         N         Y
  Protected against single device failure    N         Y         Y         Y
  Load operation during maintenance          N         N a       Y         Y
  Fault tolerant                             N         N         N         Y b

EN 50600-2-3 Environmental control
  Availability                               Low       Medium    High      Very high
  Redundant source                           N         N         Y         Y
  Redundant path                             N         N         Y         Y
  Protected against path failure             N         N         Y         Y
  Compartmentalization                       N         N         N         Y
  Protected against single device failure    N         Y         Y         Y
  Load operation during maintenance          N         N a       Y         Y
  Fault tolerant                             N         N         N         Y b

a Depending on the device being maintained.

b Except during maintenance.

Table B.2 — Summary of availability classification for telecommunications cabling

Infrastructure                                              Class 1   Class 2   Class 3   Class 4

EN 50600-2-4 Telecommunications service/campus supply
  Availability                                              Low       Medium    High      Very high
  Redundant sources                                         N         Y         Y         Y
  Protected against source failure                          N         Y         Y         Y
  Redundant path to MDA                                     N         N         Y         Y
  Protected against path failure                            N         N         Y         Y
  Compartmentalization                                      N         N         Y a       Y a
  Protected against single device failure                   N         Y         Y         Y
  Load operation during device maintenance                  N         Y         Y         Y
  Load operation during cabling maintenance                 N         N         Y         Y
  Fault tolerant                                            N         N         N         Y

EN 50600-2-4 Network distribution cabling infrastructure to and within computer room space
  Availability                                              Low       Medium    High      Very high
  Redundant paths                                           N         N         Y         Y
  Protected against path failure                            N         N         Y         Y
  Compartmentalization                                      N         N         N         Y
  Protected against single device failure                   N         N         Y         Y
  Protected against distributor/distribution area failure   N         N         N         Y
  Load operation during maintenance                         N         N a       Y         Y
  Fault tolerant                                            N         N         N         Y b

a Equipment room and building entrance facility.

b Except during maintenance.

Bibliography

EN IEC 31010, Risk management - Risk assessment techniques

EN 50173-1:2018, Information technology - Generic cabling systems - Part 1: General requirements

EN 50173-5:2018, Information technology - Generic cabling systems - Part 5: Data centre spaces

EN 50174-1:2018, Information technology - Cabling installation - Part 1: Installation specification and quality assurance

EN 50174-2:2018, Information technology - Cabling installation - Part 2: Installation planning and practices inside buildings

EN 50600-2-1, Information technology - Data centre facilities and infrastructures - Part 2-1: Building construction

EN 50600-2-2, Information technology - Data centre facilities and infrastructures - Part 2-2: Power supply and distribution

EN 50600-2-3, Information technology - Data centre facilities and infrastructures - Part 2-3: Environmental control

EN 50600-2-4, Information technology - Data centre facilities and infrastructures - Part 2-4: Telecommunications cabling infrastructure

EN 50600-2-5, Information technology - Data centre facilities and infrastructures - Part 2-5: Security systems

EN 50600-3-1, Information technology - Data centre facilities and infrastructures - Part 3-1: Management and operational information

EN 50600-4 (all parts), Information technology – Data centre facilities and infrastructures

EN IEC 62040-1:2019, Uninterruptible power systems (UPS) - Part 1: Safety requirements

EN ISO 22301, Security and resilience - Business continuity management systems - Requirements (ISO 22301)

IEC 60050-151, International Electrotechnical Vocabulary – Part 151: Electrical and magnetic devices

ISO/TS 22317, Security and resilience — Business continuity management systems — Guidelines for business impact analysis

ISO 31000, Risk management — Guidelines

CLC/TS 50600-4-31, Information technology - Data centre facilities and infrastructures - Part 4-31: Key performance indicators for Resilience

ETSI EN 305-174-2, Broadband Deployment and Lifecycle Resource Management; Part 2: ICT Sites
