Tier 1 & 2 Data Center Cooling System Design





The relentless spread of internet connectivity and smart devices into everyday life is arguably the most impressive change of the last two decades. According to the Internet World Stats update of June 30, 2018, 4.2 billion of the 7.6 billion global population has access to the internet. Penetration rates are highest in North America (95% of the regional population) and Europe (85%) and lowest in Africa (36%), where they are expected to rise sharply within the next decade.

Music and video streaming is the main data load on the internet, running on a 24/7 basis. All of this data is stored and distributed through server farms around the world, more widely known as Data Centers (DC). ASHRAE [1] defines data center facilities as:

Datacom (data processing and telecommunications) facilities which are predominantly populated with computers, networking equipment, electronic equipment and peripherals.

The equipment installed within a data center (DC) serves mission-critical applications and has special environmental requirements. These requirements create two main operational needs that affect the HVAC design:

  • The highest possible facility availability on a 24/7 basis, in order to reliably serve mission-critical applications.
  • Continuous cooling, to maintain the specified temperature and humidity conditions and avoid overheating and thermal failure of the computer equipment (a simple compliance check is sketched after this list).
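
To make the second point concrete, here is a minimal sketch that checks a measured IT inlet air condition against a simplified version of the ASHRAE recommended thermal envelope (roughly 18-27°C dry bulb and up to about 60% relative humidity). The limits and function names are illustrative assumptions, not a substitute for the full ASHRAE thermal guidelines.

```python
# Minimal sketch: check IT inlet air against a simplified ASHRAE
# recommended envelope (approx. 18-27 C dry bulb, <= 60% RH).
RECOMMENDED_DB_RANGE_C = (18.0, 27.0)  # dry bulb limits - simplified
RECOMMENDED_RH_MAX = 60.0              # relative humidity cap - simplified

def within_recommended_envelope(dry_bulb_c: float, rh_percent: float) -> bool:
    """Return True if the inlet condition lies inside the simplified envelope."""
    low, high = RECOMMENDED_DB_RANGE_C
    return low <= dry_bulb_c <= high and rh_percent <= RECOMMENDED_RH_MAX

print(within_recommended_envelope(24.5, 45.0))  # True - within envelope
print(within_recommended_envelope(29.0, 45.0))  # False - too warm at IT inlet
```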


The HVAC design of data centers that must meet the above requirements can be managed by ensuring that design, construction and operation comply with one of the most widely recognized industry standards: the Uptime Institute (UI) Tier classification.

The Tier classification of a data center can take one of four ratings: Tier I, Tier II, Tier III or Tier IV.
A data center carries a single Tier class, which reflects criteria differentiating four classifications of site infrastructure topology based on increasing levels (from I to IV) of redundant capacity components and distribution paths.

The scope of this article is to give the HVAC design engineer initial guidelines on how the topology of the DC cooling system should be structured to comply with the UI Tier standard requirements. The standard of reference is the Tier Standard: Topology [2], which can be requested for download from the Uptime Institute.

Carry on reading this post if you are interested in Tier I or Tier II data center requirements, or see the companion posts on Tier III Cooling System design and Tier IV Cooling System design respectively.


Reference Standards and regulations


Guidelines, requirements and design criteria used below are in accordance with the following references:
  • [1] ASHRAE – Design Considerations for Datacom Equipment Centers, 2nd ed., 2009,
  • [2] Uptime Institute – Data Center Site Infrastructure, Tier Standard: Topology, 2018.


Tier I - Design Criteria


According to Tier Standard: Topology [2] / clause 2.1.1, the fundamental requirements for a Tier I - Basic Site Infrastructure facility are:

a) A Tier I basic data center has non-redundant capacity components and a single non-redundant distribution path serving the critical environment. Tier I infrastructure has a dedicated space for IT systems; a UPS; dedicated cooling equipment; and on-site power production (e.g. engine generator) to protect IT functions from extended power outages.

b) Twelve hours of on-site fuel storage for on-site power production.
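
As a rough illustration of this requirement, the sketch below estimates the minimum on-site fuel volume from an assumed generator consumption rate. The consumption figure and helper name are hypothetical; actual sizing must use the installed generator's fuel consumption curve at 'N' load.

```python
# Minimal sketch: 12-hour on-site fuel storage (illustrative figures only).
GENERATOR_FUEL_RATE_LPH = 220.0  # litres/hour at full load - assumed value
REQUIRED_AUTONOMY_H = 12.0       # twelve hours of on-site fuel storage

def min_fuel_storage_litres(fuel_rate_lph: float,
                            autonomy_h: float = REQUIRED_AUTONOMY_H) -> float:
    """Minimum usable fuel volume for the required autonomy."""
    return fuel_rate_lph * autonomy_h

print(f"Minimum usable tank volume: "
      f"{min_fuel_storage_litres(GENERATOR_FUEL_RATE_LPH):,.0f} L")
```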

Tier II – Design Criteria


According to Tier Standard: Topology [2] / clause 2.2.1, the fundamental requirements for a Tier II – Redundant Site Infrastructure Capacity Components facility are:

a) A Tier II data center has redundant capacity components and a single non-redundant distribution path serving the critical environment. The redundant components are:

  • Extra on-site power production (e.g. engine generator),
  • UPS modules and energy storage,
  • Chillers,
  • Heat rejection equipment (e.g. cooling towers, condensers),
  • Pumps,
  • Cooling units,
  • Fuel tanks.

b) Twelve hours of on-site fuel storage for 'N' capacity.


A few important points


Before diving into the details of cooling system design in compliance with the Tier criteria, first keep in mind a few more important points from the aforementioned standard [2].

Every data center subsystem and system must be consistently deployed with the same site uptime objective to satisfy the distinctive Tier requirements.

This standard requirement makes clear that certain Tier rating requirements shall be applicable to all mechanical, electrical and building systems that serve the IT space. So whatever we will discuss below about the cooling system for a Tier I or II data center design are equally applicable and essential for the on-site power production, UPS and storage equipment, fuel tanks and water storage (evaporative cooling) systems as well.

The Tier topology rating for an entire site is constrained by the rating of the weakest subsystem that will impact site operation. For example, a site with a robust Tier IV UPS configuration combined with a Tier II chilled water system yields a Tier II site rating.

So even if the design team is careful about the cooling system but fails to apply the same criteria to the electrical or fuel supply systems, the overall rating of the data center will be lower than expected.
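
The "weakest subsystem" rule is easy to express in a couple of lines; the subsystem names and ratings below are just illustrative placeholders.

```python
# Minimal sketch: the site Tier rating is the minimum of its subsystem ratings.
subsystem_tiers = {
    "UPS": 4,            # robust Tier IV UPS configuration
    "chilled_water": 2,  # Tier II chilled water system
    "generators": 3,     # illustrative
    "fuel_system": 3,    # illustrative
}

site_tier = min(subsystem_tiers.values())
weakest = min(subsystem_tiers, key=subsystem_tiers.get)
print(f"Site rating: Tier {site_tier} (constrained by the {weakest} subsystem)")
```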


Design of Tier I Cooling system


The cooling system design of a Tier I rated data center shall comply with the following requirements:

Non-redundant Capacity components – meaning that no redundancy (backup) is required for any equipment of the cooling system, including but not limited to components like:

  • CRAC / CRAH units,
  • Chillers, chilled water pumps,
  • Cooling towers, condenser water pumps,
  • AHUs,
  • Split type DX cooling units,
  • Makeup water storage tanks and pumps.


This is what we call an “N” units configuration: in simple terms, the total capacity of all installed equipment is equal to the data center cooling demand, with no spare units.
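
As a quick illustration, the sketch below sizes an 'N' configuration by dividing the IT cooling load by the capacity of a single CRAC unit; the load and unit capacity figures are assumed values for the example only.

```python
import math

# Minimal sketch: sizing an 'N' (non-redundant) configuration of CRAC units.
it_cooling_load_kw = 350.0    # total IT space heat load - assumed value
crac_unit_capacity_kw = 60.0  # sensible capacity per CRAC unit - assumed value

n_units = math.ceil(it_cooling_load_kw / crac_unit_capacity_kw)
print(f"Tier I ('N') configuration: {n_units} CRAC units, no standby unit")
```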

Single non-redundant distribution path – meaning that no redundancy (backup) is required for the cooling system distribution networks, such as:

  • Chilled water piping,
  • Condenser water piping,
  • Refrigerant copper piping for DX cooling systems,
  • Makeup water piping.

Depending on the system architecture, these requirements can be applied to several cooling system types, as shown in the schematic representations below. A Tier I cooling system can use split-type air conditioning systems with refrigerant (Fig. 01), a water-based system with CRAC units inside the IT space (Fig. 02), or more advanced solutions with chilled water and air handling units (AHUs) that supply air to the IT spaces (Fig. 03).
All these solutions are quite common in the data center industry, and the number of capacity units and distribution paths defines the Tier class.
Fig. 01 - A Tier I split-system arrangement with DX-type CRAC units uses 'N' running CRAC units, their respective external condensing units, and refrigerant piping connecting indoor and outdoor components in a 'single distribution path'. It is a basic system with no redundancy.




Fig. 02 - A Tier I water-based CRAC system contains one or more running 'capacity components': a combination of water chiller, cooling tower and water pumps, all operating simultaneously and supplying chilled water to 'N' running CRAC units. A two-pipe chilled water system connects the chiller and the CRAC units as a 'single distribution path'. It is a basic system with no redundancy.


Fig. 03 - A Tier I water-based air handling unit system contains one or more running 'capacity components': a combination of water chiller, cooling tower and water pumps, all operating simultaneously and supplying chilled water to 'N' running air handling units. A two-pipe chilled water system connects the chiller and the AHUs as a 'single distribution path'. It is a basic system with no redundancy.

In general, a Tier I data center is not recommended, since the site infrastructure, and with it the critical environment, must be shut down for any maintenance or repair work. Any installation or capacity addition will disrupt the critical environment. This configuration carries operational risks that have to be considered and made clear to the computer facility owner.

Design of Tier II Cooling system


The cooling system design of a Tier II rated data center shall comply with the following requirements:

Redundant Capacity components – meaning that redundancy (backup) is required for every piece of cooling system equipment, including but not limited to components like:

  • CRAC / CRAH units,
  • Chillers, chilled water pumps,
  • Cooling towers, condenser water pumps,
  • AHUs,
  • Split type DX cooling units,
  • Makeup water storage tanks and pumps.


This is what we call an “N+1” units configuration: in simple terms, the installed equipment covers the data center cooling demand plus at least one additional unit of each type.
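
Extending the earlier 'N' sizing sketch, the fragment below adds the redundant unit and verifies that the remaining units still cover the load when any single unit is out of service; the figures are again assumed.

```python
import math

# Minimal sketch: sizing an 'N+1' (redundant capacity) configuration of CRAC units.
it_cooling_load_kw = 350.0    # assumed value
crac_unit_capacity_kw = 60.0  # assumed value

n_units = math.ceil(it_cooling_load_kw / crac_unit_capacity_kw)  # duty units
installed_units = n_units + 1                                    # at least one standby

# With any single unit down for maintenance, capacity must still meet the load.
remaining_capacity_kw = (installed_units - 1) * crac_unit_capacity_kw
assert remaining_capacity_kw >= it_cooling_load_kw
print(f"Tier II ('N+1') configuration: {installed_units} CRAC units installed")
```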

Single non-redundant distribution path – meaning that no redundancy (backup) is required for the cooling system distribution networks, such as:

  • Chilled water piping,
  • Condenser water piping,
  • Refrigerant copper piping for DX cooling systems,
  • Makeup water piping.

In the same way as before, these requirements can be applied to several cooling system types, as shown in the schematic representations below. A Tier II cooling system can use split-type air conditioning systems with refrigerant (Fig. 04), a water-based system with CRAC units inside the IT space (Fig. 05), or more advanced solutions with chilled water and air handling units (AHUs) that supply air to the IT spaces (Fig. 06).
These system configurations are quite common in the data center industry, where redundancy of the 'capacity components' often leads operators to believe they have achieved a higher Tier rating. This is not the case: configurations like the ones shown below can only provide a Tier II class. Providing redundant equipment does not by itself give the system a Tier III rating unless further requirements are met.


Fig. 04 - A Tier II split-system arrangement with DX-type CRAC units uses 'N+1' CRAC units, with N units running and at least one redundant. The respective external condensing units and refrigerant piping connect indoor and outdoor components in a 'single distribution path'. It is a system with equipment redundancy only.


Fig. 05 - A Tier II water-based CRAC system contains one or more running 'capacity components', all in an 'N+1' configuration: a combination of water chiller, cooling tower and water pumps, all operating simultaneously and supplying chilled water to 'N' running CRAC units, plus at least one redundant piece of every type of capacity equipment (a redundant water chiller, cooling tower, pumps and CRAC unit). A two-pipe chilled water system connects all the chillers and CRAC units as a 'single distribution path'. It is a system with equipment redundancy and a single common piping network.



Fig. 06 - A Tier II water-based air handling unit system contains one or more running 'capacity components', all in an 'N+1' configuration: a combination of water chiller, cooling tower and water pumps, all operating simultaneously and supplying chilled water to 'N' running air handling units, plus at least one redundant piece of every type of capacity equipment (a redundant water chiller, cooling tower, pumps and AHU). A two-pipe chilled water system connects all the chillers and AHUs as a 'single distribution path'. It is a system with equipment redundancy and a single common piping network.


Note that in the case of water-based systems there are also requirements for the makeup water installation and the storage tanks.
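
As a rough illustration of such a requirement, the sketch below estimates a makeup water storage volume from cooling tower evaporation, drift and blowdown. The flow figures, loss fractions and 12-hour autonomy are assumptions for the example only and must be checked against the actual equipment data and the applicable standard.

```python
# Minimal sketch: makeup water storage for an evaporative (cooling tower) system.
# All figures are illustrative assumptions, not values from the Tier standard.
CP_WATER = 4.186        # kJ/(kg*K), specific heat of water
LATENT_HEAT = 2450.0    # kJ/kg, approximate latent heat of vaporisation

condenser_flow_ls = 40.0   # circulating condenser water flow, L/s - assumed
cooling_range_c = 5.0      # condenser water temperature range, K - assumed
drift_fraction = 0.002     # drift loss as fraction of circulating flow - assumed
cycles_of_concentration = 4.0
autonomy_h = 12.0          # assumed storage autonomy for the example

# Evaporated flow removes the condenser heat as latent heat.
evaporation_ls = condenser_flow_ls * CP_WATER * cooling_range_c / LATENT_HEAT
drift_ls = condenser_flow_ls * drift_fraction
blowdown_ls = evaporation_ls / (cycles_of_concentration - 1.0)

makeup_ls = evaporation_ls + drift_ls + blowdown_ls
storage_m3 = makeup_ls * 3600.0 * autonomy_h / 1000.0
print(f"Makeup rate: {makeup_ls:.2f} L/s -> storage: {storage_m3:.1f} m3")
```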


In general, a Tier II data center provides capacity components that can be maintained or repaired with limited impact on the critical environment. On the other hand, a failure of any distribution path element will disrupt the critical environment. The operational risk that this configuration brings to the computer facility owner therefore remains high, and the piping networks become the weakest point of the system.


I hope you find this post interesting and educational.





  • If you liked it, please share it on social media so that more people can have access to it.
  • If you have any questions or would like to discuss a special case, please leave a comment below. I will be happy to answer!

