How to solve the heat dissipation problem of dense server racks?

Some server racks in data centers now consume as much as 800 kilowatt-hours per rack per year, and that figure keeps climbing. Where is this heading?


Data center density has been a topic of discussion since the turn of the century, which may explain why many IT organizations still hover at densities of 4 to 6 kilowatts per rack. Power and heat management practices, however, are already prepared for racks exceeding 10 kilowatts.


Soaring processor core counts and dense rack-level blade server designs make rising CRAC and power costs seem inevitable. But high density does not doom servers the way designers fear. Virtualization, energy-efficient hardware, active cooling containment, and higher acceptable operating temperatures work together to delay and reduce thermal load.


How big is the thermal problem for servers?

Instead of dedicating a server to each workload, a hypervisor lets a moderately configured server host 10, 20, or even more workloads. Once those workloads are virtualized, rack space in the facility can be freed up.
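The consolidation arithmetic can be sketched in a few lines; all of the figures below are illustrative assumptions, not numbers from the article:

```python
# Back-of-the-envelope consolidation estimate (all figures are
# illustrative assumptions).
workloads = 200          # total workloads to host
vms_per_host = 20        # workloads one medium server runs under a hypervisor
servers_per_rack = 40    # 1U servers that fit in one rack

# One physical server per workload vs. consolidated hosts
# (-(-a // b) is ceiling division).
racks_unvirtualized = -(-workloads // servers_per_rack)
hosts_needed = -(-workloads // vms_per_host)
racks_virtualized = -(-hosts_needed // servers_per_rack)

print(racks_unvirtualized, hosts_needed, racks_virtualized)  # 5 10 1
```

Under these assumed ratios, five racks of one-workload-per-server machines collapse into a single rack of hypervisor hosts, which is what frees the floor space the article describes.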


At the same time, chips are built on denser fabrication processes and run at lower clock speeds, so the steady growth in core counts barely increases rack energy consumption when equipment is refreshed.


With consolidation, fewer but more fully utilized servers remain in the data center, so fewer racks are required, and that changes how cooling is applied. Instead of cooling the entire room with macroscopic air-handling strategies such as hot/cold aisles that drive air convection through the space, operators use containment to shrink the cooled area to a few small zones, or even a handful of racks. In-row or in-rack cooling systems handle that heat, sometimes allowing the computer room air conditioner (CRAC) to be turned off entirely.


In addition, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) now allows server air inlet temperatures as high as 80 or even 90 degrees Fahrenheit.


With these energy management practices in place, hot spots and inadequate cooling are unlikely; when they do occur, the cause is usually poor design or a flawed facility retrofit.


Hot spots and other cooling issues


Even with the best containment strategy and a high-efficiency cooling system, hot spots can still form in racks because of suboptimal selection or placement of computing equipment.


Unexpected obstructions or accidental changes in the airflow path can create heat. For example, removing blanking panels from a rack lets air escape to unplanned locations, weakening the airflow to other servers and raising exhaust temperatures.


A significant increase in server power consumption also causes cooling problems. For example, replacing several 1U servers with a modern blade system greatly increases the rack's power draw, and insufficient airflow affects every module in the blade chassis. If the cooling system was not designed for such servers, hot spots are likely to appear frequently.
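A quick comparison shows why the swap matters; the wattages below are assumed round numbers for illustration, not vendor specifications:

```python
# Illustrative rack power comparison: 1U servers vs. one blade chassis
# (all wattages are assumptions, not measured figures).
u1_count = 10                 # ten 1U servers being replaced
u1_watts = 400                # assumed draw of each 1U server
blade_chassis_watts = 7000    # assumed draw of a fully loaded chassis

before_kw = u1_count * u1_watts / 1000
after_kw = blade_chassis_watts / 1000

print(before_kw, after_kw)  # 4.0 7.0
```

Even though the blade chassis occupies less rack space, the heat the cooling system must remove from that rack nearly doubles under these assumptions.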


When increasing rack density, operating organizations should consider investing in data center infrastructure management (DCIM) and other system management tools that collect data from thermal sensors in the racks and generate reports. These tools can detect conditions that exceed thermal limits and take action, such as notifying technicians, automatically triggering workload migration, or shutting systems down to prevent premature hardware failure.
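The escalation logic such tools apply can be sketched as a simple threshold check. The thresholds and action names below are hypothetical stand-ins for whatever a real DCIM stack exposes; only the tiered response is the point:

```python
# Minimal sketch of a DCIM-style thermal escalation rule.
# Thresholds and actions are illustrative assumptions.
WARN_F = 95.0    # assumed inlet temperature that warrants migration
CRIT_F = 105.0   # assumed inlet temperature that forces shutdown

def escalate(rack: str, inlet_temp_f: float) -> str:
    """Map a rack's inlet temperature to an escalation action."""
    if inlet_temp_f >= CRIT_F:
        return f"shutdown {rack}"               # prevent premature failure
    if inlet_temp_f >= WARN_F:
        return f"migrate workloads off {rack}"  # or notify a technician
    return "ok"

print(escalate("rack-12", 99.0))   # migrate workloads off rack-12
print(escalate("rack-12", 110.0))  # shutdown rack-12
```

In practice the sensor readings would come from the rack's management bus and the actions would call into the virtualization layer, but the tiered warn/critical structure is the same.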


When rack planning creates hot spots, the IT team can redistribute hardware. Instead of filling a single rack, spread the equipment across two or more racks if space permits, or shut down the overheating systems.


If there is not enough space to redesign, add portable cooling units with self-contained air conditioning for use on the data center floor. If the rack already uses a compact in-row or in-rack cooling unit, lowering its temperature setpoint is often more effective than opening a containment enclosure and adding spot cooling.


Long-term mitigation strategies


In the long run, emerging technologies can help with heat management.


Water-cooled racks circulate chilled water through cabinet doors or other paths. They can solve most heat problems, especially where convection between cool and hot air is no longer sufficient.


Immersion cooling submerges servers in a bath of a non-conductive, non-corrosive coolant such as mineral oil. The technology promises high efficiency, near-silent operation, and almost lossless heat transfer.


However, these options are better suited to new data center builds than to routine technology refreshes.



Sinda Thermal is experienced in providing thermal solutions for 5G, server, computer, medical equipment, electric vehicle, and LED applications. We can provide extrusion heatsinks, skived fin heatsinks, stamped fin assemblies, heat pipe soldering modules, vapor chamber heatsinks, liquid cooling plates, fan coolers, and more. Please contact us if you need any help with thermal issues.


website: www.sindathermal.com

contact:castio_ou@sindathermal.com

Wechat: +8618813908426

