What is the best solution for data center energy consumption?
Between 2016 and 2020, China's computing power grew by an average of 42% annually; total computing power reached 135 EFLOPS in 2020 and was still growing at a rapid 55% per year. This growth has brought new problems: as computational load increases, so does power consumption. Take GPT-3, the world's best-known pre-trained large model, as an example: a single training run demands enormous computing power, consuming roughly 190,000 kilowatt-hours of electricity and producing 850,000 tons of carbon dioxide. Calling it an "electricity-devouring monster" is no exaggeration.

PUE (Power Usage Effectiveness) measures the ratio of all energy consumed by a data center to the energy consumed by its IT loads, and is considered a key indicator of a data center's energy efficiency. The closer the PUE value is to 1, the less energy non-IT equipment consumes and the more efficient the data center. At present, the average PUE of large data centers in China is 1.55, while ultra-large data centers do somewhat better, averaging 1.46.
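The PUE ratio described above can be sketched as a one-line calculation. The energy figures below are illustrative assumptions, not values from the article; only the 1.55 average is taken from the text.

```python
def pue(total_facility_kwh: float, it_load_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT load energy."""
    if it_load_kwh <= 0:
        raise ValueError("IT load energy must be positive")
    return total_facility_kwh / it_load_kwh

# Illustrative figures (assumed): a facility drawing 1,550 MWh in total
# while its IT equipment consumes 1,000 MWh of that.
print(pue(1550.0, 1000.0))  # 1.55 -- matches the average cited for large Chinese data centers
```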

Faced with the opportunity to reshape the industrial landscape through computing power, data centers are an unavoidable necessity, and the few available levers are improving computing efficiency and reducing energy consumption. Finding new cooling solutions has therefore become a topic the entire computing industry, upstream and downstream, must address. The traditional approach relies mainly on air cooling: air serves as the working medium, carrying heat from the server motherboard, CPU, and other components to heat-sink modules, after which fans or air conditioning blow the heat away. This is the main reason the cooling system consumes nearly half of a data center's power.
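The "nearly half" figure can be connected to PUE with simple arithmetic, under the assumption that non-IT overhead is dominated by cooling. A minimal sketch:

```python
def overhead_share(pue: float) -> float:
    """Fraction of total facility energy consumed by non-IT equipment
    (cooling, power distribution, lighting) at a given PUE.
    Assumes overhead is dominated by the cooling system."""
    return (pue - 1.0) / pue

# At PUE = 2.0, half of all facility energy goes to non-IT overhead:
print(round(overhead_share(2.0), 2))   # 0.5
# At the 1.55 average cited earlier, the overhead share is about 35%:
print(round(overhead_share(1.55), 2))  # 0.35
```

This shows why "nearly half" applies to less efficient facilities (PUE approaching 2.0), while well-run large data centers sit closer to a third.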

Once PUE values were strictly limited and green computing gained traction, "liquid cooling" technology, first tried as far back as the 1980s, quickly became a new focus across the industry. The principle is not complicated: insulating, low-boiling-point coolants such as mineral oil and fluorinated fluids serve as the working medium, and heat exchange carries the server's heat away. The approach has evolved into several schemes, including cold-plate, spray, and immersion cooling.
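The heat-exchange step above can be sized with the standard relation Q = ṁ·c_p·ΔT, rearranged to give the coolant mass flow needed for a given heat load. The rack power, specific heat, and temperature rise below are assumed illustrative values, not figures from the article.

```python
def coolant_mass_flow(heat_load_w: float, specific_heat_j_per_kg_k: float,
                      delta_t_k: float) -> float:
    """Mass flow rate (kg/s) needed to carry away a heat load,
    from Q = m_dot * c_p * delta_T."""
    return heat_load_w / (specific_heat_j_per_kg_k * delta_t_k)

# Illustrative values (assumed): a 10 kW rack, a fluorinated coolant with
# c_p of roughly 1100 J/(kg*K), and a 10 K allowed temperature rise.
flow = coolant_mass_flow(10_000, 1100, 10)
print(round(flow, 2))  # 0.91 kg/s
```

The low specific heat of fluorinated fluids relative to water is one reason immersion schemes circulate comparatively large coolant volumes.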

Air cooling involves a complex heat path, high total thermal resistance, and low heat-transfer efficiency, which greatly restricts the compute density of data centers and often generates significant noise. Liquid cooling, by contrast, not only saves energy but also reduces noise and saves space: the power required for heat dissipation is more than 90% lower than with traditional solutions.

The emergence and application of liquid cooling have thus largely solved the heat-dissipation problems of high-density computing. Like many new technologies, however, liquid cooling has its shortcomings: high manufacturing costs, strict requirements on the machine-room environment, and expensive retrofits. Liquid cooling may be the strongest of the available heat-dissipation schemes, but these practical limitations must still be weighed.
