Research on the development status of server liquid cooling technology at home and abroad

        Efficient heat dissipation of microelectronic and electrical equipment has long been one of the main applications of modern heat transfer technology [1]. The integration density of electronic processing chips keeps rising while their physical size keeps shrinking: the mainstream CPU manufacturing process has advanced from 65 nm to 32 nm, and GPUs have entered the 28 nm era, which has created a high-heat-flux-density cooling problem [2]. The heat flux of a CPU chip has soared from roughly 10⁵ W/m² a few years ago to roughly 10⁶ W/m² today [3].

        If heat dissipation is poor, the resulting excessive temperature not only reduces the operating stability of the chip and increases its error rate, but the large temperature difference between the interior of the module and the external environment also produces excessive thermal stress, which degrades the chip's electrical performance, operating frequency, mechanical strength, and reliability. Research and practical experience have shown that the failure rate of electronic components increases exponentially with operating temperature [4]: every 10 °C rise in the temperature of a single semiconductor component reduces system reliability by about 50%. High temperature harms electronic components in many ways, for example by degrading semiconductor junctions, damaging circuit interconnect interfaces, increasing conductor resistance, and inducing mechanical stress damage; studies show that more than 55% of electronic equipment failures are caused by excessive temperature [5].

        Ever since the server was born, system heat dissipation has accompanied its development and cannot be avoided. Most common servers rely on cold air to cool the machine. With the development of supercomputers, chip integration and computing speed keep increasing, energy consumption keeps rising, and the heat dissipation problem has become increasingly prominent. Traditional air cooling is a direct heat transfer method that relies on single-phase forced convection; it is suitable only for electronic devices with a heat flux density of no more than 10 W/cm², and is powerless once the heat flux density exceeds that level.
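The rule of thumb above (failure rate grows exponentially with temperature, roughly doubling per 10 °C rise) can be sketched numerically. The model below is only an illustration of that exponential relationship; the baseline rate, baseline temperature, and mission time are hypothetical assumptions, not values from the cited studies.

```python
import math

# Rule-of-thumb model from the text: the failure rate of a semiconductor
# component roughly doubles for every 10 degC rise in temperature
# (equivalently, reliability over a fixed mission time is roughly halved).
# base_rate and base_temp_c are illustrative assumptions, not cited values.

def failure_rate(temp_c, base_rate=1e-6, base_temp_c=40.0):
    """Failures per hour at temp_c, doubling every 10 degC above base_temp_c."""
    return base_rate * 2.0 ** ((temp_c - base_temp_c) / 10.0)

def reliability(temp_c, hours=8760.0):
    """Probability of surviving `hours` under a constant failure rate."""
    return math.exp(-failure_rate(temp_c) * hours)

for t in (40, 50, 60, 70):
    print(f"{t} degC: rate={failure_rate(t):.2e}/h, 1-year R={reliability(t):.4f}")
```

Under this model a chip running at 70 °C has eight times the failure rate of one at 40 °C, which is why even modest cooling improvements pay off so strongly in reliability.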

        In recent years, informatization in China has matured considerably, placing higher demands on the core information system infrastructure: the data center. As the amount of data to be processed keeps growing, data centers keep growing in scale as well.

        With the explosive growth in demand for data processing and the rapid development of computer and network technology, large enterprises in banking, insurance, securities and other financial sectors, in transportation, healthcare and other industries, as well as government agencies, have successively built many data centers. Driven by the demands of data services and IT technology, data center construction in China has entered a period of rapid development, and cloud computing centers and data centers are springing up everywhere.

        Since the dominant cooling method today is air cooling, a huge data center means a huge electricity bill. As data centers are built out, their enormous energy consumption has drawn attention across society. For example, in 2013 China had about 45,000 data centers consuming roughly 20 billion kWh per year; by 2020 the number of data centers in China was expected to exceed 80,000, with annual power consumption exceeding 40 billion kWh (data source: ICTresearch).

        China has countless large data centers whose annual operating electricity costs run into the millions or even tens of millions, and data centers have become power-consuming "bottomless pits." The energy efficiency of current data centers is not high.

        This is because the underlying computing, power supply, and cooling technologies have all evolved historically. Traditional data center design pursued performance; under today's conditions of energy shortage and rapidly rising energy costs, a new generation of data centers must pursue energy efficiency, measured by PUE (Power Usage Effectiveness), the ratio of total facility energy consumption to IT equipment energy consumption. In view of a warming global climate, increasingly tight energy supplies, and rising energy costs, data centers, as high-energy-consuming facilities, face severe challenges in reducing energy consumption, improving resource utilization, and cutting costs; energy efficiency has drawn the attention of more and more data center managers and IT vendors and has become an inevitable trend in future data center development. National authorities are likewise paying increasing attention to data center energy management. At the beginning of 2013, the "Guiding Opinions on the Construction and Layout of Data Centers," issued jointly by the Ministry of Industry and Information Technology, the Development and Reform Commission, the Ministry of Land and Resources, the Electricity Regulatory Commission, and the Energy Administration (MIIT joint document [2013] No. 13), put forward very specific requirements: promote overall planning of data center siting with consideration of resource and environmental factors; promote intensive use of resources and improve energy conservation and emission reduction; introduce standards meeting the requirements of new-generation green data centers; optimize the layout of hot and cold airflows in the computer room; and adopt measures such as precision air supply and rapid cooling of heat sources to reduce operating costs in computer room construction and equipment selection. The goal is to ensure that the PUE of newly built large data centers falls below 1.5, and to strive to bring the PUE of retrofitted data centers below 2.
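Since PUE is simply total facility energy divided by IT equipment energy, the 1.5 target means that cooling, power distribution and all other overhead together may consume at most half as much energy as the IT load itself. A minimal sketch, with purely illustrative figures:

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment
# energy. PUE = 1.0 would mean zero overhead; air-cooled data centers
# commonly sit well above the 1.5 target mentioned in the text.

def pue(total_facility_kwh, it_equipment_kwh):
    """Ratio of total facility energy to IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative (hypothetical) annual figures: 10 GWh of IT load plus
# 5 GWh of cooling and other overhead hits the 1.5 ceiling exactly.
it_kwh = 10_000_000
overhead_kwh = 5_000_000
print(pue(it_kwh + overhead_kwh, it_kwh))  # -> 1.5
```

In this framing, liquid cooling attacks the largest overhead term directly: reducing cooling energy shrinks the numerator while the IT denominator stays fixed, pulling PUE toward 1.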

        At present, the heat density of the rack servers and blade servers widely deployed in data centers is increasing year by year. Traditional air-conditioning systems based on air cooling can no longer meet the cooling requirements of such high-density computer rooms, and data center infrastructure designers must look elsewhere for an efficient and reasonable cooling mode.

        There are many technical means of reducing data center energy consumption, but the fundamental solution lies not in how data centers are built, but in revolutionizing how computers and other equipment are cooled. Judging from the latest research progress at home and abroad, developing a new generation of liquid-cooled computers (liquid-cooled servers) that use liquid coolants in place of air to cool heat-generating computer components is the coming technological revolution in computing equipment.
