China Merchants Securities released a research report saying that the upgrade of AI computing power is driving server CPU iteration and increasing GPU demand, lifting the storage capacity and value of AI servers to several times those of traditional servers. In training AI servers, the GPU carries most of the computing load, and its compute requirements are pushing new memory markets such as HBM past US$10 billion. This in turn increases demand for advanced packaging processes such as bumping, TSV, and CoWoS, and brings incremental demand for equipment such as thinning, bonding, molding, and test tools, as well as materials such as EMC, plating solutions, and PSPI. Combined with continued growth in domestic demand for self-sufficiency, the domestic storage and HBM advanced-packaging supply chains have huge room to develop.
The main views of China Merchants Securities are as follows:
AI server CPUs and GPUs are upgrading with computing-power demand, multiplying memory capacity and value.
Traditional servers use the CPU as the computing core. As the compute requirements of AI training models keep rising, CPU core counts, clock frequencies, and thread counts continue to increase, but the CPU alone can no longer meet compute demand and must be paired with GPUs for massively parallel data processing; mainstream training servers are typically configured with 8 GPUs. The memory used by AI servers includes CPU DRAM, GPU memory, and NAND-based storage, and both capacity and value are several times those of ordinary servers. 1) DRAM: CPU DRAM capacity in NVIDIA training AI servers reaches 2TB, and a single GPU typically carries more than 80GB of HBM, so the total HBM capacity of an AI server is expected to exceed 640GB. Total memory capacity is 4-8 times that of an ordinary server; the value of CPU memory alone is expected to rise about 5 times, while GPU HBM is a purely incremental market. Server memory is also iterating: ordinary servers are currently equipped with DDR4, but the most advanced AI servers have moved to DDR5 or LPDDR5. 2) NAND: AI server storage capacity reaches 30TB, 2-4 times that of traditional servers. In addition, traditional servers mix mechanical hard drives and solid-state drives (SSDs), while AI servers use SSDs almost exclusively, so the overall storage value is expected to be roughly 10 times that of ordinary servers.
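As a sanity check, the capacity figures above can be tied together in a few lines. Note that the ordinary-server DRAM baseline of 512GB is an illustrative assumption, not a figure from the report:

```python
# Rough memory-capacity comparison using the report's figures.
ordinary_server_dram_gb = 512        # hypothetical ordinary-server baseline (assumption)
ai_cpu_dram_gb = 2048                # 2TB CPU DRAM in an NVIDIA training server (per report)
hbm_per_gpu_gb = 80                  # >=80GB HBM per GPU (per report)
gpus = 8                             # mainstream training-server configuration

total_hbm_gb = gpus * hbm_per_gpu_gb            # the "pure incremental" HBM market
total_ai_memory_gb = ai_cpu_dram_gb + total_hbm_gb
multiple = total_ai_memory_gb / ordinary_server_dram_gb

print(total_hbm_gb)   # 640 GB of HBM per server, matching the report
print(multiple)       # 5.25x under the assumed baseline, inside the report's 4-8x range
```

Under this assumed baseline the multiple lands at 5.25x, consistent with the report's 4-8x range.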
HBM breaks through the GPU bandwidth bottleneck of training AI servers, and the incremental market is expected to exceed US$10 billion in 2024.
HBM (High Bandwidth Memory) is a new type of CPU/GPU memory chip based on 2.5D/3D packaging: DRAM dies are stacked vertically and interconnected by TSVs. HBM delivers high bandwidth at low power, so it is widely paired with the GPUs of training AI servers. Training AI servers pull HBM demand in three ways: 1) more GPUs per server: from 2 in ordinary servers to 8 today; 2) more HBM stacks per GPU: the HBM1 scheme paired a GPU with 4 stacks, while current HBM2e/HBM3 schemes typically pair a GPU with 6 stacks; 3) more DRAM layers and higher capacity per stack: from HBM1 to HBM3, single-die density has risen from 2Gb to 16Gb, stack height from 4Hi to 12Hi, and per-stack capacity from 1GB to 24GB. According to TrendForce, global server shipments are expected to reach 17 million units in 2025, and current AI server penetration is below roughly 2%. Assuming AI server penetration reaches about 4% in 2024, with each AI server carrying 8 GPUs and each GPU carrying 6 HBM stacks totaling 80GB to 100GB or more, the incremental HBM market brought by AI servers in 2024 is expected to exceed US$10 billion.
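The market-sizing arithmetic above can be made explicit. Rather than assuming an HBM price (which the report does not state), the sketch below computes the average per-GB price implied by a US$10 billion market at the report's lower-bound capacity:

```python
# Back-of-envelope check of the report's 2024 HBM market estimate.
# All inputs are the report's stated assumptions except where noted.
server_shipments = 17_000_000   # global server shipments (TrendForce 2025 forecast)
ai_penetration   = 0.04         # assumed 2024 AI-server penetration
gpus_per_server  = 8            # mainstream training-server configuration
hbm_gb_per_gpu   = 80           # lower bound of the report's 80-100GB per GPU

ai_servers = server_shipments * ai_penetration
total_hbm_gb = ai_servers * gpus_per_server * hbm_gb_per_gpu

# Average HBM price needed for the market to reach US$10 billion:
implied_price_per_gb = 10e9 / total_hbm_gb

print(f"AI servers: {ai_servers:,.0f}")                        # 680,000 units
print(f"HBM demand: {total_hbm_gb/1e6:,.1f} million GB")       # 435.2 million GB
print(f"Implied price for $10B: ${implied_price_per_gb:.1f}/GB")
```

At the lower-bound capacity, the US$10 billion estimate implies an average HBM price of roughly $23/GB; at 100GB per GPU, roughly $18/GB.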
AI server GPUs adopt a 2.5D + 3D packaging process, driving requirements for core packaging technologies such as TSV and CoWoS.
HBM and GPUs adopt a 2.5D + 3D packaging process. According to Yole, the combined HBM and silicon interposer packaging market was about US$1.4 billion in 2021 and is expected to grow to US$3.5 billion in 2027, with the HBM and silicon interposer packaging markets reaching US$1.63 billion and US$1.88 billion respectively. TSV (through-silicon via) is a vertical interconnect technology suited to the 2.5D packaging architecture; it provides extremely high bandwidth and density at very low energy consumption and is the preferred solution for circuit miniaturization, high density, and multi-function integration. 2.5D TSV technology is widely used in the HBM on AI GPU substrates to connect the stacked DRAM dies and to connect the HBM stack to the metal bumps below. The CoWoS process packages the HBM, silicon interposer, and package substrate together, and TSMC currently leads in this process. With Google TPUs, NVIDIA GPUs, AMD's MI300, and others all being adopted for generative AI, TSMC's CoWoS demand has doubled since 2022 and is currently in short supply; TSMC is expected to double its current CoWoS capacity by 2024.
HBM's multi-layer stack structure adds process steps and will keep driving up demand for packaging equipment and materials.
1) Equipment: HBM greatly increases the quantity and precision requirements of front-end inspection and metrology equipment, with the main increment coming from the micro-bump, TSV, and silicon interposer processes. The pre-bonding wafer-level testing and KGSD (known good stacked die) package-level testing added by HBM also drive the quantity and precision of back-end test equipment such as handlers, testers, and probe stations. As HBM stack heights grow, wafer thickness must keep shrinking, further increasing demand for thinning, bonding, and related equipment. The multi-layer stack also requires ultra-thin wafers and copper-copper hybrid bonding, raising demand for temporary bonding/debonding and hybrid bonding equipment; the protective material on each DRAM die layer is likewise critical, placing higher requirements on injection molding or compression molding equipment. In addition, demand for traditional equipment such as dicing saws, die bonders, and reflow ovens benefits from the additional process steps and higher per-unit value brought by HBM packaging. 2) Materials: the chip gaps in HBM are filled with GMC (granular molding compound) or LMC (liquid molding compound), whose main raw materials are spherical silica and spherical alumina powders; HBM's flip-chip process uses underfill; PSPI serves as the repassivation layer for the RDL in the silicon interposer; the introduction of bumping, RDL, and TSV into HBM's front-end processes increases consumption of plating solutions; and HBM will also lift demand for other materials such as electronic adhesives, package substrates, and pressure-sensitive tapes.
Investment advice: Compared with traditional servers, the memory capacity and value of AI servers rise several-fold, and training AI server GPUs in particular raise bandwidth requirements significantly, creating incremental demand for new memory types such as HBM. At present, the DRAM, NAND, and HBM markets are dominated by overseas incumbents such as Samsung, Micron, and SK Hynix, and HBM's CoWoS packaging process is largely controlled by TSMC. However, considering the pull of AI on the entire storage supply chain, the continued recovery of industry demand, and the continued growth of domestic demand for self-sufficiency, the advanced packaging supply chain spawned by domestic storage and HBM has huge room to develop.
Suggested stocks to watch:
Advanced packaging equipment targets: Skyverse Technology (688361.SH), NAURA (002371.SZ), AMEC (688012.SH), etc.;
Advanced packaging materials targets: Dinglong (300054.SZ), Anji Technology (688019.SH), Yoke Technology (002409.SZ), etc.;
Advanced packaging targets: JCET (600584.SH), Tongfu Microelectronics (002156.SZ), Huatian Technology (002185.SZ), etc.;
Storage module and controller targets: Longsys (301308.SZ), BIWIN Storage (688525.SH), Netac Technology (300042.SZ), etc.;
Storage distribution targets: Shannon Semi (300475.SZ), Yachuang Electronics (301099.SZ), etc.;
Storage and HBM supporting targets: C*Core Technology (688262.SH), Montage Technology (688008.SH), Chuangyitong (300991.SZ), etc.
Risk warning: AI server penetration rising less than expected; storage industry recovery falling short of expectations; domestic substitution progressing slower than expected; R&D progress falling short of expectations.