Tencent Debuts Pepped-Up Computing Cluster to Help With AI Push

(Yicai Global) April 14 -- Tencent Holdings unveiled a new high-performance computing cluster today to meet the growing need to develop and train artificial intelligence models.
The new cluster combines Tencent’s self-developed Star Lake servers, US chip giant Nvidia’s H800 graphics processing units, and ultra-high inter-server communication bandwidth of 3.2 terabits per second, the Shenzhen-based firm said at a press conference. It can provide cluster-scale computing for training large AI models, as well as for autonomous driving and scientific computing applications.
Computing performance has also been boosted to four times that of the previous generation. Tencent said the new cluster can shorten the training time for its self-developed natural language processing model Hunyuan from 11 days to four days with the same data set.
US AI startup OpenAI’s ChatGPT bot has proved hugely popular around the world, and Chinese companies are rushing to develop similar products. Tencent previously revealed that it had set up a project team called HunyuanAide to research large AI models in sectors including natural language processing and computer vision, Yicai Global reported earlier.
The eagerness to develop AI models has also increased demand for high-performance computing power that is both scalable and stable, Song Dandan, director of heterogeneous computing products at Tencent Cloud, told Yicai Global previously.
AI’s computing power needs fall into two phases: training and inference. The training phase demands a large amount of computing power within a short period of time, while in the inference phase, large models require more cost-effective computing power and faster connections to end-user device applications, Song added.
Editors: Dou Shicong, Tom Litting