Tech firms have joined forces both domestically and globally to take on US chip giant Nvidia, which has enjoyed a near monopoly in the burgeoning AI chip market.
An increasing number of global Big Tech companies are building their own AI ecosystems amid expectations for continuous growth in the sector, according to experts on Tuesday.
While Nvidia has teamed up with SK hynix and TSMC to further strengthen its technical leadership in high-bandwidth memory chips, other tech firms are forming rival alliances.
Intel has put together partnerships with Korean tech firms including Samsung, Naver and Korea Advanced Institute of Science and Technology.
IT giant Naver, which also has a partnership with Samsung, is striving to join the AI chip market as part of its efforts to expand its identity from a portal company to an AI powerhouse.
And Kakao became the first Korean company to join the AI Alliance led by global tech giants IBM and Meta, which also includes Intel, to create a local AI ecosystem that meets global standards.
Lee Sung-yeop, a professor at Korea University's Graduate School of Management of Technology, pointed out that it is inevitable for second movers like Samsung Electronics and Intel to forge partnerships to compete with Nvidia, despite their advanced chip technologies.
“Once a company secures market dominance and monopolizes it, it becomes very difficult for latecomer companies to reshape the ecosystem,” the professor said, citing the example of ChatGPT, which leads the generative AI market while second-mover chatbot makers often introduce their services through open-source models.
However, questions remain about whether the alliances will result in technology that can rival Nvidia's.
Building an AI model requires a development platform and specialized chips, and since Nvidia’s global market share in AI chips is estimated at more than 80 percent, AI services depend on the US chip giant for both chips and platforms.
Nvidia’s flagship data center GPU, H100, sells for around $25,000-$40,000 per unit, with the price expected to double this year as increasingly sophisticated AI services push up demand.
Nvidia’s AI software platform, CUDA, is specialized for Nvidia’s GPUs and operates only with Nvidia semiconductors. For example, ChatGPT, the most popular conversational AI chatbot, was also trained with CUDA, and its 3.5 version was built with 15,000 H100 chips.
Regarding Intel's active pursuit of AI chip partnerships with Korean firms lately, Lee said, “Intel probably sent requests to other Big Tech companies globally, but it would have been difficult for those firms to accept Intel’s offer because they probably cared more about their businesses with Nvidia.

“Korean firms like Samsung and Naver are globally competitive as well. That’s why Intel wanted to reach out to them as a second option.”
Kim Myung-joo, a professor of information protection at Seoul Women's University, however, said he thought Naver's partnership with Intel and Samsung would not be strong enough to break Nvidia's monopoly in the short term.

Still, he viewed the alliance positively as a “meaningful strategy” for combining each company's outstanding capabilities to advance AI chip development.
Meanwhile, experts suggest Korean firms take this chance to secure leadership in the AI chip industry, warning that it is “now or never.” An anonymous professor with expertise in semiconductors noted that “technology and price advantage” would be the key to success.
“Outside of memory, we have not yet secured success in system semiconductors. Since we have competitive companies that can handle (chip) design, high-bandwidth memory, chip packaging and the foundry, it is a good opportunity to create a successful model without depending on Nvidia for the first time,” he added.