- Competition goal: South Korea launched a contest to develop an independent AI model using Korean technology for national self-reliance.
- Foreign code use: Three of five finalists were found to incorporate open-source code from foreign AI models, including Chinese ones, prompting scrutiny.
- Expert perspectives: Some experts argue that excluding open-source software is impractical and wasteful, while others cite security risks and national ownership concerns.
- Sovereign AI push: The government seeks two winners by 2027 achieving at least 95% parity with top models, offering funding, talent access, and government-procured chips.
- Upstage controversy: Rivals alleged Upstage’s model mirrored Chinese Zhipu AI’s and retained its copyright markers; Upstage shared development logs showing it trained from scratch but acknowledged using open-source inference code.
- Naver and SK Telecom scrutiny: Naver admitted to using external encoders similar to Alibaba/OpenAI, and SK Telecom faced claims of inference code resembling China’s DeepSeek, while both stress their core engines are independently developed.
- Rules unclear: Competition guidelines didn’t ban foreign open-source use, the science ministry has issued no new rules, and it plans to cut one finalist while the minister welcomes debate.
- Core training stance: The director of Seoul National University’s AI Institute said the finalists facing questions trained their models from scratch and don’t appear to have relied on foreign tools for core internal tuning.
By Jiyoung Sohn
Jan. 13, 2026 11:00 pm ET
The SK Telecom pavilion at an information-technology show in Seoul. Jeon Heon-Kyun/Shutterstock
SEOUL—Last June, the South Korean government launched a competition to create a new independent AI model developed with Korean technology. A homegrown tool like that was critical to ensuring Korea’s technological self-reliance in a world already dominated by U.S. and Chinese artificial intelligence.
It is proving to be easier said than done.
Of the five finalist companies in the three-year competition, three have been found to use at least some open-source code from foreign AI models, including Chinese ones.
The companies and AI experts argue it makes little sense to shun existing AI models and attempt to build everything from scratch. But others say that any use of foreign tools creates potential security risks and undercuts hopes of cultivating an AI model undeniably a nation’s own.
It isn’t realistic to require that every single piece of code be written entirely in-house when developing an AI model, said Gu-Yeon Wei, an electrical-engineering professor at Harvard University who is familiar with the Korean competition but not directly involved with any of the competitors.
“To forgo open-source software,” Wei said, “you’re leaving on the table this huge amount of benefit.”
Countries worldwide are increasingly looking to reduce foreign reliance and hone their own capabilities in a technology that could profoundly affect their economic competitiveness and national security.
A chip from Nvidia’s Quantum-X platform. Bridget Bennett/Bloomberg News
South Korea, with a bevy of chip giants, software firms and political backing, represents one of the most aggressive proponents of so-called sovereign AI.
The race seeks to identify two homegrown winners by 2027 able to achieve 95% or higher parity in performance with leading AI models from the likes of OpenAI or Google. Winners get state funding for data and hiring talent, as well as access to government-procured chips essential for AI computing.
Controversy over one of the finalists, Upstage, erupted in recent days. Some components of its AI model bore resemblance to an open-source model from China-based Zhipu AI, according to the chief executive of Sionic AI, a local rival. In addition, Zhipu AI copyright markers were left within some of Upstage’s code, he claimed.
“It’s deeply regrettable that a model suspected to be a fine-tuned copy of a Chinese model was submitted to a project funded by taxpayers’ money,” Ko Suk-hyun, the Sionic CEO, wrote on LinkedIn. Sionic had also entered the South Korean competition, though failed to make the finalist list.
In response, Upstage held a livestreamed verification session, sharing its development logs to show that its model was built and trained from scratch using its own methods. But the inference code used to run the model did include open-source elements originating from Zhipu AI that are widely used globally. Sionic’s CEO apologized.
The scrutiny prompted closer looks at the other finalists. Naver’s AI model was accused of bearing similarities to offerings from China’s Alibaba and OpenAI in its visual and audio encoders, which translate images and sounds into a format a machine can understand.
Naver’s headquarters in Seongnam, south of Seoul. Yonhap News/ZUMA Press
SK Telecom faced criticism that the inference code for running its AI model bore similarities to that of China’s DeepSeek.
Naver admitted to using external encoders but said it was a strategic decision to adopt a standardized technology. It stressed that the model’s core engine, which determines how it learns and is trained, was developed entirely by the company. SK Telecom made a similar argument, stressing the independence of its model’s core.
The competition’s rules didn’t explicitly state whether open-source code from foreign firms could be used. South Korea’s science ministry, which is overseeing the competition, hasn’t issued any new guidelines since the controversy. The country’s science minister, Bae Kyung-hoon, welcomed the robust debate.
“As I watched the technological debates currently stirring our AI industry, I actually saw a bright future for South Korean AI,” Bae wrote in a social-media post earlier this month.
The ministry declined to comment when asked by The Wall Street Journal. It plans to eliminate one of the five finalists from the competition this week as scheduled.
AI models are developed by setting and fine-tuning internal numerical values to produce outputs, and for the finalists that have faced questions, those core tasks don’t appear to have relied on foreign tools, said Jae W. Lee, director of Seoul National University’s AI Institute.
“They trained from scratch,” he said.
Write to Jiyoung Sohn at jiyoung.sohn@wsj.com
Copyright ©2026 Dow Jones & Company, Inc. All Rights Reserved.