Shares of AI software specialist Konan Technology are surging. News that 'ENT (Enterprise)-11,' the new Konan LLM with an integrated inference mode, has outperformed DeepSeek in benchmarks appears to be lifting the stock.
As of 10:43 AM on the 26th, Konan Technology is trading at 25,150 KRW, up 24.5% from the previous day.
Konan Technology highlighted the integration of a general mode and an inference mode into a single engine as the new model's key feature. A single model can switch between the two modes to give optimal answers, handling not only general Q&A but also tasks that require complex reasoning. Unlike competitors that ship separate models for general use and for inference, this design is positioned to deliver high-performance AI services at lower GPU cost.
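For illustration only, the sketch below shows how such a single-model mode switch might look from a developer's point of view. Konan has not published a public API for ENT-11, so the endpoint, payload fields, and "mode" parameter are hypothetical assumptions, not the company's actual interface.

```python
# Hypothetical sketch only: Konan has not published a public API for ENT-11,
# so the endpoint, payload fields, and "mode" switch below are assumptions.
import requests

API_URL = "https://example.invalid/v1/chat"  # placeholder endpoint, not a real Konan URL


def ask(prompt: str, reasoning: bool = False) -> str:
    """Query one model, optionally switching it into its inference (reasoning) mode."""
    payload = {
        "model": "ENT-11",                                # a single model serves both modes
        "mode": "reasoning" if reasoning else "general",  # assumed mode-switch parameter
        "messages": [{"role": "user", "content": prompt}],
    }
    resp = requests.post(API_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


# General Q&A stays in the default mode; a multi-step math question flips on reasoning mode.
print(ask("Summarize the attached meeting notes in three sentences."))
print(ask("A train travels 200 km at 80 km/h; how long does the trip take?", reasoning=True))
```

The point of the single-engine design is that both calls hit the same deployed model, so only one set of GPUs needs to be provisioned.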
Another strength is its Korean-optimized architecture. Compared with models such as Qwen, LLaMA, Gemma, and DeepSeek, it was pre-trained on a larger share of Korean tokens, giving it significantly better accuracy and response speed on Korean-language queries.
This was also validated through an internal benchmark conducted before release. ENT-11 has 32 billion parameters, about 5% of the 671 billion parameters of DeepSeek's LLM 'R1.' In the MT-Bench evaluation, which measures multi-turn conversation and instruction-following ability, ENT-11 scored on par with DeepSeek R1 across eight categories: writing, role-playing, reasoning, mathematics, coding, information extraction, STEM (science, technology, engineering, mathematics), and humanities, with notably stronger coding performance. Against the 32B-parameter version of DeepSeek R1, the same size as ENT-11, it showed an average advantage of 4.75 percentage points.
To improve the accuracy of the evaluation and reduce errors, Konan Technology translated, reviewed, and corrected MT-Bench itself, creating its own 'Konan MT-Bench' and using it to assess the new model repeatedly. On this benchmark, ENT-11 averaged 5.38 percentage points higher than the same-size DeepSeek R1, excelling particularly in complex reasoning and mathematics. This suggests that, despite being a compact model, it maximizes inference performance through an efficient, refined design. ENT-11's general-mode performance also improved by 4.5 percentage points over the previous ENT-10 model.
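For readers unfamiliar with how such figures are reported, the sketch below shows how an average gap in percentage points across the eight MT-Bench categories is computed. The per-category scores are invented placeholders, not Konan's or DeepSeek's actual results.

```python
# Illustrative only: the per-category scores below are invented placeholders,
# not Konan's or DeepSeek's published results. They are expressed as percentages
# so that the average gap reads directly in percentage points.
CATEGORIES = ["writing", "roleplay", "reasoning", "math",
              "coding", "extraction", "STEM", "humanities"]

ent11 = {"writing": 88, "roleplay": 86, "reasoning": 84, "math": 83,
         "coding": 85, "extraction": 87, "STEM": 83, "humanities": 86}
r1_32b = {"writing": 84, "roleplay": 82, "reasoning": 78, "math": 76,
          "coding": 79, "extraction": 83, "STEM": 79, "humanities": 82}

gaps = [ent11[c] - r1_32b[c] for c in CATEGORIES]
avg_gap = sum(gaps) / len(gaps)
print(f"Average gap: {avg_gap:.2f} percentage points")
# The article's 4.75-point and 5.38-point figures are category averages of this kind.
```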
Performance gains also show up in context handling. While the existing ENT-10 supported a context of up to 16K tokens, ENT-11 extends this to a long context of up to 128K tokens, equivalent to roughly 128 A4 pages of Korean text or about 320 pages of English text.
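As a rough sanity check of those page equivalents, the snippet below converts a 128K-token window into pages using tokens-per-page figures implied by the article; these per-page values are assumptions for illustration, not official Konan numbers.

```python
# Back-of-the-envelope check of the page equivalents quoted above.
# The tokens-per-page figures are assumptions, not official Konan numbers.
CONTEXT_TOKENS = 128 * 1024      # 128K-token context window
TOKENS_PER_KOREAN_PAGE = 1024    # assumed tokens per A4 page of Korean text
TOKENS_PER_ENGLISH_PAGE = 410    # assumed tokens per A4 page of English text

print(CONTEXT_TOKENS // TOKENS_PER_KOREAN_PAGE)   # -> 128 pages of Korean
print(CONTEXT_TOKENS // TOKENS_PER_ENGLISH_PAGE)  # -> ~320 pages of English
```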
CEO Kim Young-seom said, "As the number of LLMs grows, evaluation methods are diversifying. It is necessary to weed out models that are overfitted to specific evaluation metrics and perform well only on certain tests."
He added, "Although the model is 20 times smaller than DeepSeek R1, it has proven better inference performance. Leveraging high-quality Korean data and development infrastructure as strengths, Konan Technology will continue to strive to make its LLM technology a barometer of domestic generative AI performance."
The new model will be officially released at the end of this month. With its strong multi-turn benchmark results and resource-efficient design, it is expected to find broad application in increasingly sophisticated and specialized AI agent environments.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
![[Special Stock] KonanTech, Ultimate Cost-Effective LLM Surpassing DeepSeek... Performance Advantage with About 5% Parameters](https://cphoto.asiae.co.kr/listimglink/1/2025032610285384020_1742952533.jpg)

