Naver announced on the 18th that its papers on search technology were accepted as regular (main-conference) papers at 'EMNLP 2024,' a leading global natural language processing (NLP) conference.
EMNLP, now in its 28th edition, is regarded as one of the world's top AI conferences in natural language processing, covering research on language-data-based NLP approaches such as AI translation, chatbots, and machine reading comprehension. EMNLP 2024 will be held in Florida, USA, from the 12th to the 16th of next month, where Naver plans to present four accepted papers, including research related to search technology.
Naver said the research carries added significance in terms of both service value and research effectiveness, as the outcomes have been applied, directly and indirectly, to its actual search services to improve search quality and usability.
First, a research paper was accepted on the algorithm applied to Naver's generative AI search service, CUE. The study deals with a learning mechanism that detects harmful queries and provides appropriate responses through a modular approach built on a small language model (sLM).
Naver has been utilizing this research outcome since November last year by applying it to CUE to enhance AI safety. It identifies queries related to illegal information such as crime and harmful content, copyright law violations, privacy infringement, personal information leaks, and abusive or vulgar language to prevent indiscriminate responses, thereby establishing a safe generative AI search environment.
Naver also plans to use this technology to improve relevance judgments between queries and search results, expanding the exposure of high-quality content. It will further be used to strengthen the overall quality of the search service by surfacing answers from highly reliable sources at the top of results.
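To make the modular approach concrete, below is a minimal sketch of how such a safety gate might be structured. The harm categories follow those named in the article, but the keyword-based classifier, category names, and refusal templates are illustrative assumptions; Naver's actual system uses a trained sLM, not keyword matching.

```python
from typing import Optional

# Placeholder harm taxonomy based on the categories mentioned in the
# article (illegal activity, copyright, privacy, abusive language).
# The keywords are stand-ins for a trained sLM classifier.
HARMFUL_CATEGORIES = {
    "illegal": ["how to commit", "drug trafficking"],
    "copyright": ["free full movie download"],
    "privacy": ["home address of", "phone number of"],
    "abusive": ["vulgar insult"],
}

REFUSAL_TEMPLATES = {
    "illegal": "I can't help with requests involving illegal activity.",
    "copyright": "I can't help with content that may infringe copyright.",
    "privacy": "I can't share personal or private information.",
    "abusive": "I can't respond to abusive or vulgar requests.",
}

def classify_query(query: str) -> Optional[str]:
    """Stand-in for an sLM classifier: return a harm category or None."""
    q = query.lower()
    for category, keywords in HARMFUL_CATEGORIES.items():
        if any(k in q for k in keywords):
            return category
    return None

def answer(query: str) -> str:
    """Route harmful queries to a refusal; otherwise answer normally."""
    category = classify_query(query)
    if category is not None:
        return REFUSAL_TEMPLATES[category]
    return f"[generative answer for: {query}]"
```

The design point of the modular setup is that the safety classifier runs as a separate, cheap component in front of the generative model, so refusal behavior can be updated without retraining the main model.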
Additionally, Naver proposed a technology that allows AI to effectively handle not only text but also complex snippet forms such as lists and tables when extracting information in the 'Knowledge Snippet' service, which summarizes key information related to search terms at the top of integrated search results.
This technology is slated to be applied to Knowledge Snippet in the first half of next year and is expected to improve performance by increasing answer accuracy for long-tail queries, i.e., long and complex search terms, helping users find the information they want more quickly.
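One common way to let a text-oriented extraction model consume list- and table-form snippets is to linearize them into plain text first. The helpers below are a hypothetical sketch of that idea; the exact input format and serialization scheme are assumptions, not Naver's published method.

```python
# Hypothetical linearization of structured snippets (tables, lists)
# into plain text that a text-based extraction model can process.

def linearize_table(headers: list, rows: list) -> str:
    """Serialize a table as 'header: cell' pairs, one row per line."""
    lines = []
    for row in rows:
        pairs = [f"{h}: {c}" for h, c in zip(headers, row)]
        lines.append("; ".join(pairs))
    return "\n".join(lines)

def linearize_list(items: list) -> str:
    """Serialize a list snippet as numbered lines."""
    return "\n".join(f"{i}. {item}" for i, item in enumerate(items, 1))
```

For example, a two-column table row becomes `"Country: France; Capital: Paris"`, which preserves the header-to-cell association that a flat text dump would lose.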
A paper was also accepted on a method for transferring the document-prioritization ability of large language models (LLMs) to small language models (sLLMs) for use in search services. The technique was devised so that real-time search can deliver the results users want without quality degradation relative to LLMs and without a loss of speed.
Moreover, this year Naver Search published search-technology papers not only at EMNLP but also at NAACL (1 paper), another top-tier NLP conference; CVPR (2 papers), a leading AI conference; and Information Sciences (1 paper), LREC-COLING (1 paper), SIGIR (1 paper), and LLM4Eval (1 paper). In addition, seven papers were accepted at HCLT (the Korean Language and Korean Information Processing Conference), the most prestigious domestic conference now in its 36th edition, with two selected as outstanding papers.
Kwanghyun Kim, head of Naver's Search and Data Platform division, said, "Through this research, Naver's search technology, which has led the domestic search market, has been recognized on the global stage. We will continue to offer competitive search services optimized for users by providing improved search accuracy and generative AI experiences."
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.