LG Reveals Next-Gen Multimodal AI ‘EXAONE 4.5’

SEOUL, South Korea, April 9, 2026 /PRNewswire/ — LG AI Research today announced the release of EXAONE 4.5, its latest multimodal AI model capable of simultaneously understanding and reasoning across both text and images.

EXAONE 4.5 is a sophisticated Vision-Language Model (VLM) that integrates a proprietary vision encoder with a Large Language Model (LLM) into a unified architecture. This latest advancement builds on the deep technical expertise LG AI Research has accumulated since December 2021, when it developed EXAONE 1.0, Korea’s first-ever multimodal AI model.

The current model serves as a strategic stepping stone for the modality expansion of “K-EXAONE,” an ongoing project dedicated to developing a proprietary AI foundation model.

Following the successful completion of the project’s second phase this August, LG AI Research plans to accelerate its modality expansion efforts upon the confirmation of the third phase. The ultimate vision is to evolve EXAONE into a form of “Physical Intelligence”—an AI capable of understanding and making judgments within the physical world, transcending the boundaries of virtual environments.

  • EXAONE 4.5 Surpasses OpenAI’s GPT-5-mini and Alibaba’s Qwen-3-VL Across 13 Visual Assessment Benchmarks

EXAONE 4.5 excels in accurately reading and reasoning through complex documents encountered in real-world industrial settings, such as contracts, technical drawings, financial statements, and scanned documents.

LG AI Research demonstrated its competitive edge by unveiling the benchmark results for EXAONE 4.5, highlighting the multimodal model’s superior performance in visual processing and reasoning.

EXAONE 4.5 achieved an average score of 77.3 across five key STEM (Science, Technology, Engineering, and Mathematics) benchmarks, outperforming major global models including OpenAI’s GPT-5-mini (73.5), Anthropic’s Claude 4.5 Sonnet (74.6), and Alibaba’s Qwen-3 235B (77.0).

The model also demonstrated superior performance across 13 comprehensive benchmarks. These include three general-purpose vision indicators and five indicators for document comprehension and reasoning, which evaluate the ability to interpret complex information in professional literature and multimodal infographics. In these categories, EXAONE 4.5 consistently surpassed the average scores of GPT-5-mini, Claude 4.5 Sonnet, and Qwen-3-VL.

Notably, EXAONE 4.5 showcased its technical edge in coding by scoring 81.4 on LiveCodeBench v6, exceeding Google’s latest model, Gemma 4 (80.0). Furthermore, on ChartQA Pro, which assesses the ability to analyze and reason through complex charts, the model recorded a world-class score of 62.2, the highest among models in its class.

An official from LG AI Research explained, “Achieving high average scores in visual assessment indicators signifies that the AI has moved beyond simply recognizing text or unstructured data. It means the model now possesses the comprehensive reasoning capabilities to grasp context and provide accurate answers to complex questions.”

EXAONE 4.5 also delivered remarkable results in terms of operational efficiency.

Despite having 33 billion parameters (33B)—roughly one-seventh the size of the “K-EXAONE” model unveiled late last year—EXAONE 4.5 achieved comparable performance in text comprehension and reasoning. This breakthrough is the result of LG AI Research’s proprietary Hybrid Attention architecture and high-speed inference technology based on multi-token prediction.

Furthermore, LG AI Research has expanded its official language support beyond Korean and English to include Spanish, German, Japanese, and Vietnamese.

  • EXAONE 4.5 Released as Open Weights to Drive Ecosystem Growth and Master Korean Cultural Context

Following the decision to release EXAONE 3.0 as an open-weight model in August 2024—a first in Korea—LG AI Research has continued its efforts to expand the AI research ecosystem.

LG AI Research has made EXAONE 4.5 available on the global open-source platform Hugging Face, permitting its use for research, academic, and educational purposes.

Meanwhile, LG has also utilized EXAONE as an educational resource to strengthen the AI development skills of young talent. Earlier this month, the company hosted the ‘LG Aimers’ Hackathon, a program dedicated to nurturing young AI experts, with a focus on developing lightweight versions of the EXAONE model.

“EXAONE 4.5 represents LG AI’s successful entry into the multimodal era, where AI understands not just text, but visual information as well,” said Jinsik Lee, Head of EXAONE Lab at LG AI Research. “Starting with this model, we will expand AI’s scope of understanding to include audio, video, and the physical environment, ultimately creating AI that can make practical judgments and take action within industrial settings.”

LG AI Research is continuing its development to make EXAONE the AI that best understands Korea’s history, culture, and social context.

To this end, LG AI Research trained EXAONE in January using data provided by the Northeast Asian History Foundation, and it is currently discussing collaborations with other domestic institutions that hold high-quality data.

“The surge in AI models capable of speaking Korean does not equate to a true understanding of cultural sensitivity,” said Myoungshin Kim, Head of the AI Safety & Trust Office at LG AI Research. “With its built-in K-AUT (Korea-Augmented Universal Taxonomy), EXAONE is evolving to provide expressive depth alongside robust reliability, setting a new standard for culturally aware AI.”

LG AI Research

Established in December 2020, LG AI Research serves as the AI think tank for LG Group. Its mission is to enhance the Group’s AI capabilities by solving business challenges, conducting cutting-edge AI research, and establishing and implementing ethical principles for AI technology.

Official website: https://www.lgresearch.ai/

CONTACT: CHAE Ok, ok.chae@lgresearch.ai