… 94.5 points, and 66.1 points on the PhD-level test.
exaone-deep
EXAONE Deep exhibits superior capabilities in a wide range of reasoning tasks, including math and coding benchmarks. The models range from 2.4B to 32B parameters and were developed and released by LG AI Research.
The era of agentic AI is approaching, in which AI can independently formulate hypotheses, verify them, and make decisions autonomously without human instruction. The development of enhanced reasoning models is crucial to this transition, but building a high-performing reasoning model is no easy task: globally, only a handful of companies with foundation models are able to develop their own advanced reasoning models.
LG AI Research now introduces EXAONE Deep, a reasoning AI with enhanced reasoning capabilities that rivals these industry-leading models. EXAONE Deep excels at understanding mathematical logic, reasoning about scientific concepts, and solving programming problems, making it a high-performance model specialized for advanced reasoning.
For the release of EXAONE Deep, the focus was on substantially improving reasoning performance in math, science, and coding, while preserving the model's ability to understand and apply knowledge across domains.
```shell
ollama run exaone-deep
```
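Beyond the interactive CLI, Ollama also serves a local REST API (by default on port 11434). The sketch below only builds a request body for its `/api/generate` endpoint; the actual HTTP call is shown as a comment, since it requires a running server with the model pulled:

```python
import json

# Build a request body for Ollama's /api/generate endpoint.
# "exaone-deep" is the model tag pulled above; "stream": False asks
# for a single JSON response instead of a token-by-token stream.
payload = {
    "model": "exaone-deep",
    "prompt": "What is 12 * 13? Put the final answer in \\boxed{}.",
    "stream": False,
}

body = json.dumps(payload)
print(body)

# With a local server running, POST this body, e.g.:
#   curl http://localhost:11434/api/generate -d "$BODY"
# The reply is a JSON object whose "response" field holds the model output.
```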
transformers v4.43.1 or later is recommended.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
from threading import Thread

model_name = "LGAI-EXAONE/EXAONE-Deep-32B"
streaming = True  # choose the streaming option

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Choose your prompt (the second assignment overwrites the first;
# comment out the one you don't want):
# Math example (AIME 2024)
prompt = r"""Let $x,y$ and $z$ be positive real numbers that satisfy the following system of equations:
\[\log_2\left({x \over yz}\right) = {1 \over 2}\]\[\log_2\left({y \over xz}\right) = {1 \over 3}\]\[\log_2\left({z \over xy}\right) = {1 \over 4}\]
Then the value of $\left|\log_2(x^4y^3z^2)\right|$ is $\tfrac{m}{n}$ where $m$ and $n$ are relatively prime positive integers. Find $m+n$.

Please reason step by step, and put your final answer within \boxed{}."""

# Korean MCQA example (CSAT Math 2025)
prompt = r"""Question : $a_1 = 2$인 수열 $\{a_n\}$과 $b_1 = 2$인 등차수열 $\{b_n\}$이 모든 자연수 $n$에 대하여\[\sum_{k=1}^{n} \frac{a_k}{b_{k+1}} = \frac{1}{2} n^2\]을 만족시킬 때, $\sum_{k=1}^{5} a_k$의 값을 구하여라.

Options : A) 120 B) 125 C) 130 D) 135 E) 140

Please reason step by step, and you should write the correct option alphabet (A, B, C, D or E) within \\boxed{}."""

messages = [
    {"role": "user", "content": prompt}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

if streaming:
    streamer = TextIteratorStreamer(tokenizer)
    thread = Thread(target=model.generate, kwargs=dict(
        input_ids=input_ids.to("cuda"),
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=32768,
        do_sample=True,
        temperature=0.6,
        top_p=0.95,
        streamer=streamer
    ))
    thread.start()

    for text in streamer:
        print(text, end="", flush=True)
else:
    output = model.generate(
        input_ids.to("cuda"),
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=32768,
        do_sample=True,
        temperature=0.6,
        top_p=0.95,
    )
    print(tokenizer.decode(output[0]))
```
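Reasoning models of this kind interleave a long chain-of-thought trace with the final answer. Assuming the trace is wrapped in `<thought>...</thought>` tags, as in EXAONE Deep's chat template (treat the exact tag name as an assumption), a minimal post-processing sketch for separating the two:

```python
import re

def split_reasoning(text: str):
    """Split generated text into (reasoning, answer).

    Assumes the reasoning trace is wrapped in <thought>...</thought>;
    anything after the closing tag is treated as the final answer.
    """
    m = re.search(r"<thought>(.*?)</thought>", text, flags=re.DOTALL)
    if m is None:
        # No tagged trace found: return the whole text as the answer.
        return "", text.strip()
    reasoning = m.group(1).strip()
    answer = text[m.end():].strip()
    return reasoning, answer

# Example on a toy completion:
sample = "<thought>\n2+2 is 4.\n</thought>\nThe answer is \\boxed{4}."
reasoning, answer = split_reasoning(sample)
print(answer)  # The answer is \boxed{4}.
```

This keeps the visible reply short while preserving the full trace for logging or inspection.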