Kimi K2.5

📰  Tech Blog     |     📄  Paper

0. Changelog

  • January 29, 2026:
    • Removed the default system prompt, since it could confuse users and lead to unexpected behavior.
    • Fixed a bug with the <|media_start|> token: it has been replaced by <|media_begin|> in the chat template.

1. Model Introduction

Kimi K2.5 is an open-source, natively multimodal agentic model. It is built on Kimi-K2-Base through continued pretraining on roughly 15 trillion mixed vision and text tokens, and it seamlessly integrates vision and language understanding, advanced agentic capabilities, instant and thinking modes, and both conversational and agentic paradigms.

Key Features

  • Native multimodality: Pretrained on vision-language tokens, K2.5 excels at visual knowledge, cross-modal reasoning, and agentic tool use grounded in visual input.
  • Vision-driven coding: K2.5 generates code from visual specifications (UI designs, video workflows) and autonomously orchestrates tools for visual data processing.
  • Agent swarm: K2.5 moves beyond single-agent scaling to self-directed, coordinated swarm execution, decomposing complex tasks into parallel subtasks carried out by dynamically instantiated, domain-specific agents.

2. Model Summary

| Field | Value |
| --- | --- |
| Architecture | Mixture-of-Experts (MoE) |
| Total Parameters | 1T |
| Activated Parameters | 32B |
| Number of Layers (Dense layer included) | 61 |
| Number of Dense Layers | 1 |
| Attention Hidden Dimension | 7168 |
| MoE Hidden Dimension (per Expert) | 2048 |
| Number of Attention Heads | 64 |
| Number of Experts | 384 |
| Selected Experts per Token | 8 |
| Number of Shared Experts | 1 |
| Vocabulary Size | 160K |
| Context Length | 256K |
| Attention Mechanism | MLA |
| Activation Function | SwiGLU |
| Vision Encoder | MoonViT |
| Vision Encoder Parameters | 400M |
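As a back-of-the-envelope illustration of the routing figures above, the fraction of experts active per token can be computed directly from the table. Note this counts experts only; it is not an official activated-parameter breakdown, which also includes attention, embedding, and dense-layer weights.

```python
# Expert activation per token, from the Model Summary table.
# Illustration only: expert counts, not an official parameter breakdown.
TOTAL_EXPERTS = 384       # routed experts
EXPERTS_PER_TOKEN = 8     # routed experts selected per token
SHARED_EXPERTS = 1        # always active

active = EXPERTS_PER_TOKEN + SHARED_EXPERTS
fraction = active / (TOTAL_EXPERTS + SHARED_EXPERTS)
print(f"{active} of {TOTAL_EXPERTS + SHARED_EXPERTS} experts active per token (~{fraction:.1%})")
```

This sparse activation is why a 1T-parameter model activates only 32B parameters per token.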

3. Evaluation Results

| Benchmark | Kimi K2.5 (Thinking) | GPT-5.2 (xhigh) | Claude Opus 4.5 (Extended Thinking) | Gemini 3 Pro (High Thinking) | DeepSeek V3.2 (Thinking) | Qwen3-VL-235B-A22B-Thinking |
| --- | --- | --- | --- | --- | --- | --- |
| **Reasoning & Knowledge** | | | | | | |
| HLE-Full | 30.1 | 34.5 | 30.8 | 37.5 | 25.1† | - |
| HLE-Full (w/ tools) | 50.2 | 45.5 | 43.2 | 45.8 | 40.8† | - |
| AIME 2025 | 96.1 | 100 | 92.8 | 95.0 | 93.1 | - |
| HMMT 2025 (Feb) | 95.4 | 99.4 | 92.9* | 97.3* | 92.5 | - |
| IMO-AnswerBench | 81.8 | 86.3 | 78.5* | 83.1* | 78.3 | - |
| GPQA-Diamond | 87.6 | 92.4 | 87.0 | 91.9 | 82.4 | - |
| MMLU-Pro | 87.1 | 86.7* | 89.3* | 90.1 | 85.0 | - |
| **Image & Video** | | | | | | |
| MMMU-Pro | 78.5 | 79.5* | 74.0 | 81.0 | - | 69.3 |
| CharXiv (RQ) | 77.5 | 82.1 | 67.2* | 81.4 | - | 66.1 |
| MathVision | 84.2 | 83.0 | 77.1* | 86.1* | - | 74.6 |
| MathVista (mini) | 90.1 | 82.8* | 80.2* | 89.8* | - | 85.8 |
| ZeroBench | 9 | 9* | 3* | 8* | - | 4* |
| ZeroBench (w/ tools) | 11 | 7* | 9* | 12* | - | 3* |
| OCRBench | 92.3 | 80.7* | 86.5* | 90.3* | - | 87.5 |
| OmniDocBench 1.5 | 88.8 | 85.7 | 87.7* | 88.5 | - | 82.0* |
| InfoVQA (val) | 92.6 | 84* | 76.9* | 57.2* | - | 89.5 |
| SimpleVQA | 71.2 | 55.8* | 69.7* | 69.7* | - | 56.8* |
| WorldVQA | 46.3 | 28.0 | 36.8 | 47.4 | - | 23.5 |
| VideoMMMU | 86.6 | 85.9 | 84.4* | 87.6 | - | 80.0 |
| MMVU | 80.4 | 80.8* | 77.3 | 77.5 | - | 71.1 |
| MotionBench | 70.4 | 64.8 | 60.3 | 70.3 | - | - |
| VideoMME | 87.4 | 86.0* | - | 88.4* | - | 79.0 |
| LongVideoBench | 79.8 | 76.5* | 67.2* | 77.7* | - | 65.6* |
| LVBench | 75.9 | - | - | 73.5* | - | 63.6 |
| **Coding** | | | | | | |
| SWE-Bench Verified | 76.8 | 80.0 | 80.9 | 76.2 | 73.1 | - |
| SWE-Bench Pro | 50.7 | 55.6 | 55.4* | - | - | - |
| SWE-Bench Multilingual | 73.0 | 72.0 | 77.5 | 65.0 | 70.2 | - |
| Terminal Bench 2.0 | 50.8 | 54.0 | 59.3 | 54.2 | 46.4 | - |
| PaperBench | 63.5 | 63.7* | 72.9* | - | 47.1 | - |
| CyberGym | 41.3 | - | 50.6 | 39.9* | 17.3* | - |
| SciCode | 48.7 | 52.1 | 49.5 | 56.1 | 38.9 | - |
| OJBench (cpp) | 57.4 | - | 54.6* | 68.5* | 54.7* | - |
| LiveCodeBench (v6) | 85.0 | - | 82.2* | 87.4* | 83.3 | - |
| **Long Context** | | | | | | |
| Longbench v2 | 61.0 | 54.5* | 64.4* | 68.2* | 59.8* | - |
| AA-LCR | 70.0 | 72.3* | 71.3* | 65.3* | 64.3* | - |
| **Agentic Search** | | | | | | |
| BrowseComp | 60.6 | 65.8 | 37.0 | 37.8 | 51.4 | - |
| BrowseComp (w/ context management) | 74.9 | - | 57.8 | 59.2 | 67.6 | - |
| BrowseComp (agent swarm) | 78.4 | - | - | - | - | - |
| WideSearch (item-f1) | 72.7 | - | 76.2* | 57.0 | 32.5* | - |
| WideSearch (item-f1, agent swarm) | 79.0 | - | - | - | - | - |
| DeepSearchQA | 77.1 | 71.3* | 76.1* | 63.2* | 60.9* | - |
| FinSearchComp T2&T3 | 67.8 | - | 66.2* | 49.9 | 59.1* | - |
| Seal-0 | 57.4 | 45.0 | 47.7* | 45.5* | 49.5* | - |
Footnotes
  1. Overall testing details
    • We report results for Kimi K2.5, DeepSeek-V3.2 with thinking mode, Claude Opus 4.5 with extended thinking, GPT-5.2 at xhigh reasoning effort, and Gemini 3 Pro at the high thinking level. For vision benchmarks we additionally report Qwen3-VL-235B-A22B-Thinking.
    • Unless otherwise noted, all Kimi K2.5 experiments were run with temperature = 1.0, top-p = 0.95, and a 256k-token context length.
    • For benchmarks without public scores, we re-evaluated under the same conditions as Kimi K2.5 and mark those results with an asterisk (*).
    • Due to service stability issues, we could not evaluate GPT-5.2 xhigh on every benchmark; untested benchmarks are marked "-".
  2. Text & reasoning
    • HLE, AIME 2025, HMMT 2025 (Feb), and GPQA-Diamond were evaluated with a maximum completion budget of 96k tokens.
    • AIME and HMMT results are averaged over 32 runs (avg@32); GPQA-Diamond over 8 runs (avg@8).
    • For HLE we report scores on the full dataset (text and image). Without tools, Kimi K2.5 scores 31.5 on text and 21.3 on image; with tools, 51.8 on text and 39.8 on image. DeepSeek-V3.2 scores correspond to its text-only subset (marked †). Hugging Face access was blocked to prevent potential data leakage. HLE with tools uses simple context management: once the context exceeds a threshold, only the most recent round of tool messages is kept.
  3. Tool-augmented / agentic search
    • Kimi K2.5 is equipped with search, code-interpreter, and web-browsing tools for HLE with tools and for all agentic search benchmarks.
    • Except for BrowseComp (where K2.5 and DeepSeek-V3.2 use a discard-all strategy), no context management is applied; tasks that exceed the supported context length are counted as failures.
    • The test system prompt emphasizes deep, proactive tool use, instructing the model to reason carefully, leverage tools, and verify uncertain information. The full prompt will be provided in the technical report.
    • Seal-0 and WideSearch results are averaged over four runs (avg@4).
  4. Vision benchmarks
    • Max-tokens = 64k, averaged over three runs (avg@3).
    • ZeroBench (w/ tools) uses max-tokens-per-step = 24k and max-steps = 30 for multi-step reasoning.
    • MMMU-Pro follows the official protocol, preserving input order and prefixing images.
    • GPT-5.2-xhigh has a failure rate of ~10% (no output despite 3 retries), counted as errors; its reported scores may therefore underestimate its true performance.
    • WorldVQA is a benchmark for evaluating atomic, vision-centric world knowledge; it is available at https://github.com/MoonshotAI/WorldVQA.
    • The OmniDocBench score is computed as (1 − normalized edit distance) × 100; higher scores indicate higher accuracy.
  5. Coding tasks
    • Terminal-Bench 2.0 scores were obtained with the default agent framework (Terminus-2) and the provided JSON parser. In our implementation, Terminal-Bench 2.0 was evaluated in non-thinking mode, because our current thinking-mode context-management strategy is incompatible with Terminus-2.
    • For the SWE-Bench family (verified, multilingual, and pro), we used an internally developed evaluation framework that includes a minimal tool set (bash, createfile, insert, view, strreplace, and submit tools) and a system prompt tailored to the task. The best scores were achieved in non-thinking mode.
    • Claude Opus 4.5's CyberGym score is reported in the non-thinking setting.
    • All reported coding-task scores are averages over 5 independent runs.
  6. Long-context benchmarks
    • AA-LCR: scores are averaged over three runs (avg@3).
    • LongBench-V2: identical prompts, with input context normalized to ~128k tokens.
  7. Agent swarm
    • BrowseComp (swarm mode): lead agent up to 15 steps; subagents up to 100 steps.
    • WideSearch (swarm mode): lead agent and subagents up to 100 steps.
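The OmniDocBench scoring rule from footnote 4, score = (1 − normalized edit distance) × 100, can be sketched as follows. The Levenshtein helper is illustrative and not the benchmark's official implementation:

```python
# Sketch of the OmniDocBench scoring rule: higher score = more accurate.
# The Levenshtein implementation below is illustrative only.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def omnidoc_score(pred: str, ref: str) -> float:
    # Normalize the edit distance by the longer string's length.
    dist = levenshtein(pred, ref)
    norm = dist / max(len(pred), len(ref), 1)
    return (1 - norm) * 100

print(omnidoc_score("hello world", "hello world"))  # identical strings -> 100.0
```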

4. Native INT4 Quantization

Kimi-K2.5 adopts the same native int4 quantization approach as Kimi-K2-Thinking.
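The quantization recipe itself is documented with Kimi-K2-Thinking rather than repeated here. As background only, a generic symmetric int4 weight quantization (one scale per group of weights) can be sketched like this; it is an illustration, not the model's actual method:

```python
# Generic symmetric int4 quantization sketch (background illustration only;
# it does not reproduce Kimi-K2.5's actual recipe).
def quantize_int4(weights):
    # int4 covers [-8, 7]; use 7 so the scale is symmetric around zero
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    return [v * scale for v in q]

weights = [0.6, -1.0, 0.25, 0.9]
q, scale = quantize_int4(weights)
print(q, [round(v, 3) for v in dequantize_int4(q, scale)])
```

In practice, int4 weights are typically stored packed two per byte with per-group scales, and the serving engine handles dequantization inside its kernels.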

5. Deployment

[!Note] You can access the Kimi-K2.5 API at https://platform.moonshot.ai, where we provide OpenAI/Anthropic-compatible APIs. To verify that a deployment is correct, we also provide the Kimi Vendor Verifier.

Currently, Kimi-K2.5 is recommended to run on the following inference engines:

  • vLLM
  • SGLang
  • KTransformers

The minimum required version of transformers is 4.57.1.

Deployment examples can be found in the Model Deployment Guide.
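To check the installed transformers against that minimum locally, a convenience sketch (not part of the official tooling; the version parsing is deliberately naive and ignores dev/rc suffixes):

```python
# Check the installed transformers against the 4.57.1 minimum noted above.
# Convenience sketch; the parser is naive and ignores dev/rc suffixes.
from importlib.metadata import PackageNotFoundError, version

MIN_VERSION = (4, 57, 1)

def parse(v: str) -> tuple:
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

try:
    installed = version("transformers")
    if parse(installed) >= MIN_VERSION:
        print(f"transformers {installed}: OK")
    else:
        print(f"transformers {installed}: too old, need >= 4.57.1")
except PackageNotFoundError:
    print("transformers is not installed")
```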


6. Model Usage

The following usage examples demonstrate how to call our official API.

For third-party APIs deployed with vLLM or SGLang, please note:

[!Note]

  • Conversation over video content is an experimental feature, currently supported only on our official API.

  • The recommended temperature is 1.0 for thinking mode and 0.6 for instant mode.

  • The recommended top_p is 0.95.

  • To use instant mode, pass {'chat_template_kwargs': {"thinking": False}} in extra_body.

Chat Completion

Below is a simple chat completion script that shows how to call the K2.5 API in thinking mode and instant mode.

import openai
import base64
import requests
def simple_chat(client: openai.OpenAI, model_name: str):
    messages = [
        {'role': 'system', 'content': 'You are Kimi, an AI assistant created by Moonshot AI.'},
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': 'which one is bigger, 9.11 or 9.9? think carefully.'}
            ],
        },
    ]
    response = client.chat.completions.create(
        model=model_name, messages=messages, stream=False, max_tokens=4096
    )
    print('====== Below is reasoning_content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning_content}')
    print('====== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # To use instant mode, pass {"thinking": {"type": "disabled"}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # this is for official API
        # extra_body= {'chat_template_kwargs': {"thinking": False}}  # this is for vLLM/SGLang
    )
    print('====== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')

Chat Completion with Vision Content

K2.5 supports image and video inputs.

The following example shows how to call the K2.5 API with image input:

import openai
import base64
import requests

def chat_with_image(client: openai.OpenAI, model_name: str):
    url = 'https://huggingface.co/moonshotai/Kimi-K2.5/resolve/main/figures/kimi-logo.png'
    image_base64 = base64.b64encode(requests.get(url).content).decode()
    messages = [
        {
            'role': 'user',
            'content': [
                {
                    'type': 'image_url',
                    'image_url': {'url': f'data:image/png;base64,{image_base64}'},
                },
                {'type': 'text', 'text': 'Describe this image in detail.'},
            ],
        }
    ]

    response = client.chat.completions.create(
        model=model_name, messages=messages, stream=False, max_tokens=8192
    )
    print('====== Below is reasoning_content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning_content}')
    print('====== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # Instant mode is also supported if you pass {"thinking": {"type": "disabled"}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # this is for official API
        # extra_body= {'chat_template_kwargs': {"thinking": False}}  # this is for vLLM/SGLang
    )
    print('====== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')

    return response.choices[0].message.content

The following example shows how to call the K2.5 API with video input:

import openai
import base64
import requests

def chat_with_video(client: openai.OpenAI, model_name: str):
    url = 'https://huggingface.co/moonshotai/Kimi-K2.5/resolve/main/figures/demo_video.mp4'
    video_base64 = base64.b64encode(requests.get(url).content).decode()
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "type": "video_url",
                    "video_url": {"url": f"data:video/mp4;base64,{video_base64}"},
                },
                {"type": "text", "text": "Describe the video in detail."},
            ],
        }
    ]

    response = client.chat.completions.create(model=model_name, messages=messages)
    print('====== Below is reasoning_content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning_content}')
    print('====== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # Instant mode is also supported if you pass {"thinking": {"type": "disabled"}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # this is for official API
        # extra_body= {'chat_template_kwargs': {"thinking": False}}  # this is for vLLM/SGLang
    )
    print('====== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')
    return response.choices[0].message.content

Interleaved Thinking & Multi-Step Tool Calls

K2.5 retains the same interleaved thinking and multi-step tool-call design as K2 Thinking. See the K2 Thinking documentation for usage examples.
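For orientation, a multi-step tool-call exchange against an OpenAI-compatible endpoint generally reduces to a loop that executes each requested tool and feeds the result back until the model returns a final message. The `tools` schema and `run_tool` dispatcher here are caller-supplied placeholders, and this sketch omits the interleaved reasoning content; consult the K2 Thinking documentation for the authoritative flow:

```python
import json

# Generic multi-step tool-call loop for an OpenAI-compatible API.
# `tools` (an OpenAI-style tool schema list) and `run_tool` (a
# name -> result dispatcher) are caller-supplied; this is a sketch,
# not the official K2 Thinking harness.
def agent_loop(client, model_name, messages, tools, run_tool, max_steps=10):
    for _ in range(max_steps):
        response = client.chat.completions.create(
            model=model_name, messages=messages, tools=tools
        )
        msg = response.choices[0].message
        messages.append(msg)
        if not msg.tool_calls:  # no tool calls left: final answer
            return msg.content
        for call in msg.tool_calls:  # execute each requested tool
            result = run_tool(call.function.name, json.loads(call.function.arguments))
            messages.append({
                'role': 'tool',
                'tool_call_id': call.id,
                'content': json.dumps(result),
            })
    raise RuntimeError('no final answer within max_steps')
```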

Coding Agent Framework

Kimi K2.5 works best with Kimi Code CLI as its agent framework; try it at https://www.kimi.com/code.


7. License

Both the code repository and the model weights are released under the Modified MIT License.


8. Third-Party Notices

See THIRD PARTY NOTICES.


9. Contact Us

If you have any questions, please contact us at support@moonshot.cn.

10. Citation

If you find K2.5 useful in your research, please cite the K2.5 technical report as follows:

@misc{kimiteam2026kimik25visualagentic,
      title={Kimi K2.5: Visual Agentic Intelligence}, 
      author={Kimi Team and Tongtong Bai and Yifan Bai and Yiping Bao and S. H. Cai and Yuan Cao and Y. Charles and H. S. Che and Cheng Chen and Guanduo Chen and Huarong Chen and Jia Chen and Jiahao Chen and Jianlong Chen and Jun Chen and Kefan Chen and Liang Chen and Ruijue Chen and Xinhao Chen and Yanru Chen and Yanxu Chen and Yicun Chen and Yimin Chen and Yingjiang Chen and Yuankun Chen and Yujie Chen and Yutian Chen and Zhirong Chen and Ziwei Chen and Dazhi Cheng and Minghan Chu and Jialei Cui and Jiaqi Deng and Muxi Diao and Hao Ding and Mengfan Dong and Mengnan Dong and Yuxin Dong and Yuhao Dong and Angang Du and Chenzhuang Du and Dikang Du and Lingxiao Du and Yulun Du and Yu Fan and Shengjun Fang and Qiulin Feng and Yichen Feng and Garimugai Fu and Kelin Fu and Hongcheng Gao and Tong Gao and Yuyao Ge and Shangyi Geng and Chengyang Gong and Xiaochen Gong and Zhuoma Gongque and Qizheng Gu and Xinran Gu and Yicheng Gu and Longyu Guan and Yuanying Guo and Xiaoru Hao and Weiran He and Wenyang He and Yunjia He and Chao Hong and Hao Hu and Jiaxi Hu and Yangyang Hu and Zhenxing Hu and Ke Huang and Ruiyuan Huang and Weixiao Huang and Zhiqi Huang and Tao Jiang and Zhejun Jiang and Xinyi Jin and Yu Jing and Guokun Lai and Aidi Li and C. 
Li and Cheng Li and Fang Li and Guanghe Li and Guanyu Li and Haitao Li and Haoyang Li and Jia Li and Jingwei Li and Junxiong Li and Lincan Li and Mo Li and Weihong Li and Wentao Li and Xinhang Li and Xinhao Li and Yang Li and Yanhao Li and Yiwei Li and Yuxiao Li and Zhaowei Li and Zheming Li and Weilong Liao and Jiawei Lin and Xiaohan Lin and Zhishan Lin and Zichao Lin and Cheng Liu and Chenyu Liu and Hongzhang Liu and Liang Liu and Shaowei Liu and Shudong Liu and Shuran Liu and Tianwei Liu and Tianyu Liu and Weizhou Liu and Xiangyan Liu and Yangyang Liu and Yanming Liu and Yibo Liu and Yuanxin Liu and Yue Liu and Zhengying Liu and Zhongnuo Liu and Enzhe Lu and Haoyu Lu and Zhiyuan Lu and Junyu Luo and Tongxu Luo and Yashuo Luo and Long Ma and Yingwei Ma and Shaoguang Mao and Yuan Mei and Xin Men and Fanqing Meng and Zhiyong Meng and Yibo Miao and Minqing Ni and Kun Ouyang and Siyuan Pan and Bo Pang and Yuchao Qian and Ruoyu Qin and Zeyu Qin and Jiezhong Qiu and Bowen Qu and Zeyu Shang and Youbo Shao and Tianxiao Shen and Zhennan Shen and Juanfeng Shi and Lidong Shi and Shengyuan Shi and Feifan Song and Pengwei Song and Tianhui Song and Xiaoxi Song and Hongjin Su and Jianlin Su and Zhaochen Su and Lin Sui and Jinsong Sun and Junyao Sun and Tongyu Sun and Flood Sung and Yunpeng Tai and Chuning Tang and Heyi Tang and Xiaojuan Tang and Zhengyang Tang and Jiawen Tao and Shiyuan Teng and Chaoran Tian and Pengfei Tian and Ao Wang and Bowen Wang and Chensi Wang and Chuang Wang and Congcong Wang and Dingkun Wang and Dinglu Wang and Dongliang Wang and Feng Wang and Hailong Wang and Haiming Wang and Hengzhi Wang and Huaqing Wang and Hui Wang and Jiahao Wang and Jinhong Wang and Jiuzheng Wang and Kaixin Wang and Linian Wang and Qibin Wang and Shengjie Wang and Shuyi Wang and Si Wang and Wei Wang and Xiaochen Wang and Xinyuan Wang and Yao Wang and Yejie Wang and Yipu Wang and Yiqin Wang and Yucheng Wang and Yuzhi Wang and Zhaoji Wang and Zhaowei Wang and Zhengtao Wang and 
Zhexu Wang and Zihan Wang and Zizhe Wang and Chu Wei and Ming Wei and Chuan Wen and Zichen Wen and Chengjie Wu and Haoning Wu and Junyan Wu and Rucong Wu and Wenhao Wu and Yuefeng Wu and Yuhao Wu and Yuxin Wu and Zijian Wu and Chenjun Xiao and Jin Xie and Xiaotong Xie and Yuchong Xie and Yifei Xin and Bowei Xing and Boyu Xu and Jianfan Xu and Jing Xu and Jinjing Xu and L. H. Xu and Lin Xu and Suting Xu and Weixin Xu and Xinbo Xu and Xinran Xu and Yangchuan Xu and Yichang Xu and Yuemeng Xu and Zelai Xu and Ziyao Xu and Junjie Yan and Yuzi Yan and Guangyao Yang and Hao Yang and Junwei Yang and Kai Yang and Ningyuan Yang and Ruihan Yang and Xiaofei Yang and Xinlong Yang and Ying Yang and Yi Yang and Yi Yang and Zhen Yang and Zhilin Yang and Zonghan Yang and Haotian Yao and Dan Ye and Wenjie Ye and Zhuorui Ye and Bohong Yin and Chengzhen Yu and Longhui Yu and Tao Yu and Tianxiang Yu and Enming Yuan and Mengjie Yuan and Xiaokun Yuan and Yang Yue and Weihao Zeng and Dunyuan Zha and Haobing Zhan and Dehao Zhang and Hao Zhang and Jin Zhang and Puqi Zhang and Qiao Zhang and Rui Zhang and Xiaobin Zhang and Y. Zhang and Yadong Zhang and Yangkun Zhang and Yichi Zhang and Yizhi Zhang and Yongting Zhang and Yu Zhang and Yushun Zhang and Yutao Zhang and Yutong Zhang and Zheng Zhang and Chenguang Zhao and Feifan Zhao and Jinxiang Zhao and Shuai Zhao and Xiangyu Zhao and Yikai Zhao and Zijia Zhao and Huabin Zheng and Ruihan Zheng and Shaojie Zheng and Tengyang Zheng and Junfeng Zhong and Longguang Zhong and Weiming Zhong and M. Zhou and Runjie Zhou and Xinyu Zhou and Zaida Zhou and Jinguo Zhu and Liya Zhu and Xinhao Zhu and Yuxuan Zhu and Zhen Zhu and Jingze Zhuang and Weiyu Zhuang and Ying Zou and Xinxing Zu},
      year={2026},
      eprint={2602.02276},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2602.02276}, 
}