MiniMax-AI/MiniMax-Text-01

MiniMax-Text-01

1. Introduction

MiniMax-Text-01 is a powerful language model with 456 billion total parameters, of which 45.9 billion are activated per token. To better unlock its long-context capabilities, MiniMax-Text-01 adopts a hybrid architecture that combines Lightning Attention, Softmax Attention, and Mixture-of-Experts (MoE). Leveraging advanced parallel strategies and innovative compute-communication overlap methods, such as Linear Attention Sequence Parallelism Plus (LASP+), varlen ring attention, and Expert Tensor Parallel (ETP), MiniMax-Text-01's training context length is extended to 1 million tokens, and it can handle contexts of up to 4 million tokens during inference. On various academic benchmarks, MiniMax-Text-01 also demonstrates the performance of a top-tier model.

2. Model Architecture

The architecture of MiniMax-Text-01 is briefly described as follows:

  • Total parameters: 456B
  • Activated parameters per token: 45.9B
  • Number of layers: 80
  • Hybrid attention: one Softmax Attention layer is placed after every 7 Lightning Attention layers
    • Number of attention heads: 64
    • Attention head dimension: 128
  • Mixture of Experts:
    • Number of experts: 32
    • Expert hidden dimension: 9216
    • Top-2 routing strategy
  • Positional encoding: Rotary Position Embedding (RoPE) applied to half of the attention head dimension, with a base frequency of 10,000,000
  • Hidden size: 6144
  • Vocab size: 200,064
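As a rough illustration of the hybrid attention pattern above (one Softmax Attention layer after every 7 Lightning Attention layers, over 80 layers), the layer schedule can be sketched in a few lines of Python. The names below are illustrative, not taken from the model code, and the assumption that the softmax layer is the last of each 8-layer block follows the "7 lightning, then 1 softmax" description:

```python
# Sketch of the hybrid layer schedule: in every block of 8 layers,
# the first 7 use lightning (linear) attention and the 8th uses
# softmax attention. Constants and names are illustrative only.
NUM_LAYERS = 80
LIGHTNING_PER_SOFTMAX = 7  # 7 lightning attention layers per softmax layer

def attention_type(layer_idx: int) -> str:
    """Return the attention type used by a given 0-based layer index."""
    if (layer_idx + 1) % (LIGHTNING_PER_SOFTMAX + 1) == 0:
        return "softmax"
    return "lightning"

schedule = [attention_type(i) for i in range(NUM_LAYERS)]
print(schedule.count("lightning"), schedule.count("softmax"))  # 70 10
```

With 80 layers this yields 10 blocks of 8 layers each, i.e. 70 lightning and 10 softmax attention layers in total.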

3. Evaluation

Core Academic Benchmarks

| Tasks | GPT-4o (11-20) | Claude-3.5-Sonnet (10-22) | Gemini-1.5-Pro (002) | Gemini-2.0-Flash (exp) | Qwen2.5-72B-Inst. | DeepSeek-V3 | Llama-3.1-405B-Inst. | MiniMax-Text-01 |
|---|---|---|---|---|---|---|---|---|
| **General** | | | | | | | | |
| MMLU* | 85.7 | 88.3 | 86.8 | 86.5 | 86.1 | 88.5 | 88.6 | 88.5 |
| MMLU-Pro* | 74.4 | 78.0 | 75.8 | 76.4 | 71.1 | 75.9 | 73.3 | 75.7 |
| SimpleQA | 39.0 | 28.1 | 23.4 | 26.6 | 10.3 | 24.9 | 23.2 | 23.7 |
| C-SimpleQA | 64.6 | 56.8 | 59.4 | 63.3 | 52.2 | 64.8 | 54.7 | 67.4 |
| IFEval (avg) | 84.1 | 90.1 | 89.4 | 88.4 | 87.2 | 87.3 | 86.4 | 89.1 |
| Arena-Hard | 92.4 | 87.6 | 85.3 | 72.7 | 81.2 | 91.4 | 63.5 | 89.1 |
| **Reasoning** | | | | | | | | |
| GPQA* (diamond) | 46.0 | 65.0 | 59.1 | 62.1 | 49.0 | 59.1 | 50.7 | 54.4 |
| DROP* (F1) | 89.2 | 88.8 | 89.2 | 89.3 | 85.0 | 91.0 | 92.5 | 87.8 |
| **Mathematics** | | | | | | | | |
| GSM8k* | 95.6 | 96.9 | 95.2 | 95.4 | 95.8 | 96.7 | 96.7 | 94.8 |
| MATH* | 76.6 | 74.1 | 84.6 | 83.9 | 81.8 | 84.6 | 73.8 | 77.4 |
| **Coding** | | | | | | | | |
| MBPP+ | 76.2 | 75.1 | 75.4 | 75.9 | 77.0 | 78.8 | 73.0 | 71.7 |
| HumanEval | 90.2 | 93.7 | 86.6 | 89.6 | 86.6 | 92.1 | 89.0 | 86.9 |

* Evaluated using a 0-shot CoT setting.

Long-Context Benchmarks

4M-Token Needle-in-a-Haystack Test

Ruler

| Model | 4k | 8k | 16k | 32k | 64k | 128k | 256k | 512k | 1M |
|---|---|---|---|---|---|---|---|---|---|
| GPT-4o (11-20) | 0.970 | 0.921 | 0.890 | 0.888 | 0.884 | - | - | - | - |
| Claude-3.5-Sonnet (10-22) | 0.965 | 0.960 | 0.957 | 0.950 | 0.952 | 0.938 | - | - | - |
| Gemini-1.5-Pro (002) | 0.962 | 0.960 | 0.960 | 0.958 | 0.938 | 0.917 | 0.916 | 0.861 | 0.850 |
| Gemini-2.0-Flash (exp) | 0.960 | 0.960 | 0.951 | 0.957 | 0.937 | 0.860 | 0.797 | 0.709 | - |
| MiniMax-Text-01 | 0.963 | 0.961 | 0.953 | 0.954 | 0.943 | 0.947 | 0.945 | 0.928 | 0.910 |

LongBench v2

| Model | Overall | Easy | Hard | Short | Medium | Long |
|---|---|---|---|---|---|---|
| Human | 53.7 | 100.0 | 25.1 | 47.2 | 59.1 | 53.7 |
| **w/ CoT** | | | | | | |
| GPT-4o (11-20) | 51.4 | 54.2 | 49.7 | 59.6 | 48.6 | 43.5 |
| Claude-3.5-Sonnet (10-22) | 46.7 | 55.2 | 41.5 | 53.9 | 41.9 | 44.4 |
| Deepseek-V3 | - | - | - | - | - | - |
| Qwen2.5-72B-Inst. | 43.5 | 47.9 | 40.8 | 48.9 | 40.9 | 39.8 |
| MiniMax-Text-01 | 56.5 | 66.1 | 50.5 | 61.7 | 56.7 | 47.2 |
| **w/o CoT** | | | | | | |
| GPT-4o (11-20) | 50.1 | 57.4 | 45.6 | 53.3 | 52.4 | 40.2 |
| Claude-3.5-Sonnet (10-22) | 41.0 | 46.9 | 37.3 | 46.1 | 38.6 | 37.0 |
| Deepseek-V3 | 48.7 | - | - | - | - | - |
| Qwen2.5-72B-Inst. | 42.1 | 42.7 | 41.8 | 45.6 | 38.1 | 44.4 |
| MiniMax-Text-01 | 52.9 | 60.9 | 47.9 | 58.9 | 52.6 | 43.5 |

MTOB

| Context Type | No Context | Half Book | Full Book | Half-Book Gain | Full-Book Gain |
|---|---|---|---|---|---|
| **English → Kalamang (ChrF)** | | | | | |
| GPT-4o (11-20) | 9.90 | 54.30 | - | 44.40 | - |
| Claude-3.5-Sonnet (10-22) | 20.22 | 53.62 | 55.65 | 33.39 | 35.42 |
| Gemini-1.5-Pro (002) | 16.79 | 53.68 | 57.90 | 36.89 | 41.11 |
| Gemini-2.0-Flash (exp) | 12.20 | 49.50 | 53.30 | 37.30 | 41.10 |
| Qwen-Long | 16.55 | 48.48 | 45.94 | 31.92 | 29.39 |
| MiniMax-Text-01 | 6.0 | 51.74 | 51.60 | 45.7 | 45.6 |
| **Kalamang → English (BLEURT)** | | | | | |
| GPT-4o (11-20) | 33.20 | 58.30 | - | 25.10 | - |
| Claude-3.5-Sonnet (10-22) | 31.42 | 59.70 | 62.30 | 28.28 | 30.88 |
| Gemini-1.5-Pro (002) | 32.02 | 61.52 | 63.09 | 29.50 | 31.07 |
| Gemini-2.0-Flash (exp) | 33.80 | 57.50 | 57.00 | 23.70 | 23.20 |
| Qwen-Long | 30.13 | 53.14 | 32.15 | 23.01 | 2.02 |
| MiniMax-Text-01 | 33.65 | 57.10 | 58.00 | 23.45 | 24.35 |

4. Quick Start

Here is a simple example of loading the tokenizer and model to generate content.

from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig, QuantoConfig, GenerationConfig

# load hf config
hf_config = AutoConfig.from_pretrained("MiniMaxAI/MiniMax-Text-01", trust_remote_code=True)

# quantization config, int8 is recommended
quantization_config = QuantoConfig(
    weights="int8",
    modules_to_not_convert=[
        "lm_head",
        "embed_tokens",
    ] + [f"model.layers.{i}.coefficient" for i in range(hf_config.num_hidden_layers)]
    + [f"model.layers.{i}.block_sparse_moe.gate" for i in range(hf_config.num_hidden_layers)]
)

# assume 8 GPUs
world_size = 8
layers_per_device = hf_config.num_hidden_layers // world_size
# set device map
device_map = {
    'model.embed_tokens': 'cuda:0',
    'model.norm': f'cuda:{world_size - 1}',
    'lm_head': f'cuda:{world_size - 1}'
}
for i in range(world_size):
    for j in range(layers_per_device):
        device_map[f'model.layers.{i * layers_per_device + j}'] = f'cuda:{i}'

# load tokenizer
tokenizer = AutoTokenizer.from_pretrained("MiniMaxAI/MiniMax-Text-01")
prompt = "Hello!"
messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant created by MiniMax based on MiniMax-Text-01 model."}]},
    {"role": "user", "content": [{"type": "text", "text": prompt}]},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
# tokenize and move to device
model_inputs = tokenizer(text, return_tensors="pt").to("cuda")

# load bfloat16 model, move to device, and apply quantization
quantized_model = AutoModelForCausalLM.from_pretrained(
    "MiniMaxAI/MiniMax-Text-01",
    torch_dtype="bfloat16",
    device_map=device_map,
    quantization_config=quantization_config,
    trust_remote_code=True,
    offload_buffers=True,
)

# generate response
generation_config = GenerationConfig(
    max_new_tokens=20,
    eos_token_id=200020,
    use_cache=True,
)
generated_ids = quantized_model.generate(**model_inputs, generation_config=generation_config)
print(f"generated_ids: {generated_ids}")
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
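The device-map construction in the example above can be checked on its own: with 80 layers and 8 GPUs, each device hosts 10 consecutive layers, the embedding sits on the first device, and the final norm and lm_head sit on the last. A standalone replay of that loop (no GPU or model download required; `num_hidden_layers = 80` is hard-coded here in place of `hf_config.num_hidden_layers`):

```python
# Standalone replay of the device-map loop from the quick-start example:
# 80 transformer layers split evenly across 8 devices, with the embedding
# on the first device and the final norm / lm_head on the last.
num_hidden_layers = 80  # hf_config.num_hidden_layers for MiniMax-Text-01
world_size = 8
layers_per_device = num_hidden_layers // world_size  # 10 layers per GPU

device_map = {
    "model.embed_tokens": "cuda:0",
    "model.norm": f"cuda:{world_size - 1}",
    "lm_head": f"cuda:{world_size - 1}",
}
for i in range(world_size):
    for j in range(layers_per_device):
        device_map[f"model.layers.{i * layers_per_device + j}"] = f"cuda:{i}"

print(device_map["model.layers.0"])   # cuda:0
print(device_map["model.layers.79"])  # cuda:7
```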

5. Deployment Guide

For production deployment, we recommend using vLLM to serve MiniMax-Text-01. vLLM excels at serving large language models, with the following features:

🔥 Outstanding serving throughput
⚡ Efficient and intelligent memory management
📦 Powerful batched request handling
⚙️ Deeply optimized low-level performance

For detailed deployment instructions, please refer to our vLLM Deployment Guide.

6. Function Calling

MiniMax-Text-01 supports function calling, allowing the model to intelligently identify when an external function needs to be called and to output its arguments in a structured JSON format. With function calling, you can:

  • Let the model recognize implicit function-call needs in user requests
  • Receive structured argument output for seamless integration with your application
  • Support a variety of complex parameter types, including nested objects and arrays

Function calling supports standard OpenAI-compatible format definitions and integrates seamlessly with the Transformers library. For detailed usage instructions, please refer to our Function Call Guide or its Chinese version.
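As a minimal sketch of this workflow, the snippet below defines an OpenAI-compatible tool schema and parses a structured tool call. The `get_weather` tool and the sample model output are hypothetical, for illustration only; refer to the Function Call Guide for the model's actual output conventions.

```python
import json

# Hypothetical OpenAI-compatible tool definition (not from the model card).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Suppose the model emitted a structured tool call like this (sample output,
# assumed for illustration):
raw_tool_call = '{"name": "get_weather", "arguments": {"city": "Shanghai"}}'

# Parse the JSON and check it against the declared tool before dispatching.
call = json.loads(raw_tool_call)
assert call["name"] == tools[0]["function"]["name"]
print(call["name"], call["arguments"]["city"])  # get_weather Shanghai
```

In a real application, the parsed name and arguments would be used to invoke your own function and feed its result back to the model as a tool message.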

7. Citation

@misc{minimax2025minimax01scalingfoundationmodels,
      title={MiniMax-01: Scaling Foundation Models with Lightning Attention}, 
      author={MiniMax and Aonian Li and Bangwei Gong and Bo Yang and Boji Shan and Chang Liu and Cheng Zhu and Chunhao Zhang and Congchao Guo and Da Chen and Dong Li and Enwei Jiao and Gengxin Li and Guojun Zhang and Haohai Sun and Houze Dong and Jiadai Zhu and Jiaqi Zhuang and Jiayuan Song and Jin Zhu and Jingtao Han and Jingyang Li and Junbin Xie and Junhao Xu and Junjie Yan and Kaishun Zhang and Kecheng Xiao and Kexi Kang and Le Han and Leyang Wang and Lianfei Yu and Liheng Feng and Lin Zheng and Linbo Chai and Long Xing and Meizhi Ju and Mingyuan Chi and Mozhi Zhang and Peikai Huang and Pengcheng Niu and Pengfei Li and Pengyu Zhao and Qi Yang and Qidi Xu and Qiexiang Wang and Qin Wang and Qiuhui Li and Ruitao Leng and Shengmin Shi and Shuqi Yu and Sichen Li and Songquan Zhu and Tao Huang and Tianrun Liang and Weigao Sun and Weixuan Sun and Weiyu Cheng and Wenkai Li and Xiangjun Song and Xiao Su and Xiaodong Han and Xinjie Zhang and Xinzhu Hou and Xu Min and Xun Zou and Xuyang Shen and Yan Gong and Yingjie Zhu and Yipeng Zhou and Yiran Zhong and Yongyi Hu and Yuanxiang Fan and Yue Yu and Yufeng Yang and Yuhao Li and Yunan Huang and Yunji Li and Yunpeng Huang and Yunzhi Xu and Yuxin Mao and Zehan Li and Zekang Li and Zewei Tao and Zewen Ying and Zhaoyang Cong and Zhen Qin and Zhenhua Fan and Zhihang Yu and Zhuo Jiang and Zijia Wu},
      year={2025},
      eprint={2501.08313},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.08313}, 
}

8. Chatbot & API

For general use and evaluation, we provide a Chatbot with online search capabilities, as well as an online API for developers. For developers' convenience, we also provide the MiniMax MCP Server, which supports video generation, image generation, speech synthesis, and voice cloning.

9. Contact Us

If you have any questions, please contact us at model@minimaxi.com.