1. Detailed Steps

1.1 Pull the Source Code

# Make sure the network connection is working
git clone --recursive https://github.com/li-plus/chatglm.cpp.git && cd chatglm.cpp

1.2 Model Conversion / Quantization

1.2.1 Install the required Python dependencies
torch
pip install torch -U
Other dependencies
pip install tabulate tqdm transformers accelerate sentencepiece tiktoken -U
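An optional sanity check before running the converter: verify that the dependencies are importable (a sketch; the import names happen to match the pip package names listed above):

```python
import importlib.util

# Check each dependency without actually importing it (find_spec only
# looks the module up, so this is cheap and safe even if one is missing).
deps = ["torch", "tabulate", "tqdm", "transformers", "accelerate",
        "sentencepiece", "tiktoken"]
status = {m: importlib.util.find_spec(m) is not None for m in deps}
for mod, ok in status.items():
    print(f"{mod}: {'ok' if ok else 'missing -- pip install ' + mod}")
```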
1.2.2 Model conversion/quantization
f16
python chatglm_cpp/convert.py -i /path/THUDM/chatglm-6b -t f16 -o /path/THUDM/chatglm-6b/f16.bin
q8_0
python chatglm_cpp/convert.py -i /path/THUDM/chatglm-6b -t q8_0 -o /path/THUDM/chatglm-6b/q8_0.bin
q4_0
python chatglm_cpp/convert.py -i /path/THUDM/chatglm-6b -t q4_0 -o /path/THUDM/chatglm-6b/q4_0.bin
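The -t flag selects the output precision. Roughly speaking, q8_0 and q4_0 are block-wise quantization formats: weights are split into fixed-size blocks, and each block stores low-bit integer codes plus one per-block scale. A simplified Python sketch of the idea (illustrative only, not the exact GGML on-disk layout):

```python
import numpy as np

np.random.seed(0)

def quantize_q4_like(x, block=32):
    # Split into blocks of 32 values; each block gets one scale `d`
    # chosen so the quantized codes fit in roughly [-7, 7].
    x = x.reshape(-1, block)
    amax = np.abs(x).max(axis=1, keepdims=True)
    d = amax / 7.0
    d[d == 0] = 1.0                       # avoid division by zero on all-zero blocks
    q = np.clip(np.round(x / d), -8, 7).astype(np.int8)
    return q, d

def dequantize(q, d):
    # Reconstruct approximate weights from codes and per-block scales.
    return (q * d).ravel()

x = np.random.randn(64).astype(np.float32)
q, d = quantize_q4_like(x)
err = np.abs(dequantize(q, d) - x).max()
```

The storage drops from 16 bits per weight (f16) to 4 bits plus a small per-block scale, at the cost of the rounding error computed above.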

1.3 Model Testing

1.3.1 Build the project (C/C++ toolchain required)

Alternatively, the model can also be called through the Python binding.

CPU
cmake -B build && cmake --build build -j --config Release
CUDA
cmake -B build -DGGML_CUDA=ON && cmake --build build -j
Metal (MPS)
cmake -B build -DGGML_METAL=ON && cmake --build build -j
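The three invocations above differ only in the backend flag. A small sketch that picks one automatically (assumes that nvcc on the PATH signals a usable CUDA toolchain, which is only a heuristic):

```shell
#!/bin/sh
# Pick a ggml backend flag heuristically: CUDA if nvcc is present,
# Metal on macOS, otherwise plain CPU.
if command -v nvcc >/dev/null 2>&1; then
  BACKEND="-DGGML_CUDA=ON"
elif [ "$(uname -s)" = "Darwin" ]; then
  BACKEND="-DGGML_METAL=ON"
else
  BACKEND=""
fi
echo "cmake -B build $BACKEND && cmake --build build -j"
```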
1.3.2 Model testing
Single-shot inference
./build/bin/main -m /path/THUDM/chatglm-6b/f16.bin -p 你好
Interactive (multi-turn) chat
./build/bin/main -m /path/THUDM/chatglm-6b/f16.bin -i
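The Python binding route mentioned above can be sketched as follows (assumes `pip install chatglm-cpp` and a converted model file; Pipeline/ChatMessage names follow the chatglm.cpp README, and the script falls back gracefully when the package is absent):

```python
import importlib.util

# Path produced by the conversion step above.
MODEL = "/path/THUDM/chatglm-6b/q4_0.bin"

if importlib.util.find_spec("chatglm_cpp") is not None:
    import chatglm_cpp
    # Load the converted GGML model and run one chat turn.
    pipeline = chatglm_cpp.Pipeline(MODEL)
    msg = chatglm_cpp.ChatMessage(role="user", content="你好")
    print(pipeline.chat([msg]).content)
else:
    print("chatglm-cpp not installed; run: pip install chatglm-cpp")
```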

2. References

2.1 ChatGLM.cpp

2.1.1 GitHub
Getting Started (Preparation, Quantize, Build & Run...)

https://github.com/li-plus/chatglm.cpp?tab=readme-ov-file#getting-started

Using BLAS (CUDA, Metal...)

https://github.com/li-plus/chatglm.cpp?tab=readme-ov-file#using-blas

3. Resources

3.1 ChatGLM.cpp

3.1.1 GitHub
Official repository

https://github.com/li-plus/chatglm.cpp

Python Binding

https://github.com/li-plus/chatglm.cpp?tab=readme-ov-file#python-binding

API Server

https://github.com/li-plus/chatglm.cpp?tab=readme-ov-file#api-server

Using Docker

https://github.com/li-plus/chatglm.cpp?tab=readme-ov-file#using-docker

Performance

https://github.com/li-plus/chatglm.cpp?tab=readme-ov-file#performance

Model Quality

https://github.com/li-plus/chatglm.cpp?tab=readme-ov-file#model-quality