instructor is a handy tool for getting structured output from LLMs. litellm integrates with instructor in two ways: a patch-based mode and a standard OpenAI API mode. A brief walkthrough of each follows.

Patch mode

import litellm
from litellm import Router
import instructor
from pydantic import BaseModel
 
litellm.set_verbose = True
 
# patch the Router client so chat.completions.create accepts response_model
client = instructor.patch(
    Router(
        model_list=[
            {
                "model_name": "myqwen2",
                "litellm_params": {  # params for litellm completion/embedding call - e.g.: https://github.com/BerriAI/litellm/blob/62a591f90c99120e1a51a8445f5c3752586868ea/litellm/router.py#L111
                    "model": "ollama/qwen2:1.5b",
                    "api_key": "demo",
                    "api_base": "http://localhost:11434",
                },
            }
        ]
    )
)
 
 
class UserDetail(BaseModel):
    name: str
    age: int
 
 
user = client.chat.completions.create(
    model="myqwen2",
    response_model=UserDetail,
    messages=[
        {"role": "user", "content": "Extract Jason is 25 years old"},
    ],
)
 
print(user)
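
If the model extracts the fields correctly, the printed result should look roughly like the following (the exact output depends on the model and on the pydantic version):

name='Jason' age=25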

Standard OpenAI API mode

The core idea is to configure the litellm proxy, then access it with a generated key just like any other OpenAI-compatible service.
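The proxy setup itself is not shown here; as a minimal sketch (the config file name, master key value, and model entry are assumptions mirroring the patch example above), it could look like this:

# config.yaml (hypothetical)
model_list:
  - model_name: myqwen2
    litellm_params:
      model: ollama/qwen2:1.5b
      api_base: http://localhost:11434

general_settings:
  master_key: sk-1234  # assumed master key, used to authorize /key/generate

Start the proxy (it listens on port 4000 by default) and generate a virtual key; note that /key/generate also requires the proxy's database (DATABASE_URL) to be configured:

litellm --config config.yaml

curl -X POST http://localhost:4000/key/generate \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{}'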
Reference code:

import litellm
import instructor
from pydantic import BaseModel
from openai import OpenAI
litellm.set_verbose = True
# the api_key must be obtained via /key/generate or through the UI
openai = OpenAI(
    api_key="sk-26xW7syxk4GIsr1qcXf1bw",
    base_url="http://localhost:4000"
)
 
# wrap the OpenAI client so create() accepts response_model and returns pydantic objects
client = instructor.from_openai(openai)
 
class UserDetail(BaseModel):
    name: str
    age: int
 
 
user = client.chat.completions.create(
    model="myqwen2",
    response_model=UserDetail,
    messages=[
        {"role": "user", "content": "Extract Jason is 25 years old"},
    ],
)
print(user)
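
Since the result is a validated UserDetail instance rather than raw text, the fields can be used directly. A small hypothetical follow-up, assuming pydantic v2:

assert user.name == "Jason"
assert user.age == 25
print(user.model_dump_json())  # {"name":"Jason","age":25}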

Notes

The above is a simple usage walkthrough. The broader point is that once a service exposes the standard OpenAI API, many integrations become much simpler and no longer require complex, tool-specific handling.

References

https://docs.litellm.ai/docs/tutorials/instructor
https://python.useinstructor.com/