Instructor

Extract structured data from LLM responses with Pydantic validation, automatically retry failed extractions, parse complex JSON in a type-safe way, and stream partial results using Instructor, a battle-tested structured-output library

Skill metadata

Source: optional — install with hermes skills install official/mlops/instructor
Path: optional-skills/mlops/instructor
Version: 1.0.0
Author: Orchestra Research
License: MIT
Dependencies: instructor, pydantic, openai, anthropic
Tags: Prompt Engineering, Instructor, Structured Output, Pydantic, Data Extraction, JSON Parsing, Type Safety, Validation, Streaming, OpenAI, Anthropic

Reference: full SKILL.md

Info

Below is the full skill definition that Hermes loads when this skill is triggered. These are the instructions the agent sees when the skill is activated.

Instructor: Structured LLM Outputs

When to use this skill

Use Instructor when you need to:

  • Reliably extract structured data from LLM responses
  • Automatically validate outputs against Pydantic schemas
  • Retry failed extractions with automatic error handling
  • Parse complex JSON in a type-safe, validated way
  • Stream partial results for real-time processing
  • Target multiple LLM providers through a consistent API

GitHub Stars: 15,000+ | Battle-tested by 100,000+ developers

Installation

# Base installation
pip install instructor

# With specific providers
pip install "instructor[anthropic]" # Anthropic Claude
pip install "instructor[openai]" # OpenAI
pip install "instructor[all]" # All providers

Quickstart

Basic example: extracting user data

import instructor
from pydantic import BaseModel
from anthropic import Anthropic

# Define output structure
class User(BaseModel):
    name: str
    age: int
    email: str

# Create instructor client
client = instructor.from_anthropic(Anthropic())

# Extract structured data
user = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "John Doe is 30 years old. His email is john@example.com"
    }],
    response_model=User
)

print(user.name)   # "John Doe"
print(user.age)    # 30
print(user.email)  # "john@example.com"

Using with OpenAI

from openai import OpenAI

client = instructor.from_openai(OpenAI())

user = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=User,
    messages=[{"role": "user", "content": "Extract: Alice, 25, alice@email.com"}]
)

Core concepts

1. Response models (Pydantic)

Response models define the structure and validation rules for LLM output.

Basic model

from pydantic import BaseModel, Field

class Article(BaseModel):
    title: str = Field(description="Article title")
    author: str = Field(description="Author name")
    word_count: int = Field(description="Number of words", gt=0)
    tags: list[str] = Field(description="List of relevant tags")

article = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Analyze this article: [article text]"
    }],
    response_model=Article
)

Benefits:

  • Type safety via Python type hints
  • Automatic validation (word_count > 0)
  • Self-documenting through Field descriptions
  • IDE autocompletion support

Nested models

class Address(BaseModel):
    street: str
    city: str
    country: str

class Person(BaseModel):
    name: str
    age: int
    address: Address  # Nested model

person = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "John lives at 123 Main St, Boston, USA"
    }],
    response_model=Person
)

print(person.address.city)  # "Boston"

Optional fields

from typing import Optional

class Product(BaseModel):
    name: str
    price: float
    discount: Optional[float] = None  # Optional
    description: str = Field(default="No description")  # Default value

# LLM doesn't need to provide discount or description

Enums for constrained values

from enum import Enum

class Sentiment(str, Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"
    NEUTRAL = "neutral"

class Review(BaseModel):
    text: str
    sentiment: Sentiment  # Only these 3 values allowed

review = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "This product is amazing!"
    }],
    response_model=Review
)

print(review.sentiment)  # Sentiment.POSITIVE

2. Validation

Pydantic validates LLM output automatically. If validation fails, Instructor retries.

Built-in validators

from pydantic import Field, EmailStr, HttpUrl

class Contact(BaseModel):
    name: str = Field(min_length=2, max_length=100)
    age: int = Field(ge=0, le=120)  # 0 <= age <= 120
    email: EmailStr   # Validates email format
    website: HttpUrl  # Validates URL format

# If LLM provides invalid data, Instructor retries automatically

Custom validators

from pydantic import field_validator

class Event(BaseModel):
    name: str
    date: str
    attendees: int

    @field_validator('date')
    def validate_date(cls, v):
        """Ensure date is in YYYY-MM-DD format."""
        import re
        if not re.match(r'\d{4}-\d{2}-\d{2}', v):
            raise ValueError('Date must be YYYY-MM-DD format')
        return v

    @field_validator('attendees')
    def validate_attendees(cls, v):
        """Ensure positive attendees."""
        if v < 1:
            raise ValueError('Must have at least 1 attendee')
        return v

Model-level validation

from pydantic import model_validator

class DateRange(BaseModel):
    start_date: str
    end_date: str

    @model_validator(mode='after')
    def check_dates(self):
        """Ensure end_date is after start_date."""
        from datetime import datetime
        start = datetime.strptime(self.start_date, '%Y-%m-%d')
        end = datetime.strptime(self.end_date, '%Y-%m-%d')

        if end < start:
            raise ValueError('end_date must be after start_date')
        return self
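A model-level validator like this can be exercised directly with Pydantic, with no LLM involved — a quick way to sanity-check validation logic before wiring it into Instructor. A minimal sketch, assuming pydantic v2 is installed:

```python
from datetime import datetime

from pydantic import BaseModel, ValidationError, model_validator

class DateRange(BaseModel):
    start_date: str
    end_date: str

    @model_validator(mode='after')
    def check_dates(self):
        """Ensure end_date is not before start_date."""
        start = datetime.strptime(self.start_date, '%Y-%m-%d')
        end = datetime.strptime(self.end_date, '%Y-%m-%d')
        if end < start:
            raise ValueError('end_date must be after start_date')
        return self

# A valid range constructs normally
ok = DateRange(start_date='2024-01-01', end_date='2024-06-30')

# A reversed range raises ValidationError (which Instructor would
# feed back to the LLM as retry context)
try:
    DateRange(start_date='2024-06-30', end_date='2024-01-01')
    rejected = False
except ValidationError:
    rejected = True
```

The same `ValidationError` that surfaces here is what Instructor intercepts and relays to the model during retries.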

3. Automatic retries

When validation fails, Instructor retries automatically, feeding the error back to the LLM.

# Retries up to 3 times if validation fails
user = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Extract user from: John, age unknown"
    }],
    response_model=User,
    max_retries=3  # Default is 3
)

# If age can't be extracted, Instructor tells the LLM:
# "Validation error: age - field required"
# LLM tries again with better extraction

How it works:

  1. The LLM generates output
  2. Pydantic validates it
  3. If invalid: the error message is sent back to the LLM
  4. The LLM tries again using the error feedback
  5. Repeat until the output is valid or max_retries is reached
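The feedback loop above can be sketched in plain Python. This is not Instructor's actual implementation — `fake_llm` and `validate_user` are stand-ins (the first deliberately omits `age` on attempt one and self-corrects once it sees the error message):

```python
import json

def fake_llm(prompt: str, error=None) -> str:
    """Stand-in for an LLM call: fails once, then self-corrects."""
    if error is None:
        return '{"name": "John"}'  # First attempt: age is missing
    return '{"name": "John", "age": 30}'  # Retry, guided by error feedback

def validate_user(payload: str) -> dict:
    """Minimal schema check standing in for Pydantic validation."""
    data = json.loads(payload)
    for field in ("name", "age"):
        if field not in data:
            raise ValueError(f"Validation error: {field} - field required")
    return data

def extract_with_retries(prompt: str, max_retries: int = 3) -> dict:
    error = None
    for _ in range(max_retries):
        raw = fake_llm(prompt, error)  # 1. LLM generates output
        try:
            return validate_user(raw)  # 2. Pydantic-style validation
        except ValueError as e:
            error = str(e)             # 3./4. Feed error back, retry
    raise RuntimeError(f"Failed after {max_retries} attempts: {error}")

user = extract_with_retries("Extract user from: John, age unknown")
# user == {"name": "John", "age": 30}
```

Instructor does the equivalent for you: it serializes the Pydantic `ValidationError` into the retry prompt so the model can correct its own output.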

4. Streaming

Stream partial results for real-time processing.

Streaming partial objects

from instructor import Partial

class Story(BaseModel):
    title: str
    content: str
    tags: list[str]

# Stream partial updates as the LLM generates
for partial_story in client.messages.create_partial(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Write a short sci-fi story"
    }],
    response_model=Story
):
    print(f"Title: {partial_story.title}")
    print(f"Content so far: {partial_story.content[:100]}...")
    # Update UI in real-time

Streaming iterables

class Task(BaseModel):
    title: str
    priority: str

# Stream list items as they're generated
tasks = client.messages.create_iterable(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Generate 10 project tasks"
    }],
    response_model=Task
)

for task in tasks:
    print(f"- {task.title} ({task.priority})")
    # Process each task as it arrives

Provider configuration

Anthropic Claude

import instructor
from anthropic import Anthropic

client = instructor.from_anthropic(
    Anthropic(api_key="your-api-key")
)

# Use with Claude models
response = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[...],
    response_model=YourModel
)

OpenAI

from openai import OpenAI

client = instructor.from_openai(
    OpenAI(api_key="your-api-key")
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=YourModel,
    messages=[...]
)

Local models (Ollama)

from openai import OpenAI

# Point to local Ollama server
client = instructor.from_openai(
    OpenAI(
        base_url="http://localhost:11434/v1",
        api_key="ollama"  # Required but ignored
    ),
    mode=instructor.Mode.JSON
)

response = client.chat.completions.create(
    model="llama3.1",
    response_model=YourModel,
    messages=[...]
)

Common patterns

Pattern 1: extracting data from text

class CompanyInfo(BaseModel):
    name: str
    founded_year: int
    industry: str
    employees: int
    headquarters: str

text = """
Tesla, Inc. was founded in 2003. It operates in the automotive and energy
industry with approximately 140,000 employees. The company is headquartered
in Austin, Texas.
"""

company = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Extract company information from: {text}"
    }],
    response_model=CompanyInfo
)

Pattern 2: classification

class Category(str, Enum):
    TECHNOLOGY = "technology"
    FINANCE = "finance"
    HEALTHCARE = "healthcare"
    EDUCATION = "education"
    OTHER = "other"

class ArticleClassification(BaseModel):
    category: Category
    confidence: float = Field(ge=0.0, le=1.0)
    keywords: list[str]

classification = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Classify this article: [article text]"
    }],
    response_model=ArticleClassification
)

Pattern 3: multi-entity extraction

class Person(BaseModel):
    name: str
    role: str

class Organization(BaseModel):
    name: str
    industry: str

class Entities(BaseModel):
    people: list[Person]
    organizations: list[Organization]
    locations: list[str]

text = "Tim Cook, CEO of Apple, announced at the event in Cupertino..."

entities = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Extract all entities from: {text}"
    }],
    response_model=Entities
)

for person in entities.people:
    print(f"{person.name} - {person.role}")

Pattern 4: structured analysis

class SentimentAnalysis(BaseModel):
    overall_sentiment: Sentiment
    positive_aspects: list[str]
    negative_aspects: list[str]
    suggestions: list[str]
    score: float = Field(ge=-1.0, le=1.0)

review = "The product works well but setup was confusing..."

analysis = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Analyze this review: {review}"
    }],
    response_model=SentimentAnalysis
)

Pattern 5: batch processing

def extract_person(text: str) -> Person:
    return client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"Extract person from: {text}"
        }],
        response_model=Person
    )

texts = [
    "John Doe is a 30-year-old engineer",
    "Jane Smith, 25, works in marketing",
    "Bob Johnson, age 40, software developer"
]

people = [extract_person(text) for text in texts]

Advanced features

Union types

from typing import Union

class TextContent(BaseModel):
    type: str = "text"
    content: str

class ImageContent(BaseModel):
    type: str = "image"
    url: HttpUrl
    caption: str

class Post(BaseModel):
    title: str
    content: Union[TextContent, ImageContent]  # Either type

# LLM chooses appropriate type based on content

Dynamic models

from pydantic import create_model

# Create model at runtime
DynamicUser = create_model(
    'User',
    name=(str, ...),
    age=(int, Field(ge=0)),
    email=(EmailStr, ...)
)

user = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[...],
    response_model=DynamicUser
)

Custom modes

# For providers without native structured outputs
client = instructor.from_anthropic(
    Anthropic(),
    mode=instructor.Mode.JSON  # JSON mode
)

# Available modes:
# - Mode.ANTHROPIC_TOOLS (recommended for Claude)
# - Mode.JSON (fallback)
# - Mode.TOOLS (OpenAI tools)

Context management

# Single-use client
with instructor.from_anthropic(Anthropic()) as client:
    result = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[...],
        response_model=YourModel
    )
# Client closed automatically

Error handling

Handling validation errors

from pydantic import ValidationError

try:
    user = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[...],
        response_model=User,
        max_retries=3
    )
except ValidationError as e:
    print(f"Failed after retries: {e}")
    # Handle gracefully
except Exception as e:
    print(f"API error: {e}")

Custom error messages

class ValidatedUser(BaseModel):
    name: str = Field(description="Full name, 2-100 characters")
    age: int = Field(description="Age between 0 and 120", ge=0, le=120)
    email: EmailStr = Field(description="Valid email address")

    class Config:
        # Provide schema examples to guide the LLM
        json_schema_extra = {
            "examples": [
                {
                    "name": "John Doe",
                    "age": 30,
                    "email": "john@example.com"
                }
            ]
        }

Best practices

1. Clear field descriptions

# ❌ Bad: Vague
class Product(BaseModel):
    name: str
    price: float

# ✅ Good: Descriptive
class Product(BaseModel):
    name: str = Field(description="Product name from the text")
    price: float = Field(description="Price in USD, without currency symbol")

2. Use appropriate validation

# ✅ Good: Constrain values
class Rating(BaseModel):
    score: int = Field(ge=1, le=5, description="Rating from 1 to 5 stars")
    review: str = Field(min_length=10, description="Review text, at least 10 chars")

3. Provide examples in the prompt

messages = [{
    "role": "user",
    "content": """Extract person info from: "John, 30, engineer"

Example format:
{
    "name": "John Doe",
    "age": 30,
    "occupation": "engineer"
}"""
}]

4. Use enums for fixed categories

# ✅ Good: Enum ensures valid values
class Status(str, Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

class Application(BaseModel):
    status: Status  # LLM must choose from enum

5. Handle missing data gracefully

class PartialData(BaseModel):
    required_field: str
    optional_field: Optional[str] = None
    default_field: str = "default_value"

# LLM only needs to provide required_field

Comparison with alternatives

| Feature                | Instructor | Manual JSON | LangChain  | DSPy       |
|------------------------|------------|-------------|------------|------------|
| Type safety            | ✅ Yes     | ❌ No       | ⚠️ Partial | ✅ Yes     |
| Auto validation        | ✅ Yes     | ❌ No       | ❌ No      | ⚠️ Limited |
| Auto retries           | ✅ Yes     | ❌ No       | ❌ No      | ✅ Yes     |
| Streaming              | ✅ Yes     | ❌ No       | ✅ Yes     | ❌ No      |
| Multi-provider support | ✅ Yes     | ⚠️ Manual   | ✅ Yes     | ✅ Yes     |
| Learning curve         |            |             |            |            |

When to choose Instructor:

  • You need structured, validated outputs
  • You want type safety and IDE support
  • You need automatic retries
  • You are building data extraction systems

When to choose an alternative:

  • DSPy: you need prompt optimization
  • LangChain: you are building complex chains
  • Manual: simple, one-off extractions

Resources

See also

  • references/validation.md - Advanced validation patterns
  • references/providers.md - Provider-specific configuration
  • references/examples.md - Real-world use cases