
PEFT Fine-Tuning

Parameter-efficient fine-tuning of large language models (LLMs) with LoRA, QLoRA, and 25+ methods. Use it to fine-tune large models (7B-70B) with limited GPU memory, train <1% of parameters with minimal accuracy loss, or serve multiple adapters from one base model. This is the official HuggingFace library, integrated with the transformers ecosystem.

Skill Metadata

Source: Optional. Install with hermes skills install official/mlops/peft
Path: optional-skills/mlops/peft
Version: 1.0.0
Author: Orchestra Research
License: MIT
Dependencies: peft>=0.13.0, transformers>=4.45.0, torch>=2.0.0, bitsandbytes>=0.43.0
Tags: Fine-Tuning, PEFT, LoRA, QLoRA, Parameter-Efficient, Adapters, Low-Rank, Memory Optimization, Multi-Adapter

Reference: full SKILL.md

Info

Below is the complete skill definition that Hermes loads when this skill is triggered. These are the instructions the agent sees when the skill is activated.

PEFT (Parameter-Efficient Fine-Tuning)

Fine-tune LLMs by training <1% of their parameters, using LoRA, QLoRA, and 25+ adapter methods.

When to Use PEFT

Use PEFT/LoRA when:

  • Fine-tuning 7B-70B models on consumer-grade GPUs (RTX 4090, A100)
  • You need to train <1% of parameters (a 6 MB adapter vs. a 14 GB full model)
  • You want fast iteration with multiple task-specific adapters
  • You deploy several fine-tuned variants from a single base model

Use QLoRA (PEFT + quantization) when:

  • Fine-tuning a 70B model on a single 48 GB GPU (its 4-bit weights alone take ~35 GB)
  • GPU memory is the primary constraint
  • You can accept a roughly 5% quality trade-off vs. full fine-tuning

Switch to full fine-tuning when:

  • Training small models (<1B parameters)
  • You need maximum quality and have ample compute budget
  • A significant domain shift requires updating all weights
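The parameter savings quoted above come straight from LoRA's low-rank update: instead of learning a full ΔW of shape d_out x d_in, LoRA learns B (d_out x r) and A (r x d_in) with r much smaller than d. A stdlib-only back-of-the-envelope check (the 4096 width matches a Llama-style q_proj and is purely illustrative):

```python
# Parameter cost of a full update vs. a rank-r LoRA update
# for a single d_out x d_in projection matrix.
def full_update_params(d_out, d_in):
    return d_out * d_in

def lora_update_params(d_out, d_in, r):
    # B is d_out x r, A is r x d_in; only these two matrices are trained.
    return d_out * r + r * d_in

d = 4096                                # hidden size of a Llama-style projection
full = full_update_params(d, d)         # 16,777,216 weights
lora = lora_update_params(d, d, r=16)   # 131,072 weights
print(f"full: {full:,}  LoRA r=16: {lora:,}  ratio: {lora / full:.2%}")
```

At r=16 the update costs under 1% of the full matrix, which is where the "<1% of parameters" figure comes from once every targeted layer is counted.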

Quick Start

Installation

# Basic installation
pip install peft

# With quantization support (recommended)
pip install peft bitsandbytes

# Full stack
pip install peft transformers accelerate bitsandbytes datasets

LoRA Fine-Tuning (Standard)

from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer
from peft import get_peft_model, LoraConfig, TaskType
from datasets import load_dataset
import torch

# Load base model
model_name = "meta-llama/Llama-3.1-8B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# LoRA configuration
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,               # Rank (8-64, higher = more capacity)
    lora_alpha=32,      # Scaling factor (typically 2*r)
    lora_dropout=0.05,  # Dropout for regularization
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],  # Attention layers
    bias="none"         # Don't train biases
)

# Apply LoRA
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Output: trainable params: 13,631,488 || all params: 8,043,307,008 || trainable%: 0.17%

# Prepare dataset
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def tokenize(example):
    text = f"### Instruction:\n{example['instruction']}\n\n### Response:\n{example['response']}"
    return tokenizer(text, truncation=True, max_length=512, padding="max_length")

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)
tokenized.set_format("torch")  # return tensors so the collator below can stack them

# Training
training_args = TrainingArguments(
    output_dir="./lora-llama",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    fp16=True,
    logging_steps=10,
    save_strategy="epoch"
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    data_collator=lambda data: {
        "input_ids": torch.stack([f["input_ids"] for f in data]),
        "attention_mask": torch.stack([f["attention_mask"] for f in data]),
        "labels": torch.stack([f["input_ids"] for f in data]),  # causal LM: labels = inputs
    }
)

trainer.train()

# Save the adapter only (6 MB vs 16 GB for the full model)
model.save_pretrained("./lora-llama-adapter")

QLoRA Fine-Tuning (Memory-Efficient)

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import get_peft_model, LoraConfig, prepare_model_for_kbit_training

# 4-bit quantization config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # NormalFloat4 (best for LLMs)
    bnb_4bit_compute_dtype="bfloat16",  # Compute in bf16
    bnb_4bit_use_double_quant=True      # Nested quantization
)

# Load quantized model
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-70B",
    quantization_config=bnb_config,
    device_map="auto"
)

# Prepare for training (enables gradient checkpointing)
model = prepare_model_for_kbit_training(model)

# LoRA config for QLoRA
lora_config = LoraConfig(
    r=64,              # Higher rank for 70B
    lora_alpha=128,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, lora_config)
# The 70B model now fits on a single 48 GB GPU!

LoRA Parameter Selection

Rank (r) - Capacity vs. Efficiency

| Rank | Trainable params | VRAM    | Quality  | Use case                       |
|------|------------------|---------|----------|--------------------------------|
| 4    | ~3M              | Minimal | Lower    | Simple tasks, prototyping      |
| 8    | ~7M              | Low     | Good     | Recommended starting point     |
| 16   | ~14M             | Medium  | Better   | General fine-tuning            |
| 32   | ~27M             | Higher  | High     | Complex tasks                  |
| 64   | ~54M             | Highest | Highest  | Domain adaptation, 70B models  |
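The counts in the table scale linearly with r, since each targeted Linear layer adds r * (d_in + d_out) trainable parameters. As a sanity check, the exact 13,631,488 figure printed in the Quick Start falls out of Llama 3.1 8B's attention shapes (32 layers, hidden size 4096, GQA k/v projections down to 1024):

```python
# Trainable LoRA parameters: r * (d_in + d_out) per targeted Linear layer.
def lora_params(r, n_layers, shapes):
    return r * n_layers * sum(d_in + d_out for d_in, d_out in shapes)

# Llama 3.1 8B attention projections targeted in the Quick Start config
shapes = [
    (4096, 4096),  # q_proj
    (4096, 1024),  # k_proj (GQA: fewer KV heads)
    (4096, 1024),  # v_proj
    (4096, 4096),  # o_proj
]

for r in (4, 8, 16, 32, 64):
    print(f"r={r:2d}: {lora_params(r, 32, shapes):>10,}")
# r=16 gives 13,631,488, matching print_trainable_parameters() in the Quick Start
```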

Alpha (lora_alpha) - Scaling Factor

# Rule of thumb: alpha = 2 * rank
LoraConfig(r=16, lora_alpha=32) # Standard
LoraConfig(r=16, lora_alpha=16) # Conservative (lower learning rate effect)
LoraConfig(r=16, lora_alpha=64) # Aggressive (higher learning rate effect)
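Concretely, PEFT multiplies the adapter output by alpha / r before adding it to the frozen layer's output (h = W x + (alpha / r) * B A x), so at a fixed rank, alpha acts as a gain on the learned update. A stdlib sketch of the three configs above:

```python
def lora_scaling(r, lora_alpha):
    # Factor applied to the B @ A output before it is added to W @ x.
    return lora_alpha / r

for alpha, label in [(32, "standard (2*r)"), (16, "conservative"), (64, "aggressive")]:
    print(f"r=16, alpha={alpha}: scaling = {lora_scaling(16, alpha):.1f}  ({label})")
```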

Target Modules by Architecture

# Llama / Mistral / Qwen
target_modules = ["q_proj", "v_proj", "k_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]

# GPT-2 / GPT-Neo
target_modules = ["c_attn", "c_proj", "c_fc"]

# Falcon
target_modules = ["query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h"]

# BLOOM
target_modules = ["query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h"]

# Auto-detect all linear layers
target_modules = "all-linear" # PEFT 0.6.0+
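When target_modules is a list, PEFT wraps every module whose dotted name ends with one of the entries, which is why the short names above hit the same projection in every layer. A stdlib-only sketch of that suffix matching (the module paths below are illustrative, not read from a real model):

```python
def match_targets(module_names, targets):
    # A module is wrapped if its dotted path equals or ends with a target name.
    return [name for name in module_names
            if any(name == t or name.endswith("." + t) for t in targets)]

names = [
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.self_attn.k_proj",
    "model.layers.0.mlp.gate_proj",
    "model.layers.0.input_layernorm",
]
print(match_targets(names, ["q_proj", "v_proj"]))
# -> ['model.layers.0.self_attn.q_proj']
```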

Loading and Merging Adapters

Loading a Trained Adapter

from peft import PeftModel, AutoPeftModelForCausalLM
from transformers import AutoModelForCausalLM

# Option 1: Load with PeftModel
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
model = PeftModel.from_pretrained(base_model, "./lora-llama-adapter")

# Option 2: Load directly (recommended)
model = AutoPeftModelForCausalLM.from_pretrained(
    "./lora-llama-adapter",
    device_map="auto"
)

Merging the Adapter into the Base Model

# Merge for deployment (no adapter overhead)
merged_model = model.merge_and_unload()

# Save merged model
merged_model.save_pretrained("./llama-merged")
tokenizer.save_pretrained("./llama-merged")

# Push to Hub
merged_model.push_to_hub("username/llama-finetuned")
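What merge_and_unload computes per targeted layer is W_merged = W + (alpha / r) * B @ A; afterwards a forward pass through the merged weight equals base-plus-adapter, with no runtime overhead. A tiny pure-Python check with toy matrices (r = 1, scaling 2.0, values made up):

```python
def matmul(a, b):
    # Naive matrix product for small demo matrices.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def merge(W, B, A, scaling):
    # W_merged = W + scaling * (B @ A), exactly what merging folds into the base weight.
    BA = matmul(B, A)
    return [[W[i][j] + scaling * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (identity, for clarity)
B = [[1.0], [2.0]]             # d_out x r, r = 1
A = [[0.5, 0.5]]               # r x d_in
merged = merge(W, B, A, scaling=2.0)   # alpha / r = 2
print(merged)  # [[2.0, 1.0], [2.0, 3.0]]
```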

Multi-Adapter Serving

from peft import AutoPeftModelForCausalLM

# Load the base model with the first adapter (name it so it can be re-selected)
model = AutoPeftModelForCausalLM.from_pretrained("./adapter-task1", adapter_name="task1")

# Load additional adapters
model.load_adapter("./adapter-task2", adapter_name="task2")
model.load_adapter("./adapter-task3", adapter_name="task3")

# Switch between adapters at runtime
model.set_adapter("task1") # Use task1 adapter
output1 = model.generate(**inputs)

model.set_adapter("task2") # Switch to task2
output2 = model.generate(**inputs)

# Disable adapters (use base model)
with model.disable_adapter():
    base_output = model.generate(**inputs)

PEFT Method Comparison

| Method        | Trainable params | Memory   | Speed   | Best for                     |
|---------------|------------------|----------|---------|------------------------------|
| LoRA          | 0.1-1%           | Low      | Fast    | General fine-tuning          |
| QLoRA         | 0.1-1%           | Very low | Medium  | Memory-constrained setups    |
| AdaLoRA       | 0.1-1%           | Medium   | Medium  | Automatic rank selection     |
| IA3           | 0.01%            | Minimal  | Fastest | Few-shot adaptation          |
| Prefix Tuning | 0.1%             | Medium   | Fast    | Controlling generation       |
| Prompt Tuning | 0.001%           | Minimal  | Fast    | Simple task adaptation       |
| P-Tuning v2   | 0.1%             | Medium   | Fast    | NLU tasks                    |

IA3 (Minimal Parameters)

from peft import IA3Config

ia3_config = IA3Config(
    target_modules=["q_proj", "v_proj", "k_proj", "down_proj"],
    feedforward_modules=["down_proj"]
)
model = get_peft_model(model, ia3_config)
# Trains only 0.01% of parameters!
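The 0.01% figure follows from what IA3 learns: a single scaling vector per targeted projection, applied elementwise to activations (h <- l * h) instead of a low-rank matrix pair. In pure Python with toy values:

```python
def ia3_rescale(hidden, l):
    # IA3 multiplies each activation dimension by one learned scalar.
    return [h * s for h, s in zip(hidden, l)]

hidden = [0.5, -1.0, 2.0]
l = [1.1, 0.9, 1.0]   # learned vector, initialized at 1.0 (identity)
print(ia3_rescale(hidden, l))
```

A 4096-wide projection thus costs 4,096 parameters instead of LoRA's r * 8,192, which is where the order-of-magnitude drop comes from.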

Prefix Tuning

from peft import PrefixTuningConfig

prefix_config = PrefixTuningConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,   # Prepended tokens
    prefix_projection=True   # Use MLP projection
)
model = get_peft_model(model, prefix_config)

Integration Patterns

With TRL (SFTTrainer)

from trl import SFTTrainer, SFTConfig
from peft import LoraConfig

lora_config = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear")

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(output_dir="./output", max_seq_length=512),
    train_dataset=dataset,
    peft_config=lora_config,  # Pass LoRA config directly
)
trainer.train()
trainer.train()

With Axolotl (YAML Config)

# axolotl config.yaml
adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
lora_target_linear: true # Target all linear layers

With vLLM (Inference)

from vllm import LLM
from vllm.lora.request import LoRARequest

# Load base model with LoRA support
llm = LLM(model="meta-llama/Llama-3.1-8B", enable_lora=True)

# Serve with adapter
outputs = llm.generate(
    prompts,
    lora_request=LoRARequest("adapter1", 1, "./lora-adapter")
)

Performance Benchmarks

Memory Usage (Llama 3.1 8B)

| Method           | GPU memory | Trainable params |
|------------------|------------|------------------|
| Full fine-tuning | 60+ GB     | 8B (100%)        |
| LoRA r=16        | 18 GB      | 14M (0.17%)      |
| QLoRA r=16       | 6 GB       | 14M (0.17%)      |
| IA3              | 16 GB      | 800K (0.01%)     |
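The ordering in this table can be reproduced from first principles: full fine-tuning stores fp16 weights, fp16 gradients, and fp32 Adam moments for every parameter, while LoRA freezes the base weights and QLoRA additionally holds them in 4-bit. A rough stdlib estimator (activations, optimizer workspace, and quantization overhead are ignored, so absolute numbers differ from the measured values above):

```python
GB = 1024 ** 3

def estimate_vram_gb(n_params, n_trainable, weight_bytes=2.0):
    weights = n_params * weight_bytes   # base model storage (fp16 = 2 B, 4-bit = 0.5 B)
    grads = n_trainable * 2             # fp16 gradients, trainable params only
    adam = n_trainable * 8              # fp32 Adam first + second moments
    return (weights + grads + adam) / GB

p = 8e9  # Llama 3.1 8B
print(f"full FT    : {estimate_vram_gb(p, p):5.1f} GB")          # dominated by optimizer state
print(f"LoRA  r=16 : {estimate_vram_gb(p, 14e6):5.1f} GB")       # frozen fp16 base + tiny adapter
print(f"QLoRA r=16 : {estimate_vram_gb(p, 14e6, 0.5):5.1f} GB")  # 4-bit base + tiny adapter
```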

Training Speed (A100 80GB)

| Method           | Tokens/sec | vs. full fine-tuning |
|------------------|------------|----------------------|
| Full fine-tuning | 2,500      | 1x                   |
| LoRA             | 3,200      | 1.3x                 |
| QLoRA            | 2,100      | 0.84x                |

Quality (MMLU Benchmark)

| Model       | Full fine-tuning | LoRA | QLoRA |
|-------------|------------------|------|-------|
| Llama 2-7B  | 45.3             | 44.8 | 44.1  |
| Llama 2-13B | 54.8             | 54.2 | 53.5  |

Troubleshooting

CUDA OOM During Training

# Solution 1: Enable gradient checkpointing
model.gradient_checkpointing_enable()

# Solution 2: Reduce batch size + increase accumulation
TrainingArguments(
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16
)

# Solution 3: Use QLoRA
from transformers import BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")

Adapter Has No Effect

# Verify adapter is active
print(model.active_adapters) # Should show adapter name

# Check trainable parameters
model.print_trainable_parameters()

# Ensure model in training mode
model.train()

Quality Degradation

# Increase rank
LoraConfig(r=32, lora_alpha=64)

# Target more modules
target_modules = "all-linear"

# Use more training data and epochs
TrainingArguments(num_train_epochs=5)

# Lower learning rate
TrainingArguments(learning_rate=1e-4)

Best Practices

  1. Start with r=8-16; increase if quality falls short
  2. Use alpha = 2 * rank as a starting point
  3. Target attention + MLP layers for the best quality/efficiency trade-off
  4. Enable gradient checkpointing to save VRAM
  5. Save adapters frequently (files are small, easy to roll back)
  6. Evaluate on held-out data before merging
  7. Use QLoRA for 70B+ models on consumer hardware
