LLaVA
Large Language and Vision Assistant. Supports visual instruction tuning and image-based conversation. Combines the CLIP vision encoder with a Vicuna/LLaMA language model. Supports multi-turn image chat, visual question answering, and instruction following. Suitable for vision-language chatbots and image understanding tasks. Best suited for conversational image analysis.
Skill Metadata
| Source | Optional; install with hermes skills install official/mlops/llava |
| Path | optional-skills/mlops/llava |
| Version | 1.0.0 |
| Author | Orchestra Research |
| License | MIT |
| Dependencies | transformers, torch, pillow |
| Tags | LLaVA, Vision-Language, Multimodal, Visual Question Answering, Image Chat, CLIP, Vicuna, Conversational AI, Instruction Tuning, VQA |
Reference: Full SKILL.md
Info
Below is the full skill definition that Hermes loads when this skill is triggered. These are the instructions the agent sees when the skill is activated.
LLaVA - Large Language and Vision Assistant
An open-source vision-language model for conversational image understanding.
When to Use LLaVA
Use when:
- Building vision-language chatbots
- Visual question answering (VQA)
- Image description and captioning
- Multi-turn conversations about images
- Visual instruction following
- Document understanding with images
Metrics:
- 23,000+ GitHub stars
- Targets GPT-4V-level capability
- Apache 2.0 license
- Multiple model sizes (7B-34B parameters)
Use alternatives instead:
- GPT-4V: highest quality, API-based
- CLIP: simple zero-shot classification
- BLIP-2: better when you only need captioning
- Flamingo: research-oriented, not open source
Quick Start
Installation
# Clone repository
git clone https://github.com/haotian-liu/LLaVA
cd LLaVA
# Install
pip install -e .
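To confirm the editable install worked, you can run a quick import check (a hypothetical sanity check, not part of the official instructions):
# Verify that the llava package is importable
python -c "from llava.model.builder import load_pretrained_model; print('LLaVA OK')"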
Basic Usage
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates
from PIL import Image
import torch
# Load model
model_path = "liuhaotian/llava-v1.5-7b"
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path)
)
# Load image
image = Image.open("image.jpg").convert("RGB")
image_tensor = process_images([image], image_processor, model.config)
image_tensor = image_tensor.to(model.device, dtype=torch.float16)
# Create conversation
conv = conv_templates["llava_v1"].copy()
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nWhat is in this image?")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
# Generate response
input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).to(model.device)
with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        images=image_tensor,
        do_sample=True,
        temperature=0.2,
        max_new_tokens=512
    )
response = tokenizer.decode(output_ids[0], skip_special_tokens=True).strip()
print(response)
Available Models
| Model | Parameters | VRAM | Quality |
|---|---|---|---|
| LLaVA-v1.5-7B | 7B | ~14 GB | Good |
| LLaVA-v1.5-13B | 13B | ~28 GB | Better |
| LLaVA-v1.6-34B | 34B | ~70 GB | Best |
# Load different models
model_7b = "liuhaotian/llava-v1.5-7b"
model_13b = "liuhaotian/llava-v1.5-13b"
model_34b = "liuhaotian/llava-v1.6-34b"
# 4-bit quantization for lower VRAM: pass load_4bit=True to load_pretrained_model (see Quantization below)
load_4bit = True  # Reduces VRAM by ~4×
CLI Usage
# Single image query
python -m llava.serve.cli \
--model-path liuhaotian/llava-v1.5-7b \
--image-file image.jpg \
--query "What is in this image?"
# Multi-turn conversation
python -m llava.serve.cli \
--model-path liuhaotian/llava-v1.5-7b \
--image-file image.jpg
# Then type questions interactively
Web UI (Gradio)
# Launch Gradio interface
python -m llava.serve.gradio_web_server \
--model-path liuhaotian/llava-v1.5-7b \
--load-4bit # Optional: reduce VRAM
# Access at http://localhost:7860
Multi-turn Conversation
# Initialize conversation
conv = conv_templates["llava_v1"].copy()
# Turn 1
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nWhat is in this image?")
conv.append_message(conv.roles[1], None)
response1 = generate(conv, model, image) # "A dog playing in a park"
# Turn 2
conv.messages[-1][1] = response1 # Add previous response
conv.append_message(conv.roles[0], "What breed is the dog?")
conv.append_message(conv.roles[1], None)
response2 = generate(conv, model, image) # "Golden Retriever"
# Turn 3
conv.messages[-1][1] = response2
conv.append_message(conv.roles[0], "What time of day is it?")
conv.append_message(conv.roles[1], None)
response3 = generate(conv, model, image)
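The generate(conv, model, image) helper used above is not part of the LLaVA package; a minimal sketch that wraps the generation flow from Basic Usage, assuming tokenizer, image_processor, and the imports shown earlier are in scope:
# Hypothetical helper wrapping the Basic Usage generation flow
def generate(conv, model, image, temperature=0.2, max_new_tokens=512):
    prompt = conv.get_prompt()
    image_tensor = process_images([image], image_processor, model.config)
    image_tensor = image_tensor.to(model.device, dtype=torch.float16)
    input_ids = tokenizer_image_token(
        prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt'
    ).unsqueeze(0).to(model.device)
    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images=image_tensor,
            do_sample=True,
            temperature=temperature,
            max_new_tokens=max_new_tokens
        )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True).strip()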
Common Tasks
Image Captioning
question = "Describe this image in detail."
response = ask(model, image, question)
Visual Question Answering
question = "How many people are in the image?"
response = ask(model, image, question)
Object Detection (as text)
question = "List all the objects you can see in this image."
response = ask(model, image, question)
Scene Understanding
question = "What is happening in this scene?"
response = ask(model, image, question)
Document Understanding
question = "What is the main topic of this document?"
response = ask(model, document_image, question)
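The ask(model, image, question) helper in these examples is likewise not provided by the package; a minimal single-turn sketch built on the generate() helper above, reusing the imports from Basic Usage:
# Hypothetical single-turn helper: fresh conversation per question
def ask(model, image, question):
    conv = conv_templates["llava_v1"].copy()
    conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\n" + question)
    conv.append_message(conv.roles[1], None)
    return generate(conv, model, image)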
Training Custom Models
# Stage 1: Feature alignment (558K image-caption pairs)
bash scripts/v1_5/pretrain.sh
# Stage 2: Visual instruction tuning (150K instruction data)
bash scripts/v1_5/finetune.sh
Quantization (Reduce VRAM Usage)
# 4-bit quantization
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path="liuhaotian/llava-v1.5-13b",
    model_base=None,
    model_name=get_model_name_from_path("liuhaotian/llava-v1.5-13b"),
    load_4bit=True  # Reduces VRAM ~4×
)
# 8-bit quantization: pass load_8bit=True to load_pretrained_model instead
load_8bit = True  # Reduces VRAM ~2×
Best Practices
- Start with the 7B model - good quality and manageable VRAM
- Use 4-bit quantization - significantly reduces VRAM usage
- A GPU is required - CPU inference is extremely slow
- Write clear prompts - specific questions get better answers
- Use multi-turn conversation - maintains dialogue context
- Temperature 0.2-0.7 - balances creativity and consistency
- max_new_tokens 512-1024 - for detailed responses
- Batch processing - process multiple images sequentially (see the sketch after this list)
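For the batch-processing point above, a minimal sketch that captions a folder of images sequentially, using the hypothetical ask() helper from Common Tasks and an assumed images/ directory:
from pathlib import Path
from PIL import Image

# Sequentially caption every .jpg in a folder
results = {}
for path in sorted(Path("images/").glob("*.jpg")):
    img = Image.open(path).convert("RGB")
    results[path.name] = ask(model, img, "Describe this image in detail.")
for name, caption in results.items():
    print(f"{name}: {caption}")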
Performance
| Model | VRAM (FP16) | VRAM (4-bit) | Speed (tokens/s) |
|---|---|---|---|
| 7B | ~14 GB | ~4 GB | ~20 |
| 13B | ~28 GB | ~8 GB | ~12 |
| 34B | ~70 GB | ~18 GB | ~5 |
On an A100 GPU
Benchmarks
LLaVA achieves competitive scores on the following benchmarks:
- VQAv2: 78.5%
- GQA: 62.0%
- MM-Vet: 35.4%
- MMBench: 64.3%
Limitations
- Hallucination - may describe things that are not in the image
- Spatial reasoning - struggles with precise positions
- Small text - has difficulty reading fine print
- Object counting - imprecise when counting many objects
- VRAM requirements - needs a high-end GPU
- Inference speed - slower than CLIP
Framework Integration
LangChain
from langchain.llms.base import LLM
class LLaVALLM(LLM):
    @property
    def _llm_type(self):
        # Required by LangChain's LLM base class
        return "llava"
    def _call(self, prompt, stop=None):
        # Custom LLaVA inference goes here (e.g. an ask()-style helper over a preloaded image)
        response = ...  # replace with your LLaVA generation call
        return response
llm = LLaVALLM()
Gradio App
import gradio as gr

def chat(message, history, image):
    # ChatInterface passes (message, history, *additional_inputs)
    # ask_llava: wrapper around LLaVA generation (e.g. the ask() sketch above)
    response = ask_llava(model, image, message)
    return response

demo = gr.ChatInterface(
    chat,
    additional_inputs=[gr.Image(type="pil")],
    title="LLaVA Chat"
)
demo.launch()
Resources
- GitHub: https://github.com/haotian-liu/LLaVA ⭐ 23,000+
- Paper: https://arxiv.org/abs/2304.08485
- Demo: https://llava.hliu.cc
- Models: https://huggingface.co/liuhaotian
- License: Apache 2.0