Pinecone

Managed vector database for production-grade AI applications. Fully managed with auto-scaling; supports hybrid search (dense + sparse), metadata filtering, and namespaces. Low latency (p95 <100ms). Suited to production RAG, recommendation systems, and large-scale semantic search. Best when you want serverless, managed infrastructure.

Skill Metadata

Source: Optional; install with hermes skills install official/mlops/pinecone
Path: optional-skills/mlops/pinecone
Version: 1.0.0
Author: Orchestra Research
License: MIT
Dependencies: pinecone-client
Tags: RAG, Pinecone, Vector Database, Managed Service, Serverless, Hybrid Search, Production, Auto-Scaling, Low Latency, Recommendations

Reference: Full SKILL.md

Info

Below is the complete skill definition that Hermes loads when this skill is triggered. These are the instructions the agent sees when the skill is activated.

Pinecone - Managed Vector Database

Vector database for production-grade AI applications.

When to Use Pinecone

Use when:

  • You need a managed, serverless vector database
  • You are building a production-grade RAG application
  • You need automatic scaling
  • Low latency is critical (<100ms)
  • You don't want to manage infrastructure
  • You need hybrid search (dense + sparse vectors)

Key Metrics

  • Fully managed SaaS
  • Auto-scales to billions of vectors
  • p95 latency <100ms
  • 99.9% availability SLA

Use an Alternative Instead

  • Chroma: self-hosted, open source
  • FAISS: offline, pure similarity search
  • Weaviate: self-hosted, more feature-rich

Quick Start

Installation

pip install pinecone-client

Basic Usage

from pinecone import Pinecone, ServerlessSpec

# Initialize
pc = Pinecone(api_key="your-api-key")

# Create index
pc.create_index(
    name="my-index",
    dimension=1536,  # Must match embedding dimension
    metric="cosine",  # or "euclidean", "dotproduct"
    spec=ServerlessSpec(cloud="aws", region="us-east-1")
)

# Connect to index
index = pc.Index("my-index")

# Upsert vectors
index.upsert(vectors=[
    {"id": "vec1", "values": [0.1, 0.2, ...], "metadata": {"category": "A"}},
    {"id": "vec2", "values": [0.3, 0.4, ...], "metadata": {"category": "B"}}
])

# Query
results = index.query(
    vector=[0.1, 0.2, ...],
    top_k=5,
    include_metadata=True
)

print(results["matches"])

Core Operations

Create an Index

# Serverless (recommended)
pc.create_index(
    name="my-index",
    dimension=1536,
    metric="cosine",
    spec=ServerlessSpec(
        cloud="aws",  # or "gcp", "azure"
        region="us-east-1"
    )
)

# Pod-based (for consistent performance)
from pinecone import PodSpec

pc.create_index(
    name="my-index",
    dimension=1536,
    metric="cosine",
    spec=PodSpec(
        environment="us-east1-gcp",
        pod_type="p1.x1"
    )
)

Upsert Vectors

# Single upsert
index.upsert(vectors=[
    {
        "id": "doc1",
        "values": [0.1, 0.2, ...],  # 1536 dimensions
        "metadata": {
            "text": "Document content",
            "category": "tutorial",
            "timestamp": "2025-01-01"
        }
    }
])

# Batch upsert (recommended)
vectors = [
    {"id": f"vec{i}", "values": embedding, "metadata": metadata}
    for i, (embedding, metadata) in enumerate(zip(embeddings, metadatas))
]

index.upsert(vectors=vectors, batch_size=100)

Query Vectors

# Basic query
results = index.query(
    vector=[0.1, 0.2, ...],
    top_k=10,
    include_metadata=True,
    include_values=False
)

# With metadata filtering
results = index.query(
    vector=[0.1, 0.2, ...],
    top_k=5,
    filter={"category": {"$eq": "tutorial"}}
)

# Namespace query
results = index.query(
    vector=[0.1, 0.2, ...],
    top_k=5,
    namespace="production"
)

# Access results
for match in results["matches"]:
    print(f"ID: {match['id']}")
    print(f"Score: {match['score']}")
    print(f"Metadata: {match['metadata']}")

Metadata Filtering

# Exact match
filter = {"category": "tutorial"}

# Comparison
filter = {"price": {"$gte": 100}} # $gt, $gte, $lt, $lte, $ne

# Logical operators
filter = {
    "$and": [
        {"category": "tutorial"},
        {"difficulty": {"$lte": 3}}
    ]
}  # Also: $or

# In operator
filter = {"tags": {"$in": ["python", "ml"]}}

Namespaces

# Partition data by namespace
index.upsert(
    vectors=[{"id": "vec1", "values": [...]}],
    namespace="user-123"
)

# Query a specific namespace
results = index.query(
    vector=[...],
    namespace="user-123",
    top_k=5
)

# List namespaces
stats = index.describe_index_stats()
print(stats['namespaces'])

Hybrid Search (Dense + Sparse)

# Upsert with sparse vectors
index.upsert(vectors=[
    {
        "id": "doc1",
        "values": [0.1, 0.2, ...],  # Dense vector
        "sparse_values": {
            "indices": [10, 45, 123],  # Token IDs
            "values": [0.5, 0.3, 0.8]  # TF-IDF scores
        },
        "metadata": {"text": "..."}
    }
])

# Hybrid query. Note: the query call itself takes no weighting
# parameter; the dense/sparse balance is set client-side by scaling
# the two vectors before querying.
results = index.query(
    vector=[0.1, 0.2, ...],
    sparse_vector={
        "indices": [10, 45],
        "values": [0.5, 0.3]
    },
    top_k=5
)
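A common way to weight dense against sparse relevance is a convex combination applied client-side before the query. A minimal sketch (the helper name and signature are illustrative, not part of pinecone-client), with alpha in [0, 1] where 0 means pure sparse and 1 means pure dense:

```python
def hybrid_score_norm(dense, sparse, alpha):
    """Scale a dense vector by alpha and a sparse vector by (1 - alpha)."""
    if not 0 <= alpha <= 1:
        raise ValueError("alpha must be between 0 and 1")
    scaled_sparse = {
        "indices": sparse["indices"],
        "values": [v * (1 - alpha) for v in sparse["values"]],
    }
    scaled_dense = [v * alpha for v in dense]
    return scaled_dense, scaled_sparse

# Usage with the query above:
# dense_q, sparse_q = hybrid_score_norm(
#     [0.1, 0.2], {"indices": [10, 45], "values": [0.5, 0.3]}, alpha=0.5
# )
# results = index.query(vector=dense_q, sparse_vector=sparse_q, top_k=5)
```

Because dot-product scoring is linear in each vector, scaling the inputs this way is equivalent to blending the two similarity scores with weights alpha and (1 - alpha).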

LangChain Integration

from langchain_pinecone import PineconeVectorStore
from langchain_openai import OpenAIEmbeddings

# Create vector store
vectorstore = PineconeVectorStore.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(),
    index_name="my-index"
)

# Query
results = vectorstore.similarity_search("query", k=5)

# With metadata filter
results = vectorstore.similarity_search(
    "query",
    k=5,
    filter={"category": "tutorial"}
)

# As retriever
retriever = vectorstore.as_retriever(search_kwargs={"k": 10})

LlamaIndex Integration

from pinecone import Pinecone
from llama_index.vector_stores.pinecone import PineconeVectorStore

# Connect to Pinecone
pc = Pinecone(api_key="your-key")
pinecone_index = pc.Index("my-index")

# Create vector store
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)

# Use in LlamaIndex
from llama_index.core import StorageContext, VectorStoreIndex

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

Index Management

# List indices
indexes = pc.list_indexes()

# Describe index
index_info = pc.describe_index("my-index")
print(index_info)

# Get index stats
stats = index.describe_index_stats()
print(f"Total vectors: {stats['total_vector_count']}")
print(f"Namespaces: {stats['namespaces']}")

# Delete index
pc.delete_index("my-index")

Delete Vectors

# Delete by ID
index.delete(ids=["vec1", "vec2"])

# Delete by metadata filter (pod-based indexes only)
index.delete(filter={"category": "old"})

# Delete all vectors in a namespace
index.delete(delete_all=True, namespace="test")

# Delete all vectors in the default namespace
# (to remove the index itself, use pc.delete_index("my-index"))
index.delete(delete_all=True)

Best Practices

  1. Use serverless - auto-scales, cost-effective
  2. Batch upserts - more efficient (100-200 vectors per batch)
  3. Add metadata - enables filtering
  4. Use namespaces - isolate data per user/tenant
  5. Monitor usage - check the Pinecone dashboard
  6. Optimize filters - index frequently filtered fields
  7. Test on the free tier - 1 index, 100K vectors free
  8. Use hybrid search - higher retrieval quality
  9. Set the right dimension - must match your embedding model
  10. Back up regularly - export important data
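Practice 2 (batch upserts) can be sketched with a small chunking helper; `index` and `vectors` here stand for the connected index and the vector list from the examples above:

```python
def chunked(items, size=100):
    """Yield successive fixed-size chunks from a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# for batch in chunked(vectors, size=100):
#     index.upsert(vectors=batch)
```

Note that `index.upsert(vectors=vectors, batch_size=100)` shown earlier achieves the same result in one call; an explicit loop like this is useful when you want per-batch retries or progress logging.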

Performance

  Operation          Latency      Notes
  Upsert             ~50-100ms    per batch
  Query (p50)        ~50ms        depends on index size
  Query (p95)        ~100ms       SLA target
  Metadata filter    +10-20ms     additional overhead

Pricing (as of 2025)

Serverless

  • $0.096 per million read units
  • $0.06 per million write units
  • $0.06 per GB of storage per month
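At these rates, a back-of-envelope monthly estimate is simple arithmetic (the workload numbers below are illustrative; actual read/write units consumed depend on request and record sizes):

```python
READ_PER_MILLION = 0.096     # $ per million read units
WRITE_PER_MILLION = 0.06     # $ per million write units
STORAGE_PER_GB_MONTH = 0.06  # $ per GB-month

def monthly_cost(read_units, write_units, storage_gb):
    """Estimated monthly serverless bill in dollars."""
    return (read_units / 1e6 * READ_PER_MILLION
            + write_units / 1e6 * WRITE_PER_MILLION
            + storage_gb * STORAGE_PER_GB_MONTH)

# 10M reads + 2M writes + 5 GB stored:
# 0.96 + 0.12 + 0.30 = $1.38 per month
```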

Free Tier

  • 1 serverless index
  • 100K vectors (at 1536 dimensions)
  • Great for prototyping

Resources