Voyage AI

Voyage AI Embedding Cost Calculator

voyage-3 leads MTEB benchmarks with a score of 67 — better than OpenAI's best model, at less than half the price. Calculate your embedding costs at scale.

Embedding Cost Calculator

Enter your workload details below

Total documents to embed

Typical document length in tokens (~750 words = 1000 tokens)

Search/retrieval queries per month

Typical query length in tokens


Total Tokens to Embed

50.0M

50,000,000 tokens

Embedding Cost

$1.00

One-time cost to embed all documents

Monthly Query Cost

$0.0500

50,000 queries/mo

Total Monthly Cost

$1.05

Embedding + query costs

Annual Cost

$12.60

Cost Per Document

$0.00002

Cheapest Alternative

Switch to text-embedding-005

Google · 768 dims · MTEB 63

Save $1.05/month ($12.60/year)
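The figures above follow from simple per-token arithmetic. A minimal sketch, assuming the example workload uses a $0.02 per 1M-token rate (the rate that reproduces the $1.00 figure; this matches voyage-3-lite's pricing):

```python
def embedding_cost(total_tokens: int, price_per_million: float) -> float:
    """One-time cost to embed a corpus, in dollars."""
    return round(total_tokens / 1_000_000 * price_per_million, 6)

def monthly_query_cost(queries_per_month: int, tokens_per_query: int,
                       price_per_million: float) -> float:
    """Recurring cost of embedding search queries, in dollars per month."""
    return round(queries_per_month * tokens_per_query / 1_000_000
                 * price_per_million, 6)

RATE = 0.02  # $/1M tokens (assumption: voyage-3-lite tier, see lead-in)

embed = embedding_cost(50_000_000, RATE)        # $1.00 one-time
queries = monthly_query_cost(50_000, 50, RATE)  # $0.05/month
monthly_total = embed + queries                 # $1.05
# Note: the calculator annualizes the combined monthly figure,
# i.e. it multiplies (embed + queries) by 12 to get $12.60.
annual = monthly_total * 12
```

The cost-per-document figure falls out the same way: $1.00 spread over 50,000 one-thousand-token documents is $0.00002 each.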

Vector database providers

Once you generate embeddings, you need somewhere to store and query them. These vector databases handle similarity search at scale.

Pinecone · Weaviate · Qdrant · Chroma · Milvus
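The core operation these databases provide can be sketched at toy scale as a brute-force cosine-similarity search (an illustration only, not any provider's API; production engines use approximate indexes such as HNSW or IVF to handle millions of vectors):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, corpus, k=2):
    """Return indices of the k corpus vectors most similar to the query."""
    ranked = sorted(range(len(corpus)),
                    key=lambda i: cosine(query, corpus[i]),
                    reverse=True)
    return ranked[:k]

# Toy 2-dimensional "embeddings" standing in for real 512/1,024-dim vectors.
corpus = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
top_k([1.0, 0.1], corpus)  # nearest neighbours: index 0, then index 2
```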

Need help optimizing AI costs?

Digital Signet builds AI-powered systems and provides fractional CTO leadership. 20+ years shipping software.

At the workload above, this costs you ~$13/year

We'll identify the top 3 drivers and give you a 90-day mitigation plan.

Get a Free Exposure Teardown →

Or email Oliver directly → oliver@digitalsignet.com

Voyage AI Embedding Models

| Model | Price / 1M Tokens | Dimensions | Max Tokens | MTEB Score | Notes |
|---|---|---|---|---|---|
| voyage-3 | $0.06 | 1,024 | 32,000 | 67 | Best general quality (top MTEB) |
| voyage-3-lite | $0.02 | 512 | 32,000 | 62 | Budget tier |
| voyage-finance-2 | $0.12 | 1,024 | 32,000 | N/A | Financial docs |
| voyage-law-2 | $0.12 | 1,024 | 32,000 | N/A | Legal docs |

32K Token Context Advantage

Voyage models support 32,000 tokens per embedding — 4x OpenAI's 8,191 limit. This means entire articles, legal contracts, or research papers can be embedded as single vectors, dramatically reducing chunking complexity and improving retrieval coherence.
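The chunking savings are easy to quantify. A sketch assuming naive fixed-size splitting with no overlap (real chunkers often add overlap between chunks, which only widens the gap):

```python
import math

def chunks_needed(doc_tokens: int, max_tokens: int) -> int:
    """Number of fixed-size, non-overlapping chunks a document requires."""
    return math.ceil(doc_tokens / max_tokens)

doc = 30_000  # e.g. a long legal contract, measured in tokens

chunks_needed(doc, 8_191)   # OpenAI's limit  -> 4 chunks, 4 vectors to manage
chunks_needed(doc, 32_000)  # Voyage's limit  -> 1 vector, no reassembly logic
```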

Frequently Asked Questions

How much do Voyage AI embeddings cost?

Voyage AI offers two main embedding tiers: voyage-3 at $0.06 per 1M tokens (the highest-quality option) and voyage-3-lite at $0.02 per 1M tokens (budget tier). Both support up to 32,000 tokens per input — 4x the limit of OpenAI's models. voyage-3 leads MTEB benchmarks with a score of 67, making it the top-performing general embedding model.

Is Voyage AI better than OpenAI for embeddings?

Voyage-3 outperforms OpenAI text-embedding-3-large on MTEB benchmarks (67 vs 65) while costing less ($0.06 vs $0.13 per 1M tokens). Voyage-3-lite matches text-embedding-3-small on quality (both ~62 MTEB) at the same price ($0.02/1M). Voyage's primary advantage is the 32,000 token context limit — 4x OpenAI's 8,191 — which reduces chunking complexity for long documents.
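At the calculator's example corpus of 50M tokens, the quoted rates work out as follows (straightforward arithmetic on the prices above):

```python
tokens = 50_000_000  # the example corpus from the calculator

voyage_3 = tokens / 1_000_000 * 0.06      # voyage-3 at $0.06/1M  -> $3.00
openai_large = tokens / 1_000_000 * 0.13  # text-embedding-3-large -> $6.50
savings = openai_large - voyage_3         # $3.50 saved, at a higher MTEB score
```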

What makes Voyage AI embeddings unique?

Voyage AI's key differentiators are: (1) best-in-class MTEB quality scores — voyage-3 leads the MTEB leaderboard among general-purpose models, (2) 32,000 token context window — enabling embedding of full documents without aggressive chunking, (3) domain-specific models like voyage-finance-2 and voyage-law-2 for specialised use cases, and (4) competitive pricing relative to quality.

When should I use voyage-3 vs voyage-3-lite?

Use voyage-3 ($0.06/1M) when retrieval quality is critical — complex RAG systems, legal or financial document search, or any application where precision directly impacts user outcomes. Use voyage-3-lite ($0.02/1M) for high-volume, quality-tolerant use cases like bulk classification, content deduplication, or recommendation systems where marginal quality differences have low impact.