Pay only for what you use.
Wallet-based metered pricing. Top up via PayPal. Operations are deducted at sub-cent precision. Wallet balance and per-op ledger are visible in the dashboard at all times.
## How it works
- Sign up. You get $5 of free credit.
- Top up your wallet via PayPal -- $10, $25, $100, $500, or any custom amount.
- Use the API. Every upsert, query, fetch, and update is metered and deducted from your wallet in real time.
- We email you when your balance drops below $5. Auto-topup optional.
- Cancel anytime. Any unused balance is refundable for 30 days.
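To make "metered and deducted at sub-cent precision" concrete, here is a toy wallet ledger in Python. The rates mirror the unit-rate table below (dollars per 1M ops), but the class and method names are illustrative assumptions, not the server-side billing implementation.

```python
from decimal import Decimal

# Dollars per single op, derived from the per-1M rates in the table below.
# Illustrative only -- actual metering happens server-side.
RATES = {
    "upsert": Decimal("0.10") / 1_000_000,
    "query": Decimal("0.40") / 1_000_000,
    "fetch": Decimal("0.05") / 1_000_000,
    "update_metadata": Decimal("0.10") / 1_000_000,
    "delete": Decimal("0"),
}

class Wallet:
    """Toy wallet: deducts each op at sub-cent precision, keeps a ledger."""
    def __init__(self, balance: Decimal):
        self.balance = balance
        self.ledger: list[tuple[str, Decimal]] = []

    def charge(self, op: str, count: int = 1) -> Decimal:
        cost = RATES[op] * count
        self.balance -= cost
        self.ledger.append((op, cost))
        return cost

w = Wallet(Decimal("5.00"))   # the $5 free credit at sign-up
w.charge("query", 50_000)     # one day of an agent's queries costs $0.02
print(w.balance)              # balance is now exactly $4.98
```

`Decimal` avoids binary floating-point drift, which matters when individual operations cost fractions of a cent.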
## Unit rates
| Operation | Rate | Comparable Pinecone |
|---|---|---|
| Upsert | $0.10 / 1M | ~$4.00 / 1M |
| Query | $0.40 / 1M | ~$16.00 / 1M |
| Fetch by ID | $0.05 / 1M | ~$2.00 / 1M |
| Update metadata | $0.10 / 1M | ~$4.00 / 1M |
| Delete | Free | -- |
| Storage (1-bit) | $0.10 / GB / mo | ~$3.20 / GB / mo |
| HD substrate ops -- new | | |
| HD KG add / query / stats | $1.00 / 1M | not offered |
| HD analogy | $1.00 / 1M | not offered |
| HD causal (observe / intervene) | $2.00 / 1M | not offered |
| HD plan (EFE) | $5.00 / 1M | not offered |
The Comparable Pinecone column is estimated from Pinecone's posted serverless pricing. We don't charge per-namespace, per-pod, or per-read-unit -- just operations. The HD substrate operations have no equivalent in any other vector DB: they replace LLM calls for the substrate-answerable slice of agent workloads, so the relevant comparison is the per-1M-token cost of a frontier model, which runs orders of magnitude higher.
## What you actually pay
Figures assume 30-day months at the unit rates above.
- 10k upserts/day, 50k queries/day, 100MB storage -- one autonomous coding agent running 24/7: ~$0.64/mo.
- 500k upserts/day, 2M queries/day, 5GB storage -- 100 agentic users in production: ~$26/mo.
- 10M upserts/day, 40M queries/day, 200GB storage: ~$530/mo. Equivalent Pinecone bill: ~$50k/mo.
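The unit rates above make any workload easy to price out. A minimal sketch, assuming 30-day months:

```python
# Monthly bill from the Unit rates table: $0.10/1M upserts, $0.40/1M queries,
# $0.10/GB/mo storage. Assumes 30-day months.
def monthly_bill(upserts_per_day: int, queries_per_day: int, storage_gb: float) -> float:
    upserts = upserts_per_day * 30 / 1e6 * 0.10
    queries = queries_per_day * 30 / 1e6 * 0.40
    storage = storage_gb * 0.10
    return round(upserts + queries + storage, 2)

print(monthly_bill(10_000, 50_000, 0.1))          # one 24/7 coding agent
print(monthly_bill(500_000, 2_000_000, 5))        # 100 agentic users
print(monthly_bill(10_000_000, 40_000_000, 200))  # large production fleet
```

The three scenarios work out to $0.64, $26, and $530 per month respectively.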
An agent that answers 1M knowledge-graph queries via /v1/hd/kg/query pays $1.00. The equivalent on a frontier LLM -- routing the same questions through a model -- is conservatively 200--2000x more, before you count latency, determinism, or auditability. The substrate isn't cheaper Pinecone. It's cheaper reasoning.
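The multiplier is simple arithmetic. The token count and blended per-token price below are illustrative assumptions, not quotes from any provider's price sheet; plug in your own numbers.

```python
# Back-of-envelope: 1M knowledge-graph queries on the substrate vs. routed
# through a frontier LLM. Token count and LLM price are assumed, not quoted.
SUBSTRATE_RATE = 1.00            # $ per 1M kg/query ops (from the rate table)
TOKENS_PER_QUERY = 500           # assumed prompt + completion tokens per question
LLM_PRICE_PER_1M_TOKENS = 2.00   # assumed blended $/1M tokens

queries = 1_000_000
substrate_cost = SUBSTRATE_RATE * queries / 1_000_000
llm_cost = queries * TOKENS_PER_QUERY * LLM_PRICE_PER_1M_TOKENS / 1_000_000

print(f"substrate: ${substrate_cost:.2f}")
print(f"llm:       ${llm_cost:.2f}")
print(f"ratio:     {llm_cost / substrate_cost:.0f}x")
```

With these assumptions the ratio lands at 1000x, inside the 200--2000x range quoted above; heavier prompts or pricier models push it higher.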
## Enterprise
For dedicated capacity, VPC peering, SOC 2 audit support, or on-prem deployment, contact info@neruva.io.