Know what your AI costs.
LLM spend tracking, smart routing, and rate limit management for developers. One dashboard for every provider.
Free forever. No credit card required.
Set up in 30 seconds
Add to your MCP client config:

```json
{
  "mcpServers": {
    "tokenmeter": {
      "url": "https://mcp.tokenmeter.sh",
      "headers": {
        "Authorization": "Bearer tm_your_api_key"
      }
    }
  }
}
```

Works with Claude, Cursor, Yaw, VS Code, and any MCP-compatible client.
Cost Calculator
See what your workload costs across every provider. Daily and monthly figures assume 100 calls per day over a 30-day month.
| Model | Provider | Per Call | Daily | Monthly |
|---|---|---|---|---|
| GPT-5.4 Nano | openai | $0.0045 | $0.45 | $13.50 |
| GPT-5.4 Mini | openai | $0.0165 | $1.65 | $49.50 |
| o4 Mini | openai | $0.0198 | $1.98 | $59.40 |
| Claude Haiku 4.5 | anthropic | $0.0200 | $2.00 | $60.00 |
| Gemini 3 | google | $0.0440 | $4.40 | $132.00 |
| GPT-5.4 | openai | $0.0550 | $5.50 | $165.00 |
| Claude Sonnet 4.6 | anthropic | $0.0600 | $6.00 | $180.00 |
| Claude Opus 4.6 | anthropic | $0.1000 | $10.00 | $300.00 |
| o3 | openai | $0.1800 | $18.00 | $540.00 |
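The Daily and Monthly columns follow directly from the per-call price. A minimal sketch of the arithmetic, assuming 100 calls per day and a 30-day month (the usage profile implied by the figures above):

```python
# Derive daily and monthly spend from a per-call price.
# Assumes 100 calls/day and a 30-day month -- the usage profile
# implied by the calculator's figures; adjust for your workload.

CALLS_PER_DAY = 100
DAYS_PER_MONTH = 30

def project_spend(per_call: float) -> tuple[float, float]:
    """Return (daily, monthly) spend in dollars for one model."""
    daily = round(per_call * CALLS_PER_DAY, 2)
    monthly = round(daily * DAYS_PER_MONTH, 2)
    return daily, monthly

# e.g. GPT-5.4 Nano at $0.0045/call -> $0.45/day, $13.50/month
print(project_spend(0.0045))
```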
Everything you need
Real-Time Spend Tracking
See exactly what you're spending across Anthropic, OpenAI, Google, and 7 more providers. One dashboard instead of five billing pages.
Smart Routing
Automatically route requests to the cheapest or fastest provider. Set fallback chains so you never get blocked by rate limits.
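In spirit, cheapest-first routing with a fallback chain works like the sketch below. This is illustrative only: the provider names, prices, and `send` hook are made-up stand-ins, not Token Meter's API.

```python
# Illustrative sketch of cheapest-first routing with fallback.
# Providers, prices, and the `send` hook are hypothetical stand-ins.

PER_CALL_PRICE = {
    "openai/gpt-5.4-mini": 0.0165,
    "anthropic/claude-haiku-4.5": 0.0200,
    "openai/gpt-5.4": 0.0550,
}

def route(prompt: str, send, unavailable=frozenset()) -> str:
    """Try providers cheapest-first; skip any that are rate-limited."""
    chain = sorted(PER_CALL_PRICE, key=PER_CALL_PRICE.get)
    for model in chain:
        if model in unavailable:  # e.g. rate-limited right now
            continue
        return send(model, prompt)
    raise RuntimeError("all providers in the fallback chain are unavailable")

# If the cheapest model is rate-limited, the next-cheapest handles the call:
picked = route("hi", send=lambda model, prompt: model,
               unavailable={"openai/gpt-5.4-mini"})
print(picked)  # anthropic/claude-haiku-4.5
```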
Budget Alerts & Hard Caps
Get warned before you blow past thresholds. Set hard caps that stop requests when a budget is hit.
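A hard cap is essentially a pre-flight gate on each request. A hypothetical sketch (not Token Meter's actual enforcement logic), assuming the estimated call cost is known up front:

```python
# Hypothetical sketch of a budget with an alert threshold and a hard cap:
# refuse a call if its estimated cost would push spend past the limit.

class BudgetExceeded(Exception):
    pass

class Budget:
    def __init__(self, cap_usd: float, alert_at: float = 0.8):
        self.cap = cap_usd
        self.alert_at = alert_at   # warn at 80% of the cap by default
        self.spent = 0.0

    def charge(self, estimated_cost: float) -> None:
        if self.spent + estimated_cost > self.cap:
            raise BudgetExceeded(f"cap ${self.cap:.2f} would be exceeded")
        self.spent += estimated_cost
        if self.spent >= self.cap * self.alert_at:
            print(f"warning: {self.spent / self.cap:.0%} of budget used")

budget = Budget(cap_usd=10.00)
budget.charge(6.00)   # fine
budget.charge(2.50)   # fine, but trips the 80% warning
```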
MCP Native
Works inside Claude, Cursor, Yaw, VS Code — any MCP-compatible client. Just add a URL, no local install needed.
Anomaly Detection
Token Meter flags unusual spend spikes automatically. Know when something is wrong before the invoice hits.
Team Cost Attribution
Tag sessions by project, track per-member spend, set org-level budgets. Know exactly where the money goes.
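Per-project attribution is, at bottom, a grouped rollup over tagged sessions. A toy sketch with invented session records:

```python
# Toy rollup of tagged session costs by project and by member.
# The session records below are invented for illustration.
from collections import defaultdict

sessions = [
    {"project": "search",  "member": "ana", "cost": 1.20},
    {"project": "search",  "member": "ben", "cost": 0.80},
    {"project": "chatbot", "member": "ana", "cost": 2.50},
]

def rollup(records, key):
    """Sum session costs grouped by the given tag (project, member, ...)."""
    totals = defaultdict(float)
    for r in records:
        totals[r[key]] += r["cost"]
    return dict(totals)

print(rollup(sessions, "project"))
print(rollup(sessions, "member"))
```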
Pricing
Start free. Upgrade when you need more.
Free
For individual developers getting started
- Spend summary (today, week, month)
- Session cost tracking
- Model pricing lookup
- Budget status checking
- 5 connected providers
- 7-day data retention
- MCP server access
Pro
Full visibility into your LLM spend with analytics and budget management.
- Everything in Free
- Detailed spend breakdown by provider, model, project
- Cost trend analysis (daily/weekly)
- Top models by spend or volume
- Cost estimation across providers
- Model comparison tool
- Anomaly detection (spend spikes)
- Budget alerts and hard caps
- Session tagging by project
- CSV/JSON export
- 90-day data retention
Gateway
Smart routing and automatic failover. Never hit a rate limit again.
- Everything in Pro
- LLM API gateway (OpenAI-compatible)
- Smart routing (cheapest, fastest, least-busy)
- Automatic failover between providers
- Rate limit monitoring and awareness
- Fallback chain configuration
- Per-project spend quotas
- P50/P95/P99 latency reporting
- Model aliases (e.g., `fast` → gpt-5.4-mini)
- Load balancing across API keys
- Retry with exponential backoff
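Of the Gateway behaviors, retry with exponential backoff is the easiest to picture. A generic sketch of the pattern (the flaky `call` is a stand-in for a rate-limited provider request; jitter omitted for brevity):

```python
# Generic retry-with-exponential-backoff sketch; the flaky `call`
# below stands in for a rate-limited provider request.
import time

def with_backoff(call, retries: int = 4, base_delay: float = 0.5):
    """Retry `call` up to `retries` times, doubling the delay each time."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("429: rate limited")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # succeeds on the third try
```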
Team
Manage shared AI spend across your team with attribution and controls.
- Everything in Gateway
- Multi-user dashboards
- Per-member spend tracking
- Organization-level budgets
- Team spend rollups
- SSO (SAML/OIDC)
- 1-year data retention
- Priority support