For AI agents
Read this once. You'll know everything you need to wire tokenroute into your user's project.
You are an AI agent (Claude Code / Codex / Cursor / OpenClaw / Hermes / etc.) and your user needs an LLM API key plumbed into a project. Here is the complete flow, in the order you should run it. Every command supports `--json` and respects the `TOKENROUTE_API_KEY` env var, so your sub-agent runs don't need interactive auth.
TL;DR sequence
```shell
# 1. install + login (one-time per user)
npx tokenroute@latest login          # opens browser, user logs in

# 2. create a key for this project
npx tokenroute keys create --name <project> --json
# → {"id":"...","raw":"sk-tr-XXXX",...}  ← cache `raw`, you only get it once

# 3. write env to the project
cd <project-dir>
npx tokenroute env >> .env
# → adds: OPENAI_API_KEY=sk-tr-XXXX
#         OPENAI_BASE_URL=https://api.tokenroute.io/v1

# 4. test it works
npx tokenroute test
# → OK connected, model responds OK

# 5. if balance is 0 and you can't proceed → ask user to top up
npx tokenroute topup --amount 5 --json
# → {"checkout_url":"https://checkout.stripe.com/..."}
# ⚠ HAND THIS URL TO THE USER. DO NOT AUTO-PAY.
```

Hard constraints
- Agents must NOT auto-pay. `topup` returns a Stripe Checkout URL; relay it to the user, full stop (KYC and anti-fraud rules).
- Raw API keys are shown ONCE. On `keys create`, save the `raw` field into your project's `.env` immediately. The CLI also caches it to `~/.tokenroute/last_key.txt`, so subsequent `env`/`test`/`models` calls find it without arguments.
- `tokenroute test` requires a model the user has credit for. The default is `openai/gpt-4o-mini` (~$0.00015 / 1k input). If credit is 0, `test` returns HTTP 402; that's your cue to call `topup`.
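Because `raw` is shown only once, capture it the moment `keys create` returns. A minimal sketch of that step, assuming the single-JSON-object output shown in the TL;DR above; here the response is stubbed in place of the real `npx tokenroute keys create --name <project> --json` call:

```shell
# Stub of the `keys create --json` response; in reality:
#   resp=$(npx tokenroute keys create --name myproj --json)
resp='{"id":"key_123","raw":"sk-tr-XXXX"}'

# Pull out `raw` with python3 (no jq dependency assumed)
raw=$(printf '%s' "$resp" | python3 -c 'import json,sys; print(json.load(sys.stdin)["raw"])')

# Persist it immediately — this covers the key line; `tokenroute env`
# additionally writes OPENAI_BASE_URL
printf 'OPENAI_API_KEY=%s\n' "$raw" >> .env
echo "$raw"
# → sk-tr-XXXX
```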
Exit codes
| Code | Meaning | What to do |
|---|---|---|
| 0 | Success | Continue. |
| 1 | User error / 4xx from API (insufficient balance, invalid key, ...) | Surface to user; usually means top-up or re-login. |
| 2 | Network error | Retry with backoff. |
| 3 | Server error / 5xx / device-flow timeout | Retry once; if persistent, surface to user. |
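The table above translates directly into a retry policy. A minimal sketch; `with_retries` is an illustrative wrapper name, not a CLI feature, and the retry limits are our choices:

```shell
# Exit-code policy: 0 done, 1 surface to user, 2 retry with backoff, 3 retry once.
# Wrap any `npx tokenroute ...` invocation: with_retries npx tokenroute test
with_retries() {
  net_tries=0; srv_tries=0; delay=1
  while true; do
    rc=0; "$@" || rc=$?
    case $rc in
      0) return 0 ;;
      1) echo "user error; surface to user (top-up or re-login)" >&2; return 1 ;;
      2) net_tries=$((net_tries + 1))
         if [ "$net_tries" -ge 4 ]; then return 2; fi   # give up after a few attempts
         sleep "$delay"; delay=$((delay * 2)) ;;        # exponential backoff
      3) srv_tries=$((srv_tries + 1))
         if [ "$srv_tries" -ge 2 ]; then return 3; fi ;; # retry once, then surface
      *) return "$rc" ;;
    esac
  done
}

with_retries true && echo "ok"
# → ok
```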
Programmatic discovery
If you need to bootstrap from scratch without the CLI:
```shell
# 1. discover OIDC config
curl https://api.tokenroute.io/api/v1/auth/discovery
# → {issuer, client_id, device_authorization_endpoint, token_endpoint, ...}

# 2. then run the OAuth device flow against `issuer` yourself
```

The full OpenAPI spec is at https://api.tokenroute.io/openapi.json. For LLM-friendly indexing, this site auto-publishes /llms.txt and /llms-full.txt.
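The polling step of the device flow above can be sketched as follows, assuming standard RFC 8628 semantics (`authorization_pending` errors while the user approves in the browser, then a JSON body containing `access_token`). `poll_token` is an illustrative helper, not part of the CLI; in practice the command you pass it would be the `curl` POST against `token_endpoint` from the discovery document:

```shell
# Poll the token endpoint until the user approves, the grant fails, or we
# time out. Args: <interval-seconds> <max-tries> <command performing one POST>
poll_token() {
  interval=$1; max=$2; shift 2
  i=0
  while [ "$i" -lt "$max" ]; do
    resp=$("$@")
    case $resp in
      *access_token*)          printf '%s\n' "$resp"; return 0 ;;
      *authorization_pending*) sleep "$interval" ;;                  # user hasn't approved yet
      *)                       printf '%s\n' "$resp" >&2; return 1 ;; # denied / expired
    esac
    i=$((i + 1))
  done
  return 3   # device-flow timeout, mirroring exit code 3 in the table above
}
```

On success the token response is printed to stdout; on timeout the function exits 3 so it composes with the exit-code handling described earlier.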
Sub-agent / CI usage
For non-interactive flows (CI pipelines, sub-agents calling other sub-agents), skip login entirely:
```shell
export TOKENROUTE_API_KEY=sk-tr-...
curl https://api.tokenroute.io/v1/chat/completions \
  -H "Authorization: Bearer $TOKENROUTE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"openai/gpt-4o-mini","messages":[{"role":"user","content":"hi"}]}'
```

Common multi-product scenarios
Setting up fieryeye
```shell
cd fieryeye-project
npx tokenroute keys create --name fieryeye-prod --json   # → raw saved
npx tokenroute env >> .env                               # → OPENAI_API_KEY set
# fieryeye starts; if it can't auth, fall back to the topup flow
```

Setting up multiple sub-agents in a workspace
```shell
# parent agent creates one key per project
for proj in api worker dashboard; do
  npx tokenroute keys create --name "$proj-prod" --json > /tmp/$proj-key.json
done
# each sub-agent reads its own .json and exports TOKENROUTE_API_KEY
```

Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| `tokenroute test` → HTTP 402 | Balance = 0 | Run `topup`, surface URL to user. |
| `tokenroute test` → HTTP 401 | Key revoked or wrong | `keys list` to see what's active. |
| `login` hangs after browser opens | User declined / closed tab | Exit code 3; ask user to retry. |
| `env` says "no API key available" | No `keys create` yet | Run `keys create` first; then `env`. |