tokenroute

Introduction

OpenAI-compatible LLM gateway with smart routing and transparent token billing — built for AI agents.

tokenroute is an OpenAI-compatible LLM API gateway that does three things:

  1. One API key, many providers. Point any OpenAI SDK at https://api.tokenroute.io/v1 and call OpenAI, Anthropic, Google, DeepSeek, and more — no per-vendor SDKs, no per-vendor keys.
  2. Auto-routing by prompt complexity. The gateway scores your prompt and picks the cheapest model that can handle it (SIMPLE → REASONING tiers). You stop paying GPT-4 prices to answer "what's 2+2".
  3. Agent-first surface. A tokenroute CLI + remote MCP server expose the entire lifecycle — login, key creation, top-up, usage — so Claude Code, Codex, OpenClaw, Hermes, and other agents can wire it up in a handful of commands.
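Point 1 means any stock OpenAI client works unchanged once its base URL points at the gateway. A minimal stdlib-only sketch of the request shape, assuming the gateway accepts standard chat-completions JSON (the model name and key value below are illustrative placeholders):

```python
import json
import urllib.request

BASE_URL = "https://api.tokenroute.io/v1"
API_KEY = "tr-..."  # placeholder; the real key comes from `npx tokenroute keys create`

def chat_request(prompt: str, model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Build a standard /chat/completions request aimed at the gateway."""
    body = json.dumps({
        "model": model,  # illustrative model name, not a documented default
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = chat_request("what's 2+2")
# The request targets the gateway, not a vendor endpoint:
print(req.full_url)  # https://api.tokenroute.io/v1/chat/completions
```

Only the base URL and bearer token change relative to calling a vendor directly; the payload is the usual OpenAI shape, which is why existing SDKs need no code changes.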

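Point 2 can be pictured as a score-then-threshold lookup. The sketch below is a toy, not tokenroute's actual algorithm: the SIMPLE and REASONING tier names come from the docs, but the length-based scoring heuristic, the middle tier, and the model table are illustrative assumptions.

```python
# (threshold, tier name, model) -- ordered cheapest to most capable.
TIERS = [
    (0.0, "SIMPLE", "gpt-4o-mini"),   # cheap model for trivial prompts
    (0.5, "STANDARD", "gpt-4o"),      # assumed middle tier
    (0.8, "REASONING", "o1"),         # expensive model, hard prompts only
]

def score(prompt: str) -> float:
    """Toy complexity score in [0, 1]; real scoring would be far richer."""
    return min(len(prompt) / 400, 1.0)

def route(prompt: str) -> str:
    """Pick the most capable tier whose threshold the score reaches."""
    s = score(prompt)
    chosen = TIERS[0][2]
    for threshold, _tier, model in TIERS:
        if s >= threshold:
            chosen = model
    return chosen

print(route("what's 2+2"))  # gpt-4o-mini: a short prompt routes to the cheap tier
```

The cost win is that the threshold table puts an upper bound on spend per prompt: only prompts that score into the top tier ever hit the expensive model.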
Why agent-first?

When a coding agent installs a new LLM-powered app for a user, the painful step is getting the API key plumbed in. Most providers force the user to open a browser dashboard, copy a key, paste it into .env, and restart. We collapsed that into:

npx tokenroute login                    # OAuth device-flow, browser opens
npx tokenroute keys create --name myapp # raw key shown once
npx tokenroute env >> .env              # OPENAI_API_KEY + BASE_URL written

Every CLI command supports --json output and reads the TOKENROUTE_API_KEY environment variable, so sub-agents and CI pipelines can run the same flow non-interactively.
