# Assistiv Gateway

> Multi-tenant AI gateway. Gives your platform unified access to LLM providers, MCP tool integrations, billing, end-user management, and real-time outbound webhooks through a single OpenAI-compatible API.

Assistiv provisions your platform and hands you an `sk-plat_*` key. You connect your own LLM provider keys (OpenAI, Anthropic, Google, xAI), create end users, top up your wallet, and call inference; your end users get their own `sk-eu_*` keys for runtime operations. MCP tool connections (GitHub, Slack, Zoho, Zendesk) let your AI agents call real tools with per-user OAuth authorization.

Budget operations (`/budget/topup`, `/budget/debit`, `PATCH /budget`) are strictly idempotent via `Idempotency-Key`, every state change lands in a full transaction ledger (`/budget/transactions`), and a user can be paused via `is_suspended` without losing state. Outbound webhooks push real-time events (`budget.topped_up`, `budget.low_balance`, `budget.suspended`, `budget.unsuspended`, `budget.debited`) via Svix, so no polling is required.

## Integration Guide (step-by-step)

Follow these in order. Each is a self-contained file with full API details.

1. [Setup & Hello World](/docs/integration/step-1-setup-and-hello-world.txt) — Dashboard config, server-side setup, verify connection
2. [Provision End Users](/docs/integration/step-2-provision-end-users.txt) — Create users, manage API keys, idempotent create, scope enforcement
3. [Budgets & Rate Limits](/docs/integration/step-3-budgets-and-rate-limits.txt) — Per-user USD budgets, idempotent topup/debit, suspension, full ledger via `/budget/transactions`, wallet, rate-limit overrides
4. [Inference](/docs/integration/step-4-inference.txt) — Chat Completions, Responses API, Models, streaming, tool calling, 402 code branching (budget_suspended vs budget_exhausted vs wallet_insufficient)
5. [MCP Tools (Optional)](/docs/integration/step-5-mcp-tools.txt) — OAuth flow, hosted execution, free tier
6. [Monitor & Operate](/docs/integration/step-6-monitor-and-operate.txt) — Outbound webhooks via Svix, self-service, logs, ledger-based cost reconciliation

## Full Reference

- [Integration Index + Examples](/llms-full.txt) — Complete TS/Python examples, troubleshooting, key constraints

## API Reference (website docs)

- [End Users](/docs/api-reference/end-users): Create users with external_id, auto-generated API keys, optional metadata, opening ledger row on auto-create
- [LLM Configurations](/docs/api-reference/llm-configs): Connect OpenAI, Anthropic, Google, xAI provider keys (AES-256-GCM encrypted at rest)
- [Wallet](/docs/api-reference/wallet): Platform USD balance, topup, transaction log, atomic debit on each inference call
- [Budgets](/docs/api-reference/budgets): Per-user USD budgets, one_time/daily/monthly periods, auto-replenish, is_suspended flag, idempotent topup/debit, full transaction ledger
- [Outbound Webhooks](/docs/api-reference/webhook-endpoints): Register endpoints for budget events, event-type filters, Svix-backed delivery + replay portal
- [Chat Completions](/docs/api-reference/chat-completions): OpenAI-compatible inference with streaming, tool calling, structured output
- [Models](/docs/api-reference/models): List available models filtered by your platform's enabled provider configs
- [API Keys](/docs/api-reference/api-keys): Create/revoke platform and end-user keys, key prefix visibility, immediate Redis invalidation
- [MCP / Tools](/docs/api-reference/mcp): OAuth connect flow, list apps/connections, execute tools via JSON-RPC 2.0
- [Platforms](/docs/api-reference/platforms): Platform settings, team management, default budget config
- [Agents](/docs/api-reference/agents): Agent execution via chat completions with tools (CRUD management coming soon)
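The idempotency guarantee on budget operations means a retried topup is applied at most once, provided the retry reuses the same `Idempotency-Key`. A minimal sketch of how a client might construct such a request; the host, exact endpoint path, and request-body field names here are assumptions for illustration, not a confirmed schema:

```python
import json
import uuid

BASE_URL = "https://api.assistiv.example"  # placeholder host, not the real endpoint


def build_topup_request(end_user_id, amount_usd, idempotency_key=None):
    """Build an idempotent topup request. Reusing the same
    Idempotency-Key on a retry means the gateway credits at most once."""
    key = idempotency_key or str(uuid.uuid4())
    return {
        "method": "POST",
        "url": f"{BASE_URL}/end-users/{end_user_id}/budget/topup",
        "headers": {
            "Authorization": "Bearer sk-plat_...",  # platform key, server-side only
            "Idempotency-Key": key,                 # reuse this exact value on retry
            "Content-Type": "application/json",
        },
        "body": json.dumps({"amount": amount_usd}),
    }


req = build_topup_request("eu_123", 25.00)
# Network error? Replay with the SAME key; the server deduplicates.
retry = build_topup_request("eu_123", 25.00, req["headers"]["Idempotency-Key"])
```

Generating a fresh UUID per logical operation (not per HTTP attempt) is what makes the retry safe.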
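Step 4 names three distinct 402 conditions (budget_suspended, budget_exhausted, wallet_insufficient) that a caller should branch on rather than treating every 402 the same. A hedged sketch of that branching; the error-body shape (`{"error": {"code": ...}}`) is an assumption for illustration:

```python
def handle_payment_error(error_body):
    """Map a 402 error body to the remediation implied by its code.
    budget_suspended: user paused, state intact; budget_exhausted: per-user
    budget spent; wallet_insufficient: platform-level wallet is empty."""
    code = error_body.get("error", {}).get("code")
    if code == "budget_suspended":
        return "user is paused (is_suspended); unsuspend via PATCH /budget"
    if code == "budget_exhausted":
        return "per-user budget spent; credit it via /budget/topup"
    if code == "budget_suspended" or code == "budget_exhausted":
        return ""  # unreachable; kept branches above are exhaustive for user-level codes
    if code == "wallet_insufficient":
        return "platform wallet empty; top up the platform wallet"
    return f"unhandled 402 code: {code!r}"
```

The key operational distinction: the first two are per-end-user conditions your end user (or your support flow) can fix, while wallet_insufficient is a platform-level condition only you can fix.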
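Because the inference surface is OpenAI-compatible, a runtime call is just a standard Chat Completions request authenticated with the end user's `sk-eu_*` key. A minimal sketch, assuming a placeholder host and the standard OpenAI request shape (the model name here is illustrative):

```python
import json


def build_chat_request(end_user_key, model, user_message):
    """Build an OpenAI-shaped Chat Completions request routed through the
    gateway. Authenticating with the end-user key is what lets the gateway
    enforce that user's budget and rate limits per call."""
    return {
        "method": "POST",
        "url": "https://api.assistiv.example/v1/chat/completions",  # placeholder host
        "headers": {
            "Authorization": f"Bearer {end_user_key}",  # sk-eu_* runtime key
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
            "stream": False,  # set True for server-sent-event streaming
        }),
    }


req = build_chat_request("sk-eu_abc123", "gpt-4o", "Hello!")
```

Since the wire format is OpenAI-compatible, existing OpenAI SDKs should also work by pointing their base URL at the gateway instead of hand-building requests like this.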
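Since outbound webhooks are delivered via Svix, incoming events can be authenticated with the standard Svix signing scheme: HMAC-SHA256 over `{msg_id}.{timestamp}.{payload}` keyed by the base64-decoded portion of the `whsec_...` endpoint secret. A sketch of that verification (in production you would also enforce a timestamp tolerance to reject replays, omitted here for brevity):

```python
import base64
import hashlib
import hmac


def verify_svix_signature(secret, msg_id, timestamp, payload, signature_header):
    """Verify a Svix-delivered webhook such as budget.low_balance.
    secret is the 'whsec_...' endpoint secret; msg_id, timestamp, and
    signature_header come from the svix-id / svix-timestamp /
    svix-signature request headers."""
    key = base64.b64decode(secret.removeprefix("whsec_"))
    signed_content = f"{msg_id}.{timestamp}.{payload}".encode()
    expected = base64.b64encode(
        hmac.new(key, signed_content, hashlib.sha256).digest()
    ).decode()
    # The header may carry several space-separated versioned signatures,
    # e.g. "v1,abc v1,def"; accept if any v1 entry matches.
    for candidate in signature_header.split():
        version, _, sig = candidate.partition(",")
        if version == "v1" and hmac.compare_digest(sig, expected):
            return True
    return False
```

Svix's official SDKs wrap exactly this check; the sketch is mainly useful to understand what the replay portal re-signs when you redeliver an event.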