# Authentication

# OAuth PKCE

Users can connect to OpenRouter in one click using [Proof Key for Code Exchange (PKCE)](https://oauth.net/2/pkce/). Here's a step-by-step guide:

## PKCE Guide

### Step 1: Send your user to OpenRouter

To start the PKCE flow, send your user to OpenRouter's `/auth` URL with a `callback_url` parameter pointing back to your site:

```txt title="With S256 Code Challenge (Recommended)" wordWrap
https://openrouter.ai/auth?callback_url=<YOUR_SITE_URL>&code_challenge=<CODE_CHALLENGE>&code_challenge_method=S256
```

```txt title="With Plain Code Challenge" wordWrap
https://openrouter.ai/auth?callback_url=<YOUR_SITE_URL>&code_challenge=<CODE_CHALLENGE>&code_challenge_method=plain
```

```txt title="Without Code Challenge" wordWrap
https://openrouter.ai/auth?callback_url=<YOUR_SITE_URL>
```

The `code_challenge` parameter is optional but recommended.

Your user will be prompted to log in to OpenRouter and authorize your app. After authorization, they will be redirected back to your site with a `code` parameter in the URL.

For maximum security, set `code_challenge_method` to `S256`, and set `code_challenge` to the base64url encoding of the SHA-256 hash of `code_verifier`. For more info, [visit Auth0's docs](https://auth0.com/docs/get-started/authentication-and-authorization-flow/call-your-api-using-the-authorization-code-flow-with-pkce#parameters).

#### How to Generate a Code Challenge

The following example leverages the Web Crypto API and the Buffer API to generate a code challenge for the S256 method.
You will need a bundler to use the Buffer API in the web browser:

```typescript title="Generate Code Challenge"
import { Buffer } from 'buffer';

async function createSHA256CodeChallenge(input: string) {
  const encoder = new TextEncoder();
  const data = encoder.encode(input);
  const hash = await crypto.subtle.digest('SHA-256', data);
  return Buffer.from(hash).toString('base64url');
}

const codeVerifier = 'your-random-string';
const generatedCodeChallenge = await createSHA256CodeChallenge(codeVerifier);
```

#### Localhost Apps

If your app is a local-first app or otherwise doesn't have a public URL, it is recommended to test with `http://localhost:3000` as the callback and referrer URLs. When moving to production, replace the localhost/private referrer URL with a public GitHub repo or a link to your project website.

### Step 2: Exchange the code for a user-controlled API key

After the user logs in with OpenRouter, they are redirected back to your site with a `code` parameter in the URL.

Extract this code using the browser API:

```typescript title="Extract Code"
const urlParams = new URLSearchParams(window.location.search);
const code = urlParams.get('code');
```

Then use it to make an API call to `https://openrouter.ai/api/v1/auth/keys` to exchange the code for a user-controlled API key:

```typescript title="Exchange Code"
const response = await fetch('https://openrouter.ai/api/v1/auth/keys', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    code: '<CODE_FROM_QUERY_PARAM>',
    code_verifier: '<CODE_VERIFIER>', // If code_challenge was used
    code_challenge_method: '<CODE_CHALLENGE_METHOD>', // If code_challenge was used
  }),
});

const { key } = await response.json();
```

And that's it for the PKCE flow!

### Step 3: Use the API key

Store the API key securely within the user's browser or in your own database, and use it to [make OpenRouter requests](/docs/api/reference/overview).
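As a minimal illustration of the browser-side storage option, a pair of hypothetical helpers might look like the sketch below. The `KeyStore` interface and the storage key name are assumptions for illustration, not part of any OpenRouter API; in the browser you would pass `localStorage` as the store.

```typescript
// Hypothetical helpers for persisting the user-controlled key.
// Any Storage-like object works; in the browser, pass `localStorage`.
// Note: anything kept in localStorage is readable by any script on your
// origin, so a server-side database is safer for sensitive deployments.
interface KeyStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const STORAGE_KEY = 'openrouter_user_key'; // illustrative name

function saveUserKey(store: KeyStore, apiKey: string): void {
  store.setItem(STORAGE_KEY, apiKey);
}

function loadUserKey(store: KeyStore): string | null {
  return store.getItem(STORAGE_KEY);
}
```

Keeping the storage behind a small interface like this also makes it easy to swap in a cookie- or server-backed store later without touching the rest of the flow.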
```typescript title="TypeScript SDK"
import { OpenRouter } from '@openrouter/sdk';

const openRouter = new OpenRouter({
  apiKey: key, // The key from Step 2
});

const completion = await openRouter.chat.send({
  model: 'openai/gpt-5.2',
  messages: [
    {
      role: 'user',
      content: 'Hello!',
    },
  ],
  stream: false,
});

console.log(completion.choices[0].message);
```

```typescript title="TypeScript (fetch)"
fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${key}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'openai/gpt-5.2',
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  }),
});
```

## Error Codes

* `400 Invalid code_challenge_method`: Make sure you're using the same code challenge method in step 1 as in step 2.
* `403 Invalid code or code_verifier`: Make sure your user is logged in to OpenRouter, and that `code_verifier` and `code_challenge_method` are correct.
* `405 Method Not Allowed`: Make sure you're using `POST` and `HTTPS` for your request.

## External Tools

* [PKCE Tools](https://example-app.com/pkce)
* [Online PKCE Generator](https://tonyxu-io.github.io/pkce-generator/)

# Management API Keys

OpenRouter provides endpoints to programmatically manage your API keys, enabling key creation and management for applications that need to distribute or rotate keys automatically.

## Creating a Management API Key

To use the key management API, you first need to create a Management API key:

1. Go to the [Management API Keys page](https://openrouter.ai/settings/management-keys)
2. Click "Create New Key"
3. Complete the key creation process

Management keys cannot be used to make API calls to OpenRouter's completion endpoints - they are exclusively for administrative operations.
## Use Cases

Common scenarios for programmatic key management include:

* **SaaS Applications**: Automatically create unique API keys for each customer instance
* **Key Rotation**: Regularly rotate API keys for security compliance
* **Usage Monitoring**: Track key usage and automatically disable keys that exceed limits (with optional daily/weekly/monthly limit resets)

## Example Usage

All key management endpoints are under `/api/v1/keys` and require a Management API key in the Authorization header.

```typescript title="TypeScript SDK"
import { OpenRouter } from '@openrouter/sdk';

const openRouter = new OpenRouter({
  apiKey: 'your-management-key', // Use your Management API key
});

// List the most recent 100 API keys
const keys = await openRouter.apiKeys.list();

// You can paginate using the offset parameter
const keysPage2 = await openRouter.apiKeys.list({ offset: 100 });

// Create a new API key
const newKey = await openRouter.apiKeys.create({
  name: 'Customer Instance Key',
  limit: 1000, // Optional credit limit
});

// Get a specific key
const keyHash = '<YOUR_KEY_HASH>';
const key = await openRouter.apiKeys.get(keyHash);

// Update a key
const updatedKey = await openRouter.apiKeys.update(keyHash, {
  name: 'Updated Key Name',
  disabled: true, // Optional: Disable the key
  includeByokInLimit: false, // Optional: control BYOK usage in limit
  limitReset: 'daily', // Optional: reset limit every day at midnight UTC
});

// Delete a key
await openRouter.apiKeys.delete(keyHash);
```

```python title="Python"
import requests

MANAGEMENT_API_KEY = "your-management-key"
BASE_URL = "https://openrouter.ai/api/v1/keys"

# List the most recent 100 API keys
response = requests.get(
    BASE_URL,
    headers={
        "Authorization": f"Bearer {MANAGEMENT_API_KEY}",
        "Content-Type": "application/json"
    }
)

# You can paginate using the offset parameter
response = requests.get(
    f"{BASE_URL}?offset=100",
    headers={
        "Authorization": f"Bearer {MANAGEMENT_API_KEY}",
        "Content-Type": "application/json"
    }
)

# Create a new API key
response = requests.post(
    f"{BASE_URL}/",
    headers={
        "Authorization": f"Bearer {MANAGEMENT_API_KEY}",
        "Content-Type": "application/json"
    },
    json={
        "name": "Customer Instance Key",
        "limit": 1000  # Optional credit limit
    }
)

# Get a specific key
key_hash = "<YOUR_KEY_HASH>"
response = requests.get(
    f"{BASE_URL}/{key_hash}",
    headers={
        "Authorization": f"Bearer {MANAGEMENT_API_KEY}",
        "Content-Type": "application/json"
    }
)

# Update a key
response = requests.patch(
    f"{BASE_URL}/{key_hash}",
    headers={
        "Authorization": f"Bearer {MANAGEMENT_API_KEY}",
        "Content-Type": "application/json"
    },
    json={
        "name": "Updated Key Name",
        "disabled": True,  # Optional: Disable the key
        "include_byok_in_limit": False,  # Optional: control BYOK usage in limit
        "limit_reset": "daily"  # Optional: reset limit every day at midnight UTC
    }
)

# Delete a key
response = requests.delete(
    f"{BASE_URL}/{key_hash}",
    headers={
        "Authorization": f"Bearer {MANAGEMENT_API_KEY}",
        "Content-Type": "application/json"
    }
)
```

```typescript title="TypeScript (fetch)"
const MANAGEMENT_API_KEY = 'your-management-key';
const BASE_URL = 'https://openrouter.ai/api/v1/keys';

// List the most recent 100 API keys
const listKeys = await fetch(BASE_URL, {
  headers: {
    Authorization: `Bearer ${MANAGEMENT_API_KEY}`,
    'Content-Type': 'application/json',
  },
});

// You can paginate using the `offset` query parameter
const listKeysPage2 = await fetch(`${BASE_URL}?offset=100`, {
  headers: {
    Authorization: `Bearer ${MANAGEMENT_API_KEY}`,
    'Content-Type': 'application/json',
  },
});

// Create a new API key
const createKey = await fetch(BASE_URL, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${MANAGEMENT_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'Customer Instance Key',
    limit: 1000, // Optional credit limit
  }),
});

// Get a specific key
const keyHash = '<YOUR_KEY_HASH>';
const getKey = await fetch(`${BASE_URL}/${keyHash}`, {
  headers: {
    Authorization: `Bearer ${MANAGEMENT_API_KEY}`,
    'Content-Type': 'application/json',
  },
});

// Update a key
const updateKey = await fetch(`${BASE_URL}/${keyHash}`, {
  method: 'PATCH',
  headers: {
    Authorization: `Bearer ${MANAGEMENT_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'Updated Key Name',
    disabled: true, // Optional: Disable the key
    include_byok_in_limit: false, // Optional: control BYOK usage in limit
    limit_reset: 'daily', // Optional: reset limit every day at midnight UTC
  }),
});

// Delete a key
const deleteKey = await fetch(`${BASE_URL}/${keyHash}`, {
  method: 'DELETE',
  headers: {
    Authorization: `Bearer ${MANAGEMENT_API_KEY}`,
    'Content-Type': 'application/json',
  },
});
```

## Response Format

API responses return JSON objects containing key information:

```json
{
  "data": [
    {
      "created_at": "2025-02-19T20:52:27.363244+00:00",
      "updated_at": "2025-02-19T21:24:11.708154+00:00",
      "hash": "",
      "label": "sk-or-v1-abc...123",
      "name": "Customer Key",
      "disabled": false,
      "limit": 10,
      "limit_remaining": 10,
      "limit_reset": null,
      "include_byok_in_limit": false,
      "usage": 0,
      "usage_daily": 0,
      "usage_weekly": 0,
      "usage_monthly": 0,
      "byok_usage": 0,
      "byok_usage_daily": 0,
      "byok_usage_weekly": 0,
      "byok_usage_monthly": 0
    }
  ]
}
```

When creating a new key, the response will include the key string itself. Read more in the [API reference](/docs/api-reference/api-keys/create-api-key).

# BYOK

## Bring your own API Keys

OpenRouter supports both OpenRouter credits and the option to bring your own provider keys (BYOK). When you use OpenRouter credits, your rate limits for each provider are managed by OpenRouter. Using provider keys enables direct control over rate limits and costs via your provider account.

Your provider keys are securely encrypted and used for all requests routed through the specified provider. Manage keys in your [account settings](/settings/integrations).
The cost of using custom provider keys on OpenRouter is **a percentage of what the same model/provider would cost normally on OpenRouter** and will be deducted from your OpenRouter credits. This fee is waived for an initial allotment of BYOK requests each month.

### Key Priority and Fallback

OpenRouter always prioritizes using your provider keys when available. By default, if your key encounters a rate limit or failure, OpenRouter will fall back to using shared OpenRouter credits.

You can configure individual keys with "Always use this key" to prevent any fallback to OpenRouter credits. When this option is enabled, OpenRouter will only use your key for requests to that provider. This may result in rate limit errors if your key is exhausted, but it ensures all requests go through your account.

### BYOK with Provider Ordering

When you combine BYOK keys with [provider ordering](/docs/guides/routing/provider-selection#ordering-specific-providers), OpenRouter **always prioritizes BYOK endpoints first**, regardless of where that provider appears in your specified order. After all BYOK endpoints are exhausted, OpenRouter falls back to shared capacity in the order you specified.

This means BYOK keys effectively override your provider ordering for the initial routing attempts. There is currently no way to change this behavior.

For example, if you have BYOK keys for Amazon Bedrock, Google Vertex AI, and Anthropic, and you send a request with:

```json
{
  "provider": {
    "allow_fallbacks": true,
    "order": ["amazon-bedrock", "google-vertex", "anthropic"]
  }
}
```

The routing order will be:

1. Amazon Bedrock (your BYOK key)
2. Google Vertex AI (your BYOK key)
3. Anthropic (your BYOK key)
4. Amazon Bedrock (OpenRouter's shared capacity)
5. Google Vertex AI (OpenRouter's shared capacity)
6. Anthropic (OpenRouter's shared capacity)

#### Partial BYOK with Provider Ordering

If you only have a BYOK key for some of the providers in your order, the BYOK provider is still tried first. For example, if you specify `order: ["amazon-bedrock", "google-vertex"]` but only have a BYOK key for Google Vertex AI:

```json
{
  "provider": {
    "allow_fallbacks": true,
    "order": ["amazon-bedrock", "google-vertex"]
  }
}
```

The routing order will be:

1. Google Vertex AI (your BYOK key)
2. Amazon Bedrock (OpenRouter's shared capacity)
3. Google Vertex AI (OpenRouter's shared capacity)

Note that even though Amazon Bedrock is listed first in the `order` array, the Google Vertex AI BYOK endpoint takes priority. If you want to prevent fallback to OpenRouter's shared capacity entirely, configure your API key with "Always use this key" in your [account settings](/settings/integrations).

### Multiple BYOK Keys for the Same Provider

If you have multiple BYOK keys configured for the same provider, all of them will be used for routing. However, the order in which multiple keys for the same provider are tried is not guaranteed. If deterministic ordering between keys matters for your use case, consider using separate provider accounts or contacting support.

### Azure API Keys

Azure has two resource types, each using a different domain:

* **Azure AI Foundry** — resources at `*.services.ai.azure.com`. Uses the model catalog and does not require per-model deployments.
* **Azure OpenAI** — resources at `*.openai.azure.com`. Requires explicit per-model deployments.

#### Foundry Configuration (Recommended)

The simplest way to configure Azure BYOK is with a Foundry configuration. Provide your API key, resource name, and resource type:

```json
[
  {
    "api_key": "your-azure-api-key",
    "resource_name": "your-resource-name",
    "resource_type": "ai_foundry"
  }
]
```

* **`api_key`**: Your Azure API key, found under "Keys and Endpoint" in the Azure portal.
* **`resource_name`**: The name of your Azure resource (the subdomain portion of your endpoint URL).
* **`resource_type`**: Either `"ai_foundry"` for Azure AI Foundry resources (`*.services.ai.azure.com`) or `"openai"` for Azure OpenAI resources (`*.openai.azure.com`). Defaults to `"openai"` if omitted.

This configuration works for all models available in your Azure resource — no per-model setup required.

#### Per-Deployment Configuration (Legacy)

For more control, you can specify individual deployments with full endpoint URLs:

```json
[
  {
    "model_slug": "mistralai/mistral-large",
    "endpoint_url": "https://example-project.openai.azure.com/openai/deployments/mistral-large/chat/completions?api-version=2024-08-01-preview",
    "api_key": "your-azure-api-key",
    "model_id": "mistral-large"
  },
  {
    "model_slug": "openai/gpt-5.2",
    "endpoint_url": "https://example-project.openai.azure.com/openai/deployments/gpt-5.2/chat/completions?api-version=2024-08-01-preview",
    "api_key": "your-azure-api-key",
    "model_id": "gpt-5.2"
  }
]
```

Each per-deployment configuration requires:

1. **`endpoint_url`**: The full deployment endpoint URL, including `/chat/completions` and the API version. See the [Azure Foundry documentation](https://learn.microsoft.com/en-us/azure/ai-foundry/model-inference/concepts/endpoints?tabs=python) for details.
2. **`api_key`**: Your Azure API key.
3. **`model_id`**: The name of your model deployment in Azure.
4. **`model_slug`**: The OpenRouter model identifier you want to use this key for.

You can mix Foundry and per-deployment configurations in the same array. Per-deployment configs take priority when a matching model slug is found.

### AWS Bedrock API Keys

To use Amazon Bedrock with OpenRouter, you can authenticate using either Bedrock API keys or traditional AWS credentials.

#### Option 1: Bedrock API Keys (Recommended)

Amazon Bedrock API keys provide a simpler authentication method.
Simply provide your Bedrock API key as a string:

```
your-bedrock-api-key-here
```

**Note:** Bedrock API keys are tied to a specific AWS region and cannot be used to change regions. If you need to use models in different regions, use the AWS credentials option below.

You can generate Bedrock API keys in the AWS Management Console. Learn more in the [Amazon Bedrock API keys documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/api-keys.html).

#### Option 2: AWS Credentials

Alternatively, you can use traditional AWS credentials in JSON format. This option allows you to specify the region and provides more flexibility:

```json
{
  "accessKeyId": "your-aws-access-key-id",
  "secretAccessKey": "your-aws-secret-access-key",
  "region": "your-aws-region"
}
```

You can find these values in your AWS account:

1. **accessKeyId**: This is your AWS Access Key ID. You can create or find your access keys in the AWS Management Console under "Security Credentials" in your AWS account.
2. **secretAccessKey**: This is your AWS Secret Access Key, which is provided when you create an access key.
3. **region**: The AWS region where your Amazon Bedrock models are deployed (e.g., "us-east-1", "us-west-2").

Make sure your AWS IAM user or role has the necessary permissions to access Amazon Bedrock services. At minimum, you'll need permissions for:

* `bedrock:InvokeModel`
* `bedrock:InvokeModelWithResponseStream` (for streaming responses)

Example IAM policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```

For enhanced security, we recommend creating dedicated IAM users with limited permissions specifically for use with OpenRouter.
Learn more in the [AWS Bedrock Getting Started with the API](https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started-api.html) documentation, the [IAM Permissions Setup](https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html) guide, or the [AWS Bedrock API Reference](https://docs.aws.amazon.com/bedrock/latest/APIReference/welcome.html).

### Google Vertex API Keys

To use Google Vertex AI with OpenRouter, you'll need to provide your Google Cloud service account key in JSON format. The service account key should include all standard Google Cloud service account fields, with an optional `region` field for specifying the deployment region.

```json
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "your-private-key-id",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "your-service-account@your-project.iam.gserviceaccount.com",
  "client_id": "your-client-id",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-service-account@your-project.iam.gserviceaccount.com",
  "universe_domain": "googleapis.com",
  "region": "global"
}
```

You can find these values in your Google Cloud Console:

1. **Service Account Key**: Navigate to the Google Cloud Console, go to "IAM & Admin" > "Service Accounts", select your service account, and create/download a JSON key.
2. **region** (optional): Specify the region for your Vertex AI deployment. Use `"global"` to allow requests to run in any available region, or specify a specific region like `"us-central1"` or `"europe-west1"`.
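Before saving the service account JSON in OpenRouter, it can help to sanity-check that the required fields are present. Below is a minimal sketch; the function name and the exact field list are illustrative assumptions based on the example above, not an official OpenRouter or Google check.

```typescript
// Illustrative validation of a pasted service-account JSON string.
// The field list mirrors the example above; adjust it to your needs.
const REQUIRED_FIELDS = [
  'type',
  'project_id',
  'private_key',
  'client_email',
  'token_uri',
] as const;

// Returns the names of required fields that are absent or empty.
function missingServiceAccountFields(json: string): string[] {
  const parsed = JSON.parse(json) as Record<string, unknown>;
  return REQUIRED_FIELDS.filter(
    (field) => typeof parsed[field] !== 'string' || parsed[field] === '',
  );
}
```

A check like this catches the most common paste error (copying a truncated key file) before any request is routed through the provider.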
Make sure your service account has the necessary permissions to access Vertex AI services:

* `aiplatform.endpoints.predict`

Example IAM policy:

```json
{
  "bindings": [
    {
      "role": "roles/aiplatform.user",
      "members": [
        "serviceAccount:your-service-account@your-project.iam.gserviceaccount.com"
      ]
    }
  ]
}
```

Learn more in the [Google Cloud Vertex AI documentation](https://cloud.google.com/vertex-ai/docs/start/introduction-unified-platform) and the [Service Account setup guide](https://cloud.google.com/iam/docs/service-accounts-create).

### Debugging BYOK Issues

If your BYOK requests fail, you can debug the issue by viewing provider responses on the Activity page.

#### Viewing Provider Responses

1. Navigate to your [Activity page](https://openrouter.ai/activity) in the OpenRouter dashboard.
2. Find the generation you want to debug and click on it to view the details.
3. Click "View Raw Metadata" to display the raw metadata in JSON format.
4. In the JSON, look for the `provider_responses` field, which shows the HTTP status code from each provider attempt.

The `provider_responses` field contains an array of responses from each provider attempted during routing. Each entry includes the provider name and HTTP status code, which can help you identify permission issues, rate limits, or other errors.

#### Common BYOK Error Codes

When debugging BYOK issues, look for these common HTTP status codes in the provider responses:

* **400 Bad Request**: The request format was invalid for the provider. Check that your model and key configuration is correct.
* **401 Unauthorized**: Your API key is invalid or has been revoked. Verify your key in your provider's console.
* **403 Forbidden**: Your API key doesn't have permission to access the requested resource. For AWS Bedrock, ensure your IAM policy includes the required `bedrock:InvokeModel` permissions. For Google Vertex, verify your service account has `aiplatform.endpoints.predict` permissions.
* **429 Too Many Requests**: You've hit the rate limit on your provider account. Check your provider's rate limit settings or wait before retrying.
* **500 Server Error**: The provider encountered an internal error. This is typically a temporary issue on the provider's side.

#### Debugging Permission Issues

If you encounter 403 errors with BYOK, the issue is often related to permissions.

For AWS Bedrock, verify that:

1. Your IAM user/role has the `bedrock:InvokeModel` and `bedrock:InvokeModelWithResponseStream` permissions.
2. The model you're trying to access is enabled in your AWS account for the specified region.
3. Your credentials (access key and secret) are correct and active.

For Google Vertex, verify that your service account has `aiplatform.endpoints.predict` permissions.

You can test your provider permissions directly in the provider's console (AWS Console, Google Cloud Console, etc.) by attempting to invoke the model there first.
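If you want to automate this triage, you could scan the `provider_responses` array from the raw metadata and map each status code to the guidance in this section. The sketch below is illustrative: the entry shape (`provider`/`status`) is inferred from the description above, and the actual metadata field names may differ.

```typescript
// Hypothetical triage helper for BYOK failures. The ProviderResponse
// shape is an assumption based on the docs, not a documented schema.
interface ProviderResponse {
  provider: string;
  status: number;
}

// Short hints mirroring the error-code list above.
const HINTS: Record<number, string> = {
  400: 'Invalid request format; check model and key configuration.',
  401: 'Key invalid or revoked; verify it in the provider console.',
  403: 'Missing permissions; check IAM/service-account policies.',
  429: 'Provider rate limit hit; check limits or retry later.',
  500: 'Provider-side error; usually temporary.',
};

function triage(responses: ProviderResponse[]): string[] {
  return responses.map(
    ({ provider, status }) =>
      `${provider}: ${status} - ${HINTS[status] ?? 'See provider documentation.'}`,
  );
}
```

Running each attempted provider through a mapping like this turns a raw metadata dump into an actionable checklist.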