Provider Routing

Route requests to the best provider

OpenRouter routes requests to the best available providers for your model. By default, requests are load balanced across the top providers to maximize uptime.

You can customize how your requests are routed using the provider object in the request body for Chat Completions and Completions.

The provider object can contain the following fields:

| Field | Type | Default | Description |
|---|---|---|---|
| order | string[] | - | List of provider names to try in order (e.g. ["anthropic", "openai"]). |
| allow_fallbacks | boolean | true | Whether to allow backup providers when the primary is unavailable. |
| require_parameters | boolean | false | Only use providers that support all parameters in your request. |
| data_collection | "allow" \| "deny" | "allow" | Control whether to use providers that may store data. |
| ignore | string[] | - | List of provider names to skip for this request. |
| quantizations | string[] | - | List of quantization levels to filter by (e.g. ["int4", "int8"]). |
| sort | string | - | Sort providers by price or throughput (e.g. "price" or "throughput"). |

Load Balancing (Default Strategy)

For each model in your request, OpenRouter’s default behavior is to load balance requests across providers.

When you send a request with tools or tool_choice, OpenRouter will only route to providers that natively support tool use. This is currently a beta feature.

Here is OpenRouter’s default load balancing strategy:

  1. Prioritize providers that have not seen significant outages in the last 30 seconds.
  2. For the stable providers, look at the lowest-cost candidates and select one weighted by inverse square of the price (example below).
  3. Use the remaining providers as fallbacks.
A Load Balancing Example

Suppose Provider A costs $1 per million tokens, Provider B costs $2, and Provider C costs $3, and Provider B has recently seen a few outages:

  • Your request is most likely routed to Provider A first. Provider A is 9x more likely than Provider C to be routed to first, because 1/1² = 1 versus 1/3² = 1/9 (inverse square of the price).
  • If Provider A fails, Provider C will be tried next.
  • If Provider C also fails, Provider B will be tried last (because of its recent outages).
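The weighting in step 2 above can be sketched as follows. This is illustrative only, not OpenRouter's actual implementation; Provider B is excluded from the initial pool because of its recent outages:

```javascript
// Inverse-square price weighting over the stable providers
// (A at $1/M tokens, C at $3/M; B is left out due to recent outages).
const stable = [
  { name: 'A', price: 1 },
  { name: 'C', price: 3 },
];

const weights = stable.map((p) => 1 / (p.price ** 2)); // [1, 1/9]
const total = weights.reduce((sum, w) => sum + w, 0);
const probabilities = weights.map((w) => w / total);

// A: 1 / (10/9) = 0.9, C: (1/9) / (10/9) = 0.1 — A is 9x more likely to be picked first.
```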

If you have sort or order set in your provider preferences, load balancing will be disabled.

Provider Sorting

As described above, OpenRouter tries to strike a balance between price and uptime by default.

If you instead want to prioritize throughput, you can include the sort field in the provider preferences, set to "throughput". Load balancing will be disabled, and the router will prioritize providers that have the highest median throughput over the last day.

```javascript
fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <OPENROUTER_API_KEY>',
    'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
    'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    'model': 'meta-llama/llama-3.1-70b-instruct',
    'messages': [
      {
        'role': 'user',
        'content': 'Hello'
      }
    ],
    'provider': {
      'sort': 'throughput'
    }
  }),
});
```

To always prioritize low prices, and not apply any load balancing, set sort to "price".
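For illustration, here is the same request body as the throughput example above, but with sort set to "price". This is a minimal sketch with the body built separately so the preference is easy to see; the model name is reused from the earlier example:

```javascript
// Same request shape as the throughput example, but always sorted by lowest price.
const body = {
  model: 'meta-llama/llama-3.1-70b-instruct',
  messages: [{ role: 'user', content: 'Hello' }],
  provider: {
    sort: 'price', // pick the lowest-price provider first; disables load balancing
  },
};

// Send with: fetch('https://openrouter.ai/api/v1/chat/completions',
//   { method: 'POST', headers: { /* as above */ }, body: JSON.stringify(body) });
```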

Ordering Specific Providers

You can set the providers that OpenRouter will prioritize for your request using the order field.

| Field | Type | Default | Description |
|---|---|---|---|
| order | string[] | - | List of provider names to try in order (e.g. ["anthropic", "openai"]). |

The router will prioritize providers in this list, and in this order, for the model you’re using. If you don’t set this field, the router will load balance across the top providers to maximize uptime.

OpenRouter will try them one at a time and proceed to other providers if none are operational. If you don’t want to allow any other providers, you should disable fallbacks as well.

Example: Specifying providers with fallbacks

This example skips over OpenAI (which doesn’t host Mixtral), tries Together, and then falls back to the normal list of providers on OpenRouter:

```javascript
fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <OPENROUTER_API_KEY>',
    'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
    'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    'model': 'mistralai/mixtral-8x7b-instruct',
    'messages': [
      {
        'role': 'user',
        'content': 'Hello'
      }
    ],
    'provider': {
      'order': [
        'openai',
        'together'
      ]
    }
  }),
});
```

Example: Specifying providers with fallbacks disabled

Here’s an example with allow_fallbacks set to false that skips over OpenAI (which doesn’t host Mixtral), tries Together, and then fails if Together fails:

```javascript
fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <OPENROUTER_API_KEY>',
    'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
    'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    'model': 'mistralai/mixtral-8x7b-instruct',
    'messages': [
      {
        'role': 'user',
        'content': 'Hello'
      }
    ],
    'provider': {
      'order': [
        'openai',
        'together'
      ],
      'allow_fallbacks': false
    }
  }),
});
```

Requiring Providers to Support All Parameters (beta)

You can restrict requests only to providers that support all parameters in your request using the require_parameters field.

| Field | Type | Default | Description |
|---|---|---|---|
| require_parameters | boolean | false | Only use providers that support all parameters in your request. |

With the default routing strategy, providers that don’t support all the LLM parameters specified in your request can still receive the request, but will ignore unknown parameters. When you set require_parameters to true, the request won’t even be routed to that provider.

Example: Excluding providers that don’t support JSON formatting

For example, to only use providers that support JSON formatting:

```javascript
fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <OPENROUTER_API_KEY>',
    'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
    'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    'messages': [
      {
        'role': 'user',
        'content': 'Hello'
      }
    ],
    'provider': {
      'require_parameters': true
    },
    'response_format': {
      'type': 'json_object'
    }
  }),
});
```

Requiring Providers to Comply with Data Policies

You can restrict requests only to providers that comply with your data policies using the data_collection field.

| Field | Type | Default | Description |
|---|---|---|---|
| data_collection | "allow" \| "deny" | "allow" | Control whether to use providers that may store data. |
  • allow: (default) allow providers which store user data non-transiently and may train on it
  • deny: use only providers which do not collect user data

Some model providers may log prompts, so we display them with a Data Policy tag on model pages. This is not a definitive source of third party data policies, but represents our best knowledge.

Account-Wide Data Policy Filtering

This is also available as an account-wide setting in your privacy settings. You can disable third party model providers that store inputs for training.

Example: Excluding providers that don’t comply with data policies

To exclude providers that don’t comply with your data policies, set data_collection to deny:

```javascript
fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <OPENROUTER_API_KEY>',
    'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
    'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    'messages': [
      {
        'role': 'user',
        'content': 'Hello'
      }
    ],
    'provider': {
      'data_collection': 'deny'
    }
  }),
});
```

Disabling Fallbacks

To guarantee that your request is only served by the top (lowest-cost) provider, you can disable fallbacks.

This can be combined with the order field from Ordering Specific Providers to restrict the providers that OpenRouter will use to just your chosen list.

```javascript
fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <OPENROUTER_API_KEY>',
    'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
    'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    'messages': [
      {
        'role': 'user',
        'content': 'Hello'
      }
    ],
    'provider': {
      'allow_fallbacks': false
    }
  }),
});
```

Ignoring Providers

You can ignore providers for a request by setting the ignore field in the provider object.

| Field | Type | Default | Description |
|---|---|---|---|
| ignore | string[] | - | List of provider names to skip for this request. |

Ignoring multiple providers may significantly reduce fallback options and limit request recovery.

Account-Wide Ignored Providers

You can ignore providers for all account requests by configuring your preferences. This configuration applies to all API requests and chatroom messages.

Note that when you ignore providers for a specific request, the list of ignored providers is merged with your account-wide ignored providers.
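The merge can be pictured as a simple union. This sketch assumes union semantics based on the description above, and the provider names are purely illustrative:

```javascript
// Account-wide ignored providers (from your preferences) are merged with the
// per-request ignore list; the union applies to the request (assumed behavior).
const accountIgnored = ['azure'];
const requestIgnored = ['deepinfra', 'azure'];

const effectiveIgnored = [...new Set([...accountIgnored, ...requestIgnored])];
// effectiveIgnored contains both 'azure' and 'deepinfra', deduplicated.
```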

Example: Ignoring Azure for a request calling GPT-4 Omni

Here’s an example that will ignore Azure for a request calling GPT-4 Omni:

```javascript
fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <OPENROUTER_API_KEY>',
    'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
    'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    'model': 'openai/gpt-4o',
    'messages': [
      {
        'role': 'user',
        'content': 'Hello'
      }
    ],
    'provider': {
      'ignore': [
        'azure'
      ]
    }
  }),
});
```

Quantization

Quantization reduces model size and computational requirements while aiming to preserve performance.

| Field | Type | Default | Description |
|---|---|---|---|
| quantizations | string[] | - | List of quantization levels to filter by (e.g. ["int4", "int8"]). |

Quantized models may exhibit degraded performance for certain prompts, depending on the method used.

Providers can support various quantization levels for open-weight models.

Quantization Levels

By default, requests are load-balanced across all available providers, ordered by price. To filter providers by quantization level, specify the quantizations field in the provider parameter with the following values:

  • int4
  • int8
  • fp6
  • fp8
  • fp16
  • bf16
  • fp32
  • unknown

Example: Requesting FP8 Quantization

Here’s an example that will only use providers that support FP8 quantization:

```javascript
fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <OPENROUTER_API_KEY>',
    'HTTP-Referer': '<YOUR_SITE_URL>', // Optional. Site URL for rankings on openrouter.ai.
    'X-Title': '<YOUR_SITE_NAME>', // Optional. Site title for rankings on openrouter.ai.
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    'model': 'meta-llama/llama-3.1-8b-instruct',
    'messages': [
      {
        'role': 'user',
        'content': 'Hello'
      }
    ],
    'provider': {
      'quantizations': [
        'fp8'
      ]
    }
  }),
});
```

JSON Schema for Provider Preferences

For a complete list of options, see this JSON schema:

Provider Preferences Schema
```json
{
  "$ref": "#/definitions/ProviderPreferencesSchema",
  "definitions": {
    "ProviderPreferencesSchema": {
      "type": "object",
      "properties": {
        "allow_fallbacks": {
          "type": ["boolean", "null"],
          "description": "Whether to allow backup providers to serve requests\n- true: (default) when the primary provider (or your custom providers in \"order\") is unavailable, use the next best provider.\n- false: use only the primary/custom provider, and return the upstream error if it's unavailable.\n"
        },
        "require_parameters": {
          "type": ["boolean", "null"],
          "description": "Whether to filter providers to only those that support the parameters you've provided. If this setting is omitted or set to false, then providers will receive only the parameters they support, and ignore the rest."
        },
        "data_collection": {
          "anyOf": [
            { "type": "string", "enum": ["deny", "allow"] },
            { "type": "null" }
          ],
          "description": "Data collection setting. If no available model provider meets the requirement, your request will return an error.\n- allow: (default) allow providers which store user data non-transiently and may train on it\n- deny: use only providers which do not collect user data.\n"
        },
        "order": {
          "anyOf": [
            {
              "type": "array",
              "items": {
                "type": "string",
                "enum": [
                  "OpenAI", "Anthropic", "Google", "Google AI Studio", "Amazon Bedrock",
                  "Groq", "SambaNova", "Cohere", "Mistral", "Together", "Together 2",
                  "Fireworks", "DeepInfra", "Lepton", "Novita", "Avian", "Lambda",
                  "Azure", "Modal", "AnyScale", "Replicate", "Perplexity", "Recursal",
                  "OctoAI", "DeepSeek", "Infermatic", "AI21", "Featherless", "Inflection",
                  "xAI", "Cloudflare", "SF Compute", "Minimax", "Nineteen", "Liquid",
                  "AionLabs", "Alibaba", "Nebius", "Chutes", "Kluster", "Targon",
                  "01.AI", "HuggingFace", "Mancer", "Mancer 2", "Hyperbolic",
                  "Hyperbolic 2", "Lynn 2", "Lynn", "Reflection"
                ]
              }
            },
            { "type": "null" }
          ],
          "description": "An ordered list of provider names. The router will attempt to use the first provider in the subset of this list that supports your requested model, and fall back to the next if it is unavailable. If no providers are available, the request will fail with an error message."
        },
        "ignore": {
          "anyOf": [
            {
              "type": "array",
              "items": {
                "type": "string",
                "enum": [
                  "OpenAI", "Anthropic", "Google", "Google AI Studio", "Amazon Bedrock",
                  "Groq", "SambaNova", "Cohere", "Mistral", "Together", "Together 2",
                  "Fireworks", "DeepInfra", "Lepton", "Novita", "Avian", "Lambda",
                  "Azure", "Modal", "AnyScale", "Replicate", "Perplexity", "Recursal",
                  "OctoAI", "DeepSeek", "Infermatic", "AI21", "Featherless", "Inflection",
                  "xAI", "Cloudflare", "SF Compute", "Minimax", "Nineteen", "Liquid",
                  "AionLabs", "Alibaba", "Nebius", "Chutes", "Kluster", "Targon",
                  "01.AI", "HuggingFace", "Mancer", "Mancer 2", "Hyperbolic",
                  "Hyperbolic 2", "Lynn 2", "Lynn", "Reflection"
                ]
              }
            },
            { "type": "null" }
          ],
          "description": "List of provider names to ignore. If provided, this list is merged with your account-wide ignored provider settings for this request."
        },
        "quantizations": {
          "anyOf": [
            {
              "type": "array",
              "items": {
                "type": "string",
                "enum": ["int4", "int8", "fp6", "fp8", "fp16", "bf16", "fp32", "unknown"]
              }
            },
            { "type": "null" }
          ],
          "description": "A list of quantization levels to filter the provider by."
        },
        "sort": {
          "anyOf": [
            { "type": "string", "enum": ["price", "throughput"] },
            { "type": "null" }
          ],
          "description": "The sorting strategy to use for this request, if \"order\" is not specified. When set, no load balancing is performed."
        }
      },
      "additionalProperties": false
    }
  },
  "$schema": "http://json-schema.org/draft-07/schema#"
}
```
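For TypeScript users, the schema translates roughly to the following interface. This is hand-derived from the schema above, not an official type export:

```typescript
// Hypothetical TypeScript shape mirroring the provider preferences JSON schema.
type Quantization =
  | 'int4' | 'int8' | 'fp6' | 'fp8' | 'fp16' | 'bf16' | 'fp32' | 'unknown';

interface ProviderPreferences {
  allow_fallbacks?: boolean | null;
  require_parameters?: boolean | null;
  data_collection?: 'allow' | 'deny' | null;
  order?: string[] | null;
  ignore?: string[] | null;
  quantizations?: Quantization[] | null;
  sort?: 'price' | 'throughput' | null;
}

// Example: a preferences object that type-checks against the interface.
const prefs: ProviderPreferences = {
  order: ['Together', 'DeepInfra'],
  allow_fallbacks: false,
  quantizations: ['fp8'],
};
```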