Reasoning Tokens
For models that support it, the OpenRouter API can return Reasoning Tokens, also known as thinking tokens. OpenRouter normalizes the different ways of customizing the amount of reasoning tokens that the model will use, providing a unified interface across different providers.
Reasoning tokens provide a transparent look into the reasoning steps taken by a model. Reasoning tokens are considered output tokens and charged accordingly.
Reasoning tokens are included in the response by default if the model decides to output them. Reasoning tokens will appear in the reasoning field of each message, unless you decide to exclude them.
Some reasoning models do not return their reasoning tokens
While most models and providers make reasoning tokens available in the response, some (like the OpenAI o-series and Gemini Flash Thinking) do not.
Controlling Reasoning Tokens
You can control reasoning tokens in your requests using the reasoning parameter:
The reasoning config object consolidates settings for controlling reasoning strength across different models. See the note for each option below to see which models are supported and how other models will behave.
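As a sketch of the request shape, assuming OpenRouter's OpenAI-compatible chat completions schema (the model slug, prompt, and values here are illustrative):

```python
# Sketch: the unified `reasoning` parameter on a chat completions request.
payload = {
    "model": "anthropic/claude-3.7-sonnet",
    "messages": [{"role": "user", "content": "Explain quicksort briefly."}],
    "reasoning": {
        # Use ONE of the following, depending on what the model supports:
        "effort": "high",        # "high" | "medium" | "low"
        # "max_tokens": 2000,    # explicit token budget for thinking
        # "exclude": True,       # reason internally, omit from the response
    },
}
```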
Max Tokens for Reasoning
Supported models
Currently supported by Anthropic thinking models
For models that support reasoning token allocation, you can control it like this:
"max_tokens": 2000
- Directly specifies the maximum number of tokens to use for reasoning
For models that only support reasoning.effort (see below), the max_tokens value will be used to determine the effort level.
Reasoning Effort Level
Supported models
Currently supported by the OpenAI o-series
"effort": "high"
- Allocates a large portion of tokens for reasoning (approximately 80% of max_tokens)"effort": "medium"
- Allocates a moderate portion of tokens (approximately 50% of max_tokens)"effort": "low"
- Allocates a smaller portion of tokens (approximately 20% of max_tokens)
For models that only support reasoning.max_tokens, the effort level will be set based on the percentages above.
Excluding Reasoning Tokens
If you want the model to use reasoning internally but not include it in the response:
"exclude": true
- The model will still use reasoning, but it won’t be returned in the response
When not excluded, reasoning tokens will appear in the reasoning field of each message.
Legacy Parameters
For backward compatibility, OpenRouter still supports the following legacy parameters:
- include_reasoning: true - Equivalent to reasoning: {}
- include_reasoning: false - Equivalent to reasoning: { exclude: true }
However, we recommend using the new unified reasoning parameter for better control and future compatibility.
Examples
Basic Usage with Reasoning Tokens
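A minimal sketch of a request with reasoning enabled, using only the standard library. The model slug, prompt, and effort level are illustrative; the endpoint is OpenRouter's OpenAI-compatible chat completions route:

```python
import json
import urllib.request

def ask_with_reasoning(api_key: str, prompt: str) -> dict:
    """Send a chat completion request with reasoning enabled (sketch)."""
    payload = {
        "model": "openai/o3-mini",  # any reasoning-capable model slug
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"effort": "medium"},
    }
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# When the model returns reasoning, it sits alongside the content:
# reply = ask_with_reasoning(key, "Why is the sky blue?")
# print(reply["choices"][0]["message"].get("reasoning"))
```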
Using Max Tokens for Reasoning
For models that support direct token allocation (like Anthropic models), you can specify the exact number of tokens to use for reasoning:
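A sketch of such a request; the total max_tokens and the reasoning budget shown are illustrative values:

```python
# Sketch: explicit reasoning token budget (Anthropic-style models).
payload = {
    "model": "anthropic/claude-3.7-sonnet",
    "messages": [
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
    "max_tokens": 8000,                 # total output budget (response + thinking)
    "reasoning": {"max_tokens": 2000},  # portion reserved for thinking
}
```

Note that max_tokens is kept higher than reasoning.max_tokens so tokens remain for the final response.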
Excluding Reasoning Tokens from Response
If you want the model to use reasoning internally but not include it in the response:
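A sketch of the same request shape with exclude set (model slug and prompt illustrative):

```python
# Sketch: reason internally, but strip the reasoning from the response.
payload = {
    "model": "deepseek/deepseek-r1",
    "messages": [{"role": "user", "content": "Which is larger, 9.9 or 9.11?"}],
    "reasoning": {"effort": "high", "exclude": True},
}
```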
Advanced Usage: Reasoning Chain-of-Thought
This example shows how to use reasoning tokens in a more complex workflow. It injects one model’s reasoning into another model to improve its response quality:
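The workflow can be sketched as below. The call argument is a hypothetical helper standing in for an OpenRouter chat completions request, and both model slugs are illustrative:

```python
def chain_of_thought(call, question: str) -> str:
    """Inject one model's reasoning trace into a second model's prompt (sketch).

    `call(payload) -> response dict` is a hypothetical transport helper.
    """
    # Step 1: ask a reasoning model and keep only its reasoning trace.
    first = call({
        "model": "deepseek/deepseek-r1",
        "messages": [{"role": "user", "content": question}],
        "reasoning": {"effort": "high"},
    })
    reasoning = first["choices"][0]["message"].get("reasoning", "")

    # Step 2: feed that trace to a second model as extra context.
    second = call({
        "model": "openai/gpt-4o-mini",
        "messages": [{
            "role": "user",
            "content": f"{question}\n\nPossible reasoning to consider:\n{reasoning}",
        }],
    })
    return second["choices"][0]["message"]["content"]
```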
Provider-Specific Reasoning Implementation
Anthropic Models with Reasoning Tokens
The latest Claude models, such as anthropic/claude-3.7-sonnet, support working with and returning reasoning tokens.
You can enable reasoning on Anthropic models in two ways:
- Using the :thinking variant suffix (e.g., anthropic/claude-3.7-sonnet:thinking). The thinking variant defaults to high effort.
- Using the unified reasoning parameter with either effort or max_tokens
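As a sketch, the two forms look like this (prompt and budget values illustrative):

```python
# Option 1: the :thinking variant suffix (defaults to high effort).
variant_payload = {
    "model": "anthropic/claude-3.7-sonnet:thinking",
    "messages": [{"role": "user", "content": "Plan a 3-day trip to Kyoto."}],
}

# Option 2: the unified reasoning parameter with an explicit budget.
explicit_payload = {
    "model": "anthropic/claude-3.7-sonnet",
    "messages": [{"role": "user", "content": "Plan a 3-day trip to Kyoto."}],
    "reasoning": {"max_tokens": 4000},
}
```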
Reasoning Max Tokens for Anthropic Models
When using Anthropic models with reasoning:
- When using the reasoning.max_tokens parameter, that value is used directly, with a minimum of 1024 tokens.
- When using the :thinking variant suffix or the reasoning.effort parameter, the budget_tokens are calculated based on the max_tokens value.
The reasoning token allocation is capped at 32,000 tokens maximum and 1,024 tokens minimum. The formula for calculating budget_tokens is:

budget_tokens = max(min(max_tokens * effort_ratio, 32000), 1024)

where effort_ratio is 0.8 for high effort, 0.5 for medium effort, and 0.2 for low effort.
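The formula above can be checked directly (the numbers follow from the stated ratios, cap, and floor):

```python
# Effort ratios as documented: high = 0.8, medium = 0.5, low = 0.2.
EFFORT_RATIO = {"high": 0.8, "medium": 0.5, "low": 0.2}

def budget_tokens(max_tokens: int, effort: str) -> int:
    """budget_tokens = max(min(max_tokens * effort_ratio, 32000), 1024)"""
    return max(min(int(max_tokens * EFFORT_RATIO[effort]), 32000), 1024)

budget_tokens(10000, "high")    # 8000  (80% of 10000)
budget_tokens(100000, "high")   # 32000 (hits the 32,000 cap)
budget_tokens(1000, "low")      # 1024  (raised to the 1,024 floor)
```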
Important: max_tokens must be strictly higher than the reasoning budget to ensure there are tokens available for the final response after thinking.
Token Usage and Billing
Please note that reasoning tokens are counted as output tokens for billing purposes. Using reasoning tokens will increase your token usage but can significantly improve the quality of model responses.