Reasoning Tokens
For models that support it (DeepSeek R1 being the best example), the OpenRouter API supports retrieving Reasoning Tokens.
Reasoning tokens provide a transparent look into the reasoning steps taken by a model. Reasoning tokens are considered output tokens and charged accordingly. While all reasoning models generate these tokens, only some models and providers make them available in the response.
To retrieve reasoning tokens when available, add include_reasoning: true to your API request. Reasoning tokens will appear in the reasoning field of each message:
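A minimal sketch of such a request and of reading the reasoning field back. The model slug, the prompt, and the abbreviated sample response are illustrative assumptions, not guaranteed output; a real call would POST this payload to the OpenRouter chat completions endpoint with an Authorization header.

```python
import json

# Assumed model slug for illustration; any reasoning-capable model works.
payload = {
    "model": "deepseek/deepseek-r1",
    "messages": [
        {"role": "user", "content": "Which is bigger: 9.11 or 9.9?"}
    ],
    "include_reasoning": True,  # request reasoning tokens when available
}
body = json.dumps(payload)  # this is what gets POSTed to the API

# When the model and provider expose them, reasoning tokens sit next to
# `content` in each choice's message. An abbreviated, made-up response:
sample_response = {
    "choices": [
        {
            "message": {
                "content": "9.9 is bigger.",
                "reasoning": "Compare the tenths digit: 9 > 1, so 9.9 > 9.11...",
            }
        }
    ]
}

reasoning = sample_response["choices"][0]["message"].get("reasoning")
print(reasoning)
```

Using .get("reasoning") keeps the same parsing code working for models and providers that do not return the field.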
This can be used in more complex workflows. Below is a toy example that injects R1's reasoning into a less advanced model to make it smarter. Note the use of the stop parameter, which prevents the model from generating a completion, so only reasoning tokens are returned.
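A sketch of that two-step workflow as request payloads. The model slugs, the stop sequence, and the helper names are illustrative assumptions rather than values fixed by the API; sending the payloads to the OpenRouter chat completions endpoint (with an API key) would complete the flow.

```python
def reasoning_request(question: str) -> dict:
    # Step 1: ask R1 for reasoning only. The stop sequence halts generation
    # before the final answer, so only reasoning tokens come back.
    # ("</think>" is an assumed stop sequence for illustration.)
    return {
        "model": "deepseek/deepseek-r1",
        "messages": [{"role": "user", "content": question}],
        "include_reasoning": True,
        "stop": ["</think>"],
    }


def injected_request(question: str, reasoning: str) -> dict:
    # Step 2: hand R1's reasoning to a smaller model as part of its prompt,
    # then let that model produce the final answer.
    prompt = (
        f"{question}\n\n"
        f"Here is some reasoning that may help:\n{reasoning}"
    )
    return {
        # Placeholder slug for the "less advanced" model.
        "model": "openai/gpt-4o-mini",
        "messages": [{"role": "user", "content": prompt}],
    }


step1 = reasoning_request("Which is bigger: 9.11 or 9.9?")
step2 = injected_request(
    "Which is bigger: 9.11 or 9.9?",
    "Compare the tenths digit first...",  # would come from step 1's response
)
```

In a live run, the reasoning string passed to injected_request would be read from the reasoning field of step 1's response, as shown earlier.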