Reasoning Tokens

For models that support it (DeepSeek R1 being a prominent example), the OpenRouter API supports retrieving Reasoning Tokens.

Reasoning tokens provide a transparent look into the reasoning steps taken by a model. Reasoning tokens are considered output tokens and charged accordingly. While all reasoning models generate these tokens, only some models and providers make them available in the response.

To retrieve reasoning tokens when available, add include_reasoning: true to your API request. The reasoning tokens will appear in the reasoning field of each message:

Python

import json
import os

import requests

# Assumes your key is available in the OPENROUTER_API_KEY environment variable
OPENROUTER_API_KEY = os.environ["OPENROUTER_API_KEY"]

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {OPENROUTER_API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": "deepseek/deepseek-r1",
    "messages": [
        {"role": "user", "content": "How would you build the world's tallest skyscraper?"}
    ],
    "include_reasoning": True,
}

response = requests.post(url, headers=headers, data=json.dumps(payload))
print(response.json()['choices'][0]['message']['reasoning'])

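Because only some models and providers return reasoning, indexing the reasoning field directly can raise a KeyError. A small defensive helper avoids this; a minimal sketch (the helper name and the sample response dicts below are illustrative, not real API output):

```python
def get_reasoning(response_json):
    """Return the reasoning string from a chat completion response, or None if absent."""
    message = response_json["choices"][0]["message"]
    # .get() returns None instead of raising when the provider omits the field
    return message.get("reasoning")

# Illustrative response shapes (not actual API output):
with_reasoning = {"choices": [{"message": {"content": "42", "reasoning": "First, ..."}}]}
without_reasoning = {"choices": [{"message": {"content": "42"}}]}

print(get_reasoning(with_reasoning))     # First, ...
print(get_reasoning(without_reasoning))  # None
```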
This can be used in more complex workflows. Below is a toy example that injects R1's reasoning into a less advanced model to make it smarter. Note the use of the stop parameter, which prevents the model from generating a completion (only reasoning tokens will be returned).

Python

import json
import os

import requests

# Assumes your key is available in the OPENROUTER_API_KEY environment variable
OPENROUTER_API_KEY = os.environ["OPENROUTER_API_KEY"]

question = "Which is bigger: 9.11 or 9.9?"

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {OPENROUTER_API_KEY}",
    "Content-Type": "application/json",
}

def do_req(model, content, include_reasoning=False):
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": content}
        ],
        "include_reasoning": include_reasoning,
        "stop": "</think>",
    }
    return requests.post(url, headers=headers, data=json.dumps(payload))

# R1 will reliably return "done" for the content portion of the response
content = f"{question} Please think this through, but don't output an answer"
reasoning_response = do_req("deepseek/deepseek-r1", content, True)
reasoning = reasoning_response.json()['choices'][0]['message']['reasoning']

# Let's test! Here's the naive response:
simple_response = do_req("openai/gpt-4o-mini", question)
print(simple_response.json()['choices'][0]['message']['content'])

# Here's the response with the reasoning tokens injected:
content = f"{question}. Here is some context to help you: {reasoning}"
smart_response = do_req("openai/gpt-4o-mini", content)
print(smart_response.json()['choices'][0]['message']['content'])