Reasoning Tokens for Thinking Models
We're excited to announce Reasoning Tokens, a new feature that lets you observe how models reason, both in the Chatroom and via the API.
Reasoning tokens provide a transparent look into the reasoning steps taken by a model.
To use it, add `include_reasoning: true` to your API request. When enabled, reasoning tokens will appear in the `reasoning` field of each message:
```python
import requests
import json

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {OPENROUTER_API_KEY}",
    "Content-Type": "application/json"
}
payload = {
    "model": "deepseek/deepseek-r1",
    "messages": [
        {"role": "user", "content": "How would you build the world's tallest skyscraper?"}
    ],
    "include_reasoning": True
}

response = requests.post(url, headers=headers, data=json.dumps(payload))
print(response.json()['choices'][0]['message']['reasoning'])
```
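For reference, the response carries the reasoning alongside the usual message content. Here is a minimal sketch of parsing such a response; the JSON body below is illustrative, not a captured API response:

```python
import json

# Illustrative response body: standard chat-completions shape, plus the
# extra "reasoning" field populated when include_reasoning is true.
sample = json.loads("""
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Start with a deep foundation and a buttressed core...",
        "reasoning": "First, consider the structural constraints of supertall towers..."
      }
    }
  ]
}
""")

message = sample["choices"][0]["message"]
print(message["reasoning"])  # the model's reasoning steps
print(message["content"])    # the final answer, as usual
```

The reasoning sits next to `content` rather than inside it, so existing code that reads `message['content']` keeps working unchanged.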
Reasoning tokens are initially available for DeepSeek R1 models (and models derived from R1), with support for Gemini Thinking models coming soon.
You can of course simply inspect the reasoning, but it can also be fed into more complex workflows. Below is a toy example (inspired by @skirano on X) that injects R1's reasoning into a much less capable model to make it smarter. The example itself is of questionable utility, but we're excited to see how reasoning-token use cases evolve!
```python
import requests
import json

question = "Which is bigger: 9.11 or 9.9?"

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json"
}

def do_req(model, content, include_reasoning=False):
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": content}
        ],
        "include_reasoning": include_reasoning
    }
    return requests.post(url, headers=headers, data=json.dumps(payload))

# R1 will reliably return "done" for the content portion of the response
content = f"{question} Please think this through, but don't output an answer -- only think about the problem, and output 'done'."
reasoning_response = do_req("deepseek/deepseek-r1", content, True)
reasoning = reasoning_response.json()['choices'][0]['message']['reasoning']

# Let's test!
simple_response = do_req("openai/gpt-3.5-turbo-instruct", question)
print(simple_response.json()['choices'][0]['message']['content'])

content = f"{question}. Here is some context to help you: {reasoning}"
smart_response = do_req("openai/gpt-3.5-turbo-instruct", content)
print(smart_response.json()['choices'][0]['message']['content'])
```
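If you adapt this pattern, it's worth guarding against responses that lack a `reasoning` field (for example, when the model doesn't support it or the request failed). A defensive helper, purely illustrative (the function name and sample dicts are ours, not part of the API):

```python
def extract_reasoning(response_json):
    """Return the reasoning string from a chat-completions response dict,
    or None if the response has no reasoning field."""
    try:
        return response_json["choices"][0]["message"].get("reasoning")
    except (KeyError, IndexError, TypeError):
        return None

# Works on a well-formed response with reasoning...
ok = {"choices": [{"message": {"content": "done",
                               "reasoning": "Compare digit by digit: 0.9 > 0.11..."}}]}
print(extract_reasoning(ok))

# ...and degrades gracefully when reasoning is absent or the shape is wrong.
plain = {"choices": [{"message": {"content": "9.9"}}]}
print(extract_reasoning(plain))  # None
print(extract_reasoning({}))     # None
```

This keeps the downstream prompt-stitching step from crashing on an unexpected response and lets you fall back to the plain question instead.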