
    DeepSeek: DeepSeek V3.2

    deepseek/deepseek-v3.2

    Created Dec 1, 2025 | 163,840 context
    $0.28/M input tokens | $0.40/M output tokens

    DeepSeek-V3.2 is a large language model designed to harmonize high computational efficiency with strong reasoning and agentic tool-use performance. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism that reduces training and inference cost while preserving quality in long-context scenarios. A scalable reinforcement learning post-training framework further improves reasoning, with reported performance in the GPT-5 class, and the model has demonstrated gold-medal results on the 2025 IMO and IOI. V3.2 also uses a large-scale agentic task synthesis pipeline to better integrate reasoning into tool-use settings, boosting compliance and generalization in interactive environments.

    Users can control the reasoning behaviour with the enabled boolean on the reasoning parameter (see the sample request below). Learn more in our docs

    Sample code and API for DeepSeek V3.2

    OpenRouter normalizes requests and responses across providers for you.

    OpenRouter supports reasoning-enabled models that can show their step-by-step thinking process. Use the reasoning parameter in your request to enable reasoning, and access the reasoning_details array in the response to see the model's internal reasoning before the final answer. When continuing a conversation, preserve the complete reasoning_details when passing messages back to the model so it can continue reasoning from where it left off. Learn more about reasoning tokens.
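
    For example, a request with reasoning enabled and a follow-up turn that preserves reasoning_details might look like the sketch below (the request/response shapes are assumptions based on the description above; the API key and prompts are placeholders, and the reasoning docs remain the authoritative reference):

    ```python
    # Minimal sketch: enable reasoning, read reasoning_details, and pass the
    # assistant message back unchanged so the model can continue reasoning.
    import requests

    API_URL = "https://openrouter.ai/api/v1/chat/completions"
    HEADERS = {"Authorization": "Bearer <OPENROUTER_API_KEY>"}

    messages = [{"role": "user", "content": "What is 17 * 24?"}]

    resp = requests.post(
        API_URL,
        headers=HEADERS,
        json={
            "model": "deepseek/deepseek-v3.2",
            "messages": messages,
            "reasoning": {"enabled": True},  # assumed shape of the reasoning parameter
        },
    ).json()

    assistant_msg = resp["choices"][0]["message"]
    print(assistant_msg.get("reasoning_details"))  # step-by-step thinking, if returned
    print(assistant_msg["content"])                # final answer

    # Continue the conversation: keep reasoning_details intact on the assistant message.
    messages.append(assistant_msg)
    messages.append({"role": "user", "content": "Now divide that result by 6."})

    followup = requests.post(
        API_URL,
        headers=HEADERS,
        json={
            "model": "deepseek/deepseek-v3.2",
            "messages": messages,
            "reasoning": {"enabled": True},
        },
    ).json()
    print(followup["choices"][0]["message"]["content"])
    ```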

    In the examples below, the OpenRouter-specific headers are optional. Setting them allows your app to appear on the OpenRouter leaderboards.
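
    For instance, the optional headers can be sent alongside your API key (header names as documented by OpenRouter; the URL and title values below are placeholders for your own app):

    ```python
    # Optional attribution headers; omit them if you don't want your app
    # to appear on the OpenRouter leaderboards.
    headers = {
        "Authorization": "Bearer <OPENROUTER_API_KEY>",
        "HTTP-Referer": "https://your-app.example.com",  # your app's URL
        "X-Title": "Your App Name",                      # your app's display name
    }
    ```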

    Using third-party SDKs

    For information about using third-party SDKs and frameworks with OpenRouter, please see our frameworks documentation.
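
    As one illustration, the OpenAI Python SDK can be pointed at the OpenRouter API by overriding its base URL (a sketch, not official sample code; the key and prompt are placeholders):

    ```python
    # Use the OpenAI SDK against OpenRouter by overriding base_url.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="<OPENROUTER_API_KEY>",
    )

    completion = client.chat.completions.create(
        model="deepseek/deepseek-v3.2",
        messages=[{"role": "user", "content": "Summarize DeepSeek Sparse Attention in one sentence."}],
    )
    print(completion.choices[0].message.content)
    ```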

    See the Request docs for all possible fields, and Parameters for explanations of specific sampling parameters.
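
    As a rough illustration, a request body might combine common sampling parameters like this (illustrative values only; the Request and Parameters docs list the supported fields and their ranges):

    ```python
    # Illustrative request body with common sampling parameters.
    payload = {
        "model": "deepseek/deepseek-v3.2",
        "messages": [{"role": "user", "content": "Write a haiku about sparse attention."}],
        "temperature": 0.7,   # sampling randomness
        "top_p": 0.9,         # nucleus sampling cutoff
        "max_tokens": 256,    # cap on generated tokens
    }
    ```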