© 2023 – 2025 OpenRouter, Inc

    DeepSeek: DeepSeek V3.2

    deepseek/deepseek-v3.2

    Created Dec 1, 2025 · 163,840 context
    $0.28/M input tokens · $0.40/M output tokens
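The listed rates are per million tokens; a minimal sketch of the resulting per-request cost arithmetic (the rates are from this page, the example token counts are illustrative):

```python
# Estimated request cost at this model's listed rates (USD per million tokens).
INPUT_RATE = 0.28 / 1_000_000   # $ per input token
OUTPUT_RATE = 0.40 / 1_000_000  # $ per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 10,000-token prompt with a 2,000-token completion
cost = estimate_cost(10_000, 2_000)
print(f"${cost:.4f}")  # roughly $0.0036
```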

    DeepSeek-V3.2 is a large language model designed to harmonize high computational efficiency with strong reasoning and agentic tool-use performance. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism that reduces training and inference cost while preserving quality in long-context scenarios. A scalable reinforcement learning post-training framework further improves reasoning, with reported performance in the GPT-5 class, and the model has demonstrated gold-medal results on the 2025 IMO and IOI. V3.2 also uses a large-scale agentic task synthesis pipeline to better integrate reasoning into tool-use settings, boosting compliance and generalization in interactive environments.
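This page does not specify how DeepSeek Sparse Attention works internally; as a rough intuition for why fine-grained sparsity cuts long-context cost, here is a generic toy top-k sparse attention sketch (not DSA itself: the selection rule, `k`, and the pure-Python layout are all illustrative assumptions) in which each query attends only to its k highest-scoring keys rather than all of them:

```python
import math

def sparse_attention(q, keys, values, k=2):
    """Toy fine-grained sparse attention for a single query vector.

    Instead of attending to every key (cost linear in sequence length per
    query), the query keeps only its top-k keys by dot-product score and
    softmaxes over that small subset -- the generic idea behind sparse
    attention; NOT the actual DSA selection rule.
    """
    # Dot-product score of the query against every key.
    scores = [sum(qi * ki for qi, ki in zip(q, key)) for key in keys]
    # The "sparse" step: indices of the k largest scores.
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    # Softmax over the selected scores only.
    exps = [math.exp(scores[i]) for i in top]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of the selected value vectors.
    dim = len(values[0])
    out = [0.0] * dim
    for w, i in zip(weights, top):
        for d in range(dim):
            out[d] += w * values[i][d]
    return out
```

With k fixed, the per-query work after selection is constant rather than growing with context length, which is the efficiency lever the description above alludes to.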

    Users can control reasoning behaviour with the reasoning `enabled` boolean. Learn more in our docs.
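A minimal sketch of toggling the reasoning boolean in a chat-completions request, assuming OpenRouter's documented `reasoning: {"enabled": ...}` request shape; actually sending it would require an API key, so this only constructs the payload:

```python
import json

# Request payload for OpenRouter's POST /api/v1/chat/completions endpoint.
# The "reasoning" object with its "enabled" boolean follows the request
# shape described in the OpenRouter docs; the prompt is illustrative.
payload = {
    "model": "deepseek/deepseek-v3.2",
    "messages": [
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
    # Set to False to disable the model's reasoning mode.
    "reasoning": {"enabled": True},
}

print(json.dumps(payload, indent=2))
```

Setting `"enabled": False` in the same field turns reasoning off for that request.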
