
    DeepSeek: R1 Distill Qwen 7B

    deepseek/deepseek-r1-distill-qwen-7b

Created May 30, 2025 · 131,072-token context

DeepSeek-R1-Distill-Qwen-7B is a 7-billion-parameter dense language model distilled from DeepSeek-R1, trained on reinforcement-learning-enhanced reasoning data generated by DeepSeek's larger models. The distillation process transfers advanced reasoning, math, and code capabilities into a smaller, more efficient architecture based on Qwen2.5-Math-7B. The model performs strongly on mathematical benchmarks (92.8% pass@1 on MATH-500), coding tasks (Codeforces rating 1189), and general reasoning (49.1% pass@1 on GPQA Diamond), achieving accuracy competitive with larger models at a much lower inference cost.
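
    Since the model is served through OpenRouter's OpenAI-compatible chat completions endpoint, a minimal sketch of querying it might look like the following. The endpoint URL, request shape, and the model slug shown above follow OpenRouter's public API; the prompt and the `OPENROUTER_API_KEY` environment variable are illustrative assumptions.

```python
import os
import requests

# Minimal sketch: call the distilled model via OpenRouter's
# OpenAI-compatible chat completions API. Assumes an API key
# is available in the OPENROUTER_API_KEY environment variable.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepseek/deepseek-r1-distill-qwen-7b",
        "messages": [
            # Illustrative prompt playing to the model's math strengths.
            {"role": "user", "content": "What is 12 * 17 - 9? Show your reasoning."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```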
