DeepSeek: R1 Distill Qwen 32B

deepseek/deepseek-r1-distill-qwen-32b:free

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on Qwen 2.5 32B, using outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Other benchmark results include:

- AIME 2024 pass@1: 72.6
- MATH-500 pass@1: 94.3
- CodeForces Rating: 1691

The model leverages fine-tuning from DeepSeek R1's outputs, enabling competitive performance comparable to larger frontier models.
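The slug above can be passed as the `model` field of OpenRouter's OpenAI-compatible chat completions endpoint. A minimal sketch, assuming the documented `https://openrouter.ai/api/v1/chat/completions` endpoint and an `OPENROUTER_API_KEY` environment variable (the variable name and helper functions are illustrative, not part of this page):

```python
import json
import os
import urllib.request

# Model slug as listed on this page; the ":free" suffix selects the free variant.
MODEL = "deepseek/deepseek-r1-distill-qwen-32b:free"
API_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion payload for this model."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(prompt: str) -> str:
    """Send the request; requires OPENROUTER_API_KEY in the environment."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response shape: first choice's message content.
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Inspect the payload without sending a request.
    print(json.dumps(build_request("Explain distillation briefly."), indent=2))
```

Any client that speaks the OpenAI chat completions schema can be pointed at the same endpoint with this slug.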

- Modalities:
- Price: Free
- Context: 128K
- Weekly Tokens: 324M
- Released: Jan 29, 2025

Activity

Recent activity on R1 Distill Qwen 32B (total usage per day on OpenRouter): not enough data to display yet.