DeepSeek: R1 Distill Llama 70B

deepseek/deepseek-r1-distill-llama-70b

Created Jan 23, 2025 · 128,000 context
$0.10/M input tokens · $0.40/M output tokens

DeepSeek R1 Distill Llama 70B is a distilled large language model based on Llama-3.3-70B-Instruct, fine-tuned on outputs from DeepSeek R1. The distillation transfers much of R1's reasoning ability to the smaller Llama base, yielding strong results across multiple benchmarks, including the following (a sketch of pass@1 scoring appears after the list):

  • AIME 2024 pass@1: 70.0
  • MATH-500 pass@1: 94.5
  • CodeForces Rating: 1633
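
Pass@1 means each problem is scored on a single sampled completion. The page does not describe DeepSeek's exact evaluation harness, so the following is only a minimal sketch of how such a score is typically computed:

```python
# Hedged sketch (not DeepSeek's actual harness): pass@1 scores each
# problem by one sampled completion and reports the fraction correct.
def pass_at_1(correct: list[bool]) -> float:
    """correct[i] is True iff the single sample for problem i passed."""
    return sum(correct) / len(correct)

# e.g. 21 of AIME's 30 problems solved on the first sample -> 0.70
print(pass_at_1([True] * 21 + [False] * 9))  # 0.7
```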

Because it is fine-tuned on DeepSeek R1's outputs, the model achieves performance comparable to larger frontier models at a fraction of their size.
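
As a usage illustration, the model can be queried through OpenRouter's OpenAI-compatible chat completions endpoint using the slug above. A minimal sketch, assuming an API key is set in the OPENROUTER_API_KEY environment variable:

```python
import os

import requests

# Minimal sketch: call the model via OpenRouter's OpenAI-compatible
# chat completions endpoint. Assumes OPENROUTER_API_KEY is set.
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepseek/deepseek-r1-distill-llama-70b",
        "messages": [{"role": "user", "content": "What is 12 * 17?"}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```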

Recent activity on R1 Distill Llama 70B

Tokens processed per day

[Chart: tokens processed per day, Jan 23 to Apr 17; y-axis 0 to 3B]