Model Comparison

Author: deepseek
Context Length: 131K

DeepSeek R1 Distill Llama 70B is a distilled large language model based on Llama-3.3-70B-Instruct, trained on outputs generated by DeepSeek R1. The distillation gives it strong results across multiple benchmarks, including:

  • AIME 2024 pass@1: 70.0
  • MATH-500 pass@1: 94.5
  • CodeForces Rating: 1633

Fine-tuning on DeepSeek R1's outputs lets the model reach performance comparable to larger frontier models.

Provider

Pricing

Input: $0.10 / M tokens
Output: $0.40 / M tokens
Images: —
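
Per-request cost at these rates is simple arithmetic. The sketch below is illustrative only; the token counts and the helper name are hypothetical, not values from this page.

```python
# Rough cost estimate at the listed rates:
# $0.10 per million input tokens, $0.40 per million output tokens.
INPUT_PRICE_PER_M = 0.10
OUTPUT_PRICE_PER_M = 0.40

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 4K-token prompt with a 1K-token completion.
print(f"${estimate_cost(4_000, 1_000):.6f}")  # -> $0.000800
```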

Endpoint Features

Quantization: fp8
Max Tokens (input + output): 131K
Max Output Tokens: 16K
Stream cancellation
Supports Tools
No Prompt Training
Reasoning
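
As a minimal sketch of how these endpoint features might be exercised, the example below assumes an OpenAI-compatible chat completions API with streaming. The base URL, API key, and model slug are placeholders and not confirmed by this page; only the 16K output limit and streaming support come from the listing above.

```python
# Minimal sketch: streaming request against an assumed OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/api/v1",  # placeholder endpoint, not from this page
    api_key="YOUR_API_KEY",                          # placeholder key
)

stream = client.chat.completions.create(
    model="deepseek-r1-distill-llama-70b",  # assumed slug; check your provider's catalog
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    max_tokens=16_000,   # stays within the listed 16K max output tokens
    stream=True,         # the endpoint lists streaming with cancellation support
)

# Print tokens as they arrive; closing the stream early cancels generation.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```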