DeepSeek: DeepSeek R1 Zero (free)

deepseek/deepseek-r1-zero:free

Created Mar 6, 2025 · 163,840 context
$0/M input tokens · $0/M output tokens

DeepSeek-R1-Zero is a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step. It has 671B total parameters, with 37B active per inference pass.

It demonstrates remarkable performance on reasoning tasks. Through RL alone, DeepSeek-R1-Zero naturally developed numerous powerful and interesting reasoning behaviors.

However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. See DeepSeek R1 for the variant that incorporates SFT to address these issues.

Providers for DeepSeek R1 Zero (free)

OpenRouter routes requests to the best providers that are able to handle your prompt size and parameters, with fallbacks to maximize uptime.
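As a minimal sketch of using this model through OpenRouter's OpenAI-compatible chat-completions endpoint: the URL and payload shape below follow OpenRouter's public API, while the `OPENROUTER_API_KEY` environment-variable name and the `ask` helper are assumptions for illustration.

```python
import json
import os
import urllib.request

# OpenRouter's OpenAI-compatible chat-completions endpoint.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str) -> dict:
    """Build the JSON payload for a single-turn chat completion
    against the free DeepSeek R1 Zero route."""
    return {
        "model": "deepseek/deepseek-r1-zero:free",
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> str:
    """Send the request and return the assistant's reply.
    Assumes an OPENROUTER_API_KEY environment variable (hypothetical name)."""
    payload = build_request(prompt)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response shape: first choice's message content.
    return body["choices"][0]["message"]["content"]
```

Because the routing layer handles provider selection and fallback, the request only names the model slug; OpenRouter picks a provider that can serve the prompt size and parameters.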
