Hugging Face: Zephyr 7B (free)


Updated Nov 4 · 4,096 context
$0 / 1M input tokens · $0 / 1M output tokens

Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series, and is a fine-tuned version of mistralai/Mistral-7B-v0.1 that was trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO).

Note: this is a free, rate-limited version of this model. Outputs may be cached. Read about rate limits here.

OpenRouter first attempts the primary provider, and falls back to others if it encounters an error. Prices displayed per million tokens.
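Since this model is accessed through OpenRouter's OpenAI-compatible chat-completions API, a request can be sketched as below. This is a minimal illustration, not official usage: the endpoint URL and the model slug `huggingfaceh4/zephyr-7b-beta:free` are assumptions based on OpenRouter's usual naming conventions, so verify them against the API snippet on the model page before relying on this.

```python
# Minimal sketch of querying the free Zephyr 7B variant via OpenRouter's
# OpenAI-compatible chat-completions endpoint (stdlib only).
# NOTE: the URL and model slug below are assumptions; confirm on the model page.
import json
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed endpoint


def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completions request for the free, rate-limited variant."""
    payload = {
        "model": "huggingfaceh4/zephyr-7b-beta:free",  # assumed model slug
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


if __name__ == "__main__":
    # Sending the request requires a real OpenRouter API key.
    req = build_request("YOUR_API_KEY", "Say hello.")
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the free tier is rate-limited, production callers would typically also handle HTTP 429 responses with a retry/backoff loop.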