Hugging Face: Zephyr 7B (free)
Zephyr is a series of language models trained to act as helpful assistants. Zephyr-7B-β is the second model in the series: a fine-tuned version of mistralai/Mistral-7B-v0.1, trained on a mix of publicly available synthetic datasets using Direct Preference Optimization (DPO).
OpenRouter first attempts the primary provider and falls back to others if it encounters an error. Prices are displayed per million tokens.
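Because OpenRouter handles provider fallback server-side, a client sends a single request to its OpenAI-compatible chat completions endpoint regardless of which provider ultimately serves it. The sketch below shows one way to call this model from Python using only the standard library; the model slug (`huggingfaceh4/zephyr-7b-beta:free`) and the `:free` suffix are assumptions based on OpenRouter's usual naming, so verify them against the model page before use.

```python
import json
import os
import urllib.request

# Assumed endpoint and model slug -- check the model page to confirm.
API_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "huggingfaceh4/zephyr-7b-beta:free"


def build_request(prompt: str) -> dict:
    """Build the chat-completions payload. Provider selection and
    fallback happen on OpenRouter's side, so nothing here names a
    specific upstream provider."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(prompt: str) -> str:
    """Send the request and return the assistant's reply.
    Requires an OPENROUTER_API_KEY environment variable."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

If every provider for the model errors out, the API returns an error to the client, so production code would typically wrap `ask` in its own retry or timeout handling as well.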