Mistral: Mistral Small

mistralai/mistral-small

Updated Jan 10 · 32,000 context
$2 / 1M input tokens · $6 / 1M output tokens
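As a quick illustration of the per-million-token pricing above, the cost of a single request scales linearly with its input and output token counts. The request size in the sketch below is a made-up example, not a measured value:

```python
# Cost estimate at the listed rates: $2 / 1M input tokens, $6 / 1M output tokens.
INPUT_PRICE_PER_M = 2.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 6.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-million-token rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical request: 1,200 prompt tokens and 400 completion tokens.
print(f"${request_cost(1_200, 400):.4f}")  # $0.0048
```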

This model is currently powered by Mixtral-8x7B-v0.1, a sparse mixture-of-experts model with 12B active parameters. It offers better reasoning, exhibits more capabilities, can produce and reason about code, and is multilingual, supporting English, French, German, Italian, and Spanish. #moe

OpenRouter attempts the primary provider first and falls back to others if it encounters an error. Prices are displayed per million tokens.
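OpenRouter exposes an OpenAI-compatible chat completions API, so the model can be requested by its slug. The sketch below assumes the standard https://openrouter.ai/api/v1/chat/completions endpoint and an `OPENROUTER_API_KEY` environment variable; it is a minimal illustration, not a definitive client:

```python
# Minimal sketch: call mistralai/mistral-small through OpenRouter's
# OpenAI-compatible chat completions endpoint.
# Assumes the API key is stored in the OPENROUTER_API_KEY environment variable.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "mistralai/mistral-small",
        "messages": [
            {"role": "user", "content": "Summarize what a mixture-of-experts model is."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

If the primary provider for this model returns an error, the fallback routing described above happens transparently; the request and response shapes stay the same regardless of which provider serves the completion.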