Mistral: Mistral Small 3

mistralai/mistral-small-24b-instruct-2501

Created Jan 30, 2025 · 32,768-token context
$0.07/M input tokens · $0.14/M output tokens

Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment.

The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models such as Llama 3.3 70B and Qwen 32B, while running at roughly three times their speed on equivalent hardware. See Mistral's announcement blog post for further details.
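The model ID above can be used directly against OpenRouter's OpenAI-compatible chat completions endpoint. The sketch below is a minimal illustration, assuming an `OPENROUTER_API_KEY` environment variable holds a valid key; the endpoint URL and request shape follow OpenRouter's documented API.

```python
# Minimal sketch of a request to this model via OpenRouter's
# OpenAI-compatible chat completions endpoint (assumption: an
# OPENROUTER_API_KEY environment variable holds a valid key).
import json
import os
import urllib.request

payload = {
    "model": "mistralai/mistral-small-24b-instruct-2501",
    "messages": [
        {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}
    ],
}

request = urllib.request.Request(
    "https://openrouter.ai/api/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    },
)

# Uncomment to actually send the request (requires a valid API key):
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(payload["model"])
```

At the listed rates, a call consuming 1,000 input and 1,000 output tokens would cost roughly $0.00007 + $0.00014 = $0.00021.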

Recent activity on Mistral Small 3

[Chart: tokens processed per day on OpenRouter, Jan 30 – Apr 15, ranging from 0 to roughly 2.2B.]
