Mistral: Mistral Small 3
mistralai/mistral-small-24b-instruct-2501
Created Jan 30, 2025 · 32,000-token context
$0.10/M input tokens · $0.30/M output tokens
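For a sense of what those per-million-token rates mean in practice, here is a minimal sketch that converts them into a per-request dollar cost. The token counts in the example are purely illustrative, not part of this listing.

```python
# Estimate request cost from the listed per-million-token rates.
INPUT_RATE = 0.10 / 1_000_000   # $ per input token  ($0.10/M)
OUTPUT_RATE = 0.30 / 1_000_000  # $ per output token ($0.30/M)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a prompt that nearly fills the 32,000-token context window
# plus a 1,000-token completion (illustrative numbers).
print(f"${request_cost(31_000, 1_000):.4f}")  # -> $0.0034
```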
Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment.
The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. Read the blog post about the model here.
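As a usage reference, the model can be called by its listed ID through any OpenAI-compatible chat completions endpoint. The sketch below assumes OpenRouter's base URL and an API key stored in an `OPENROUTER_API_KEY` environment variable; both are assumptions for illustration rather than details from this listing.

```python
# Minimal sketch: calling the model by its listed ID through an
# OpenAI-compatible chat completions endpoint (OpenRouter assumed).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",    # assumed endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],   # assumed env var
)

response = client.chat.completions.create(
    model="mistralai/mistral-small-24b-instruct-2501",  # ID from this listing
    messages=[
        {"role": "user", "content": "Summarize Mistral Small 3 in one sentence."}
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```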