Mistral: Mixtral 8x22B Instruct

mistralai/mixtral-8x22b-instruct

Created Apr 17, 2024 · 65,536 context
$0.9/M input tokens · $0.9/M output tokens
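Because input and output tokens are billed at the same flat rate, the cost of a request is easy to estimate. The helper below is an illustrative sketch; the function name and structure are ours, not part of OpenRouter's API.

```python
def request_cost_usd(prompt_tokens: int, completion_tokens: int,
                     usd_per_million: float = 0.9) -> float:
    """Estimate the cost of one request at a flat per-million-token rate."""
    return (prompt_tokens + completion_tokens) / 1_000_000 * usd_per_million

# A request with a 2,000-token prompt and a 500-token reply:
print(request_cost_usd(2000, 500))  # → 0.00225
```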

Mistral's official instruct fine-tuned version of Mixtral 8x22B, a sparse mixture-of-experts model that activates 39B of its 141B total parameters per token, giving it strong cost efficiency for its size. Its strengths include:

  • strong math, coding, and reasoning
  • large context length (64k)
  • fluency in English, French, Italian, German, and Spanish

See benchmarks in the launch announcement. #moe
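The model is addressed by the slug above via OpenRouter's OpenAI-compatible chat completions API. The sketch below only constructs and serializes a minimal request body, with no network call; the endpoint URL and field names follow the OpenAI-compatible convention and should be checked against OpenRouter's current documentation.

```python
import json

MODEL_ID = "mistralai/mixtral-8x22b-instruct"
# Endpoint assumed from OpenRouter's OpenAI-compatible API:
ENDPOINT = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_payload(user_message: str, max_tokens: int = 512) -> str:
    """Serialize a minimal chat-completions request body for this model."""
    body = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": user_message}],
        # Keep the request well under the model's 65,536-token context window.
        "max_tokens": max_tokens,
    }
    return json.dumps(body)

# The model handles French (and English, Italian, German, Spanish) natively:
payload = build_chat_payload("Expliquez le mécanisme mixture-of-experts.")
print(json.loads(payload)["model"])  # → mistralai/mixtral-8x22b-instruct
```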

Recent activity on Mixtral 8x22B Instruct

Tokens processed per day

[Chart: daily tokens processed, Jan 3 – Apr 30, ranging roughly 30M–120M]


Mistral: Mixtral 8x22B Instruct – Recent Activity | OpenRouter