Mistral: Mixtral 8x22B (base)

mistralai/mixtral-8x22b

Updated Apr 10 · 65,536 context
$0.9/M input tokens · $0.9/M output tokens
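
For reference, a quick cost estimate at these listed rates (a minimal sketch with hypothetical request sizes, not an official calculator):

```python
INPUT_PRICE = 0.9 / 1_000_000    # $0.9 per million input tokens
OUTPUT_PRICE = 0.9 / 1_000_000   # $0.9 per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one request at the listed rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. 10,000 prompt tokens + 2,000 completion tokens:
print(f"${request_cost(10_000, 2_000):.4f}")   # $0.0108
```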

Mixtral 8x22B is a large-scale sparse mixture-of-experts (MoE) language model from Mistral AI. It comprises 8 experts of 22 billion parameters each, with each token routed to 2 experts at a time.
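
To illustrate the routing idea (8 experts, 2 active per token), here is a toy top-2 gating sketch. The weights, dimensions, and router here are made up for illustration and are not Mistral's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # Mixtral 8x22B has 8 experts
TOP_K = 2         # each token is routed to 2 experts
D_MODEL = 16      # toy hidden size, purely illustrative

# Toy expert weights and a router (gating) matrix.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D_MODEL, NUM_EXPERTS))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token (row of x) to its top-2 experts and mix their outputs."""
    logits = x @ router                               # (tokens, NUM_EXPERTS)
    out = np.zeros_like(x)
    for i, token in enumerate(x):
        top = np.argsort(logits[i])[-TOP_K:]          # indices of the 2 best experts
        weights = np.exp(logits[i][top])
        weights /= weights.sum()                      # softmax over the chosen 2
        for w, e in zip(weights, top):
            out[i] += w * (token @ experts[e])        # weighted sum of expert outputs
    return out

tokens = rng.standard_normal((4, D_MODEL))            # 4 toy tokens
print(moe_layer(tokens).shape)                        # (4, 16)
```

Because only 2 of the 8 experts run per token, inference cost scales with the active parameters rather than the full parameter count, which is the main appeal of the MoE design.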

It was released via X.

#moe

OpenRouter attempts providers in the order listed on the model page unless you set dynamic routing preferences. Prices are displayed per million tokens.