Nous: Hermes 2 Mixtral 8x7B SFT

nousresearch/nous-hermes-2-mixtral-8x7b-sft

Updated Jan 16 · 32,769 context
$0.54/M input tokens · $0.54/M output tokens

Nous Hermes 2 Mixtral 8x7B SFT is the supervised fine-tune (SFT) only version of the Nous Research model trained on top of the Mixtral 8x7B MoE LLM.

The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high-quality data from open datasets across the AI landscape, achieving state-of-the-art performance on a variety of tasks.
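The model is served through OpenRouter's OpenAI-compatible chat completions endpoint under the slug shown above. Below is a minimal sketch of a request, assuming an API key is available in an `OPENROUTER_API_KEY` environment variable; the prompt text is only illustrative.

```python
# Minimal sketch: query the model via OpenRouter's OpenAI-compatible API.
# Assumes OPENROUTER_API_KEY is set in the environment.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "nousresearch/nous-hermes-2-mixtral-8x7b-sft",
        "messages": [
            {"role": "user", "content": "Summarize the Mixture-of-Experts architecture in two sentences."}
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```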

#moe

OpenRouter attempts available providers in a default order unless you set dynamic routing preferences. Prices are displayed per million tokens.
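As a rough worked example of the per-million-token pricing above, the sketch below estimates the cost of a single request; actual billing depends on the provider's token accounting.

```python
# Rough cost estimate at the listed rates: $0.54 per million tokens
# for both input and output.
INPUT_RATE = OUTPUT_RATE = 0.54 / 1_000_000  # USD per token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 1,500-token prompt with a 500-token reply costs about $0.00108
print(f"${estimate_cost(1500, 500):.5f}")
```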