A high-performing, industry-standard 7.3B-parameter model, optimized for speed and longer context via grouped-query attention and sliding-window attention.
Mistral 7B Instruct has multiple versioned variants; this entry is intended to resolve to the latest one.
Prompt tokens count the input; reasoning tokens count any internal thinking produced before the response; completion tokens count the total generated output.
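As a minimal sketch of how these counts typically appear, the snippet below parses a hypothetical `usage` payload in the OpenAI-compatible format that many model APIs return; the field names and the sample numbers are illustrative assumptions, not values from this model.

```python
import json

# Hypothetical usage payload (illustrative numbers, OpenAI-compatible field names).
raw = """
{
  "usage": {
    "prompt_tokens": 120,
    "completion_tokens": 48,
    "completion_tokens_details": {"reasoning_tokens": 0}
  }
}
"""

usage = json.loads(raw)["usage"]
prompt = usage["prompt_tokens"]          # input size
completion = usage["completion_tokens"]  # total output length
# Reasoning tokens may be absent for non-reasoning models, so default to 0.
reasoning = usage.get("completion_tokens_details", {}).get("reasoning_tokens", 0)

print(prompt, completion, reasoning)
```

Billing dashboards usually report prompt and completion tokens separately because they are often priced at different rates.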