Qwen2 7B is a transformer-based model that excels in language understanding, multilingual capabilities, coding, mathematics, and reasoning.
It features SwiGLU activation, attention QKV bias, and group query attention. It is pretrained on extensive data and then post-trained with supervised finetuning and direct preference optimization.
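Group query attention reduces the memory cost of the key/value cache by letting several query heads share one key/value head. The sketch below is purely illustrative (small made-up head counts and dimensions, not Qwen2 7B's actual configuration) and shows the core idea: key/value heads are repeated so each group of query heads attends through its shared head.

```python
import numpy as np

def group_query_attention(q, k, v, n_kv_heads):
    """Minimal group query attention sketch.

    q: (n_q_heads, seq, d) queries
    k, v: (n_kv_heads, seq, d) shared key/value heads
    """
    n_q_heads, seq, d = q.shape
    group = n_q_heads // n_kv_heads
    # Each kv head is shared by `group` consecutive query heads.
    k = np.repeat(k, group, axis=0)
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    # Numerically stable softmax over the key dimension.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

# 8 query heads share 2 kv heads (4 query heads per group).
q = np.random.rand(8, 5, 16)
k = np.random.rand(2, 5, 16)
v = np.random.rand(2, 5, 16)
out = group_query_attention(q, k, v, n_kv_heads=2)
# out has shape (8, 5, 16): one output per query head.
```

With multi-head attention the cache would hold 8 key/value heads; here it holds only 2, which is the saving group query attention targets.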
For more details, see this blog post and GitHub repo.
Usage of this model is subject to the Tongyi Qianwen LICENSE AGREEMENT.