Reflection Llama-3.1 70B is trained with Reflection-Tuning, a technique that teaches an LLM to detect mistakes in its own reasoning and correct course.
The model was trained on synthetic data.
These are free, rate-limited endpoints for Reflection 70B. Outputs may be cached. Read about rate limits here.
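As a rough sketch of what using a reflection-style model on a rate-limited endpoint can look like, the snippet below sends a chat completion request to an assumed OpenAI-compatible URL, backs off and retries when the server returns HTTP 429, and extracts the final answer from reflection-style `<output>` tags. The endpoint URL, model slug, `API_KEY` environment variable, and tag names are illustrative assumptions, not details confirmed by this listing.

```python
import os
import re
import time
import requests

# Assumed OpenAI-compatible endpoint and model slug; adjust to the provider you actually use.
API_URL = "https://example-router.invalid/v1/chat/completions"
MODEL = "reflection-llama-3.1-70b"  # hypothetical slug

def ask_with_retry(prompt: str, max_retries: int = 5) -> str:
    """Send a chat completion request, backing off when the free tier rate-limits us."""
    payload = {"model": MODEL, "messages": [{"role": "user", "content": prompt}]}
    headers = {"Authorization": f"Bearer {os.environ['API_KEY']}"}
    for attempt in range(max_retries):
        resp = requests.post(API_URL, json=payload, headers=headers, timeout=60)
        if resp.status_code == 429:
            # Rate limited: wait with exponential backoff, then try again.
            time.sleep(2 ** attempt)
            continue
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
    raise RuntimeError("rate limit retries exhausted")

def final_answer(raw: str) -> str:
    """Strip reflection-style markup, keeping only the <output> block if present."""
    match = re.search(r"<output>(.*?)</output>", raw, re.DOTALL)
    return match.group(1).strip() if match else raw.strip()

if __name__ == "__main__":
    print(final_answer(ask_with_retry("What is 17 * 23? Check your reasoning.")))
```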
Claude 3.5 Sonnet delivers better-than-Opus capabilities and faster-than-Sonnet speeds at the same Sonnet prices. Sonnet is particularly good at coding, data science, visual processing, and agentic tasks.
#multimodal
Qwen2 7B is a transformer-based model that excels in language understanding, multilingual capabilities, coding, mathematics, and reasoning.
It features SwiGLU activation, attention QKV bias, and grouped-query attention. It was pretrained on extensive data and further trained with supervised finetuning and direct preference optimization.
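To make the SwiGLU mention concrete, here is a minimal PyTorch sketch of a SiLU-gated feed-forward block of the kind used in LLaMA-style models. It is a generic illustration of the technique, not Qwen2's actual implementation, and the layer sizes in the usage example are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """Gated feed-forward block: down(silu(gate(x)) * up(x))."""

    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden_dim, bias=False)
        self.up = nn.Linear(dim, hidden_dim, bias=False)
        self.down = nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SiLU-gated linear unit: the gate branch modulates the up projection.
        return self.down(F.silu(self.gate(x)) * self.up(x))

if __name__ == "__main__":
    block = SwiGLU(dim=64, hidden_dim=256)  # placeholder sizes, not Qwen2 7B's dimensions
    print(block(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```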
For more details, see this blog post and GitHub repo.
Usage of this model is subject to the Tongyi Qianwen LICENSE AGREEMENT.