    Qrwkv 72B

    featherless/qwerky-72b

    Created Mar 20, 2025 · 32,768 context

    Qrwkv-72B is a linear-attention RWKV variant of the Qwen 2.5 72B model, optimized to significantly reduce computational cost at scale. By replacing quadratic attention with linear attention, it achieves substantial inference speedups (reported at over 1000x) while retaining competitive accuracy on common benchmarks such as ARC, HellaSwag, LAMBADA, and MMLU. It inherits knowledge and language support from Qwen 2.5 (roughly 30 languages), and its linear-attention design makes it well suited to efficient inference in large-context applications.
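    For intuition only, below is a minimal NumPy sketch of the scaling argument behind linear attention. It is not the actual RWKV/Qwerky formulation (which uses time-mixing and decayed WKV states); it only illustrates how folding history into a fixed-size running state keeps per-token cost independent of sequence length, whereas standard attention must score every query against every previous key. All names in the sketch are illustrative.

    import numpy as np

    def causal_softmax_attention(q, k, v):
        """Standard causal attention: builds a (T, T) score matrix, so cost
        per sequence grows quadratically with length T."""
        T, d = q.shape
        scores = q @ k.T / np.sqrt(d)
        scores = np.where(np.tril(np.ones((T, T), dtype=bool)), scores, -np.inf)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ v

    def linear_attention(q, k, v, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
        """Linear attention: history is folded into fixed-size running sums,
        so each new token costs O(d^2) regardless of how long the context is."""
        T, d = q.shape
        kv_state = np.zeros((d, d))   # running sum of phi(k_t) outer v_t
        k_state = np.zeros(d)         # running sum of phi(k_t), for normalization
        out = []
        for t in range(T):
            kv_state += np.outer(phi(k[t]), v[t])
            k_state += phi(k[t])
            qt = phi(q[t])
            out.append((qt @ kv_state) / (qt @ k_state))
        return np.stack(out)

    T, d = 8, 4
    rng = np.random.default_rng(0)
    q, k, v = (rng.standard_normal((T, d)) for _ in range(3))
    print(causal_softmax_attention(q, k, v).shape)  # (8, 4)
    print(linear_attention(q, k, v).shape)          # (8, 4)

    The two functions produce outputs of the same shape, but only the second can process an arbitrarily long context with a constant-size state, which is the property that large-context linear-attention models rely on.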

    Recent activity on Qrwkv 72B (total usage per day on OpenRouter): not enough data to display yet.