  • © 2026 OpenRouter, Inc

    ArliAI: QwQ 32B RpR v1

    arliai/qwq-32b-arliai-rpr-v1

    Created Apr 13, 2025 · 32,768 context
    $0.03/M input tokens · $0.11/M output tokens

    QwQ-32B-ArliAI-RpR-v1 is a 32B parameter model fine-tuned from Qwen/QwQ-32B using a curated creative writing and roleplay dataset originally developed for the RPMax series. It is designed to maintain coherence and reasoning across long multi-turn conversations by introducing explicit reasoning steps per dialogue turn, generated and refined using the base model itself.

    The model was trained with RS-QLoRA+ at an 8K sequence length and supports context windows of up to 128K tokens (with practical performance around 32K). It is optimized for creative roleplay and dialogue generation, with an emphasis on minimizing cross-context repetition while preserving stylistic diversity.
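Since the model is served through OpenRouter's OpenAI-compatible chat completions endpoint, a request can be sketched as below. This is a minimal illustration using only the standard library; the system prompt and `max_tokens` value are placeholder choices, not recommendations from the model card.

```python
import json
import os
import urllib.request

MODEL_ID = "arliai/qwq-32b-arliai-rpr-v1"
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(messages, max_tokens=512):
    """Build an OpenAI-compatible chat completion payload for OpenRouter."""
    return {"model": MODEL_ID, "messages": messages, "max_tokens": max_tokens}

def send(payload, api_key):
    """POST the payload to OpenRouter and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build a roleplay-style request; only sent if an API key is configured.
payload = build_request([
    {"role": "system", "content": "You are the ship's navigator. Stay in character."},
    {"role": "user", "content": "What course do you recommend through the storm?"},
])
if os.environ.get("OPENROUTER_API_KEY"):
    reply = send(payload, os.environ["OPENROUTER_API_KEY"])
    print(reply["choices"][0]["message"]["content"])
```

Because the practical context is around 32K tokens, long roleplay sessions may need the oldest turns trimmed from `messages` before each request.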

    Recent activity on QwQ 32B RpR v1

    Total usage per day on OpenRouter

    Prompt: 217K
    Reasoning: 53K
    Completion: 8K

    Prompt tokens measure input size. Reasoning tokens show internal thinking before a response. Completion tokens reflect total output length.
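Given the listed rates, per-request cost is simple arithmetic. The sketch below assumes reasoning tokens are billed at the output rate, which is common practice but worth confirming against OpenRouter's billing details; the daily-total figures are taken from the activity numbers above.

```python
# Per-token rates derived from the listed per-million prices.
INPUT_RATE = 0.03 / 1_000_000   # $ per prompt token
OUTPUT_RATE = 0.11 / 1_000_000  # $ per output token (assumed to cover reasoning too)

def estimate_cost(prompt_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a request at this model's listed rates."""
    return prompt_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: the daily totals shown above
# (217K prompt, 53K reasoning + 8K completion billed as output).
daily = estimate_cost(217_000, 53_000 + 8_000)
print(f"${daily:.5f}")  # roughly $0.01322 for the whole day's traffic
```

The asymmetry matters for roleplay use: long accumulated chat history is cheap prompt input, while the model's reasoning traces add to the pricier output side.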