    ArliAI: QwQ 32B RpR v1 (free) · Free variant

    arliai/qwq-32b-arliai-rpr-v1:free

    Created Apr 13, 2025 · 32,768 context
    $0/M input tokens · $0/M output tokens

    QwQ-32B-ArliAI-RpR-v1 is a 32B parameter model fine-tuned from Qwen/QwQ-32B using a curated creative writing and roleplay dataset originally developed for the RPMax series. It is designed to maintain coherence and reasoning across long multi-turn conversations by introducing explicit reasoning steps per dialogue turn, generated and refined using the base model itself.

    The model was trained using RS-QLORA+ at an 8K sequence length and supports context windows of up to 128K tokens (with practical performance around 32K). It is optimized for creative roleplay and dialogue generation, with an emphasis on minimizing cross-context repetition while preserving stylistic diversity.
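
    For illustration, a minimal sketch of requesting this model through OpenRouter's OpenAI-compatible chat completions endpoint, written in Python with the requests package. The environment variable name and the prompt are placeholder assumptions; the model ID is the one listed above.

        import os
        import requests

        # Minimal sketch: call the free QwQ 32B RpR v1 variant via OpenRouter's
        # OpenAI-compatible chat completions endpoint. Assumes OPENROUTER_API_KEY
        # is set in the environment; the prompt is a placeholder.
        response = requests.post(
            "https://openrouter.ai/api/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
            json={
                "model": "arliai/qwq-32b-arliai-rpr-v1:free",
                "messages": [
                    {
                        "role": "user",
                        "content": "In character as a ship's navigator, describe the storm ahead.",
                    }
                ],
            },
            timeout=120,
        )
        response.raise_for_status()
        print(response.json()["choices"][0]["message"]["content"])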

    [Chart: recent activity on QwQ 32B RpR v1 (free), total usage per day on OpenRouter]