
Best AI Models for Roleplay (RP) and Creative Writing

Model rankings updated February 2026 based on real usage data.

Discover the top AI models for roleplay (RP), character chat and creative writing, ranked by real usage data on OpenRouter. These LLMs excel at maintaining consistent personas, rich dialogue and immersive storytelling across long-context sessions.

Whether you're using Janitor AI, SillyTavern or another frontend, or building your own character chatbot or interactive fiction engine, OpenRouter gives you access to the best roleplay models through a single API.
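To illustrate the single-API workflow described above, here is a minimal sketch of a roleplay request against OpenRouter's OpenAI-compatible chat completions endpoint. The model slug, persona prompt, and helper name are illustrative assumptions; check the model pages for exact identifiers.

```python
import json
import os
import urllib.request

def build_rp_request(model, persona, user_message):
    """Assemble a chat-completions payload with a persona system prompt.

    Keeping the persona in the system message is the usual pattern for
    character chat: the model treats it as standing instructions across turns.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": user_message},
        ],
    }

# Illustrative slug and persona; swap in any model from the leaderboard below.
payload = build_rp_request(
    "deepseek/deepseek-chat",
    "You are Captain Mira, a dry-witted starship engineer. Stay in character.",
    "Mira, the warp core is rattling again. What do we do?",
)

# Only send the request if an API key is configured.
api_key = os.environ.get("OPENROUTER_API_KEY")
if api_key:
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
        print(reply)
```

Because the endpoint is OpenAI-compatible, the same payload works unchanged with any model on the leaderboard; only the model slug needs to change.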

LLM Leaderboard for Roleplay Models

Rank · model (author) · weekly tokens · usage share

 1. Deepseek V3.2 (deepseek) · 512B · 32.0%
 2. Grok 4.1 Fast (x-ai) · 124B · 7.8%
 3. gpt-oss-120b (openai) · 79.2B · 5.0%
 4. Gemini 3 Flash Preview (google) · 77.3B · 4.8%
 5. Gemini 2.5 Flash Lite (google) · 71.9B · 4.5%
 6. GLM 5 (z-ai) · 64.4B · 4.0%
 7. Gemini 2.5 Flash (google) · 54.7B · 3.4%
 8. Step 3.5 Flash (free) (stepfun) · 40.6B · 2.5%
 9. GLM 4.7 (z-ai) · 39.5B · 2.5%
10. Others · 536B · 33.5%

Top Roleplay Models on OpenRouter

Based on weekly usage data from the millions of users accessing AI models for roleplay through OpenRouter.


Google: Gemini 3 Flash Preview

1.01T tokens

Gemini 3 Flash Preview is a high-speed, high-value thinking model designed for agentic workflows, multi-turn chat, and coding assistance. It delivers near-Pro-level reasoning and tool-use performance with substantially lower latency than larger Gemini variants, making it well suited for interactive development, long-running agent loops, and collaborative coding tasks. Compared to Gemini 2.5 Flash, it provides broad quality improvements across reasoning, multimodal understanding, and reliability.

The model supports a 1M-token context window and multimodal inputs including text, images, audio, video, and PDFs, with text output. It includes configurable reasoning via thinking levels (minimal, low, medium, high), structured output, tool use, and automatic context caching. Gemini 3 Flash Preview is optimized for users who want strong reasoning and agentic behavior without the cost or latency of full-scale frontier models.

by google · 1.05M context · $0.50/M input tokens · $3/M output tokens · $1/M audio tokens

DeepSeek: DeepSeek V3.2

830B tokens

DeepSeek-V3.2 is a large language model designed to harmonize high computational efficiency with strong reasoning and agentic tool-use performance. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism that reduces training and inference cost while preserving quality in long-context scenarios. A scalable reinforcement learning post-training framework further improves reasoning, with reported performance in the GPT-5 class, and the model has demonstrated gold-medal results on the 2025 IMO and IOI. V3.2 also uses a large-scale agentic task synthesis pipeline to better integrate reasoning into tool-use settings, boosting compliance and generalization in interactive environments.

Users can control the reasoning behaviour with the reasoning "enabled" boolean parameter. Learn more in our docs.
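A minimal sketch of toggling reasoning through the request-level reasoning field, as described above. The exact model slug is an assumption; check the model page for the current identifier.

```python
def with_reasoning(payload, enabled):
    """Return a copy of a chat payload with reasoning switched on or off."""
    return {**payload, "reasoning": {"enabled": enabled}}

# Illustrative base request; the slug below is assumed, not verified.
base = {
    "model": "deepseek/deepseek-v3.2",
    "messages": [
        {"role": "user", "content": "Narrate the next beat of our scene."}
    ],
}

fast = with_reasoning(base, False)  # skip chain-of-thought for snappy RP replies
deep = with_reasoning(base, True)   # enable reasoning for complex plot turns
```

The same pattern applies to any OpenRouter model that exposes the reasoning toggle, so a frontend can flip it per message rather than per session.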

by deepseek · 164K context · $0.25/M input tokens · $0.40/M output tokens

xAI: Grok 4.1 Fast

779B tokens

Grok 4.1 Fast is xAI's best agentic tool-calling model; it shines in real-world use cases like customer support and deep research, and offers a 2M-token context window.

Reasoning can be enabled or disabled using the reasoning "enabled" parameter in the API. Learn more in our docs.

by x-ai · 2M context · $0.20/M input tokens · $0.50/M output tokens

Z.ai: GLM 5

717B tokens

GLM-5 is Z.ai’s flagship open-source foundation model engineered for complex systems design and long-horizon agent workflows. Built for expert developers, it delivers production-grade performance on large-scale programming tasks, rivaling leading closed-source models. With advanced agentic planning, deep backend reasoning, and iterative self-correction, GLM-5 moves beyond code generation to full-system construction and autonomous execution.

by z-ai · 205K context · $0.95/M input tokens · $2.55/M output tokens

Google: Gemini 2.5 Flash

494B tokens

Gemini 2.5 Flash is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities, enabling it to provide responses with greater accuracy and nuanced context handling.

Additionally, Gemini 2.5 Flash is configurable through the "max tokens for reasoning" parameter, as described in the documentation (https://openrouter.ai/docs/use-cases/reasoning-tokens#max-tokens-for-reasoning).
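A short sketch of the "max tokens for reasoning" configuration mentioned above, expressed as a reasoning budget on the request. The user message is illustrative; see the linked reasoning-tokens docs for the authoritative shape of this field.

```python
# Cap Gemini 2.5 Flash's internal "thinking" budget. Larger budgets trade
# latency and cost for more deliberate responses; smaller budgets keep
# replies fast, which often suits interactive roleplay.
payload = {
    "model": "google/gemini-2.5-flash",
    "messages": [
        {"role": "user", "content": "Plot out the next scene of our mystery RP."}
    ],
    "reasoning": {"max_tokens": 1024},
}
```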

by google · 1.05M context · $0.30/M input tokens · $2.50/M output tokens · $1/M audio tokens

StepFun: Step 3.5 Flash (free)

447B tokens

Step 3.5 Flash is StepFun's most capable open-source foundation model. Built on a sparse Mixture-of-Experts (MoE) architecture, it selectively activates only 11B of its 196B parameters per token. It is a reasoning model that stays remarkably fast and efficient even at long context lengths.

by stepfun · 256K context · $0/M input tokens · $0/M output tokens

Google: Gemini 2.5 Flash Lite

414B tokens

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the Reasoning API parameter to selectively trade off cost for intelligence.

by google · 1.05M context · $0.10/M input tokens · $0.40/M output tokens · $0.30/M audio tokens

OpenAI: gpt-oss-120b

326B tokens

gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.

by openai · 131K context · $0.039/M input tokens · $0.19/M output tokens

Google: Gemini 2.5 Pro

95.8B tokens

Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through responses with enhanced accuracy and nuanced context handling. Gemini 2.5 Pro achieves top-tier performance on multiple benchmarks, including first-place positioning on the LMArena leaderboard, reflecting superior human-preference alignment and complex problem-solving abilities.

by google · 1.05M context · $1.25/M input tokens · $10/M output tokens · $1.25/M audio tokens

Qwen: Qwen3 235B A22B Instruct 2507

87.1B tokens

Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks).

Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations like Arena-Hard and WritingBench.

by qwen · 262K context · $0.071/M input tokens · $0.10/M output tokens