OpenRouter
© 2026 OpenRouter, Inc


Xiaomi

Access 3 Xiaomi models on OpenRouter including MiMo-V2-Omni, MiMo-V2-Pro, and MiMo-V2-Flash. Compare pricing, context windows, and capabilities.

Xiaomi tokens processed on OpenRouter

  • Xiaomi: MiMo-V2-Omni
    31B tokens

    MiMo-V2-Omni is a frontier omni-modal model that natively processes image, video, and audio inputs within a unified architecture. It combines strong multimodal perception with agentic capabilities, including visual grounding, multi-step planning, tool use, and code execution, making it well suited for complex real-world tasks that span modalities. 256K context window.

    by xiaomi · Mar 18, 2026 · 262K context · $0.40/M input tokens · $2/M output tokens
  • Xiaomi: MiMo-V2-Pro
    209B tokens

    MiMo-V2-Pro is Xiaomi's flagship foundation model, with over 1T total parameters and a 1M-token context length, deeply optimized for agentic scenarios. It adapts readily to general agent frameworks such as OpenClaw and ranks among the global top tier on the standard PinchBench and ClawBench benchmarks, with perceived performance approaching that of Opus 4.6. MiMo-V2-Pro is designed to serve as the brain of agent systems: orchestrating complex workflows, driving production engineering tasks, and delivering results reliably.

    by xiaomi · Mar 18, 2026 · 1.05M context · $1/M input tokens · $3/M output tokens
  • Xiaomi: MiMo-V2-Flash
    484B tokens

    MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, built on a hybrid attention architecture. MiMo-V2-Flash supports a hybrid-thinking toggle and a 256K context window, and excels at reasoning, coding, and agent scenarios. On SWE-bench Verified and SWE-bench Multilingual, it ranks #1 among open-source models globally, delivering performance comparable to Claude Sonnet 4.5 at only about 3.5% of the cost. Users can control the reasoning behavior with the reasoning enabled boolean. Learn more in our docs.

    by xiaomi · Dec 14, 2025 · 262K context · $0.09/M input tokens · $0.29/M output tokens
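As a minimal sketch of what calling one of these models through the OpenRouter chat-completions endpoint with the reasoning toggle might look like: the model slug (xiaomi/mimo-v2-flash) and the exact shape of the reasoning field are assumptions here, so verify both against the model page and the API reference before use.

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_chat_request(prompt: str, api_key: str, reasoning_enabled: bool = True):
    """Build (but do not send) an OpenRouter chat-completion request.

    The slug and the reasoning field below are illustrative assumptions;
    confirm them against the OpenRouter docs for MiMo-V2-Flash.
    """
    payload = {
        "model": "xiaomi/mimo-v2-flash",  # hypothetical slug, check the model page
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"enabled": reasoning_enabled},  # hybrid-thinking toggle
    }
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return req, payload


# Building the request is offline; sending it would be:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))
req, payload = build_chat_request("Explain SWE-bench Verified.", "sk-or-...")
```

Separating request construction from sending keeps the sketch testable without a network call or a real API key.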