
AI Models with Vision: Multimodal LLMs for Image Understanding

Model rankings updated April 2026 based on real usage data.

Discover AI models with vision capabilities that can analyze images, understand documents, and answer questions about visual content. These multimodal LLMs combine image understanding with powerful language capabilities, enabling applications from document analysis to visual question answering.

Whether you're building tools to interpret screenshots, analyze charts and diagrams, extract text from images or process video frames, OpenRouter provides access to leading vision models from Anthropic, Google, OpenAI and more through a single API.
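OpenRouter exposes vision models through an OpenAI-compatible chat completions endpoint, where a user message can mix text and image parts. The sketch below builds such a request payload; the model slug and image URL are placeholders for illustration, and you should check the OpenRouter docs for the current request schema before relying on this shape.

```python
import json

# OpenAI-compatible chat completions endpoint (see openrouter.ai/docs).
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_vision_request(model: str, question: str, image_url: str) -> dict:
    """Build a multimodal chat payload: one user turn mixing text and an image."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_request(
    "anthropic/claude-sonnet-4.6",      # hypothetical slug for illustration
    "What does this chart show?",
    "https://example.com/chart.png",    # placeholder image URL
)
print(json.dumps(payload, indent=2))
```

The same payload shape works across providers, which is the point of the single API: swapping models is a one-line change to the `model` field.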

Top Vision Models on OpenRouter

MoonshotAI: Kimi Latest

2.09T tokens

This model always redirects to the latest model in the MoonshotAI Kimi family.

by moonshotai · 256K context · $0.7448/M input tokens · $4.655/M output tokens

MoonshotAI: Kimi K2.6

2.09T tokens

Kimi K2.6 is Moonshot AI's next-generation multimodal model, designed for long-horizon coding, coding-driven UI/UX generation, and multi-agent orchestration. It handles complex end-to-end coding tasks across Python, Rust, and Go, and can convert prompts and visual inputs into production-ready interfaces. Its agent swarm architecture scales to hundreds of parallel sub-agents for autonomous task decomposition - delivering documents, websites, and spreadsheets in a single run without human oversight.

by moonshotai · 256K context · $0.7448/M input tokens · $4.655/M output tokens

Anthropic: Claude Sonnet Latest

1.51T tokens

This model always redirects to the latest model in the Anthropic Claude Sonnet family.

by anthropic · 1M context · $3/M input tokens · $15/M output tokens

Anthropic: Claude Sonnet 4.6

1.51T tokens

Sonnet 4.6 is Anthropic's most capable Sonnet-class model yet, with frontier performance across coding, agents, and professional work. It excels at iterative development, complex codebase navigation, end-to-end project management with memory, polished document creation, and confident computer use for web QA and workflow automation.

by anthropic · 1M context · $3/M input tokens · $15/M output tokens

Anthropic: Claude Opus Latest

1.28T tokens

This model always redirects to the latest model in the Claude Opus family.

by anthropic · 1M context · $5/M input tokens · $25/M output tokens

Anthropic: Claude Opus 4.7

1.28T tokens

Opus 4.7 is the next generation of Anthropic's Opus family, built for long-running, asynchronous agents. Building on the coding and agentic strengths of Opus 4.6, it delivers stronger performance on complex, multi-step tasks and more reliable agentic execution across extended workflows. It is especially effective for asynchronous agent pipelines where tasks unfold over time - large codebases, multi-stage debugging, and end-to-end project orchestration.

Beyond coding, Opus 4.7 brings improved knowledge work capabilities - from drafting documents and building presentations to analyzing data. It maintains coherence across very long outputs and extended sessions, making it a strong default for tasks that require persistence, judgment, and follow-through.

For users upgrading from earlier Opus versions, see our official migration guide here.

by anthropic · 1M context · $5/M input tokens · $25/M output tokens

Google: Gemini Flash Latest

1.13T tokens

This model always redirects to the latest model in the Google Gemini Flash family.

by google · 1.05M context · $0.50/M input tokens · $3/M output tokens · $1/M audio tokens

Google: Gemini 3 Flash Preview

1.13T tokens

Gemini 3 Flash Preview is a high-speed, high-value thinking model designed for agentic workflows, multi-turn chat, and coding assistance. It delivers near-Pro-level reasoning and tool-use performance with substantially lower latency than larger Gemini variants, making it well suited for interactive development, long-running agent loops, and collaborative coding tasks. Compared to Gemini 2.5 Flash, it provides broad quality improvements across reasoning, multimodal understanding, and reliability.

The model supports a 1M token context window and multimodal inputs including text, images, audio, video, and PDFs, with text output. It includes configurable reasoning via thinking levels (minimal, low, medium, high), structured output, tool use, and automatic context caching. Gemini 3 Flash Preview is optimized for users who want strong reasoning and agentic behavior without the cost or latency of full-scale frontier models.

by google · 1.05M context · $0.50/M input tokens · $3/M output tokens · $1/M audio tokens
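The configurable thinking levels described above can be sketched as a small request-builder. The mapping of a thinking level onto OpenRouter's `reasoning` request field, and the model slug, are assumptions for illustration; consult the OpenRouter docs for the current schema.

```python
# Valid thinking levels per the Gemini 3 Flash Preview description above.
THINKING_LEVELS = ("minimal", "low", "medium", "high")

def with_thinking_level(payload: dict, level: str) -> dict:
    """Return a copy of a chat request with a reasoning effort level attached.

    Assumes OpenRouter accepts a `reasoning.effort` field; verify against
    the current API reference.
    """
    if level not in THINKING_LEVELS:
        raise ValueError(f"unknown thinking level: {level}")
    return {**payload, "reasoning": {"effort": level}}

request = with_thinking_level(
    {
        "model": "google/gemini-3-flash-preview",  # hypothetical slug
        "messages": [{"role": "user", "content": "Summarize this PDF."}],
    },
    "low",
)
```

Lower levels trade reasoning depth for latency, which fits the model's positioning as a fast agentic workhorse.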

xAI: Grok 4.1 Fast

835B tokens

Grok 4.1 Fast is xAI's best agentic tool-calling model, shining in real-world use cases like customer support and deep research. It offers a 2M context window.

Reasoning can be enabled or disabled using the `enabled` field of the reasoning parameter in the API. Learn more in our docs.

by x-ai · 2M context · $0.20/M input tokens · $0.50/M output tokens
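The reasoning toggle mentioned in the Grok 4.1 Fast description can be sketched as a helper that switches the same request between fast and deliberate modes. The `reasoning.enabled` field name and model slug are assumptions based on the description above; check the OpenRouter docs before relying on them.

```python
def with_reasoning(payload: dict, enabled: bool) -> dict:
    """Return a copy of the chat request with reasoning switched on or off."""
    return {**payload, "reasoning": {"enabled": enabled}}

base = {
    "model": "x-ai/grok-4.1-fast",  # hypothetical slug for illustration
    "messages": [{"role": "user", "content": "Triage this support ticket."}],
}
fast_request = with_reasoning(base, enabled=False)  # low-latency replies
deep_request = with_reasoning(base, enabled=True)   # deliberate reasoning
```

Because the helper copies the payload, the same base request can serve both a quick-response path and a research path without mutation.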

Anthropic: Claude Opus 4.6

736B tokens

Opus 4.6 is Anthropic’s strongest model for coding and long-running professional tasks. It is built for agents that operate across entire workflows rather than single prompts, making it especially effective for large codebases, complex refactors, and multi-step debugging that unfolds over time. The model shows deeper contextual understanding, stronger problem decomposition, and greater reliability on hard engineering tasks than prior generations.

Beyond coding, Opus 4.6 excels at sustained knowledge work. It produces near-production-ready documents, plans, and analyses in a single pass, and maintains coherence across very long outputs and extended sessions. This makes it a strong default for tasks that require persistence, judgment, and follow-through, such as technical design, migration planning, and end-to-end project execution.

For users upgrading from earlier Opus versions, see our official migration guide here.

by anthropic · 1M context · $5/M input tokens · $25/M output tokens