- Anthropic: Claude 3 Haiku
Claude 3 Haiku is Anthropic's fastest and most compact model for near-instant responsiveness. Quick and accurate targeted performance. See the launch announcement and benchmark results here #multimodal
by anthropic · 200K context · $0.25/M input tkns · $1.25/M output tkns · $0.4/K input imgs · 120M tokens this week
- Anthropic: Claude 3 Haiku (self-moderated)
This is a lower-latency version of Claude 3 Haiku, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Claude 3 Haiku is Anthropic's fastest and most compact model for near-instant responsiveness. Quick and accurate targeted performance. See the launch announcement and benchmark results here #multimodal
by anthropic · 200K context · $0.25/M input tkns · $1.25/M output tkns · $0.4/K input imgs · 23.8M tokens this week
- Anthropic: Claude 3 Opus
Claude 3 Opus is Anthropic's most powerful model for highly complex tasks. It boasts top-level performance, intelligence, fluency, and understanding. See the launch announcement and benchmark results here #multimodal
by anthropic · 200K context · $15/M input tkns · $75/M output tkns · $24/K input imgs · 25.7M tokens this week
- Anthropic: Claude 3 Sonnet
Claude 3 Sonnet is an ideal balance of intelligence and speed for enterprise workloads. Maximum utility at a lower price, dependable, balanced for scaled deployments. See the launch announcement and benchmark results here #multimodal
by anthropic · 200K context · $3/M input tkns · $15/M output tkns · $4.8/K input imgs · 14.8M tokens this week
- Anthropic: Claude 3 Opus (self-moderated)
This is a lower-latency version of Claude 3 Opus, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Claude 3 Opus is Anthropic's most powerful model for highly complex tasks. It boasts top-level performance, intelligence, fluency, and understanding. See the launch announcement and benchmark results here #multimodal
by anthropic · 200K context · $15/M input tkns · $75/M output tkns · $24/K input imgs · 14.3M tokens this week
- Anthropic: Claude 3 Sonnet (self-moderated)
This is a lower-latency version of Claude 3 Sonnet, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Claude 3 Sonnet is an ideal balance of intelligence and speed for enterprise workloads. Maximum utility at a lower price, dependable, balanced for scaled deployments. See the launch announcement and benchmark results here #multimodal
by anthropic · 200K context · $3/M input tkns · $15/M output tkns · $4.8/K input imgs · 11.1M tokens this week
- Anthropic: Claude v2 (self-moderated)
This is a lower-latency version of Claude v2, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Claude 2 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and a new beta feature: tool use.
by anthropic · 200K context · $8/M input tkns · $24/M output tkns · 218K tokens this week
- Anthropic: Claude v2.0 (self-moderated)
This is a lower-latency version of Claude v2.0, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Anthropic's flagship model. Superior performance on tasks that require complex reasoning. Supports hundreds of pages of text.
by anthropic · 100K context · $8/M input tkns · $24/M output tkns · 236K tokens this week
- Anthropic: Claude v2.1 (self-moderated)
This is a lower-latency version of Claude v2.1, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Claude 2 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and a new beta feature: tool use.
by anthropic · 200K context · $8/M input tkns · $24/M output tkns · 1.75M tokens this week
- Anthropic: Claude Instant v1 (self-moderated)
This is a lower-latency version of Claude Instant v1, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Anthropic's model for low-latency, high throughput text generation. Supports hundreds of pages of text.
by anthropic · 100K context · $0.8/M input tkns · $2.4/M output tkns · 1K tokens this week
- Anthropic: Claude v2
Claude 2 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and a new beta feature: tool use.
by anthropic · 200K context · $8/M input tkns · $24/M output tkns · 440K tokens this week
- Anthropic: Claude v2.1
Claude 2 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and a new beta feature: tool use.
by anthropic · 200K context · $8/M input tkns · $24/M output tkns · 1.87M tokens this week
- Anthropic: Claude Instant (older v1.1)
Anthropic's model for low-latency, high throughput text generation. Supports hundreds of pages of text.
by anthropic · 100K context · $0.8/M input tkns · $2.4/M output tkns · 95 tokens this week
- Anthropic: Claude v2.0
Anthropic's flagship model. Superior performance on tasks that require complex reasoning. Supports hundreds of pages of text.
by anthropic · 100K context · $8/M input tkns · $24/M output tkns · 3.57M tokens this week
- Anthropic: Claude Instant v1
Anthropic's model for low-latency, high throughput text generation. Supports hundreds of pages of text.
by anthropic · 100K context · $0.8/M input tkns · $2.4/M output tkns · 685K tokens this week
- Anthropic: Claude v1
Anthropic's model for low-latency, high throughput text generation. Supports hundreds of pages of text.
by anthropic · 100K context · $8/M input tkns · $24/M output tkns · 285K tokens this week
- Anthropic: Claude (older v1)
Anthropic's model for low-latency, high throughput text generation. Supports hundreds of pages of text.
by anthropic · 100K context · $8/M input tkns · $24/M output tkns · 840K tokens this week
- Anthropic: Claude Instant (older v1)
Anthropic's model for low-latency, high throughput text generation. Supports hundreds of pages of text.
by anthropic · 100K context · $0.8/M input tkns · $2.4/M output tkns · 873 tokens this week
- Auto (best for prompt)
Depending on their size, subject, and complexity, your prompts will be sent to Mistral Large, Claude 3 Sonnet or GPT-4o. To see which model was used, visit Activity.
by openrouter · 200K context
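The per-million-token rates listed above translate directly into a per-request cost. A minimal sketch, using the Claude 3 Haiku rates from this listing ($0.25/M input, $1.25/M output); the function name and token counts are illustrative, not part of any API:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate request cost in USD from per-million-token rates."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Claude 3 Haiku rates from the listing: $0.25/M input, $1.25/M output.
cost = estimate_cost(input_tokens=10_000, output_tokens=2_000,
                     input_price_per_m=0.25, output_price_per_m=1.25)
print(f"${cost:.4f}")  # 10K input + 2K output -> $0.0050
```

Swapping in another row's rates (e.g. Opus at $15/M input, $75/M output) shows why the per-model pricing columns matter for scaled deployments.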