- Anthropic: Claude 3 Haiku
Claude 3 Haiku is Anthropic's fastest and most compact model for near-instant responsiveness. Quick and accurate targeted performance. See the launch announcement and benchmark results here. #multimodal
by anthropic · 200K context · $0.25/M input tokens · $1.25/M output tokens · $0.4/K input images · 2.31B tokens this week

- Anthropic: Claude 3 Haiku (self-moderated)
This is a lower-latency version of Claude 3 Haiku, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Claude 3 Haiku is Anthropic's fastest and most compact model for near-instant responsiveness. Quick and accurate targeted performance. See the launch announcement and benchmark results here. #multimodal
by anthropic · 200K context · $0.25/M input tokens · $1.25/M output tokens · $0.4/K input images · 488M tokens this week

- Anthropic: Claude 3 Opus
Claude 3 Opus is Anthropic's most powerful model for highly complex tasks. It boasts top-level performance, intelligence, fluency, and understanding. See the launch announcement and benchmark results here. #multimodal
by anthropic · 200K context · $15/M input tokens · $75/M output tokens · $24/K input images · 706M tokens this week

- Anthropic: Claude 3 Sonnet
Claude 3 Sonnet is an ideal balance of intelligence and speed for enterprise workloads. Maximum utility at a lower price: dependable and balanced for scaled deployments. See the launch announcement and benchmark results here. #multimodal
by anthropic · 200K context · $3/M input tokens · $15/M output tokens · $4.8/K input images · 543M tokens this week

- Anthropic: Claude 3 Opus (self-moderated)
This is a lower-latency version of Claude 3 Opus, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Claude 3 Opus is Anthropic's most powerful model for highly complex tasks. It boasts top-level performance, intelligence, fluency, and understanding. See the launch announcement and benchmark results here. #multimodal
by anthropic · 200K context · $15/M input tokens · $75/M output tokens · $24/K input images · 423M tokens this week

- Anthropic: Claude 3 Sonnet (self-moderated)
This is a lower-latency version of Claude 3 Sonnet, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Claude 3 Sonnet is an ideal balance of intelligence and speed for enterprise workloads. Maximum utility at a lower price: dependable and balanced for scaled deployments. See the launch announcement and benchmark results here. #multimodal
by anthropic · 200K context · $3/M input tokens · $15/M output tokens · $4.8/K input images · 490M tokens this week

- Anthropic: Claude v2 (self-moderated)
This is a lower-latency version of Claude v2, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Claude 2 delivers advancements in key capabilities for enterprises, including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts, and a new beta feature: tool use.
by anthropic · 200K context · $8/M input tokens · $24/M output tokens · 3.63M tokens this week

- Anthropic: Claude v2.0 (self-moderated)
This is a lower-latency version of Claude v2.0, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Anthropic's flagship model. Superior performance on tasks that require complex reasoning. Supports hundreds of pages of text.
by anthropic · 100K context · $8/M input tokens · $24/M output tokens · 8.44M tokens this week

- Anthropic: Claude v2.1 (self-moderated)
This is a lower-latency version of Claude v2.1, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Claude 2 delivers advancements in key capabilities for enterprises, including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts, and a new beta feature: tool use.
by anthropic · 200K context · $8/M input tokens · $24/M output tokens · 24.9M tokens this week

- Anthropic: Claude Instant v1 (self-moderated)
This is a lower-latency version of Claude Instant v1, made available in collaboration with Anthropic, that is self-moderated: response moderation happens on the model's side instead of OpenRouter's. It's in beta, and may change in the future. Anthropic's model for low-latency, high-throughput text generation. Supports hundreds of pages of text.
by anthropic · 100K context · $0.8/M input tokens · $2.4/M output tokens · 2.86M tokens this week

- Anthropic: Claude v2
Claude 2 delivers advancements in key capabilities for enterprises, including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts, and a new beta feature: tool use.
by anthropic · 200K context · $8/M input tokens · $24/M output tokens · 18.6M tokens this week

- Anthropic: Claude v2.1
Claude 2 delivers advancements in key capabilities for enterprises, including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts, and a new beta feature: tool use.
by anthropic · 200K context · $8/M input tokens · $24/M output tokens · 28.1M tokens this week

- Anthropic: Claude Instant (older v1.1)
Anthropic's model for low-latency, high-throughput text generation. Supports hundreds of pages of text.
by anthropic · 100K context · $0.8/M input tokens · $2.4/M output tokens · 4.53M tokens this week

- Anthropic: Claude v2.0
Anthropic's flagship model. Superior performance on tasks that require complex reasoning. Supports hundreds of pages of text.
by anthropic · 100K context · $8/M input tokens · $24/M output tokens · 53M tokens this week

- Anthropic: Claude Instant v1
Anthropic's model for low-latency, high-throughput text generation. Supports hundreds of pages of text.
by anthropic · 100K context · $0.8/M input tokens · $2.4/M output tokens · 26.6M tokens this week

- Anthropic: Claude v1
Anthropic's model for low-latency, high-throughput text generation. Supports hundreds of pages of text.
by anthropic · 100K context · $8/M input tokens · $24/M output tokens · 5.27M tokens this week

- Anthropic: Claude (older v1)
Anthropic's model for low-latency, high-throughput text generation. Supports hundreds of pages of text.
by anthropic · 100K context · $8/M input tokens · $24/M output tokens · 13.1M tokens this week

- Anthropic: Claude Instant (older v1)
Anthropic's model for low-latency, high-throughput text generation. Supports hundreds of pages of text.
by anthropic · 100K context · $0.8/M input tokens · $2.4/M output tokens · 1.11M tokens this week

- Auto (best for prompt)
Depending on their size, subject, and complexity, your prompts will be sent to Mistral Large, Claude 3 Sonnet, or GPT-4o. To see which model was used, visit Activity.
by openrouter · 200K context
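The Auto router above is addressed like any other model in this list. A minimal sketch of building a request body for OpenRouter's OpenAI-compatible chat completions endpoint — the endpoint URL and the `openrouter/auto` model ID reflect OpenRouter's documented API, while the `build_auto_request` helper is purely illustrative:

```python
import json

# OpenRouter exposes an OpenAI-compatible chat completions endpoint.
# "openrouter/auto" routes the prompt to Mistral Large, Claude 3 Sonnet,
# or GPT-4o, as described in the Auto entry above.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_auto_request(prompt: str) -> dict:
    """Build the JSON body for a request routed by the Auto model."""
    return {
        "model": "openrouter/auto",
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_auto_request("Summarize the Claude 3 model family.")
print(json.dumps(body, indent=2))
```

Sending `body` to `OPENROUTER_URL` with an `Authorization: Bearer <key>` header completes the call; the Activity page then shows which underlying model handled it.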
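All prices in this list are quoted per million tokens (and per thousand input images where applicable), so a request's cost can be estimated directly from its token counts. A small sketch using Claude 3 Haiku's listed rates ($0.25/M input, $1.25/M output); the function name and sample token counts are illustrative, not part of any API:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate request cost in USD from per-million-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Claude 3 Haiku: $0.25/M input, $1.25/M output
cost = estimate_cost_usd(10_000, 2_000, 0.25, 1.25)
print(f"${cost:.4f}")  # $0.0050
```

The same arithmetic applies to any entry above; for example, the same request against Claude 3 Opus ($15/M input, $75/M output) would cost $0.30 instead.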