
    OpenAI: GPT-5.1-Codex-Max

    openai/gpt-5.1-codex-max

    Created Dec 4, 2025 | 400,000 token context
    $1.25/M input tokens | $10/M output tokens

    GPT-5.1-Codex-Max is OpenAI’s latest agentic coding model, designed for long-running, high-context software development tasks. It is based on an updated version of the 5.1 reasoning stack and trained on agentic workflows spanning software engineering, mathematics, and research. GPT-5.1-Codex-Max delivers faster performance, improved reasoning, and higher token efficiency across the development lifecycle.

    Providers for GPT-5.1-Codex-Max

    OpenRouter routes requests to the best providers that are able to handle your prompt size and parameters, with fallbacks to maximize uptime.

    Performance for GPT-5.1-Codex-Max

    Compare different providers across OpenRouter

    Apps using GPT-5.1-Codex-Max

    Top public apps this week using this model

    Recent activity on GPT-5.1-Codex-Max

    Total usage per day on OpenRouter

    Uptime stats for GPT-5.1-Codex-Max

    Uptime stats for GPT-5.1-Codex-Max across all providers

    Sample code and API for GPT-5.1-Codex-Max

    OpenRouter normalizes requests and responses across providers for you.

    OpenRouter supports reasoning-enabled models that can show their step-by-step thinking process. Use the reasoning parameter in your request to enable reasoning, and access the reasoning_details array in the response to see the model's internal reasoning before the final answer. When continuing a conversation, preserve the complete reasoning_details when passing messages back to the model so it can continue reasoning from where it left off. Learn more about reasoning tokens.
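
    A minimal sketch of that flow, assuming an OPENROUTER_API_KEY environment variable and the reasoning request shape ({"effort": ...}) described in the reasoning docs; the exact contents of reasoning_details vary by provider:

```python
# Sketch: enable reasoning and carry reasoning_details across turns.
# Assumes the OpenRouter /chat/completions endpoint and an OPENROUTER_API_KEY
# environment variable; the reasoning payload shape follows OpenRouter's docs.
import os
import requests

API_URL = "https://openrouter.ai/api/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"}

messages = [{"role": "user", "content": "Refactor this function to be iterative."}]

resp = requests.post(
    API_URL,
    headers=HEADERS,
    json={
        "model": "openai/gpt-5.1-codex-max",
        "messages": messages,
        "reasoning": {"effort": "high"},  # enable reasoning tokens
    },
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]

print(message["content"])
print(message.get("reasoning_details"))  # the model's reasoning blocks, if returned

# When continuing the conversation, append the assistant message unmodified,
# including reasoning_details, so the model can pick up where it left off.
messages.append(message)
messages.append({"role": "user", "content": "Now add unit tests."})
```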

    In the examples below, the OpenRouter-specific headers are optional. Setting them allows your app to appear on the OpenRouter leaderboards.
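
    For instance, a basic request to this model with the optional attribution headers might look like the following; the HTTP-Referer and X-Title values are placeholders for your own site URL and app name:

```python
# Basic chat completion request; HTTP-Referer and X-Title are optional and
# only used for attribution on the OpenRouter leaderboards.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
        "HTTP-Referer": "https://your-app.example",  # optional, placeholder
        "X-Title": "Your App Name",                  # optional, placeholder
    },
    json={
        "model": "openai/gpt-5.1-codex-max",
        "messages": [
            {"role": "user", "content": "Write a shell script that tails the newest log file."}
        ],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```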

    Using third-party SDKs

    For information about using third-party SDKs and frameworks with OpenRouter, please see our frameworks documentation.
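
    As one common pattern, OpenAI-compatible SDKs can generally be pointed at OpenRouter by overriding the base URL. Here is a sketch using the official OpenAI Python SDK, with extra_body used to pass OpenRouter-specific fields such as reasoning:

```python
# Sketch: using the OpenAI Python SDK against OpenRouter's OpenAI-compatible API.
# Assumes OPENROUTER_API_KEY is set; extra_body forwards fields the SDK does
# not model natively (such as OpenRouter's reasoning parameter).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

completion = client.chat.completions.create(
    model="openai/gpt-5.1-codex-max",
    messages=[{"role": "user", "content": "Explain the difference between a mutex and a semaphore."}],
    extra_body={"reasoning": {"effort": "medium"}},
)
print(completion.choices[0].message.content)
```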

    See the Request docs for all possible fields, and Parameters for explanations of specific sampling parameters.