
deepcogito

Browse models from deepcogito

4 models

Tokens processed on OpenRouter

  • Deep Cogito: Cogito V2 Preview Llama 405B
    9.52M tokens

    Cogito v2 405B is a dense hybrid reasoning model that combines direct answering with advanced self-reflection. Built with iterative policy improvement at large scale, its dense architecture delivers performance competitive with leading closed models.

    by deepcogito · 131K context · $3.50/M input tokens · $3.50/M output tokens
  • Deep Cogito: Cogito V2 Preview Llama 70B
    375 tokens

    Cogito v2 70B is a dense hybrid reasoning model that combines direct answering capabilities with advanced self-reflection. Built with iterative policy improvement, it delivers strong performance across reasoning tasks while maintaining efficiency through shorter reasoning chains and improved intuition.

    by deepcogito · 131K context · $0.88/M input tokens · $0.88/M output tokens
  • Deep Cogito: Cogito V2 Preview Llama 109B
    8.4M tokens

    An instruction-tuned, hybrid-reasoning Mixture-of-Experts model built on Llama-4-Scout-17B-16E. Cogito v2 can answer directly or engage an extended “thinking” phase, with alignment guided by Iterated Distillation & Amplification (IDA). It targets coding, STEM, instruction following, and general helpfulness, with stronger multilingual, tool-calling, and reasoning performance than size-equivalent baselines. The model supports long-context use (up to 10M tokens) and standard Transformers workflows. Users can control the reasoning behaviour with the reasoning enabled boolean (see the sketch after this list). Learn more in our docs.

    by deepcogito · 131K context · $0.18/M input tokens · $0.59/M output tokens
  • Deep Cogito: Cogito V2 Preview Deepseek 671B
    16.2M tokens

    Cogito v2 is a multilingual, instruction-tuned Mixture-of-Experts (MoE) large language model with 671 billion parameters. It supports both standard and reasoning-based generation modes. The model introduces hybrid reasoning via Iterated Distillation and Amplification (IDA), an iterative self-improvement strategy designed to scale alignment with general intelligence. Cogito v2 has been optimized for STEM, programming, instruction following, and tool use. It supports 128k context length and offers strong performance in both multilingual and code-heavy environments. Users can control the reasoning behaviour with the reasoning enabled boolean (see the sketch after this list). Learn more in our docs.

    by deepcogito · 131K context · $1.25/M input tokens · $1.25/M output tokens
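
Both MoE previews note that reasoning can be toggled per request with an enabled boolean. Below is a minimal sketch of what that could look like against OpenRouter's chat completions endpoint; the exact shape of the reasoning field and the model slug shown are assumptions drawn from OpenRouter's general API docs rather than from this page, so check the linked docs before relying on them.

import os
import requests

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def ask(prompt: str, enable_reasoning: bool) -> str:
    """Send one chat turn, optionally engaging the extended 'thinking' phase."""
    response = requests.post(
        OPENROUTER_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            # Illustrative slug for the 109B MoE preview; verify on the model page.
            "model": "deepcogito/cogito-v2-preview-llama-109b-moe",
            "messages": [{"role": "user", "content": prompt}],
            # Hybrid-reasoning toggle: False answers directly, True engages the
            # self-reflection phase described in the model cards above.
            "reasoning": {"enabled": enable_reasoning},
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("How many primes are there below 100?", enable_reasoning=True))

For a rough sense of cost at the listed 109B pricing, a request with 2,000 input tokens and 500 output tokens works out to about 2,000 × $0.18/1M + 500 × $0.59/1M ≈ $0.00066.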